People are writing open letters to me? Weird.
I will answer the question, though:
...you started the collaboration of competing toolkits that led to the creation of Dojo. How did you do it!?!?
We did it by pointing out to folks who were working on their own things that the personal effort radius exists (i.e., we'd each pioneered some aspect of the complete toolkit but could never drive it to completion on our own) and that we can get further together than apart. From there, experience took over. This is a notable contrast to what happened after and in different circles where developers maybe didn't have as many KLOC of JS under their belts. With fewer sleepless nights of debugging, optimizing, and porting it's harder to see what's ahead and apply the discipline necessary to avoid it. Part of that accommodation for the future is compromise. And why would anyone want to compromise and work with other people when they don't feel like they really need to? It's fun being the smartest person in the room, and if you don't let other people in as equals, you always are. Swallowing that pride is the big transition. Isn't it always?
I guess what it comes down to is that I didn't try to organize people who didn't see the value in organization: instead, I tried to organize folks whose experience was valuable in terms of personal maturity and not just facility with code. We picked a hard technical problem and an easier social problem knowing that the social aspects were more critical. Did we succeed? Yes, but only in the ways we set out to succeed. Dojo continues and is outstanding for the problems it was designed to solve, but the JS market has strong winner-take-all dynamics and the high-end toolkits like Dojo all compete for a highly-knowledgeable, experienced set of developers who understand not only the problem they have today but also the problems they're going to have in a month or two. We built Dojo with folks like that, for folks like that. I should have known then what I see clearly now, though: the fact that there are relatively few experienced, disciplined developers in the world means that when you build things for them, relatively few people will understand what you've done.
Welcome to the club, Justin.
JavaScript is a lovable language. Real closures, first class functions, incredibly dynamic behavior...it's a joy when you know it well.
Less experienced JS programmers often feel as though they're waltzing in a minefield, though. At many steps along the path to JS enlightenment everything feels like it's breaking down around you. The lack of block lexical scope sends you on pointless errands, the various OO patterns give you fits as you try to do anything but what's in the examples, and before you know it even the trusty "dot" operator starts looking suspect. What do you mean that this doesn't point to the object I got the function from?
Repairs for some of the others are on the way in ES6, so I want to focus on the badly botched situation regarding "promiscuous this", in particular how ES5 has done us few favors and why we're slated to continue the cavalcade of failure should parts of the language sprout auto-binding.
Here's the problem in 5 lines:
var obj = {
  _counter: 0,
  inc: function() { return ++this._counter; },
};
node.addEventListener("click", obj.inc);
See the issue? obj.inc results in a reference to the inc method without any handle or reference to its original context (obj). This is asymmetric with the behavior we see when we directly call methods, since in that case the dot operator populates the ThisBinding scope. We can see it clearly when we assign to intermediate variables:
var _counter = 0; // global "_counter", we'll see why later
var inc = obj.inc;
obj.inc(); // 1
obj.inc(); // 2
inc(); // 1
Reams have been written on the topic, and ES5's belated and weak answer is to directly transcribe what JS libraries have been doing by providing a bind() method that returns a new function object that carries the correct ThisBinding. Notably, you can't un-bind a bound function object, nor can you treat a bound function as equal to its unbound ancestor. This, then, is just an API formalism around the pattern of using closures to carry the ThisBinding object around:
var bind = function(obj, name) {
  return function() {
    return obj[name].apply(obj, arguments);
  };
};
// Event handling now looks like:
// node.addEventListener("click", bind(obj, "inc"));
var inc = bind(obj, "inc");
obj.inc(); // 1
obj.inc(); // 2
inc(); // 3
inc === obj.inc; // false
ES5's syntax is little better, but it's built in and can potentially perform much better:
var inc = obj.inc.bind(obj);
// In a handler:
node.addEventListener("click", obj.inc.bind(obj));
Syntax aside, we didn't actually solve the big problems since unbound functions can still exist, meaning we still have to explain to developers that they need to think of the dot operator doing different things based on what characters happen to come after the thing on the right-hand side of the dot. Worse, when you get a function it can either be strongly bound (i.e., it breaks the .call(otherThis, ...) convention) or unbound -- potentially executing in the "wrong" ThisBinding. And there's no way to tell which is which.
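To make that concrete, here's a quick illustration reusing obj from above (other is just a stand-in object for this example): a function returned by bind() silently ignores an explicit thisArg, while the unbound version silently runs against whatever you give it, and nothing about either function object tells you which flavor you're holding.
var bound = obj.inc.bind(obj);
var unbound = obj.inc;
var other = { _counter: 100 };
bound.call(other);   // increments obj._counter; the explicit thisArg is ignored
unbound.call(other); // increments other._counter instead
typeof bound == typeof unbound; // "function" -- no way to tell them apart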
So what would be better?
It occurs to me that what we need isn't automatic binding for some methods, syntax for easier binding, or even automatic binding for all methods. No, what we really want is weak binding: the ability to retrieve a function object through the dot operator and have it do the right thing until you say otherwise.
We can think of weak binding as adding an annotation about the source object to a reference. Each de-reference via [[Get]] creates a new weak binding which is then used when a function is called. This has the side effect of describing current [[Get]] behavior when calling methods (since the de-reference would carry the binding and execution can be described separately). As a bonus, this gives us the re-bindability that JS seems to imply should be possible thanks to the .call(otherThis) contract:
var o = {
  log: function() {
    console.log(this.msg);
  },
  msg: "hello, world!",
};
var o2 = {
  msg: "howdy, pardner!",
};
o.log(); // "hello, world!"
o2.log = o.log;
// calling log through o2 replaces weak binding
o2.log(); // "howdy, pardner!"
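For a rough feel of what this could mean in practice, here's an ES5-era sketch -- a hypothetical weakBind() helper, not part of any proposal or library -- that falls back to the object a method was taken from when no ThisBinding is supplied, but still honors an explicit one supplied via .call()/.apply(), unlike Function.prototype.bind:
var weakBind = function(obj, name) {
  var fn = obj[name];
  var global = (function() { return this; })(); // non-strict: the global object
  return function() {
    // A bare call sees undefined (strict) or the global object (non-strict);
    // treat that as "no binding supplied" and fall back to obj. Anything else
    // (an explicit .call()/.apply() or a new owner's dot operator) wins.
    var self = (this === undefined || this === global) ? obj : this;
    return fn.apply(self, arguments);
  };
};

var log = weakBind(o, "log");
log();        // "hello, world!" -- falls back to o
log.call(o2); // "howdy, pardner!" -- still re-bindable, unlike bind()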
But won't this break the entire interwebs!?!?
Maybe not. Hear me out.
We've already seen our pathological case in earlier examples. Here's the node listener use-case again, this time showing us exactly what context is being used for unbound methods:
document.body.addEventListener("click", function(evt) {
  console.log(this == document.body); // true in Chrome and FF today
}, true);
We can think of dispatching the event as calling the anonymous function with an explicit ThisBinding, using something like listener.call(document.body, evt); as the call signature for each registered handler in the capture phase. Now, it's pretty clear that this is whack. DOM dispatch changing the ThisBinding of passed listeners is an incredibly strange side-effect and means that even if we add weak binding, this context doesn't change. At this point, though, we can clearly talk about the DOM API bug in the context of sane, consistent language behavior. The fact that event listeners won't preserve weak binding and will continue to require something like this is an issue that can be wrestled down in one working group:
node.addEventListener("click",
  (function(evt) { ... }).bind(otherThis),
  true);
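For reference, the capture-phase dispatch described above can be modeled with something like the following sketch; captureListeners and dispatchCapture are made-up names for illustration, not DOM APIs:
var captureListeners = []; // hypothetical per-node registry
var dispatchCapture = function(node, evt) {
  for (var i = 0; i < captureListeners.length; i++) {
    // The explicit ThisBinding here is why "this" points at the node inside
    // listeners, no matter where the listener function came from.
    captureListeners[i].call(node, evt);
  }
};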
The only case I can think of where weak bindings will change program semantics is when unbound method calls do intentional work on this (i.e., the global object). We have this contrived example from before too, but as you can see, it sure looks like a bug, no?
var _counter = 0; // a.k.a.: "this._counter", a.k.a.: "window._counter"
var obj = {
  _counter: 0,
  inc: function() { return ++this._counter; },
};
var inc = obj.inc;
obj.inc(); // 1
obj.inc(); // 2
console.log(obj._counter, this._counter); // 2, 0
inc(); // 1
inc(); // 2
console.log(obj._counter, this._counter); // 2, 2
If this turns out to be a problem in real code, we can just hide weak bindings behind some use directive.
Weak binding now gives us a middle ground: functions that are passed to non-pathological callback systems "do the right thing", most functions that would otherwise need to have been bound explicitly can Just Work (and can be rebound to boot), and the wonky [[Get]] vs. [[Call]] behavior of the dot operator is resolved in a tidy way. One more bit of unexploded ordnance removed.
So the question now is: why won't this work? TC39 members, what's to keep us from doing this in ES6?
Update: Mark Miller flags what looks to be a critical flaw:
var obj = {
  callbacks: [],
  register: function(func) {
    this.callbacks.push(func);
  },
  fire: function() {
    for (var i = 0; i < this.callbacks.length; i++) {
      this.callbacks[i]();
    }
  },
};
obj.register(foo.bar);
obj.fire(); // Does the wrong thing!
The problem here is the call into each of the callback functions: the [[Get]] in this.callbacks[i] creates a fresh weak binding to the callbacks array, replacing the one foo.bar picked up when it was passed in, so the callbacks still execute in the scope of the wrong object. This means that legacy code still does what it always did, but that's just as broken as it was. We'd still need new syntax to make things safe. Ugg.
In which you talk me into finally getting a Twitter account by explaining to me why I don't understand Twitter.
I'm a Twitter luddite for perhaps the most pedantic of excuses: for years I've scratched my head at why what seemed like a solved problem has eluded Twitter in its search for scale with stability. A new presentation by Twitter engineer Raffi Krikorian deepens my confusion. First the numbers:
Metric                       | Value
Avg. Inbound Tweets / Second | 800
Max. Inbound Tweets / Second | 3283
Tweet Size (bytes)           | 200
Registered Users (M)         | 150
Max Fanout (M)               | 6.1
Social networks like Twitter are just that -- networks -- and to understand Twitter as a network we want to know how much traffic the Twitter "backbone" is routing. Knowing that Twitter does 800 messages inbound per second doesn't tell us that by itself, but an estimate is possible. From a talk last year by another Twitter engineer, we know that Twitter users have less than 200 followers on average. That means that despite the eye-popping 6.1M follower (in networking terms "fanout") count for Lady GaGa, we should expect most tweets to generate significantly less load. Dealing just in averages, we should expect baseline load to be roughly 100K delivery attempts per second. Peak traffic is likely less than 1.5M delivery attempts per second (4K inbound tweets/second w/ double the average connectedness plus some padding for high-traffic outliers).
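For what it's worth, here's that back-of-envelope arithmetic in code form; the 125-follower average and the 1.5x padding factor are my assumptions, chosen to be consistent with the figures above:
var avgInbound  = 800;   // tweets/second, from the table above
var peakInbound = 4000;  // ~3283/second, rounded up
var avgFanout   = 125;   // assumed; "less than 200 followers on average"
console.log(avgInbound * avgFanout);            // 100,000 deliveries/second baseline
console.log(peakInbound * avgFanout * 2 * 1.5); // 1,500,000 at peak, with padding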
Knowing that peak loads are 4x average loads is useful and we can provision based on that. We also know that Twitter doesn't guarantee message order and has no SLA for delivery, which means we can deal with the Lady GaGa case by smearing delivery for users with huge fanout, ordering by something smart (most active users get messages first?). Heck, Twitter doesn't even guarantee delivery, so we could even go best-effort if the system is congested, taking total load into account for the smear size of large senders or recovering out of band later by having listeners query a DB. So far our requirements are looking pretty sweet. Twitter's constraints significantly ease the engineering challenge for the core routing and delivery function (the thing that should never be down).
What about tweet size? How much will an individual tweet tax a network? Can we handle tweets as packets? Tweet text is clamped to 200 bytes (as per Raffi's slides), but tweets now support extra metadata. The Twitter API Wiki notes that this metadata is also limited, clamped to 512 bytes. Assuming we need a GUID-sized counter for a unique tweet ID, that puts our payload at 200+512+16 = 728 bytes. That's less than half the size of the default ethernet MTU -- 1500 bytes. IP allows packets up to 64K in size, and with jumbo ethernet frames we could avoid fragmentation at the link level and still accommodate 9K packets, but there's no need to worry about that now.
Twitter's subscriber base also fits neatly in the IPv4 address range of ~4 billion unique addresses. Even if we were to give every subscriber an address for every one of their subscribed delivery endpoints (SMS, web, etc.), we'd still fit nicely in IPv4 space. Raffi's slides show that they want to serve all of earth which means eventually switching to IPv6, but that's so far away from the trend line that we can ignore it for now. That means we can handle addressing (source and destination) and data in the size of a single IP packet and still have room to grow.
So now we're down to the question that's been in the back of my mind for years: can we buy Twitter's core routing and delivery function off the shelf? And if so, how much would it cost, assuming continued network growth? Assuming 4x average peak and a 2K/s inbound message baseline (enough to get them through 2011?) and an average fanout of 300 (we're being super generous here, after all), we're looking at 2.5 million packets to route per second. If we treat each delivery endpoint as an IP address, again multiply deliveries by endpoints, and assume 4 delivery endpoints per user, we're looking at a need to provision for 10M deliveries per second.
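Spelled out, using only the assumptions just stated:
var inbound   = 2000 * 4; // 2K/s baseline at the 4x peak multiplier
var fanout    = 300;      // generous average fanout
var endpoints = 4;        // delivery endpoints per user (SMS, web, etc.)
console.log(inbound * fanout);             // 2,400,000 ~= 2.5M packets/second
console.log(inbound * fanout * endpoints); // 9,600,000 ~= 10M deliveries/second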
Is that a lot? Maybe, but I have reason to think not.
10M 1.5KB packets is ~15GB/second of traffic. Core routers now do terabits of traffic per second (a terabit is 125GB), but most of that traffic doesn't correspond to unique routes. Instead, we need to figure out if hardware can do either the 2.5M or 10M new "connections" per second that the Twitter workload implies. Cisco's mid-range 7600 series appears to be able to handle 15M packets per second of raw forwarding. Remember, this is an "internal" network, no advanced L3 or L4 services -- just moving packets from one subnet to another as fast as possible, so quoting numbers with all the "real world" stuff turned off is OK.
I'm still not sure that I fully grok the limits of the gear I see for sale since I'm not a network engineer and most "connections per second" numbers I see appear to be related to VPN and Firewall/DPI. It looks like the likely required architecture would have multiple tiers of routing/switching to do things efficiently and not blow out routing tables, but overall it still seems doable to me. This workload is admittedly weird in its composition relative to stateful TCP traffic and I have no insight into what that might do in off-the-shelf hardware -- it might just be the sticky wicket. Knowing that there's some ambiguity here, I hope someone with more router experience can comment on reducing the Twitter workload to off-the-shelf hardware.
Perhaps the large number of unique and short-lived routes would require extra tiers that might reduce the viability of a hardware solution (if only economically)? ISTM that even if hardware can only keep 2-4M routes in memory at once and can only do a fraction of that in new connections per second, this could still be made to work with semi-intelligent "edge" coalescing and/or MPLS tagging...although based on the time it takes to get a word of memory from main memory (including the cache miss) on modern hardware, it seems feasible that tuned hardware should be able to do at least 1M route lookups per second which puts the current baseline well within hardware and the 2011 growth goals within reach.
So I'm left back where I started, wondering: what's so hard? Yes, Twitter does a lot besides delivering messages, but all of those things (that I understand and/or know about) have the wonderful behavior that they're either dealing with the (relatively low) inbound rate of 4K messages/s (max) or they're embarrassingly parallel.
So I ask you, lazyweb, what have I missed?
I've been meaning to move this blog off a sub-domain affixed to the Dojo Project for some time, but I finally got all the pieces lined up last night. This blog will continue to be technical in nature, and I'll set up another location here on infrequently.org for political stuff. Thanks for being patient with any latent brokenness.
The concepts of negative externality and moral hazard describe situations where one person can impose costs on another without paying for it, often resulting in less-than-optimal outcomes for everyone. To me, that sounds a lot like what's going on with organizations that won't upgrade from IE6. Let's quickly consider both sides of the browser equation and then sketch out some possible solutions, keeping in mind that the assumed goal is a better, less frustrating web experience for users and developers. We'll also look to see how this stacks up against the fairness goal of buyers paying full freight for the costs of production.
Firms have an incentive to maximize the return on their investments, which means not switching the moment a better browser is available, even if the nominal price is zero, since the real price may be much higher. Retraining, support, validation, and rework of existing systems that won't work with a new browser all add up to create a large disincentive to any change. A new browser -- or even a new version of an existing browser -- has to be worth enough to outweigh those potential costs. It may cost real money just to figure out whether upgrading will cost a lot. Let's assume that organizations are deciding under uncertainty.
Web developers want their customers to pay at least what it costs to produce an app. This may be hard to estimate. They'd also like to deliver competitive apps at as low a cost as possible and often want to maximize the size of their addressable market, which means supporting as broad a swath of browsers as possible. They could build features once for old browsers and again (perhaps better) for new browsers, but that's expensive. Only the largest sites and firms can contemplate such a strategy, and usually as a way of mopping up marginal market share once they've "won" the primary market battle. Developers of new apps have strong incentives to build to the least common denominator and address the largest potential market.
So what browsers to include? There's historical data on browser share but things move slowly enough that the future is going to look a lot like the present, particularly related to development cycles. Public statistics on browser share may not even resemble the market for a vertically focused product. Enterprise software developers can count on more legacy browser users than consumer sites. In any case, it's unlikely that a firm knows all of its future customers. It pays to be conservative about what browsers to support.
What if a developer builds an app, bears the pain of supporting old browsers, but does not sell many units to users of old browsers? There's potential deadweight loss in this case, but it might be OK; the developer reduced their uncertainty and that's worth something.
What's good for a single firm may be bad for the ecosystem, though. The cumulative effects of this dynamic compound. Application buyers are also the market for browsers, but on different time scales. The costs of a browser upgrade may not be known and may dwarf the cost of any individual app, making it unlikely that cost savings for an app targeted at newer browsers will win the day. More likely, the customer will lean on their supplier to support their old browser. Mismatches in size and clout between vendors and clients amplify this dynamic. What small consulting firm can tell a Fortune 500 company that may be its largest customer to go stuff it if it won't upgrade from IE 6? Small vendors may be able to target more than just the supported browsers at their largest client, but again they potentially take deadweight losses. Large, slow-moving organizations may hurt individual apps, but cumulatively they can also rob the market of growth thanks to the third linkage: the connection between browser makers and application developers.
It might come as a surprise, but browser vendors care very much what web developers do. We see this in the standards process where a lack of use is cause for removing features from specs. After all, standards are insurance policies that developers and customers take out against their investment in technologies -- in this case browsers and the features they support. It doesn't make sense to insure features that nobody is using. Developers whose clients are slow-moving may shy away from using new features, robbing the process of the feedback that's critical in cementing progress. With the feedback loop weakened, browser makers may assume that developers don't want new features or don't want the ones they've built. Worse, they may wrongly think that developers just want better/faster versions of existing features, not new features that open up new markets.
I've glossed over lots of details at every step here, but by now we can see how the dynamic caused by legacy content in organizations that demand continuity robs us all of forward momentum. More frustratingly, we can also see how everyone in the process is behaving rationally(ish) and without obvious malice. That doesn't mean the outcomes are good. If firms could make new web features available for their suppliers to target faster, they would strengthen the feedback loop between developers and browser makers and also reduce their own procurement costs for applications, assuming they could continue to use their old applications. The key to enabling this transition to a better equilibrium lies in reducing those potential costs of change. In many ways, that comes down to reducing the uncertainty. If new features could work alongside legacy content without retraining, added support costs, or the need for exploratory work to understand the potential impacts, organizations should be more willing to accept modern applications. We need to make free cheaper.
There are other ways of addressing market imbalances like this, of course. One traditional answer is for governments to tax those who externalize their costs onto others, bringing the actual price of goods back into line. Regulation to prevent externalization in the first place can also be effective (e.g., the Clean Water Act). The use of the courts to find and provide remedies sometimes works but looks implausible here -- you'd need a court to accept a theory of "browser pollution" in order to show harm. Derivative contracts may allow first parties (developers) to spread their potential costs, assuming they can find buyers who can judge the risks, but this looks to be a long way out for web development. Building basic schedules for relatively differentiated goods is hard enough. Asking others to trade on one small-ish aspect of a development process feels far-fetched.
Reasonable people disagree about how we should attack the problem. My own thinking on the topic has certainly evolved.
For a long time I viewed standards as a solution, and once my faith waned -- based on a lack of evidence that standards could do what was being asked of them -- I turned to JavaScript to help fill the gaps. It was only when I came to realize how the rate of progress determines our prospects for a better future that I started looking for other solutions. The dynamics I've outlined here are roughly how I came to view what I did for a living in 2008, when I began to look into building a swappable renderer for IE based on WebKit. Chrome Frame is an attempt to drive the price of free closer to zero and in the process improve the rate of progress. That's the reason that Chrome Frame is opt-in for every page and doesn't render everything with the Chrome engine. That's the reason we've created MSI packages that keep IT administrators in control and continue to do all our work in public. Rendering everything via Chrome or giving admins any reason to distrust the plugin wouldn't reduce the uncertainty and therefore wouldn't do anything to address the part of the process of progress that has been broken for so long.
Next time you hear someone say "if only I could use X", remember that the way we'll get to a better future is by bringing everyone else along for the ride. We won't get there by telling them what to do or by implying with moral overtones that their locally optimal decision is "wrong". Instead we can bring them along by understanding their interests and working to reduce the very real friction that robs us all of a better future. You can do your part by opting your pages into the future and working with your users to help them understand how cheap free has truly become.