Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

Thoughts On A Job Done

Most of the time, when a bit of software you work on floats out of your life and into the collective past, there's a sense of mourning. But that's not how I feel about Chrome Frame.

The anodyne official blog post noting the retirement six months hence isn't the end of something good; it's the acknowledgement that we're where we wanted to be. Maybe not all of us, but enough to credibly say that the tide has turned. The trend lines are more than hopeful, and in six months any lingering controversy over this looks like it'll be moot. Windows XP is dying, IE 6 & 7 are echoes of their former menace, and IE 8 is finally going the same way. Most of the world's users are now at the front of the pack, where new browser releases are delivered without friction thanks to auto-update. The evergreen bit of the web is expanding, and the whole platform is improving as a result. This is the world we hoped to enable when Chrome Frame was first taking form.

I joined Google in December '08 expressly because it's the sort of company that could do something like Chrome Frame and not screw it up by making it an attention-hogging nuisance of a toolbar or a trojan horse for some other, more "on brand" product. GCF has never been that, not because that instinct is somehow missing from the people who make it through the hiring process; no, GCF has always been a loss-maker and a poor brand ambassador because management accepted that what is good for the web is good for Google. Acting in that long-term interest is just the next logical step.

Truth be told, we weren't even the right folks for the job. We were just the only ones both willing and able to do it. MSFT has the freaking source code for IE and Windows. There was always grim joking on the team that they could have put GCF together in a weekend, whereas it took us more than a year and change to make it truly stable. Honestly, if I thought MSFT was the sort of place that would have done something purely good for the web like GCF, I probably would have applied there instead. But in '08, the odds of that looked slim.

Having run the idea for something like Chrome Frame past one of the core IE team engineers at MIX that year, the response I got was "oh, you're some kind of a dreamer...a visionary". I automatically associate the word "visionary" with "time-wasting wanker", so that was 0 for 2 on the positive adjective front. And discussions with others were roughly on par. Worse, when the IE team did want to enable a cleaner break with legacy via the X-UA-Compatible flag, the web standards community flipped out in a bout of mind-blowing shortsightedness...and MSFT capitulated. Score 1 for standards, -1 for progress.

For me, personally, this has never been about browsers and vendors and all the politics wrapped up in those words: it has been about making the web platform better. To do that means reckoning with the problem from the web-developer perspective: any single vendor only ships a part of the platform that webdevs perceive.

Web developers don't view a single browser, or a single version of a browser that's on hundreds of millions of devices, as a platform. Compared to the limited reach of "native" platforms, that seems head-scratching at first, but the promise of the web has always been universal access to content, and web developers view the full set of browsers that make up the majority of use as their platform. That virtue is what makes the web the survivable, accessible, universal platform that can't be replaced, as well as the frustrating, slow, uneven development experience that so many complain about.

The only way to ensure that web developers see the platform improving is to make sure that the trailing edge is moving forward as fast as the leading edge. The oldest cars and power plants are vastly worse polluters than the most modern ones; if you want to do the most for the world, get the clunkers off the road and put scrubbers on those power plants. Getting clunkers off the road is what upgrade campaigns are, and GCF has been a scrubber.

All power plants, no matter how well scrubbed, must eventually be retired. The trend is now clear: the job is coming to a close. Most of the world's desktop users are now on evergreen browsers, and with the final death of Windows XP in sight, the rest are on the way out. Webdevs no longer face a single continuous slope of pain. We can treat legacy browsers as the sort of thing we build fallback experiences for, not first-class experiences. The goal of making content universally accessible doesn't require serving the exact same experience to everyone. That's what has always made the web great, and now's the time for non-evergreen browsers to take their place in the fallback bucket, no longer looming large as our biggest collective worry.

I'm proud to be a small part of the team that made Chrome Frame happen, and I'm grateful to Google for having given me the chance to do something truly good for the web.

s/Future/Promise/g

One of the things I've poured myself into this year -- with a merry band of contributors including Domenic Denicola, Anne van Kesteren, Jake Archibald, Mark Miller, Erik Arvidsson, and many others -- has been a design for Promises that DOM and JS can both adopt.

There's a (very) long history of Promises, Deferreds, and various other Promise-ish things in JS which I won't bore you with here, except to note that there are very few client-side libraries which don't include such a thing and use it as a core idiom for dealing with async behavior (e.g., XHR). jQuery, Dojo, Q, WinJS, Cujo, Closure, YUI, Ember (via RSVP), and all the rest use this style of contract pervasively and have for years. In fact, it's so common that Domenic Denicola and others have gone as far as to rustle up a community standard for how they should interop under the banner of Promises/A+. The major libraries are coalescing around that contract, and so it seems time, finally, to make our biggest and most important library -- DOM -- savvy to them too.
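If you've somehow avoided the idiom, here's a minimal sketch of it: producing a Promise for an async operation (wrapping XHR) and consuming it with .then(). The constructor style shown is the shape the consensus design converged on; the library Deferreds mentioned above differ in detail, but the .then(onFulfilled, onRejected) contract is exactly what Promises/A+ pins down.

    // A minimal sketch: wrapping XHR in a Promise. Resolver details were
    // still settling as this was written; this is the shape that stuck.
    function get(url) {
      return new Promise(function(resolve, reject) {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", url);
        xhr.onload = function() {
          if (xhr.status >= 200 && xhr.status < 300) {
            resolve(xhr.responseText);
          } else {
            reject(new Error("HTTP " + xhr.status));
          }
        };
        xhr.onerror = function() { reject(new Error("network error")); };
        xhr.send();
      });
    }

    // Consumers chain and compose instead of nesting callbacks:
    get("/data.json")
      .then(JSON.parse)
      .then(function(data) { console.log(data); },
            function(err) { console.error(err); });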

The recent history starts (arbitrarily) a couple of years ago and ends two weeks ago. In that time, a single design has evolved that DOM could get behind and that TC39 has agreed in principle to endorse and support going forward, thanks in large part to Mark Miller's analysis of the competing styles of use, which makes a strong case that the A+-compatible API we've designed need not upset anybody's applecart.

The TC39 meeting was a key turning point: up until two weeks ago, DOM had a version of this design under the name Future. I made the decision not to use the name Promise for that work because, without TC39's agreement on a design, the DOM variant could at some point find itself both camping on a global name and disagreeing with JS about the semantics or naming of particular APIs. That sort of thing might have led to the suspicion among DOM folks that TC39 was out of touch and slow, and among TC39 that DOM had rushed like fools into a space that's pretty clearly something the standard library should include (even if it couldn't do so for multiple years due to publication and spec timing issues).

Meanwhile, in the background, several DOM APIs have started to adopt Futu...er...Promises, notably Web Crypto and Web MIDI. There has also been lively discussion about other APIs that can benefit from moving to a standard mechanism for describing async operations.
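From the consuming side, the change is pleasantly boring. A sketch of calling a Promise-vending DOM API (Web MIDI's surface was still settling at the time; this reflects the Promise-returning shape it landed on):

    // requestMIDIAccess() vends a Promise for a MIDIAccess object.
    navigator.requestMIDIAccess().then(
      function(access) { console.log("MIDI inputs:", access.inputs); },
      function(err) { console.error("MIDI unavailable:", err); }
    );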

It seems, in each individual case, like this shouldn't be such a big deal. Some APIs have callbacks, some use events...what's the fuss?

The big reason to spend months of my life on this problem, and to harass other very busy people to do the same, is what is to me the core value of web standards: when they're working well, they create a uniform surface area that describes a coherent platform. We are the beneficiaries of this uniformity today regarding events, and they are a major piece of the design language which DOM API authors can reliably use to help describe bits of their design. Promises, like Events, are yet another tool in the box that DOM API authors can use, and thanks to sane constructors and the ability to subclass built into the design, it's possible for end-user code to eventually put down the custom implementations of Promise-like things and simply rely on the platform to do what platforms should do: make cheap and easy what was previously common but expensive.
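To make the "sane constructors and subclassing" point concrete, here's a hedged sketch of what the design enables; the subclassing shown is the form that eventually landed in the JS standard library, and the subclass itself is purely illustrative:

    // Platform Promises can stand in for library-specific ones...
    var p = Promise.resolve(42);
    p.then(function(v) { console.log(v); });  // 42

    // ...the static combinators come along for the ride...
    Promise.all([p, Promise.resolve(7)]).then(function(vals) {
      console.log(vals);  // [42, 7]
    });

    // ...and because the constructor is sane, user code can subclass rather
    // than maintain a parallel Promise-like implementation. Illustrative only:
    class LoggingPromise extends Promise {
      then(onFulfilled, onRejected) {
        console.log("then() called");
        return super.then(onFulfilled, onRejected);
      }
    }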

As of this week, the WHATWG DOM spec has changed its naming to reflect the consensus design, substituting Promise for Future, renaming accept() to fulfill(), and removing a few of the methods that didn't have consensus or were agreed to be unnecessary in a 1.0.

Thanks to this broad consensus over the design, both Mozilla and Google have begun to implement Promises in our respective engines. Further, the W3C TAG agreed at last week's meeting to recommend that spec authors adopt Promises for asynchronous, single-valued operations. This is also great news because the TAG has gone from being a largely reactive body to a proactive one, taking a more active role in API oversight and integration across Working Groups to help ensure the coherence of the overall platform's architecture and design.

The job of moving the many APIs that today use ad-hoc callback systems or vend Promise-like-but-not-quite objects over to real Promises is far from over, much the way providing constructors for DOM types is a work in progress...but I have many reasons to hope, not least of all because folks like Mark, Tab, Domenic, Yehuda, and Anne are working together in good faith to help make it so.
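In the meantime, adapting the ad-hoc style is mechanical enough to do in user code. A purely illustrative helper (not a platform API) for the common success/error callback shape:

    // Adapts fn(args..., onSuccess, onError) into a Promise-returning form.
    function promisify(fn) {
      return function() {
        var args = Array.prototype.slice.call(arguments);
        return new Promise(function(resolve, reject) {
          fn.apply(null, args.concat([resolve, reject]));
        });
      };
    }

    // e.g., for a hypothetical lookup(key, onSuccess, onError):
    // var lookupAsync = promisify(lookup);
    // lookupAsync("some-key").then(handleValue, handleError);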

This, then, is how we can collectively add new primitive types to the platform: begin with community and evidence, build consensus around a design slowly (starting with the key stakeholders), and eventually work across the entire platform to integrate these primitives pervasively.

It takes people who are willing to put down an us-vs-them perspective and collaborate honestly and openly to make it happen, but moving the web forward always does. Promises are budding proof that such a thing isn't beyond our institutions, vendors, and platform leaders to do. Collaboration across the spectrum from users, to spec organizations, to vendors can happen. The platform can be reformed and rationalized, and even the most recalcitrant of DOM spec authors are willing to listen when presented with evidence that their APIs aren't idiomatic or could be improved to help make the commons better, even when it comes at some risk and cost to their own APIs.

Comments.

A reminder: I approve comments manually.

I don't mind vulgarity, wrongness, or even the mild ad hominem...as long as it contributes to the discussion. Should you post a comment here that does not contribute, expect it to be binned or treated as spam -- without warning. Be witty, be carefree, be data-driven, be on point, but most of all, write something that will try to convince someone else you're right. The elation of asserting something you feel is self-evident is a joy that falls into despair far too quickly and is too easily dashed by argument and evidence.

Use-Case Zero

Some weeks back I lobbed an overly terse "noooooooooo!!!!!" at the W3C Web Application Security Working Group over revisions to the CSP 1.1 API; specifically a proposed reduction of the surface area to include only bits for which they could think of direct use-cases in a face-to-face meeting. At the time I didn't have the bandwidth to fully justify my objections. Better late than never.

For those who aren't following the minutiae of the CSP spec, it began life as a Mozilla-driven effort to enable page authors to control the runtime behavior of their documents via an HTTP header. It was brought to the W3C, polished, and shipped last year as CSP 1.0; a much better declarative spec than it started out as, but without much in the way of an API. This is a very good way for any spec to get off the ground. Having a high-level declarative form gives implementers something they can ship and prove interop with very quickly. The obvious next step is to add an API.
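For the unfamiliar, the declarative form is just a header full of directives, something like this (host names illustrative):

    Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; img-src *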

Late last year I started digging into CSP, both for a personal project to implement a "user CSP" extension for Chrome and to work with the spec authors to see what state the proposed API was in and how it could be improved. The short form of my analysis of the original CSP proposal was that it was pretty good but missed a few notes. The new proposal, however, is a reflection of the declarative machinery, not an explanation of that machinery.

Not coincidentally, this is also the essential difference between thinking in terms of a welded-shut C++ implementation and a user-serviceable JavaScript design.

For example, the previously proposed API provided methods on a SecurityPolicy class, like allowsConnectionTo(url), which outline an API that the browser might plausibly need to enforce a policy at runtime. The new API includes no such methods. As someone working with CSP, you suspect the browser indeed has such a method on just such an object, but the ability to use it yourself to compose new and useful behaviors is now entirely curtailed. This is the extreme version of the previous issues: an API that explains would make an attempt to show how the parser is invoked -- presumably as a string argument to a constructor for SecurityPolicy. Similarly, showing how multiple policies combine to form a single effective policy would have led away from document.securityPolicy as something that appears to be a single de-serialized SecurityPolicy and toward a list of SecurityPolicy instances, perhaps with static methods that are one-liners for the .forEach(...) iteration that yields the aggregate answer.
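To make that concrete, here's a hypothetical sketch of the "explaining" shape being argued for. SecurityPolicy and allowsConnectionTo() come from the earlier proposal; treating document.securityPolicy as a list, and the combining rule shown, are assumptions of the sketch, not anything in the current draft:

    // A policy object is just the parsed form of the declarative text:
    var policy = new SecurityPolicy("default-src 'self'; connect-src https://api.example.com");
    policy.allowsConnectionTo("https://api.example.com/feed");  // true
    policy.allowsConnectionTo("https://evil.example.net/");     // false

    // Multiple policies combine by intersection: a request is allowed only
    // if every active policy allows it. Written against a list, the
    // aggregate check is a one-liner rather than spec prose:
    function connectionAllowed(url) {
      return document.securityPolicy.every(function(p) {
        return p.allowsConnectionTo(url);
      });
    }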

So why should any WG bother with what I just described?

First, because they'll have to eventually, and by showing only enough skin to claim to have an API in this round, the WG invites the question: how does one implement new rules without waiting for the spec to evolve? The Extend The Web Forward idea that flows naturally from p(r)ollyfills has shown real power which this new design puts further from reach...but it won't keep people from doing it. What about implementing something at runtime using other primitives like a Navigation Controller? Indeed, the spec might get easier to reason about if it considered itself a declarative layer on top of something like the Navigation Controller design for all of the aspects that interact with the network.

There are dozens of things that no over-worked spec author in a face-to-face will think to do with each of the platform primitives we create, and they are made either easier or harder by the amount of re-invention needed to augment each layer. Consider CSS vs. HTML's respective parsing and object models: both accept things they don't understand, but CSS throws that data away. HTML, by contrast, keeps that data around and reflects it in attributes, meaning that it has been possible for more than a decade to write behavioral extensions to HTML that don't require re-parsing documents, only looking at the results of parsing. CSS has resisted all such attempts at gradual runtime augmentation, in part because of the practical difficulty of getting that parser-removed data back, and it's a poorer system for it. CSP can either enable these sorts of rich extensions (with obvious caveats!) or it can assume its committee knows best. This ends predictably: with people on the committee re-implementing large bits of the algorithms over and over and over again for lack of extension points, just to try new variations. This robs CSP and its hoped-for user-base of momentum.
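The HTML/CSS asymmetry is easy to see for yourself (behavior as observed in mainstream engines; the attribute and property names below are made up for the example):

    // HTML keeps what it doesn't understand and reflects it back:
    var el = document.createElement("div");
    el.setAttribute("madeup-directive", "frobnicate");
    el.getAttribute("madeup-directive");   // "frobnicate" -- still there

    // CSS drops unrecognized declarations at parse time:
    el.style.cssText = "color: red; madeup-property: frobnicate;";
    el.style.cssText;                      // "color: red;" -- the rest is gone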

Next, the desire to reflect rather than explain has helped the spec avoid reckoning with its poor use of JavaScript types. The document.securityPolicy object doesn't conceptually de-sugar to anything reasonable except a list of policy objects...but that more primitive SecurityPolicy object type doesn't appear anywhere in the description. This means that if anyone later wants to extend or change the policy in a page, a new mechanism will need to be invented for showing how that happens: meta tags parsed via DOM, not objects created in script and added to a collection. All of which is objectionable on the basis that what will actually happen is that some objects get created and added to the collection everyone suspects is back there anyway. This is like only having innerHTML and not being able to construct DOM objects any other way. The right way to be confronted with the need to build idiomatic types for what will eventually be exposed one way or another is to try to design the API as though it were being used to implement the declarative form. JavaScript-first gets you both a good API and a solid explanation of the lifecycle of the system.
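The contrast, sketched; the meta-based delivery is along the lines of what the 1.1 draft contemplates, and the list-plus-constructor path is hypothetical, as above:

    // The "innerHTML-only" path: extend policy by injecting markup and
    // letting the parser sort it out.
    var meta = document.createElement("meta");
    meta.httpEquiv = "Content-Security-Policy";
    meta.content = "img-src 'none'";
    document.head.appendChild(meta);

    // The idiomatic path: construct the primitive type and add it to the
    // collection everyone suspects is back there anyway (hypothetical API).
    document.securityPolicy.push(new SecurityPolicy("img-src 'none'"));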

There is, of course, another option: CSP 1.1 could punt on an API entirely. That's a coherent position that eliminates these frictions, and I'm not sure it's a bad option given how badly the API has been mangled recently. But it's not the best solution.

I've got huge hope for CSP; I think it's one of the best things to happen to webappsec ever. Whatever happens with the API will always be overshadowed by the value CSP is already delivering. But as a design microcosm, its API is a petri-dish-sized version of scenario-solving vs. layering, and a great example of how layering can deliver value over time, particularly to people who aren't in the room when the design is being considered. An API that explains, by showing how the declarative form layers on top of the imperative, is one that satisfies use-case zero: show your work.

I Swear This Blog Isn't About Elections...

...but if it were, there would be time to cover the W3C Advisory Board election. This is truly inside-baseball stuff, as most of the AB's work happens in member-only areas of the W3C website and most of what they effect is W3C process.

So why care? Because spec licensing matters, sadly.

Let me first outline my view: in an ideal world, specification language for web standards would be in the public domain or dedicated to it. The non-copyright intellectual property is fought over via an independent process, and to the extent that it accumulates in a standards organization, it should also be possible for the group of members who have contributed it to take their ball and start again somewhere else.

Why does this matter? Competition.

Standards bodies should not be insulated from the pressure to deliver better results. There are, of course, pathologies around this, some of which are common enough to have names: "pay for play", "venue shopping", etc. But in general, to the extent that many bodies can produce substitute goods, it gives them a reason to differentiate. The concrete example here is WHATWG vs. W3C. I don't think it's controversial to assert that without the WHATWG, the current W3C would be f'd. Competition makes everyone better, even for products that are "free" for consumers and are the product of community effort.

This, then, is why it's such a terrible idea for the W3C's Advisory Committee (the people who have some power) to elect representatives to the Advisory Board (who have even more power) who are willing to take the self-interested side of the W3C against liberal licensing of specs, rather than the competition-enabling position that liberal licensing makes everyone better off (to a first approximation).

If you are an AC rep, now is the time to quiz candidates on this point. If you truly think that the W3C is a unique community, it's important to realize that what makes it special is a set of shared values, not a death-grip on legacy intellectual property rights. Fixating on that ownership is the fast path to making everyone worse off; by the time it truly becomes important, it's likely that the organization clinging to it needs competition to right the ship or get out of the way.
