Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

The Case Against Synchronous Worker APIs

Several times in the past few months I've been presented with this question: is it good or useful to provide synchronous APIs to web workers?

Having considered the question at some length, it seems to me the answer must now be "no".

Consider IndexedDB. It's not implemented in any browser yet, but some hard-working soul did the arduous work to specify a second synchronous version of its API which is meant to be available only in Workers where it can't lock up the UI thread. That spec work was probably done because it was thought that a synchronous API would be nicer to use than the async version. As a result, the API is now double the size, but only in some contexts. I came across this while attempting to rework the IDB API to use Futures in order to improve usability in a backwards-compatible way.
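To make the contrast concrete, here's a minimal sketch (not anything from the spec itself) of what the Futures rework amounts to: wrapping the event-based async request object in a Promise so callers can chain work instead of wiring up handlers by hand. The "openDatabase" helper name is illustrative.

    // A sketch, assuming a Promise/Future implementation is available: wrap
    // the event-based IndexedDB request so callers can chain work instead of
    // juggling onsuccess/onerror handlers. "openDatabase" is illustrative.
    function openDatabase(name, version) {
      return new Promise(function (resolve, reject) {
        var request = indexedDB.open(name, version);
        request.onsuccess = function () { resolve(request.result); };
        request.onerror = function () { reject(request.error); };
      });
    }

    // Usage: the async flow reads nearly as linearly as a sync call would.
    openDatabase("mail", 1).then(function (db) {
      console.log("opened", db.name);
    }, function (err) {
      console.error("failed to open", err);
    });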

So why not the sync version? At least 2 reasons:

It's this second concern that I think is truly fatal to the cause of sync worker APIs: assuming they work and are popular, they will create a world in which it's necessary to put limits on their overall running time...limits that will be circumvented by breaking up the work into smaller chunks and dealing with it asynchronously inside the worker. Likewise, anyone building a serious app that's trying to do the right thing by the user will factor their worker's tasks into small enough chunks that they can both service "stop" messages and distribute progress notifications to the UI. There might be scenarios where such messages aren't necessary and where users aren't coveting CPUs and batteries...where sending SIGHUP doesn't matter. But the intersection of those scenarios and the client-side web seems mostly to be a happy accident: your code might not have encountered enough data to create the problems. Yet.

This is particularly clear in the IDB cases: upgrading, iterating over, and updating hundreds of thousands of items of data is the sort of thing that will take a while, and is likely in response to some application semantic the user cares about: synchronizing mail, migrating to a faster schema layout in the process of some upgrade, etc. A blind for loop is asking for trouble. All of this might work fine in a dev environment with a (small) staging set of data...but it's a recipe for disaster when power-users with tons of data encounter it. What then? If the APIs an app depends on are all synchronous, it's a huge boulder to roll up a hill to provide notifications, chunk up the work, and refactor around async patterns. If the work was async in the first place, the burden is much lower. So even apps that aren't Doing It Right (TM) are likely to reap some benefit down the line from thinking in terms of async first.
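For illustration, a rough sketch of the chunked, message-driven worker pattern described above; the message names ("start", "stop", "progress") and the upgradeRecord helper are hypothetical, stand-ins for whatever your app actually does.

    // worker.js -- a sketch of chunked processing that stays responsive.
    // Message names and upgradeRecord() are hypothetical; the point is that
    // yielding between chunks lets queued "stop" messages be serviced and
    // lets progress flow back to the UI thread.
    var stopped = false;

    self.onmessage = function (e) {
      if (e.data.type === "stop") { stopped = true; }
      if (e.data.type === "start") { processInChunks(e.data.items, 500); }
    };

    function processInChunks(items, chunkSize) {
      var index = 0;
      function next() {
        if (stopped || index >= items.length) {
          self.postMessage({ type: "done", processed: index });
          return;
        }
        var end = Math.min(index + chunkSize, items.length);
        for (; index < end; index++) {
          upgradeRecord(items[index]); // app-specific work, e.g. a schema migration
        }
        self.postMessage({ type: "progress", processed: index, total: items.length });
        setTimeout(next, 0); // yield to the event loop before the next chunk
      }
      next();
    }

    function upgradeRecord(record) { /* app-specific */ }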

There are other arguments that you can field against these sorts of APIs, particularly ones that double-up API surface area, but it doesn't seem to me that they're necessary. The person attempting to justify synchronous worker APIs who provides a good argument for ergonomics and learnability still has all their work ahead of them: they must show that these APIs are not harmful to the user experience. After all, Workers were added to the platform as a way of improving UX (by moving work off the main thread). And I fear they cannot do so without violating core JS semantics.

So let's pour one out for our sync API dreams: we're gonna miss you, control flow integration. But not for too long. Generators, iterators, and yield will see you avenged.
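For the curious, here's roughly what that consolation prize looks like: a small, illustrative runner (not a platform API) that drives a generator yielding Promises, giving straight-line control flow without ever blocking.

    // A sketch of generator-driven async control flow. run() resumes the
    // generator each time a yielded Promise settles, so the body reads
    // top-to-bottom while executing asynchronously. (Error propagation via
    // gen.throw is omitted for brevity.)
    function run(makeGenerator) {
      var gen = makeGenerator();
      function step(value) {
        var result = gen.next(value);
        if (result.done) { return Promise.resolve(result.value); }
        return Promise.resolve(result.value).then(step);
      }
      return step(undefined);
    }

    // Usage: reads like sync code, never blocks the thread it runs on.
    run(function* () {
      var db = yield openDatabase("mail", 1); // from the earlier sketch
      console.log("opened", db.name);
    });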

Why What You're Reading About Blink Is Probably Wrong

By now you've seen the news about Blink on HN or Techmeme or wherever. At this moment, every pundit and sage is attempting to write their angle into the announcement and tell you "what it means". The worst of these will try to link-bait some "hot" business or tech phrase into the title. True hacks will weave a Google X and Glass reference into it, or pawn off some "GOOGLE WEB OF DART AND NACL AND EVIL" paranoia as prescience (sans evidence, of course). The more clueful of the ink-stained clan will constrain themselves to objective reality and instead pen screeds for/against diversity despite it being a well-studied topic to which they're not adding much.

May the deities we've invented forgive us for the tripe we're about to sell each other as "news".

What's bound to be missing in most of this coverage is what's plainly said, if not in so many words, in the official blog post: going faster matters.

Not (just) code execution, but cycle times: how long does it take you to build a thing you can try out, poke at, improve, or demolish? We mere humans do better when we have directness of action. This is what Bret Victor points us towards -- the inevitable constraints of our ape-derived brains. Directness of action matters, and when you're swimming through build files for dozens of platforms you don't work on, that's a step away from directness. When you're working to fix or prevent regressions you can't test against, that's a step away. When compiles and checkouts take too long, that's a step away. When landing a patch in both WebKit and Chromium stretches into a multi-day dance of flags, stub implementations, and dep-rolls, that's many steps away. And each step hurts by a more-than-constant factor.

This hit home for me when I got my first workstation refresh. I'd been working on Chrome on Windows for nearly a year in preparation for the Chrome Frame release, and all the while I'd been hesitant to ask for one of the shiny new boxes that the systems people were peddling like good-for-you-crack -- who the hell was I to ask for new hardware? They just gave me this shiny many-core thing a year ago, after all. And I had a linux box besides. And a 30" monitor. What sort of unthankful bastard asks for more? Besides, as the junior member of the team, surely somebody else should get the allocation first.

Months later they gave me one anyway. Not ungrateful, I viewed the new system with trepidation: it'd take a while to set up and I was in the middle of a marathon weekend debugging session over a crazy-tastic re-entrancy bug in a GCF interaction with urlmon.dll that was blocking the GCF launch. If there was a wrong time to change horses, surely this was it. At some point it dawned on me that 5-10 minute link times provided enough time to start staging/configuring the shiny i7 box.

A couple of hours later the old box was still force-heating the eerily dark, silent, 80-degree floor of the SF office -- it wasn't until a couple of weeks later that I mastered the after-hours A/C -- but by then my new, even hotter workstation had an OS, a checkout, a compiler, and WinDBG + a cargo-culted symserver config. One build on the new box and I was hooked.

5-10 minute links went to 1-2...and less in many cases because I could now enable incremental linking! And HT really worked on the i7s, cutting build times further. Hot damn! In what felt like no time at all, my drudgery turned to sleuthing/debugging bliss (if there is such a thing). I could make code changes, compile them, and be working with the results in less time than it took to make coffee. Being able to make changes and then feel them near-instantly turned the tide, keeping me in the loop longer, letting me explore faster, and making me less afraid to change things for fear of the time it would take to roll back to a previous state. It wasn't the webdev nirvana of ctrl-r, but it was so liberating that it nearly felt that way. What had been a week-long investigation was wrapped up in a day. The launch was un-blocked (at least by that bug) and the world seemed new.

The difference was directness.

The same story repeats itself over and over again throughout the history of Chrome: shared-library builds, ever-faster workstations, trybots and then faster trybots, gyp (instead of Scons), many different forms of distributed builds, make builds for gyp (courtesy of Evan Martin), clang, and of course ninja (also Evan...dude's a frickin hero). Did I mention faster workstations? They've made all the same sort of liberating difference. Truly and honestly, in ways I cannot describe to someone who has not felt the difference between ctrl-r development and the traditional Visual Studio build of a massive project, these are the things that change your life for the better when you're lashed to the mast of a massive C++ behemoth.

If there is wisdom in the Chrome team, it is that these projects are not only recognized as important, but the very best engineers volunteer to take them on. They seem thankless, but Chrome is an environment that rewards this sort of group-adaptive behavior: the highest good you can do as an engineer is to make your fellow engineers more productive.

And that's what you're missing from everything else you're reading about this announcement today. To make a better platform faster, you must be able to iterate faster. Steps away from that are steps away from a better platform. Today's WebKit defeats that imperative in ways large and small. It's not anybody's fault, but it does need to change. And changing it will allow us to iterate faster, working through the annealing process that takes a good idea from drawing board to API to refined feature. We've always enjoyed this freedom in the Chromey bits of Chrome, and unleashing Chrome's Web Platform team will deliver the same sorts of benefits to the web platform that faster iteration and cycle times have enabled at the application level in Chrome.

Why couldn't those cycle-time-improving changes happen inside WebKit? After all, much work has happened in the past 4 years (often by Googlers) to improve the directness of WebKit work: EWS bots, better code review flow, improved scripts and tools for managing checkins, the commit queue itself. The results have been impressive and have enabled huge growth and adoption by porters. WebKit now supports multiple multi-process architecture designs, something like a half-dozen network stack plug-ins, and similar diversity at every point where the engine calls back to outside systems for low-level implementation (GPU, network, storage, databases, fonts...you name it). The community is now committed to enabling porters, but due to WebKit's low-ish level of abstraction, each new port raises the tax paid by every other port. As James Robinson has observed, this diversity creates an ongoing drag when the dependencies are intertwined with core APIs in such a way that they can bite you every time you go to make a change. The Content API boundary is Blink's higher-level "embedding" layer and encapsulates all of those concerns, enabling much cleaner lines of sight through the codebase and the removal of abstractions that seek only to triangulate between opaque constraints of other ports. Blink gives developers much more assurance that when they change something, it's only affecting the things they think it's affecting. Moving without fear is the secret of all good programming. Putting your team in a position to move with more surety and less fear is hugely enabling.

Yes, there are losses. Separating ourselves from a community of hugely talented people who have worked with us for years to build a web engine is not easy. The decision was wrenching. We'll miss their insight, intelligence, and experience. In all honesty, we may have paid too high a price for too long because of this desire to stay close to WebKit. But whatever the "right" timing may have been, the good that will come from this outweighs the ill in my mind.

Others will cover better than I can how this won't affect your day-to-day experience of WebKit-derived browser testing, or how it won't change the feature-set of Chrome over-night, or how the new feature governance process is more open and transparent. But the most important thing is that we'll all be going faster, either directly via Blink-embedding browsers or via benchmarks and standards conformance shaming. You won't feel it overnight, but it's the sort of change in model that enables concrete changes in architecture and performance and that is something to cheer about -- change is the predicate for positive change, after all.

Two Governments, Both Alike In Dignity

Disclaimer: I'm engaged to Frances Berriman, the front-end lead at the UK's Government Digital Service. She did not approve this post. It is, however, the product of many of our discussions. You'll understand shortly why this is relevant.

It seems entirely rational to be skeptical about governments doing things well. My personal life as a migrant to the UK is mediated by heaving bureaucracies, lawyer-greased wheels, and stupefyingly arbitrary rules. Which is to say nothing of the plights of my friends and co-workers -- Google engineers, mostly -- whose migration stories to the US are horrifying in their idiocy. Both the US and UK seem beholden to big-stupid: instead of trying to attract and keep the best engineers, both countries seem hell-bent on keeping them out. Heaven forbid they make things of value here (and pay taxes, contribute to society, etc.)! It takes no imagination whatsoever for me to conjure the banality and cruelty that are the predictable outcomes of inflexible, anachronistic, badly-formulated policy.

You see it perhaps most clearly when this banality is translated to purely transactional mediums. PDFs that you must fax -- only to have a human re-type the results on the other end. Crazy use of phones (of course, only during business hours). Physical mail -- the slowest and worst thing for a migrant like myself -- might be the most humane form of the existing bureaucracy in the US. Your expectations are set at "paper", and physical checks and "processing periods" measured in weeks feel somehow of a piece. It has always "worked" thus.

It's befuddling then to have been a near witness to nothing short of the wholesale re-building of government services here in the UK to be easiest to navigate by these here newfangled computrons. And also just flat-out easy. The mantra is "digital by default", and they seem to be actually pulling it off. Let me count the ways:

  1. High-level policy support for the effort
  2. Working in the open. Does your government do its development on GitHub?
  3. Designing services for the humans that use them, not the ones who run them
  4. Making many processes that were either only-physical or damn infuriating near-trivial to do online
  5. Making key information understandable. Give the UK "limited corporation" page a view. Now compare to the California version. Day, meet night.
  6. Saving both government and citizens massive amounts of money in the process.

They even have a progress bar for how many of the ministries have been transformed in this way.

Over the same timeframe I've known a few of the good folks who have put themselves in the position of trying to effect changes like this at Code for America. It's anathema in the valley to say anything less than effusive about CFA -- anything but how they're doing such good, important work. How CFA has the potential to truly transform the way citizens and government interact. Etc, etc. And it's all true. But while CFA has helped many in the US understand the idea that things could be better, the UK's Government Digital Service has gone out and done it.

So what separates them?

First, the sizes of the challenges need to be compared. The US has 5x the population, an economy that's 6x larger, and a federalist structure that makes fixing many problems more daunting than most UK citizens can possibly imagine. Next, it should be noted that London is a better place to try to hire the Right People (TM). Yes, it's much more expensive to live here, but software salaries are also much lower (both in relative and absolute terms). There wasn't as much tech going on here as in the valley to start with, and the gold-rush to produce shiny but less competent versions of existing websites for world+dog (aka: "the app ruse") hasn't created the engineering hiring frenzy here that it has stateside.

There's also a general distrust in the American psyche about the core proposition of the government doing good things. Public-spiritedness seems to so many of my generation a sort of dusty nostalgia that went the way of hemp and tie-dye. Close encounters with modern American government do little to repair the image. But all of those seem surmountable. The US has more of everything, including the Right People (TM). Indeed, the UK is managing an entire first-world's set of services on a smaller GDP.

Why then do US public services, to be blunt, largely still suck? The largest differences I've observed are about model. Instead of having a mandate to change things from the inside, the organizational clout to do it, and enough budget to make a big dent out of the gates (e.g. gov.uk), CFA is in the painful position of needing to be invited while at the same time trying to convince talented and civic-minded engineers and designers to work for well below industry pay for a limited time on projects that don't exist yet.

Put yourself in the shoes of a CFA Fellow: you and your compatriots are meant to help change something important in the lives of citizens of a municipality that has "invited" you but which is under no real pressure to change, has likely moved no laws or contracts out of the way to prepare for your arrival, and knows you're short-timers. Short-timers that someone else is taking all the risk on and paying for?

What lasting change will you try to effect when you know that you've got a year (tops) and that whatever you deliver must be politically palatable to entrenched interests? And what about next year's Fellows? What will they be handed to build on? What lasting bit of high-leverage infrastructure and service-design will they be contributing to?

The contrast between that and the uncomfortably-named "civil servants" of the GDS could not be more stark. I don't get the sense that any of them think their job is a lifetime assignment -- most assume they'll be back at agencies any day now, and some of the early crop have already moved on in the way nerds tend to do -- but at the pub they talk in terms of building for a generation, doing work that will last, and changing the entire ethos of the way services are produced and consumed. Doing more human work. And then they wake up the next morning and have the authority and responsibility to go do it.

I don't want to be down on CFA. Indeed, it feels very much like the outside-government precursor to the GDS: mySociety. mySociety was put together by many of the same public-spirited folks who initially built the Alpha of what would a year later become gov.uk and the GDS. Like CFA, mySociety spent years pleading from the outside, making wins where it could -- and in the process refining ideas of what needed to change and how. But it was only once the model changed and they grabbed real leverage that they were able to make lasting change for the better.

I fear CFA and the good, smart, hard-working people who are pouring their lives into it aren't missing anything but leverage -- and won't make the sort of lasting change they want without it. CFA as an organization doesn't seem to understand that's the missing ingredient.

America desperately needs its public services to make the same sort of quantum leap that the UK's are making now. It is such an important project, in fact, that it cannot be left to soft-gloved, rose-tinted idealism. People's lives are being made worse by misplaced public spending, badly executed projects, and government services that seem to treat service as an afterthought.

CFA could be changing this, and we owe it to ourselves and our friends there to ask clearly why that change hasn't been forthcoming yet. The CFA Fellows model has no large wins under its belt, no leverage, and no outward signs of introspection regarding its ability to deliver vs. the GDS model. Let's hope something larger is afoot beneath that placid surface.

Update: I failed to mention in the first version of this post that one of the largest philosophical differences between the two countries is the respective comfort levels with technocratic competence.

There exists a strain of fatalism about government in the US that suggests that because government doesn't often do things well, it shouldn't try. It's a distillation of the stunted worldviews of the libertarian and liberal-tarian elite and it pervades the valley. Of course governments that nobody expects anything of will deliver crappy service; how could it be otherwise?

What one witnesses here in the UK is the belief that regardless of what some theory says, it's a problem when government does its job badly. To a lesser degree than I sense in years past, but still markedly more so than in the US, the debate here isn't whether the government can get good at something, but why it isn't better at the things the people have given it responsibility for.

As a result, the question quickly turns to how one can expect a government to manage procurement of technical, intricate products for which it's the only buyer (or supplier) without the competence to evaluate those products -- let alone manage operations of them.

Outsourcing's proponents had their go here, and enormous, open-ended, multi-year contracts yielded boondoggle after boondoggle. By putting contractors in a position of power over complexity, and starving the in-house experts of staffing and resources to match, the government forfeited its ability to change its own services to meet the needs of citizens.

What changed with gov.uk was that the government decided that it had to get good at the nuts and bolts of delivering the services, outsourcing bits and pieces of small work, but owning the whole and managing it in-house. Having the Right People (TM) working on your team matters. If they're at a contractor, they have a different responsibility and fiduciary duty. When the ownership of the product is mostly in-house, ambiguities borne of incomplete contract theory are settled in favor of the citizen (or worst case, government) interest, not the profit motive.

The gov.uk folks say "government should only do what only government can do", but my observation has been that that's not the end of the discussion: doing it well and doing it badly are still differentiable quantities. And doing better by citizens is good. Clearing space to do good is the essential challenge.

Cassowary on NPM

I continue to work on-and-off on the JS Cassowary port and now, thanks to some help from Isaac, new packages are up on NPM. The API is still marginally unstable and I expect we'll be undergoing re-licensing sometime in the near future, but it's very near a 0.1 release.
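In case it's useful, here is a rough sketch of what picking up the port from Node might look like; treat the package name and the c.* constructor details below as assumptions rather than documentation, since (as noted) the API is still in flux.

    // A hedged sketch only: assumes the package is published as "cassowary"
    // and exposes the port's c.* namespace; details may differ as the API
    // settles.
    var c = require("cassowary");

    var solver = new c.SimplexSolver();
    var x = new c.Variable({ name: "x", value: 0 });

    // Constrain x >= 10 and let the solver find a satisfying value.
    solver.addConstraint(new c.Inequality(x, c.GEQ, 10));
    solver.resolve();

    console.log(x.value); // expect a value satisfying x >= 10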

Reforming the W3C TAG

And so it has come to pass that W3C Technical Architecture Group (TAG) elections are afoot. Nominations have ended and the candidates have been announced. There are four seats open and nine candidates running, so it's worth understanding why anyone should vote for the reformers (myself, Yehuda Katz, Anne van Kesteren, Peter Linss, and Marcos Caceres). For general background, see my previous post. Here I'll include more specifics, so if that sounds boring, here's a kitten!

After doing much reading of TAG meeting minutes, f2f notes, issues, delivered products, and findings, I've come to a sobering conclusion: the TAG isn't focused on eliminating the biggest sources of developer pain today. Now, you can argue that this might not be their job, but I won't agree. It's the TAG's job to help ensure the health of the platform, not just for publishers and search engines, but also for authors. And the web as a platform is in some real trouble today.

There doesn't seem to be a sense of urgency in the TAG about the corrosive effects of poor design and integration on the overall usability and health of the system. The Web to the TAG, as I can understand it through the meeting minutes and notes, is a collection of documents that represent internally-referential collections of data which are linked to other documents, not a series of applications that are attempting ever more impressive feats of richness on a platform that stymies them every time you hit one of the seams. In reality it is (aspirationally) both things, but the very real tensions between them don't appear in the TAG's work, leading me to believe that it doesn't comprehend the latter aspect of web development today and what that means for the future health and viability of the platform.

I drone on and on and on about layering because explaining each bit of the platform in terms of the one below it relieves much of the tension created by disconnected declarative forms and APIs. This matters because in today's web when you go ever so slightly off the path paved by a spec's use-cases, the drop-off is impossibly steep, and the only way to keep from risking life-threatening abstraction level transitions is to flood the entire canyon with JavaScript and hope you can still swim in the resulting inland sea of complexity. This is what our biggest, "best" webapps do today, relying on ENORMOUS piles of JavaScript that largely serve to re-create what browsers already do in the hopes of marginally extending that capability. It's simply nuts, but the TAG doesn't seem to acknowledge the threat this poses to everything it holds dear: linking, declarative forms, data...it's all about to be lost beneath the waves, and because the TAG doesn't understand the growing importance of JS, it seemingly doesn't see the threat. Declarative forms disappear first beneath imperatively-delivered complexity; lingua-franca APIs next. Without ways of getting what your app needs while keeping one foot on the declarative path, app developers do what they must; declarative data and relationships become "nice to haves" not "the way you do it". Layering provides easy steps between the levels of abstraction, avoiding the need to re-create what the platform was already doing for you along with whatever custom thing you need -- and it's not the TAG's current agenda.
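As one small, concrete illustration of the kind of layering I mean (and of the APIs mentioned in the next paragraph), MutationObserver takes something the platform previously did by magic -- reacting to DOM changes -- and exposes it as an ordinary imperative hook that script and declarative layers alike can be explained in terms of.

    // Illustrative only: MutationObserver as "explained magic" -- DOM change
    // notifications exposed as a plain JS API instead of opaque platform
    // behavior.
    var observer = new MutationObserver(function (mutations) {
      mutations.forEach(function (m) {
        console.log(m.type, "on", m.target.nodeName);
      });
    });

    observer.observe(document.body, { childList: true, attributes: true, subtree: true });

    // Any later change -- a framework inserting a node, say -- is now visible
    // to ordinary script at a well-defined layer.
    document.body.appendChild(document.createElement("div"));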

If elected, I will work to fix that. The TAG is the right group to formulate and articulate a theory of good layering in the web platform's architecture and it's the only group at the W3C whose mission is to help spec authors wrestle with large-scale design and integration problems like this. My background is in apps, and JS, and browsers, and I work at one of the few places deeply invested in ensuring that we maintain a healthy, declarative web for the future. I care tremendously about the viability of the largely-declarative web. Through my work with Dimitri Glazkov and many others on Web Components I've done as much as anyone in the last decade to help build a bridge between the JS and declarative worlds. Dimitri and I created and led the team here at Google that has put Shadow DOM, CSS Variables, Custom Elements, Mutation Observers (and Object.observe) into specs and on the platform agenda, all with the explicit goal of creating better layering: explaining the magic in today's platform and drawing connections between the bits that had none before. And I think we need to keep building more of those bridges, but it's hard when W3C culture views that agenda with suspicion. Why would any WG concern itself with integration with specs outside its charter? It's the TAG's job to inject that global perspective. I believe the TAG should pursue the following work as a way of fulfilling its charter:

If that sounds like meaningful progress to you, I'd appreciate your organization's vote, along with your consideration of my fellow reformers: Yehuda Katz, Anne van Kesteren, Peter Linss, and Marcos Cáceres. AC reps for each organization can vote here and have 4 votes to allocate in this election. Voting closes near the end of the month, and it's also holiday season, so if you work at a member organization and aren't the AC rep, please find out who that person in your organization is and make sure they vote. The TAG can't fix the web or the W3C, but I believe that with the right people involved it can do a lot more to help the well-intentioned people who are hard at work in the WGs to build in smarter ways that pay all of us back in the long run.
