Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

That PE Thang

The interwebs, they are aboil!

People worthy of respect are publicly shaming those who don't toe the Progressive Enhancement line. Heroes of the revolution have taken to JavaScript's defense. The rebuttals are just as compelling.

Before we all nominate as Shark or Jet, I'd like to take this opportunity to point out the common ground: both the "JS required" ("requiredJS"?) and "PE or Bust" crews implicitly agree that HTML+CSS isn't cutting it.

PE acknowledges this by treating HTML as a scaffold to build the real meaning on top of. RequiredJS brings its own scaffolding. Apart from that, the approaches are largely indistinguishable.

In ways large and small, the declarative duo have let down application developers. It's not possible to stretch the limited vocabularies they provide to cover enough of the relationships between data, input, and desired end states that define application construction. You can argue that they never could, but I don't think that's essentially true. Progressive Enhancement, for a time, might have gotten you everywhere you wanted to be. What's different now is that more of us want to be places that HTML+CSS are unwilling or unable to take us.

The HTML data model is a shambles, the relationship between data, view, and controller (and yes, browsers have all of those things for every input element) is opaque to the point of shamanism: if I poke this attribute this way, it does a thing! Do not ask why, for there is no why. At the macro scale, HTML isn't growing any of the "this is related to that, and here's how" that the humble <a href="..."> provided; particularly not about how our data relates to its display. Why, in 2013, isn't there a (mostly) declarative way to populate a table from JSON? Or CSV? Or XML? It's just not on the agenda.
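To make the gap concrete, here's roughly the imperative glue that every team hand-rolls today just to get JSON records into a table. A minimal sketch; the populateTable name and the #people table are invented for illustration:

    // The boilerplate everyone re-invents: render an array of JSON
    // records into a <table>, one row per record, one cell per field.
    function populateTable(table, records) {
      var tbody = table.tBodies[0] || table.createTBody();
      records.forEach(function (record) {
        var row = tbody.insertRow();
        Object.keys(record).forEach(function (key) {
          row.insertCell().textContent = record[key];
        });
      });
    }

    // Assumes a <table id="people"></table> somewhere in the document.
    populateTable(document.querySelector("#people"), [
      { name: "Ada",   role: "Engineer" },
      { name: "Grace", role: "Admiral"  }
    ]);

None of it is hard, but all of it is hand-written; a declarative form could name the data source and the mapping and leave the mechanics to the browser.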

And CSS...it's so painful and sad that mentioning it feels like rubbing salt in a wound that just refuses to heal.

Into these gaps, both PE and requiredJS inject large amounts of JS, filling the yawning chasm of capability with...well...something. It can be done poorly. And it can be done well. But for most sites, widgets, apps, and services the reality is that it must be done.

Despite it all, I'm an optimist (at least about this) because I see a path that explains our past success with declarative forms and provides a way for them to recapture some of their shine.

Today, the gap between "what the built-in-stuff can do" and "what I need to make my app go" is so vast at the high-end that it's entirely reasonable for folks like Tom to simply part ways with the built-ins. If your app is made of things that HTML doesn't have, why bother? At the more-content-than-app end, we're still missing ways to mark up things that microformats and schema.org have given authors for years: places, people, events, products, organizations, etc. But HTML can still be stretched very nearly that far, so long as the rest of the document is something that HTML "can do".

What's missing here is a process for evolving HTML more quickly in response to evidence that it's missing essential features. To the extent that I cringe at today's requiredJS sites and apps, it's not because things don't work with JS turned off (honestly, I don't care...JS is the very lowest level of the platform...of course turning it off would break things), but because stuffing the entire definition of an application into a giant JS string deprives our ecosystem of evidence that markup could do it. It's not hard to imagine declarative forms for a lot of what's in Ember. Sure, you'll plumb quite a bit through JS when something isn't built into the framework or easily configured, but that's no different than where the rest of the web is.

Web Components are poised to bridge this gap. No, they don't "work" when JS is disabled, but it's still possible to hang styles off of declarative forms that are part of a document up-front. Indeed, they're the ultimate in progressive enhancement.
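A minimal sketch of that pattern, assuming the custom elements API (the user-card tag and its behavior are invented here): the element is real markup from the first byte -- it parses, nests, and takes styles before any script runs -- and the script merely upgrades it in place.

    /* In the document, before any script runs:
         <user-card name="Ada"></user-card>
       And in CSS, also before any script:
         user-card { display: block; border: 1px solid #ccc; } */

    // The upgrade: behavior layered onto markup that already rendered.
    customElements.define("user-card", class extends HTMLElement {
      connectedCallback() {
        this.textContent = "Hello, " + (this.getAttribute("name") || "anonymous");
      }
    });

If the script never arrives, the document still parses and styles. That's the enhancement story, just with tags the author got to name.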

I'd be lying if I were to claim that bringing the sexy back to markup wasn't part of the plan for Web Components the whole time. Dimitri's "you're crazy" look still haunts me when I recall outlining the vision for bringing peace to the HTML vs. JS rift by explaining how HTML parsing works in terms of JS. It has been a dream of mine that our tools would uncover (and enable understanding of) what's so commonly needed that HTML should include it in the next iteration. In short, to enable direct evolution. To do science, not fumbling alchemy.

The key thing to understand is that "PE or Bust" vs. "requiredJS" isn't a battle anyone can win today. The platform needs to give us a way to express ourselves even when it hasn't been prescient of our needs -- which, of course, it can't be all the time. Until now, there hasn't been that outlet, so of course we reach for the Turing-complete language at our fingertips to go re-build what we must to get what we want.

The developer economics will be stacked that way until Web Components are the norm. Call it PE++ (or something less clunky, if you must), but the war is about to be over. PE vs. requiredJS will simply cease to be an interesting discussion.

Can't wait for that day.

For Jo

This was originally drafted as a response to [Jo Rabin's blog post] discussing a meetup the W3C TAG hosted last month. For some reason, I was having difficulty adding comments there.

Hi Jo,

Thanks for the thoughtful commentary, and for the engaging chat at the meetup. Your post mirrors some of my own thinking about what the TAG can be good for.

I can't speak for everyone on the TAG, but like me, most of the new folks who have joined have backgrounds as web developers. For the last several months, we've been clearing away old business from the agenda, explicitly to make way for new areas of work which mirror some of your ideas. At the meeting which the meetup was attached to, the TAG decided at the urging of the new members to become a functional API review board. The goal of that project is to encourage good layering practice across the platform and to help WGs specify better, more coherent, more idiomatic APIs. That's a long-game thing to do, and you can see how far we've got to go in terms of making even the simplest idioms universal.

Repairing these sorts of issues is what the TAG, organizationally, is suited to do. Admittedly, it has not traditionally shown much urgency or facility with them. We're changing that, but it takes some time. Hopefully you'll see much more to cheer in the near future.

As for the overall constituency and product, I feel strongly that one of the things we've accomplished in our efforts to reform the TAG is that we're re-focusing the work to emphasize the ACTUAL web. Stuff that's addressable with URLs and has a meaningful integration point with HTML. Too much time has been wasted worrying about problems we don't have or, for good reasons, are unlikely to have. Again, I don't speak for the TAG, but I promise to continue to fight for the pressing problems of webdevs.

The TAG can use this year to set an agenda, show positive progress, and deliver real changes in specs. Already we're making progress with Promises, constructability, layering (how do the bits relate?), and extensibility. We also have a task to explain what's important and why. That's what has led to efforts like the Extensible Web Manifesto. You'll note other TAG members as signatories.

Along those lines, the TAG has also agreed to begin work on documents that will help spec authors understand how to approach the design process with the constraints of idiomaticness and layering in mind. That will take time, but it's being informed by our direct, hands-on work with spec authors and WGs today.

So the lines are drawn: the TAG is refocusing, taking up the architectural issues that cause real people real harm in the web we actually have, and those who think we ought to be minding some other store aren't going to like it much. I'm OK with that, and I hope to have your support in making it happen.

Regards

Why JavaScript?

One strain of objection I often hear about the project of making the web more extensible is that it implies travelling further down the JavaScript rabbit hole. The arguments often include:

  • Why should JavaScript be the web's only privileged language?
  • There are better languages now; why not agree on a bytecode and let them compete on equal footing?
  • JS doesn't fully describe everything in the platform today anyway, so why double down on it?

These, incidentally, are mirrors to the fears that many have about the web becoming "too reliant" on JavaScript. But that's a topic for another post.

Let's examine these in turn.

The question of what languages a platform admits as first-class isn't about the languages -- not really, anyway. It's about the conventions of the lowest observable level of abstraction. We have many languages today that cooperate at runtime on "classical" platforms (Windows/Linux/OSX) and the JVM because they collaborate on low-level machine operations. In the C-ish OSes, that's about moving words around memory and using particular calling conventions for structuring inputs and outputs to kernel API thunks. Above that it's all convention; see COM. Similarly, JVM languages interop at the level of JVM bytecode.

The operational semantics of these platforms are incredibly low level. The flagship languages and most of the runtime behavior of programs are built up from these very low-level contracts. Where interop happens at an API level, it's usually about a large-ish standard library which obeys most of the same calling conventions (even if its implementation is radically different).

The web has it the other way around. It achieved broad compatibility by starting the bidding at extremely high-level semantics which, initially, had very little in the way of a contract beyond bugwards compatibility with whatever Netscape or MSFT shipped last. The coarse, interpret-it-as-you-go contract of HTML is one of the things that has made it such a hardy survivor. JavaScript was added later, and while it has lower-level operational semantics than HTML or CSS, that history of bolting JS on later has led to the current project of encouraging extensibility and layering; e.g., through Web Components. It's also why those who cargo-cult their experiences of other platforms onto the web find themselves adrift. There just isn't a shared lower level on which to interoperate.

That there aren't other languages interfacing with the web successfully today is, in part, the natural outcome of a lack of shared lower-level idioms on which those languages could build up runtimes. It's no accident that CoffeeScript, TypeScript, and even Dart find themselves running mostly on top of JS VMs. There's no lower level in the platform to contemplate.
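To see what running "on top of JS" means in practice, consider roughly how a CoffeeScript-style class lowers to plain JavaScript. A hand-simplified sketch of compiler output:

    // A class in a compile-to-JS language becomes ordinary JS
    // constructor-and-prototype code: the source language's semantics
    // are expressed entirely in terms of what JS already provides.
    function Point(x, y) {
      this.x = x;
      this.y = y;
    }

    Point.prototype.distance = function (other) {
      var dx = this.x - other.x,
          dy = this.y - other.y;
      return Math.sqrt(dx * dx + dy * dy);
    };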

Which brings us to the second argument: there are other, better languages...surely we could just all agree on some bytecode format for the web that would allow everyone to get along...right?

This is possible, but implausible.

Implausibility is the only reason I pour time and effort into trying to improve JS and not something else. The Nash Equilibrium of the web gives rise to predictable plays: assuming that incentives for adopting low-level descriptions of JS (as any such bytecode would have to describe JS as well as everything else) are not evenly distributed, movement by any group that is not all of the competitors stymies compatibility, which after all is the whole goal. Any language that wishes to interoperate with JavaScript and the existing DOM is best off describing its runtime in terms of JavaScript, because the threat that a compatible bytecode never ships everywhere is credible. Compatibility strategies that straddle the fence can work, but it's not a short (or clear) game to play. And introducing an abstraction that's not fundamentally lower-level than JS (and/or does not fully subsume its semantics) is simply doomed. It would lack the power to even credibly hold out hope for a compatible future.

So, yes, there are better languages. Yes, you could put them in a browser. But unless you possess the power to put them in every browser, they don't matter unless their operational semantics are 1:1 with JavaScript.

You can see how I ended up on TC39. It's not that I think JS is great (it has well-documented flashes of genius, but so does any competitor worth mentioning) or even the perfect language for the web. But it is the *one language that every vendor is committed to shipping compatibly*. Evolving JS has the leverage to add/change the semantics of the platform in a way that no other strategy credibly can, IMO.

This leaves us with the last objection: JS doesn't fully describe everything in the web platform, so why not recant and switch horses before it's too late to turn back?

This misreads platforms vs. runtimes. All successful platforms have privileged APIs and behaviors. Successful, generative platforms merely reduce the surface area of this magic and ensure that privileged APIs "blend in" well -- no funky calling conventions, no alien semantics, etc. Truly great platforms leave developers thinking they're the only ship in the entire ocean and that it is of uniform depth the whole way across. It's hard to think of a description more at odds with the web platform. Having acknowledged the necessity and ubiquity of privileged APIs, the framing is now right to ask: what can be done about it?

I've made it my work for the past 3+ years -- along with a growing troupe of fellow thinkers -- to answer this charge by reducing the scope and necessity of magic in everyday web development. To describe how something high-level in the platform works in terms of JS isn't to deny some other language a fair shot or to stretch too far with JS; it's simply to fill in the obvious gaps by asking the question "how are these bits connected?"
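For instance: a toy re-statement of one sliver of the built-in <progress> element, the reflection and clamping of its value attribute. Hypothetical and hugely simplified, but it turns one bit of parser magic into inspectable JS:

    // A toy "explanation" of part of <progress>: attribute-to-property
    // reflection plus clamping. The real element does far more; the
    // point is that the connection becomes visible rather than magic.
    customElements.define("x-progress", class extends HTMLElement {
      get max() { return parseFloat(this.getAttribute("max")) || 1; }
      get value() {
        var v = parseFloat(this.getAttribute("value")) || 0;
        return Math.min(Math.max(v, 0), this.max);
      }
      set value(v) { this.setAttribute("value", String(v)); }
    });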

Those connections and that archeological dig are what are most likely to turn up the sort of extensible, layered, compatible web platform that shares core semantics across languages. You can imagine other ways of doing it, but I don't think you can get there from here. And the possible is all that matters.

That Old-Skool Smell, Part 2

The last post covered a few of the ways that the W3C isn't effective at facilitating the discussions that lead to new standards work and, more generally, how trying to participate feels as though you are being transported back to a slower, more mediated era.

Which brings up a couple of things I've noticed across the W3C and which can likely be fixed more quickly. But some background first: due to W3C rules, it's hard to schedule meetings (usually conference calls) quickly. You often need two weeks' notice for a meeting to happen under a W3C-condoned WG, but canceling meetings is, as we all know, much easier. As a result, many groups set up weekly or bi-weekly meetings but, in practice, meet much less frequently. This lightens the burden for those participating heavily in one or two topics, but leaves occasional participants and those trying to engage from non-majority time zones at a serious disadvantage, because notice of meeting cancellation is near-universally handled via mailing list messages.

Yes, you read that right, the W3C uses mailing lists to manage meeting notices. In 2013. And there is no uniformity across groups.

Thanks to Peter Linss, the TAG is doing better: there's an iCal feed for all of our upcoming meetings that anyone can subscribe to. Yes, notices are still sent to the list, but you no longer need to dig through email to find out if the regularly-scheduled meeting is going to happen. Wonder of wonders, I can just look at my calendar...at least when it comes to the TAG.

That this is new says, to my mind, everything you need to know about how the current structure of the W3C's spending on technical infrastructure and staff has gone unchallenged for far, far too long. The TAG is likewise starting to move from CVS to Git...and once again it finds itself at the vanguard of organizational practice. That there has been no organization-wide attempt to get WGs to move to more productive tools is, to me, an indicator of how many in positions of authority (if not power) in the WGs and on the Staff think things are going. That this state of affairs isn't prima facie evidence of the need for urgent change and modernization says volumes. As usual, it's not about the tools, but about the way the tools help the organization meet (or fail to meet) its goals. Right now, "better" looks like what nearly every member organization's software teams are already doing. Modernizing in this environment will be a relief, not a burden.

It's also sort of shocking to find that there are no dashboards. Anywhere. For anything -- at least not ones that I can find.

No progress or status dashboard to give the organization a sense for what's currently happening, no dashboard to show charter and publication milestones across groups, no visible indicators about which groups are highly active and which are fading away.

If the W3C has an optics problem -- and I submit that it does -- it's not doing itself any favors by burying the evidence of its overall trajectory in arcane mailing lists.

There is, at base, a question raised by this and many other aspects of W3C practice: how can the organization be seen to be a good steward of member time, attention, and resources when it does not seem to pay much mind to the state of the workshop? I'd be delighted to see W3C staff liaisons for WGs treat making their groups' products visible, easy to engage with, and efficient to contribute to as their primary objective. As it is, I don't sense that's their role. And that's just not great customer service. I hope I'm wrong, or I hope that changes.

That Old-Skool Smell

One of the things that the various (grumpy) posts covering the recent W3C TAG / webdev meetup here in London last month brought back to mind for me was a conversation that happened in the TAG meeting about the ways that the W3C can (or can't) facilitate discussion between webdevs, browser vendors, and "standards people".

The way the W3C has usually done this is via workshops. Here's an exemplar from last year. The "how to participate" link is particularly telling:

Position papers are required to be eligible to participate in this workshop. Organizations or individuals wishing to attend must submit a position paper explaining their perspectives on a workshop topic of their choice no later than 01 July 2013. Participants should have an active interest in the area selected, ensuring other workshop attendees will benefit from the topic and their presence.

Position papers should:

  • Explain the participant's perspective on the topic of the Workshop
  • Explain their viewpoint
  • Include concrete examples of their suggestions

Refer to the position papers submitted for a similar W3C workshop to see what a position paper generally implies.

It is necessary to submit a position paper for review by the Program Committee. If your position paper is selected by the Program Committee, you will receive a workshop invitation and registration link. Please see Section "Important dates" for paper submission and registration deadlines.

ZOMGWTFBBQ. If the idea is that the W3C should be a salon for academic debate, this process fits well. If, on the other hand, the workshop is meant to create the sort of "interested stakeholders collaborating on a hard problem" environment that, e.g., Andrew Betts from FT Labs and others have helped to create around the offline problem (blog post on that shortly, I promise), this might be exactly the wrong way to do it.

But it's easy to see how you get to this sort of scary-sounding process: to keep gawkers from gumming up the works it's necessary to create a (low) barrier to entry. Preferably one that looks higher than it really is. Else, the thinking goes, the event will devolve into yet-another-tech-meetup, draining the discussions of the urgency and focus that only arise when people invested in a problem are able to discuss it deeply without distraction. The position paper and selection process might fill the void -- particularly if you don't trust yourself enough to know who the "right people" to have in the room might be. Or perhaps you have substantial research funding and want academic participants to feel at home; after all, this is the sort of process that's entirely natural in the research setting. Or it could be simple momentum: this is the way the W3C has always attempted to facilitate and nobody has said "it's not working" loudly enough to get anything to change.

So let me, then, be the first: it's not working.

Time, money, and effort are being wasted. The workshop model, as currently formulated, is tone-deaf. It rarely gets the right people in the room.

Replacements for this model will suffer many criticisms: you could easily claim that the FT- and Google-hosted offline meetings weren't "open". Fair. But they have produced results, much the way sideline and hallway-track meetings have similarly been productive on other topics.

The best model the W3C has deployed thus far has been the un-conference model used at TPAC '11 and '12, due largely to the involvement of Tantek Çelik. That has worked because many of the "right people" are already there, although, in many cases, not enough. And it's worth saying that this has usually been an order-of-magnitude less productive than the private meetings I've been a part of at FT, Mozilla, Google, and other places. Those meetings have been convened by invested community members trying to find solutions, and they have been organized around explicit invites. It's the proverbial smoke-filled room, except nobody smokes (at least in the room), nobody wears suits, and there's no formal agenda. Just people working hard to catalog problems and design solutions in a small group of people who represent broader interests...and it works.

The W3C, as an organization, needs to be relevant to the concerns of web developers and the browser vendors who deliver solutions to their problems, and to do that it must speak their language. Time for the academic patina to pass into history. The W3C's one and only distinguishing characteristic is that some people still believe it can be a good facilitator for evolving the real, actual, valuable web. Workshops aren't working and need to be replaced with something better. Either the W3C can do that or we will continue to do it "out here", and I don't think anyone really wants that.

Update: A couple of insightful comments via Twitter:

Sylvain nails one of the big disconnects for me: it's not about format, it's about who is "convening" the discussion. Andrew Betts has done an amazing job inviting the right people, and in the unconference-style format, you need a strong moderator to help separate the wheat from the chaff. In both cases, we've got examples where "local knowledge" of the people and the problems is the key to making gatherings productive. And the W3C process doesn't start with that assumption.

Next:

I think this is right. A broad scope tends to lead to these sorts of big workshop things that could cover lots of ground...but often don't lead to much. This is another axis to judge the workshop format on, and I'm not sure I could tell you what the hoped-for outcomes of workshops are that matter to devs, implementers, and the standards process. I'd like to hear from W3C AC reps and staff who think it is working, though.
