Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

JS 1.8 Function Expressions: The Opposite of "Good"

JavaScript is crying out for a way to write functions more tersely. I've added my suggestion to the debate, and was vaguely aware that Mozilla had implemented "function expressions". It wasn't until I saw their use in the (excellent) ProtoVis examples that I realized how stomach-churningly bad the syntax for them is. Instead of dropping the word function from the declaration of lambdas, they kill the { } characters, leaving the big visual turd at the front of the lambda while simultaneously omitting the symmetrical visual aid that allows programmers to more easily spot missing terminators.
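For comparison, a minimal sketch of the two forms (the shorthand runs only in Mozilla engines that implement JS 1.8):

    // ES3-style: the familiar form, with braces and an explicit return
    var square = function(x) { return x * x; };

    // JS 1.8 shorthand: the braces and return are dropped, but the
    // heavyweight `function` keyword stays put at the front
    var square = function(x) x * x;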

Oof.

It'd be one thing if the feature saved enough effort (typing) to justify the confusion, but it doesn't. Bitter syntactic sugar in a language that's already complicated by whitespace issues should be avoided.

Perspective

So after receiving a keyboard-lashing from me and others regarding some comments he made, Chris has walked a lot of it back and bravely noted where his new perspective conflicts with the old. It takes guts to do that.

My original comments were born of my frustration with the internecine strife that seems to follow every discussion about OSS licensing. It's one of the reasons that I was so grateful for Dion's 100-point scale for judging community: it helps put all of this stuff into a perspective that lets you separate what's more likely to be good for a broad audience from what's more likely to hurt people along the way. There will always be both individual and corporate involvement in OSS, and both are good things, but neither is an unalloyed force for lightness and right. They've got down-sides. OSS communities and hackers should be honest about them and evaluate them on the merits and likely outcomes. It took me a long time to come to that perspective, and it's one that I'm happy to see Chris weighing. Kudos to him for taking it in stride.

Correction: I wrongly attributed the 100-point scale to Ben instead of Dion. Totally n00b mistake. My apologies.

Automated Dojo Layer Builds in ZF 1.9.0 Preview

Early on in discussions with the excellent folks at Zend, one of the possibilities that made everyone in the room excited was the ability to use server-side smarts about client-side work to automate performance optimizations in ZF apps. After lots of great work on getting Dojo integrated, Zend Framework is making that a reality by supporting automated custom builds in ZF 1.9.0's preview release.

What does this buy you? You get to use the Zend helpers for Dojo as you normally would, simplifying how you pull in code, declare components, and build your UI. What this new integration saves you is the tedium of figuring out which components you're using everywhere, building a layer file for them, kicking off a build, and remembering to re-visit the layer definition when your project adds or removes modules. ZF 1.9 should lower the barrier to taking advantage of the full range of Dojo-based optimizations, making it easier to prototype quickly and deploy easily. Exciting stuff!
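For the curious, the layer definition that the integration spares you from maintaining by hand looks roughly like the following Dojo build profile. This is an illustrative sketch, not what ZF emits verbatim, and the layer and module names here are hypothetical:

    dependencies = {
        layers: [
            {
                // the rolled-up layer file, built from the modules your
                // views actually require
                name: "../zfapp/main.js",
                dependencies: [
                    "dijit.form.TextBox",
                    "dijit.form.Button",
                    "dijit.layout.ContentPane"
                ]
            }
        ],
        prefixes: [
            [ "dijit", "../dijit" ],
            [ "zfapp", "../zfapp" ]
        ]
    };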

OSCON dojo.beer() Tonight!

Sorry for the late notice, but the inimitable Matthew Russell has organized another dojo.beer() event for 7pm tonight (Wed, July 22nd) at O'Flaherty's pub in San Jose, near the convention center. Should be a great time, so if you're in the area, hopefully we'll see you there!

Benchmarking Is Hard: Reddit Edition

In which I partially defend Microsoft and further lament the state of tech "journalism".

A very short open letter:

Dear interwebs:

Please stop mis-representing the results of benchmarks. Or, at a minimum, please stop blogging the results in snide language that shows your biases. It makes the scientific method sad.

Thank you.

Alex Russell

Today's example of failure made manifest comes via Reddit's programming section (easy target, I know), but deserves some special attention thanks to such witty repartee as:

Using slow-motion video? What a great idea. Maybe we can benchmark operating systems like that.

Maybe we can... and maybe we should. It might yield improvements in areas of OS performance that impact user experience. With a methodology that represents end-user perception, you should be able to calculate the impact of different scheduling algorithms on UI responsiveness, something that desktop Linux has struggled with.

The test under mockery may have problems, but they're not the ones the author assumes. It turns out that watching for visual indications of "doneness" is a better-than-average way to judge overall browser performance (assuming fixed hardware, testing from multiple network topologies, etc.). After all, perceived performance in browsing is all that matters. No one discounts a website's performance because when you visit they happen to let browsers cache resources that get used across pages or because they use a CDN to improve resource loading parallelism. In the real world, anything you can do to improve the perceived latency of a web site or application is a win.

MSFT's test methodology (pdf) does a good job in balancing several factors that affect latency for end-users, including resources that are loaded after onload or in sub-documents, potential DNS lookup timing issues, and the effects of network-level parallelism on page loading. Or at least it would in theory. The IE team's published methodology is silent on points such as how and where DNS caches may be in play and what was done to mitigate them, but the level of overall rigor is quite good.

So what's wrong with the MSFT test? Not much, except that they didn't publish their code or make the test rig available for new releases of browsers to be run against. As a result, the data is more likely to be incorrect because it's stale than to be incorrect due to methodology problems. New browser versions are being released all the time, rendering the conclusions from the Microsoft study already obsolete. Making the tests repeatable by opening up the test rig or filling in the gaps in the methodology would fix that issue while lending the tests the kind of credibility that the SunSpider and V8 benchmarks now enjoy.

That stands in stark opposition to this latest "benchmark". Indeed, while the source code was posted, it only deepens my despair. By loading the "real world sites" from a local copy, much of the excellent work being done to improve browser performance at the network level is totally eliminated. Given the complexity of real-world sites and the number of resources loaded by, say, Facebook.com, changes that eliminate the effects of the network make the tests highly suspect. While excoriating JavaScript benchmarks as not representing the real world accurately, the test author eliminated perhaps the largest contributor to page loading latency and perceived performance. Ugh.

Instead of testing real-world websites (where network topology and browser networking make a difference), the author tested local, "dehydrated" versions of websites. The result isn't a test of "loading times" but rather of "local resource serving times and site-specific optimizations around the onload event". Testing load times would have accounted for resources loaded after the onload event fired, too. There's reason to think that neither time to load from local disk nor time for a page to fire the onload handler dominates (or even indicates) real-world performance.
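To make the distinction concrete, here's a rough sketch of timing both events from within a page (the late-loaded image URL is a hypothetical stand-in for the post-onload fetches that real sites perform):

    var start = new Date().getTime();

    window.onload = function() {
        // what onload-based tests measure: time until onload fires
        console.log("onload: " + (new Date().getTime() - start) + "ms");

        // what real sites keep doing afterward: fetching more resources
        var img = new Image();
        img.onload = function() {
            console.log("post-onload resource: " +
                        (new Date().getTime() - start) + "ms");
        };
        img.src = "http://example.com/late-loaded.png"; // hypothetical URL
    };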

I'm grateful that this test showed that Chrome loads and renders things quickly from local disk. I also have no doubt that Chrome loads real websites very quickly, but this test doesn't speak to that.

It's frustrating that the Reddits and Slashdots of the world have such poor collective memory and such faulty filtering that they can't seem to keep themselves from promoting these types of bias-reinforcing stories on a regular basis. Why, oh why, can't we have better tech journalism?
