My book, CORS in Action, is now available as part of Manning’s Early Access Program! I’ve been spending a lot of busy nights writing, and it’s crazy to finally see this live! Chapters 1-3 are available now, and more chapters (as well as edits to existing chapters) will be released on a regular basis. CORS in Action will provide a comprehensive overview of how cross-origin resource sharing works, from both a server's and a client's perspective. If you are curious about CORS, have a look here:

For a limited time, use code dotd1114tw at checkout to receive 50% off!

Here are my favorite songs from 2012. This year I kept a running Rdio playlist of music I liked, so it was easy to pull this together. The tracks aren't in any particular order (well, alphabetical order), but if I had to rank my top 5 it would go:

  1. Younger Us - Japandroids (with my favorite lyric of the year: "Remember that night you were already in bed, said 'fuck it,' got up to drink with me instead")
  2. Angels - The xx
  3. Thinkin Bout You - Frank Ocean
  4. We Are Never Ever Getting Back Together - Taylor Swift (yes, seriously)
  5. We Are Young - fun. (I love the juxtaposition of this song against the Japandroids song)

There were a lot of great albums this year, but for me, two perched above the rest and stayed on repeat for much of the year:


  • Japandroids - Celebration Rock - Is there a more awesome or appropriate album title? Kicks ass from start to finish.
  • The xx - Coexist - This album is like coming up for air after Celebration Rock. Begins and ends with two beautiful love songs.

I've been playing around with benchmark.js to do some performance testing in JavaScript. It's an easy-to-use library, but because it's so easy, it hides a lot of its details behind the scenes. Mathias Bynens and John-David Dalton, the authors of benchmark.js, wrote a great overview of how benchmark.js works. But I wanted a better understanding of what's going on, so I took a deeper look at the code.

Here's an example of a performance test using benchmark.js. This particular test measures the performance of adding a span element to the page:


<a href="#" onclick="bench.run({async: true}); return false;">run test</a>

<div id="mydiv"></div>

<script src=""></script>

<script>
var bench = new Benchmark('insertNode',

// The function to test
function() {
  mydiv.insertAdjacentHTML('beforeend', '<span></span>');
},

// Additional options for the test
{
  'setup': function() {
    var mydiv = document.getElementById('mydiv');
  },
  'teardown': function() {
    mydiv.innerHTML = '';
  }
});
</script>


The code above represents a single benchmark, which is the performance test and any associated code. The fundamental unit of work in a benchmark is the test function. This is the thing we are measuring. Traditional performance tests ask the user to specify how many times to run the test (for example, run this test 1000 times and tell me how long it takes). The magic of benchmark.js is that it automatically calculates the number of iterations. The user is responsible for writing the test; benchmark.js handles the rest.

Check out the weird scoping in this particular test: mydiv is declared in the setup function, but is referenced in the test function and the teardown function. This works because benchmark.js doesn't call those functions directly. Instead, it strips the JS code out of those functions and "compiles" it to this:

function(t1354904061091) {
  var r1354904061091,
      s1354904061091,
      i1354904061091 = this.count,
      n1354904061091 = t1354904061091.ns;

  // From setup()
  var mydiv = document.getElementById('mydiv');

  // Start the timer.
  s1354904061091 = new n1354904061091;

  while(i1354904061091--) {
    // From test function.
    mydiv.insertAdjacentHTML('beforeend', '<span></span>');
  }

  // Stop the timer and record the elapsed time.
  r1354904061091 = (new n1354904061091 - s1354904061091) / 1e3;

  // From teardown()
  mydiv.innerHTML = '';

  return {elapsed: r1354904061091, uid: "uid1354904061091"};
}

This is the heart of a benchmark.js test. Don’t let the obfuscated variable names fool you; this code is very simple. It looks like a performance test one would write by hand: call the setup function, start the timer, run the test function a few times, measure the time, and finally call the teardown function. I call this chunk of code a unit. The setup, test, and teardown functions in a unit all share the same scope, and are run in the context of the benchmark itself (i.e. this === the benchmark instance).

Units form the building blocks of a cycle. There is almost a 1-to-1 mapping between a unit and a cycle, but a cycle actually consists of two units. The first unit runs the setup, test and teardown functions one time to see if the code throws any errors (this is a nice feature; we don't want to run the entire test only to discover an error in the teardown function). If there are no errors, the unit runs again, this time with multiple iterations. (I’m not sure why the error check is done once per cycle; it seems like something that could be done once per benchmark.)
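
The two-unit structure of a cycle can be sketched like this (a toy illustration, not the library's actual code; here `unit` stands in for the compiled function, taking an iteration count):

```javascript
// A toy sketch of one cycle: the first unit runs a single iteration to
// surface errors cheaply; only if that succeeds does the measured,
// multi-iteration unit run.
function cycle(unit, iterations) {
  unit(1);                  // error-check unit: a throw aborts the cycle
  return unit(iterations);  // measured unit: returns the elapsed time
}
```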

So how does benchmark.js actually determine how many times to run a test function? Benchmark.js aims to run a test as fast as possible without sacrificing accuracy. It finds this sweet spot by running a few cycles to get a sense of how long the test runs. I call this the analysis phase. It starts by dipping its toes in the water with a few iterations, and then continues to increase the number of iterations until it reaches a percent uncertainty of at most 1% (or a min time or max time is reached, if specified by the user).
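The analysis loop can be sketched roughly like this (a simplified, hypothetical version using Date.now; the real code also folds in the timer's resolution and the statistical uncertainty checks when deciding how long a cycle must run):

```javascript
// Simplified sketch of the analysis phase: grow the iteration count
// until a cycle runs long enough to measure reliably, then hand that
// count to the sampling phase.
function analyze(testFn, minTimeSecs) {
  var count = 1;
  for (;;) {
    var start = Date.now();
    for (var i = 0; i < count; i++) {
      testFn();
    }
    var elapsed = (Date.now() - start) / 1e3; // seconds
    if (elapsed >= minTimeSecs) {
      return count;
    }
    // Scale the count toward the target time; double it when the run
    // was too fast for the timer to register at all.
    count = elapsed > 0 ? Math.ceil(count * (minTimeSecs / elapsed)) : count * 2;
  }
}
```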

A note about the timer used to run the test. The code above measures time through the obscure t1354904061091 argument, which holds the timer. The actual timer could be any one of the following:

  • chrome.Interval - a microsecond timer available in Chromium builds that expose the benchmarking extension.
  • java.lang.System.nanoTime() - a nanosecond timer, accessed through a Java applet.
  • process.hrtime - a high-resolution timer available in Node.js.
  • Date - plain old JavaScript dates, with millisecond (in practice, ~15ms) resolution.

Benchmark.js chooses the timer with the finest resolution available on the platform. It takes the timer's resolution into account when calculating the number of iterations. So while using JavaScript's Date object (with a resolution of 15ms) is not ideal, benchmark.js compensates by running the tests for more iterations.
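This compensation can be expressed as a minimum cycle time derived from the timer's resolution. The sketch below follows a calculation along these lines (treat the exact constants as illustrative):

```javascript
// To keep measurement error at or below 1%, a cycle must run long
// enough that the timer's granularity is a small fraction of the
// elapsed time: at least resolution / 2 / 0.01 seconds, with a
// floor of 50ms.
function minCycleTime(resolutionSecs) {
  var uncertainty = 0.01; // target: at most 1% error
  return Math.max(resolutionSecs / 2 / uncertainty, 0.05);
}

// With Date's ~15ms resolution, each cycle must run for roughly 750ms,
// which benchmark.js reaches by packing more iterations into the cycle.
minCycleTime(0.015);
```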

Once a number of iterations is calculated, benchmark.js enters what I call the sampling phase. This phase runs the tests and stores the results of each cycle in the Benchmark.prototype.stats.sample array. The number of iterations is stored in Benchmark.prototype.count. This is the number of iterations per cycle, not the total number of iterations across all cycles. The count is not necessarily fixed; it may increase if benchmark.js determines it needs more iterations to get an accurate reading.

One point of confusion is the Benchmark.prototype.cycles property. It sounds like it should store the total number of cycles run by the benchmark. But instead it stores the number of cycles run during the analysis phase. If you’d like the number of cycles run during the sampling phase, look at the length of the Benchmark.prototype.stats.sample array.

As the test runs, the Benchmark.prototype.stats object stores the results. The stats are calculated and updated after each cycle, so they will exist even if the benchmark is aborted before finishing. There are a few places where results are stored:

  • Benchmark.prototype.stats - statistics on the sample, including the mean, variance, standard deviation, and margin of error.
  • Benchmark.prototype.times - timing data, such as the total elapsed time and the time taken by the last cycle.
  • Benchmark.prototype.hz - the number of operations per second.
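
The statistics themselves are standard sample statistics. Here is a sketch of the formulas (illustrative only; benchmark.js computes these internally, and it uses t-distribution critical values rather than the fixed large-sample 1.96 below):

```javascript
// Sketch of the statistics derived from the sample array, where each
// entry is the measured time of one cycle.
function computeStats(sample) {
  var n = sample.length;
  var mean = sample.reduce(function(sum, x) { return sum + x; }, 0) / n;
  var variance = sample.reduce(function(sum, x) {
    return sum + Math.pow(x - mean, 2);
  }, 0) / (n - 1);
  var sem = Math.sqrt(variance / n);       // standard error of the mean
  var moe = sem * 1.96;                    // margin of error (~95% confidence)
  var rme = mean ? (moe / mean) * 100 : 0; // relative margin of error, in %
  return { mean: mean, variance: variance, sem: sem, moe: moe, rme: rme };
}
```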

A benchmark also emits various events during the course of a test:

  • onStart - Called once, before the entire benchmark starts.
  • onCycle - Called after each cycle completes. Fires during both the analysis and sampling phases.
  • onComplete - Called once, after the entire benchmark completes.
  • onError - Called if the JS code has an error.
  • onAbort - Called if the test is aborted.
  • onReset - Called when the benchmark's properties are reset (the benchmark is also aborted if it is running).

Finally, benchmarks can also be organized into a suite. A suite is a collection of benchmarks, and is useful for grouping benchmarks. A suite has methods to operate over its benchmarks (such as forEach), as well as an analogous set of events that operate at the suite level (for example, onCycle is called after each benchmark completes).
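
The suite's control flow can be sketched as follows (a plain-JS toy, not the real Benchmark.Suite; note how the suite-level onCycle fires once per benchmark, not once per measurement cycle):

```javascript
// Toy sketch of a suite driving its benchmarks: each benchmark runs to
// completion, with suite-level events firing around the whole run.
function runSuite(benchmarks, events) {
  events.onStart();
  benchmarks.forEach(function(bench) {
    bench.run();
    events.onCycle(bench); // suite-level cycle: one benchmark finished
  });
  events.onComplete();
}
```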

I hope this gives a good introduction to what happens when running a benchmark.js test. To recap, here's an outline of how a test is executed:

  • For each suite:
    • Fire event: Suite.onStart()
    • For each benchmark:
      • Fire event: Benchmark.onStart()
      • For each sampled run:
        • Run unit once to check for errors
          • setup()
          • testfn()
          • teardown()
        • Run unit multiple times and measure results
          • setup()
          • for each Benchmark.count
            • testfn()
          • teardown()
        • Fire event: Benchmark.onCycle()
      • Fire event: Benchmark.onComplete()
      • Fire event: Suite.onCycle()
    • Fire event: Suite.onComplete()

Michael Hausenblas and I have just released a revamped version of the site. The site includes resources on using CORS, including details on how to configure CORS for various servers. It also has a tool that helps test CORS support on various servers. The site's source code is hosted on GitHub, so feel free to contribute!

It’s around this time of the election year (and over and over throughout the years) that I hear the common refrain of “your vote doesn’t matter” (especially in a decidedly “blue” state like Illinois). Yes, I understand the mathematics of how one vote out of 100 million will not make a dent in the outcome. But I don’t believe that means you shouldn’t vote.

Consider the Jim Crow laws that promoted racial segregation and permeated the South for almost 100 years. One reason for their existence was that the same politicians were elected into office year after year. There was nothing shady about this; they were legitimately elected. The problem was that those who would vote to change the laws had no vote. From Wikipedia:

...the establishment Democrats were passing laws to make voter registration and electoral rules more restrictive, with the result that political participation by most blacks and many poor whites began to decrease. Between 1890 and 1910, ten of the eleven former Confederate states, starting with Mississippi, passed new constitutions or amendments that effectively disfranchised most blacks and tens of thousands of poor whites through a combination of poll taxes, literacy and comprehension tests, and residency and record-keeping requirements. Grandfather clauses temporarily permitted some illiterate whites to vote.

Voter turnout dropped drastically through the South as a result of such measures.

...Those who could not vote were not eligible to serve on juries and could not run for local offices. They effectively disappeared from political life, as they could not influence the state legislatures, and their interests were overlooked.

Many of the arguments justifying voting tend towards a philosophical call to duty. But here is a concrete example from this country’s history of how significant voting is. Regardless of whether a large group of people can’t vote (due to unjust voter registration laws back then) or won’t vote (due to voter apathy now), the impact on the ballot is the same. I’m not saying that not voting leads to racist laws, but only that our rights should not be taken for granted; without a vote, without that voice, we stand to lose a lot.