Opinion: Can You Beat Usain Bolt? Not So Fast!

The Rio 2016 Olympics will be a memorable event for a number of reasons, but as it draws to an end tonight (Sydney time), I thought it would be nice to have one Olympic-themed post.

In an era of internet distribution, traditional forms of publishing such as newspapers and magazines have found it tough to maintain their readership. They have, of course, made the jump to the internet, but at the same time have found it hard to maintain their earnings in the face of content aggregators "stealing" their content and alternative free sources. To offer more value to their readers and make better use of the medium, some news sources have gone to some lengths to improve their presentation by pairing interactive elements with their news stories.

One of these interactive demos was Can you beat Usain Bolt out of the blocks? by the New York Times. This particular demo was published about a week ago and has made the rounds on social media, with various people posting screenshots of their so-called achievements and others expressing some amusement at the applet.

nyt-interactive-ubolt (Screenshot of the applet at The New York Times)

The applet simulates the auditory experience of waiting for the starting gun and measures your reaction time accordingly. For that big-data twist, your measured figure is plotted on a graph below against the times of other readers who have taken the test.

pc-frequency (Sample data screenshot)

It seemed a rather interesting take on a reaction time test, so I thought I’d give it a whirl myself … as an engineer.

My Reaction Time is … ?

I decided to try my luck with the applet, only to find that it wasn't working properly under Firefox on my HP Stream 8 tablet, because its puny RAM resulted in constant swapping and its Intel Atom CPU was too weak. So I grabbed my Android smartphone and tried my luck … only to find I was quite average, averaging about 340ms.

Maybe that would have been enough for most people, but I can’t accept that result. I had a feeling that I should have been faster, so I hopped onto my desktop to find my response time was about 120ms. Same webpage, tested within minutes of each other in the dead of the night, wearing the same headphones.

I repeated the test across a few different platforms to see what the results were:

test-result-variance

I tried using the mouse (MS), keyboard (KB), a laptop with a "clicky" trackpad (CP), a laptop with a regular trackpad being tapped (TP), my Android 5.1-based phone, and my iOS tablet. Ten trials were run for each platform. As you can see from the results, the mean values vary quite a bit, from a low of 120.7ms through to a high of 342.9ms. The standard deviations were fairly similar across most trials, with the exception of the keyboard and clickpad.

So what is my reaction time, really? If Usain Bolt managed a 155ms reaction time, this applet is telling me I'm both faster and slower than he is, depending on which device I use.

fs-result
In fact, sometimes I was even fast enough to trigger the "false start" protection. It seems that quite a few readers were also able to do so. Some were probably trying to anticipate the gun, but for others there may be other causes.

fs-frequency
Regardless, I did the same for all platforms: same headphones, subjectively checking that the volume was about the same, closing my eyes and listening out for the bang. The factors that cause the variations in reported time are likely to be inherent to the system.

Latency: The Unknown Variable

If you haven't guessed already, the problem lies in latency. When it comes to computers, simply trying to get the exact time a button is pressed, or trying to play a sound at a specified time, is hard, especially if you want to be millisecond-accurate, as the on-page display implies.

The following are just some of the types of latency that I can think of off the top of my head – not all of them necessarily apply to this particular applet, but might apply to any similar response-time critical activity depending on how it is coded and under what platform it is run. This is far from an exhaustive list, but it should serve to illustrate that trying to measure 155ms on such platforms is a potentially futile exercise.

Output Latency

The whole game starts with the audio prompts, but getting that audio to your ears is not as straightforward as it sounds. A cursory glance at the resources of the webpage suggests the sounds are encoded as MP3 files. A little-known fact about MP3 (unless you've perhaps tried to keep an MP3 audio stream in sync with a video file) is that it is not time-accurate, due to the fixed length of its frames and the use of filters which require padding at the beginning and end to ensure all sounds are reproduced. Depending on the encoder and decoder, this adds a latency of about 24ms.
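
As a back-of-the-envelope check (a sketch, not measured from the NYT applet's own files), the commonly quoted encoder and decoder delays translate into milliseconds like this:

```javascript
// Rough sketch: converting typical MP3 codec delays from samples into milliseconds.
// The sample counts below are commonly quoted values, assumed for illustration.
const sampleRate = 44100;   // Hz, assumed sample rate of the clip
const encoderDelay = 576;   // samples of padding typically added by the encoder
const decoderDelay = 529;   // samples of delay inherent to the MP3 decoder's filterbank
const codecDelayMs = (encoderDelay + decoderDelay) / sampleRate * 1000;
console.log(codecDelayMs.toFixed(1) + ' ms'); // ≈ 25ms, in line with the ~24ms figure above
```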

bang-audio

Add to this the fact that the "bang" sound itself might not start exactly at the zero point due to its attack time, and we accrue a little more latency. Decoding the bang_2.mp3 file that I was served, there is about 29ms from the beginning of the file to the peak of the waveform. The good thing about this? It could be compensated for, as it should be a relatively constant value dependent on the encoder and decoder, though there might be the odd case where the compensation isn't accurate.
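
If these offsets really are constant, an applet could in principle subtract them from the raw measurement. A minimal sketch, using hypothetical offset values taken from the figures above (this is not how the NYT applet is implemented, just an illustration):

```javascript
// Minimal compensation sketch (hypothetical, not the NYT implementation).
// audioOffsetMs lumps together the assumed constant delays: ~24ms of codec
// latency plus the ~29ms attack time of the "bang" sample.
const audioOffsetMs = 24 + 29;

function correctedReactionTime(gunScheduledAtMs, clickAtMs) {
  // Timestamps in milliseconds (e.g. from performance.now()).
  return (clickAtMs - gunScheduledAtMs) - audioOffsetMs;
}

console.log(correctedReactionTime(1000, 1200)); // a raw 200ms becomes 147ms
```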

Now we might have the decoded audio, but we still have to "send" it to the sound card. The journey from the application layer (the browser) to the sound card is a relatively long one as well. For one, the decoded audio is sent to the system mixer, which is responsible for handling access to the sound card by multiple programs. In Windows, this has traditionally been KMixer, which introduces a 30ms delay to the audio. There is potential for further delay, especially in periods of CPU starvation, as KMixer can allocate additional buffers to try to prevent audio crackle when samples aren't arriving at the mixer in time to be played. Direct access through DirectSound is possible, but it interferes with other applications accessing the audio device and isn't normally used by browsers. On top of this delay, we have to add the time to go through the driver stack and bus, as well as the DAC. Depending on the connectivity, USB 2.0 could add about 6ms in the drivers due to internal buffering, and the DAC adds roughly 1ms of delay.

Other operating systems and platforms have different delays. Android, for example, has had different delays between versions, previously being over 150ms and steadily improving towards 15ms, whereas iOS has been very good from the start, with a claimed round-trip delay of 5.8ms, which is virtually negligible. Owing to these differences, it's not easy to compensate for a latency which varies depending on the platform and its configuration.

Once we have our audio signal, it still has to be reproduced and reach our ears. I don't know the latency of a speaker or amplifier, but it should be practically instantaneous compared to the speed of sound, which is about 340m/s. If you were running a pair of speakers about 1m away, you would have about a 3ms disadvantage compared to someone with a pair of directly wired headphones slapped over their ears. If you used a Bluetooth headset, you're out of the running entirely, because the encode-transmit-receive-decode chain is at least 100ms.
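
For what it's worth, the acoustic propagation penalty is easy to estimate from the figures above (a sketch, assuming a 1m listening distance):

```javascript
// Acoustic propagation delay: extra time for the "bang" to travel from a
// speaker to your ears, versus headphones sitting right at the ear.
const speedOfSound = 340;     // m/s, as above
const speakerDistance = 1.0;  // metres, assumed listening distance
const propagationMs = speakerDistance / speedOfSound * 1000;
console.log(propagationMs.toFixed(1) + ' ms'); // ≈ 2.9ms disadvantage for speaker users
```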

Processing Latency

The applet within the browser is responsible for coordinating all of the features of the demonstration, and one of its most important jobs, aside from playing the necessary audio samples, is to keep track of the time elapsed and record when a click is received to stop the timer.

Surprisingly, this is an area with the potential for large variations, as I witnessed myself with my Atom-based tablet, which choked due to a lack of CPU time and couldn't provide any usable results. I also experienced this to some degree on other platforms, with a stuttering stopwatch timer and unrealistically short times in some cases, suggesting that the timing process is far from reliable.

The first thing to realize is that when running any application on a multitasking operating system, processing time is "shared" amongst processes in quanta (timeslices). Depending on how frequently these come around and how much processing gets done in each, you can't realistically time anything more finely than the interval between timeslices, as that is how often you could (say) check a system timer. As a result, you can't necessarily get an arbitrarily accurate time.

The other constraint, of course, is that it's operating within a browser using Javascript. Traditional methods of getting timestamps were not very accurate, and while better ways exist, some of them only update once per displayed frame. On a computer with a 60Hz refresh rate, that results in a time granularity of 16.7ms, so you are likely to get a time anywhere from 0 to 16.7ms "off" the true value. There is probably some very nifty code in there to try to make things more accurate, but that is probably also computationally intensive.
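
For the curious, a bare-bones sketch of how such a test might be timed in the browser is shown below. This is an assumption of how it could be done, not the NYT's actual code: performance.now() offers sub-millisecond resolution, but the handlers still only run when the browser gets around to them, so all of the scheduling, playback and event latency discussed here is baked into the reported figure.

```javascript
// Bare-bones reaction timer sketch (an assumption, not the NYT's code).
let gunTime = null;

function fireGun(bangAudioElement) {
  bangAudioElement.play();      // actual playback starts some time after this call
  gunTime = performance.now();  // best available guess at when the gun "fired"
}

document.addEventListener('mousedown', () => {
  if (gunTime !== null) {
    const reactionMs = performance.now() - gunTime;
    console.log('Reaction: ' + reactionMs.toFixed(1) + ' ms');
    gunTime = null;
  }
});
```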

As Javascript is an interpreted language, the performance of the code varies depending on the browser, any optimizations, and the computing resources available. It's conceivable that under some browsers, the way commands are implemented is not 100% identical, resulting in different code behaviour.

Input Latency

The game ends with you clicking the mouse, pressing a key, or touching on a screen or touch-pad. As any gamer will tell you, these all involve latency in some way.

One source is the polling rate of the device in question. Most "no-brand" devices are polled at a 125Hz rate, which introduces up to 8ms of delay; gaming-style devices optimized for latency, like the mouse on my desktop, can reduce this to 1ms. A mouse, with its few switches, generally responds as soon as it is clicked, but a keyboard is made of a matrix of rows and columns which are scanned for intersections to find key-presses. This scanning only happens at a particular rate, so keyboard users may be disadvantaged by a further 20-30ms of delay.
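
As a quick sanity check on those figures, the worst-case polling delay is simply one full polling interval:

```javascript
// Worst-case polling delay: a click can sit in the device for up to one full
// polling interval before the host even asks for it.
function worstCasePollingDelayMs(pollingRateHz) {
  return 1000 / pollingRateHz;
}
console.log(worstCasePollingDelayMs(125));  // 8ms for a typical "no-brand" mouse
console.log(worstCasePollingDelayMs(1000)); // 1ms for a latency-optimized gaming mouse
```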

Depending on how the controller is implemented, it may fire an event as soon as a switch is seen to close, or it may fire only after a "debounce" period passes, because chattering of the switch contacts can cause false clicks. Most switches need about 5ms or so of debounce time, though this may not be necessary for a single trigger.

Using a touch-screen, however, opens a new can of worms. On touch-screens, the processing can be quite onerous, as the driver has to distinguish between a tap (click) and a double-tap (zoom), which can result in a 300ms latency. Luckily, there's a way around this, which they seem to have used.

can-you-beat-meta-headers

However, even if this is the case, there are still plenty of other places for delays to creep in. As with a keyboard, the touch-screen is a matrix which is scanned, and depending on the digitiser, the scan rate can be 60Hz or even less in some cases, and may involve further scanning to improve accuracy in the presence of noise. This delay is up to 34ms. The drivers and the OS can add a further 20ms of processing, resulting in about a 54ms disadvantage.
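
For reference, one common way a page can dodge the ~300ms double-tap delay (an assumption on my part; the screenshot above suggests the NYT achieved the same thing through page meta headers) is to act on the raw touchstart event rather than waiting for the synthesized click:

```javascript
// Responding to touchstart instead of the synthesized click avoids the
// ~300ms double-tap-to-zoom wait. Element ID and handler are hypothetical.
const stopButton = document.getElementById('stop-button');

stopButton.addEventListener('touchstart', (event) => {
  event.preventDefault();            // suppress the delayed synthetic click
  recordStopTime(performance.now()); // hypothetical handler that stops the timer
}, { passive: false });
```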

Touch-pads on laptops don't know anything about the context in which they are running, and so need to wait to check whether a touch was a tap, a tap-and-hold (drag), a double-tap or a slide (pointer move). As a result, unless you use the fitted button that "immediately" fires a click, the touch-pad waits a "window" of time to decide what the movement is. From my observations, this costs about 150ms.

Of course, you can short-circuit this delay by using the click button, and I did try that. Similar results were achieved as with the keyboard, and it seems likely this was due to the need to "push" the key through its travel to actuate it. This equates to a distance of about 1cm, and in the case of touch-screens, it may even have to travel back. Assuming a 1m/s movement rate (just a guess), that would cost 10ms in itself. A mouse involves a much shorter travel by comparison.

Other Latency and Sources for Error

This also got me thinking about other sources of error and latency. One that came to mind is that how loud the gun sound is might have an impact on cognition time. If it is soft, it may take longer to consciously recognize than if it is loud enough to rely on a startle response.

Of course, the fact that the two activities are completely different also has an impact on the reaction time. We are only moving a finger, roughly half-way down the body; Usain is moving his whole body, using muscles at the far end of it. Based on an average signalling speed of 60m/s through the body, it takes about 16ms for a signal just to travel 1m. The difference in the mass being moved, and hence the time taken to accelerate that mass enough to trigger a reading, would also affect the measurements.
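
A rough figure for that extra neural path length, as a sketch using the numbers above (the 1m path length is a crude assumption):

```javascript
// Extra signalling delay for a sprinter: the "go" signal has to travel roughly
// a metre further (brain to legs and feet) than it does for a finger press.
const nerveSpeed = 60;        // m/s, average signalling speed quoted above
const extraPathLength = 1.0;  // metres, rough brain-to-foot assumption
const extraDelayMs = extraPathLength / nerveSpeed * 1000;
console.log(extraDelayMs.toFixed(1) + ' ms'); // ≈ 16.7ms before a muscle even moves
```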

The motivations and the consequences are also completely different, which adds a psychological difference to the mix. For Usain, a false start is career-devastating: it means losing the chance to show the past four years of effort, for someone whose life revolves around the track. For you, it's just a little message on your screen telling you to try again. Because of the lack of risk, I'm sure people are more willing to anticipate the gun, or sit on the borderline of triggering the mouse, to get the best time they can.

Response times are also known to vary throughout the day and with the condition of the body, and have a weak correlation with intelligence (which is interesting).

Overall, the summarized list of latencies and approximate ranges is shown in the following table:

rough-summary
Makes the 155ms that the applet was trying to time seem a bit small now, doesn't it? As a result, I don't think an applet like this running in a web browser could ever provide absolute values.
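
Just to put the scale into perspective, a rough tally of the desktop-path figures quoted in this post (a sketch using this post's own rough estimates; every platform will differ) already amounts to a sizeable fraction of the 155ms being measured:

```javascript
// Rough tally of the latency estimates quoted in this post for a desktop PC.
// Single numbers are fixed estimates; pairs are [min, max] ranges.
const desktopLatencies = {
  mp3CodecDelay:  24,         // ms
  waveformAttack: 29,         // ms, start of bang_2.mp3 to its peak
  systemMixer:    30,         // ms, Windows mixer buffering
  usbAndDac:       7,         // ms, USB buffering plus the DAC
  jsFrameTiming:  [0, 16.7],  // ms, 60Hz frame granularity
  mousePolling:   [0, 8],     // ms, 125Hz polling interval
};

function totalRange(latencies) {
  let min = 0, max = 0;
  for (const value of Object.values(latencies)) {
    const [lo, hi] = Array.isArray(value) ? value : [value, value];
    min += lo;
    max += hi;
  }
  return [min, max];
}

console.log(totalRange(desktopLatencies)); // roughly [90, 114.7] ms of overhead
```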

Literature Reported Reaction Times

I decided it would be nice to get a grasp of the reaction times measured and reported in a selection of literature publicly accessible via Google.

Finger Response Times to Visual, Auditory and Tactile Modality Stimuli
Annie W.Y. Ng and Alan H.S. Chan
This paper seems to have some issues with its units, claiming sub-millisecond reaction times; I'm sure they mean seconds rather than milliseconds for their reported values. The study used an Asus EeePC 4G laptop and a program coded in Visual Basic 6.0, with responses read from a USB numeric pad. Table II lists an overall simple response time of about 350ms over 690 responses.

Comparison between Auditory and Visual Simple Reaction Times
Jose Shelton, Gideon Praveen Kumar

This paper used DirectRT software on a laptop, with the spacebar as the entry method. The test involved 14 subjects and reported a mean auditory reaction time of 284ms.

A Comparative Study of Visual and Auditory Reaction Times in Males and Females
Dhangauri Shenvi, Padma Balasubramanian

This study used a total of 79 participants with a Techno Electronics Digital Display Response Time Apparatus. Table II lists average reaction times of 620ms for boys and 530ms for girls.

From this small sampling of reaction-time papers that came up in a Google search, it seems the results vary widely from study to study, yet are somehow "by chance" coincident with the ranges in the graph. This is likely because they used different test apparatus, took no particular care to characterize the latency and granularity of their set-ups, and may have had different test populations and testing goals (e.g. tests involving selections require additional cognition time). I suppose all of the tests that involve a computer will inevitably include some of the latency inherent to a multitasking operating system running several processes, with buffers between elements connected on different buses. The measurements achievable are likely to carry some level of systematic error as a result.

pc-frequency

In fact, looking again at the graph, there is a distinct cluster of results between 100 and 200ms, which may reflect PC users with internal sound cards, headphones and 1000Hz mice like myself. A secondary peak between about 230ms and 280ms may represent the iOS users, while the main peak around 350ms probably represents the majority of mobile users running less latency-optimized Android. It's interesting to see "clusters" in the graph which suggest there may be timing granularity at play, even though a glance at the reported values doesn't seem to show it.

Conclusion

The main purpose of the applet in question was to be an interactive element adding some "fun" to otherwise bland news stories, and I think it did that well. It challenged readers and engaged them in a way which became somewhat popular.

However, the people visiting probably paid no thought to how the whole thing works or what the result means, and instead just wanted to beat Bolt's 155ms time and claim glory. But that's not how any of this works. The two challenges are incomparable, and so are the consequences.

A closer and more careful consideration of all the influences suggests that even some researchers haven't characterized the inherent delays within their systems, nor the granularity of the timing precision achievable. As a result, it's likely their results carry some level of systematic error, as well as limited precision. An absolute measure of reaction time is more difficult than one might expect, and will likely require a dedicated set-up to measure, e.g. a high-speed camera with a sound trigger, or an FPGA/microcontroller with multiple inputs and a high clock frequency.

So I suppose it pays to think carefully about all those interactive elements you might find lying about, as well as “infographics” which people seem to love. A lot of the time, the truth is a lot deeper than it appears.

About lui_gough

I'm a bit of a nut for electronics, computing, photography, radio, satellite and other technical hobbies. Click for more about me!
This entry was posted in Opinion. Bookmark the permalink.

One Response to Opinion: Can You Beat Usain Bolt? Not So Fast!

  1. sparcie says:

    It’s really hard for a program (even a native one) to time events on a small scale like that. Even events internal to the machine (such as profiling code).

    Part of the problem is the operating system's scheduler. It really depends on the way the kernel is coded as to when you can expect to get processing time again.

    For a pre-emptive kernel (one that creates equal sized time slices, usually based on a timer signal) the problem comes from the task scheduler. There is a delay in switching tasks, but the main problem is the lack of guarantee of when you’ll get processing time. You can never be sure of the time you have to wait between time slices on the CPU. You could end up missing several ticks on the hardware timer before getting time on the processor.

    Co-operative multitasking has a similar problem, but without the fixed length time slices the delay is a bit less predictable.

    Early Unix had another system again, where any call to kernel functions would result in a task switch (it was not really co-operative or pre-emptive). For modern OSes this happens as well, but only when the process is accessing a resource it needs to wait for (the process transitions to the sleeping state).

    Hardware interrupts can add extra delay on top at pretty much any time.

    This is the reason the sleep() function in most programming languages doesn’t guarantee accuracy in the delay.

    Sparcie
