Tech Flashback: Canon DM-2500 Intelligent Organizer

Seeing as I’m not having too much luck getting to sleep, I might as well do something productive, so here comes another post. Apologies in advance if I make any mistakes …

It’s hard to imagine now, in an era of smartphones, that it wasn’t that long ago that even PDAs were non-existent or unaffordable. In the early-to-mid 1990s, in an effort to appear somewhat technologically advanced, some people replaced their pocket notebook or diary with a “digital diary” or “electronic organizer”.

In fact, I had two, but sadly both suffered damage in one way or another and have long been disposed of. But the other day, in a thrift shop, I came across a Canon DM-2500 for a few dollars, and I thought it was worth buying just to blog about it.

The “Intelligent Organizer”


The organizer is from Canon, specifically their business machines division, which is responsible for desktop calculators and the like. This unit came with its box, but was otherwise missing all of its documentation. At least we get to see the box, which boasts a list of features that doesn’t seem so remarkable today, and a picture of the unit itself.


The rear of the unit makes a very honest depiction of the features and how it looks on screen. That’s far from what can be said about many advertising materials nowadays … The unit is Made in China.


To appeal to international markets, the same text is written in a variety of languages on the other sides of the box.


The unit has seen some work, so it’s a little scuffed. It boasts 10kB of RAM, and has a slot cut in its cover so that the function buttons and search keys are accessible through the cover. This means that for simple “reference” purposes, the cover doesn’t need to be opened, whereas when programming is desired, opening the cover exposes the QWERTY keyboard and programming function keys.

The unit itself is almost the size of my 5.5″ smartphone, both in footprint and in thickness. It weighs slightly less at 84.65 grams, but this is 20 years of progress in a picture.


Opening up the cover, we see the quick reference label on the inside panel. This is necessary as some of the organizers have fairly complicated features, and thick manuals to go along with it. Having the basic instructions available at a glance helps you to work the device when you’re “away from home”.

A QWERTY style keyboard is available, made of the rubbery buttons you find on older calculators. It’s not particularly tactile, but serves for more convenient text entry. The symbols available are hidden behind the SYM button, and everything is in upper case. Rather annoyingly, the PROG button is where you might expect backspace to be, so when correcting errors, you might instead exit the programming mode and destroy any progress you’ve made in programming a record.

Because of the close spacing of the buttons, and the slightly awkward layout, it’s not an easy job entering text. It doesn’t help that the unit seems to lag as well, and the LCD refreshes slowly, so it’s really a thumb board for a one-by-one character entry. Not so good for long addresses.

Some later units had additional features, such as free-form notes, redefinable fields for the address book, expenses lists, data exchange, etc.


The rear is somewhat scuffed as well, but features a screw-down battery hatch requiring the use of three CR2032 cells, two for main and one for back-up. There is a reset hole to reset the memory of the unit and erase all data, and there is a piezo buzzer hole to let the sound out of the case.

As such organizers pre-date the availability of Flash memory, they almost universally use SRAM, which requires power to retain data. Being volatile, this is why battery replacement can be such a daunting task: get the polarity wrong, remove the wrong combination of cells, or take too long while replacing them, and all your data is lost.

To combat this, some units had their own data ports for data transfer to a PC (cable and software at an additional cost) for back-up purposes. This unit doesn’t have any of these features. However, because of the risk of sudden data loss, I can’t imagine too many people would have favoured such units over “physical” diaries, which don’t have this potential for catastrophic data loss without an easy back-up option.


It seems like the unit may have seen better days, as the LCD has some scratches on it and it doesn’t have particularly good contrast unless viewed from an oblique angle. I will continue anyway, but apologies for the slanted LCD images.

When powered on, the first thing you are welcomed with is the clock.


It’s good to see that even though it’s probably an early-90s product, it still appears to be Y2K compliant. Pressing the TEL button allows us to look at the phone directory.


We are prompted to search, so if you enter a few characters and enter, you can search by name, or you can just press the up-down search keys to scroll through the whole database “rolodex” style.


The display consists of one dot-matrix line and two 7-segment lines, no doubt a measure to reduce the complexity and cost of the device. Field information is displayed in fixed segments underneath. More expensive (and later) units have full matrix displays which can render bolder text, lowercase text and more natural-looking numbers.


The whole character set can be seen in the above images, and there really aren’t that many characters (45 in total). Everything is only available in upper case as well. To program a new entry, we can press the PROG button which briefly flashes up the capacity where U stands for bytes used, and E stands for bytes free.


Then it prompts you field by field and you enter the data followed by enter until the record is complete (Name, Company, Address, TEL1, TEL2).


The schedule feature is not particularly interesting. Each schedule entry is a line of text, a start time/date pair and an end time/date pair. Optionally, the alarm “flag” can be set to have the unit warn you of the event. I guess this is one big advantage of having an “electronic” diary – the possibility for alerts.


The calculator is a 10-digit “regular” calculator with no special functions. A regular desktop calculator is more functional owing to the traditional keypad layout which is faster and easier to use.


The calendar feature is a bit “lame” as it’s basically a week-by-week view of the dates, using both rows of numeric segments to display the days owing to space limitations. It’s hardly practical by any stretch of the imagination.


You do have a world-clock feature, which is useful for travellers and those doing business in different countries, but it’s probably got a few out-of-date timezones by now, due to the changes which happen occasionally.


The alarm feature is not anything special either. You get one alarm, that’s it.


The “secret” area is basically a partitioning scheme where any data stored in the modes within the secret area is only visible once logged in. It’s probably handy for protecting your data from occasional prying eyes, but there’s no way to change the password once set … so let’s just hope nobody sees you typing it in, because it’s not covered by asterisks either!


Once logged in, the “key” icon appears in the corner to let you know that any actions are being performed in the secure area. Pressing on the secret button logs you out back into the openly accessible area.



One thing that’s not very commonly discussed is the issue of capacity. In the period when these units were sold, aside from the “features” on the box, the next most common parameter to compare was the capacity stated in kB. I’ve seen units from 2kB through to 256kB, and formerly owned a 2kB and a 64kB unit. But how much you can actually get out of that is not clear – depending on how the data is stored, the available RAM could go further or not as far. As a result, I conducted a few calculations and experiments to flesh it out.

Available Capacity

A capacity of 10kB should equal 10240 bytes for storage. According to the screen post-reset, the unit has 10048 bytes available, so it’s likely 192 bytes are taken away for the system’s internal usage (e.g. storing the password, storing the fixed alarm, storing the calculator memory, last timezone displayed).
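The arithmetic behind that figure is trivial to check (a quick sketch, taking the advertised 10kB at face value as 10 × 1024 bytes):

```python
# System overhead = advertised capacity minus the free space shown post-reset.
advertised_bytes = 10 * 1024     # "10kB" on the box
shown_free = 10048               # bytes reported on screen after a memory reset

overhead = advertised_bytes - shown_free
print(overhead)  # → 192 bytes reserved for internal use
```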

Record Sizes

A telephone record has the following fixed maximum field lengths:

  • Name – 24 characters
  • Company – 24 characters
  • Address – 48 characters
  • Telephone 1 – 24 characters
  • Telephone 2 – 24 characters

This totals to a maximum record length of 144 bytes. Upon storing a maximum-length record, I saw a total of 153 bytes consumed, so there is an overhead of 9 bytes per record, which is probably used to separate the fields (5 bytes) and for other administrative purposes.

Records are not fixed-length, either. I tried storing a record with a single-character name and the remaining fields left empty, and ended up with 10 bytes used. This indicates a semi-efficient use of memory.
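In other words, the observations fit a simple model: a record costs the characters actually stored plus a fixed 9 bytes of overhead. A small sketch of that model (the breakdown of the overhead is my guess, not something the unit reports):

```python
RECORD_OVERHEAD = 9  # observed constant: field separators and other bookkeeping

def record_size(name=0, company=0, address=0, tel1=0, tel2=0):
    """Bytes consumed by one phone record, given characters stored per field."""
    return name + company + address + tel1 + tel2 + RECORD_OVERHEAD

print(record_size(24, 24, 48, 24, 24))  # maximum-length record → 153
print(record_size(name=1))              # one-character name only → 10
```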

A schedule record has a 48 character field, with two time values recorded. Storage decreases by 60 bytes for a maximum-size schedule record.

As a result, with the nominal 10240 bytes of storage, you could store 66 full-size phone records and two full-size schedules with 22 bytes to spare.
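That packing arithmetic can be verified in a couple of lines (a sketch taking the nominal 10240-byte figure at face value):

```python
CAPACITY = 10240     # nominal 10kB
PHONE_MAX = 153      # maximum-size phone record, overhead included
SCHEDULE_MAX = 60    # maximum-size schedule record

remaining = CAPACITY - 2 * SCHEDULE_MAX          # two full-size schedules first
phone_records, spare = divmod(remaining, PHONE_MAX)
print(phone_records, spare)  # → 66 22
```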

Bytes? Really?

Because of the limited character set of just 45 characters, I’m not sure that the 10kB they talk about means 8-bit bytes at all. After all, 64 possibilities can be expressed in 6 bits, so each character could be stored in 6 bits. In the case of the numeric digits (TEL1/TEL2 fields), there are just 10 possibilities, so each could be stored in 4 bits (e.g. BCD).

If 8-bit bytes were used, then the top two bits are practically free to be used as flags, and might be for indicating secret/non-secret data, helping with deletions to “defragment” records or to indicate active alarms etc. If they were not used, then it could be a bit of a waste.

If BCD was used to store the numeric phone numbers, then the TEL1/TEL2 fields would essentially be the same size as the Name field (assuming 8-bit bytes for text), and that might explain the 10kB “claimed” capacity which isn’t really a power-of-two value, although it would quickly come undone if I started storing all text and no numbers in the phone book. Alternatively, the SRAM memory might be made by combining 8kB + 2kB dies.
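How much the packing scheme matters is easy to quantify (hypothetical encodings only – nothing confirms the DM-2500 uses any of these internally):

```python
import math

CHARSET = 45                                    # observed character set size
bits_per_char = math.ceil(math.log2(CHARSET))   # 64 symbols fit in 6 bits
print(bits_per_char)                            # → 6

chars = 144  # a maximum-length phone record's worth of text
print(chars * 8 // 8)       # plain 8-bit bytes → 144 bytes
print(chars * 6 // 8)       # 6-bit packed      → 108 bytes

digits = 48  # TEL1 + TEL2 at full length
print(digits, digits // 2)  # text vs BCD (two digits per byte) → 48 vs 24 bytes
```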


Here, we get to the possibly fun part – the taking apart “part” of the post.


Under the battery hatch, the warnings about battery replacement are repeated. They’re even repeated on a piece of transparent plastic on top of one of the batteries. The serial number and date code are on the inside of the cover as well.

While all seemed to be well, the unit wasn’t in the best condition, as the previous owner had replaced the cells with Energizer cells and then forgotten about the unit. I had never seen a lithium coin cell leak before, but I definitely have now. I actually spent a bit of time cleaning the mess and scraping off some of the corrosion to get it working again. Note the central contact.


Two screws on the edge hold the cover in place, along with some internal clips. The internal PCB shows some SMD components, a few diodes to prevent mishaps in case of batteries running down or being inserted incorrectly, a tantalum capacitor and glob-top “chip on board” type construction.


The LCD is connected by a many-conductor flexible cable that’s probably fairly brittle, so I didn’t touch it. I didn’t take it apart any further, as the other side would predictably have been the keyboard’s printed trace pattern.


The rear cover houses the piezo buzzer under a bit of tape. That’s basically it.


I suppose that in the early-90s, when anything digital and computer-related was considered advanced, these units may have been considered “cool” and, in some ways, the “poor man’s PDA”. Unfortunately, while they helped some people “go paperless”, they needed a battery change roughly every year, which came with a risk of complete and total data loss if not performed correctly. They were also relatively cumbersome to use, as data entry was slow, and the forms of data that could be stored were limited compared to pen and paper. There were advantages in security, reusability and in having schedule alarms, but some units were also very pricey and difficult to use without the manual. They were also vulnerable to everything that electronics are vulnerable to – external EMI could cause some units to lock up and freeze, requiring a reset that could kill off all the data as well. In all, I suppose their lack of universal popularity probably reflects just how impractical these were compared to a good “pen and paper” diary or notepad.


Project: Arduino Grid Mouse Clicker (and other Pokemon Go-related fun)

I started my Pokemon Go journey as a “law abiding citizen”. That was, until, someone I knew decided to tell me about this magical thing which could tell me exactly what Pokemon would spawn in front of me at any given time. To prove it wasn’t a joke, we compared notes between what I saw, and what he saw (even though he was nowhere in my neighbourhood). Ultimately, the notes matched up, and after a while he fessed up about his “little secret” – a service known as Pokevision and various scanning tools such as PokemonGo-Map.

Because I was too busy to be playing and wouldn’t be heading out for a few days, I decided to take a peek into the world of Pokemon Go through these services. I wondered whether there were any particular spawn patterns, what the relative spawn frequencies were, was Team Instinct really the “least successful” team, and just what the gym to PokeStop ratio was. Data is always interesting.

As a result, I spent about a day on the 27th July scraping data by various means to see exactly what would turn up. While using Pokevision meant that they would do the scanning for you, and you would only be indirectly contravening the terms of service, using direct scrapers meant a good chance of a ban. Of course, by then, it was only a matter of days before Niantic instituted countermeasures to stop people from scanning, and sent cease and desist notices which took down the services. All in all, I never really benefited from the whole “scanning” craze – it had all wound down by the time I could actually get out of the house, and I felt a little jealous of all the gyms around me with CP2400+ Dragonites guarding them … when I didn’t even have one to call my own.

It’s clear from the recent ban waves in the Pokemon Go community that Niantic doesn’t tolerate cheaters using GPS spoofing, bots or scanning using unapproved clients. Such activities cause a lot of server load, which impacts the game’s launches and operation, and are considered unfair. I fully understand and respect Niantic’s wishes, and continue to play the game honestly without pursuing any such “workarounds”.

The Need to Click

My first experiment was using the Pokevision service. This basically allowed you to scan for Pokemon within a certain radius of a position specified by clicking your mouse. Scans are “limited” to 30 seconds, but I found that whenever I clicked “elsewhere”, a scan was immediately run and results returned while old results were cached. In order to scan a wide area, I would need to zoom out on the map and continually click around the Sydney basin. If I had to do this by hand … it wouldn’t be very scientific or enjoyable.

So, I decided to use something I had on hand, namely an Arduino Leonardo. I called the program arraymouse, and it relies on the Mouse library within Arduino. As it emulates a USB HID (Human Interface Device), it requires an Arduino Leonardo (or equivalent, such as the Freetronics Leostick), which has a “native” USB interface on the Atmel AVR microcontroller.


The other units are generally built around USB-to-serial converter chips which can only act as a USB CDC (Communications Device Class) serial device, or an AVR chip programmed to do the same job (which can be modified if you manually reprogram it using the ICSP pins, but that’s hardly convenient).

#include <Mouse.h>

#define GRIDWIDTH 24   // spacing between clicks, in mouse units
#define PAUSE 500      // settle time after each click, in ms
#define XWIDTH 640     // width of the scan area, in mouse units
#define YHEIGHT 400    // height of the scan area, in mouse units

int xpos=0;
int ypos=0;
int temp=0;

void setup() {
  Mouse.begin();
  delay(5000);               // grace period to park the pointer at the top-left
}

void loop() {
  while(ypos<YHEIGHT) {
    while(xpos<XWIDTH) {
      if(!(xpos%GRIDWIDTH)) {   // on a grid point: click and pause
        Mouse.click();
        delay(PAUSE);
      }
      Mouse.move(1,0,0);        // creep right one unit at a time
      xpos++;
    }
    for(temp=0;temp<XWIDTH;temp++) Mouse.move(-1,0,0);    // crawl back to the left edge
    for(temp=0;temp<GRIDWIDTH;temp++) Mouse.move(0,1,0);  // then down one grid row
    xpos=0;
    ypos+=GRIDWIDTH;
  }
  while(1);                     // grid complete: stop clicking
}
With such a program, depending on the parameters given, you can basically make it move the pointer in a grid pattern and click once in each spot. The perfection of the grid is so uncanny – a human can’t do this. This was done with the Leonardo hooked up to the USB port and Photoshop open with a brush-tool selected.


The important thing to realize is that it is “emulating” a mouse, and that is not exactly straightforward. Depending on the mouse driver, there are several quirks to be aware of:

  • One mouse movement does not equal one pixel movement on the screen. This actually depends on the mouse pointer speed as set in the driver/Control Panel.
  • One mouse movement of 10 units might not equal ten mouse movements of 1 unit. This happens especially if the driver has pointer acceleration enabled, which means larger movements, or events spaced closely together, are scaled by a non-linear multiplier.
  • There’s no way to know where your pointer is starting from, if it’s bumped into the edge of the screen, or if it’s come back to a certain place. You are literally driving blind.

As a result, it pays to experiment somewhat and get the values tuned to your needs by altering, reprogramming and testing. Starting the pointer from a known position is necessary to ensure the right results. Note that any positional differences will “add up” over time, so for a scan grid that works over long periods, it’s important to check that the pointer never “bumps into” any edge of the screen.
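One way to gain some confidence before a long run is to simulate the accumulated moves on the PC first and check that the grid never reaches a screen edge (a quick sketch mirroring the grid logic of the Arduino program, assuming for the sake of the check that one mouse unit equals one pixel; the screen size and starting position are just example values):

```python
GRIDWIDTH, XWIDTH, YHEIGHT = 24, 640, 400   # same parameters as the sketch
SCREEN_W, SCREEN_H = 1920, 1080             # example desktop resolution
START_X, START_Y = 100, 100                 # where the pointer is parked

ok = True
for row in range(0, YHEIGHT, GRIDWIDTH):
    for col in range(0, XWIDTH, GRIDWIDTH):
        px, py = START_X + col, START_Y + row
        if not (0 <= px < SCREEN_W and 0 <= py < SCREEN_H):
            ok = False  # a clipped move here would skew every later click
print(ok)  # → True means the whole grid stays on screen
```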

You can also use the Mouse library to do mouse jiggles, and perhaps drive people insane. There’s a corresponding Keyboard library as well which lets you go and send keystrokes to the connected computer, which could come in quite handy.

In my case, I was running a little experiment to try and see whether there were any spatial patterns for the spawn of Eevee and its evolutions. To do that, I ended up using a screen recorder and altering the frame-rate to accelerate the playback, so that a whole 4.5-hour observation can be seen in a minute and a half. The system wasn’t too happy, and the browser eventually crashed after running out of memory, since Pokevision was filtering the icons on the client side (it was literally plotting all of the spawns in the Sydney area “transparently”). It would seem from the video that there definitely are preferred regions, and its evolved forms do very sporadically appear. Sadly, where I am, it’s a total quiet-zone with no activity at all.

Pokevision was free, easy to use and popularized scanning for a large percentage of the userbase. It has since been offline for a long time, and is not likely to ever return.

The Scan-alysis

Not content with this, I decided to go one step further and cross the boundary into danger. Yes, I broke the terms of service, and it’s probably not the first time I’ve done something nasty like that. I figured, if it’s for my own curiosity, it can’t be too bad. As a result, I decided to set up a Ubuntu VM, install the necessary packages, scratch my head as things didn’t work out, and solve a few package issues with Python. Only after that, and some further tinkering, did I manage to run a few instances of PokemonGo-Map over a day to capture some data for later analysis.

Please remember that the data was collected 27th July 2016, and is thoroughly out of date by now. Things are likely to have changed, so it really is only for historical relevance.


With a wide scanning area run only once or twice, I managed to find a few interesting results:

  • Gyms are in blue, Pokestops in grey. It seems the Gyms are roughly evenly spatially distributed, but Pokestops are densely clustered in the city, and rarely found around where I am (bottom left corner of the map).
  • There are approximately 5.944 PokeStops per Gym in the scanned area.
  • 139 types of Pokemon were observed to spawn in the scanned area, which is only 3 fewer than the number available to catch locally (145 available in total, as the three legendary birds, Ditto, Mew and Mewtwo were not released; minus three regional exclusives not found in Australia, namely Farfetch’d, Tauros and Mr. Mime, leaving 142).


  • Of the gyms, Valor and Mystic hold an almost-even split, which leaves Instinct at half the number of either one. [Insert sad-face]


  • In terms of Gym guard Pokemon, Exeggutor, Dragonite, Vaporeon take out the top three positions with a combined percentage of 44.2% (or close to half the gyms).


  • In terms of spawned Pokemon, you’d be right to be annoyed if you saw a Zubat, Doduo, Pidgey, Weedle or Rattata, as they comprise over 50% of all spawns observed. I suppose I was right when I said my neighbourhood had a Doduo problem.


  • A full list of relative spawn frequencies is shown above. Apologies for the confusing way it is plotted – bars run from right to left, and the longer the bar, the rarer the Pokemon. Each division indicates a 10-fold increase in rarity (log scale). Steps appear in the data owing to the “quantization” of spawns (i.e. spawn counts are integers).

Needless to say, by the next day, it seemed that the games were all over and countermeasures were put into place to make this type of scraping more difficult or even impossible. As a result, I quickly lost interest in the whole project and shelved it, declaring the whole thing “dead” as it was clear to me that Niantic didn’t want us doing it.


With an Arduino, it’s relatively trivial to emulate a mouse and cause it to click repeatedly in a grid pattern to do some scanning. Good to know if you ever run into a need for such a thing and just want to “get it over and done with” in hardware, so that it works almost universally.

The data in this article is mainly provided for historical relevance. None of the services are available anymore, and direct scanning has been blocked by various means, so it isn’t really a viable route anymore. I didn’t actually do much but scrape some data and analyze it – I never got to catch any Pokemon as a direct result of the data.

In fact, from what I have observed playing the game, the patterns that may have previously been established, and the frequency of encounters, have changed dramatically with tweaks to the game at Niantic’s end. Where some Pokemon were very rare and never seen in my travels (e.g. Kangaskhan), just in the past week I’ve walked into four. As a result, the data provided really was only valid back then and isn’t in any way representative of the game in its present state.

So, in the end, I have to thank Niantic for bringing me a whole lot of fun, a reason to go out, and maybe even help me get a little more healthy while enjoying myself. It’s been the best motivation by far – better than any fitness tracker, and I’ve tested quite a few. Sorry for the added server load for a day – I promise I’m not doing any more of this and haven’t been since the first wave of countermeasures was launched.

After all … when I signed up to be a Pokemon Trainer … I kind-of agreed to:

“I will travel across the land,
searching far and wide …”

rather than pretending to travel across the land … by clicking far and wide. As a result, even though I see fewer people playing since the bans, and lots of people whining about a dysfunctional tracker … I suppose the thrill is in the serendipity of a Pokemon encounter. Remember, the motto is “gotta catch ’em all,” not “got to catch the nice ones only.”

Stay safe fellow Pokemon Go-ers, and go Team Instinct!


Opinion: Can You Beat Usain Bolt? Not So Fast!

The Rio 2016 Olympics will be a memorable event for a number of reasons, but as it draws to an end tonight (Sydney time), I thought it would probably be nice to have one Olympic-themed post.

In an era of internet distribution, traditional forms of publishing such as newspapers and magazines have found it tough to maintain their readership. Of course, they have taken the jump to the internet but, at the same time, have found it hard to maintain their earnings in the face of content aggregators “stealing” their content and alternative free sources. To try and offer more value to their readers and make better use of the medium, some news sources have gone to some lengths to improve their presentation by using interactive elements in their news stories.

One of these interactive demos was Can you beat Usain Bolt out of the blocks? by the New York Times. This particular demo was published about a week ago and has made the rounds on social media, with various people posting screenshots of their so-called achievements and others expressing some amusement at the applet.

(Screenshot of the applet at New York Times)

The applet simulates the auditory experience of waiting for the starting gun, and measures your reaction time accordingly. For that big data twist, the figures measured are plotted and graphed against other people who have measured their times below.

(Sample data screenshot)

It seemed a rather interesting take on a reaction time test, so I thought I’d give it a whirl myself … as an engineer.

My Reaction Time is … ?

I decided to try my luck with the applet, only to find that it wasn’t working properly under Firefox on my HP Stream 8 tablet, because of its puny RAM (resulting in constant swapping) and its weak Intel Atom CPU. So I grabbed my Android smartphone and tried my luck … only to find I was quite average, averaging about 340ms.

Maybe that would have been enough for most people, but I can’t accept that result. I had a feeling that I should have been faster, so I hopped onto my desktop to find my response time was about 120ms. Same webpage, tested within minutes of each other in the dead of the night, wearing the same headphones.

I repeated the test across a few different platforms to see what the results were:


I tried using the mouse (MS), keyboard (KB), a laptop with a “clicky” trackpad (CP), a laptop with a regular trackpad being tapped (TP), my Android 5.1-based phone, and my iOS tablet. Ten trials were run for each platform. As you can see from the results, the mean values vary quite a bit – from a low of 120.7ms through to a high of 342.9ms. The standard deviations were fairly similar across most trials, with the exception of the keyboard and clickpad.

So what is my reaction time really? If Usain Bolt managed a 155ms reaction time – this applet is telling me I’m both faster and slower than Usain Bolt depending on what device I use.

In fact, sometimes I was even fast enough to trigger the “false start” protection. It seems that quite a few readers were also able to do so – maybe because some were trying to anticipate the gun, but others may be due to other causes.

Regardless, I did the same for all platforms – namely, same headphones, subjectively checking the volume to be about the same, and closing my eyes and listening out for the bang. The factors that cause the variations in reported time are likely to be inherent to the system.

Latency: The Unknown Variable

If you haven’t guessed already, the problem lies in latency. When it comes to computers, simply trying to get the exact time a button was pressed, or trying to play a sound at a specified time, is hard – especially if you want to be millisecond-accurate, as the display on the page implies.

The following are just some of the types of latency that I can think of off the top of my head – not all of them necessarily apply to this particular applet, but might apply to any similar response-time critical activity depending on how it is coded and under what platform it is run. This is far from an exhaustive list, but it should serve to illustrate that trying to measure 155ms on such platforms is a potentially futile exercise.

Output Latency

The whole game starts with the audio prompts. But getting that audio to your ears is not as straightforward as it sounds. With a cursory glance at the resources of the webpage, it seems that the sounds are encoded as MP3 files. A little known fact about MP3 encoding (unless you’ve perhaps tried to have an MP3 audio stream with a video file) is that it is not time accurate due to the fixed length of frames and the use of filters which require padding at the beginning and end to ensure all sounds are reproduced. Depending on the encoder and decoder, the latency is about 24ms.


Add to that the fact that the “bang” sound itself might not start exactly at the zero-point due to its attack time, and we pick up a little more latency. Decoding the bang_2.mp3 file that I was served, there is 29ms from the beginning of the file to the peak of the waveform. The good thing about this? It could be compensated for, as it should be a relatively constant value dependent on the encoder and decoder, though there might be the odd case where it’s not accurate.

Now we might have the decoded audio, but we have to “send” it through to the sound card. The path from the application layer (the browser) to the sound card is a relatively long one as well. For one, the decoded audio is sent to the system mixer, which is responsible for handling multiple programs’ access to the sound card. In Windows, this is KMixer, which introduces a 30ms delay to the audio. There is also potential for further delay, especially in periods of CPU starvation, as KMixer can allocate further buffers to try and prevent audio crackle where samples aren’t arriving at the mixer in time to be played. Direct access through DirectSound is possible, but will screw with other apps accessing the audio device and isn’t normally used by browsers. Further to this, we have to add the time to go through the driver stack and bus, as well as the DAC. Depending on the connectivity, USB 2.0 could add about 6ms in the drivers due to internal buffering. The DAC adds roughly another 1ms of delay.

Other operating systems and platforms have different delays – Android for example has had different delays between versions, previously being over 150ms and steadily improving towards 15ms, whereas iOS has been very good from the start, having a claimed 5.8ms round trip delay or virtually no latency at all. Owing to these differences, it’s not easy to compensate for a latency which varies depending on the platform and its configuration.

Once we have our audio signal, it has to be reproduced and reach our ears. I don’t know the latency of a speaker or amplifier, but it should be pretty much instantaneous compared to the speed of sound, which is about 340m/s. If you were running a pair of speakers about 1m away, you would have about a 3ms disadvantage compared to someone with a pair of directly wired headphones slapped over their ears. If you used a Bluetooth headset, you’re out of the running entirely, because the encode-transmit-receive-decode chain is at least 100ms.
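The acoustic leg, at least, is easy to put a number on (a trivial sketch using the ~340m/s figure above):

```python
SPEED_OF_SOUND = 340.0  # m/s, roughly, at room temperature

def acoustic_delay_ms(distance_m):
    """One-way delay for sound to travel from transducer to ear."""
    return distance_m / SPEED_OF_SOUND * 1000.0

print(round(acoustic_delay_ms(1.0), 1))   # speakers 1m away → ~2.9ms
print(round(acoustic_delay_ms(0.02), 2))  # headphones on the ears → ~0.06ms
```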

Processing Latency

The applet within the browser is responsible for coordinating all of the features of the demonstration, and one of its most important jobs, aside from playing the necessary audio samples, is to keep track of the time elapsed and record when a click is received to stop the timer.

Surprisingly, this is somewhere with potential for large variations, as I witnessed myself with my Atom-based tablet, which choked due to a lack of CPU time and couldn’t provide any usable results. I also experienced this to some degree on other platforms, with a stuttering stopwatch timer and unrealistically short times in some cases, suggesting that the timing process is far from reliable.

The first thing to realize is that when running any application on a multitasking operating system, processing time is “shared” amongst processes in quanta (timeslices). Depending on how frequently these come around and how much processing gets done, you can’t realistically time more finely than the frequency of timeslices, as that’s how often you could (say) check a system timer. As a result, you can’t necessarily get an arbitrarily accurate time.

The other constraint, of course, is that it’s operating within a browser using Javascript. Traditional methods of getting timing were not very accurate; there is a better way, but its resolution is limited to the display frame. On a computer with a 60Hz refresh rate, that results in a time granularity of 16.7ms, so it’s likely you will get a time anywhere from 0 to 16.7ms “off” the true value. There may well be some very nifty code in there trying to make things more accurate, but that’s probably also computationally intensive.
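The frame-granularity effect can be illustrated with a small sketch. I’m assuming here that the applet sampled time once per displayed frame (e.g. via something like requestAnimationFrame) – an assumption about how it was built, not something I’ve verified:

```javascript
// If the stopwatch can only sample time once per displayed frame,
// the reported time is quantised to the frame period.
const FRAME_MS = 1000 / 60; // ~16.7ms per frame at 60Hz

// A click at trueTimeMs is only "seen" at the next frame boundary.
function reportedTimeMs(trueTimeMs) {
  return Math.ceil(trueTimeMs / FRAME_MS) * FRAME_MS;
}

console.log(reportedTimeMs(155).toFixed(1)); // "166.7" - a true 155ms click reads ~11.7ms late
```

In other words, a reaction genuinely equal to Bolt’s 155ms could still be reported as 166.7ms purely through quantisation.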

As Javascript is an interpreted language, the performance of the code varies depending on the browser, any optimizations, and the computing resources available. It’s conceivable that under some browsers, the way the commands are implemented is not 100% identical, resulting in different code behaviour.

Input Latency

The game ends with you clicking the mouse, pressing a key, or touching on a screen or touch-pad. As any gamer will tell you, these all involve latency in some way.

One factor is the polling rate of the device in question. Most “no-name” devices are polled at a 125Hz rate, which introduces up to 8ms of delay. Using gaming-style devices optimized for latency can reduce this to 1ms, as on my desktop. Of course, a mouse with its few switches generally responds as soon as it is clicked, but a keyboard is made of a matrix of rows and columns which is scanned for intersections to find key-presses. This scan only happens at a particular rate, so keyboard users may be disadvantaged by a further 20-30ms of delay.
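The worst-case polling penalty is just the reciprocal of the polling rate – a click can land just after a poll and wait a full interval for the next one:

```javascript
// Worst-case added delay from USB polling: a click arriving just after
// a poll waits one full polling interval before being reported.
function worstCasePollDelayMs(pollRateHz) {
  return 1000 / pollRateHz;
}

console.log(worstCasePollDelayMs(125));  // 8  - standard 125Hz mouse
console.log(worstCasePollDelayMs(1000)); // 1  - 1000Hz "gaming" mouse
```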

Depending on the way the controller is implemented, it may fire an event as soon as a switch is seen to close, or only after a “debounce” period passes, because the chattering of the contacts on the switch can cause false clicks. Most switches need about 5ms of debounce time, though this may not be necessary when only a single trigger is required.
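A simple time-based debounce, as a keyboard or mouse controller might implement it, looks something like this. This is a generic sketch of the technique, not the firmware of any particular device:

```javascript
// Time-based debounce: edges arriving within DEBOUNCE_MS of the last
// accepted press are treated as contact chatter and discarded.
const DEBOUNCE_MS = 5;

function makeDebouncer() {
  let lastAccepted = -Infinity; // time of the last accepted edge
  return function accept(edgeTimeMs) {
    if (edgeTimeMs - lastAccepted < DEBOUNCE_MS) {
      return false; // chatter - too soon after the last genuine press
    }
    lastAccepted = edgeTimeMs;
    return true; // genuine press
  };
}

const press = makeDebouncer();
console.log(press(100));   // true  - first edge accepted
console.log(press(101.5)); // false - bounce, within 5ms
console.log(press(110));   // true  - next genuine press
```

Note the first edge is still reported immediately in this scheme; debounce implemented the other way around (wait for the contacts to settle before firing) adds the 5ms directly to the click latency.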

Using a touch-screen, however, opens a new can of worms. On touch-screens, the processing can be quite onerous, as the driver has to distinguish between a tap (click) and a double-tap (zoom), which can result in a 300ms latency. Luckily, there’s a way around this, which they seem to have used.


However, even if this is the case, there are still plenty of other places for delays to creep in. As with a keyboard, the touch-screen is a matrix which is scanned, and depending on the digitiser, the scan rate can be 60Hz or even less in some cases, and may involve further scanning to improve accuracy in the presence of noise. This delay is up to 34ms. The drivers and the OS can add a further 20ms of processing, resulting in about a 54ms disadvantage.

Touch-pads on laptops don’t know anything about the context in which they are running, and so need to wait to check whether a contact was a tap, tap-and-hold (drag), double-tap or slide (move pointer). As a result, unless you use the fitted button that “immediately” fires a click, the touch-pad waits a “window” of time to decide what the movement is. This costs about 150ms from my observation.

Of course, you can short-circuit this delay with the use of a click button, and I did try that. The results were similar to those with the keyboard, likely because of the need to “push” the key through its travel to actuate it. This equates to a distance of about 1cm, and in the case of touch-screens, the finger may even have to travel back. Assuming a 1m/s movement rate (just a guess), that would cost 10ms in itself. A mouse involves a much shorter travel by comparison.
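The travel-time arithmetic is straightforward, remembering that the 1m/s finger speed is my guess rather than a measurement:

```javascript
// Time to push a key through its travel: distance / finger speed.
// travelMm in millimetres, fingerSpeedMps in metres per second;
// mm divided by (m/s) conveniently comes out in milliseconds.
function keyTravelDelayMs(travelMm, fingerSpeedMps) {
  return travelMm / fingerSpeedMps;
}

console.log(keyTravelDelayMs(10, 1)); // 10 - 10ms for ~1cm of key travel at 1m/s
```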

Other Latency and Sources for Error

This got me thinking about other sources of error and latency. One that came to mind is that how loud the gun is might have an impact on the cognition time. If it were soft, it might take longer to recognize (i.e. cognition), compared with a loud bang that relies on being startled.

Of course, the fact that the two activities are completely different also has an impact on the reaction time. We are only moving a finger, which is about half-way down the body. Usain is moving his whole body, using muscles at the far end of it. Based on an average 60m/s signalling speed through the body, there would be roughly a 17ms delay just for the signal to travel 1m. Given the difference in the mass being moved, the time taken to accelerate that mass enough to trigger a reading would also affect the measurements.
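The conduction delay is the same distance-over-speed arithmetic as before, using the 60m/s average signalling speed assumed above:

```javascript
// Nerve conduction delay: signal path length / conduction velocity.
const CONDUCTION_MPS = 60; // average signalling speed assumed above

function nerveDelayMs(pathMetres) {
  return (pathMetres / CONDUCTION_MPS) * 1000; // convert seconds to ms
}

console.log(nerveDelayMs(1).toFixed(1)); // "16.7" - ~17ms brain-to-leg over 1m
```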

The motivations and the consequences are completely different, which puts a psychological difference into the mix. For Usain, the consequences of a false start are career-devastating – basically losing the chance to show the past four years of effort, for someone whose life revolves around their performance on the track. For you, it’s just a little message on your screen telling you to try again. Because of the lack of risk, I’m sure people are more willing to anticipate the gun, or sit on the borderline of triggering the mouse, to get the best time they can.

Response times are also known to vary throughout the day, in response to body condition, and have a weak correlation with intelligence (which is interesting).

Overall, the summarized list of latencies and approximate ranges is shown in the following table:

[Table: rough summary of latencies and approximate ranges]

Makes the 155ms that it was trying to time seem a bit small now, doesn’t it? As a result, I don’t think an applet like that, running in a web browser, could ever provide you with absolute values.

Literature Reported Reaction Times

I decided that it would be nice to try and get a grasp of what the reaction times measured and reported in selected literature publicly accessible via Google were.

Finger Response Times to Visual, Auditory and Tactile Modality Stimuli
Annie W.Y. Ng and Alan H.S. Chan
This paper seems to have some issues with its units, apparently claiming sub-millisecond reaction times. I’m sure they mean seconds rather than milliseconds for their reported values. The study used an Asus EeePC 4G laptop and a program coded in Visual Basic 6.0, with responses read from a USB numeric pad. Table II lists the overall simple response time as about 350ms over 690 responses.

Comparison between Auditory and Visual Simple Reaction Times
Jose Shelton, Gideon Praveen Kumar

This paper used DirectRT software on a laptop using the spacebar as the entry method. The test used 14 subjects, reporting a mean auditory reaction time of 284ms.

A Comparative Study of Visual and Auditory Reaction Times in Males and Females
Dhangauri Shenvi, Padma Balasubramanian

This study used a total of 79 participants with a Techno Electronics Digital Display Response Time Apparatus. Table II lists an average reaction time of 620ms for boys and 530ms for girls.

It seems, from a small sampling of reaction time papers that came up in a Google search, that the results vary widely from study to study, yet are somehow “by chance” coincident with the ranges in the graph. This is likely because they used different test apparatus, took no care to characterize the latency and granularity of their set-ups, and may have had different test populations with different testing goals (e.g. selections require additional cognition time). I suppose all tests that involve a computer will inevitably include some of the latency inherent to a multitasking operating system running several processes, with buffers between different elements connected on different buses. The measurements achievable are likely to carry some level of systematic error as a result.


In fact, looking again at the graph, there is a distinct cluster of results between 100-200ms which may reflect PC users with internal sound cards, headphones and 1000Hz mice, like myself. A secondary peak between about 230ms and 280ms may represent the iOS users. The main peak around 350ms probably represents the majority of mobile users, running less latency-optimized Android as their operating system. It’s interesting to see “clusters” in the graph which suggest there may be timing granularity at play, even though a glance at the reported values doesn’t seem to show it.


The main purpose of the applet in question was to be an interactive element adding “fun” to otherwise bland news stories, and I think it did that well. It challenged readers and engaged them in a way which became somewhat popular.

However, the people visiting probably paid no thought to how the whole thing works or what the result means, and instead just wanted to beat Bolt’s 155ms time and claim glory. But that’s not how any of this works. The two challenges are incomparable – and so are the consequences.

A closer and more careful consideration of all the influences suggests that even some researchers haven’t characterized the inherent delays within their systems or the granularity of the timing precision achievable. As a result, it’s likely their results carry some level of systematic error and granularity in precision. An absolute measure of reaction time is more difficult than one might expect, and would likely require a dedicated set-up – e.g. a high-speed camera with a sound trigger, or an FPGA/microcontroller with multiple inputs and a high clock frequency.

So I suppose it pays to think carefully about all those interactive elements you might find lying about, as well as “infographics” which people seem to love. A lot of the time, the truth is a lot deeper than it appears.
