Salvage: Techway Endeavour II Computer (Part 2)

At last, it is time for the second part of the Techway Endeavour II salvage blogs. The previous post detailed the computer, its contents and its significance. This post looks at getting it running as best as I can.

Testing … 1 … 2 … 1 … 2

The first question is whether the computer would work at all. It probably hasn’t been plugged in for at least 15 years, and electronics are rarely built to last that long. The presence of the Dallas RTC module was already cause for concern, as these integrate a lithium cell to maintain the SRAM, and once that depletes, some boards can never boot again. First-generation Pentium motherboards were rather famous for being plagued by this, along with some Sun workstation boards.

Another concern was the power supply. While it looks fine visually, switch-mode supplies place high demands on their components and fail for a myriad of reasons. Those old capacitors might not be quite as good as they were in 1994 – would they still be good enough?

I already knew the hard drive was a potential problem. I’ve seen Western Digital Caviar series drives from that era drop like flies in the past few years – my stack of five drives from the 400MB era have all perished within the past two years. As a result, I didn’t want to chance the drive and left it aside to be dealt with separately so as not to waste any chances. After all, knowing if the rest of the machine would even POST would be good enough.

To do things safely, I decided to use my 300W Pure Sine-Wave Inverter and a lab benchtop power supply to create an isolated and limited power mains supply. This way, if the power supply were to fault, I shouldn’t be causing too much damage or blowing any fuses. I hooked up a PS/2 keyboard via a DIN adapter, a serial mouse and my old 22″ LCD monitor to the VGA port and hoped for the best.

I push the power button. A faint buzz comes out. The inverter trips out on overload. I sigh and quickly turn the machine off. Was that it?

I check my connections again. Nothing seems to be amiss. I give it another shot – pushing the button. Again, a faint buzz comes out but the inverter holds stable this time.


A sign of life begins to show as the board awakes from a long slumber.

The BIOS POST messages begin to print and I’m utterly amazed. The machine is alive.

Getting a Health Check

I furiously hit the DEL key to get into the BIOS. Now is the time to see whether it has amnesia, or whether it’s still got its SRAM and retained some of its settings. Taking these down could be vital, so I get the camera ready to take some shots.

The Award CMOS setup utility is very familiar, as it was quite popular at the time – I had an AMD K6-2 300MHz machine with the same menu layout.

It looks like the Timekeeper battery is not completely dead yet. It still thinks it’s 2016 – a little more than two years slow – so there’s a good chance it is going to fail almost any minute now. It retained the fact that the hard drive was set up in LBA mode, with a translated geometry of 969/64/63 – the cylinders divided by four and the heads multiplied by four. This was really only an issue for DOS software making BIOS-based calls, where getting the translation geometry wrong could cause data corruption. The expected 32MiB of RAM was detected and a single 1.44MB floppy drive was configured.
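The arithmetic behind this LBA-assist translation can be sanity-checked in a couple of lines of shell – a sketch, assuming a physical geometry of 3876/16/63 obtained by reversing the translation (an inference, not a figure taken from the BIOS screen):

```shell
# LBA-assist translation: divide cylinders by 4 and multiply heads by 4,
# bringing the cylinder count under the BIOS's 1024-cylinder limit.
phys_c=3876; phys_h=16; spt=63                    # inferred physical geometry
trans_c=$((phys_c / 4)); trans_h=$((phys_h * 4))
echo "translated geometry: $trans_c/$trans_h/$spt"
echo "capacity: $((phys_c * phys_h * spt * 512)) bytes"
```

This prints 969/64/63 and a capacity of about 2GB, consistent with the geometry the BIOS retained.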

For now, I changed the C-drive to None, as I had uninstalled the drive for further work.

Most of the settings here seem to be defaults – but note the age of the BIOS when it comes to the Boot Sequence. There were only two options – C, A (the default) or A, C (which I prefer, so I can boot from a floppy).

It was nice to see that the chipset wasn’t entirely devoid of configuration. While you couldn’t adjust the FSB speed here (as this was still a semi-jumpered motherboard), at least you could tweak the RAM timings and wait states if you wanted to squeeze more speed out of it. I set the Onboard IDE Timing to Fastest to hopefully get a little more speed out of it.

The old fashioned power management on the board is also familiar and annoying, which is why I am glad it is disabled. Modern OSes can handle configuring these sensibly, but older OSes did not have inbuilt management so this was a convenient way to do it. But it also meant that you could sit in DOS and the hard disk would spin down or the screen would blank out. If you weren’t expecting it, you could get quite surprised (as I have been).

Finally, the PCI bus has some configurability as well, with manual IRQ set-up and latency times. Could be useful if you are optimising throughput for a video capture card or a high-speed storage array.

Saving and exiting, the POST completed just fine, but there was nothing to boot. So I decided to do a RAM test – the next check-up I normally do. Writing out a Memtest86+ floppy, I shoved it in the drive and booted it.

Rather nicely, the test completed with no errors. But do you see that – the cache speed is 85MB/s and the RAM speed is 48MB/s? They say that RAM is fast and cache is fastest – but even microSD cards can offer more sustained throughput than the CPU cache on this computer. That was not something I had expected, but it shows the progress we’ve made since 1995.

How Many Times Do I Have to Ask? A Hard Drive to Get the Data Back

The next step was to hook up the hard drive to my dedicated recovery machine. I was hoping to be able to read out the data into an image if the drive even spun up at all.

I hit the power button and went straight into the BIOS. The drive spun up and it even detected correctly. I had a good feeling … so I booted straight into Debian to begin the ddrescue when bad noises started to appear. Ca-clunk. Clunk.

I keyed in the command anyway and started the recovery in motion. Read errors were constantly being reported and clunking noises being made.
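For the curious, the shape of a GNU ddrescue invocation for a drive in this state is roughly as follows – the device path and filenames are placeholders, and the command is only echoed here rather than run, since it needs the failing drive attached:

```shell
# Hedged sketch of a ddrescue recovery run: -d uses direct disc access
# (bypassing the kernel cache), -r-1 retries bad sectors indefinitely, and
# the map file records progress so an interrupted run can be resumed.
rescue_cmd="ddrescue -d -r-1 /dev/sdb caviar.img caviar.map"
echo "$rescue_cmd"
```

The map file is the key part – it is what allows a weeks-long recovery like this one to survive interruptions.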

Early on, the drive was still being read through its first pass, with bad blocks left unscraped, but patterns were beginning to appear which might suggest some sort of media defect or head issue.

Once fully trimmed and into the retry phase, the error pattern became more apparent, but some errors were also disappearing.

Over time, the errors continued to shrink and it seemed like this might be the side effect of some media damage, head damage, suspension damage, marginality in a head amplifier, contamination by outgassing, etc. Whatever it was, I had a hunch it would not be permanent.

As a result, I did not give up on the drive, and I left it to run recovery as long as it continued to return data at least once every 24 hours and it continued to spin. I wrote and published the first part expecting this process to take a short time … but …

… in all, it required over 12,346 retries and 18.5 days to complete. But it did complete and all the data was returned. All of this was done without ever shutting down the machine even once – just in case the drive would never spin up again. The drive itself wasn’t healthy so I wasn’t going to re-use it. It is instead kept for “ornamental” purposes.

To replace it, I picked up the nearest drive of a similar vintage that was doing nothing – a Fujitsu MPA3026AT, a 2.62GB drive slightly larger than the original. I’ve had good luck with most Fujitsu 3.5″ drives – they’ve given me no trouble despite their age. I used ddrescue with the --force option to write the image back to the Fujitsu drive so that the machine could boot.
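The write-back step can be sketched too – again with placeholder device names, and echoed rather than executed:

```shell
# Writing the rescued image back to the replacement drive: --force is
# required because the destination is an existing block device rather
# than a regular file.
restore_cmd="ddrescue --force caviar.img /dev/sdc restore.map"
echo "$restore_cmd"
```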

Starting it Up After a Vital Organ Transplant

After mounting the Fujitsu in the 5.25″ to 3.5″ adapter rails, I noticed the case had a 3.5″ bay just under the power supply. D’oh. How did I miss that?

Anyway, with the drive plugged in, I met a dilemma – the drive wouldn’t spin up. After a quick nose around, I discovered there was something funny going on with the IDE on the board: after unplugging the secondary IDE channel (which I had hooked to the CD-ROM, as is my preference), the drive spun up just fine. I checked cable orientations and swapped cables, but that didn’t help. My gut feeling is that pulling the SB AWE32 and twiddling its jumpers might fix it – but I didn’t want to do that just yet.

So with the drive spinning, I thought I was on my merry way until I tried to detect the drive in the BIOS. It came up with a nonsensical size which didn’t match the drive … not a good sign. As it turns out, and as I had suspected, this BIOS has the 2.1GB barrier, so I configured the drive as 4092/16/63, translated as 1023/128/63, set it to LBA mode and … drumroll …
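The barrier figure itself is easy to check: the largest geometry a BIOS with this limitation will accept is 4092 cylinders of 16 heads and 63 sectors, at 512 bytes each.

```shell
# Maximum capacity under the 2.1GB CHS barrier:
# 4092 cylinders x 16 heads x 63 sectors/track x 512 bytes/sector.
echo "$((4092 * 16 * 63 * 512)) bytes"
```

That comes to 2,111,864,832 bytes – about 2.1GB, hence the name.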

… it’s booting! That’s progress!

My excitement was somewhat short-lived: soon after reaching a garish desktop, the machine rebooted, and continued to do so. Afraid that marginal components might be causing instability, I used the F8 menu to get into Safe Mode, and the rebooting stopped. I suspect (as with many Windows 9x installations) that poorly written drivers were the culprit.

After digging around the install, I discovered a trove of improperly installed software and a few potential troublemakers. So I removed Command Antivirus, removed a few stray entries from WIN.INI and SYSTEM.INI, cleaned up AUTOEXEC.BAT and CONFIG.SYS, and was able to stabilise the machine at the desktop.

That was what it looked like after I cleaned up some of the mess, including a tiled wallpaper and mis-adjusted icon spacing. The user was likely Chinese-speaking, given the installation of NJWIN; there was a Virtual GameBoy emulator as well, plus RealPlayer amongst other software. But that’s still left to explore …

Instead, I wanted to try the slot-loading CD-ROM. Would it work? I dug out an old DOS CD-ROM driver floppy that used VIDE-CDD.SYS as the driver and had a DTR.EXE test program to check the transfer rate.

The good news is that the drive works … the bad news is the PIO transfer mode is very limiting …

Setting the IDE speed to Fastest and running DTR.EXE under Windows gave a good speed boost but still was unable to take full advantage of the 32X rated drive. It seems the CMD0640 IDE controller really is not particularly great … especially with a limited amount of CPU.

There was also another quirk to the machine – the 3.5″ floppy is identified by Windows and DOS as a 5.25″ drive regardless of the BIOS setting. I wonder if this is a quirk of the low CMOS battery, a BIOS bug or something else. I remember my old AMIBIOS 486 machine had this sort of quirk at one stage, which required me to install the drive as B: instead.

Moving into the Modern Era – Virtualisation

While I now know that I have a working computer, it is a bit of a pain to try and demonstrate it using the physical hardware, especially since it outputs VGA analog video. So I decided to try and virtualise it.

My first stop was VMware Workstation, as I’m already using that for other VMs. Creating a new basic VM container and disk image, I wrote the contents of the drive straight into the image and immediately ran into an issue – the virtualised machine would not boot, failing with an IOS error. As it turns out, this was because the processor is too fast for Windows 95’s boot-time timing code – this patch seemed to cure it just fine.

With that cured, the next problem was that the graphics were very much screwed up. I installed the VMware Tools, including the VMware SVGA II driver … but …

… even with the driver installed, it refuses to work. Part of the reason may well be that the unit was running Windows 95a, an earlier version with which the bundled SVGA driver is not compatible. Unfortunately, 16-colour 640×480 did not really suffice for my liking.

The next option was VirtualBox, but since I was already using VMware, I didn’t want to cause any conflicts, so I gave that a pass.

Instead, I tried DOSBox as I was aware that Windows 95 could be booted on it and it has decent emulation of SoundBlaster audio amongst other things.

mount d: d:\emulation
imgmount c d:\imagefile.img -size 512,63,64,969
boot -l c

After I figured out what commands I needed to use to mount the image (shown above) and boot-strap from it, I thought I was home free.

But again, funny business was afoot. Ultimately, I could only stabilise it by setting the CPU emulation to fixed-cycles mode with a low value of about 20,000–40,000 cycles. Then I had to boot into Safe Mode, fire up regedit and delete HKEY_LOCAL_MACHINE\Enum to force all the hardware to re-enumerate. This allowed the DOSBox emulation to work properly, but then I suffered graphics corruption. The fix was to change from the auto-detected S3 driver to a different one. Once that was done, I was finally “home free”.
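Rather than setting the cycles interactively every session, the fixed-cycles workaround can be made permanent in the DOSBox configuration file – a sketch, using a value in the middle of the range that proved stable for me:

```
[cpu]
cycles=fixed 30000
```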

The Fruits of our Labour: A Time Capsule

I’ve tried my best to remove any identifying information from the screenshots below. I do know, with a high level of certainty, who this machine belonged to and who used it, and this was consistent with the location where the computer was found.

Acrobat Reader 2.1 and 4.0

I don’t ever remember having an Acrobat Reader 2.1 install, so it was interesting to see its splash screen and online help in the form of a PDF file.

Acrobat Reader 4.0 was more traditional with its help PDF, emulating a regular printed manual.

Microsoft Office 97

As expected from a machine at a university, the full Office 97 Professional suite was installed. There were thesis chapters, journal papers, e-mail messages, resumes and other documents strewn about. Below is a sample of one of them –

I wonder if the person became the person they thought they would be after all these years.


RealPlayer

RealPlayer 5.0 was installed – apparently an upgrade is available. Rather unfortunate what happened to them.

Fraunhofer IIS WinPlay

If you wanted to play .mp3 files, this was one of the official options, from the creators of the codec themselves. That being said, with Winamp also installed on this computer, it seems more likely that was used instead.

XingMPEG Player

Xing was a fairly active player when it came to MPEG players and encoders. Not famed for high quality, they tended to focus on speed – which is why I have used their software in the past as well.

Ulead iPhoto Express

I was surprised as I thought that Apple had the iPhoto name first … but apparently not.

RTA Demo Drivers Knowledge Test

This software was the 1990 version and was commonly circulated. I suspect it was originally swiped from a library computer onto some floppies and taken home – in the old days, we would have to book time at a library for the chance to practise on their computer.

Various Flash Games

Very common around this period were small Flash games that could be played inside a browser online, or downloaded as a self-contained .exe file about the size of a floppy disk for local execution. These kinds of novelty programs were very widely distributed on floppy between friends, along with joke/gag programs and sometimes trojans. These ones ran just fine, even under Windows 10.

Calculator App?

I had a thought – what happens if you open an old version of the Calculator program from this install under modern Windows? Well, it runs, and its About box is styled as Windows 10, which is pretty odd but not entirely incorrect. Other system programs launch their modern counterparts (e.g. paintbrush.exe opens up the modern version of Paint).

Specialised Hardware

There was also FORTRAN 90 and BASIC code on the system, which appears to have been used to control some data-acquisition equipment at some stage.

ICQ Chat Logs

Whenever you dispose of a computer, it’s probably a good idea to wipe it. This machine had a lot of software incorrectly uninstalled by simply deleting the folder in Program Files. Aside from breaking the system in odd ways, that also doesn’t secure the data – so I was able to determine the three ICQ accounts used and go through their logs.

Most of it is mundane, but it’s interesting to see that the “free porn” trick seems to go back a long way.

Browser Caches

The computer had an installation of Netscape Communicator, which was also incorrectly uninstalled. However, a file-signature search was able to uncover some rather nostalgic parts of the old internet, albeit missing a lot of the images.

Who can forget the iconic simplicity of the old Hotmail webmail interface?

Netscape also had its own homepage, loaded by default on every start-up as your “introduction” to the internet. There doesn’t seem to be much of a war for the homepage these days, as there was in the past.

The Yahoo search engine at the time was probably the most popular search engine in Australia – along with a search for western Sydney university before it even existed. Now that’s spooky.

The Trading Post online – apparently a very successful conversion of paper classifieds into online classifieds. We don’t hear much about them since Gumtree entered the market, but they’re still around today.

Finally, the NewSouth Student gateway, predecessor to the myUNSW system. I like the part where it says “To obtain your Semester 2 2000 results you will need to use NewSouth Student Online or VoiceMark (9385 1999).” I wonder what that would have sounded like.

E-mail Inboxes

Further “danger” lies in the e-mail inbox files. It’s rather interesting to see that the tradition of sending “broadcast” e-mails of good news goes back a long way …

Another thing that seems to go back a long way is the whole “we’ll pay you money to spy on you for market research” trick – this one from e-Trends. I wonder what happened to them?


It was a struggle of sorts but most of the hardware in the box was just fine. The hard disk was very much on its last legs, but with persistence, I was rewarded with a full image with no lost data. The clone was bootable, once BIOS barriers were understood, bloatware removed and software misconfigurations corrected.

Virtualising the image proved to be a slight challenge as well, with VMWare being resistant to proper graphics emulation on the Windows 95a install. It turned out that DOSBox was the best alternative, even though it had to be slowed down to ensure correct behaviour.

A stroll through the install was a time capsule of mid-90s/early-2000s memories, with the machine last used in 2003. Old software brought back interesting nostalgia. Browser caches, e-mail inboxes and ICQ chat logs were also recovered, showing that things haven’t really changed all that much when it comes to the types of messages people send. It was interesting to see some of the old websites, even missing their images, as some of these sorts of things aren’t preserved by the Wayback Machine.

There may well be another part to this story in the future – when I try to fix the remaining niggles with the floppy, IDE controller secondary port and perhaps upgrade it to run some sort of demonstration. But that probably will still be a while.


Tech Flashback: Colourful Memories – Floppies & RAM

In the early to mid 90s, computing was almost universally beige or some standard colour. The case, drives, monitors, speakers, keyboards, mice. Beige was the colour. Even removable media didn’t dare to stand out too much, coming in a set of mostly “standard” colours.

Some dared to be a little different, like the dark grey of the Verbatim Teflon 3.5″ disks, but others remained very much “to code”. Double density disks were commonly this colour of blue (or sometimes white). High density disks were usually black, but sometimes beige. Even 3.5″ floppies tended to be nearly universally black or grey jacketed, with a few exceptions.

Aside from the more modern disk featuring a plastic shutter, every disk was quite conformist. Other brands took this to the extreme, like Imation embracing an “all-black” policy through several generations of 3.5″ high density floppies.

It’s 2019, so why am I bringing this up? Because around the late 90s, computers started to get a little less boring and today, we’re stuck with the consequences. Part of the reason was Apple’s iMac G3 which came in a swathe of colours, leading to manufacturers starting to get a little creative. Another contributor was the case-modding culture. So I thought, why not start off the year with some colourful memories I’ve collected? (Pardon the pun!)

Crystal Disks

As I mentioned above, floppy disks were boring and most of them looked almost the same, save for maybe a little printing around the shutter flap of a 3.5″ disk or a branding label and jacket on a 5.25″ disk. Some manufacturers were daring enough to provide coloured labels to help sort your disks out.

Crystal disks went one step further – they coloured the disks themselves, changing the shell to a translucent coloured plastic allowing you to see inside the floppy disk. While the mechanism itself was not particularly surprising, these disks came in ten packs with two disks of each colour. Unfortunately, for some reason, I can’t find any of my blue disks.

These were probably the first computer accessories I owned that were not boring beige (or some other unimaginative colour), and I was very fond of the disks purely for their novelty value. Maybe they were inspired by the transparent Game Boy or the iMac G3 itself (although it had no floppy drive) – I know at least a few people who lament that we don’t have more transparent electronics, but then again, I don’t think there’s too much to see anyway.

A few coloured disks, however, doesn’t really count for much as they’re very much low cost items. But how about making something colourful for inside your PC?

Kingmax Color DDR RAM

Enter the first piece of colourful computing hardware that I ever owned: a Kingmax 512MB DDR PC3200 memory stick. I know that EDGE had a characteristic blue stripe along the edge of some of their SDRAM sticks before this, but these were the first that I owned.

Surprisingly, I still have the box for the stick, as I remember venturing down to my local computer store looking to buy some RAM. As it turns out, this one happened to be the cheapest unit that was in stock, so I took it. The colour itself did not sway my decision – but it did make it memorable.

The module from Kingmax claims to have the World #1 Color Packaging Technology, which “overturns tradition” and helps set a “new milestone by shaking off the stereotypical image of [a] monotonous semiconductor industry.” Oh how I love the superlatives – but I don’t remember being able to choose a colour, so there’s that.

As with most memory products, a lifetime warranty was offered – this module is still working today, so I haven’t had to claim it yet, but I suspect they won’t have any of these left in stock.

My box came complete with the user’s guide, which had a number of interesting catalogue pages.

Of note is that Kingmax had been producing modules with rather unique BGA packaging for the time. Now that all modern RAM chips are BGA, it doesn’t seem like a big deal, but BGA was relatively uncommon at the time with leaded chips being the norm. As a result, the BGA modules were smaller and shorter than the regular modules, allowing “better airflow” and lower profile. According to the catalog, they had a number of relatively high performance DDR offerings, but they also offered SDRAM in BGA packages as well (I’m sure I’ve come across these once before too).

Breaking with tradition, the RAM is a pinkish-purple colour, sitting on a gold-coloured PCB. Very unique for the time and not easily achieved as well. But why go to all this trouble when the RAM would just be sitting inside a beige (or for the more lucky chasing the new alternatives of silver or black) case?

That’s because the mid 2000’s was the beginning of a new paradigm of case modifications. For the hard-core, there were CCFL lamps which you could install in the case, certain cases even had side window kits available and water cooling was just starting to become a thing. As a result, there was a chance that you could actually show these sticks off.

Of course, if you didn’t opt for the Colour RAM, there was another option – how about a red PCB instead of the regular green? Yep. They made those too.


Before you can say “ay caramba”, I was in need of more RAM for another build … but this time, oddly enough, aesthetics did play a part in my decision.

This time, I purchased GeIL RAM – a 2GB kit which I still have in one of my machines. What is in the box is a pair of “unmatched” 256MB PC3200 modules I also bought around the same time.

Just like the module above, the packaging has windows to show the attractive modules inside.

For convenience, the serials are visible through the package which is a nice touch.

What really attracted me to the GeIL modules was the smooth, almost sexy electric-blue aluminium heat spreaders, finished off with a logo that looks like it would belong on the front of a piece of hi-fi equipment.

Unlike some other modules, they paid attention to the top as well – the part of the module that would be viewed edge-on through a windowed chassis – providing a nice finish. There was a lot to like about the looks, and the performance was not bad either, although in reality the heatspreaders probably didn’t do much for it.

That is, unlike these Hynix DDR2 fully buffered modules where the heat-spreaders are necessary due to the heat buildup …

… or like those on Rambus modules, which also require them for similar reasons. This was not the first time I had encountered modules with heat-spreaders – I did have an SDRAM pair with heatspreaders, but they looked somewhat tacky …


Before long, it seemed like everyone was cashing in on the whole “pimp my PC” culture, noting that gamers, overclockers and case-modders were the market segment most likely to spend significant amounts on hardware. Corsair’s XMS series was one of the favourites amongst overclockers.

They too took the heatspreader approach, using it to advertise their brand and series very prominently. The colour and clip were, however, a little industrial-looking for my liking.

The seam along the edge was somewhat average as well. But if you didn’t like silver … other modules in the series came in black too.

Kingston HyperX DDR RAM

Kingston also took part in the action, releasing their HyperX range targeting this segment. They had a blue very similar to GeIL’s, but the finish and branding were a little “loud” in comparison.

When viewed from the edge, it’s clear that this is merely a few plates of aluminium strapped onto what otherwise looks to be a “regular” module.

I suspect very few manufacturers took the Kingmax approach of colouring the IC packages themselves because that would be a rather difficult manufacturing process that would increase costs significantly, whereas just slapping on a few pieces of metal would be cheap.


Computers which were initially boring and beige in the early-to-mid 90s underwent a facelift in the 2000s, inspired by the translucent colourful designs of the Apple iMac G3 but also the growth in case modding, gaming and overclocking. This led to the average PC starting to gain more variety – cases started eschewing beige for silver and black, PCBs started coming in numerous colours, RAM grew heatspreaders, CCFL case lighting became available, watercooling went somewhat mainstream and even the WD Velociraptor hard disk grew a perspex window as well.

Ultimately, this has led to the full-on RGB LED assault we see today. I was astounded to find my mid-range motherboard and stock CPU cooler both come with RGB LEDs as standard. Increasingly, peripherals such as keyboards and mice have RGB LEDs integrated as well, sometimes individually addressable. Even my RAM has red LEDs on it, with heatspreaders on RAM coming in a variety of colours, fin shapes and sizes. Even case fans are RGB-LED enabled, with many cases having full tempered-glass side windows as standard. The computer has evolved from just a machine into something which people can (and do) show off – just as car enthusiasts pop the hood, PC enthusiasts take a peek through the window. Sleeved cables, modular power supplies and cable-management solutions have become an integral part of this, along with certain sorts of watercooling piping, fixtures, coolants and lighting to show it off at its best.

For better or for worse, it’s something that we’re stuck with. From my perspective, while it’s not a crime to be aesthetically appealing, the LED craze has become a bit of an issue with devices now becoming so bright that it’s almost impossible to sleep in the same room as the computer. The other thing to think of is the energy cost – I don’t need my computer to be a Christmas tree – I need my computer to compute!

So maybe it’s time we took a breather … and instead turned our RGB LEDs off, or stopped buying them altogether.


WebSDRs: The Urban SWL’s Dream, a DXers’ Tool, & the End of DXpeditions?

Recently, I have been getting back into some shortwave listening and utility monitoring, but not quite in the conventional sense. I still have my own local conventional radio, SDR and loop antenna. But now, I also have the power of the internet (through LTE since I’ve got no NBN service yet) … which means that I could be listening from your place as well.

Can I Borrow Your Radio and Antenna? Please?

If you’re interested in the hobby of radio like I am, it can be quite devastating to live in a large city. Depending on your geographical location, you can be cursed with a number of difficulties including:

  • Not having a backyard or access to a rooftop to be able to set up an antenna at a decent elevation, or having council restrictions/neighbours which prevent you from doing so.
  • Being on the leeward facing side of a hill which obstructs your line of sight into the city and its main transmission towers.
  • Being close to neighbours who love to use Ethernet-over-power networking, off-brand LED globes, arc welders and solar garden lights that spew noise all over the bands and raise the noise floor.
  • Being close to high-voltage transmission lines with dirty insulators resulting in partial discharges that also create noise.
  • Having too much signal from nearby transmitters that cause intermodulation “images” in your receiver or require expensive filters to effectively handle, making DXing nearly impossible.
  • Unseasonal stormy weather which can be a risk to your gear just as you had some free time to devote to listening.

I’ve been affected by all of the above at various stages, which has been rather unfortunate. However, occasionally an opportunity would come up that I would go to another location – say for a holiday, to visit a friend for a few days, etc. This was the ideal situation for me to bring along a backpack of radio gear – a mini DXpedition in a way to explore the radio landscape somewhere else and hear all that I have been missing.

Unfortunately, such DXpeditions were rather limiting – the opportunity only arose occasionally, the need to be mobile restricted the quality of the gear I could bring (and consequently the signals I could capture), and there was occasionally even an element of risk – either of my gear being damaged or lost in transit, or of my being accused of spying or of illegally eavesdropping on signals. The laws in other countries are not always as liberal or as straightforward as the ones at home.

As a result, it actually makes more sense to “borrow” someone else’s whole setup instead, as it’s probably already been developed over a period of time and optimised with better equipment. With the internet offering greater bandwidth, greater reliability and lower costs, the concept of the remote base started to appear. Those fortunate enough could set up a remotely controlled receiver with a computer as a server, connected to the internet, so they could use it from their home in the city. Some remote bases were rather elaborate, with transmit capabilities as well, but most of them were very much private – e.g. set up by someone for their own use, or belonging to a club for members to use on a scheduled basis. Not being a member of any clubs, this was not something I got into. Of course, this was possible prior to the internet age using a phone patch repeater, but that was even more niche and expensive.

What I did get into was GlobalTuners. This was a receive-only operation, originally using conventional receivers which were remotely controlled, offering just one channel at a time with some receivers public and “free” to use, others only available to subscribers. The “free” service really piqued my interest, so I gave it a try.

With GlobalTuners, I was able to tune in via different locations across the world to signals I had no chance of hearing at home. Local ATIS, broadcast radio, VOLMETs and even emergency services were available for the taking, thanks to the generosity of the operators willing to offer their rigs up for use.

Unfortunately, there were a number of issues. Because the conventional receiver could only listen to one channel at a time, long-term monitoring was not possible, with time limits set in place and a policy requiring users to “share” the receiver and hand over control. Increasingly, the better receivers either became premium-only or eventually went offline, potentially worn out by the heavy load of constant band/frequency switching, and owners were reluctant to replace their rigs only to have them meet the same fate.

While I could see the potential in GlobalTuners, it didn’t really fit my style of listening (which verges on monitoring). There are some sites dedicated to monitoring certain services, but that wasn’t quite as convenient and streamlined as grabbing a multi-band radio and whipping it around the bands. You’d really only get to listen to what other people wanted to listen to.

Something better was required … and eventually, it came in the form of WebSDR.

Software Defined Radio – Many Receivers in One

While conventional radios still ruled the roost when I started my hobby and I still do use a number of them, a revolution was waiting in the wings. Enabled by increasing computing power and memory capacity, combined with reducing costs, the software defined radio seemingly unseated the conventional superheterodyne receiver in the space of no more than a decade.

In fact, I went all-in on SDRs from as early as 2010, purchasing a Winradio G31DDC Excalibur SDR which offered 2 MHz DDC bandwidth in a direct-conversion design. I followed that up with the FunCube Dongle in 2012, then offering about 80 kHz of real bandwidth but operating at VHF/UHF frequencies. Around the same time, news of the I/Q sampling mode of the RTL2832U DVB-T tuner dongles became widespread, leading to the plethora of “$11 SDR” articles that really began to democratise access to radio receivers and increase interest in the whole SDR concept. Since then, I’ve also owned a BladeRF x40 with an HF transverter board for even more bandwidth (~28 MHz).

The concept of an SDR is fairly simple – minimise the analog componentry and move as much of the work into the digital signal processing domain as possible. As a result, most SDRs have the bare minimum in terms of signal amplification and bandwidth filtering before digitising the signal using a high-rate ADC, converting it into the digital domain. From there, the reception and demodulation chain can be emulated in software.

With my first SDR, the technology was still in its “infancy” from a consumer’s perspective, requiring absolutely all the processing power of my most powerful dual-core computer of the time to keep up. However, the benefits were immediately apparent. With a 2 MHz DDC bandwidth, it was now possible to record and replay 2 MHz-wide chunks of spectrum, allowing a whole band to be monitored over a long time. It was possible to have up to three parallel demodulators running to receive multiple signals within this same span for comparison. It was possible to jump around a recording, and the waterfall-style spectrum display made listening and tuning into radio a visual endeavour as well. Things could be achieved that could not have been readily or affordably done with conventional receivers. The downsides were that the radio itself was sometimes not as sensitive, the demodulation quality and stability varied depending on the computer, and the computer’s own RF noise could be a problem, especially with monitor leads radiating quite a bit of EMI.

But since then, computing power has increased and more efficient implementation of algorithms with careful understanding of the underlying hardware has led to the existence of online SDR-based radios, addressing the issue of multiple-access.


The first real effort I would consider a quantum leap was the University of Twente’s WebSDR. Initially set up on Christmas Eve of 2007 at the radio club of the University of Twente and maintained by PA3FWM, it had an interruption of over a year and a half from November 2010 through to July 2012. Unfortunately for me, I did not become aware of it until much later.

The WebSDR at UTwente covers the full spectrum from 0–29.160 MHz and often has several hundred listeners using it at any one time. Owing to running on relatively beefy hardware with GPU acceleration, it is possible to have over 500 users listening to different channels at the same time.

Initially based on HTML and Java applets, WebSDR has now evolved to HTML5, allowing operation in most browsers. The interface has slowly gained a number of extra abilities, but the grey background with purple-and-white waterfall palette remains characteristic of WebSDRs in general. Users can zoom in and out of the band and look for the next signal of interest, all while listening. There are chatbox features and a logbook, along with the ability to see where other users are tuned for some inspiration.

While the interface is rather clunky, it works quite well, and the reception in Europe was absolutely mind-blowing. As someone living in a high-noise-floor environment, in a country targeted by few shortwave broadcasters and with relatively few radio amateurs, I could never have imagined the band being as crowded and bursting with life as I’ve seen it on WebSDR.

When listening with the UTwente WebSDR, it is almost like “being there in Europe”. Due to the nature of SDR technology, you can tune into anything without disturbing anyone else. With the waterfall spectrum ability, you can see the signals and tune around in comfort with great efficiency. If you want more, you can open another window and record something as well. Most of the demodulation options you’d need are provided – the regular modes, but also variable filter bandwidths and notching of interference. While it’s not quite as flexible as having your own radio with knobs, it’s not far from it. I was absolutely enamoured with WebSDR, being able to hear signals I never knew existed. There were some caveats, however, which I will get into later.

Not unexpectedly, this sparked interest in others to start running their own WebSDRs. As a result, there are a number of other WebSDRs running, though mostly with limited-bandwidth Softrock-style receivers, making them of more limited interest.


However, WebSDR is not the only web-based SDR project out there. I recently became aware of OpenWebRX, an open-source SDR web server supporting a number of different SDR receivers, with the possibility of integrated digital-mode decoding and relatively modest DSP resource requirements. By hosting OpenWebRX, it becomes possible to share your SDR with a number of users at the same time – even with absolute strangers on the internet. As of this time, there are over 350 receivers online.


If you look on the list of OpenWebRX receivers, a majority of them are KiwiSDRs. The KiwiSDR is actually a hardware product which is a wide-band SDR and GPS cape for the BeagleBone Black that creates an OpenWebRX SDR server in a very compact, low-powered package. It uses FPGA-based acceleration to create something similar to WebSDR but on a smaller scale of anywhere from about 3 to 8 simultaneous users (depending on bandwidth and with or without waterfall operating mode). Considering the hardware used, the result is quite impressive.

It seems the popularity of the KiwiSDR stems from its reasonable cost and its performance capabilities. It’s not every day you can afford a radio that can GPS-calibrate its internal oscillator, for one, or serve radio to the world with integrated decoding of some digital modes – all from something smaller than an average paperback novel.

As a result, I’d have to salute the KiwiSDR guys for creating a product that individuals can afford to buy and use to enable access to their local airwaves over the internet. For a number of years, I lamented that there was only one such wideband SDR – now, with the advent of the KiwiSDR, there are hundreds scattered across the world. Of course, there are some caveats, but the existence of so many generous operators out there has really enhanced my radio monitoring capabilities.

Uses, Tips and Common Frustrations

So, what can a WebSDR be used for? A lot of things:

  • Listening to shortwave/AM broadcasts which you can’t receive at home for some reason – this could be weak signals, lack of equipment, through to censorship.
  • Monitoring broadcasts from different locations to check whether a signal is on the air or not.
  • Monitoring the spectrum of a band while you operate from home using conventional equipment.
  • Verifying whether your transmissions are reaching a target area.
  • Listening in to radio amateur communications to better understand shortwave propagation behaviour, or decoding various exotic digital modes.
  • Receiving utility station broadcasts such as weather faxes or RTTY transmissions.
  • Receiving the same signal from multiple places at once.
  • Getting into shortwave radio at nearly no cost!

All you need is a semi-decent internet connection, a computer and a web browser, making the barrier of entry extremely low to the point that even inexperienced members of the public can start to use it.

You can even listen on your phone – my informal quick testing showed that the interface was usable in Chrome Mobile, although the dragging and pinching did not quite behave as I expected. Still, for listening to shortwave “on-the-go”, it’s quite a nifty outcome. The power of a 20m longwire antenna perched up high somewhere in the world from your pocket? It’s possible!

Using WebSDR/OpenWebRX receivers does come with some caveats which are worth understanding. Due to limitations in hardware and internet bandwidth, most of these receivers can only offer relatively limited listening bandwidths of approximately 12 kHz, or perhaps 20 kHz. It would not be possible to receive or analyse wider signals, but even 12 kHz is enough to decode DRM audio if you configure it appropriately – namely I/Q mode reception, with positive split mode in Dream.

Using this approach, I was successfully able to receive a Chinese service from a KiwiSDR.

Recordings are performed in the users’ browser, so it pays to ensure that you have a good internet connection and sufficient RAM, as any interruptions in the stream will be in the recording as well.
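Since a stream interruption ends up as a stretch of digital silence baked into the saved file, it is possible to sanity-check a recording afterwards. The sketch below is purely my own illustration (the function name is not from any WebSDR tool) and assumes the recording has been decoded into a list of 16-bit PCM sample values; it scans for suspiciously long runs of zero samples:

```python
# Sketch: scan mono PCM samples for long runs of pure digital silence,
# which usually indicate stream dropouts captured in a browser-side
# WebSDR recording. My own illustration, not part of any WebSDR tool.

def find_dropouts(samples, rate, min_gap_ms=100):
    """Return (start_sec, length_sec) tuples for runs of zero-valued
    samples lasting at least min_gap_ms milliseconds."""
    min_run = int(rate * min_gap_ms / 1000)
    gaps, run_start = [], None
    for i, s in enumerate(samples):
        if s == 0:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_run:
                gaps.append((run_start / rate, (i - run_start) / rate))
            run_start = None
    # Handle a dropout running to the end of the recording
    if run_start is not None and len(samples) - run_start >= min_run:
        gaps.append((run_start / rate, (len(samples) - run_start) / rate))
    return gaps

if __name__ == "__main__":
    import math
    rate = 8000
    tone = [int(10000 * math.sin(2 * math.pi * 440 * t / rate)) for t in range(rate)]
    recording = tone + [0] * (rate // 2) + tone  # half-second dropout in the middle
    print(find_dropouts(recording, rate))
```

Genuine silence on a quiet frequency still carries noise-floor samples, so runs of exact zeros are a reasonable (if crude) tell-tale for a broken stream.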

The internet itself is not optimised for real-time data, with congestion at any point along the path between you and the server potentially causing packet loss and/or delay, which can result in breaks in the audio. When decoding certain forms of transmission, such breaks can cause a loss of synchronisation and hence loss of the payload. As a result, OpenWebRX also has “plug-in extension” decoders which help by doing the decoding server-side and passing the results along, avoiding this issue. However, there will be times when you might want to run local decoders – I cover this in the following section.

Despite this, it makes sense to optimise your own network to ensure you have as good a connection as possible for real-time operation. Namely: avoid downloading files while using SDRs, perhaps put QoS rate-limiting measures in place, and use an Ethernet cable rather than Wi-Fi (or at least ensure your modem has a good signal) to make reception more stable. Failing this, it can help to choose a closer SDR, as the hops the traffic takes along the internet will vary depending on the destination.
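As a rough aid to choosing a closer receiver, you can compare raw TCP connect times as a crude proxy for network distance. The Python sketch below is my own (not part of any WebSDR software); the hostnames are placeholders for receivers you actually use, and 8073 is the usual KiwiSDR default port.

```python
# Sketch: rank candidate receivers by TCP connect time, a rough proxy
# for network distance. Hostnames below are placeholders, not real SDRs.
import socket
import time

def tcp_connect_time(host, port=8073, timeout=2.0):
    """Seconds to complete a TCP handshake to host:port, or None on
    failure. Port 8073 is the usual KiwiSDR default."""
    try:
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

if __name__ == "__main__":
    candidates = ["sdr.example.org", "kiwi.example.net"]  # placeholders
    timed = [(tcp_connect_time(h), h) for h in candidates]
    for rtt, host in sorted(t for t in timed if t[0] is not None):
        print(f"{host}: {rtt * 1000:.0f} ms")
```

A single handshake time is not a full latency picture (it ignores jitter and loss), but it is usually enough to separate a nearby receiver from one on the other side of the planet.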

Sometimes, the bandwidth problem may not simply be at your end, so reducing bandwidth usage may be a good idea. If you are simply monitoring a service, you can disable the waterfall by setting its rate to “off” to reduce bandwidth consumption. This will also help those who might be on a limited quota (like myself). The quick band-jump feature and the page-up/page-down controls allow you to navigate the spectrum more rapidly – a useful hint. The extension drop-down houses the server-side decoding extension features, if you wish to use them.

Another caveat is that, by default, the audio is compressed to reduce bandwidth requirements. While this is a sensible default which allows for better bandwidth utilisation and reduces the possibility of breaks in the audio, the compression is lossy and does degrade the signal-to-noise ratio, which can be important for some modes. It can be disabled using the “Comp” button, though results vary (sometimes you will have more breaks in the audio as a result of the extra bandwidth required). There is also a Noise Blanker (NB) feature, useful for reception in the presence of impulse noise, and also when the KiwiSDR has a case of slight input overload (denoted by a red OV indicator in the S-meter bar).

So how can we get the most from WebSDR services? The first step might be to pay attention to the audio routing.


In order to use local decoders, it is necessary to install some form of loop-back audio cable. Virtual cables introduce no additional noise or losses, but can suffer buffer overflow/underrun problems due to their sample-accurate mode of operation. In my case, I have one audio cable for decoding from WebSDRs (one at a time), one for my conventional receiver, and one for recordings only, which allows all of these to operate independently and simultaneously.

By setting my default audio device to the first virtual cable, the browser-based WebSDRs play into the cable linked to my decoders. So that I can continue to use the computer and watch videos, all of the applications which support manually selecting the audio output device output directly to my soundcard instead, thus not interfering with the WebSDR decoding.

In some cases, it is necessary to monitor the WebSDR audio output, which can be done using software playthrough by configuring this inside the audio control panel.

Leaving this switched on, it is then possible to decide using the mixer whether the playthrough is muted and to which volume it is mixed into the other audio going to my speakers or headphones.

As many browsers now have a per-tab audio muting feature, it is possible to open multiple WebSDR receivers at once, muting all of them except the one you’re interested in listening to or decoding. This way, you can examine the waterfall spectrum from other receivers or at different frequencies quasi-simultaneously to check whether the signal is better or worse at another location. You can also engage the recording feature on the “silenced” tabs and then decode the .WAV file later. Avoid doing this for extended periods, however, as you could be blocking other people from making use of a particular receiver by consuming its limited resources. Sometimes, especially on the UTwente WebSDR, you will find decoded images have an inconsistent “wobble” due to the resampling algorithm – if you make a recording using the page and decode that recording instead, the wobble disappears, which is good to know.

Sometimes you might get strange results including very frequent breaks in decoding due to sample rate mismatch, so it pays to check the format the audio cable is using. Most modern cards use 48kHz as the default rate, so it’s often good to standardise on a single rate across all devices.

Also ensure your decoder is configured to use this same rate, which avoids resampling – resampling can further degrade the signal and be an occasional source of overflows/underruns. Of course, if I were going all-out to have more decoder chains, it could well be worthwhile running multiple virtual machines, each with its own audio routing, instead.
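To illustrate what resampling actually does (and why a mismatch such as 44.1 kHz audio feeding a decoder expecting 48 kHz causes trouble), here is a toy linear-interpolation resampler – purely my own sketch, as real resamplers use proper polyphase filtering rather than this crude interpolation:

```python
# Sketch: minimal linear-interpolation resampler, illustrating rate
# conversion (e.g. a 44.1 kHz virtual cable feeding a 48 kHz decoder).
# A toy only -- real decoders use proper polyphase filters.

def resample(samples, src_rate, dst_rate):
    """Linearly interpolate `samples` from src_rate to dst_rate."""
    if src_rate == dst_rate or not samples:
        return list(samples)
    out_len = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(out_len):
        pos = i * src_rate / dst_rate      # fractional position in source
        j = int(pos)
        frac = pos - j
        nxt = samples[min(j + 1, len(samples) - 1)]
        out.append(samples[j] * (1 - frac) + nxt * frac)
    return out

if __name__ == "__main__":
    src = [0.0, 1.0, 0.0, -1.0] * 100      # crude test signal
    up = resample(src, 44100, 48000)
    print(len(src), "->", len(up))          # 400 -> 435
```

The interpolation smears each sample across its neighbours, which is exactly the kind of quality loss (on top of any timing drift) that makes it preferable to standardise every device and decoder on a single rate in the first place.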

To keep settings and decoded results separate, use a separate Fldigi settings store by creating a shortcut with the profile storage location appended to the command line.

Another thing to be aware of is that KiwiSDRs have various time-out features which can be configured by the operator to ensure fairer use of resources and avoid monopolisation. The first is an inactivity time-out, which kicks in if you’ve left yourself “parked” on a frequency for an extended period, stopping reception. You can avoid this by tuning around every so often, or recover by refreshing the page once you hit the error.

The second one is the daily time limit, which is the maximum amount of reception time allowed per IP address in a day. The remaining time of your daily limit is shown in orange under the Users tab next to the receiver you are using.

Being IP-address based, this could potentially result in less-than-expected listening time if behind a corporate grade NAT or when sharing a single internet connection with others. Whatever you do, resist the temptation to change your IP address (e.g. by VPN or by grabbing a new dynamic address) – these limits are put in by the operators to improve fairness of access and they won’t be pleased if they find someone is continually monopolizing their receiver! Also, think about the rest of the community who might want to use it, even for a quick “spot” check of a signal.

Unfortunately, for the moment, hitting either one of these time-outs during a recording will cause you to lose your recording, so take special care to keep an eye on the timers and familiarise yourself with any inactivity timeout on the SDR by trial and error.

The final sort of time-out you might encounter is that of a network communication issue, which often results in a frozen OpenWebRX interface that does nothing until you refresh the page. Unfortunately, this is a fact of “internet” life – connections can break at any point between you and the server, so it pays to ensure that your network at home is as “good” as it can be to eliminate any causes of connection breakage (e.g. not using Wi-Fi). Often it can help to try another nearby receiver – the issue could be someone’s modem/link to their ISP or even the transits between the ISP and your ISP.

You might find some of the KiwiSDRs are reached via a relay URL rather than the owner’s own address. These stations are ones which, for some reason or another, could not offer a direct port forward from their public IP address to their KiwiSDR and instead rely on a relay service provided by KiwiSDR. As a result, radios operating this way may not be as smooth or reliable, as the traffic normally has to pass through more links to reach the KiwiSDR VPS (Linode at the moment, if I’m not mistaken) before travelling on to the listener. This is slightly inefficient and consumes additional resources, but it is the only sure-fire way of breaking free of multiple NATs, carrier-grade NAT or ISP firewalls. It is currently offered for free, though that may not always be the case.

It’s important to remember that the SDRs, especially KiwiSDRs, are a limited resource. Depending on their configuration, they can host either 3 users (20 kHz mode), 4 users (classic) or 8 users (maximum: 2 waterfall, 6 audio-only). As a result, you might encounter an error message when trying to access a busy receiver – the only recourse is to wait it out until someone else relinquishes their slot.

Sometimes, you might find a public SDR has suddenly gone into a password-protected mode. If you see this, it’s best to wait and try again in a few hours, as it’s probably a sign that the Kiwi’s owner is on the air and needs their radio. You may also find some radios go “deaf” occasionally, as if the antenna is disconnected – this is likely because the owner is transmitting and has switched out the antenna to protect the KiwiSDR from damage.

Finally, very occasionally, the radios can go down for a software update which often means new features. Often this doesn’t take more than ten or so minutes before it is back online again.

If you’re on a KiwiSDR that is configured for eight channels, only the first two will have the wide-band waterfall feature with the remaining six only having the audio FFT.

The End of an Era for the DXpedition?

I realised that, with the great power afforded by WebSDRs in being able to listen from practically anywhere around the world, our prayers have been answered. Unfortunately, this also makes struggling to pull a signal out of the noise at home in the name of DX a little less exciting, when you can just log on somewhere and find the signal running S9+20 with absolutely no challenge at all.

The whole concept of a DXpedition in the way of packing bags and running some portable equipment at a different place just doesn’t seem so appealing anymore, if catching signals was the end goal. After all, some of these WebSDRs have exceptionally good antennas that catch really strong signals. While there is an attraction to doing it yourself, sometimes the effort isn’t worth the result.

As a result, I’d have to say that the advent of easy access to remote receivers has very much put a dampener on the whole idea of DXpeditions, and even potentially on DX reception itself. It also opens up a potential loophole for those claiming DX reception in contest situations. Less honest people may be tempted to log onto a WebSDR to hear those weak signals better and claim a good exchange, despite having no copy on their own equipment at home. Given these resources, it’s important to use them responsibly and accurately declare how signals were received, to prevent ambiguity and avoid making outstanding but unjustified claims of DX reception.

I suppose WebSDRs can also be a fascinating source of information for DXers – many of them have autorun WSPR receivers, constantly providing a data source for understanding propagation around the clock.

What is truly amazing is the global nature of the receivers available. You can get a sense of it just by browsing the receiver listings on a map.

Even with anywhere from 3-8 slots per receiver, there’s no shortage of exciting places where you can catch some radio waves.


In modern suburban city living, the wonder of shortwave/HF radio is under threat of extinction. Luckily, thanks to the generosity of various operators around the world, the internet that connects them and the inexpensive computing power that we now have, it is possible to experience remote reception in a way which well surpasses that of simply remotely controlling a conventional receiver. Software defined radio technology has enabled multiple-access, simultaneous multi-channel decoding with a wide-band spectrum display that changes the radio experience into a more efficient, pleasant and visual experience. With clever optimisation, it is possible to do this in a reasonable bandwidth using relatively inexpensive hardware as demonstrated by the KiwiSDR.

As a result, the operators who provide their SDRs for public access are really providing a wonderful resource which can be used by almost anyone – those who are not fortunate enough to be in a good reception location, those who want to spot-check to see if a transmission is on the air or if it’s just their equipment, those who would like to monitor another channel while their own gear is occupied doing something else, or those who want to see if their signal is “getting out” to the world. There are so many different reasons to use it – but I think the biggest advantage is that it can make getting into the hobby of shortwave listening practically free while providing reception that is often well-optimised compared to what you might be able to conjure up at home.

So I am very much indebted to the operators of WebSDRs/OpenWebRXs/KiwiSDRs. They have brought (and will continue to bring) me many hours of joy chasing signals that I had never thought I would be able to hear. There are a few downsides to them, related to limitations in the internet and the abilities of the equipment, but for the most part, they work well enough that it “feels” like I’m operating my own radio, just that it’s someone else’s half-way around the world. If you take the time to configure your equipment well, the result is actually very usable and not far from what you might experience trying to work with your own local SDR (assuming you have one).

The downside is that the motivation to go on DXpeditions has been somewhat lost, if the sole aim is just to set up an antenna in a hotel room and do some listening. There might also be a temptation for some amateurs in contests to “cheat” and receive the audio via another path while claiming to have made a successful contact, so it is important to be honest and use the facilities responsibly. To that end, there are limitations with some servers on inactivity time and total listening time per IP-address per day to avoid abuse, so don’t take it for granted!

Unfortunately, I’m not really in a position to set up one of my own owing to the reception conditions around me and my internet connection, but if that were to change, I would probably consider putting up a public receiver as I can definitely see the benefit it brings to the community of shortwave listeners, HF DXers and radio amateurs alike. For those that do operate stations open to the public (especially the ones I have used), thank you so much for your generosity! I hope that other hams continue to allow public access to their receivers, although I do completely understand if they might be reluctant to do it or might take it down after a while.

Bonus: Chasing Shortwave Radiogram around the World

Being interested in shortwave and chasing QSL cards in the past led me to receiving more “novel” transmissions. One of the most novel was VOA Radiogram, which used a regular AM transmitter to send digital modes (as hams might do) to disseminate information to listeners using regular shortwave receivers. I spent some time monitoring the programs locally at home, but the signals were very variable at times. After going into winter and losing signal entirely, I stopped following the programs.

Since then, the program is now known as Shortwave Radiogram, still produced by Dr. Kim Andrew Elliott, transmitted out of WRMI Florida, USA and SpaceLine Bulgaria. Recently, they publicised the fact they were running the Tecsun Radios Australia #DecodeToWin reception competition, which gave me a good reason to return to monitoring the transmissions. After all, it’s not often that an Australian company is mentioned on shortwave.

As a result, I now try to receive the program from home, but also from a number of WebSDRs/KiwiSDRs to compare results, tweeting reception reports.

I hope to keep monitoring when I have the time … so check out my Twitter to see further reports.

Posted in Computing, Radio