Quick Review: Samsung Evo 32GB (MB-MP32D) microSDHC Card

It seems that Samsung has done some shuffling with their memory cards, and the Samsung Plus is no longer available. Instead, they have drawn on their SSD branding scheme and brought the Evo moniker to their microSD and SD cards. They have also refreshed the packaging of the Pro.

As a few more microSDHC cards are always welcome around here, and mWave had a group deal which worked out to be about AU$16.20 per card (including postage and insurance), I decided to grab a pair to give them a go.


The Evo cards come with orange colouration, similar to the Plus, and are likewise specified at up to 48MB/s. The body of the card and adapter is coloured white, however, and the adapter is packaged with the card.


The particular package has been tailored for the Asia-Pacific and China market, with a model code of MB-MP32D, which is fairly similar to the MB-MPBGB of the Plus card. The original text has been labelled over with revised text, and the ten-year warranty is not offered for Australian purchases from mWave. It also seems the adapter is guaranteed for one year.


The supplied adapter is a quality item, uniquely moulded with good contact. The lock switch itself is very sturdy and unlikely to be loosened by insertion into an SD slot, unlike some other generic adapters.


The card itself also appears to be marked similarly to the Samsung Plus, although the white colouration is printed on the top and along the edges.


The card details are as follows:

Size: 31,440,502,784 bytes
CID: 1b534d3030303030100bd030d600e4f1
CSD: 400e00325b590000ea3f7f800a4040c3

The size of the card is identical to that of the Samsung Plus.
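Out of curiosity, the quoted capacity can be cross-checked against the CSD register. For a version 2.0 CSD, the 22-bit C_SIZE field encodes the capacity as (C_SIZE + 1) × 512KiB, and the CID yields the manufacturer and production date. A quick Python sketch (field offsets per the SD specification; the decoding here is my own illustration, not part of the review):

```python
# Decode the SD CID/CSD registers quoted above (layouts per the SD
# Physical Layer Simplified Specification).
cid = bytes.fromhex("1b534d3030303030100bd030d600e4f1")
csd = bytes.fromhex("400e00325b590000ea3f7f800a4040c3")

# CID: manufacturer ID, OEM ID, product name, manufacture date
mid = cid[0]                                 # 0x1B = Samsung
oid = cid[1:3].decode("ascii")               # OEM/application ID
pnm = cid[3:8].decode("ascii")               # product name (5 chars)
mdt = ((cid[13] & 0x0F) << 8) | cid[14]      # 12-bit manufacture date
year, month = 2000 + (mdt >> 4), mdt & 0x0F

# CSD v2.0 (high capacity): capacity = (C_SIZE + 1) * 512 KiB
assert csd[0] >> 6 == 1, "not a CSD version 2.0 card"
c_size = ((csd[7] & 0x3F) << 16) | (csd[8] << 8) | csd[9]
capacity = (c_size + 1) * 512 * 1024

print(mid, oid, pnm, f"{year}-{month:02d}", capacity)
```

Reassuringly, this reproduces the 31,440,502,784 bytes reported above, identifies manufacturer 0x1B/"SM" (Samsung), and dates this particular card's manufacture to April 2014.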

Performance Tests

HDTune Pro with Transcend RDF8


Sequential read averaged 31.3MB/s, matching the figure the Plus clocked in at.

HDTune Pro with Kogan RTS5301


Sequential read on the Realtek RTS5301 clocked in at 31.9MB/s, compared to the Plus at 32.0MB/s (virtually identical). No compatibility issues were noted.

CrystalDiskMark with Transcend RDF8


The results from CrystalDiskMark seem to be pretty similar to the Plus, and the sequential read is closer to the 48MB/s figure on the package, but not quite there.

CrystalDiskMark with Kogan RTS5301


The anomaly with the 512k write test on the RTS5301 also appears, just as with the Plus card, which implies that they are one and the same product. It also reminds us that performance is highly workload dependent.

H2testW with Transcend RDF8


The default formatting doesn’t quite allow H2testw to test the full surface; however, no errors were found, the average write speed throughout was 17.8MB/s, and the average read speed was 40.2MB/s. A fairly solid result for a low-cost card, though it seems to suggest timing issues which may influence the CrystalDiskMark results. This is the sort of performance you would expect for large sequential transfers.

Raspberry Pi Compatibility

In a recent test with the Raspberry Pi Model B+, I noted that the Samsung Plus doesn’t play well with the board. What happens with the Evo, I wondered?


As it turns out, the card doesn’t seem to be compatible with my Raspberry Pi Model B+ board. It was also incompatible with the Raspberry Pi Model B.


I even changed adapters in case that was the cause … however, no dice.


This may come about due to drive strength issues, or SD card driver issues with the Raspberry Pi. It is hence my advice not to purchase either the Samsung Plus or the Samsung Evo for Raspberry Pi applications at this time.


It appears that the Samsung Evo branding is merely the replacement for the Samsung Plus branding. The card itself retains a very similar performance behaviour, and similar incompatibilities with the Raspberry Pi. It remains a solid buy for a good balance of performance (i.e. faster than the competing SanDisk Ultra), provided that you aren’t intending to use it with the Raspberry Pi.

The SD Card Performance, CID and CSD databases will be updated shortly.

Posted in Computing, Flash Memory

Marketing Confusion: Wireless 802.11ac Data Rates Explained

For people who have looked at upgrading their network to 802.11ac, no doubt you have been perplexed by the wide range of physical layer rates advertised. Before, in the 802.11b and g eras, all cards operated at just one peak rate – i.e. 11Mbit/s and 54Mbit/s respectively.

802.11n changed this somewhat when it introduced the concept of MIMO and spatial streams, with clients supporting multiple spatial streams effectively multiplying the “base” rate (i.e. 1 stream = 150Mbit/s, 2 streams = 300Mbit/s, 3 streams = 450Mbit/s). Dual-band operation also increased the variety of physical layer rates advertised, as many routers added the rates on both bands together, so it was possible to see 150Mbit/s, 300Mbit/s and 450Mbit/s for single-band routers, and 300Mbit/s, 450Mbit/s, 600Mbit/s, 750Mbit/s and 900Mbit/s for dual-band routers.

Enter 802.11ac

The concept is much the same with 802.11ac, just that the numbers are a little different and the number of possibilities has gone up. The base single-stream rate for 802.11ac is 433.3Mbit/s (80MHz, Short GI), with each spatial stream effectively multiplying this base rate. This is somewhat simplified, as 802.11ac makes provision for even larger bandwidth channels (160MHz) and deeper modulation modes which can increase or change this “base” rate, but this is what is presently used in specifications and datasheets, so we’ll go with that.

The base single-stream rate for the 2.4GHz band remains the same as 802.11n (as it is still 802.11n) and is 150Mbit/s (40MHz, Short GI), with each spatial stream multiplying the base rate. An additional wrinkle is a deeper modulation known as TurboQAM, which uses 256QAM and is only compatible with TurboQAM-supporting chipsets; it provides a theoretical throughput equivalent to four streams on 2.4GHz while actually using 3 spatial streams.

That might be hard to visualize from the text, so I made this table which lists the total throughput rates for a given number of streams in each band. Common market-available rates are marked in bold.

AC Throughput Table

Important Notes:

  • AC600 is 1+1 (i.e. 150Mbit/s on 2.4GHz + 433Mbit/s on 802.11ac)
  • AC750 is a 2+1 configuration with 300Mbit/s on 2.4GHz and 433Mbit/s on 802.11ac
  • AC900 often means 867Mbit/s on 802.11ac only, and is often equivalent to AC1200 depending on the number of 2.4GHz streams
  • AC1200 is 2+2 (i.e. 300Mbit/s on 2.4GHz + 867Mbit/s on 802.11ac)
  • AC1750 is 3+3 (i.e. 450Mbit/s on 2.4GHz + 1300Mbit/s on 802.11ac)
  • AC1900 appears to be 4+3, but relies on the TurboQAM extension to provide 600Mbit/s on 2.4GHz using 3 spatial streams, with the 802.11ac rate unchanged
  • AC2400 appears to be 4+4, but relies on TurboQAM on 2.4GHz as AC1900 does.
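The arithmetic behind these designations can be sketched in a few lines of Python, using the single-stream base rates above (150Mbit/s per 2.4GHz stream, 433.3Mbit/s per 5GHz stream, and 200Mbit/s per stream with TurboQAM); the function names are my own:

```python
def rate_24g(streams: int, turboqam: bool = False) -> int:
    """2.4GHz 802.11n PHY rate: 150Mbit/s per stream (40MHz, Short GI),
    or 200Mbit/s per stream with the 256QAM TurboQAM extension."""
    return streams * (200 if turboqam else 150)

def rate_5g(streams: int) -> int:
    """5GHz 802.11ac PHY rate: 433.3Mbit/s per stream (80MHz, Short GI)."""
    return round(streams * 433.3)

def combined(streams_24g: int, streams_5g: int, turboqam: bool = False) -> int:
    """Sum of both bands -- the figure that marketing then rounds up."""
    return rate_24g(streams_24g, turboqam) + rate_5g(streams_5g)

print(combined(2, 2))        # 1167, marketed as "AC1200"
print(combined(3, 3))        # 1750, marketed as "AC1750"
print(combined(3, 3, True))  # 1900, marketed as "AC1900"
```

Note that the true sum behind “AC1200” is only 1167Mbit/s before the marketing round-up – and no single client ever sees even that, since it can only associate on one band at a time.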

As a result, you need to be careful with what you buy and whether certain models are worth the upgrade.

For example, an AC1750 and an AC1900 unit are likely to perform exactly the same for most users. Users on the 5GHz band with 802.11ac clients will see both connect at 1300Mbit/s. The only difference comes down to the 2.4GHz band, where theoretically 600Mbit/s is available from the AC1900 unit; however, this is only possible for TurboQAM-capable chipsets (i.e. new 802.11ac ones running in 2.4GHz), and won’t make a difference to your existing dual- or triple-stream non-TurboQAM 802.11n clients. In fact, dual-stream 802.11n clients can only achieve 300Mbit/s on the 2.4GHz band anyway, so the extra stream is really “wasted” on many clients.

Likewise, if you’re looking for 5GHz performance, paying extra for an AC750 router over an AC600 router isn’t meaningful, because the extra performance comes from one additional spatial stream in the 2.4GHz band.

If you see adapters advertised as AC900, they’re almost always the same as adapters advertised as AC1200. Those are usually earlier adapters with only the 5GHz AC rate of 867Mbit/s advertised, rather than following the marketing practice of summing both bands and rounding up. For that matter, you can only ever be connected on one band – there is no way a single client can see both throughputs summed together!

To build your network out for maximum performance, you will need to match the number of spatial streams on your intended operating band. If you have an AC2400 router, you will need a quadruple-stream-capable card to make the most of the available throughput. Use any less, and you will be limited by the number of spatial streams supported by the card! As a rule of thumb, cards supporting more spatial streams are generally harder to obtain and more expensive. But it does mean that an AC1750 card and an AC1900 card operating in 802.11ac 5GHz mode will perform (practically) identically, as they both support the same number of streams and throughput in that band.

In the future, TurboQAM may be extended to four spatial streams for a throughput of 800Mbit/s on the 2.4GHz band, and thus four-stream routers might be able to claim another 200Mbit/s (i.e. AC2600) on top of the AC2400 designation they market with today.


There is a wide range of gear being sold as 802.11ac, and while it is true that it all supports 802.11ac modulations, it varies significantly in the number of spatial streams supported, which is what improves throughput. As a result, users need to be careful to buy the solution that works best for them – units offering more than two streams of 2.4GHz throughput are unlikely to be beneficial, due to interference and the fact that most 2.4GHz devices support only two or fewer streams.

Users should also be aware of the marketing practices of rounding up and adding band throughputs together – otherwise, they might not realize that an AC1750 card and an AC1900 card operating in 802.11ac mode both have an identical maximum link rate of 1300Mbit/s. It is slightly confusing for an end user, who may just look at the number, think bigger is better, and pay more to receive no tangible benefit.

Posted in Computing, Telecommunications

Tech Flashback: Fujitsu M2266SA – Part 4 – A Time Capsule Unleashed

While most people will see computer hardware as physical objects, often they have a story which is greater than what appears on the surface. For most devices, the story lives and dies with the owner, but in the case of hard drives, sometimes the story can live on.

As a quick recap, I bought the pair of drives for the cost of postage from someone who had acquired them as part of a computer at auction over a decade ago. As a result, the drive is mine, along with everything that may be on it. After days of grinding at it, I had produced an image (as best I could) of whatever was readable at the time.

I had to spend several days analyzing the data (some of which I will go through here), but what was discovered paints a very rich picture of the drive’s past. I debated with myself whether I would release even some of this data, and whether that would be reckless. In the end, I came to the conclusion that the data was too interesting to be kept to myself – especially in the name of research – that the data is probably so out of date as to be irrelevant now (think 18 years ago), and that whatever sensitive data there is can be redacted away before I show it to you. Only small excerpts and examples will be shown, to avoid clutter and releasing too much information. This way, I hope to satisfy everyone’s interest at once.

Before you ask, no, I will not release the complete image. This is all you’re getting.

Image Analysis 101

The first thing to do with a fresh image is to pop it into my favourite hex editor – WinHex.

Boot Sector Novell

The first thing I notice, which is unusual, is that the drive utilizes an MBR partitioning format but has a boot sector with Novell Inc.’s stamp on it. It likely means this drive was partitioned by Novell’s utilities at one point in time.

Partition Table Entry

A look at the partition table in the template view tells us that it’s of Partition Type 0x86. According to Wikipedia, this corresponds to a Windows NT4 Server Fault Tolerant FAT16B Mirrored Volume.
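For reference, the MBR holds four 16-byte partition entries starting at offset 446 of the boot sector, with the type ID at byte 4 of each entry – which is all a minimal parser needs. A sketch of my own (not the WinHex template):

```python
import struct

def parse_mbr(sector: bytes):
    """Parse the four primary partition entries from a 512-byte MBR sector."""
    assert sector[510:512] == b"\x55\xaa", "missing 0x55AA boot signature"
    entries = []
    for i in range(4):
        e = sector[446 + i * 16: 446 + (i + 1) * 16]
        if e[4] == 0:          # type 0x00 marks an unused slot
            continue
        lba, sectors = struct.unpack_from("<II", e, 8)
        entries.append({"bootable": e[0] == 0x80, "type": e[4],
                        "lba": lba, "sectors": sectors})
    return entries
```

Running this over the first sector of the image would report the single type 0x86 entry described above.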

As a result, I believe the two drives I received were RAID mirrors of each other, and “losing” the other drive wasn’t really a big deal. But strangely enough, I couldn’t get the file system mounted in WinHex or in any operating system. I wrote the image back to a USB key just to check, but nothing could recognize it. Analyzing sector number 32 gave me a hint …

NTFS Style File Records

Notice the 1kB records starting with FILE*? This is very similar to the modern NTFS, with its 1kB records starting with FILE0. So, despite the partition type claiming FAT16B, it seems to be an earlier NTFS. Maybe it was converted using the convert utility.
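Records like these are easy to tally across a raw image: each begins with the ASCII magic “FILE” on a sector boundary, so a scan only needs to check 512-byte-aligned offsets. A minimal sketch (the function is my own, not a WinHex feature):

```python
def find_file_records(image: bytes):
    """Return offsets of NTFS-style FILE records in a raw disk image.
    Records begin on sector boundaries, so only 512-byte-aligned
    offsets need checking."""
    return [off for off in range(0, len(image), 512)
            if image[off:off + 4] == b"FILE"]
```

Counting the hits (and the gaps between them) gives a quick feel for how much of the filesystem metadata survived the bad sectors.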

Whatever it was, there was still something broken about it: I couldn’t find any $MFT. Changing the partition type ID still couldn’t get the filesystem identified, so recovery had to proceed without the aid of the file system. Maybe it was actually a JBOD volume, and losing the other disk (the first of the JBOD, holding the MFT) was a big loss after all.

I did contact the former owner to find out more about the circumstances, and it appears that the two SCSI drives were accompanied by an IDE hard drive (likely to boot the OS). So it seems likely that Windows NT4 Server booted off the IDE drive, to run a centralized server share which resided on the two SCSI drives RAID/JBOD-ed. The Novell influence probably comes from their use of Novell to manage network login authentication and delivery of applications. Evidence of IPX networking was discovered within the “ruins”.

Let’s take a quick look around the drive. Despite the bad sectors cutting into the image like Swiss cheese, there were many readable text characters – for example, these two segments, which suggest some sort of database, format unknown.

Fault Logs Database Fault Logs

While scrolling through, I did see a lot of text and not many empty sectors. There were also blocks of text data with binary bits “chucked” into the middle. This fact will become important for the next stage of our recovery.

File Carving

How does one find something in a book without the table of contents or the index? By searching through, page by page, until you find what you’re after.

The whole premise behind file carving is that many types of files have magic values in their headers which uniquely identify the particular sort of file. Some even have trailer values which allow you to identify the end of the file.

By searching for the start value, and copying all the data into a separate file until you reach a size limit or the end value, you can potentially extract workable files from the image.

The key drawback to this method is the loss of all file names, and the production of many broken files. A key assumption of the file carving method is that the data is stored sequentially with no fragmentation! Otherwise, you get this …

02809 05166 01702

A drive which is mostly full is often fragmented. Furthermore, for a drive with bad sectors everywhere, it is likely that many files will sit in a “hole” somewhere and be corrupted. Smaller files are very likely to be recovered, but the rate of recovery diminishes quickly the larger the file is.

By looking at the data on the drive, I selected a few candidate file types to carve and went through the results. Most interesting was the discovery of PostScript files, which the default carver wasn’t good at. I noticed the PostScript files ended with %%EOF, so I added this as the trailer value, which produced many more parsable files.
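As a sketch of what such a trailer-aware carve looks like (a simplification of what dedicated carving tools do internally; the function name and the 4MB size cap are my own choices):

```python
def carve_postscript(image: bytes, max_size: int = 4 * 1024 * 1024):
    """Carve PostScript files from a raw image: copy from each '%!PS'
    header up to and including the next '%%EOF' trailer, falling back
    to a max_size cap when no trailer is found."""
    carved = []
    pos = 0
    while (start := image.find(b"%!PS", pos)) != -1:
        end = image.find(b"%%EOF", start)
        if end != -1 and end - start < max_size:
            chunk = image[start:end + len(b"%%EOF")]
        else:
            chunk = image[start:start + max_size]  # truncated: no trailer seen
        carved.append(chunk)
        pos = start + len(chunk)
    return carved
```

With the %%EOF trailer in play, the carver stops at each file’s real end instead of running on to the size cap, which is why far more of the spooled print jobs came out parsable.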

It’s important to note that this technique usually produces many broken files, and that’s a consequence of the method used. However, some uncompressed file formats can fare well enough that enough of the file is recovered to make out the contents. Compressed files will generally fail to extract, as their integrity checks detect the corruption. There are formats which sit somewhere in the middle, which may fail due to missing items or things out of order. It’s a bit of pot luck.

Former Owner: Telecom Australia / Telstra Melbourne Transmission Control Centre

It was a pretty exciting result when I had carved most of the usable files out of the drive, as it was discovered that the drive most likely resided in the Telecom Australia/Telstra Melbourne Transmission Control Centre (TCC). I have been interested in telecommunications and networking for ages, and I even applied to Telstra’s Graduate Program before, but unfortunately didn’t make it past the second-stage interviews. Surprisingly, Telstra’s Wikipedia page doesn’t make much mention of this part of their history at all. It’s like finding a pristine time capsule.

Historically, phone phreaks of the 1950s to 1970s sought to explore and (sometimes) exploit the in-band signalling techniques of the telephone network, setting a precedent for telcos to be quite secretive about their operations. Some enthusiasts even broke into exchanges and went dumpster diving to try and recover discarded manuals which could provide them information on the equipment. Given this, it’s a bit of a surprise that the data wasn’t better protected.

Many of the files were critically broken, but since I didn’t have to manually attend to them, I was able to sift them out pretty quickly.

It appears the computer was a file server, but also a print server; as a result, jobs printed to the PostScript-compatible printer were stored in the spooler on the hard drive and could be carved and reproduced (around 1000 pages worth). This was a bit of a jackpot, but it did result in many identical print-outs showing up. Plain text from terminal logs and unknown databases was pervasive, but no other files of interest were recoverable. It seems likely most of the files recovered are those which existed at the end of the server’s life, as earlier temporary files would have been overwritten.

The drive tells us a story about Telstra in the mid-1990s (approximately 1994-1996):

  • The Melbourne Transmission Control Centre was responsible for links within Victoria and Tasmania.
  • A large corporate restructuring drive was going on, with back and forth with the union, leading to the loss of many jobs and retraining for some. Money was a problem at the time.
  • Strong internal induction, training and learning-reflection programs.
  • Regular meetings of staff to discuss major transmission incidents.
  • Many different internal systems for different purposes, with reports generated very frequently. Lots of details were recorded for each case, including some very ingenious solutions to outages. Systems were evolving continuously in response to user demands.
  • A mixture of technologies was in use – analog and digital radio links and Optical Line Transmission Equipment, as well as Plesiochronous Digital Hierarchy (PDH) and Synchronous Digital Hierarchy (SDH) systems.
  • Spare transmission capacity was plentiful and vital for patching out damage to transmission lines, as well as carrying video traffic for media events.
  • Some users misused the facilities for printing out TV series plotlines, game cheat codes and walkthroughs, inspirational quotes, recipes, university assignments and pornography (who would have guessed!). Some users were very much into newsgroups, and evidence of scam operations was already visible.
  • Numbering system changes seemed to be taking effect at the time, with some phone numbers in the “new” eight-digit format beginning with 9.
  • Phone, e-mail and fax were used internally. Data call connections were also available via LAN.

Internal System Examples

Of course, the most interesting things are the actual images … and it wasn’t an easy decision on my behalf to reveal these. However, I think that by redacting the exchange names and the link identifiers in most of the reports, and given the age of the reports, that this is an acceptable balance. For the purposes of research, only excerpts are provided. The drive is technically mine, along with all the data on it.

The first system is known as INFLOG and appears to be a system by which failure reports and remediation steps are recorded and timestamped for each incident.


Another very important system is the TRAC system which records installation details, hierarchy and interconnection details amongst others.

TRAC System

They also had another system called AMS which managed alarm reports.

AMS Alarm Points Exceeding Threshold

Another system was known as NetInfo which provided information on the present links between exchanges.


There appears to be a special database for Technical Outage Investigations (TOI).

Technical Outage Investigation Database

There is yet another system for bearer troubles, known as BOOS. It also produces reports for other bearer-related statistics.

BOOS Report

Digital Radio Bearers see special monitoring, with a Digital Radio Bearer Analysis report …

Digital Radio Bearer Analysis

… as well as Detailed Site Analysis.

Detailed Site Analysis

There is also a Degraded Performance Report for bearers which have issues.

Degraded Performance Report

Finally, to round out the internal systems, they have a corporate directory.

Telstra Corporate Directory V3-2

Radio Traffic Map

Between certain exchanges, the bearers are carried by analog or digital radio links. Their limited capacity and long paths were one reason why STD call charges were so high, and the quality was variable especially over analog links.

To see these maps between certain exchanges reinforces the idea of redundancy (protection bearer) and introduces us to the SIL RF-series (analog?) and NEC 500-series digital radio bearer links. To think that they could build 140Mbit/s links “in the air” using microwave links back in 1996 is truly impressive.

Radio Traffic Map 2Radio Traffic Map 3 Radio Traffic Map 4 Radio Traffic Map

The maps themselves illustrate how the microwave network is built – point to point. In the case of connecting distant exchanges, some of the links will be configured through (i.e. repeat) at intermediate towers/exchanges. The capacity and frequencies of the links are shown too.

Maps, Rosters, Forms and Other

The interesting results don’t stop there. It seems that Telstra were using a modem pool to try and eliminate the need to have modems and phone lines at each desktop. They called this system Datagate.

Documents Telstra Datagate Form

They also seemed quite audacious in introducing a data based network to the public. The Global Network for Lotus Notes … which I had no idea about …

Lotus Notes based Public Network

Media bookings for capacity to carry video events were also seen, and distributed by form.

Media Booking

They were actioned according to the following flow chart.

Media Sales Video Booking Flowchart

Speed dial chart? Yes please! The numbers and individual names have been redacted, but analog 01 numbers were seen, along with a mixture of 9-leading 8-digit numbers and 7-digit numbers.

Speed Dial Chart

There is even a shift roster …

Melbourne TCC Shift Roster

They held regular meetings, with minutes …

Shift Leaders Meeting

More reports of various sorts …

TxOps Report Melb

Unplanned Outage Summary Report

And finally these two gems. It seems that while Telstra was a bit of a sinking ship at the time, some of the guys had a bit of humour left in them.


It also seems that there was a “take your daughter to work” day or something. I know my Mum used to take me to her workplace and I would do similar things in MS Paint …



This hard drive tells a very interesting and rich story about Telecom Australia/Telstra in the mid-90s. Times have definitely changed, and the technologies have as well. Telephony was still popular in the mid-90s, and the internet/IP networks were still mysterious to many. Bandwidth requirements were nothing compared to now.

While we still seesaw back and forth on Fibre-to-the-Home, fibre was a key staple of interconnection back in the mid-90s. Even a 140Mbit/s OLTE/Digital Radio system link, which could serve a medium-sized exchange, would now easily be saturated by a few users carrying IP traffic. Analog technologies have slowly been abolished, both in transmission and to end users (mobile phones), and microwave link usage has been supplanted by fibre where necessary for bandwidth and reliability reasons.

The numbering plan has also changed through time, due to demand for new numbers. Around that time, Sydney added a 9 in front of the (then) 7-digit numbers to provide for more numbers starting with 8.

Even media use of telecommunications networks has changed somewhat, with many field contribution links using their own microwave systems, satellite contribution links, and recently, even 3G/4G broadband networks to link events to their studio.

The administration systems utilized back then look relatively unsophisticated today – text only terminal interfaces might have been replaced by something more modern, although I note some companies in the banking sector still persist with these interfaces and systems today.

I suppose, in a privacy centric society today, hard disk data destruction is taken more seriously than it was back then – this data could have been in the hands of someone else over ten years earlier. Maybe then, its value and potential for misuse would have been higher.

The drive had not been explored previously only because the former owner did not have the requisite SCSI gear to talk to it. Now, it is mostly of historical interest, as it provides us a glimpse into the back end of a beast we don’t normally see.

I suppose it goes to show that if you don’t sanitize your disks when you dispose of them, someone could discover your data. But it may not be the worst thing in the world. It’s the story of your hardware, and it gave this “geek” a lot of entertainment. Money well spent.

Posted in Computing, Tech Flashback, Telecommunications