Given how many devices nowadays rely on Secure Digital (SD) memory cards, you wouldn’t expect me to have anything bad to say about them, right? Well, as valuable as the format is, it has its frustrations, and in this article I’m going to vent them.
The flash memory arena, prior to SD’s arrival, had primarily settled on CompactFlash (CF). CF was invented by SanDisk in 1994 and has been governed by the semi-open CompactFlash Association, which allowed for easy interoperability and improvements to the standard.
These cards were sturdy, built with a plastic frame and a thin plate steel cover. The cards were intelligent, with integrated controllers to deal with the niggles of flash memory: wear levelling, logical-to-physical block translation, ECC, reallocations, etc. They also supported various I/O capabilities for expansion cards for PDAs.
This contrasted with the format most popular before it – SmartMedia, which was basically a bare flash chip housed in a thin card-like substrate. The controller lived in your end device, and you were subject to compatibility issues galore.
CF had other things going for it too – it was pin-compatible, through a passive wiring adapter, with PCMCIA, a then-dominant laptop expansion card interface. It was also pin-compatible through a passive wiring adapter with IDE (for cards with true IDE mode), allowing the cards to serve as “makeshift” SSDs before dedicated SSDs started coming onto the market. Compatibility issues were limited because of the standardized interface, although cards greater than 2GB required FAT32, and some devices had no ability to work with that filesystem. Interfaces were generally forwards and backwards compatible – a card and host would work at the fastest speed supported by both ends. The cards were also dual-voltage compatible.
Unfortunately, the sun has started to set on CF as it becomes technologically irrelevant. Its large form factor was unsuited to the smaller devices we demand today. IDE is no longer relevant to most users, and neither is PCMCIA, and the parallel interface, involving many pins and connections, is tricky to run at high speeds. In fact, even with the UDMA generational improvements to the interface, regular CF couldn’t really pass ATA166, whereas SD’s UHS buses are cranking up to match or exceed this. The last attempt to revive the CF form factor, called CFast and based on SATA, broke compatibility with the old standard and wasn’t very successful. Another option was the XQD card, based on PCIe, which seems limited to professional usage.
In the lull between the decline of CF and the rise of SD, we were inundated by the worst of the worst – attempts at bringing us the “next big” memory format. Contenders of the time included Sony’s Memory Stick and its family of variants (M2, Pro, Duo), and Fujifilm/Olympus’ xD. Both tried to offer smaller form factors, with similar memory capacities, to improve convenience. Both failed, to varying degrees, to capture the market as a whole, and that’s a good thing. The problem with both is that they were controlled by proprietary interests which, out of greed, would not open themselves up to deployment by others. Users who bought into either system stood to lose through a limited number of compatible devices and high costs. That’s not to mention xD’s technical shortcomings: it was closer to SmartMedia than to CF, resulting in limited capacity and device-compatibility issues.
While the proprietary formats warred, an open format was brewing – that of MultiMediaCard (MMC), unveiled in 1997 by SanDisk and Siemens and later standardized through JEDEC. While MMC is rarely seen today, the SD format was initially built on MMC’s interface and form factor. SD is run by the SD Association, formed in 2000 by Panasonic, SanDisk and Toshiba, with its first main difference being the addition of Secure Digital DRM to the cards.
Frustration 1: SD Royalties
Did you know that pretty much every SD-marked memory card manufactured involves a royalty fee payment to the SD Association? Likewise, every compatible host device involves royalty payments on behalf of its manufacturer.
In general, such an arrangement is considered the norm when it comes to technology, but one of the main reasons this is a frustration is that there is a perfectly good open MMC standard which just didn’t get the support it needed, and is now fairly rare to encounter outside of embedded eMMC scenarios. The command set is slightly different, but given the same amount of investment, MMC could have won out as a royalty-free solution with similar performance.
Further to this, SD was initially built upon MMC, and one of the key additions of the SD standard is its DRM facilities, which allow content to be stored in a secure manner that very few to no devices actually use. As a result, many SD cards are carrying around DRM silicon which never gets used – a suboptimal outcome. The other thing, proven time and time again, is that DRM is almost never in the consumer’s favour.
Some manufacturers have tried to make compatible, DRM-free cards to skirt around the royalties, but so far these, like MMC, have been relatively unsuccessful.
Frustration 2: SD Form Factors and Adapters
To address device manufacturers’ desire to produce smaller form factor devices, and the trade-off between card volume and storage capacity, the SD format comes in three different form factors, only two of which remain popular today. Interestingly, MMC has a different line-up of form factors when it comes to the smaller cards.
From the left, we have the regular full-size SD card, which is still somewhat widely used today. The middle card is mini-SD, which was relatively uncommon; I don’t think mini-SD cards are even made anymore. The final one on the right is a micro-SD, which is overtaking the full size in many applications.
In fact, the only reason I have a mini-SD card is that it came as a sample card with the Nokia N800 Internet Tablet I owned (which, incidentally, had two full-size SD slots) – otherwise I would never have used one.
The cards themselves are all electrically compatible, with passive wiring adapters needed to adapt the mini-SD and micro-SD into the full size SD.
The form factors are a frustration because of the trade-offs they inherently assume. The micro-SD form factor is now rapidly gaining popularity, and as a result, many users will likely be saddled with smaller-capacity cards, since the smaller physical volume limits the capacity. A side effect is the adoption of TLC memory technology, with slower speeds as a result.
The large full-size cards won’t fit into much besides cameras and computers nowadays, but offer the best compatibility and speeds. Mini-SD is pretty much doomed, with few options.
Being electrically compatible is not enough though. The passive wiring adapters are often lost when you need them most, but worse, many of them have regular contact problems, resulting in drop-outs, and they can also interfere with the highest-speed data transmission modes, causing the cards to fall back to the 25MB/s (50MHz High Speed) mode.
Frustration 3: SD Capacity, Speed and Interface Compatibility Tiers
SD capacity is inherently defined in terms of a number of blocks and a block size, and in its initial incarnation, the designers didn’t have enough forethought to accommodate more than 1GB. Actually, many of you might know the original SDSC limit as 2GB; this was achieved by doubling the block size, which introduced compatibility issues with some earlier SD devices. The file system used with SDSC was almost universally FAT.
As a result, you need to retain 1GB or 2GB cards for use with early devices – capacities that would be mostly empty space on a modern card.
People wanted more memory capacity, so there were non-standard hacks to the SDSC standard which pushed the capacity to 4GB. These caused extreme compatibility issues, especially once the SDHC specification for 4GB to 32GB cards was published. I actually lost some data with the hacked 4GB cards, and later SDHC-compliant devices refused to format them at all. Formatting such cards required the right device, and patience, and could use FAT or FAT32 depending on the target device.
With the SDHC specification, the CSD register layout was changed; thus no original SDSC device will interoperate with SDHC cards, although newer SDHC devices can still use SDSC cards. The SDHC specification was limited to 32GB and FAT32, which again seemed to lack forethought.
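The two CSD layouts encode capacity quite differently, which is the root of the incompatibility. A rough Python sketch of the two capacity formulas from the SD specifications illustrates the limits (field names follow the spec; the values used below are illustrative maximums):

```python
def sdsc_capacity_bytes(c_size, c_size_mult, read_bl_len):
    """SDSC (CSD version 1.0): (C_SIZE + 1) * 2^(C_SIZE_MULT + 2) blocks
    of 2^READ_BL_LEN bytes each."""
    block_count = (c_size + 1) * (1 << (c_size_mult + 2))
    block_size = 1 << read_bl_len
    return block_count * block_size

def sdhc_sdxc_capacity_bytes(c_size):
    """SDHC/SDXC (CSD version 2.0): (C_SIZE + 1) * 512 KiB,
    with C_SIZE widened to 22 bits."""
    return (c_size + 1) * 512 * 1024

# Maximum SDSC fields with a 512-byte block size: 1 GiB
print(sdsc_capacity_bytes(4095, 7, 9) // 2**30)          # 1
# Doubling the block size to 1024 bytes reaches the familiar 2 GiB limit
print(sdsc_capacity_bytes(4095, 7, 10) // 2**30)         # 2
# CSD v2 with all 22 C_SIZE bits set: the 2 TiB SDXC ceiling
print(sdhc_sdxc_capacity_bytes((1 << 22) - 1) // 2**40)  # 2
```

The v2 encoding drops the multiplier and block-size games entirely, which is why a host parsing a v1 CSD makes no sense of an SDHC card.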
Once 32GB cards became common, the desire for even larger cards led to the introduction of the SDXC specification, which changed very little except allowing reserved bits to be used to continue scaling up the block count (for cards up to 2TiB), and mandating the exFAT filesystem (which is itself proprietary).
As a result, many SDHC devices cannot officially interoperate with SDXC cards, depending on how they handled the reserved bits. Those that used them anyway in the block count computation had a chance of interoperating, provided the card was formatted as FAT32 – in violation of the SDXC standard. Some devices claim to work with SDXC cards, but only when formatted as FAT32, and thus do not carry the SDXC logo.
This also brings me to the issue of interface speeds and speed tiers. The SD interface was originally specified up to a 50MHz clock (High Speed mode) on a 4-bit bus, making for a 25MB/s maximum transfer rate. This was not sufficient for many high-speed applications and hampered SD’s adoption in preference to CF where high performance was concerned.
To address this, faster bus interfaces were allowed under the UHS-I arrangement. This introduced the SDR50 (100MHz single data rate clock), DDR50 (50MHz double data rate clock) and SDR104 (208MHz clock) modes, providing 50MB/s, 50MB/s and 104MB/s transfer rates respectively.
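The mode names encode peak throughput directly: clock rate times bus width, doubled for DDR. As a back-of-the-envelope sketch (real-world throughput is lower due to protocol and flash overheads):

```python
def bus_throughput_mb_s(clock_mhz, bus_width_bits=4, ddr=False):
    """Peak SD bus throughput in MB/s: clock * bus width * edges per cycle / 8."""
    edges_per_cycle = 2 if ddr else 1
    return clock_mhz * bus_width_bits * edges_per_cycle / 8

print(bus_throughput_mb_s(50))            # 25.0  - original High Speed mode
print(bus_throughput_mb_s(100))           # 50.0  - SDR50
print(bus_throughput_mb_s(50, ddr=True))  # 50.0  - DDR50
print(bus_throughput_mb_s(208))           # 104.0 - SDR104
```

Note that SDR50 and DDR50 land at the same peak rate by different means: one doubles the clock, the other clocks data on both edges.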
Unfortunately, while devices are often marked with the UHS-I logo, it’s not clear which bus interface speeds are used, and as a result, the performance achieved is unclear as devices can sometimes negotiate a much slower mode than expected. The performance can also be hampered by card compatibility issues, host port design and trace routing, as well as adapters which can cause signal reflections.
Consumers instead see speeds in terms of “classes”, which are different from the interface speeds. Initially, a class system of “Class 2”, “Class 4”, “Class 6” and “Class 10” allowed cards to specify their certified minimum write speed as 2MB/s, 4MB/s, 6MB/s and 10MB/s respectively. Of course, this wasn’t enough for the best cards, and didn’t specify read speeds, so manufacturers sometimes invented classes, such as “Class 16”, or indicated the speeds separately, but these were not certified by the SD Association.
Original cards were unclassed, and thus have no committed minimum write speed. Ironically, this did not make the decent ones any worse for many applications like HD video. Instead, the lack of class data led to devices rejecting the cards because of a mismatch of class requirements.
These cards using the original class system mostly use the original SD interface at up to 25MB/s.
The newer UHS-I-capable cards tend to carry the UHS Speed Class 1 (U1) logo, which indicates a 10MB/s minimum write speed. There is also a U3 mark for a 30MB/s minimum write speed. Other UHS speed classes are not often seen.
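Put together, the class marks boil down to a simple lookup of certified minimum sequential write speeds. A hypothetical helper (the table follows the SD Association definitions; the function itself is just an illustration, not any real API):

```python
# Certified minimum sequential write speed, in MB/s, per class marking.
MIN_WRITE_MB_S = {
    "Class 2": 2, "Class 4": 4, "Class 6": 6, "Class 10": 10,
    "U1": 10,  # UHS Speed Class 1
    "U3": 30,  # UHS Speed Class 3
}

def meets_requirement(card_marking, required_mb_s):
    """True if the card's certified minimum write speed meets the requirement.
    Unclassed cards commit to nothing, so they are treated as 0 MB/s."""
    return MIN_WRITE_MB_S.get(card_marking, 0) >= required_mb_s

print(meets_requirement("Class 10", 10))  # True
print(meets_requirement("U1", 30))        # False - U1 only guarantees 10 MB/s
```

Note that Class 10 and U1 guarantee the same minimum write speed; the class mark says nothing about which bus interface, and hence which read speed, the card supports.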
This can sometimes cause confusion, as people wonder if a card carrying Class 10 is UHS-I capable, with a UHS-I interface for faster read-back, or whether a UHS-I card is faster than a Class 10 card.
This only gets worse as a new interface with an extra row of pins, termed UHS-II, arrives. Also note that UHS-I and UHS-II denote the interface generation, whereas U1 and U3 (often written UHS-1 and UHS-3) indicate the speed class. Confusing, eh?
Frustration 4: Physical Card Robustness
By far, my biggest frustration is the lack of durability of the full-size SD form factor. The specification generally results in thin, plastic-bodied cards welded around the seams.
The plastic itself gets brittle over time, snapping near the finger guards and the end of the card, resulting in the whole card falling apart.
It’s not just one or two cards. I’ve lost at least three cards to this sort of breakage … by contrast I haven’t lost any CompactFlash to physical failure of the package.
At least we get to see the insides of the card. The Team card above has a more traditional PCB featuring a single NAND package, with a glob-top mounted controller.
The Samsung card below features a special custom plastic package, mounted on a thin passive PCB.
You’ve probably realized that micro-SD is not vulnerable to this, and indeed, the problem was solved in some sense a while back. Kingmax had a line of cards, branded Platinum, made of a ceramic-like substrate with no plastic on the outside – very similar to the material used to package micro-SD cards. The disadvantage was that the packaging made it a “thin” SD (like an MMC card), which limited its capacity. The largest one I own is a 1GB card, but it has been through many, many insertion-removal cycles with no damage. It’s also inherently waterproof. One other manufacturer also made cards of this sort, but their name escapes me at the moment. This packaging technique doesn’t seem to be used anymore.
For what it’s worth, Secure Digital is a vital standard that allows for a decent amount of interoperability without the pricey disadvantages of vendor lock-in, and makes a decent replacement for CompactFlash.
I don’t mean to criticise Secure Digital too harshly, as its contributions are significant, but merely to note, from the perspective of someone who has encountered and understands the technical issues, a host of frustrations that others might encounter too.
It will be interesting to see just how far we can go pushing the envelope with NAND flash technology. Will we see a 2TiB SDXC card in the near future? Quite possibly.