Review: SanDisk Ultra UHS-I 128GB microSDXC Card (Up to 80MB/s)

At the beginning of this year, I posted about a relatively new (less than a year old) SanDisk Ultra UHS-I/Class 10 microSDXC card of the 80MB/s variety which had suffered a data loss issue. Rather unfortunately for me, I had some personal data on it at the time, and my doubts got the better of me, so I gave it a “second chance” after a full format and an immediate integrity test passed.

Half a year later, the same demons returned. The phone the card was used in began to freeze in strange ways, taking forever just to load the lock screen. Eventually, I could take no more, so I removed the card and the phone went back to normal. Plugging the card into the PC revealed exactly what I had come to expect of it – data loss.

The Bad Egg

Unfortunately, the bad card was never actually reviewed on this site. When I purchased it, I was in such a hurry to commission it for use that I did the testing, neglected to save the results and never said anything about it. From the front and back, the bad card looked exactly like the old (pre-80MB/s edition) 128GB cards, with a green rear substrate and a black “painted” portion carrying codes. The difference was that it was a superior card when it came to read speeds compared with the older one.

Unfortunately, it was also superior in losing data. While its older sibling still recalls data flawlessly, this one has failed twice in succession.

Its behaviour in my two preferred card readers proved to be interesting – the RDF8 seems to “time out” waiting for the card, drops the bus and says goodbye. The more patient RDF9 seems to accurately delineate which sectors could not be read and is a better candidate for doing data recovery on “slow” cards. Good to know in case there are other cards that might need recovery.
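For the record, this kind of salvage read is simple enough to script yourself. The following is a minimal, ddrescue-style copy loop in Python that retries failing chunks sector-by-sector and zero-fills anything unreadable – the device path, sector size and chunk size are illustrative assumptions for a Linux-like system, not specifics of my setup.

    import os

    # Illustrative assumptions -- adjust for your own system.
    DEVICE = "/dev/sdX"           # raw block device exposed by the card reader
    SECTOR = 512                  # bytes per logical sector
    CHUNK = 256 * SECTOR          # normal read size; fall back to single sectors

    def salvage(device, out_path, length):
        """Copy 'length' bytes of a device, zero-filling unreadable sectors."""
        bad_sectors = []
        with open(device, "rb", buffering=0) as src, open(out_path, "wb") as dst:
            pos = 0
            while pos < length:
                want = min(CHUNK, length - pos)
                try:
                    src.seek(pos)
                    data = src.read(want)
                except OSError:
                    # Retry one sector at a time so a single bad sector
                    # doesn't cost us the whole chunk.
                    data = b""
                    for s in range(pos, pos + want, SECTOR):
                        try:
                            src.seek(s)
                            data += src.read(SECTOR)
                        except OSError:
                            bad_sectors.append(s // SECTOR)
                            data += b"\x00" * SECTOR
                dst.write(data)
                pos += want
        return bad_sectors        # a map of what couldn't be recovered

For a real recovery job, ddrescue (which keeps a log of bad regions and retries intelligently) is the better tool – the sketch above merely illustrates the principle of degrading to per-sector reads.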

Regardless, there shouldn’t be any read errors at all for a card which hasn’t been mistreated (e.g. removed during writes), especially when the errors coincide with files that had formerly been readable. I decided it was time to return it.

The Replacement Egg

Armed with all the necessary documentary evidence, I set out to return the card. I tried to call SanDisk to obtain an RMA number over the phone, as the receipt had encouraged me to do, but instead I got a “You are not allowed to call this number” message and it hung up on me. As a result, I had no option but to return to the shop without an RMA number, faded receipt in hand.

Contrary to my expectations, the RMA process with Wireless1 was quick and painless. Once they had identified the receipt number from the faded receipt, I provided the documentary evidence and they issued me a new card on the spot, along with a replacement receipt on regular paper which shouldn’t fade – much easier than I had expected.

The package looks just like that of the original problematic card, differing from the earlier (pre-80MB/s) version only in the increased “Up to 80MB/s” claim.

The internal package, however, is somewhat different. The troublesome card came in a plastic pack with a paper backing adhered to the plastic, whereas this one uses a cellophane cover over the back instead.

When it comes to “ease of use”, the new packaging was not particularly user-friendly. Despite following the opening instructions, the cellophane shredded mid-tear, and I had to use some extra force to break the card free.

The card itself looks different too. The green substrate is gone, replaced by a uniform black rear. I have high hopes that this card may be different – maybe it will perform better and not lose my data.

Card-specific information is as follows:

Capacity: 127,865,454,592 bytes
CID: 03534441434c434680e3572a1d0112df
CSD: 400e00325b590003b8ab7f800a404079

Of note is that the card capacity is different to that of the original cards – another hint of a different implementation.
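For those curious about such registers, the CID has a fixed 128-bit layout under the SD Physical Layer specification (manufacturer ID, OEM ID, product name, revision, serial number and manufacture date). Below is a quick Python sketch that pulls the fields apart – the field boundaries follow my reading of the spec, so treat the decoded values with appropriate caution.

    def decode_cid(cid_hex):
        """Unpack a 128-bit SD CID register per the SD Physical Layer spec."""
        b = bytes.fromhex(cid_hex)
        return {
            "manufacturer_id": b[0],                  # MID: 0x03 = SanDisk
            "oem_id": b[1:3].decode("ascii"),         # OID: 'SD'
            "product_name": b[3:8].decode("ascii"),   # PNM: 5 ASCII characters
            "revision": f"{b[8] >> 4}.{b[8] & 0xF}",  # PRV: BCD major.minor
            "serial": hex(int.from_bytes(b[9:13], "big")),
            "manufactured": (2000 + ((b[13] & 0xF) << 4 | b[14] >> 4),  # year
                             b[14] & 0xF),                              # month
        }

    print(decode_cid("03534441434c434680e3572a1d0112df"))
    # -> MID 0x03, OID 'SD', product 'ACLCF', manufactured (2017, 2)

If decoded correctly, this card was manufactured in February 2017, which fits the timeline of the replacement.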

Performance Testing

Before using any card, it has to pass a commissioning test to ensure its correct operation. This is also an ideal opportunity to characterise its performance. In this review, the card will be tested with both Transcend RDF8 and RDF9 readers.

HDTune Pro Sequential Read

The card achieved an average of 79.7MB/s with the Transcend RDF8, just shy of the claimed 80MB/s. However, it did achieve an average of 89.7MB/s with the Transcend RDF9, exceeding the claims.

HDTune Pro Sequential Write

When it comes to writes, the card shows some bumps in its speed, but averaged 26MB/s on the RDF8 and 26.2MB/s on the RDF9. This exceeds the Class 10 minimum requirement of 10MB/s, and compares quite favourably with the Samsung Evo+, which achieved around 20-22MB/s. Based on sequential performance alone, this card is more than a match for Samsung’s Evo+.

CrystalDiskMark

Testing with CrystalDiskMark is something I have been doing for a long time, but this card presented some interesting challenges.

Transcend RDF8

When the tests were run multiple times, some rather interesting deviations in 4k performance presented themselves. While very slow rates are not unexpected, 2MB/s was achieved on only one run. In the last run, the card hung for hours preparing the QD32 tests and never finished.

Transcend RDF9

The test was run once on the RDF9, and the results look quite good, although the 512kB read accesses were almost half the speed of those on the RDF8.

Because of the inconsistent performance, the results from this benchmark will not be included in the performance test database. They do, however, point to a bigger issue, which is discussed subsequently.

ATTO

Because CDM often produces strange results for 4kB accesses, I’ve gone back to using ATTO. However, even ATTO was not without its troubles.

Transcend RDF8

When running ATTO initially, the card produced errors: the program claimed errors when reading the file, which seemed suspicious. On dismissing the dialog, the card was found to have an unacceptably slow write speed at 1kB accesses.

Thinking that the card might be stressed by threaded access, I disabled overlapped I/O and managed to obtain a result.

As the result was taken under a different test regime, it’s not possible to compare it with other tests. Strangely, though, the card had very poor write throughput below 64kB, at which point it magically recovered.

Further testing showed it was possible to run overlapped I/O tests, but only after giving the card some time to rest. This suggests the card’s controller may be acting like some older SSDs, which lacked the processing power to manage the flash mapping table and perform garbage collection in the background when subjected to heavy I/O, resulting in “stalls”. If not that, the card may have a throttling mechanism that attempts to prolong its life when used in inappropriate situations involving numerous small writes. Either way, this inconsistent I/O behaviour makes understanding its small block performance difficult.
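One way to make such stalls visible, independent of any benchmark program, is to time a long run of small synchronous writes and look at the outliers. A minimal sketch follows – the path, block size and write count are illustrative, and O_SYNC assumes a Linux-like OS.

    import os, time

    PATH = "/media/card/stalltest.bin"    # illustrative path on the mounted card
    BLOCK = 4096                          # small 4kB writes, the apparent trouble spot
    COUNT = 5000

    buf = os.urandom(BLOCK)
    latencies = []
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_SYNC)
    try:
        for _ in range(COUNT):
            t0 = time.perf_counter()
            os.write(fd, buf)
            latencies.append(time.perf_counter() - t0)
    finally:
        os.close(fd)

    latencies.sort()
    print(f"median {latencies[COUNT // 2] * 1e3:.2f} ms, "
          f"p99 {latencies[int(COUNT * 0.99)] * 1e3:.2f} ms, "
          f"worst {latencies[-1] * 1e3:.2f} ms")
    # A tight spread suggests steady handling of small writes;
    # multi-second worst-case figures would be consistent with stalling.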

Transcend RDF9

Repeating the same test with the other card reader still produced inconsistent behaviour, and results that seem to defy expectations – notice how some writes reach 50MB/s. This may indicate that the “peak” performance is higher than seen in the sequential test, but that it cannot be maintained over the long run.

H2testw

Under more normal read/write situations with the RDF8, no data errors occurred and throughput was close to expectations.
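H2testw’s approach is easy to replicate if you ever need to: fill the card with a deterministic pseudo-random stream, then regenerate the same stream and compare on read-back, so no second copy of the data is needed. A cut-down Python sketch follows – the path and test size are placeholders (H2testw itself fills all free space), and randbytes requires Python 3.9+.

    import os, random

    PATH = "/media/card/filltest.bin"     # illustrative path on the mounted card
    TOTAL = 1 * 1024**3                   # bytes to test
    CHUNK = 4 * 1024**2
    SEED = 0xC0FFEE

    def chunks():
        rng = random.Random(SEED)         # seeded: the verify pass can
        left = TOTAL                      # regenerate the exact same stream
        while left > 0:
            n = min(CHUNK, left)
            yield rng.randbytes(n)
            left -= n

    with open(PATH, "wb") as f:           # write pass
        for c in chunks():
            f.write(c)
        f.flush()
        os.fsync(f.fileno())

    with open(PATH, "rb") as f:           # verify pass
        for i, c in enumerate(chunks()):
            if f.read(len(c)) != c:
                raise SystemExit(f"mismatch in chunk {i} -- data corruption!")
    print("verify OK")

Ideally the card should be unmounted and remounted between the two passes, so the verify pass reads from the card rather than the OS cache.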

Discussion – microSD Endurance, IOPS, TLC and Beyond

This card is the first to really frustrate my attempts at benchmarking it. Its I/O behaviour seems to vary with the previous access patterns and the amount of “dwell time” between tests, and the speed also varies subtly from reader to reader. The reason for this isn’t entirely clear but, as stated earlier, my hypothesis is that the card’s internal flash mapping table management and CPU form a bottleneck when many small accesses occur in quick succession. This may be because the card is attempting to consolidate writes (maybe it has some pSLC cache?) or perform garbage collection/wear levelling. An alternative hypothesis is write-throttling, an attempt to prolong the card’s lifetime in “abusive” applications which involve constant writes.
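If the pSLC-cache hypothesis were right, one way to catch it in the act would be to write sequentially for a few minutes and watch for a throughput “cliff” once any fast cache fills. A rough sketch, again with an illustrative path and Linux-flavoured O_SYNC:

    import os, time

    PATH = "/media/card/cachetest.bin"    # illustrative path on the mounted card
    BLOCK = 8 * 1024**2
    SECONDS = 120

    buf = os.urandom(BLOCK)
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_SYNC)
    start = last_report = time.perf_counter()
    written = 0
    try:
        while time.perf_counter() - start < SECONDS:
            os.write(fd, buf)
            written += BLOCK
            now = time.perf_counter()
            if now - last_report >= 5:
                print(f"{now - start:6.1f}s  {written / (now - start) / 1e6:6.1f} MB/s")
                last_report = now
    finally:
        os.close(fd)
    # A sharp, sustained drop partway through would point to a fast
    # cache sitting in front of slower media.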

For normal uses which involve bulk data transfers in large blocks, the card seems to perform just fine – in fact, for a “value segment” card, its performance is notably good. However, if small accesses are necessary, the card’s inconsistency makes it a little hard to recommend. Running operating systems on single-board computers is one place where I can envisage such I/O limitations causing problems.

Unfortunately, where high-density consumer flash storage is involved, it is very likely that all products are now triple-level cell (TLC) flash, with a more limited endurance of somewhere between 300-500 cycles (realistically speaking). Indeed, if you carefully read the warranty guidelines, SanDisk excludes warranty for:

(i) normal wear and tear, (ii) video monitoring, security, and surveillance devices, (iii) internet protocol/network cameras, (iv) in-car recording devices/dashboard cameras/black box cameras, (v) display devices that loop video, (vi) continuous recording set top box devices, (vii) continuous data logging devices like servers, or (viii) other excessive uses that exceed normal use in accordance with published instructions.

Other manufacturers are following suit as well. As a result, using “regular” microSD cards in such equipment may result in rapid failures, which can be disastrous – imagine a security camera or a dashcam that has destroyed its card, with the owner unaware until they try to recover the footage post-accident. The card that failed on me was not subject to any abusive use, yet it still began to lose data.

In response, SanDisk has introduced a high endurance product category as an answer to these needs. However, they’re not so clear as to how much endurance the cards actually have, merely claiming up to 10,000 hours of Full HD recording for the 64GB card and 5,000 hours for the 32GB card. Of course, the amount written depends on the bitrate – their specification is based on 26Mbit/s, so by my maths that’s:

5000h * 3600s * 26Mbit/s / 8 = 58,500,000MB written
32GB card = 32,000,000,000 bytes
          = 32,000MB
Cycles = 58,500,000/32,000
       = 1828.125
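Generalising that arithmetic into a small helper shows the 64GB card’s 10,000-hour claim works out to exactly the same implied cycle count:

    def implied_cycles(hours, bitrate_mbit_s, capacity_bytes):
        # Total data written (in decimal MB) over the claimed recording
        # hours, divided by the capacity, gives the implied write cycles.
        written_mb = hours * 3600 * bitrate_mbit_s / 8
        return written_mb / (capacity_bytes / 1e6)

    print(implied_cycles(5000, 26, 32e9))     # 1828.125 -- the 32GB card
    print(implied_cycles(10000, 26, 64e9))    # 1828.125 -- the 64GB card too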

The implied endurance is a lot higher than your “average” TLC’s 300-500 cycles, but still falls short of MLC’s (approximately) 3,000-5,000 cycles. Maybe it’s MLC and they’re being a bit conservative, but it’s still interesting to know. Unfortunately, the price you pay is that the cards only claim up to 20MB/s read and write speeds.

The issue of uneven I/O performance also seems to be something the industry is trying to address, with the introduction of Application Performance Class (“A-class”) ratings, which denote the random IOPS capabilities of cards. However, these haven’t really been seen on consumer-level products at this stage.
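As I understand the SD 5.1 specification, the A1 class calls for at least 1,500 random read and 500 random write IOPS at 4kB, plus 10MB/s of sustained sequential writes. A crude way to estimate the read side yourself is to time random 4kB reads over a large pre-filled file – note this naive sketch (illustrative path and op count) reads through the OS page cache, so honest measurements need O_DIRECT or raw device access.

    import os, random, time

    PATH = "/media/card/filltest.bin"     # assume this file already spans a few GB
    BLOCK = 4096
    OPS = 2000

    size = os.path.getsize(PATH)
    rng = random.Random(1)
    fd = os.open(PATH, os.O_RDONLY)
    t0 = time.perf_counter()
    for _ in range(OPS):
        offset = rng.randrange(0, size - BLOCK) // BLOCK * BLOCK  # 4k-aligned
        os.pread(fd, BLOCK, offset)                               # positioned read
    os.close(fd)
    elapsed = time.perf_counter() - t0
    print(f"{OPS / elapsed:.0f} read IOPS (A1 requires 1,500 at 4kB)")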

But before things get better, it seems we might be on track to make them worse. Increasing demand for flash, a supply shortage and the constant push for lower prices have moved the market to contemplate and manufacture quadruple-level cell (QLC) flash, which pushes the boundaries even further, with potentially worse effects on write speeds, data retention and cycle endurance. At this stage, it’s pretty hard to be enthusiastic when the shortcomings of some TLC products are already rearing their heads.

Conclusion

I managed to get my card replaced painlessly on the spot at Wireless1, which was great, and the replacement card is of a different design to the original. Sequential speeds were quite good, achieving about 80-90MB/s on read and about 26MB/s on write, which makes it the best 128GB microSDXC card I’ve tested so far.

However, the random small block performance proved problematic to determine, as the card appears to stall when subjected to large numbers of small accesses, requiring time to recover. As a result, this card is not recommended for applications where small accesses may be necessary – e.g. as primary storage in single-board computers. Why this card stalls is not known for certain, but it could be related to its internal organisation or a deliberate attempt to preserve its life through write-throttling. It is the first card to frustrate my benchmarking attempts and, for that reason alone, deserved its own blog post.


2 Responses to Review: SanDisk Ultra UHS-I 128GB microSDXC Card (Up to 80MB/s)

  1. Alex Fetters says:

    While I whole-heartedly agree with your predictions for TLC/QLC write speeds, data retention and cycle endurance, I think you should restrict these to *planar* NAND. The transition to 3D NAND has caused node sizes to roll back a few generations. Neglecting any side effects from the stacking (which are probably not fully understood yet anyway), this should improve all of the drawbacks mentioned, at least on TLC. Frankly, I think the only reason we’re seeing QLC parts is because we have small-node technology being applied to larger nodes. I have not seen any news about planar QLC parts. That said, write speed is the one area that could remain affected, as the ECC and programming algorithms from the smaller node technologies are often slower.

    • lui_gough says:

      I would agree with you to some extent – e.g. I’ve had few qualms about deploying Samsung 3D V-NAND based TLC products, given their increased endurance. However, consumers are likely to be seeing the “lowest possible cost” items in competitive markets such as memory cards and USB storage devices, where manufacturers are somewhat less worried about cycle life and endurance. The failure to adequately provide technical specifications also makes purchasing such commodity flash memory a bit of a lottery – does the PRO product have MLC? 3D TLC? Only the manufacturer knows for sure, and they are often hush about it. They don’t even give us “expected” cycle or endurance figures like they used to – probably because they don’t want to admit that their products are inferior to older MLC products.

      Even where 3D technologies allow feature sizes to recover, continuing demands for storage and cost reductions are likely to make this a temporary situation. The motivation for 3D was not solely endurance; it was also storage density. I suspect the difficulties in manufacturing and patent-related issues may mean that planar will continue to be with us for longer than we would like. If anything, Toshiba’s BiCS 3D made the news a while back, but I’ve not seen any products advertising it as one of their “features”. Given that Toshiba’s NAND business is looking for a buyer, I’m not sure how much 3D TLC actually makes it to market, as Samsung is probably the only other major manufacturer producing 3D NAND, and most of that goes into their own products to my knowledge.

      Regardless, the whole QLC situation will require more attention to signal processing techniques and alterations to the ECC provisions. With higher error rates, a higher overhead of storage space will be required for ECC, which would work against achieving a 1:1 parity on increased storage space. I wonder if there will also be side-effects, such as higher susceptibility to read-disturb errors which could also cause faster premature wear-out by controller-mediated rewrites. Other than that, I also hold fears for what charge leakage would do to retention – JEDEC seems to specify two years as the “end of life” retention requirement, but in the case of enthusiasts storing machines for a while, or a “cold spare backup”, it could be inoperable when needed. Flash was never meant for this, but at least older SLC technologies easily made 10 years, as did the better MLC.

      Then again, I’m no expert myself … just my 2c.

      – Gough
