So far, I’ve been recommending SandForce-based drives for external enclosures, as they typically work better despite the lack of TRIM support thanks to sufficient overprovisioning, help from compression and sophisticated wear-levelling algorithms.
As discovered in the previous part, the Transcend StoreJet 25S3 supports UASP, and thus this set of experiments was performed on a UASP-capable machine (i.e. my refurbished Asus K55A laptop).
Using the Sandisk Extreme II 480Gb as a USB-SSD
In the previous part, it was determined that the Jmicron-based Transcend SSD340 performed horribly once filled, due to a lack of overprovisioning for wear levelling and garbage collection to work effectively. As a result, I wondered just how well the Sandisk Extreme II would perform given that it does NOT use a SandForce controller, but DOES have larger overprovisioning, similar to SandForce drives.
My methodology was therefore to fill the drive repeatedly and completely using HD Tune Pro’s write feature. If the drive can maintain its performance under these circumstances, it should operate well despite the lack of TRIM.
From these results, we can see no dramatic loss in write speeds as we observed with the Jmicron-based drive before. Let’s take a look at the read performance …
Pretty decent for a USB 3.0 connected SSD. With the SSD still in its dirty state, I decided to check what CrystalDiskMark thought of it.
All in all, the drive doesn’t seem to be hurting at all; the performance is very good. H2testw saw no corruption issues, so I suppose the Extreme II is a good candidate for TRIM-less usage, although I cannot comment on how well it resists power interruption.
The Asmedia chipset used in the Transcend StoreJet 25S3 supports passing SMART data, so you can see how hard my SSD has been hammered in the name of research …
Of note is that the E9 parameter (likely to be actual flash writes) isn’t much higher than the F1 parameter (likely host writes), indicating low write amplification and, therefore, slower rates of wear even in TRIM-less situations. The overprovisioning provides enough wriggle room to rotate blocks around and optimize their wear.
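The write-amplification estimate above is just a ratio of the two SMART attributes. A minimal sketch, assuming (as suggested above) that E9 counts NAND flash writes and F1 counts host writes in the same units; the raw values here are hypothetical, not taken from my drive:

```python
# Hypothetical raw SMART values, both assumed to be in GiB written.
e9_nand_writes_gib = 5200   # attribute E9: writes actually hitting the flash
f1_host_writes_gib = 4800   # attribute F1: writes requested by the host

# Write amplification = flash writes / host writes; close to 1.0 is good.
write_amplification = e9_nand_writes_gib / f1_host_writes_gib
print(f"Estimated write amplification: {write_amplification:.2f}")
```

A ratio near 1.0, as seen here, means the controller is doing very little extra shuffling of data behind the scenes, so the flash wears at roughly the rate the host writes.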
Saving the Transcend SSD340 for USB-SSD by Manual Overprovisioning?
In doing the above, a little light bulb went off in my head. As we have already established that the SSD340 restores its performance upon a secure erase, what would happen if we chose never to write to some sectors at the end of the drive? The drive would know that those sectors were never used since the secure erase, and should theoretically be able to use them as free blocks for block rotation.
So how can we establish the limits of the drive? In much the same way as above: using HD Tune Pro’s Write feature with Short Stroke turned on, we can choose to write only the first x gigabytes (decimal) of the drive. By writing over and over, we can determine whether there is degradation over time.
The first point to start at, for a 256Gb drive, is 240Gb (the default SandForce overprovision).
Surprise, surprise: the write performance is maintained across three fills of the first 240Gb. The “user” overprovision is noticeable as the drive has a “hump” in its write pattern which shifts along. This indicates that the unused blocks are being rotated in on the next write, causing the hump to run along the graph. Cool! There’s hope after all!
Let’s be a little more greedy and try to get 248Gb out of it.
There is a slight dip at the end, which is very small, but no permanent loss of write performance is evident, unlike with a full drive fill. So we’re good at 248Gb. Let’s try upping it a little more to 250Gb (i.e. the same as a Samsung 840 Evo).
This is where everything falls apart. On subsequent runs, the drive starts losing write speed in parts of the surface, before the whole drive’s write speed is cut in half. In order, the average write rate goes:
It doesn’t quite decrease consistently, but it certainly does go down. As a result, this drive can have about 248Gb of its surface filled before the performance goes downhill.
So how can we enforce this? By secure erasing the drive and then partitioning it, of course. It’s important to note that we have to do a conversion – the magic number is 248Gb in decimal gigabytes, but most partitioners want the partition size in binary megabytes.
We can do the conversion by multiplying 248 (decimal Gb) by 1,000,000,000 (bytes in a decimal Gb) and then dividing by 1,048,576 (bytes in a binary Mb). This gives us 236,511.23. Let’s round down to 236,511Mb. Remember, there’s no harm in over-overprovisioning!
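The conversion above is easy to get wrong by mixing up decimal and binary units, so here it is spelled out as a quick calculation:

```python
# Convert a target capacity in decimal gigabytes into the binary
# megabytes (MiB) that most partitioning tools expect.
target_gb = 248                          # decimal gigabytes we want usable
size_bytes = target_gb * 1_000_000_000   # bytes in a decimal gigabyte
size_mib = size_bytes / 1_048_576        # bytes in a binary megabyte

print(f"{size_mib:.2f} MiB")             # 236511.23 MiB
partition_mib = int(size_mib)            # round DOWN: 236511 MiB
```

Rounding down matters: truncating rather than rounding to nearest guarantees the partition never creeps into the space we are deliberately reserving.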
In order to check whether the drive is able to maintain its performance, I decided to fill the drive five times with H2testw and make a note of the write speeds. Overall, we witnessed no significant degradation in speed, implying that the strategy worked.
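For readers unfamiliar with H2testw, its fill-and-verify pass boils down to writing a known pattern across the free space and reading it back to catch corruption. This is my own simplified sketch of that idea (not H2testw’s actual algorithm), demonstrated against a small temporary file rather than a whole drive:

```python
import os
import tempfile

CHUNK = 1024 * 1024  # write/verify in 1 MiB chunks

def fill_and_verify(path, size_bytes):
    """Write a fixed pseudorandom pattern, then read it back and verify."""
    pattern = os.urandom(CHUNK)  # one chunk of random data, reused throughout
    written = 0
    with open(path, "wb") as f:
        while written < size_bytes:
            n = min(CHUNK, size_bytes - written)
            f.write(pattern[:n])
            written += n
    with open(path, "rb") as f:
        while True:
            block = f.read(CHUNK)
            if not block:
                return True               # reached EOF with no mismatches
            if block != pattern[:len(block)]:
                return False              # corruption detected

with tempfile.TemporaryDirectory() as d:
    ok = fill_and_verify(os.path.join(d, "fill.bin"), 4 * CHUNK)
    print("no corruption" if ok else "corruption detected")
```

To exercise a real drive the target path would point at a file on the mounted volume and the size would approach the free space, but the principle is identical.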
After this, with the drive still completely dirty, I decided to give it a few benchmarks on my non-UASP capable system just to check that it’s all good.
Of course, the results aren’t as fast as the UASP machine reports, but it’s miles faster than the dog-slow performance before.
Drives which feature significant overprovisioning (e.g. 120Gb vs 128Gb, 240Gb vs 256Gb, 480Gb vs 512Gb) might seem to be the worse buy to an end consumer. They seem to offer less space than their competitors without any tangible benefits in specifications.
The truth is that both drives contain the same amount of flash memory, and the additional flash memory improves the I/O consistency of the drive, especially in TRIM-disabled environments (e.g. RAID or USB). It is an important part of reducing write amplification, keeping write performance high and allowing wear levelling to work most effectively.
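To put numbers on this, here is the overprovisioning each user capacity implies, assuming (as is typical for these drive classes) 256Gb of raw flash in every case:

```python
# Overprovisioning as a percentage of user-visible capacity.
# Assumes 256 GB of raw NAND regardless of advertised size.
raw_gb = 256
for user_gb in (240, 250, 256):
    op_pct = (raw_gb - user_gb) / user_gb * 100
    print(f"{user_gb} GB visible: {op_pct:.1f}% overprovisioning")
```

The 240Gb drive carries roughly 6.7% overprovisioning versus 2.4% for a 250Gb drive and effectively none (beyond the decimal-vs-binary difference) for a full 256Gb drive, which is exactly the reserve that keeps block rotation working without TRIM.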
We have demonstrated that, through manual overprovisioning, it is possible to turn a drive which works poorly without TRIM into one which can perform acceptably in non-TRIM circumstances, provided the controller has a “sane” block rotation and wear-levelling algorithm. It’s likely, however, that should a significant number of blocks fail and be reallocated, the drive’s performance will fall again due to a lack of spare blocks for rotation.