An SSD in a USB 3.0 Enclosure: Part 3 – Sandisk Extreme II & Transcend SSD340 (again)

Those interested in the topic might also want to see Part 1 and Part 2 to better understand the caveats and limitations, as they put this article in better context.

So far, I’ve been recommending SandForce-based drives for external enclosures, as they typically cope better with the lack of TRIM support thanks to sufficient overprovisioning, help from compression and sophisticated wear-levelling algorithms.

As discovered in the previous part, the Transcend StoreJet 25S3 supports UASP, and thus this set of experiments was performed on a UASP-capable machine (i.e. my refurbished Asus K55A laptop).

Using the Sandisk Extreme II 480GB as a USB-SSD

In the previous part, it was determined that the Jmicron-based Transcend SSD340 performed horribly once filled, due to a lack of overprovisioning for wear levelling and garbage collection to work effectively. As a result, I wondered just how well the Sandisk Extreme II would perform given that it does NOT use a SandForce controller, but it DOES have larger overprovisioning, similar to SandForce drives.

As a result, my methodology was to fill the drive repeatedly and completely using HD Tune Pro’s write feature. If the drive can maintain its performance under these circumstances, it should operate well despite a lack of TRIM.

[Screenshots: HD Tune Pro write benchmarks, 28 August 2014, 00:44 / 01:06 / 01:52]

From these results, we can see no dramatic loss in write speeds as was observed with the Jmicron-based drive before. Let’s take a look at the read performance …


Pretty decent for a USB 3.0 connected SSD. With the SSD still in its dirty state, I decided to check what CrystalDiskMark thought of it.


All in all, the drive doesn’t seem to be hurting at all; the performance is very good. H2testw saw no corruption issues, so I suppose the Extreme II is a good candidate for TRIM-less usage, although I cannot comment on how well it resists power interruption.


The Asmedia chipset used in the Transcend StoreJet 25S3 supports passing SMART data, so you can see how hard my SSD has been hammered in the name of research …


Of note is that the E9 parameter (likely to be actual flash writes) isn’t much higher than the F1 parameter (host writes), indicating low write amplification and, therefore, slower rates of wear even in TRIM-less situations. The overprovisioning provides enough wriggle room to rotate blocks around and optimize their wear.
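The comparison above can be expressed as a simple calculation. This is a minimal sketch, assuming (as suggested here) that E9 reports actual flash (NAND) writes and F1 reports host writes, both in the same units; the raw values used are hypothetical, not taken from the screenshots:

```python
# Rough write-amplification estimate from two SMART attributes.
# Assumption: E9 = NAND (flash) writes, F1 = host writes, same units.
def write_amplification(nand_writes_e9, host_writes_f1):
    """Write amplification factor: flash writes divided by host writes."""
    return nand_writes_e9 / host_writes_f1

# Hypothetical raw values read from a SMART reporting tool:
wa = write_amplification(nand_writes_e9=10_500, host_writes_f1=10_000)
print(f"Write amplification: {wa:.2f}x")  # values near 1.0x indicate low wear
```

A factor much above 1.0 would mean the controller is rewriting flash far more than the host asked for, accelerating wear.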

Saving the Transcend SSD340 for USB-SSD by Manual Overprovisioning?

In doing the above, a little light bulb went off in my head. As we have already established that the SSD340 restores its performance upon a secure erase, what would happen if we chose never to write to some sectors at the end of the drive? The drive will know those sectors have not been used since the secure erase, and should theoretically be able to use them as free blocks for block rotation.

So how can we establish the limits of the drive? Similarly to the above: by using HD Tune Pro’s Write feature with Short Stroke turned on, we can select to use only the first x gigabytes (decimal) of the drive. By writing over and over, we can determine whether there is degradation over time.

The first point to start at, for a 256GB drive, is 240GB (the default SandForce overprovision).

[Screenshots: HD Tune Pro write benchmarks, 28 August 2014, 03:27 / 03:39 / 03:52]

Surprise, surprise: the write performance is maintained across three fills of the first 240GB. The “user” overprovision is noticeable as a “hump” in the write pattern which shifts along the graph on each pass, indicating that the unused blocks are being rotated in on the next write. Cool! There’s hope after all!

Let’s be a little more greedy and try to get 248GB out of it.

[Screenshots: HD Tune Pro write benchmarks, 28 August 2014, 10:48 / 11:13 / 11:30]

There is a slight dip at the end, which is very small, but no permanent loss of write performance is evidenced, unlike for a full drive fill. So we’re good at 248GB. Let’s try upping it a little more to 250GB (i.e. the same as a Samsung 840 Evo).

[Screenshots: HD Tune Pro write benchmarks, 28 August 2014, 11:44 / 12:10 / 12:36 / 12:56 / 13:27 / 13:57]

This is where everything falls apart. The drive starts losing write speed in parts of the drive on subsequent runs, before the write speed across the whole drive is cut in half. In order, the average write rates were:

  1. 303.2MB/s
  2. 232.6MB/s
  3. 188.3MB/s
  4. 220.3MB/s
  5. 143.1MB/s

It doesn’t quite decrease consistently, but it certainly does go down. As a result, this drive can have about 248GB of its surface filled before the performance goes downhill.

So how can we enforce this? By secure erasing the drive and then partitioning it, of course. It’s important to note that we have to do a conversion – the magic number is 248GB in decimal gigabytes, but most partitioners want the partition size in binary megabytes.

We can do the conversion by multiplying 248 (decimal GB) by 1,000,000,000 (bytes in a decimal GB) and then dividing by 1,048,576 (bytes in a binary MB). This gives us 236,511.23; let’s round down to 236,511MB. Remember, there’s no harm in over-overprovisioning!
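The arithmetic above can be sketched as a small helper, rounding down with integer division since a slightly smaller partition just means slightly more overprovisioning:

```python
# Convert a decimal-gigabyte target size to whole binary megabytes (MiB),
# the unit most partitioning tools expect. Rounding down is safe: it only
# increases the spare area left for the controller.
def decimal_gb_to_binary_mb(gb):
    """Decimal GB -> whole binary MB, rounded down."""
    return (gb * 1_000_000_000) // (1024 * 1024)

print(decimal_gb_to_binary_mb(248))  # → 236511
```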

To check whether the drive was able to maintain its performance, I decided to fill it five times with H2testw and make a note of the write speeds. Overall, we witnessed no significant degradation in speed, implying that the strategy worked.

[Screenshots: five consecutive H2testw fill runs]

After this, with the drive still completely dirty, I decided to give it a few benchmarks on my non-UASP capable system just to check that it’s all good.

[Screenshots: CrystalDiskMark, AS SSD, HD Tune and ATTO benchmarks of the StoreJet on the non-UASP machine, 28 August 2014]

Of course, the results aren’t as fast as the UASP machine reports, but it’s miles faster than the dog-slow performance before.


Drives which feature significant overprovisioning (e.g. 120GB vs 128GB, 240GB vs 256GB, 480GB vs 512GB) might seem the worse buy to an end consumer: they appear to offer less space than their competitors without any tangible benefit in specifications.

The truth is that both drives contain the same amount of flash memory; the additional reserved flash improves the I/O consistency of the drive, especially in TRIM-less environments (e.g. RAID or USB). It is an important part of reducing write amplification, keeping write performance high and allowing wear levelling to work most effectively.
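To put a number on the “missing” capacity, a quick sketch of the spare area as a percentage of user-visible space (ignoring, for simplicity, the decimal-vs-binary distinction that makes real-world figures somewhat higher):

```python
# Spare flash as a percentage of user-visible capacity, assuming both
# figures are quoted in the same units. The ~7% here is a simplification;
# accounting for GiB-sized NAND vs GB-sized capacity gives a larger figure.
def overprovision_percent(user_capacity, total_flash):
    """Reserved flash relative to the user-visible capacity, in percent."""
    return (total_flash - user_capacity) / user_capacity * 100

for user, flash in [(120, 128), (240, 256), (480, 512)]:
    print(f"{user} of {flash}: {overprovision_percent(user, flash):.1f}% spare")
```

Each of the examples from the paragraph above works out to the same ratio, which is why these “odd” capacities keep recurring across overprovisioned product lines.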

We have demonstrated that, through manual overprovisioning, it is possible to turn a drive which works poorly without TRIM into one which performs acceptably in non-TRIM circumstances, provided the controller has a “sane” block-rotation and wear-levelling algorithm. It’s likely, however, that should a significant number of blocks fail and be reallocated, the drive’s performance will fall again due to a lack of spare blocks for rotation.

About lui_gough

I'm a bit of a nut for electronics, computing, photography, radio, satellite and other technical hobbies. Click for more about me!
This entry was posted in Computing, Flash Memory. Bookmark the permalink.

3 Responses to An SSD in a USB 3.0 Enclosure: Part 3 – Sandisk Extreme II & Transcend SSD340 (again)

  1. jwhendy says:

    I stumbled on your posts about an SSD inside an enclosure, and find them fascinating! Really awesome stuff, and thanks for your efforts. I have a new work computer that’s encrypted with McAfee EEPC 7.1, which (new “feature” since 6.x) decides to automatically encrypt any internal drive, rendering my former dual boot setup impossible.

    So, I decided to just get my own SSD, but can’t use it internally, as even the secondary SATA slot is subject to encryption (primary drive is M.2 SATA). So, I was looking for recommendations on getting the most speed from an SSD in an enclosure, and find your wonderful series.

    I’m trying to wrap my head around the final takeaways from your article:
    – Get a UASP controller if possible for the enclosure
    – If you can use TRIM, great, if not, secure erase + limit used portion of disk
    – Any concern over compatibility of the drive chipset vs. enclosure?

    I’ve got a Samsung 850 EVO 120gb, so I’m thinking it’s already got the provisioning you’re talking about (120 vs. 128G). I’m eyeing either a JM567 or ASM1153e controlled enclosure, both reported to be UASP compatible.

    I’m trying to understand what the secure erasure does for the disk, and if I need to do something like that to get good performance. I will admit that I’ve had some issues on Linux with restoring my backups to the drive using rsync. Over long writes, sometimes the drive appears to disconnect/reconnect under a new xhci device, and sort of freezes the system. I’m wondering if I’m suffering from the same bog down under large writes you discuss.

    In any case, thanks for your efforts and detailed report!

  2. Pingback: Review, Teardown: Lexar Professional Workflow DD512 (512Gb) USB 3.0 Storage Drive | Gough's Tech Zone

  3. arre says:

    Thanks a lot for taking the time to write this! Really helps!
