One thing I can’t do without is oodles of storage. Some of my experiments have very peculiar storage demands – from high volume storage, to high speed streaming storage, I have to carefully consider my needs and how to best meet them in a cost-effective manner.
In this case, the Western Digital 6Tb Green drives will prove to be the stars of my latest upgrades. Instead of purchasing the bare drives, I decided to shuck them from WD MyBook units – so let’s review the drives in an internally connected mode, testing them with my Asrock Z97E-ITX/ac + Intel i7-4770k @ 3.90Ghz platform running Windows 7 utilizing the latest iRST drivers.
Say hello to Western Digital’s largest consumer drive, the 6Tb Western Digital Green, model WD60EZRX. This drive looks much like the rest of the recent WD Green series, although it feels a little heftier than normal, as it is a 6-platter, 12-head configuration.
Despite this, it manages to be packed into a regular 3.5″ form factor, without the screw-hole changes that Seagate made on their Archive series drives. Instead, there appear to be additional holes drilled beneath the regular ones, which may be used for OEM or manufacturing-specific applications.
The drive features the regular SATA backplane style connection with a jumper block which does not need to be configured in regular usage.
The front of the drive contains a serial number label for easy drive identification from the end without needing to look at the top label.
The rear of the drive features a PCB with all the components mounted towards the drive, as is regular practice nowadays but is also a WD convention. The edge connector seems to be different to older models, featuring a more reinforced connection to the PCB, and the screws appear to have a lower profile, which suggests possibly a thinner PCB to maximise the platter chamber space. The breather hole is placed on the underside of the drive.
In all, it looks pretty much identical to any WD Green drive you would purchase as a component for a computer.
Interestingly, when I threw the drives into my Asrock Z97E-ITX/ac + Intel i7-4770k machine running Windows 7, they came up as 1492.87Gb. Uh, say what?
The cause of this was very simple – outdated drivers. Specifically, my iRST RAID mode controller was running on the default drivers included in Windows 7, which date back to 2008. These operate with a 32-bit sector limitation: the sector count wraps around modulo 2^32, so any drive over 2.2Tb is incorrectly detected. Updating to the latest iRST drivers solved the issue.
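The reported 1492.87Gb figure is just what 32-bit LBA truncation predicts. A minimal sketch, assuming the nominal 6Tb capacity of 11,721,045,168 × 512-byte sectors (the exact figure varies slightly with the drive’s true sector count):

```python
# Sketch: why a 6Tb drive shows up as ~1.5Tb under a 32-bit LBA limit.
# Assumed sector count is the nominal 6Tb figure, not a measured value.
SECTOR = 512
total_sectors = 11_721_045_168        # ~6.0Tb in 512-byte LBAs

# An old driver keeps the sector count in a 32-bit field, so the
# count silently wraps around modulo 2**32.
truncated_sectors = total_sectors % 2**32

capacity_gib = truncated_sectors * SECTOR / 2**30
print(f"{capacity_gib:.2f} GiB")      # ~1493 GiB, right where the 1492.87Gb sits
```

Two full wraps of 2^32 sectors (about 2.2Tb each) are silently discarded, and only the ~1.5Tb remainder is reported.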
Of course, if you still have issues or you see the incorrect capacity in the BIOS, it is also worth updating the BIOS to overcome this limitation.
We can also see that after removing it from the enclosure and connecting it internally, the drive identifies with the expected 512e mode of operation, where 512 byte sectors are exposed to the host despite the drive having 4096 byte physical sectors.
The drive was connected internally, and as the SMART data is pretty much the same, a screenshot of it was not taken.
HD Tune Pro
There is a slight drive-to-drive variance, so this test was performed with a different drive to the one tested before. Do disregard the CPU usage figure, which was transiently high at the end of the benchmark. This drive is the only one in the group with a slight dip near 2400Gb, which may suggest a minor media/servo flaw, but as it does not trigger any error, it is considered acceptable. The drive itself seems slower on read than the other tested drive, which may simply reflect a higher number of media flaws on the outer edge resulting in zones being marked bad.
The results are, on the whole, very similar to the USB 3.0 connected state, as expected, although the access times now show more expected values. This may indicate that the USB 3.0 bridge is doing some trickery to improve its apparent performance in light of the latency of the interface, by issuing command completion replies ahead of time.
Internally, the drive turned in some rather inconsistent IOPS numbers, which mostly traded blows with the external drive results.
Due to the 512e appearance, the 512 byte transfer IOPS could be determined. The random results and 1Mb transfers seem to benefit most from the elimination of the USB 3.0 latency, but the results are about what you would expect of a green drive.
The improvement due to queued requests is quite clear, and HD Tune seems to be more optimistic, showing full drive performance for transactions of 8kB and above (contrasted with ATTO results which claim 32kB and above).
Internally, CDM shows increased 4k read performance at a queue depth of 32, but otherwise, the performance is very similar to that of the drive operating externally. This implies that the bridge is not the limiting factor, but instead, the ability to perform command queueing for reads is beneficial.
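The queue-depth-32 gain makes sense for a mechanical drive: with many reads outstanding, the drive can reorder them to minimise head travel. A toy model (illustrative numbers, not measurements from this drive) comparing first-come-first-served service against an NCQ-style elevator sweep:

```python
import random

# Toy model: total head travel (arbitrary track units) when serving
# 32 random requests in arrival order versus reordered into one sweep,
# as an NCQ-capable drive can do at high queue depth.
random.seed(0)
requests = [random.randrange(1_000_000) for _ in range(32)]

def travel(order, start=0):
    pos, dist = start, 0
    for track in order:
        dist += abs(track - pos)
        pos = track
    return dist

fifo = travel(requests)
elevator = travel(sorted(requests))   # one sweep across the platter

print(fifo, elevator)   # the sorted sweep travels far less overall
```

At queue depth 1 the drive has no choice but the FIFO ordering, which is why the low-queue-depth 4k results barely move between USB and internal connection.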
As the drive now exhibits a 512e sector layout, testing for <4kB accesses is possible. As expected, such accesses are heavily penalized by the Advanced Format sector layout of the drive, as they require the drive’s controller to perform additional work in its buffer to extract the requested emulated 512-byte sectors, or to compose a modified 4kB physical sector incorporating the 512-byte changes (a read-modify-write). Consistently, accesses from 32kB and up exhibit essentially the full performance of the drive.
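The sub-4kB penalty can be sketched as a simple counting exercise. This is a toy model of 512e emulation, not the controller’s actual firmware logic: any write that does not cover whole 4KiB physical sectors forces a read of the old contents first.

```python
PHYS = 4096   # physical sector size (Advanced Format)
LOG  = 512    # emulated logical sector size (512e)

def physical_ops(offset, length):
    """Count physical-sector (reads, writes) for one logical write.

    Toy model: an unaligned or partial write must first read the
    affected physical sectors (read-modify-write); an aligned write
    covering whole 4KiB sectors can be committed directly.
    """
    first = offset // PHYS
    last = (offset + length - 1) // PHYS
    sectors = last - first + 1
    aligned = offset % PHYS == 0 and length % PHYS == 0
    reads = 0 if aligned else sectors   # RMW needs the old contents
    return reads, sectors

print(physical_ops(512, 512))   # one 512B logical write -> (1, 1): RMW
print(physical_ops(0, 4096))    # aligned 4KiB write -> (0, 1): no RMW
```

The extra read (plus a rotational wait before the sector comes back under the head) is why throughput only reaches full speed once transfers comfortably exceed the physical sector size.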
Trying to outsmart H2testW by setting the expected file size backfired on me: the change of the drive to internal operation results in 512 byte sector emulated operation, and hence slightly higher space utilization by the filesystem. The speed is roughly the same, considering that this is a different drive from the earlier sample.
As expected, the drives’ performance did not really change much with being internally connected. The sequential performance was always limited by the drive’s design, with the USB 3.0 bridge solution providing more than adequate bandwidth despite its SATA II connectivity. The only improvements came from the reduction in latency, which improved IOPS somewhat, and from command queueing, which improved small-block reads. The drive did, however, perform just fine installed internally.
The drive isn’t the fastest, but it is one of the easiest 6Tb PMR drives to get at the moment and has an acceptable price.
Epilogue: Yes you can RAID me, you naughty boy!
Would I leave them as four internal drives sitting around? Of course not. Instead, I’ve decided to run them as a single four-drive RAID0 array for no-redundancy, high-speed, large streaming storage for my experiments, contrary to WD’s recommendations. Apparently doing so also voids any warranty extended to you, since you are using the drive in an application scenario for which it was not warranted – but hey, I already voided the warranty in the previous step, so what does it matter?
The fact that a failure of any one drive will cause the loss of the whole array is not important when you’re expecting to be working with a single 20Tb+ file! The target was a sequential write rate above 280MB/s throughout the whole array to ensure no loss of data, and it handily meets my needs – both reading and writing. This was achieved using Intel Rapid Storage Technology’s (fake) RAID, which also seems to skew access times and burst rates, probably due to some caching and driver trickery.
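The near-4x sequential gain comes from striping: consecutive chunks of the logical volume land on consecutive member drives, so a long stream keeps all four spindles busy at once. A minimal sketch of the RAID0 address mapping, assuming a 128kB stripe size (the actual stripe size is whatever was chosen when the iRST array was created):

```python
STRIPE = 128 * 1024   # assumed stripe size, configurable at array creation
DRIVES = 4            # four WD60EZRX members in RAID0

def locate(offset):
    """Map a logical byte offset to (member drive, offset on that drive)."""
    stripe_no = offset // STRIPE
    member = stripe_no % DRIVES
    member_offset = (stripe_no // DRIVES) * STRIPE + offset % STRIPE
    return member, member_offset

# Consecutive stripes rotate across the members...
for i in range(5):
    print(i, locate(i * STRIPE))
# ...so sequential throughput approaches 4x a single drive, while the
# loss of any one member destroys every stripe of the array.
```

The same mapping also shows why one flaky drive is fatal: every fourth stripe of every file lives on it, with no redundant copy anywhere.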
Of course, if a drive goes flaky, things will go to [expletive] pretty quickly, but I don’t intend to run the array with a flaky drive, and if it fails, it fails. The data I’m dealing with isn’t that highly valuable to the point of needing to worry about losing it, but if I didn’t have storage that met this need, I wouldn’t be able to perform the experiment!
Testing the integrity with H2testW was successful across the whole array, which was a very exciting result, as it validates the array as performing correctly (for now).
Of course, benchmarks aren’t highly meaningful as the iRST driver seems to have some performance optimizing caching going on. That won’t hide the fact that hard drives are, for all intents and purposes, not good at performing randomized small I/O.
Western Digital strictly does not recommend Green drives for RAID, and their firmware contains some RAID-unfriendly features such as long bad block recovery times and aggressive head parking strategies. They recommend you fork out more for the Red/Red Pro/Black series drives for RAID applications, as they have TLER, RV sensors, etc. The price differential, however, can be significant when added up across an array, so I decided to take the chance. Software RAID / fakeraid is typically more forgiving than hardware RAID, but in the end, please consider your own needs and requirements.