Welcome to the second part of the review on the Orico USB 3.0 5-bay RAID HDD Enclosure where we continue with actual performance tests and usage experiences.
The unit presented itself as a single drive, as I was using it in RAID0 mode. It appears to report 512-byte sectors, although the internal RAID format was not determined. Should the controller itself fail, retrieving the data may pose a challenge, as with most hardware RAID devices.
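To see why a controller failure complicates recovery, consider how a simple RAID0 distributes data. The sketch below is purely illustrative (it is not the Orico's actual on-disk format, which was not determined): logical blocks are striped round-robin across the member disks, so reassembling the data without the original controller requires correctly guessing both the stripe size and the disk order.

```python
# Illustrative RAID0 address mapping (hypothetical layout, not the Orico's).
# Stripes rotate across member disks in round-robin order.

SECTOR = 512            # bytes per sector, as reported by the enclosure

def raid0_locate(lba, stripe_sectors, n_disks):
    """Map a logical LBA to (disk index, LBA on that disk) for a simple RAID0."""
    stripe = lba // stripe_sectors          # which stripe the LBA falls in
    disk = stripe % n_disks                 # stripes rotate across disks
    disk_stripe = stripe // n_disks         # stripe index within that disk
    return disk, disk_stripe * stripe_sectors + lba % stripe_sectors

# With a hypothetical 64kB stripe (128 sectors) across 5 disks:
print(raid0_locate(0, 128, 5))      # (0, 0)   -> first stripe lands on disk 0
print(raid0_locate(128, 128, 5))    # (1, 0)   -> next stripe moves to disk 1
print(raid0_locate(640, 128, 5))    # (0, 128) -> sixth stripe wraps back to disk 0
```

With five disks and an unknown stripe size, a recovery tool has to search over stripe-size and disk-order combinations until filesystem structures line up, which is why hardware RAID0 recovery without the original controller is tedious at best.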
As supplied, the unit was first tested with the NEC Renesas USB 3.0 controller on the Gigabyte 890FXA-UD7 with an AMD Phenom II X6 1090T @ 3.90GHz running Windows 7. This is my regular workstation, and the one with which most devices are tested.
As supplied, HD Tune Pro shows an average sequential read speed of 182MB/s, which is stable across the unit. This is better than a GbE-connected NAS (~110MB/s) but a lot less than what would be possible if UASP-capable chipsets and SATA III connections were used internally.
Writes achieved a slightly lower but similar speed of 172.7MB/s. As I was mainly interested in sequential performance, I did not explore the performance of the array under other benchmarks.
A verification pass of the drive took over 33 hours and showed perfect data integrity, but at a lower speed of 159MB/s – just 1MB/s shy of the requirement, making it a bit of a close shave.
I tried to test the unit with my second platform, my Z97-based system with an i7-4770K @ 3.90GHz running Windows 7. This utilizes the Intel C220 platform, for which the latest drivers (188.8.131.52) were installed.
It was here that a big problem arose. Accesses to the RAID array from this platform could not be performed reliably; instead, the drive would throw errors of some sort after a random amount of time, often within just a few minutes or less.
It didn’t matter which benchmark or tool was used. The event log confirms that trouble was indeed experienced.
It appears this is the curse of JMicron. JMicron chipsets appear to have problems with certain USB 3.0 host controllers, notably including the Intel ones. Other people have experienced such issues with a variety of other hardware based on the same chipset, met with varying levels of acknowledgement (including outright denial of any problem) and varying levels of “do-it-yourself” solutions. Sadly, it seems this issue is real and endemic to this particular chipset.
The third platform was my daily-use laptop, a Lenovo E431 with an i7-3630QM CPU running Windows 10. Because this unit is based on the C216 chipset platform, it has a subtle difference, and surprisingly the drive was able to pass (95% of) the H2testW run as supplied (before a stray projectile aborted the test). This may be the reason why there are inconsistent reports of Intel chipset and JMicron compatibility issues.
The unit’s performance was better than a GbE NAS, but still fell well behind the theoretically available performance of the USB 3.0 bus under ideal conditions, and even under practical conditions as revealed by SSD-in-enclosure experiments. The JMicron chipset was especially troublesome with Intel C220 platform USB 3.0 controllers (used with 4th Generation Intel CPUs and newer), refusing to function correctly even with the latest drivers, but worked with the C216 platform and the NEC Renesas chipset in my other computers. Compatibility with Fresco Logic, Etron, ASMedia and other chipsets was not determined, but other reports suggest that JMicron chipsets cause problems with some of these as well.
This issue practically makes the unit unusable for the vast majority of users of modern computers.
A Jmicron Cure?
Disclaimer: Undertaking any of the following steps will void your warranty and may lead to permanent damage to your device resulting in an inoperative device with no possibility of recovery. Do the following at your own risk.
Upon doing some research, I found that some people claimed that by updating the firmware of the unit, they could get their USB hard-drive docks and similar devices performing reliably. A search landed me at the usbdev.ru archives of JMicron firmware and tools which I could try. Many of the firmwares available are just bare files, and so, to flash the firmware, you need to use the JMicron JM2033x FW Update Utility v1.19.13.
Before updating the firmware, I tried to be diligent and back up the existing firmware. While the tool claims to have succeeded, I have since found that the file it created cannot restore the original firmware to the device, and so I have lost the original firmware forever. Initially I was unaware of the difference between the JMS539 and JMS539B, so I assumed the device to be a JMS539 and attempted a flash of JMS539_PM v255.31.3.41.22, ignoring the warning of incompatible firmware, only for the device to reconnect as “Internal Code”.
This in itself is a revelation, as it indicates the unit has realized that the firmware is incorrect and is instead running off its internal mask ROM. This is the basic “known good” firmware burnt into the chip as a fallback for vendors who want to save the money on a serial flash chip, at the cost of firmware-upgrade capability and customized names.
Testing with the Intel C220 platform proved initially promising, with no errors appearing after a few hours of testing and higher performance.
The read speed was a healthier 219MB/s on average, with a dip at the end.
The write speed averaged about the same, with a few dips and lower areas due to CPU usage (I couldn’t dedicate the machine to benchmarking for 60 hours). Unfortunately, things unravelled slightly from here – in attempting to verify an H2testW dataset which was written by and verified with another platform, it detected 64kB of silent corruption.
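The reason a tool like H2testW can catch silent corruption at all is that it writes deterministic pseudorandom data, so verification needs no stored copy of the original. Below is a minimal sketch of that idea (not H2testW itself; block size and derivation scheme are my own assumptions for illustration), demonstrated on an in-memory buffer with one deliberately flipped byte:

```python
import hashlib
import io

BLOCK = 64 * 1024   # 64kB blocks, matching the granularity of the corruption seen

def block_pattern(i, block=BLOCK):
    """Deterministic pseudorandom contents for block i, derived from its index."""
    return hashlib.sha256(i.to_bytes(8, "big")).digest() * (block // 32)

def fill(f, n_blocks, block=BLOCK):
    """Write reproducible blocks so they can be re-checked later without a copy."""
    for i in range(n_blocks):
        f.write(block_pattern(i, block))

def verify(f, n_blocks, block=BLOCK):
    """Return the indices of blocks whose contents differ from what was written."""
    return [i for i in range(n_blocks)
            if f.read(block) != block_pattern(i, block)]

# Demo: fill 4 blocks, then flip one byte to emulate silent corruption.
buf = io.BytesIO()
fill(buf, 4)
buf.seek(100)
original = buf.read(1)
buf.seek(100)
buf.write(bytes([original[0] ^ 0xFF]))   # guaranteed to differ from the original
buf.seek(0)
print(verify(buf, 4))                    # [0] -- block 0 no longer matches
```

Because the expected contents are recomputed from the block index, a single flipped byte anywhere in the dataset is detected, which is exactly how the 64kB discrepancy showed up despite the hardware reporting no errors.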
Giving it the benefit of the doubt, I decided to format the array and try filling it with the C220 platform machine. The result, interestingly, was me being woken at 1:30am by a loudly beeping box that claimed the array had disintegrated.
A further check of the event log confirms that the box had reported a bad block to the host.
Further investigation was undertaken after rebooting the RAID unit, which beeped once to indicate it was healthy – but the array was anything but. It had a valid partition table, but the filesystem had been trashed to RAW and could not be revived with a chkdsk /f. Plugging each of the drives in sequence into a test dock showed that one drive had suffered a transient communication problem and logged ONE UDMA CRC error, which normally indicates a communication issue on the cable/controller. Such errors are normally transient and clear on a retry – it’s a wonder why the JMicron solution didn’t retry, and instead turned what should have been a small loss of the last block of written data into a major trashing of the filesystem. It should have been recoverable.
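What a sane bridge would do with a one-off CRC error is simply retry the transfer and only surface a failure to the host if the error persists. The following is a hypothetical sketch of that retry logic (the simulated drive and all names are my own, purely for illustration – this is not the JMicron firmware's actual behaviour, which is precisely the problem):

```python
class TransientCRCError(Exception):
    """Stand-in for a UDMA CRC error reported by a member drive."""

class FlakyDrive:
    """Simulated drive that fails exactly once with a CRC error, then recovers,
    mimicking the single transient error found in the SMART log."""
    def __init__(self):
        self.failures_left = 1

    def read(self, lba):
        if self.failures_left:
            self.failures_left -= 1
            raise TransientCRCError(f"CRC error at LBA {lba}")
        return b"\x00" * 512            # dummy sector contents

def read_with_retry(read_fn, lba, retries=3):
    """Retry transient errors instead of failing the whole array."""
    for attempt in range(retries + 1):
        try:
            return read_fn(lba)
        except TransientCRCError:
            if attempt == retries:
                raise                   # persistent failure: surface it to the host

drive = FlakyDrive()
data = read_with_retry(drive.read, 123456)
print(len(data))                        # 512 -- the one-off CRC error was absorbed
```

A couple of retries would have absorbed the transient error entirely; instead, the enclosure appears to have treated a single recoverable glitch as a fatal array failure.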
Third time lucky, perhaps? Indeed it was: the array was able to complete a full write and verify cycle on the C220 platform.
Upon first connecting the array to the C216 platform machine, it took a while to be detected, and a few errors were logged in the event log. But after those, no further errors presented, and the written data was successfully verified.
It seemed promising that the mask ROM firmware would be the silver bullet that made the whole thing work. Unfortunately, it was soon discovered that this was not the case – indeed, a regression had occurred. Compatibility with the NEC Renesas platform was broken, with very similar symptoms to the C220 platform on the original firmware.
Having now realized that the unit has a JMS539B chipset, I chose to flash a compatible firmware back onto the unit. The latest firmware I could find was JMS539B_PM v31.00.40.16, which flashed just fine.
With the firmware loaded, the drive was operative with the NEC Renesas chipset with no problems. Unfortunately, this firmware continued to break compatibility with the Intel C220 platform ports.
Further investigation showed that the different firmwares had different side effects. The JMS539B-type firmware, when installed, causes problems with Intel C220 platform controllers and a strange rapid spin-down and spin-up on unplugging, but has no other issues to speak of. By deliberately loading an incorrect firmware to “brick” the flash, the embedded mask ROM firmware takes over; this works both faster and flawlessly with the Intel C220 and C216 platforms, but breaks compatibility with the NEC Renesas chipset, producing the same symptoms seen earlier on the C220 platform. It also breaks hot-plug capability, requiring a power cycle of the enclosure before the drive can be re-detected.
Sadly, it seems there is no way to make it compatible with all three platforms simultaneously, or even with the two platforms I use most (Intel C220 + NEC Renesas), without the hassle of re-flashing the firmware when switching between machines. In light of this, even if you were desperate to make it work, it’s hardly an ideal solution, as the device has also shown other negative traits, including silent data corruption and trashing an entire RAID array over a single ICRC error.
The Orico 5-bay USB 3.0 RAID HDD enclosure tries to fill a market niche in DAS solutions. Unfortunately, the design of the unit is suboptimal, utilizing troublesome JMicron chipsets which present both performance limitations and compatibility issues. The unit also shows questionable construction quality, and has been demonstrated to cause silent data corruption and array trashing after a normally “recoverable” event. The compatibility issues were only partially remedied after some extensive research, and the methods involved are not recommended for end users.
This makes the unit impossible to recommend to anyone. In fact, if you have one and you haven’t already voided its warranty, you might consider taking it back, especially if you value your data and want the unit to work. Seeing as each test run takes around 30 hours to complete, the above represents many weeks of frustration and testing just to reach the point where the unit is even usable – and even then, confidence in the solution is low.