Review: Orico USB 3.0 5-bay RAID HDD Enclosure (9558RU3) – Part 2

Welcome to the second part of the review on the Orico USB 3.0 5-bay RAID HDD Enclosure where we continue with actual performance tests and usage experiences.

Performance Testing

The array presented itself as a single drive, as I was using it in RAID0 mode. It appears to report 512-byte sectors, although the internal RAID format was not determined. Should the controller itself fail, retrieving the data may pose a challenge, as with most hardware RAID devices.
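As a quick illustration, the sector size a bridge reports to the OS can be checked programmatically. This is a minimal sketch assuming a Linux-style sysfs layout; the device name "sda" is hypothetical.

```python
# Sketch: read the logical sector size a block device advertises to the OS,
# assuming a Linux-style sysfs layout. Device name and base path are
# illustrative assumptions, not specific to this enclosure.
from pathlib import Path

def logical_sector_size(device: str, sysfs: str = "/sys/block") -> int:
    """Return the logical block size (bytes) the device reports."""
    return int(Path(sysfs, device, "queue", "logical_block_size").read_text())
```

On Windows, the equivalent information comes from tools like `fsutil fsinfo ntfsinfo` or WMI rather than sysfs.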


Platform 1

As supplied, the unit was first tested with the NEC Renesas USB 3.0 controller on the Gigabyte 890FXA-UD7 with an AMD Phenom II X6 1090T @ 3.90GHz running Windows 7. This is my regular workstation, and the one most devices are tested with.


As supplied, the HD Tune Pro sequential read test shows an average read speed of 182MB/s, stable across the unit. This is better than a GbE-connected NAS (~110MB/s), but a lot less than what would be possible if UASP-capable chipsets and SATA III connections were used internally.


Writes were slightly slower at a similar 172.7MB/s. As I was mainly interested in sequential performance, I did not explore the performance of the array under other benchmarks.
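For readers wanting a rough equivalent of these sequential figures, a crude throughput test can be sketched in a few lines of Python. The 1 MiB block size and file-based access are assumptions; this is no substitute for HD Tune's raw-device benchmarking.

```python
# Sketch: a crude sequential-read throughput test in the spirit of HD Tune's
# sequential benchmark. Reads a file (or raw device path, with privileges)
# in large blocks and reports the observed MB/s. The 1 MiB block size and
# 64 MiB limit are assumptions for illustration.
import time

def sequential_read_mbs(path: str, block_size: int = 1 << 20,
                        limit: int = 64 << 20) -> float:
    """Read up to `limit` bytes sequentially and return the observed MB/s."""
    read = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while read < limit:
            chunk = f.read(block_size)
            if not chunk:
                break  # end of file/device reached
            read += len(chunk)
    elapsed = time.perf_counter() - start
    return read / elapsed / 1e6
```

Note that OS caching will inflate results on a recently written file; a real benchmark would bypass the cache (e.g. `O_DIRECT` on Linux).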

[Images: H2testW write and verify results]

A full verification of the drive took over 33 hours and showed perfect data integrity, albeit at a lower speed of 159MB/s, just 1MB/s shy of the requirement. This makes it a bit of a close shave.
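The fill-and-verify approach H2testW uses can be sketched as follows. This is a minimal illustration with an assumed 64kB block size (matching the granularity of corruption discussed later) and Python's `random` module; it is not H2testW's actual algorithm.

```python
# Sketch: an H2testW-style fill-and-verify pass. Seeded pseudorandom blocks
# are written to a file, then read back and compared; any mismatch reveals
# silent corruption. Block size and PRNG choice are illustrative assumptions.
import random

BLOCK = 1 << 16  # 64 KiB blocks

def _block(seed: int) -> bytes:
    """Deterministically regenerate the block written at index `seed`."""
    return random.Random(seed).randbytes(BLOCK)

def fill(path: str, blocks: int) -> None:
    """Write `blocks` seeded pseudorandom blocks sequentially."""
    with open(path, "wb") as f:
        for i in range(blocks):
            f.write(_block(i))

def verify(path: str, blocks: int) -> list[int]:
    """Return indices of blocks that read back differently than written."""
    bad = []
    with open(path, "rb") as f:
        for i in range(blocks):
            if f.read(BLOCK) != _block(i):
                bad.append(i)
    return bad
```

Because each block is regenerated from its index, nothing needs to be stored on the host, so the whole array can be filled and checked, which is exactly why a run takes tens of hours on multi-terabyte arrays.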

Platform 2

I tried to test the unit with my second platform, a Z97-based system with an i7-4770K @ 3.90GHz running Windows 7. This utilizes the Intel C220 platform, for which the latest drivers were installed.


It was here that a big problem arose. Accesses to the RAID array from this platform could not be performed reliably; instead, the drive would throw errors of some sort after a random amount of time, often just a few minutes or less.

[Images: disk test failures on the Z97 platform]

It didn’t matter which benchmark or tool was used. The event log confirms that trouble was indeed experienced.

[Images: event log error entries]

It appears this is the curse of the JMicron. JMicron chipsets appear to have problems with certain USB 3.0 controllers, notably Intel ones. Other people have experienced similar issues with a variety of other hardware using the same chipset, met with varying levels of acknowledgement (including outright denial of the problems) and varying levels of “do-it-yourself” solutions. Sadly, it seems this issue is real and endemic to this particular chipset.

Platform 3

This was my daily-use laptop, a Lenovo E431 with an i7-3630QM CPU running Windows 10. Because this unit is based on the C216 chipset platform, there is a subtle difference, and surprisingly the drive was able to pass 95% of the H2testW run as supplied (before a stray projectile aborted the test). This may be why there are inconsistent reports of Intel chipset and JMicron compatibility issues.


The unit’s performance was better than a GbE NAS, but still fell well behind the theoretically available performance of the USB 3.0 bus under ideal conditions, and even under practical conditions as revealed by SSD-in-enclosure experiments. The JMicron chipset was especially troublesome with Intel C220 platform USB 3.0 controllers (used with 4th Generation Intel CPUs and newer), refusing to function correctly even with the latest drivers, but it worked with the C216 platform and the NEC Renesas chipset in my other computers. Compatibility with Fresco Technology, Etrontech, Asmedia, etc. chipsets was not determined, but from other reports, it appears that JMicron chipsets cause problems with some of these as well.

This issue makes the unit practically unusable for a vast majority of users of modern computers.

A Jmicron Cure?

Disclaimer: Undertaking any of the following steps will void your warranty and may lead to permanent damage to your device resulting in an inoperative device with no possibility of recovery. Do the following at your own risk.

Upon doing some research, I found that some people claimed that by updating the firmware of the unit, they could get their USB hard drive docks and similar devices performing reliably. A search landed me at archives of JMicron firmware and tools to try. Many of the firmwares available are just bare files, so to flash them, you need the JMicron JM2033x FW Update Utility v1.19.13.

Before updating the firmware, I tried to be diligent and back up the existing firmware. While the tool claimed to have succeeded, I have since found that the file it created cannot restore the original firmware to the device, so the original firmware is lost forever. Initially I was unaware of the difference between the JMS539 and JMS539B, so I assumed the device was a JMS539 and attempted a flash of JMS539_PM v255, ignoring the warning of incompatible firmware, only for the device to reconnect as “Internal Code”.


This in itself is a revelation: it indicates the unit has realized the firmware is incorrect and is instead running off its internal mask ROM. This is the basic “known good” firmware burnt into the chip as a fallback, for vendors who want to save the money on a serial flash chip and forgo firmware-upgrade capability or customized names.

Testing with the Intel C220 platform initially proved promising, with no errors appearing after a few hours of testing, and higher performance to boot.


The read speed was a healthier 219MB/s on average, with a dip at the end.


The write speed averaged about the same, with a few dips and slower areas due to CPU usage (I couldn’t dedicate the machine to benchmarking for 60 hours). Unfortunately, things unravelled slightly from here: in verifying an H2testW dataset which had been written and verified on another platform, it detected 64kB of silent corruption.


Giving it the benefit of the doubt, I decided to format the array and try filling it from the C220 platform machine. The result, interestingly, was me being woken at 1:30am by a loudly beeping box claiming the array had disintegrated.


A further check of the event log confirms that the box had reported a bad block to the host.


Further investigation was undertaken after rebooting the RAID unit, which beeped once to indicate it was healthy. The array had a valid partition table, but the filesystem was completely trashed, showing as RAW, and could not be revived with chkdsk /f. Plugging each of the drives in sequence into a test dock showed that one drive did have a transient communication problem and logged ONE UDMA CRC error, which normally indicates a communication issue on the cable/controller. This is normally transient and clears with a retry; it’s a wonder the JMicron solution didn’t retry, and instead caused a major trashing of the filesystem rather than a small loss of the last block of written data. It should have been recoverable.
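If you want to watch for this kind of link trouble on your own drives, the UDMA CRC error count (SMART attribute 199) can be pulled out of `smartctl -A` output. This sketch assumes smartmontools’ usual attribute-table layout; treat it as an illustration, not a robust parser.

```python
# Sketch: extract the raw value of SMART attribute 199 (UDMA_CRC_Error_Count)
# from `smartctl -A` text output. A rising count points at a cable/controller
# communication problem like the one described above. Assumes smartmontools'
# standard attribute-table layout.
def udma_crc_errors(smartctl_output: str):
    """Return the raw value of attribute 199, or None if not present."""
    for line in smartctl_output.splitlines():
        fields = line.split()
        if fields and fields[0] == "199":
            return int(fields[-1])
    return None
```

In practice you would feed this the output of `smartctl -A /dev/sdX` (run with appropriate privileges) and alert if the count increases between checks.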

Third time lucky, perhaps? Indeed it was: the array was able to complete a full write and verify cycle on the C220 platform.

[Images: H2testW fill and verify success]

Upon first connecting the array to the C216 platform machine, it took a while to detect, and logged a few errors in the event log. But after those errors were logged, no further errors presented, and the written data was successfully verified.

[Images: controller error and successful retried transfers on the C216 platform]

It seemed promising that the mask ROM firmware would be the silver bullet to make the whole thing work. Unfortunately, it was soon discovered that this was not the case, and indeed, a regression had occurred. Compatibility with the NEC Renesas platform was broken, with symptoms very similar to those of the C220 platform on the original firmware.

[Images: NEC Renesas platform now failing]

Having now realized that the unit has a JMS539B chipset, I chose to flash a compatible firmware back onto the unit. The latest firmware I could find, JMS539B_PM v31.00.40.16, flashed without issue.


With this firmware loaded, the drive operated with the NEC Renesas chipset with no problems. Unfortunately, it continued to break compatibility with the Intel C220 platform ports.


Further investigation showed that different firmware had different side effects. The JMS539B-type firmware, when installed, appears to cause problems with Intel C220 platform controllers and a strange rapid spin-down and spin-up on unplugging, but has no other issues to speak of. Deliberately bricking the firmware by loading an incorrect image forces the embedded mask ROM firmware to take over; this appears to work both faster and flawlessly with the Intel C220 and C216 platforms, but breaks compatibility with the NEC Renesas chipsets, causing the same problems the C220 platform had before. It also breaks hot-plug capability, requiring a power cycle of the drive before it can be re-detected.

Sadly, it seems there is no way to make the unit compatible with all three platforms simultaneously, or even the two platforms I use most (Intel C220 + NEC Renesas), without the hassle of re-flashing the firmware when switching between machines. In light of this, even if you were desperate to make it work, it’s hardly an ideal solution, as the device has also shown other negative traits including silent data corruption and random RAID array trashing due to a single ICRC error.


Conclusion

The Orico 5-bay USB 3.0 RAID HDD enclosure tries to fill a market niche in DAS solutions. Unfortunately, the design of the unit is suboptimal, utilizing troublesome JMicron chipsets which present both performance limitations and compatibility issues. The unit also shows questionable construction quality, and has been demonstrated to cause silent data corruption and array trashing after a normally “recoverable” event. The compatibility issues were only partially remedied after extensive research, and the methods are not recommended for end users.

This makes the unit impossible to recommend to anyone. In fact, if you have one and you haven’t already voided its warranty, you might consider taking it back, especially if you value your data and you want it to work. Seeing as each test run takes 30 hours or so to complete, the above represents many weeks of frustration and testing to reach the point where the unit is even usable, and even then, confidence in the solution is relatively low.

About lui_gough

I'm a bit of a nut for electronics, computing, photography, radio, satellite and other technical hobbies.
This entry was posted in Computing.

11 Responses to Review: Orico USB 3.0 5-bay RAID HDD Enclosure (9558RU3) – Part 2

  1. rasz_pl says:


    Have you considered an eSATA box? The latest Backblaze blog about their storage pods lists:
    “5 Port Backplane (Marvell 9715 Chipset) $43.85”. No idea where to get this port multiplier, but it looks like something that would suit your needs.

    • lui_gough says:

      Thanks for the comment …

      I used to be a fan of eSATA until I saw how badly implemented it was – issues include unreliable hot-plug support and lack of port-multiplier support. If you’ve read posts where owners of eSATA-capable Orico units complain of being able to see just one of the drives, rather than all of them when operating independently, that’s a manifestation of this mess. Generally, Intel consumer chipset ports are not PM-aware – so I suppose if the Marvell 9715 is hardware RAID and makes the drives appear as one LUN then it might be possible; otherwise it means installing a third-party SATA card (e.g. Silicon Image … eh … if you remember the early PCI efforts, data corruption and BSODs were a major issue, and later efforts had poor data transfer rates). That consumes PCIe slots, which are at a premium here (boxes with sat-cards, multi GPU, multi-LAN and multi-sound card are literally a necessity for the kind of tinkering I get into).

      Further mess can be found in gimped eSATA rates, owing to concerns that cabling is not up to snuff. The eSATA ports on my 890FXA-UD7 are being used, ironically, via plug adapters to connect two more internal drives. They are a JMicron SATA II controller branded as Gigabyte, known to be rubbish, and will only operate at SATA I on the eSATA ports no matter what I do. Others, like the HP Microserver, seem to do the same thing, which can’t be solved short of a risky BIOS hack that apparently bricks more modern motherboards.

      Looking up the datasheet for the Marvell 9715, it seems they are regular FIS-based port multipliers, of which you can source a few random bits on eBay, but the main issue is that you need a host that’s FIS port-multiplier capable (as opposed to older command-based switching).

      I suppose if you’re PM-inclined, there are pages listing a good deal of chipsets and whether they’re FIS (labelled FBS) or CBS capable, or not at all. Get the wrong sort and they just won’t work. The internal SB850 AMD ports on my 890FXA-UD7 seem to be FBS capable, but that’s cold comfort if the other machines aren’t participating.

      So I think you can appreciate just how finicky getting PM-based arrays working can be – it’s very much a case of “get the right balance of hardware and stick with it”. After all, a SATA III 6Gbps port has plenty of bandwidth for even 3 rotating rust drives on an almost non-contended basis, and maybe even 6 of them if you’re willing to not access them all at once. If they were better supported, you would expect them to be used more often, or even integrated onto motherboards for the home NAS market – but alas, that hasn’t happened.

      Sorry for being overly negative here, but I did dream of stacked PM configurations for many years before buying this product; alas, many PMs disappointed me with their SATA II-based nature, or with needing a Silicon Image host card as the *only* qualified solution.

      – Gough

      • rasz_pl says:

        Ah, posted too fast before refreshing and missed your reply.
        I suspect it’s the USB/SATA bridge that is causing you all the trouble, so maybe bypassing it and using your own, with the port multiplier set to RAID, could fix things.

        Sucks you had to spend 200 bucks just to end up with a box of incompatibility 🙁 🙁

    • rasz_pl says:

      Or, if you feel adventurous: cut the 4 traces between the JMB394 and the USB converter chip, solder on a SATA connector, and use the whole box as eSATA with your own USB 3.0/SATA dongle.

      JMB393 port multipliers, which look like half of this enclosure’s electronics, are $70 with free shipping on scambay if you like to experiment but don’t want to solder.

      • lui_gough says:

        I suppose I could just desolder the SMD capacitors in line with the data leads and fly-wire (with very short wires) a USB 3.0 to SATA II capable chipset of my choosing (e.g. the ASM1051/1152), which would alleviate the JMicron USB-to-SATA II firmware bugs.

        However, the JMB393 RAID-PM is also a load of fun. I don’t fancy returning to a trashed array at the slightest sign of an ICRC error, and the loud screeching. I suppose getting rid of the buzzer would help. But hey, we still have the very questionable soldering. What a can of worms!

        Yeah, I did notice they were available under a variety of names, including “easy RAID” or something like that. I hesitate to think just how (un)reliable the arrays built using these solutions will be. As usual, when your needs are a bit “niche”, and you feel like cheaping out, then you get these kind of “cheap and cheerful” (ahem) solutions :).

        I suppose the purpose of posting was to warn away prospective buyers – if they can do with a NAS, go and build one; or, if they don’t need mobility and have the PCIe to spare, they can probably invest in a pulled LSI RAID card with miniSAS-style connectors for another 8 drives at about $100-200 from fleabay … which actually works.

        – Gough

  2. pubemail says:

    I noticed you are using RAID0 mode, which is not error tolerant by nature. Have you tested RAID1 or RAID5 mode? I’m more interested in those. Thanks.

  3. Mark Hammerschmidt says:

    I just bought this 2-bay unit for use on a Mac. I’ve got two WD Green drives populating it (spare drives rather than purchased specifically) in RAID 1, and found the transfer rate with the hardware RAID setting dismally slow. Changing the DIP switches to “Normal” and using a software RAID setup doubled the write speeds. The bundled software is absolutely dismal: it cannot send an email warning and was clearly not written properly for the Mac. So now I’m using the limited choices of SoftRAID and DiskDX to manage and monitor it. Last night the DiskDX software gave me an overheat warning for the bottommost drive, saying it had been overheating for 77 minutes in the last 17 hours!! That’s worrying!!

    And just for an added word – DiskDX reports my less-than-one-year-old LaCie 2big 3TB box as having a health rating of just 45.7%!! All other drives are at 100% health!! Seagate 3TB Barracudas are to be avoided at all costs!!

  4. Mark Hammerschmidt says:

    Holy moly, write speeds during a backup have reduced to a crawl of 1.2MB/s. Over 1hr 30min for just 2GB of data!! Testing with Disk Sensei, and it’s still testing after 20 mins!! I think this is a pile of junk, quite honestly.

  5. Deian says:

    Hey Thank you for spending the time and sharing your experience with us.
    What NAS platform would you recommend?

    • lui_gough says:

      At this moment, I don’t have a particular platform to recommend, so much as to say that I build my own NASes out of regular “computers” where performance is critical and run either Windows, Linux or FreeBSD depending on what mixture of reliability/performance/hardware component support is required.

      – Gough
