If you ever do anything that involves high-rate data acquisition, a copious amount of relatively high-speed storage is often a costly requirement. Having recently upgraded my 4x4Tb array to 4x6Tb, I found myself with four displaced drives available for use elsewhere. Two other 4Tb drives of the same model were kept as spares as well, so this proved to be the perfect opportunity to pool the storage together into something useful for my needs.
I’m sure the first thing people might think of is to just “build another NAS and add it to the network,” and that would be my instinct if it were not for the fact I’ve got numerous NASes running, none of which meets my needs for the specific application I have in mind. The biggest problem for me was that I absolutely needed a minimum of 160MB/s sustained sequential access to 16Tb or more of storage, and preferably 350MB/s if possible.
Unfortunately, a single dedicated Gigabit Ethernet link couldn’t deliver this, and an upgrade to 10Gbit/s Ethernet is relatively pricey, as appropriate NICs and cables had to be costed for both the NAS and the workstation. As I wanted to share the storage between two workstations (one at a time), outfitting each of them with NICs would have been cost prohibitive.
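To see why Gigabit Ethernet falls short, a back-of-the-envelope check is enough. The ~94% payload efficiency figure below is my own assumption for combined Ethernet/IP/TCP and file-sharing protocol overhead; real-world results vary, but the conclusion doesn’t change.

```python
# Rough feasibility check for a dedicated Gigabit Ethernet link
# against the 160MB/s sustained sequential requirement.
GIGABIT_LINE_RATE_MBPS = 1000 / 8   # 125 MB/s raw line rate
PROTOCOL_EFFICIENCY = 0.94          # assumed payload efficiency (overhead)
REQUIRED_MBPS = 160                 # minimum sustained requirement

usable = GIGABIT_LINE_RATE_MBPS * PROTOCOL_EFFICIENCY
print(f"Usable GbE throughput: ~{usable:.0f} MB/s")
print(f"Meets 160 MB/s minimum: {usable >= REQUIRED_MBPS}")
```

Even at 100% efficiency, 125MB/s is well short of the 160MB/s floor, let alone the preferred 350MB/s.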
As a result, I decided to look at a different solution, namely a DAS (Direct-Attached Storage) unit. From my experiments with USB 3.0 and SSDs, I know for a fact that under the right circumstances, transfer rates of 160MB/s can be achieved with regular Bulk-Only Transport, with 450MB/s possible with UASP support. Given that each of the hard drives had a minimum throughput of about 75MB/s, I figured that a 5-bay RAID0 unit (leaving one drive spare) would allow the drives to give me about 375MB/s at the minimum. Of course, this has no protection against drive failures, but because of the nature of the data, protection is unnecessary. Since USB 3.0 ports are relatively cheap to add to systems without them, and are fairly ubiquitous nowadays, it seemed to be the most promising route.
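The 375MB/s figure comes from simple scaling: RAID0 stripes data across all members, so in the ideal case sequential throughput is the per-drive figure multiplied by the member count. This is optimistic, as it assumes no controller or interface bottleneck.

```python
# Back-of-the-envelope RAID 0 throughput estimate, assuming ideal
# striping with no controller or interface bottleneck.
per_drive_min_mbps = 75   # worst-case sequential throughput per drive
drives_in_array = 5       # 5-bay unit, one spare drive left over

aggregate_min = per_drive_min_mbps * drives_in_array
print(f"Minimum aggregate: {aggregate_min} MB/s")  # 375 MB/s
```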
Looking at the market of DAS enclosures, the number of multi-bay units is fairly limited. From the two or three options, I settled on an Orico 5-bay USB 3.0 RAID enclosure, model 9558RU3, which costs around AU$210. This unit has five hot-swappable bays and can support individual drive, JBOD and RAID 0, 1, 3, 5 operation with a USB 3.0 interface.
Unboxing the Unit
The unit came in a fairly large box with a carry handle, although the package felt relatively light.
The package seemed to feature printing applicable to a wide variety of their models rather than being specific to the model I had ordered, hence the paste-on label on the top panel. There were also logos for interfaces not offered by the enclosure – for example, eSATA and Firewire. Further to this, the logos are not up to date, claiming just 3Tb drive support when their website already claims 8Tb support.
The rear of the box shows interfaces and fan layouts which do not match the product internally, so we might as well ignore the packaging and take a look at the unit itself.
The unit itself tries to be simplistic and elegant in its aesthetic, featuring a matte black curved aluminium outer shell, and elevated feet similar to the sort you may find on older Hi-Fi equipment. Unfortunately, there are a few “sharp” edges, with chipping paint at the edges and panel gaps that somewhat spoil the illusion of quality. Indeed, the unit is mostly empty as supplied, and feels hollow and light rather than being solid.
The front shows the five ventilated flip-open bays, each with their own activity LED indicator, and a single power button. The bays themselves have a security lock, although the security offered is extremely limited because the key is merely a truncated Allen key.
Opening the bays and peeking inside, we can see the use of a PCB backplane to connect with the drives. The drives slide in “tool-less” on a very basic metal “rail” formed by folding the internal steel frame, held in place by a tiny tab at the end and the spring tension of the cover pushing against the front edge of the drive. The bay door is linked to a roller actuator at the rear which pushes the drive out from the backplane when the cover is fully open. This arrangement doesn’t seem particularly friendly to the drives in terms of minimising vibration, and requires an almost disconcerting amount of force to close properly. Initial attempts left the cover slightly ajar, popping open at the slightest provocation, whereas additional force was required for the cover latch to fully snap into place. Even fully fitted, the doors appear to have a slight bow to them due to the spring tension they’re pushing against the drives.
The rear of the unit features a 120mm exhaust fan for cooling the drives, and a single USB 3.0 B connector for the data connection (as promised). A special 4-pin power connector is used to supply power.
A set of DIP switches is used to configure the RAID mode, with a SET button used to perform the configuration.
The process of initializing new drives involves setting the switches to the clear-RAID position, pressing and holding SET while turning the power on until the unit beeps continually to confirm the selection, and then repeating the process with the desired RAID mode. This is somewhat dangerous, as it allows a malicious actor or mischievous user to destroy the array and cause loss of data.
Included with the unit is an Orico-branded power supply, rated at 12V 6.5A. This seems a bit under-provisioned for the short-term start-up load, as it only gives each bay 1.3A when most drives require 1.5 to 2A; however, I suppose the adapter is probably capable of handling a short transient overload. The supply itself uses a Figure-8 lead connector, and an appropriate Figure-8 lead was supplied.
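The per-bay figure is just the supply rating divided among the bays, compared against typical 12V spin-up draw. The 1.5 to 2A spin-up range is the figure quoted above; drives with staggered spin-up would ease this considerably, though I can’t confirm this unit staggers spin-up.

```python
# Spin-up power budget sanity check, using the figures from the review.
supply_amps = 6.5                  # rated 12V output of the adapter
bays = 5
spinup_amps_range = (1.5, 2.0)     # typical 12V spin-up draw per drive

per_bay_budget = supply_amps / bays
print(f"Per-bay budget: {per_bay_budget:.2f} A")  # 1.30 A

worst_case_draw = bays * spinup_amps_range[1]
print(f"Worst-case simultaneous spin-up: {worst_case_draw} A")  # 10.0 A
```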
Also provided is a user manual, support card, USB 3.0 A to B cable, and two “hex keys” for locking and unlocking the drive bays.
Ever curious as to what powers the unit, the first thing I did was tear it apart. To gain access, the rubber has to be removed from the feet and the feet removed from the unit. Two rows of screws on the underside need to be undone so that the outer “shell” can slide off, revealing the inside. A few screws on the back panel can then be undone to remove it and examine the internals closely. During this process, I found that the screws were mostly tapping into the aluminium, and copious amounts of aluminium shavings and dust were left inside the unit – a potential issue, as the debris is conductive and could cause short circuits if it reaches the wrong places.
The main PCB is dated Week 30 of 2014, with a design date of 30th December 2013. This implies this unit has been around for a while. The main ICs involved are two JMicron chips, a JMB394 dated Week 11 of 2014 and a JMS539B dated Week 14 of 2014. The JMB394 is accompanied by an MXIC serial flash for its firmware, and the JMS539B is accompanied by a PMC serial flash for its firmware. Both have their own separate 25MHz crystals.
At this point, the design of the unit becomes obvious, and disappointing. It is clear that the topology involves the JMB394 RAID port multiplier performing the RAID features, with the JMS539B performing the USB 3.0 to SATA bridging. This is a suboptimal design for numerous reasons – namely that the JMB394 only supports SATA II, capping the total array speed at 300MB/s. Further to this, the JMS539B does not support UASP and is thus a BOT-only bridge, resulting in low small-file performance and slightly reduced sequential performance. This design may have come about because of the need for a port-multiplier-aware USB bridge so that the system can operate in non-RAID modes as well, and because of the desire to save money by sourcing all components from one company.
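The practical consequence is that the slowest link in the chain sets the ceiling. A small sketch, with the 200MB/s BOT figure being my own assumption for a typical non-UASP bridge (the SATA II and drive numbers come from above):

```python
# Where the bottleneck lands with this chip topology: the effective
# throughput is the minimum of each link in the chain.
drive_aggregate = 5 * 75   # MB/s, five drives striped in RAID 0
sata2_link = 300           # MB/s, JMB394 host-side SATA II limit
usb3_bot = 200             # MB/s, assumed typical BOT-only bridge figure

effective = min(drive_aggregate, sata2_link, usb3_bot)
print(f"Expected sequential ceiling: ~{effective} MB/s")
```

So even before benchmarking, the 375MB/s the drives could theoretically deliver is out of reach, and the 350MB/s preference is off the table.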
The unit itself features a warning buzzer, which is quite loud, to indicate any abnormalities in array operation. A fan header, power header and LED header are also present on the board, although no fan speed sensing is provided, so a stalled fan will not trigger a warning. The PCB itself repurposes a PCI-E x1 edge connector to supply power (short segment) and SATA differential pairs (long segment) to the backplane. The main board has an AX3121 synchronous buck converter to supply (likely) a 5V rail for the drives.
The underside doesn’t feature any components, however, it shows quite poor and haphazard soldering, especially the USB 3.0 port which seems to have been very poorly soldered on with barely sufficient solder. Many of the holes are still not completely filled with solder, whereas at other locations, visible solder spatter is left behind on the solder resist, which may present future reliability issues should they dislodge and short out other components.
Moving on, it seems evident the haphazard soldering continues on the backplane, with fairly poor hand-soldered SATA connectors, with inconsistent amounts of solder and attempts at soldering plastic positioning studs, resulting in melted plastic and solder balls left behind. A few points look like they have a “whisker” of solder left behind due to excess solder being applied to the joint. At least the backplane appears to have capacitors locally placed to the connector to absorb the hot-plug inrush current (to prevent interfering with adjacent drives).
Some filtering or power conversion seems to be done on the backplane as well.
A two pin connection from the underside of the main PCB is made to the front to provide the power button status.
A wide ribbon is used to connect a strip of LEDs running along the side of the chassis to provide the front indication LEDs.
The fan itself is a 120mm unit secured to the rear panel by screws internally. Removing the screws revealed some rust in the holes and worn threads, which seemed to be a sign of low quality control. The fan is an unknown unit from an unknown supplier; in all probability, it is a sleeve-bearing unit that will need replacement in a few years’ time.
Setting Up the Unit
The unit itself is pretty intuitive to set up, as there really are only a few basic steps to undertake. The first step is to fit the drives, which will only fit one way, then close the drive bays with sufficient force and lock them. The next step is to plug in the power adapter and configure the RAID using the DIP switches as mentioned earlier. Finally, you can plug it into a computer and begin using it without any further hassle. However, if you would like, there is also monitoring software which can be installed to perform diagnostics and provide alert capability on the array.
Setting Up the Software
The software that monitors the unit is called Orico HW RAID Manager and can be downloaded here under the listing labelled 3588US3_9528RU3_9548RU3_9558RU3_9928RU3_3529RUS3_9948RU3_9958RU3_Driver. Once installed, with the array hooked up and powered on, the software allows you to see the condition of your RAID array.
The default screen shows you all of the disks, the RAID array type and the status. From here, it is possible to drill down to individual drives to find the member information.
From there, it is also possible to find the SMART data for the drive itself, which alleviates one concern of not being able to diagnose the health of the drives while they’re in an array. Unfortunately, it only gives the normalized SMART values and not the raw data, and is thus less useful than it otherwise would be. Sadly, third-party SMART utilities such as CrystalDiskInfo are not able to read SMART information from the members of the RAID.
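For those unfamiliar with why normalized-only values are a limitation: each normalized value counts down from a vendor-defined baseline toward a failure threshold, so you can tell *that* an attribute is degrading but not the underlying count (e.g. the actual number of reallocated sectors). A minimal sketch of the conventional interpretation, with illustrative attribute values of my own invention:

```python
# How normalized SMART values are conventionally interpreted: an
# attribute is considered failing once its current normalized value
# drops to or below its threshold. Values here are illustrative only.
attributes = [
    # (id, name, normalized_value, threshold)
    (5,   "Reallocated_Sector_Ct",  100, 10),
    (197, "Current_Pending_Sector", 100,  0),
    (1,   "Raw_Read_Error_Rate",      8, 10),  # hypothetical failing value
]

failing = [name for _, name, value, thresh in attributes if value <= thresh]
print("Failing attributes:", failing or "none")
```

The raw data would tell you *how many* sectors were reallocated; the normalized view only tells you the drive hasn’t yet crossed the vendor’s line.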
Another feature is to view the event log for the array.
Also, it is possible to create and configure the arrays via the software rather than the DIP switches, which is more user friendly. There is an option for “Support Password”, which I suppose enables setting a password to avoid accidental deletion of the array, but I’m not too sure.
The Advanced Mode tab allows you access to even more features, such as an ability to send alert e-mails on certain array events. Unfortunately, I can’t confirm it works with SSL-based mail servers, as there doesn’t seem to be anything that claims support for SSL, so it might be of limited use. I’m also not sure that the chipset itself has temperature or voltage monitoring.
An advanced RAID creation tab is also available for those who want to create multiple arrays using the pool of drives, or to use only a limited area of the disks. I did not explore the full abilities and limitations of this mode.
A firmware upgrade page is available for upgrading the firmware of the hardware RAID port multiplier chipset, but no firmware upgrades have been released. The version is a rather suspicious V0.958, which doesn’t instill much confidence.
The final tab configures the rebuild priority and sleep timer settings. I set my sleep timer to 0 which seems to disable the feature.
In all, the software isn’t a critical requirement, but having it allows you easier access to information about the RAID in case of problems (in theory) and can help you diagnose problems ahead of time by monitoring the SMART condition of the drives.
Join us in the next part of the review to see how the unit performs in practice. You really don’t want to miss it!