Review: Seagate Archive 8TB 3.5″ Internal Hard Drive

Hard drive storage space is one thing that always seems to run out, regardless of how much you have. Hard drive manufacturers have constantly pushed the capacity of perpendicular magnetic recording (PMR) drives, but have lately hit a brick wall at around 4-6TB. PMR was introduced when drives using longitudinal recording hit their limits at about 320-400GB, and has proven relatively reliable, with the majority of today’s drives using PMR.

In order to overcome the limits of PMR technology, new technology has to be developed. One approach, taken by HGST for enterprise storage, is to use helium-filled drives. Helium reduces drag inside the drive, which lowers temperatures and energy usage, improves fly-height regulation, and allows extra platters to be incorporated. However, the technology is too expensive for consumer use, and helium is currently in a global shortage which is only likely to worsen over time as it escapes from our atmosphere into space.

Another approach, which is only starting to gain traction, is that of Shingled Magnetic Recording (SMR). This technology is based around PMR, but exploits the difference in size between the write head and the read head to improve storage density.

The Seagate Archive series of drives comes in 5TB, 6TB and 8TB capacities, using 1.33TB-per-platter technology. It has a spindle speed of 5900rpm and is qualified for 24×7 operation at a 180TB/year workload. It is an Advanced Format 512e drive, and certain models support hardware encryption and instant secure erase. Aimed at large cold-data stores, it features very low power consumption per GB of data stored, and rotational-vibration tolerance for better performance in multi-drive installations. Despite these features, it is also consumer-oriented, being one of the first SMR drives to hit the market, and is backed by a three-year warranty.

Interest in the drive has risen as it gets picked up by many computer shops across the country, with prices falling to as low as AU$309 for 8TB, which seems amazing. But what about the caveats?

Ready to Mingle, with Shingles?

The change to SMR is actually quite a complicated one, because SMR drives do not behave like PMR drives. The SMR technique exploits the fact that the write head lays down a wider band than the read head strictly needs to recover the data. If the non-overlapping tracks of a traditional PMR drive are rearranged to partially overlap, leaving just enough of each track exposed for a read, more data can be squeezed into the same area. The result looks like the “shingles” on a roof, and narrows the effective track width.

The issue with such a scheme is that random write access is no longer possible, as the shingled tracks have to be laid down in sequential order to avoid corrupting adjacent tracks.

A storage device that could only be written sequentially would not be easily accommodated by current operating systems, and so this drive is specially structured and contains its own system to manage the shingled bands.

If you have time, it’s well worth watching this conference talk at USENIX FAST ’15 by Abutalib Aghayev and Peter Desnoyers of Northeastern University, where an unidentified, but presumed to be Seagate, SMR drive is tested and reverse-engineered for its characteristics.

In short, the drive contains a shingle translation layer (STL), which works like the flash translation layer of an SSD, mapping logical blocks to physical blocks and emulating a block device with random read/write ability to the OS. The STL includes a mapping table which keeps track of current and stale sectors.

In order to perform the miracle of random writes to an SMR drive, the drive contains a persistent cache in the outer diameter tracks, which is basically a “log” of changed sectors. The rest of the drive is arranged so as to have many smaller shingled recording bands separated by guard bands to protect adjacent shingled recording bands.

Random writes are sent to the persistent cache, and when the drive is idle or the persistent cache is full, a “read-modify-write” cycle is performed on the affected shingled recording bands. As a result of the amount of data manipulation required, the drive has a sizable DRAM cache.

As a result, the I/O characteristics of the drive can be quite unpredictable compared to PMR drives: I/O operations can be cached in DRAM first, then in the persistent cache – and operations to the persistent cache can be interrupted by band updates, causing the drive to “stall” for ~0.6s. Therefore, this drive is not really suited to write-heavy environments, and should only be used where writes are few, mostly sequential rather than random, and interleaved with plenty of idle time in which the drive can update the shingled bands.
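To make this behaviour concrete, here is a deliberately tiny, hypothetical model of a drive-managed STL in Python. The class name, band sizes and cache policy are illustrative inventions for explanation only – not Seagate’s actual firmware design – but it shows why random writes are cheap right up until the persistent cache fills, at which point read-modify-write “cleaning” stalls the drive:

```python
# A toy, hypothetical model of a drive-managed shingle translation layer
# (STL). Names, band sizes and the cache policy are illustrative only.
class ToySTL:
    def __init__(self, band_size=4, num_bands=4, cache_limit=3):
        self.band_size = band_size                  # sectors per shingled band
        self.bands = [[0] * band_size for _ in range(num_bands)]
        self.cache = {}                             # persistent cache: LBA -> data
        self.cache_limit = cache_limit
        self.cleanings = 0                          # bands rewritten so far

    def write(self, lba, data):
        # Random writes land in the persistent cache and are acknowledged
        # immediately - hence the suspiciously low write access times.
        self.cache[lba] = data
        if len(self.cache) >= self.cache_limit:
            self.clean()                            # cache full: the drive "stalls"

    def read(self, lba):
        if lba in self.cache:                       # cached copy wins
            return self.cache[lba]
        band, offset = divmod(lba, self.band_size)
        return self.bands[band][offset]

    def clean(self):
        # Read-modify-write: each band touched by cached sectors is read
        # back, patched, and rewritten sequentially in full, because
        # overlapping shingled tracks cannot be updated in place.
        touched = {lba // self.band_size for lba in self.cache}
        for band in touched:
            for lba, data in list(self.cache.items()):
                if lba // self.band_size == band:
                    self.bands[band][lba % self.band_size] = data
                    del self.cache[lba]
        self.cleanings += len(touched)

stl = ToySTL()
stl.write(0, 11)   # fast: absorbed by the persistent cache
stl.write(5, 22)   # fast: absorbed by the persistent cache
stl.write(9, 33)   # cache hits its limit: three bands get read-modify-written
```

Note how three scattered writes force three whole bands to be rewritten – the write amplification that makes random-write-heavy workloads a poor fit.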

Also, because the drive essentially has its own “brains”, it can remain active even when idle, generating noise, heat and power consumption. Use of such a drive in an external enclosure is not recommended unless you can guarantee safe ejection and spin-down of the drive, as an unexpected power-down could in some cases result in data corruption. It is also important that the bridge chip be able to handle occasional delayed I/O responses.

Unpacking the Drive

I purchased two of the drives over a month ago at AU$379 each, which is a little much. They were from one of the first batches to arrive, however, so I thought they deserved the premium. Because the drives were a little special, I decided to spend a whole month testing them before delivering my verdict.

The drives were shipped to me from Synnex, and came wrapped in a thin bubble wrap bag, and thrown into a box. Seagate won’t accept drives packaged so poorly for RMA, so why are they shipping them out like that?

As usual, the drive comes in an anti-static bag.


The drive itself comes with a three year warranty, which is more than the two years you get with the regular Desktop series.


Eight terabytes. How amazing. All within the same standard 3.5″ form factor. This unit was dated week 3 of 2015, manufactured in Thailand with firmware AR13. It is the ST8000AS0002, which does not feature on-drive encryption. It claims to draw 0.55A on the 12V rail and 0.35A on the 5V rail, which is fairly economical and “green”.


The drive has a very “solid”, boxy shape, with the “tub” extending right out to the absolute extremes of the 3.5″ form factor. This is because the 8TB drive contains six platters, which is pretty much the absolute limit for air-filled drives. As a result of the large platter chamber, there is no mid-mount hole as with regular drives. The PCB has also been specially designed to fit into a small milled-out recess in the side, held in by small Phillips screws.


The quirkiness with the drive mounting holes extends to the sides as well, where the mid-mount hole does not exist. This doesn’t pose a problem for most mounting designs, although you might want to check just in case.


The front of the drive has a serial number label, which will make guys like Backblaze happy, so they can identify faulty drives quickly.


At the rear, nothing special, just the normal SATA connectors along with the diagnostic pins.


The front edge of the drive has an interesting ‘stepped’ area on the tub – I’m not sure what this is for, although it did pique my curiosity.

Digging a Little Deeper

So I have a new and expensive drive, and I’m curious: what powers it? So I decided to remove the PCB (a warranty-voiding procedure) and take a peek. We’ve already established that there’s nothing on the outward-facing side of the PCB, so everything must be hidden underneath.


Removing the Phillips screws allows the board to come free – the two main chips have a thermal pad connection to the chassis for cooling. Removing the blue foam padding reveals the rest of the board – not much more.


It seems there are two silver rotational sensors at the extreme edges of the board, and an EEPROM for firmware.


The spindle motor control is done by a STMicroelectronics Smooth controller, code-named Dillon, custom designed for Seagate.


The hard drive controller is an LSI TTB71001V0 chip also made for Seagate. This chip appears special because of the double row of staggered pins, which is not common. Next to it is a Samsung K4B1G1646G-BCH9 128MiB DDR3 DRAM as the cache buffer.

Testing It Out

The drives were stressed for a whole month as part of my tests, because I had some reservations about the SMR technology. They were installed in a new HP Microserver N54L, although it only has SATA II ports. This is unlikely to limit the performance of the drives in any way however.

Suffice it to say, no SMART errors were thrown throughout a month of testing on both units. Here is a sample of the SMART attributes reported by one of the units:


HDTune Pro

Of course, the speed of the drive would be of a prime concern to users, so I performed sequential write and read tests using HDTune. Two drives were used, with two write benchmarks done back-to-back, followed by one read benchmark.

Drive #1


The two write benchmarks are virtually identical, with an average write rate of 145.9MB/s, which is quite impressive. The read benchmark mirrors the write benchmark with an average of 146.7MB/s. The impact of the STL can be seen in the write access-time benchmark, where unusually low figures of <1ms are reported, as the drive’s STL acknowledges writes before the data actually reaches the platters.

Drive #2


The second drive scores very similar results, although the patterning in the transfer rate is subtly different because of drive-to-drive variance in the sector layout/density and the head to media match.


For comparison, write and read benchmarks were also run on two new 4TB PMR Seagate Desktop drives using the same system.


The two PMR drives had significant variance in throughput at the outer diameter, likely because the second drive had a poorer media-to-head match. The two drives averaged 137.5MB/s and 134.3MB/s for reads, and 137.2MB/s and 134.0MB/s for writes, respectively. The average transfer rate of the 8TB drives was slightly faster at about 145MB/s, which is a pleasing sign, although considering the areal density difference, a greater speed increase might have been expected.

For more comparisons, please see this article where WD and Seagate 4TB PMR drives are tested.

ATTO Benchmark

Looking at the block I/O performance, we find some very strange results with ATTO, which are probably a result of the STL itself. Traditional disk benchmarks don’t seem to play that well with SMR drives, because the persistent cache and DRAM cache interfere with the test.

Drive #1


Drive #2



For comparison, please see this article where WD and Seagate 4TB PMR drives are tested. The PMR drives provide much more consistent and predictable I/O performance across the block sizes – the exaggerated write performance here is almost certainly the result of the large DRAM cache, the persistent cache on the outer tracks, and the STL acknowledging writes before they have actually hit the disk. This is not helped by the small test-file size. The inconsistency in the results may reflect the differing internal states of the two SMR drives at the point of testing.


Drive #1


Drive #2



For comparison, please see this article where WD and Seagate 4TB PMR drives are tested. The drives seem to turn out pretty good numbers overall, although they are quite inconsistent between the two units; again, because of the small test file size and the design of the STL, the drive puts out artificially good numbers. As a result, regular benchmarks may not be very applicable to self-managed SMR drives.


H2testw is normally used to test flash drives for data integrity; however, we can also use it to test hard drives. Unfortunately, because it doesn’t seem able to account for the growth of the MFT after the test files are placed on the drive, the Write+Verify test fails with an error that the drive filled up quicker than expected (as the MFT grew due to the number of files written by the test).


However, this is no problem as the data itself can be verified.

Drive #1 & #2

The average verify rate is about 128MB/s, which is not too bad – and the limit could be CPU-related too.


More importantly, the test finished without any detected data corruption. This is no small feat, as these drives are specified for one unrecoverable read error in 10^15 bits read, which translates to one read error per 113.69TiB (125TB) read. This statistic has not really moved in all these years, so as drives get bigger, read errors will be encountered more frequently relative to capacity. Using that specification, you might expect to see one read error after reading the entire drive 15.63 times.
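As a sanity check, the conversion from the error-rate specification to terabytes-between-errors can be worked through in a few lines of Python. (Note that the quoted 113.69TiB/125TB and 15.63 full-drive reads all follow from a one-error-per-10^15-bits specification.)

```python
# Sanity-checking the unrecoverable-read-error (URE) arithmetic above.
URE_BITS = 10 ** 15          # bits read per unrecoverable error (spec)
DRIVE_TB = 8                 # drive capacity in decimal terabytes

bytes_between_errors = URE_BITS / 8                  # bits -> bytes
tb_between_errors = bytes_between_errors / 10**12    # decimal TB
tib_between_errors = bytes_between_errors / 2**40    # binary TiB
full_reads = tb_between_errors / DRIVE_TB            # full-drive reads per error

print(f"One URE per {tb_between_errors:.0f}TB ({tib_between_errors:.2f}TiB), "
      f"i.e. roughly every {full_reads:.2f} full reads of the drive")
```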


For comparison, please see this article where WD and Seagate 4TB PMR drives are tested. On the whole, transfer rates seem fairly similar for this test.

Other Notes

The drive performed solidly during a month of testing, without any big heat or vibration issues. The one annoyance is its tendency to be “clicky/wheezy” when seeking, and to seek a lot because of the SMR band rewrites, so it sometimes sounds “busy” even when it shouldn’t be. It’s not that loud, and I could easily sleep with these running in my room, but you probably wouldn’t want one in an HTPC.

Because of the SMR difference, you should also be careful in the way you use the drive. It’s absolutely fine for reading, but write performance can take a serious hit under heavy random write access and is less consistent compared to PMR drives. It’s best suited to a single-user, mostly-sequential, large-file write scenario (where write rates of 69MB/s in small blocks, and up to 188MB/s in large blocks, are seen on the outer tracks). It isn’t ideal for multi-user NAS scenarios due to the hit that random writes cause (multiple shingled-band updates per write). However, if you load up a fixed set of data and mount the partition read-only, I see no reason why you can’t use it for multi-user read-serving, although as a “green”-style drive, you should expect it to be somewhat sluggish.

It’s also not well suited to real-time applications, such as video recording, surveillance monitoring or RAID systems. I also question its applicability to external enclosures, as some bridge chipsets may not be ready for an 8TB drive and will have their own sector translation rules as well. Aside from that, unsafe power-down and unexpected ejection could have data-loss ramifications given the extra level of caching in this drive, and some enclosures may spin the drive down on idle, reducing the amount of idle time available for it to reconcile its persistent cache into the shingled recording bands.

Despite these caveats, after thorough consideration of how most media-tank drives operate, the Archive seems ideal for large-scale media storage. The data rarely changes (if ever), and write operations are few and mostly large and sequential in nature. Media drives generally have lots of idle time and a read-mostly workload. The same is true of a back-up drive. These are ideal applications for a drive of this sort, and SMR technology has no real drawbacks when it comes to them.


Conclusion

The Seagate Archive series of SMR drives offers a lot of storage for an affordable price. As these drives are SMR, their performance characteristics do differ from those of regular PMR drives.

The drives are ideally suited to large data stores, where read access is most common and lower levels of write access are expected. They perform best when writes are large and sequential, as opposed to scattered small random writes, and need some idle time to ensure the bands are updated. Most home media applications fit this profile, as most media servers are idle much of the time and reading is the predominant operation.

That being said, even substantial rates of sustained writes can be handled by the drive, with benchmarks showing anywhere from 69MB/s on the outer tracks in small blocks up to 188MB/s in large blocks, so the drive isn’t much of a slouch at all. The read performance closely matches the write performance, and the disadvantages of SMR don’t show up in most of the regular benchmarks, partly because the shingled bands are relatively small and fast to rewrite.

Of course, due to the I/O inconsistencies and the specific operation of the STL, this drive is not recommended for real-time I/O (say, video capture) or RAID applications. I wouldn’t recommend external-drive applications either, partly due to possible sector translation issues, and because unexpected power-downs may cause data corruption. It also isn’t an ideal drive if noise is your concern, as it has more seek noise than expected due to the six-platter construction and the need to seek back and forth between the persistent cache and the shingled recording band areas. Heat was not an issue, however.

As for long-term reliability, only time can tell, and there is always a risk with using “first generation” hardware. That being said, I did not experience any issues at all during my month of testing, and the drive is based upon a modified form of PMR which we have “mastered” over the years. The three-year warranty and the enterprise-and-consumer targeting of the drive inspire some level of confidence, as the regular desktop drives get just two years.

I suppose this does make the Seagate Archive an attractive drive for those looking to back up large amounts of data or run a large home-media store, at least until the next technology comes about – which might be Heat-Assisted Magnetic Recording (HAMR). I’ve got a third drive on the way …

About lui_gough

I'm a bit of a nut for electronics, computing, photography, radio, satellite and other technical hobbies.
This entry was posted in Computing.

37 Responses to Review: Seagate Archive 8TB 3.5″ Internal Hard Drive

  1. AlienTech says:

    I think you need to do some real-world testing rather than depend on tools such as these to get a feel for how the drive would work. Last year I got a 3TB drive after reading about how great it was, and I already had many of the 2TB drives, some of which worked very well. I was led to believe the 3TB drive was based on the 2TB drives, since Seagate was phasing out so-called green drives etc.. Now, I started copying files, like 200GB to the drive, and after 10 minutes it started to drop in speed to 80MB/s.. then 50.. and stuck around those numbers. My 2TB drive would transfer at over 100MB/s and hit 150-180 on an empty drive. I tried many things to see if I could speed this up, without any luck. I also could not get any info on the firmware, but there were a couple of messages on the Seagate forum from others talking about the pathetic performance.

    So I decided not to use this in my desktop and only use it for backup. At 50MB/s it would take me days to copy the files. But then I noticed something else: when the power company switches their lines during peak/off-peak hours, sometimes the computer would reboot, even with a UPS connected. SMPSs with active PFC seem to have problems with this switching. But I paid no attention, rebooted and continued. Then later on, when I went to get a backup, I noticed the CRC failed. Looking at the file showed that half of it was filled with 0s, but these files were copied to the drive much earlier and there were no erratic shutdowns. I can see files getting corrupted if the drive is powered off while it is writing – with DOS this was a major problem, with chkdsk showing allocated fragments that belong to no one – but I never got any of these errors. If there is a power failure, the file you are copying would show up as a 0-byte file, so you delete it and recopy the file again. I have been finding files corrupted like this often enough.

    You cannot use SMR drives for backup due to these problems. They corrupt files that aren’t touched if there is any problem with improper shutdowns. And that’s common in a home environment, because kids or animals will knock the wires around, and that would be enough to trash your entire drive. No matter how careful you are, you can’t know which files might have got corrupted. You can’t keep checking terabytes of data to see if they are all valid. Data centres using climate-controlled and power-controlled environments can use them for cold storage, but not for personal use. As I found out, home UPSs are not like the data-centre UPSs, which are running 100% of the time and have no direct connection to the utility power supply – it all goes via the always-on UPS circuits.

    I don’t find SMR drives much cheaper. They don’t run cooler and they run slower. Their reliability is unknown. And I got cheated: Seagate decided to use the very same model number for a great drive and a crappy one, just like in the 2TB version, where you have like 3 drives with the same model number but only 1 is better than all the others. Now I pay extra if I have to and buy HGST or Toshiba. WDC also used this trick a few times – trick the customer into buying a crappy drive while sending the great drives to reviewers. Must have learnt from Kingston and the like.

    • lui_gough says:

      Sorry, but I just don’t buy your premise. You seem to have power issues related to a low-quality power supply unit in your computers more than anything. I know for a fact I have a higher than normal peak/off-peak switching signal where I am (measured 17-18V RMS when it should be 6V; I did complain to the power company but they just keep pushing me around), but over 20+ computers, none of them suffer any instability from that signalling. Some of my systems have over 200 days of uptime, and at least four of them are servers, none of which have suffered bit-rot and/or corruption or even spontaneous reboots. All are rebooted when I reboot them or when the power fails entirely. No UPSes are in use either. The hardware is fine if it is used within the manufacturer’s specifications, with some of my drives running 40,000hrs+ with healthy SMART data and monthly full surface read-back tests.

      Issues with write speed have been experienced, especially with HP Microservers, where the BIOS defaults to Write Caching disabled. This can be re-enabled under the drives in Device Manager and instantly cured some issues with write speeds experienced early on (but not published about) where the drive would write at 100MB/s+ and fall and stabilize near 25MB/s after a few days of consecutive writing. This issue mostly affects server-grade motherboards which disable write caching to avoid data loss during unexpected power removal.

      Needless to say, SMR drives DO NOT corrupt files that are not touched. It is not Flash memory, vulnerable to charge leakage, which is mostly blown out of proportion by the 840EVO’s TLC flash issues. If the tracks are properly laid down in alignment, there should be no issue. You CAN for a fact check terabytes of data to see if they’re valid – it’s called a checksum. You can MD5 hash your files and check them manually, or archive them in ZIP or RAR files which stores files with checksums and refuses to extract without ensuring the data is valid (and even add recovery records to fix bad data). If you’re particularly nifty, you can run them under Linux/FreeBSD using ZFS where the data is stored with checksums natively to guard against bit-rot and automatically periodically “scrubbed” (read back) during idle times to ensure the data has not “rotted”.
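      As a rough illustration of the manual MD5 approach mentioned above (the function names here are my own, not from any particular tool), a manifest of file hashes can be built once and re-verified later to detect silent corruption:

```python
# A minimal sketch of the "hash it yourself" approach: build a manifest
# of MD5 digests once, then re-verify it later to detect corruption.
# Function names are illustrative, not from any particular tool.
import hashlib
import os

def md5_file(path, chunk_size=1 << 20):
    """Stream a file through MD5 in 1MiB chunks, so huge files fit in RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(paths):
    """Map each path to its current digest."""
    return {path: md5_file(path) for path in paths}

def verify_manifest(manifest):
    """Return the paths whose contents no longer match the stored digest."""
    return [path for path, digest in manifest.items()
            if md5_file(path) != digest]
```

      The same idea scales to terabytes: the hashing runs at whatever rate the drive can read, and any mismatch pinpoints exactly which file rotted.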

      In fact, that’s exactly what H2testw does – it’s a program designed to test flash memory for fake flash – i.e. data corruption, missing data, and “moved” data. It is capable of detecting most corruptions, and it was able to verify the written data repeatedly. I also tested it by using a hex editor and going over every sector with random data, and then verifying it three times over. It’s been running with data recording to the drive and as a personal NAS for myself. I didn’t hang onto it for a month just leaving it spinning!

      I’ve caught instances of bad USB bridge chips causing data corruption in my past, and I’m generally fastidious with my data. If there are problems, of course, I would report them. But reviews and tests of this drive are few and far between, and users ARE interested in buying them, and they ARE reaching consumer channels, so it’s better to get something out there for people to nibble on. After all, compared to many reviewers, I’ve done my due diligence, testing two units over the course of a month, and continuing to test them now in real life usage.

      I highly doubt you have tested SMR drives yourself – there are very few commercially available SMR drives; aside from the Seagate Archive, you have the HGST HelioSeal 10TB drive, and that’s all I can see. Of course, they have their drawbacks, but depending on the use, it doesn’t matter. When on sale, 8TB SMR is AU$309, which works out to AU$38.625/TB. The best-priced “green” type drive on a cost/TB basis from our cheap computer shop is the WD 3TB Green, which runs AU$42.667/TB, and you need to go and pick it up. For the same amount of storage you need over twice as many ports with 3TB drives. If you want to get as close as possible without SMR, the WD 6TB Green costs AU$53.167/TB, which is significantly more. More drives also equals more power usage, so there are real benefits for data-nuts like myself.
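      The cost-per-terabyte comparison above is easy to reproduce (the WD drive prices here are back-calculated from the quoted AU$/TB figures, so treat them as approximations of the prices at the time):

```python
# Reproducing the cost-per-terabyte comparison. WD prices are
# back-calculated from the quoted AU$/TB figures (approximate).
drives = {
    "Seagate Archive 8TB (on sale)": (309.00, 8),   # AU$ price, capacity in TB
    "WD Green 3TB": (128.00, 3),                    # ~AU$42.667/TB
    "WD Green 6TB": (319.00, 6),                    # ~AU$53.167/TB
}

cost_per_tb = {name: price / tb for name, (price, tb) in drives.items()}
cheapest = min(cost_per_tb, key=cost_per_tb.get)    # best value per TB
```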

      The differences in drive speed between models do happen from time to time, partly due to media-head match differences, and changes in geometry due to the number of platters/head as they move up to higher capacity platters. Generally, the differences aren’t too big. If there are big discrepancies, it’s likely the drive is faulty – maybe having difficulty with its servo data, excessive interface CRC errors due to cabling/communications problems or power issues. At the moment, the average sequential write rate is faster than a GbE connection, which is sufficient for most users needs.

      – Gough
      (A guy that’s owned a pile of hard drives, and has been generally lucky with most of them, regardless of the brand.)

    • sparcie says:

      I agree with GoughLui, you definitely have issues with your power supply. The filter caps in your PSU should stop any signals from off-peak energy switching, and a SATA hard disk is not capable of causing a hard reboot. Also, I think you were probably unlucky and got a DOA or marginal drive; the sudden drop in write speed is consistent with a failing drive. Even with proven reliable PMR drives this happens occasionally.

      I think the tools Gough uses are fine, and probably actually represent a higher demand on the drives than most normal use ever could have. I can’t imagine any use that isn’t represented by one of these benchmarks in some capacity. It is my opinion that I’d rather see benchmarks like this in numbers rather than a subjective opinion, it gives a better indication of performance and for what sorts of tasks they would perform well. They are also a better indication of drive health than any subjective opinion as well.

      As for reliability, well, that is something you’ll only find out if you work for a local computer supplier and handle a large number of the drives. Any new hard disk could suffer a fault, so it is not sufficient to test only one or two. Individuals are unlikely to ever see enough drives to make any kind of determination.

      I don’t know how useful these will be for data centres either as they almost certainly can’t be used for RAID and have quite variable performance. RAID controllers will kick drives out of an array even if the SMART says it is good, if the drive takes longer to respond to commands than it should. Maybe they could be used as a replacement for tape backup, but they wouldn’t be suitable for the high IO throughput of a data centre server.

      Synnex repacked the drives before sending them to you. Normally hard disks are packaged in boxes of 20 by the manufacturer and sealed with their factory seal. They are always packed quite well, with lots of styrofoam custom-designed for the purpose. Where I used to work, we’d order the drives from Synnex by the box of 20, and they would just send on the box directly from the manufacturer. Obviously, any smaller orders require workers at Synnex to repack them to save on freight.


      • lui_gough says:

        Thanks for the reply, and agreed on most points – I think even those who handle many drives aren’t that well placed to make a reliability judgement because they will see only the infant-mortality phase failures during burn-in testing. In the case of the shop, you might have possibly a year or two “annual-failure-rate” style values depending on their warranty periods and whether users send drives directly back to manufacturer for RMA or not. Regardless, by then, it would be too late and the drives would either be off the market or superseded in capacity. Often it means just taking the plunge and hoping that they survive.

        That being said, I have two, and I’ve got a third on the way. That definitely says something :).

        I agree with the idea that the synthetic benchmark loads are much higher than client workloads for the most part – in my first month, I’ve done over five full surface writes and at least as many full surface reads. For a drive specified for a 180TB/year workload, I’ve already put it through 80TB in a month. And I’m still going. Most people won’t even do a full write/read surface test on their hardware and wonder why the drive fails a little bit down the track (as it may have been a dud drive damaged in transit that could have been caught prior to commissioning).

        That being said, I’m not inclined to trust the raw figures for different I/O sizes as the drive itself has its own multi-level cache and management strategies which make those numbers misleading, as they can only be sustained under certain conditions for a very short time. The sequential figures are much more reliable though. These are all caveats I have noted in the article, just for completeness.

        I think the target in data centres is for cloud “cold” data stores and MAID applications where disks are “soft-raided” or combined into a large “clustered” file-system using some filesystems with intelligence. Generally soft-raid is more forgiving of occasional slow-responses, and can be configured to be somewhat more tolerant, especially if NOT used for striping (which would be silly on these drives). I think these might target large cloud back-up drive pools where storage per port/rack space is a premium, power and heat are a concern, but throughput is generally limited, large and sequential with much idle time between subsequent backups. Even guys like Backblaze would probably be extremely happy with these sorts of drives – and maybe even Google, as they’ve done a lot of work with distributed failure-tolerant filesystems.

        Even for me, speed is not a major concern if it’s used for archival. Maybe you’re archiving 10GB of old surveillance footage onto the drive every day to save it from being “thrown out”, but it wouldn’t be the end of the world if you lost it. It’s much “cheaper” and more cost-effective, time-wise, as a replacement for optical media, which is molasses-slow. In fact, I think it would be perfect as an optical media replacement – say, a home media server, or a home centralized back-up NAS for a very small number of users.

        I have seen the styro carriers you speak of, at my regular local computer shop, and I had half-a-box when I ordered 8 drives once before. That being said, whoever at Synnex did the repacking of these drives really needs to be taught a lesson, since one layer of bubble wrap around the drive, loosely placed in a box that “rattles around” with another drive to clatter into, is HARDLY safe. If I recall correctly, the RMA procedure mandates at least 1-2 inches of padding all-around – sending a new drive with less padding than that just sounds hypocritical.

        Thanks for taking the time to comment – much appreciated,


  2. AlienTech says:

    Not sure what people are reading; maybe they are reading what they want to read, or interpreting it as what they want it to be.

    Yes, I have a few dozen hard drives currently and have used hundreds of hard drives over the years. And since I have seen others having the very same problems and results as me, I think I will stick with my own opinions. Just because you have not been through the same circumstances and hence did not have the same results does not mean that it will not happen. So good luck to you and your supporters. I don’t usually bother commenting, because I think everyone needs to get experience the hard way or it is pointless. Just like I learnt the hard way not to rely on other people’s experiences.

  3. Inflex says:

    I’m *very* curious about the write-performance graph given here by the review.

    I have an 8TB Seagate SMR archive drive, and the first 15~20GB contiguous sequential write goes very nicely at anything between 150~120MB/sec, but then it falls apart, rapidly, down to 1~30MB/sec. It’s not just me, there’s reports all over the place and other reviews showing their write speed graphs being shot to hell… which is why I’m wondering how *this* review had such a nice write graph ( multiple point samples? )

    Fortunately there’s some good work being done by Seagate with drivers to specifically treat SMR drives differently in order to sidestep some of the worst issues ( ).

    A great drive for large storage and reading of data, but be prepared for a lot of waiting to fill that up if you’re trying to migrate from existing drives ( upwards of 60 hours! ).
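    That “upwards of 60 hours” figure is consistent with some back-of-envelope arithmetic. A rough sketch (the 37MB/s sustained rate is an illustrative assumption for a drive deep in its degraded write phase, not a measured value):

```python
# Hours to fill a drive at a given sustained write rate.
def fill_time_hours(capacity_tb, rate_mb_s):
    capacity_mb = capacity_tb * 1_000_000  # drives are rated in decimal terabytes
    return capacity_mb / rate_mb_s / 3600

print(round(fill_time_hours(8, 150), 1))  # best-case sequential: 14.8 h
print(round(fill_time_hours(8, 37), 1))   # assumed degraded SMR rate: 60.1 h
```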

    This drive really is best suited for acquiring new data at a slow rate that’ll be read many times over ( perhaps video/webcam archiving and data stream logging ).

    • lui_gough says:

      Have you tried checking your write cache settings? I had the same issues you are describing, but they turned out to affect my PMR drives attached to the same system as well. As it turned out, my HP Microserver’s BIOS defaults to Write Cache OFF, which causes these issues. As an owner of three HP Microservers, I am used to this issue, and I noted it in my last comment.

      Navigate to Device Manager -> Disk Drives -> ST8000AS 0002-1NA17Z SATA Disk Device and double click. Click on the Policies tab. Your first box “Enable write caching on the device” should be ticked and you should see the same performance as I do after making this change.

      Write Caching Policies

      The test was for the full surface, as configured in HDTune Pro. It is NOT point samples. I am quite sure this is the problem. On the HP Microserver, you can also change the default for Write Caching in the BIOS, which has the same effect; this is especially relevant for OSes other than Windows.

      Note that my THIRD drive has arrived, it also passed H2testw full surface write and readback at over 120MB/s for the full surface. The drive IS NOT a bad drive.

      – Gough.

  4. Inflex says:

    Write caching is off.

    Model=ST8000AS0002-1NA17Z WriteCache=disabled

    Can you do a test of writing a 50~100GB file to the drive with nothing else accessing the drive?

    This is a graph of a fairly typical run ( coincidently, also on a HP Microserver, with the hotswap mod’d BIOS, I use these machines for HDD datarecovery )…

    (output data generated from ddrescue -s 50G --log-rates=50G.rates -v -f /dev/zero 50G )

    Waiting to hear back from Seagate on the SMR EXT4 patch; once I’ve reformatted and tried with their modified-ext4 it should be interesting to compare.

    • lui_gough says:

      Write caching should be turned ON. If you have it turned off, you will have throughput issues on SMR and PMR (regular) hard drives since the CPU from the microserver is being over-taxed by the constant I/O interrupts. There is no good reason, at least for consumers, to run with write caching off as the performance hit is too big.

      I am running a HDTune Pro test for another follow-up posting to show the result when Write Caching is disabled – so far, it’s averaging about 27MB/s.

      – Gough

    • lui_gough says:

      I think you should also test it with your “regular” non-SMR drives and even with an SSD with write caching off. I’ve found declining rates and throughput problems even with those when write caching is turned off. Turn write caching on in the Microserver BIOS and you should see much better performance – it’s probably much quicker for you to do it and try it for yourself!

      – Gough

      • Inflex says:

        Right… brain fart there ( can I blame Sunday evening, and lack of sleep?) I’ll go try with caching on, though I must admit, I’m somewhat disinclined to run like that for the usual reasons.

        I’ll report back later.

        • lui_gough says:

          Dear Inflex,

          While you might be disinclined to run like that because of the regular “power loss” data loss issues, rest assured that *pretty much every system you build* runs with Write Caching ON by default, and back in the 80386 days with smartdrv, it was one of the first optimizations we turned on and left on. It’s not anything new, and you *should* always be running with it on. If you don’t believe me, try checking some of your other computers – I know all my Asus and Gigabyte boards turn it on by default.

          I think the difference in performance and CPU loading will have you change your mind ;).

          – Gough

          • Inflex says:

            Good news and bad news. With the cache on (showing in BIOS, and confirmed with hdparm), the writing went very nicely until 39GB, then the drive froze up (the system was still responsive) until it offlined itself and I had to go in and manually remove and reinsert the drive.

            Also interesting, the average transfer rate was more stable (makes sense) though a bit lower (~110MB/sec).

            During the lockout the CPU load was barely over 2%.

            Have used these HP Microservers for a couple of years now, haven’t encountered issues with the normal HDDs (2, 2, 2 and 3TB installed).

            Not sure what’s going on here at all yet. Maybe it’s a linux ext4 fs issue, maybe it’s something else. Investigations will continue.

          • lui_gough says:

            Thanks for letting me know. That is an interesting outcome – what do the SMART attributes look like? I haven’t had an issue with three drives under Windows 7, write cache on, using them as a NAS for bulk backup (about 3.5Tb transferred onto it in a day) and a Satellite DVB-S2 “record” drive + surveillance archive for three cameras.

            Just out of interest, which distro, and which version Microserver? Any dmesg output? Could be a buggy NCQ implementation, controller error (stab in the dark here)?

            – Gough

          • lui_gough says:

            Just for interest sake, I decided to pull one of my three drives from NAS service to explore performance with write cache off.

            This is what it looks like when running a Write benchmark with the cache turned off under Windows 7, so far about ~200Gb written – which I think corresponds well with your experience when the drive has “passed” its “fast” phase. My bench doesn’t show a fast phase, as the drive was just “written” to and then had the partition removed, so was probably still busy reconciling the SMR bands when the benchmark was started.

            HD Tune Result

            The difference between write cache on and write cache off is shown in this bench, when I started with write cache off after installing the new drive, realized it, then turned it on just a few minutes after I started the bench.

            Write cache on vs off

            For your sake, I hope you can get it working!

            – Gough

          • Inflex says:

            Alas, something has gone colossally wrong since trying the BIOS write-cache switch. The 8TB drive now partially comes to life, but gives me (among others) this error in syslog;

            >>>> softreset failed (1st FIS failed)

            Even when I change to another computer and an eSATA port dock, and now won’t mount or let me do anything further with it;

            >>>> link online but 1 devices misclassified, device detection might fail

            I think tomorrow I’ll have to start an RMA process.

            Thanks anyhow.

          • lui_gough says:

            Sheesh! Sorry about that! That’s an outcome I couldn’t have predicted.

            Something really bad seems to have happened and your drive isn’t responding to SATA commands as it should, so for some reason, there’s a physical interface issue with your drive in just getting its ID and configuration. I don’t think the write caching can affect that, because it’s just a memory buffer setting with the SATA controller on the motherboard, and shouldn’t cause any persistent damage to hardware. Maybe you had a dud drive, or somehow, these drives are more fragile than I thought.

            Does your BIOS recognize the device ID correctly? If it doesn’t, it may be a failure of the drive in reading its firmware, and that would require an RMA.

            – Gough

          • Inflex says:

            I wouldn’t fret. This sort of thing indeed shouldn’t happen at all, so something was “wrong” somewhere. I’ve sent you an email. All good.

  5. Why do you say these drives are not suitable for video recording? Are you recording extraordinary high quality video? I have one I use to record TV with Windows Media Center. It can record two channels at once and simultaneously play back a recorded program with no problems at all. That should all easily fit within 30MB/s. Or even 3MB/s.

    • lui_gough says:

      There is no issue with recording video, say using a computer, for a small number of channels or users, especially when talking about compressed video. I’m doing exactly that myself, and I find it to work satisfactorily provided you aren’t recording so many files simultaneously that the drive has to seek about a lot.

      What needs to be remembered is that the drive has a *certain* level of sequential performance, but once you start having I/O across the drive (i.e. read and writes in different places), because of the seeking and buffer-size limitations, the drive’s performance can easily plummet to much lower levels. Even a conventional drive capable of 200MB/s sequential write slows to merely 10-20MB/s once you start hitting it with three or four writes across the surface. The Archive drive places even more stringent requirements on head positioning owing to the narrower tracks, and would be expected to have more of a hit from it.
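      The scale of that collapse can be sketched with a crude seek-overhead model (the chunk size and reposition time below are illustrative assumptions, not measurements of this drive):

```python
# Crude model: interleaved writes force a head reposition (seek plus
# rotational latency) after every small chunk, gutting the sequential rate.
def effective_mb_s(seq_mb_s, chunk_mb, reposition_ms):
    transfer_s = chunk_mb / seq_mb_s
    return chunk_mb / (transfer_s + reposition_ms / 1000)

print(round(effective_mb_s(200, 0.5, 20), 1))  # 0.5MB chunks, 20ms repositions: 22.2 MB/s
```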

      What I mainly meant is that it is not suitable for recording video in a surveillance system, say multi-channel stand-alone units. These are ideal areas to target with large hard drives, but since the embedded systems have strict timing expectations and limited buffer sizes, frame drops are indeed very likely if the drive “lags”. Further to that, the amount of seeking required to work with the drive during write and playback might increase the wear on the drive depending on how sectors are allocated.

      We’re still in early days of SMR technology, so some level of experimentation is required. Sadly some users have been reporting compatibility issues with the drive and suffering from slow throughput – but I’ve got to say that I’m quite happy with mine now that I have four of them!

      – Gough

      • lui_gough says:

        Oh, and I forgot to mention, there was another reason behind the original mention of video – capturing and editing uncompressed or high-resolution/depth video with this drive isn’t particularly recommended either. When you’re doing more professional video, or working with high rate uncompressed captures from analog sources, you will easily consume ~30MB/s or more consistently, and when the drive pauses for 0.6s to update a shingled band, it might cause a few frame drops, especially where your buffering is insufficient. But high rate systems are often where you’d like high capacity, which is why you should be a little careful when dealing with SMR drives, as they won’t perform identically to the PMR drives we are used to today. However, for many “home” applications, you won’t notice the difference.

        – Gough

        • I think you need to look at video rates. HDV is only 3.1MB/s.

          • lui_gough says:

            It’s not the rate but the timely response of the drive. If your buffering isn’t sufficient (e.g. you have <1Mb of write buffer), then even a 300ms stall in the write will cause data loss. Don’t laugh, some software and hardware have atrocious buffering. The other issue comes down to compression. HDV is compressed to fit inside the “miniDV” 25Mbit/s rate limit. Look at more “RAW” formats like RED – over 600Mbit/s is indeed quite possible, as it is for high speed camera captures, which is where large storage is desirable.

            Even RGB24 standard def 720x576 at 25fps gives you 720*576*24/8*25 = 31.104MB/s. I have done some VHS captures in this format myself, which is a little worse once you add audio and container overheads, and any delay in the storage subsystem will cause frame drops. If your CPU is decent, of course, you can use a lossless codec like FFV1/Huffyuv and drop that down to about 8MB/s, but it’s still quite demanding.
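            That arithmetic can be checked directly (figures as quoted in the comment):

```python
# Uncompressed video data rate: width x height x bits-per-pixel / 8 x frames-per-second.
def raw_rate_mb_s(width, height, bpp, fps):
    return width * height * bpp / 8 * fps / 1_000_000

print(raw_rate_mb_s(720, 576, 24, 25))  # RGB24 SD PAL: 31.104 MB/s
print(25 / 8)                           # HDV's 25Mbit/s cap: 3.125 MB/s
```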

            [Note that testing of the write speed without write buffer on the drive showed it to be incapable of even sustaining 30MB/s - see – so this is especially relevant for systems with write buffer turned off, or not turned on as a deliberate decision]

            Offline edits can tolerate some delay, having worked losslessly at 4k myself, that’s not an issue because offline editing doesn’t really care so much for real time.

            The main reason I wouldn’t recommend working with real-time video on the drive is the same reason I wouldn’t recommend using the drive in a RAID array – I/O consistency. Simple as that. The shingled magnetic band structure means that there will be delays inherent in the drive that don’t exist in PMR, due to the need to rewrite whole bands to simulate random write access on a drive whose format can’t actually do random writes. Each single band update can take 600ms to commit, and if you have, say, a file-system band update and a storage band update back to back, you can be without the drive for 1.2 seconds as it “sorts itself out”. As it’s drive-managed, you can’t control when these updates happen – so real-time software will need decent buffering to cope with that.
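            The buffering requirement implied by those stalls is easy to put numbers on (the data rates are the examples used earlier in this thread):

```python
# Write buffer needed to absorb an SMR band-rewrite stall without losing data.
def min_buffer_mb(rate_mb_s, stall_s):
    return rate_mb_s * stall_s

print(min_buffer_mb(31.104, 1.2))  # uncompressed SD capture, back-to-back stalls: ~37.3 MB
print(min_buffer_mb(3.125, 0.6))   # HDV stream through a single 600ms stall: ~1.9 MB
```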

            – Gough

  6. Serge says:

    Hi Lui,
    Thanks for the informative article. You mentioned questioning the drive’s applicability to external enclosures. It would be great to know how others are dealing with this drive in an enclosure or docking station. I’m struggling to make this drive work through my docking stations (e-SATA). Tried it with a XIGMATEK (Model: SI-8808-SUS/US) which only shows me 1.4TB of space, while another (no-name) dock actually let me convert it in DISKPART to GPT, and the drive shows 7452 GB. Unfortunately I am not able to format it (tried a number of programs, from Windows 7 Disk Manager to Paragon Hard Disk Manager 15). After 18-20 minutes from starting to format the drive, my laptop (Sony VAIO e-series, Win7 x64) simply shows me a blue screen and restarts. Not even sure which docking station would be suitable for this drive; possibly the issue might be with using this drive with my laptop as well. But anyway, if you come across any enclosures that support this drive, it would be great to hear from you. Best regards. Serge

    • lui_gough says:

      Unfortunately, the four units I have are full-time deployed in my microserver, so I won’t be pulling the array apart for testing. However, there are some Seagate 8Tb USB 3.0 units which are pre-built and likely certified as working by Seagate themselves. These are likely to have the 8Tb Archive drive inside, as there are no real consumer 8Tb options otherwise.

      – Gough

  7. DAVID says:

    Does anyone know whether this 8Tb HDD can be made to run in an iMac under Yosemite?
    For now, it is serving me as a deluxe paperweight.


  8. DAVID says:


  9. Sorry, but I have a basic question that doesn’t seem to have been answered. Can I install this drive in my Windows 8 PC and have it behave just like my other 3TB and 4TB drives (however slowly or inefficiently), or do I need to install special software or make other changes to my setup? Will I plug it into a SATA slot on the motherboard and see an 8TB drive?

    • lui_gough says:

      Sorry but your basic question is one that depends on several variables which are not possible to guarantee, as incompatibilities can sometimes arise due to older drivers or BIOSes. As a result, I don’t give such blanket statements in general.

      As a rule of thumb, as long as you’re running a 64-bit OS, partition the drive as GPT, and have a free bay, a SATA power connector and an appropriate SATA port from a relatively modern chipset, you should be fine. That’s why we have standards, after all, although some vendors may take some liberties with them. Enabling write caching, where not automatically done, should resolve most performance issues.
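      The GPT requirement is not arbitrary: MBR stores partition sizes in 32-bit sector counts, so with the 512-byte logical sectors of this 512e drive it tops out well short of 8TB. A quick check:

```python
# MBR's 32-bit LBA/sector-count fields limit addressable capacity with
# 512-byte logical sectors to about 2.2TB (decimal) - far short of 8TB.
mbr_limit_bytes = (2**32) * 512
print(mbr_limit_bytes / 10**12)  # 2.199023255552 TB
```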

      – Gough

  10. OK, thanks. Appreciate the really fast answer! If you’ll indulge me some more, here are the specs (it’s a relatively modern gaming machine – won’t bore you with non-relevant info though).

    CPU = i7 4790K
    MB = MSI Z97 GAMING 9AC (socket 1150)
    Windows = 8.1 64BIT

    Various drives are attached to the SATA ports on the motherboard. I don’t partition anything; I just connect the drives and I see them in File Explorer – such is Windows nowadays, I guess – making a dummy out of most of us. I just want to unplug an older 1TB drive and replace it with this 8TB drive if it’s easy 🙂

    • lui_gough says:

      That should work. It’s extremely modern by any standards. Just make sure when you partition the drive (even as one partition) inside “Create and format hard disk partitions” under Windows, you choose GPT, otherwise you will not see the full capacity of the disk.

      – Gough

  11. Nigel Haslam says:

    Dear Lui,

    Thanks for this article.

    I just built a new workstation and purchased a couple of these 8TB drives because I’ve started shooting and editing 4K video and found my Gigabit ethernet network simply can’t cope with the data volume. So my cunning plan is to use these new drives as internal storage, which used to be handled by a trusty HP microserver running Freenas.

    While the 8TB Seagates are for storage and backup, I have a pair of Samsung 950 Pro nvme PCIE drives doing all the heavy lifting for the editing.

    Now I’ve read your article, I’m wondering what the most reliable way to implement this storage might be. My Freenas server was built around 1TB drives with half hourly RSYNC backups from my working drive to the backup drive. Up til now I have preferred this simplicity to running any form of RAID ( I don’t relish the idea of having to identify which drive has gone down and waiting for a rebuild)

    However, I was just starting to think that I might try out a RAID 1 (mirror) on the 8TBs, because I don’t know of any windows software tools to replace the RSYNC of Freenas.

    Now I’ve read that RAID isn’t well suited to these drives I wonder if you have any suggestions?

    I’m also moving from Win7 Pro to Win 10 Pro with this new workstation, so perhaps there are some new robust sync tools that I have access to?

    To be crystal clear, my plan is to have:-

    current editing project on a Samsung 950 Pro

    copying over to one of the 8TBs as main storage (I can only hold one project in editing at a time on my 512 GIG Sammy, while I usually have several other projects at different stages of production in storage),

    then copying again to the second 8TB for archiving and safety.



    • lui_gough says:

      Dear Nigel,

      I would not recommend any form of RAID on the Archive drives, as they have inconsistent I/O behaviour, especially under random-write workloads, which is very likely to cause problems. If you want RAID, then you should just suck it up and pay for a proper 6Tb PMR drive like the WD Red and live with the slightly reduced capacity.

      The Archive is not particularly aimed at fast-writes either – some users especially with ‘dirty’ drives and heavy write workloads find the drive slows down significantly even with write caching on – something like 30-100MB/s write per drive as a result, so again, if you’re interested in performance, I wouldn’t go with Archive drives. They are best for where you are not time sensitive on writing and you are expecting to have a read-mostly workload.

      As for synchronization tools, I don’t have any recommendations myself – although you could try upgrading to a 10GbE setup if you want to retain your prior set-up and improve throughput. It will be a potentially costly upgrade – but that’s the only way I can think of short of silly solutions like external USB 3.0 RAID boxes which I’ve had terrible experiences with.

      – Gough

    • David says:

      Nigel, I hope you have found a solution to your problem. I, too, have ignored RAID for the same reason as you. I’ve been using GoodSync for many years to sync a master drive to a work drive. It’s highly configurable and reliable. Synchronization can get very complex when you have a large drive with many files. I suggest you give GoodSync a try.

  12. Nigel Haslam says:

    Thanks heaps Lui,

    I did look into 10GbE (too expensive as you say) and multiple NICs (too complex to set up) but while researching those I came across Thunderbolt 3 interface, which my Gigabyte Z170 Gaming 7 board supports, so I’m hoping that there will be some TB3 capable external drives or NAS boxes coming out in the near future.

    Given your advice I will endeavour to use the 8TBs as raw camera footage / project backup and final archiving. I’ve got heaps of smaller drives that I can use for daily use.

    Thanks again


  13. David says:

    It’s a bit late in the game, but I have a couple of comments.

    First, my application fits these drives perfectly. I store very large (4-20 GB) video files, and once they are written it becomes strictly a read-only situation. So the drives only see sequential writes. I maintain a master drive and sync to the work drive every so often. When this happens I see a real-world transfer rate of 150 MB/sec, by far the highest rate I’ve ever seen with files of this size. With file transfers of this size on typical WD drives, I observe transfer rates that start out around 100 MB/sec and end up at around 75-85 MB/sec. This is the case even on new drives.

    I’ve also had the experience of poor packaging. I live in Thailand, but the distributors here are overpricing these drives to the retailers, usually by about $80 U.S., as compared to the price on Amazon. (Even though these drives are manufactured in Thailand!) Although Thailand has high import tariffs, usually hard drives are within about 5-10% of Amazon’s price.

    So I ordered my first two drives from Amazon and had them shipped to a friend in the U.S. He shipped the drives to me along with some other stuff, so they were packaged properly.

    However, my next order for two more drives came “direct” from Amazon. In reality, Amazon has a separate channel for this type of thing, but I’ve never had a problem with it. The drives arrived packaged in separate little foam surrounds, but also were simply thrown into a large box with no other cushioning. Thus they were bounced around during the entire shipment.

    As you might expect, both drives arrived unusable. Of course Amazon gave me a refund, but as I had them shipped to a foreign country, I had to pick up the return shipping cost.

    I’m sold on these drives for my specific application, as HD video files are typically around 8-12 GB each. It eats drive space at a phenomenal rate, at least for consumer use.
