Seagate’s first shingled hard drives now shipping: 8TB for just $260




Thank you.

 

Extended self-test routine
recommended polling time:     ( 950) minutes.

~ 15:50 for a long test.

 

This is what I was looking for in comparison to the preclears.

== Last Cycle's Pre Read Time  : 19:51:11 (111 MB/s)
== Last Cycle's Zeroing time   : 16:11:00 (137 MB/s)

 

Looks about right and that would be approximately a best case scenario for a parity check/sync.
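For anyone checking their own drive, that figure comes straight from the SMART capabilities data, and 950 minutes works out to 15:50. A quick sketch, assuming the Archive drive sits at a hypothetical /dev/sdX:

# Show the extended self-test's recommended polling time from the SMART data
smartctl -c /dev/sdX | grep -A1 "Extended self-test"

# Convert 950 minutes to hours and minutes
echo "$((950 / 60)) hours $((950 % 60)) minutes"   # 15 hours 50 minutes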

 

Just for anyone who is curious, my parity sync just finished and it took 20.5 hours total.

 

I have to say so far I'm pretty impressed with this drive, especially considering it only cost me $32.50 per TB. I always write to the cache drive and never write directly to the array so the persistent cache will never be an issue for me.


... I always write to the cache drive and never write directly to the array so the persistent cache will never be an issue for me.

 

True.  It's very clear at this point that the mitigations Seagate has done in the firmware to "hide" the limitations of the shingled writes work VERY well ... especially in a typical UnRAID use case.  My next array will almost certainly be a bunch of these drives [perhaps the 10TB versions by the time I build another system  :) ]

 


... I always write to the cache drive and never write directly to the array so the persistent cache will never be an issue for me.

 

True.  It's very clear at this point that the mitigations Seagate has done in the firmware to "hide" the limitations of the shingled writes work VERY well ... especially in a typical UnRAID use case.  My next array will almost certainly be a bunch of these drives [perhaps the 10TB versions by the time I build another system  :) ]

 

Speaking of 10TB, did you see the Hitachi (?) 10TB drive that uses shingled media? The difference with that drive is they claim it requires OS-level support and that Linux will be receiving an update to support the drives soon.

This has me wondering just what sort of support it would require: whether it's minor tweaks to do write ordering, or a more complex means of attempting in the OS what Seagate does in their firmware.

I am curious to see how that 10TB drive will perform with and without the OS changes.
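For context, that OS-level support is what eventually became Linux's zoned block device handling: the drive exposes zones that must be written sequentially, and the kernel or filesystem takes over the write-ordering job that Seagate's firmware does internally. A rough look at what that exposes, on a reasonably recent kernel and util-linux, with a hypothetical zoned drive at /dev/sdX:

# Is the drive seen as a normal, host-aware, or host-managed (zoned) device?
lsblk -o NAME,SIZE,ZONED /dev/sdX

# List the drive's zones: start, length, write pointer, and state.
# Host-managed zones can only be written sequentially at the write pointer.
blkzone report /dev/sdX | head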


... I always write to the cache drive and never write directly to the array so the persistent cache will never be an issue for me.

 

True.  It's very clear at this point that the mitigations Seagate has done in the firmware to "hide" the limitations of the shingled writes work VERY well ... especially in a typical UnRAID use case.  My next array will almost certainly be a bunch of these drives [perhaps the 10TB versions by the time I build another system  :) ]

 

 

Has anyone done a 'red balled' rebuild test?

That's the final test people are awaiting.


Speaking of 10TB, did you see the Hitachi (?) 10TB drive that uses shingled media? The difference with that drive is they claim it requires OS-level support and that Linux will be receiving an update to support the drives soon.

This has me wondering just what sort of support it would require: whether it's minor tweaks to do write ordering, or a more complex means of attempting in the OS what Seagate does in their firmware.

I am curious to see how that 10TB drive will perform with and without the OS changes.

 

Love this quote I just read.

 

Unlike typical air-breathers, HGST's 10TB monster is hermetically sealed and filled with helium.

 

Anyone know when they're gonna make hard drives filled with nitrous oxide?  ;D


Stop array. Unassign disk. Start array.

Got it, thanks. Now another question - in what configuration would you like me to do it, and what filesystems to put on data disks?

Available configurations :) :

1. Parity 4x2TB RAID-0, two (or one) 8TB Archives as data drives.

2. Parity 8TB Archive, two (or one) 8TB Archives as data drives.

 


Stop array. Unassign disk. Start array.

Got it, thanks. Now another question - in what configuration would you like me to do it, and what filesystems to put on data disks?

Available configurations :) :

1. Parity 4x2TB RAID-0, two (or one) 8TB Archives as data drives.

2. Parity 8TB Archive, two (or one) 8TB Archives as data drives.

The request came from Weebo. I am not quite sure.


Stop array. Unassign disk. Start array.

Got it, thanks. Now another question - in what configuration would you like me to do it, and what filesystems to put on data disks?

Available configurations :) :

1. Parity 4x2TB RAID-0, two (or one) 8TB Archives as data drives.

2. Parity 8TB Archive, two (or one) 8TB Archives as data drives.

The request came from Weebo. I am not quite sure.

No problem. The test is already about an hour in, configuration: parity 8TB Archive, two 8TB Archives as data drives, XFS (default) filesystem.

 

If a different configuration is wanted, I'll do that too. I'm interested myself in testing these, and you guys make much more sense out of the testing numbers than I would... so I'll try to provide the numbers and then read your analysis :)


Stop array. Unassign disk. Start array.

Got it, thanks. Now another question - in what configuration would you like me to do it, and what filesystems to put on data disks?

Available configurations :) :

1. Parity 4x2TB RAID-0, two (or one) 8TB Archives as data drives.

2. Parity 8TB Archive, two (or one) 8TB Archives as data drives.

 

Use whatever build you have to rebuild an 8TB Archive data drive.

The filesystem and the data on it do not matter. However, don't use live data if you don't have a backup.

If you can do parity with the 4x2TB RAID-0, then by all means be our guinea pig  ;)

If you can do parity with a single 8TB Archive drive, that would help just as well.

If you are adventurous, you can do both and the community will be that much more informed about it.

 

The goal is to time a real-world rebuild.

This will prove/disprove the speed penalty on rebuilds.

 

I theorize that the rebuild will be close to a parity sync time.


... I theorize that the rebuild will be close to a parity sync time.

 

Absolutely agree => I can't think of any reason the rebuild would be done any differently than simply making a serial pass through the sectors and writing the appropriate data to the drive being rebuilt.  As long as that's the case, it's simply a long sequential write of the entire disk, so the writes will be recognized as full-band writes and there won't be any band rewrites, or even any use of the persistent cache.  I think the only UnRAID use case where there might be write deterioration is a long series of random writes to multiple disks that results in a lot of "jumping around" on the parity disk ... and even that would have to involve over 25GB of data so that the persistent cache fills, so it'd probably require writes that were moderately large, but not quite large enough to eliminate band rewrites.
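One way to probe that theory without waiting on a rebuild would be to compare a big sequential write against a sustained random-write load that overflows the ~25GB persistent cache. A sketch only, assuming fio is installed and /mnt/disk1 is an Archive data disk with ~100GB free; since the writes go through the array, parity overhead is included in what you measure:

# Sequential write: should stay as full-band writes, no persistent cache involved
fio --name=seqwrite --filename=/mnt/disk1/fio-seq.tmp --rw=write \
    --bs=1M --size=50G --direct=1 --ioengine=libaio --iodepth=4

# Random writes well past the ~25GB cache: should eventually force band rewrites
fio --name=randwrite --filename=/mnt/disk1/fio-rand.tmp --rw=randwrite \
    --bs=64k --size=50G --direct=1 --ioengine=libaio --iodepth=4

# Clean up the test files afterwards
rm /mnt/disk1/fio-seq.tmp /mnt/disk1/fio-rand.tmp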

 


... I always write to the cache drive and never write directly to the array so the persistent cache will never be an issue for me.

 

True.  It's very clear at this point that the mitigations Seagate has done in the firmware to "hide" the limitations of the shingled writes work VERY well ... especially in a typical UnRAID use case.  My next array will almost certainly be a bunch of these drives [perhaps the 10TB versions by the time I build another system  :) ]

 

Speaking of 10TB, did you see the Hitachi (?) 10TB drive that uses shingled media? The difference with that drive is they claim it requires OS-level support and that Linux will be receiving an update to support the drives soon.

This has me wondering just what sort of support it would require: whether it's minor tweaks to do write ordering, or a more complex means of attempting in the OS what Seagate does in their firmware.

I am curious to see how that 10TB drive will perform with and without the OS changes.

 

Yes, I've read about the OS-level drives.  Basically this simply moves the mitigation of the shingled-write issue from the drive to the OS.  This COULD, if implemented well, work even better than the firmware mitigation on the Seagates.  The mitigation strategy could be application-dependent, and can be easily modified for various use cases.  On the other hand, drive-level mitigation eliminates any dependence on OS-level drivers ... so there are tradeoffs in both directions.

 

Considering how well the Seagate mitigations work, it'll be interesting to see if the OS-level mitigations in these drives can do even better.

 


Stop array. Unassign disk. Start array.

Got it, thanks. Now another question - in what configuration would you like me to do it, and what filesystems to put on data disks?

Available configurations :) :

1. Parity 4x2TB RAID-0, two (or one) 8TB Archives as data drives.

2. Parity 8TB Archive, two (or one) 8TB Archives as data drives.

 

Use whatever build you have to rebuild an 8TB Archive data drive.

The filesystem and the data on it do not matter. However, don't use live data if you don't have a backup.

If you can do parity with the 4x2TB RAID-0, then by all means be our guinea pig  ;)

If you can do parity with a single 8TB Archive drive, that would help just as well.

If you are adventurous, you can do both and the community will be that much more informed about it.

 

The goal is to time a real-world rebuild.

This will prove/disprove the speed penalty on rebuilds.

 

I theorize that the rebuild will be close to a parity sync time.

No problem, I'll do both. It's a temporary test server with no data, just various hardware, put together specifically for these tests; see this topic.


Okay, the redball rebuild test number one: parity 8TB Archive, two 8TB Archives as data drives. All three connected via motherboard SATA ports. Rebuilding one of the data drives.

 

Supermicro H8DME-2 with some low-end Opteron and 4GB RAM, unRAID Trial 6.0-beta14b.

 

Elapsed time:           Current position:   Estimated speed:   Estimated finish:
less than a minute      7.46 GB (0.1 %)     194.5 MB/sec       11 hours, 25 minutes
9 minutes               112 GB (1.4 %)      190.1 MB/sec       11 hours, 32 minutes
48 minutes              555 GB (6.9 %)      191.5 MB/sec       10 hours, 48 minutes
1 hour, 29 minutes      1.02 TB (12.7 %)    187.2 MB/sec       10 hours, 22 minutes
3 hours, 4 minutes      2.06 TB (25.8 %)    179.9 MB/sec       9 hours, 10 minutes
5 hours, 32 minutes     3.57 TB (44.6 %)    167.5 MB/sec       7 hours, 21 minutes
6 hours, 13 minutes     3.97 TB (49.6 %)    159.8 MB/sec       7 hours, 1 minute
7 hours, 19 minutes     4.58 TB (57.2 %)    152.8 MB/sec       6 hours, 14 minutes
8 hours, 23 minutes     5.14 TB (64.3 %)    145.5 MB/sec       5 hours, 27 minutes
8 hours, 48 minutes     5.35 TB (66.9 %)    142.7 MB/sec       5 hours, 9 minutes
15 hours, 2 minutes     7.94 TB (99.2 %)    88.7 MB/sec        12 minutes
15 hours, 10 minutes    7.99 TB (99.8 %)    87.0 MB/sec        3 minutes
15 hours, 12 minutes    7.99 TB (99.9 %)    91.7 MB/sec        1 minute
15 hours, 13 minutes    8 TB (100.0 %)      89.8 MB/sec        1 minute

 

I was lucky enough to catch it near the end; otherwise, is the syslog the only record? The main page, after the rebuild completes, doesn't seem to say anything about speed or duration.

 

The next test, now running, is the same but with 4x2TB RAID-0 parity. Projected finish is around 3 AM, so I'm not sure I'll catch its ending  :-\


Redball rebuild test number two: parity on a 4x2TB RAID-0 pool via an Areca ARC-1110 (PCI-X, 133 MHz), two 8TB Seagate Archives as data drives, connected to motherboard SATA ports. Rebuilding one of the data drives.

 

Supermicro H8DME-2 with some low-end Opteron and 4GB RAM, unRAID Trial 6.0-beta14b.

 

Elapsed time:           Current position:   Estimated speed:   Estimated finish:
less than a minute      9.45 GB (0.1 %)     192.1 MB/sec       11 hours, 33 minutes
10 minutes              116 GB (1.4 %)      196.5 MB/sec       11 hours, 9 minutes
42 minutes              488 GB (6.1 %)      184.3 MB/sec       11 hours, 19 minutes
3 hours, 42 minutes     2.46 TB (30.7 %)    164.9 MB/sec       9 hours, 21 minutes
4 hours, 45 minutes     3.10 TB (38.8 %)    172.5 MB/sec       7 hours, 53 minutes
5 hours, 31 minutes     3.55 TB (44.4 %)    157.9 MB/sec       7 hours, 50 minutes
6 hours, 9 minutes      3.92 TB (48.9 %)    163.7 MB/sec       6 hours, 56 minutes
7 hours, 16 minutes     4.52 TB (56.5 %)    159.8 MB/sec       6 hours, 3 minutes
9 hours, 52 minutes     5.84 TB (73.0 %)    141.2 MB/sec       4 hours, 15 minutes
11 hours, 28 minutes    6.57 TB (82.1 %)    120.7 MB/sec       3 hours, 17 minutes
13 hours, 16 minutes    7.26 TB (90.7 %)    111.0 MB/sec       1 hour, 52 minutes
15 hours, 5 minutes     7.90 TB (98.8 %)    92.5 MB/sec        17 minutes
15 hours, 21 minutes    7.99 TB (99.9 %)    90.8 MB/sec        2 minutes
15 hours, 23 minutes    8 TB (100.0 %)      90.0 MB/sec        1 minute

 

I have no idea why it took 10 minutes longer than the previous run; I expected it to be the other way around... maybe I was refreshing the main page too many times  :D
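For perspective, the overall averages of the two runs are nearly identical, so those 10 minutes are well inside the noise. A quick check, using decimal units and treating 8 TB as 8*10^12 bytes:

# Test 1: 15 h 13 min = 54,780 s  ->  ~146 MB/s average
echo "scale=1; 8*10^12 / (15*3600 + 13*60) / 10^6" | bc

# Test 2: 15 h 23 min = 55,380 s  ->  ~144 MB/s average
echo "scale=1; 8*10^12 / (15*3600 + 23*60) / 10^6" | bc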


@pkn: my supplier has informed me that the three 8TBs I ordered will arrive tomorrow!  ;D ;D

Congratulations!  :D

Can you post or PM me the procedure you followed for recording your test timings? I'll do the same thing just to provide extra data.

Here you go. Some steps are optional or not applicable, and, as always, there is more than one way to skin a cat, but here is what I did.

 

WARNING! Do not do this with disks containing needed data!

-- You have been warned.

 

1. Disable auto page refresh: Main==>Settings==>Display Settings==>Auto page refresh==>Disable

    Automatic page refresh is nice, but slows server down considerably.

2. New config: Main==>Tools==>New config

    This resets array configuration - it forgets all disks.

3. Assign disks: parity, and two data disks.

    unRAID will say "Start will bring array online and start parity sync".

4. Check the "Parity is already valid" checkbox.

    This lets us skip building parity while fooling unRAID into thinking it has valid parity, since we are only interested in the write-speed test.

5. Start array.

    This will start parity sync.

6. Cancel parity sync.

7. Stop the array.

8. Unassign (set to "no device") one data disk.

9. Start array.

    It will say "unprotected, replace the missing disk a.s.a.p."

10. Stop array.

11. Assign back the disk which was unassigned in step 8.

12. Start array.

    This will start data disk rebuild.

 

Then I just copy-pasted the main page progress report into a separate file from time to time.
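If you'd rather not babysit the browser, a small loop can capture the progress for you. Just a sketch: it assumes unRAID's md driver exposes the sync/rebuild counters in /proc/mdstat (the fields there are unRAID-specific, so adjust to whatever your version prints), and it logs to the flash drive so the file survives a reboot:

# Append a timestamped snapshot of the rebuild progress every 10 minutes
while true; do
    echo "=== $(date) ===" >> /boot/rebuild-progress.txt
    cat /proc/mdstat >> /boot/rebuild-progress.txt
    sleep 600
done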

 


@PKN Exemplary work for the community. Thank you.

 

I have no idea why it took 10 minutes longer than the previous run; I expected it to be the other way around... maybe I was refreshing the main page too many times  :D

 

As can be seen from the other notes, 10 minutes is small beans compared to the overall process.

 

Thank you.

Extended self-test routine
recommended polling time:     ( 950) minutes.

~ 15:50 for a long test.

This is what I was looking for in comparison to the preclears.

== Last Cycle's Pre Read Time  : 19:51:11 (111 MB/s)
== Last Cycle's Zeroing time   : 16:11:00 (137 MB/s)

Looks about right and that would be approximately a best case scenario for a parity check/sync.

 

15:23 is still within the recommended polling time, which is 15:50.

I was expecting them to be close and I'm happy they are.

Given the unRAID usage and rebuild scenario, my thought is that as long as the array is left alone to do the parity sync/check and/or data-drive rebuild, we'll have good results.

 

The storage review results were skewed by having the rebuild occur on an 'active' live server.

As we see here, results are in spec with a single drive sweep.

 

So far, this drive is looking more and more attractive for unRAID usage.


You are welcome :)

... The storage review results were skewed by having the rebuild occur on an 'active' live server. ...

Do you think the big slowdown they observed is not related to SMR "write penalty"?

 

I have no doubt these SMR drives are quite usable as unRAID data disks... not so sure about parity yet. I'm trying to follow garycase's logic... I've put together a small bash script which makes multiple copies of a given file and reports performance, but it uses Linux "cp", so it doesn't just write; it reads, writes, reads, writes... and that will skew the write-performance results. I'm searching for a way to read a file completely into memory from a bash script, but haven't found one so far... I'll probably have to do it in C.
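One workaround before dropping down to C: stage the source file on tmpfs first, so the repeated copies read from RAM and only the writes hit the disk under test. A sketch, where testfile.bin and /mnt/disk1 are placeholders for your own test file and the Archive disk:

# Stage the source file in tmpfs (RAM), so later reads don't touch any disk
cp /mnt/user/testfile.bin /dev/shm/testfile.bin

# Time each copy together with a sync, so buffered writes actually reach the disk
for i in 1 2 3 4 5; do
    time sh -c "cp /dev/shm/testfile.bin /mnt/disk1/copy$i.bin && sync"
done

# Clean up
rm /dev/shm/testfile.bin /mnt/disk1/copy*.bin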

 


Interesting review of the drive.

 

STORAGE REVIEW OF SEAGATE 8TB SMR

 

Makes me a little nervous about using it for parity in an array that has a mixture of uses. It seems that small-file I/O on any disk in the array would be impacted. But with the several RAID-0 parity options available, anyone could start with one of these as parity and later swap the Archive disk to a data disk.

 

This chart and the few following were particularly concerning ...

[Chart: Seagate Archive 8TB SATA 4K write throughput]

 

 

