Inner to Outer - Is it possible?


SSD


As I complete my last couple of RFS-to-XFS drive conversions, I lament the slower disk performance, especially of lower-RPM drives, as we go from the outermost cylinders, where life is fast, to the much slower innermost cylinders. I'm wondering if there is a way to make the disk fill in reverse: from the innermost slow cylinders to the outermost fast cylinders. Although conversion would take longer, real-time use would be perkier. That's a trade I might be willing to make.

 

So I'm wondering if there is some slick Linux option to make the drive favor the slower sectors and gradually speed up as it gets fuller, or to fill the second half of the disk before the first half. The only other way I can think of to do this is to create a large dummy file before starting to copy data to a new XFS disk. Once the copy completed, I could delete the dummy file sitting on "prime real estate" and fill the faster part of the disk last. But creating that large dummy file would likely take a very long time, and given the whole "sparse file" issue, I'm not sure there is a way to fake it out.

 

Thoughts?


Simply not realistically possible ... unless, of course, you want to write your own disk driver that maps sector k of an N-sector disk to sector N-k+1, where k is the "apparent" sector number you're writing.

 

So if there were 10,000 sectors, then instead of writing from 1 - 10000, those would be mapped to 10000 - 1.
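For what it's worth, Linux's device-mapper can approximate that remapping in coarse chunks without a custom driver, by stacking "linear" targets in reverse order. Here's a sketch of generating such a table; the device path, disk size, and chunk size are example values, and I haven't tried this on a real array:

```shell
#!/bin/sh
# Sketch: emit a device-mapper "linear" table that presents the chunks
# of a disk in reverse order, so a filesystem that allocates from the
# start of the device actually fills the inner (slow) cylinders first.
# DEV, DEV_SECTORS and CHUNK are example values, not tested settings.
DEV=/dev/sdX
DEV_SECTORS=7814037168            # 4TB disk, in 512-byte sectors
CHUNK=2097152                     # 1 GiB per chunk, in sectors
CHUNKS=$((DEV_SECTORS / CHUNK))   # partial tail chunk ignored for brevity

i=0
while [ "$i" -lt "$CHUNKS" ]; do
    # logical chunk i is backed by physical chunk (CHUNKS - 1 - i)
    echo "$((i * CHUNK)) $CHUNK linear $DEV $(( (CHUNKS - 1 - i) * CHUNK ))"
    i=$((i + 1))
done
```

Piping that output into dmsetup create reversed would give a /dev/mapper/reversed to mkfs and mount in place of the raw disk. It carries the same portability cost, though: the disk is unreadable anywhere the table isn't recreated.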

 

A BAD idea, however, since you then couldn't access that disk from any other system, or from the same system without your driver loaded.

 

I'm at a bit of a loss as to WHY you'd want to do this anyway ... the performance of modern disks isn't all that bad, even on the inner cylinders (although granted it IS much slower than on the outer ones).    I suppose you're looking to have your almost-full disks perform like a new drive after you've mostly filled them with relatively static data.    If that's the case, then you COULD do as you noted, and create a file large enough to "reserve" the space you want in the outer cylinder area;  then copy the data you want to the disk; and then delete that file.

 

What would probably work okay (haven't tried it) is to use a large SSHD (e.g., a Seagate 4TB).  Not sure how it manages the SSD cache portion, but I suspect it gives a reasonable priority to writes -- so they may "seem" quite fast even on the inner cylinders as long as you don't write enough to fill the SSD cache.

 

FWIW, as I assume you know, optical drives already work the way you suggested  :)

 


... So wondering if there is some slick Linux option to cause the drive to favor slower sectors ... Thoughts?

 

The performance change you are seeing is not just the drive; it's also the filesystem. Filesystem performance degrades from about 85% full on up. It's a double whammy!


Gary -

 

So you have attacked the question without adding anything constructive. I don't think you know whether there are solutions out there. Neither do I. Hence asking the question.

 

C3 -

 

Agreed.  But even raw reads (e.g., parity checks) drop to half or less of the starting performance by the end of the disk.

 

Is there a way to quickly create a large non-sparse file?
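One answer, assuming a reasonably recent Linux and an extent-based filesystem like XFS: fallocate reserves the blocks without writing them, so it's nearly instantaneous and the file is not sparse. A small sketch -- the 1 MB size and temp path are just for illustration; on the real disk you'd reserve hundreds of GB:

```shell
#!/bin/sh
# Sketch: create a large NON-sparse file almost instantly. Unlike
# "truncate", which only sets the size (sparse), fallocate actually
# allocates the underlying blocks on XFS/ext4.
f=$(mktemp)
fallocate -l 1048576 "$f"   # 1 MB here; "-l 500G" is just as fast

size=$(stat -c %s "$f")     # logical size in bytes
blocks=$(stat -c %b "$f")   # allocated 512-byte blocks

# Non-sparse: allocated space covers the whole logical size.
echo "size=$size allocated=$((blocks * 512))"
rm -f "$f"
```

Once the copy onto the new disk finishes, deleting the placeholder frees the outer-cylinder "prime real estate" for whatever lands there next.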


... What would probably work okay (haven't tried it) is to use a large SSHD ... I suspect it gives a reasonable priority to writes ...

 

I have a 4TB SSHD for my mp3 archive.  I figured that, at least, the most-often-used directories would be cached by the on-board SSD cache.  I haven't found much of a benefit for the premium. I'm not sure why.

 

 

I had purchased the 4TB SSHD and a matching plain 4TB drive. I was going to let it run on the 4TB SSHD for a few weeks, drop the cache, and see how long it takes to scan the whole tree.  It still took 30 minutes vs. a non-SSHD drive, which showed me there was no benefit at the current time.  It may handle a Windows load differently.

 

 


I would use something like fs_mark to partially fill the filesystem, since a single large file would not use the "other" parts of the filesystem in that region of the disk. With fs_mark you can fill with a directory structure, including depth and number of files.
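For anyone without fs_mark handy, the idea it implements can be sketched in plain shell; the directory/file counts and sizes below are illustrative knobs, roughly analogous to fs_mark's options:

```shell
#!/bin/sh
# Sketch of the fs_mark idea: partially fill a filesystem with a tree
# of small files instead of one big file, so directories, inodes and
# metadata in that region of the disk are exercised too.
TARGET=$(mktemp -d)    # on the real array this would be e.g. /mnt/disk1/fill
DIRS=4                 # number of subdirectories
FILES=8                # files per subdirectory
SIZE_KB=4              # size of each file

d=0
while [ "$d" -lt "$DIRS" ]; do
    mkdir -p "$TARGET/dir$d"
    n=0
    while [ "$n" -lt "$FILES" ]; do
        dd if=/dev/zero of="$TARGET/dir$d/f$n" bs=1024 count="$SIZE_KB" 2>/dev/null
        n=$((n + 1))
    done
    d=$((d + 1))
done

created=$(find "$TARGET" -type f | wc -l)
echo "created $created files in $DIRS directories"
rm -rf "$TARGET"       # on the real disk, delete only after the data copy
```

Deleting the tree afterward reopens that region of the disk, the same as deleting the single dummy file would.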


... So you have attacked the question ...

 

Certainly not my intent.  I was just noting that you can't do it unless you either have customized drivers that remap the disk in reverse order, or a file system that allocates files uniformly across the disk [or, to really do what you asked, one that allocates from the highest-numbered sectors toward the lowest].

 

There are no disks that are structured to write from the inner cylinders outward ... and none that have an option to do that either (at least none that I've ever seen).    More precisely, no hard disks -- since optical drives do indeed allocate from the inner cylinders outward.

 


... I suspect it gives a reasonable priority to writes -- so they may "seem" quite fast even on the inner cylinders as long as you don't write enough to fill the SSD cache ...

... I have a 4TB SSHD for my mp3 archive ... I haven't found much of a benefit for the premium ... It still took 30 minutes vs. a non-SSHD drive. Showed me there was no benefit at the current time ...

 

I'm not surprised the scan isn't really helped -- if I recall your discussions re: your MP3 collection, the tree is far larger than the SSD cache on the SSHD, and it likely doesn't have enough accesses to specific sectors to hit the threshold of the caching algorithm that would cause those sectors to be retained.    With Windows, for example, there are frequently accessed DLLs, and of course the boot items, that would likely meet these criteria and thus provide significant benefits in "perceived" speed.    At least that's what I'd expect.    I haven't tried an SSHD, as I've simply started buying 500GB and 1TB SSDs for my workstations -- which of course don't require any such compromises.

 

What I had thought might be the case is that WRITES would use the SSD cache as long as their size was modest, which would mitigate the difference in actual write speeds between outer and inner cylinders.    I don't think it would help with READS, since it's unlikely there would be a significant number of cache hits during read operations, so read speeds would still be driven by where on the disk the reads were occurring.

 

Not sure how you could really test this in unRAID, since write speeds are throttled by the four-I/Os-per-write requirement.


... if I recall your discussions re: your MP3 collection, the tree is far larger than the SSD cache on the SSHD ...

 

In my scan and calculation there are 20509 Directories, 25044 Blocks, 25645056 Bytes.

This has interpreted arithmetic, which slows it down a bit:

root@unRAID:/mnt/disk3# time calcdirsize.bash
20509 Directories, 25044 Blocks, 25645056 Bytes
real    0m1.227s
user    0m0.820s
sys     0m0.390s

 

It's an 8GB cache. The directory inodes should be cached, since they are constantly scanned.

Perhaps those LBAs do not move enough data to make it worthwhile to cache them.

From what I've read, the LBAs are cached, not necessarily the filesystem information.

 

The only thing I can think of is that everything fits into the drive's own memory cache buffer, so the firmware doesn't necessarily see these LBAs being used over and over again. Then there is the filesystem buffer cache preventing trips out to the drive itself.

 

Therefore I've seen no real-world performance benefit in my current usage patterns.

The drive has been powered on for over 3200 hours with 53 power cycles.

I was expecting more out of this drive and its cache.

On my laptop there was a huge difference in directory scans with a just-as-large mp3 archive.

 

root@unRAID:/mnt/disk3# time find /mnt/disk3 -type f| wc -l
239627
real    0m0.595s
user    0m0.160s
sys     0m0.490s

root@unRAID:/mnt/disk3# /boot/bin/dropcache

root@unRAID:/mnt/disk3# time find /mnt/disk3 -type f| wc -l
239627
real    19m20.531s
user    0m1.910s
sys     0m8.540s

 

I may make a last ditch effort on an external drive. I need the backup drive anyway.

