Spin down timers - are they in HDD firmware or stored in Slackware?


NAS


Pretty much as the subject suggests. I went looking for the spin down timer to see if I could work out how to show an ETA until spin down. I could not find it, so I started thinking the timers may be on the drives themselves. Does anyone know where the live timer is stored?

Link to comment

The logic would be something like:

 

1. Cron, once per hour.

2. Check which drives are spun up.

3. For each spun-up drive, check the time until sleep from its timer.

4. If the time until sleep is, say, less than 10 minutes, scan the folder and file structure.

 

The idea being to continue the "keeping the structure in RAM" work people have been doing here, but in a way that doesn't spin up drives and can run often. A rough sketch of the flow is below.
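
Purely as a rough shell sketch, that hourly check might look like the following. The one piece that does not exist today is reading the live countdown (which is exactly what this thread is trying to locate), so that part is stubbed with a placeholder; hdparm -C does exist and reports a drive's power state without waking it.

#!/bin/sh
# Rough sketch of the hourly check above. The countdown read is a stub,
# since no live spin-down timer is currently exposed to us.
for dev in /dev/sd[a-z]; do
    # Only consider drives that are currently spun up.
    hdparm -C "$dev" 2>/dev/null | grep -q "active/idle" || continue

    # Placeholder: minutes until this drive's spin-down would fire, if a
    # live timer were ever exposed by unRAID or the drive firmware.
    mins_left=60

    if [ "$mins_left" -lt 10 ]; then
        # Re-read the folder and file structure for the matching /mnt/diskN
        # (the disk-to-device mapping is left out of this sketch).
        ls -R /mnt/disk1 >/dev/null 2>&1
    fi
done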

 

Link to comment

Not sure if it will do as you require, since if a drive is accessed within that 10 minute window it will extend its spin-down-timer and not spin down for another increment of time (another hour) from its last access.

 

I have used a script that does an ls -R /mnt/user >/dev/null every 30 seconds or so.    If the blocks needed to list the structure are in the buffer cache, then disks do not spin up at all.  If the blocks are not, the specific disks needed to get the information are spun up.

 

Initially, all the disks spin up to read the directory structures.  Thereafter, a disk spins up only if the buffer cache has reused the block.

 

Joe L.

Link to comment

If you go to the Web GUI and look at the main screen (which shows drive temps and stats), does this restart the drive spindown counter?  I have some anecdotal evidence that it does (or at least has in a previous version), but nothing definitive.  I know it doesn't wake a sleeping drive, but if it is 1 minute away from going to sleep, would refreshing the main page push it back to 4 hours until it would go to sleep?

Link to comment

If you go to the Web GUI and look at the main screen (which shows drive temps and stats), does this restart the drive spindown counter? ...

 

A very good question. There seems to be no definitive proof either way. Short of sitting with a stopwatch, I can't think of a way to check.

 

 

I am really keen to implement "ls -R /mnt/user >/dev/null", as the real-life performance gain just for navigating the shares is nothing short of immense. The catch, as both Joe L. and I have spotted, is that unless we can find a way to limit how often it runs based on the timers, it instantly becomes the death of spin down.

 

Manual spin down is shown in the syslog, so it's not unreasonable that Tom could add an "ls -R /mnt/**insert disk to be spun down here* >/dev/null" before the typical "/usr/sbin/hdparm -S242 /dev/sdf >/dev/null" command is issued. This wouldn't be perfect, but it would at least help.
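
Purely as a hedged sketch, re-using the exact commands quoted above (the disk share and device node are illustrative placeholders, and in practice this would have to live wherever unRAID issues the spin-down itself):

#!/bin/sh
# Sketch only: right before a disk is spun down, re-read its directory tree
# so the listing blocks are sitting in the buffer cache.
disk=/mnt/disk3    # the per-disk share about to be spun down (placeholder)
dev=/dev/sdf       # its device node (placeholder)

ls -R "$disk" >/dev/null 2>&1              # warm the directory blocks
/usr/sbin/hdparm -S242 "$dev" >/dev/null   # then issue the usual spin-down command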

Link to comment

Not sure if it will do as you require, since if a drive is accessed within that 10 minute window it will extend its spin-down-timer and not spin down for another increment of time (another hour) from its last access.

 

I have used a script that does an ls -R /mnt/user >/dev/null every 30 seconds or so.    If the blocks needed to list the structure are in the buffer cache, then disks do not spin up at all.  If the blocks are not, the specific disks needed to get the information are spun up.

 

Initially, all the disks spin up to read the directory structures.  Thereafter, a disk spins up only if the buffer cache has reused the block.

 

Joe L.

 

Joe, how efficient do you think this is? I know it is hard to tell, but can you detect a pattern in how often disks spin up based on how often you use your NAS? I'm concerned that normal everyday moves of AVIs will push this data out of the buffer.

 

I am running with 4GB of memory installed, FYI.

Link to comment

I have implemented this using a 5-minute cron. My immediate impression is that it's a bit of a miracle patch. Disks are staying spun down WAY WAY more.

 

If only there was a way to quantify this scientifically, but without readable spin down timers I can't see how.

Link to comment

I have implemented this using a 5-minute cron. My immediate impression is that it's a bit of a miracle patch. Disks are staying spun down WAY WAY more.

 

If only there was a way to quantify this scientifically, but without readable spin down timers I can't see how.

Are you referring to an "ls -R" every 5 minutes?

 

Basically, the buffer cache uses its own algorithm to determine frequently accessed blocks.  Those that are "freed" are those unused the longest.  When playing a movie of any size, the content of the movie will usually use all the available memory buffer (at least in my case it will, with 4 to 5 gig ISO images and only 512MB of RAM).

 

The part of the movie played more than 5 minutes ago is less recently accessed than the "ls -R" done within the past 5 minutes, so its blocks are the ones re-used from the buffer cache. The directory listing blocks should stay in memory since they are more recently accessed.  (Note, I was doing the "ls -R" every 30 seconds in my tests...)
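
For anyone who wants to check this, a crude test is simply to time the listing; note that if the blocks have already been evicted, the check itself will spin the disks up.

#!/bin/sh
# Crude cache check: a near-instant run means the directory blocks came out
# of the buffer cache; a slow run means disks had to spin up to supply them.
time ls -R /mnt/user >/dev/null 2>&1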

 

Joe L.

Link to comment

That's exactly what I did. With your explanation I can see why you used 30 seconds.

 

How did you achieve sub-1-minute intervals with cron?

You cannot.  Cron only runs once a minute.

 

You can, however, once a minute run a script that does this:

 

ls -R /mnt/user >/dev/null 2>&1
sleep 30
ls -R /mnt/user >/dev/null 2>&1

 

It will perform an "ls" command every 30 seconds  ;) even though it is invoked once a minute.

 

Joe L.

Link to comment

Thanks for that trick; implementing it now.

 

Another day has passed and this patch still has benefits, but also one downside. If you don't refresh the ls cache often enough, moving data about the drives, uploading content, or even just watching some AVIs will purge this part of the cache (as expected). When this happens you get a catastrophic failure of what you are trying to achieve, i.e. every single drive you have spins up for at least an hour.

 

Apart from the ugliness of it, I can't see a reason why we couldn't run this non-stop, i.e. loop ad infinitum (rough sketch below). Another improvement, if this were a daemon, is that we could actually check memory usage and change the refresh period based on parameters yet to be ascertained. That may simply be overcomplicating things, though.
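
A minimal sketch of the "run it non-stop" idea, assuming a fixed 30-second interval and the /mnt/user path used elsewhere in this thread:

#!/bin/sh
# Sketch only: loop in the background instead of relying on cron. The
# 30-second interval is an assumption; a smarter daemon could vary it
# based on /proc/meminfo, as suggested above.
(
    while true; do
        ls -R /mnt/user >/dev/null 2>&1
        sleep 30
    done
) &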

 

I would purchase another 4GB of memory if I could, but as it is unRAID can't even use my full 4GB, so it would be a waste right now.

Link to comment

As a test, I put this in a script on my server yesterday. I added it to those invoked by my "go" script when I boot my server.

 

I observed it in operation last night as I watched a 4Gig ISO image of a DVD via my MG-35 media player.

The only disk that spun up was the one with the ISO of the movie I was watching.  The other disks did not spin up.  I only have 512Meg of memory in my server.

 

#!/bin/sh
# Edit as needed, if user shares not used, then list /mnt/disk1, /mnt/disk2, etc.
#shared_drive="/mnt/user"
shared_drive="/mnt/user/Movies /mnt/user/Mp3 /mnt/user/Pictures"

crontab -l >/tmp/crontab
grep -q "frequent ls" /tmp/crontab 1>/dev/null 2>&1
if [ "$?" = "1" ]
then
    echo "# frequent ls to keep directory blocks in memory:" >>/tmp/crontab
    echo "* * * * * ls -R $shared_drive 1>/dev/null 2>&1" >>/tmp/crontab
    crontab /tmp/crontab
fi

 

 

I only do the "ls" of the three folders with media.  My /mnt/user/data folder is HUGE and I have no need for it to use up space in the "ls" blocks buffered.

You will need to edit the "ls" command to contain the names of your shared drives.

I do not have a cache drive, so the once-per-minute "ls" was able to keep the directory listing in the disk buffer cache while the movie was streamed to the media player.  I can see how transfers of files from an internal cache drive to the data drives could use up the disk buffer cache fast enough to displace the "ls" results.  You might need to invoke the "ls" more frequently than once a minute during the mover script's operation.  Or, add commands to the end of the mover script to spin down the drives (a sketch of that is below).
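
For that last point, a hedged sketch of what could be appended to the end of the mover script. The device list is an assumption, and hdparm -y (immediate standby) is used here instead of unRAID's own spin-down command, so substitute whatever your syslog shows being issued.

#!/bin/sh
# Sketch only: re-read the share structure while the disks are still spinning
# from the move, so the listing blocks land back in the buffer cache...
ls -R /mnt/user >/dev/null 2>&1

# ...then put the data drives straight into standby. The device list below is
# an illustrative assumption.
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    /usr/sbin/hdparm -y "$dev" >/dev/null 2>&1
done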

 

Joe L.

Link to comment

That's a nice enhancement. Will try it out ASAP.

 

I have been doing a lot more reading on how memory is allocated and flushed, and it's complicated. It's way above my level of understanding, so this will take a while. From my experiments (with admittedly 8 times more RAM than you) it seems that writing to the disks via Samba causes the ls cache to be wiped quite often...definitely more often than if you are just reading data. This is all a bit subjective, so hopefully one of us can work out how to monitor the ls cache size in memory from, say, /proc/meminfo. I would like to be able to see how much RAM this is using and when it is swapped out, so I can compare it with what is actually happening on the NAS at that time. I can't see it using more than a few tens of MBs, so it is really frustrating that it is dropped at all. Even if it uses 500MB, once we know that we can make recommendations accordingly.

 

I am currently uploading about 2TB of data to my NAS so once this is done I will have a more "normal" mode of system operation to test on.

Link to comment

I would purchase another 4GB of memory if I could, but as it is unRAID can't even use my full 4GB, so it would be a waste right now.

It can use it; it all depends on how you are using unRAID. With PAE enabled I have 8GB.

The issue I am having is that all the torrents keep flushing the cache. LOL, yes, even with 8GB.

 

So my daemon will come into effect soon.

I just have to add in my own ftw subroutine because the standard one halts all subsequent directory reads if it encounters any error whatsoever.

 

Link to comment

it is really frustrating that it is dropped at all. Even if it uses 500MB, once we know that we can make recommendations accordingly.

 

Add this command to your startup.

 

sysctl vm.vfs_cache_pressure=0

 

Here is my script

 

root@Atlas:/boot/custom/etc/rc.d# more S91-init.vfs_cache

#!/bin/sh

sysctl vm.vfs_cache_pressure=0
# find /mnt -ls >/dev/null 2>/dev/null &

 

 

I have the find commented out because now I use locate to initialize the directory cache upon bootup.

 

 

root@Atlas:/boot/custom/etc/rc.d# more S92-install-slocate

#!/bin/sh

PKGDIR=/boot/custom/usr/share/packages

PACKAGE=slocate-3.1-i486-1
if [ ! -f /var/log/packages/$PACKAGE ]
   then installpkg ${PKGDIR}/$PACKAGE.tgz
fi

rm -rf /usr/doc/slocate-3.1
rm -rf /usr/man/man1/updatedb.1.gz /usr/man/man1/slocate.1.gz

batch <<-EOF
sleep 90
exec /usr/bin/updatedb -c /etc/updatedb.conf
EOF

 

Link to comment

What's your take on swappiness?

 

excerpt from http://kerneltrap.org/node/1044

Swappiness is a kernel "knob" (located in /proc/sys/vm/swappiness) used to tweak how much the kernel favors swap over RAM; high swappiness means the kernel will swap out a lot, and low swappiness means the kernel will try not to use swap space.

 

...

Now, because we NORMALLY do not have swap space allocated in our machines, this setting does not come into play.

Link to comment

That was my take as well; I was just curious whether the lack of swap meant this setting acted differently.

 

I'm curious why you went for locate rather than just the ls trick. Obviously locate gives you an extra ability, but since it only updates once a day, does it really work in this scenario? I have been using lower and lower timers to make ls run more and more often, so does locate have some other trick up its sleeve that I don't know about?

Link to comment

I use locate for the following.

 

1. To seed the directory cache buffers and create the locate database.

2. To search for files without using find.

 

It's so much faster to do a

locate ftwd.c

than a

find / -name ftwd.c

 

With locate, one database is searched for a string; with find, every directory is opened, read until EOF, and closed.

 

If I want to find which disk a particular ISO is on, I can do a locate and find every copy of that ISO.

 

I do not use locate the same way as the recursive ls -lR > /dev/null, although I do believe it's possible.
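
For what it's worth, a hedged sketch of using updatedb itself as the tree walk: since updatedb reads every directory to rebuild the locate database, scheduling it from cron both keeps locate fresh and touches the same directory blocks the "ls -R" trick keeps warm. The hourly interval is an assumption; the updatedb invocation matches the S92-install-slocate script earlier in the thread.

#!/bin/sh
# Sketch only: add an hourly updatedb run to the crontab if it is not
# already there, mirroring the crontab-append idiom used earlier.
crontab -l >/tmp/crontab
grep -q "hourly updatedb" /tmp/crontab || {
    echo "# hourly updatedb to refresh the locate db and re-walk the tree:" >>/tmp/crontab
    echo "0 * * * * /usr/bin/updatedb -c /etc/updatedb.conf >/dev/null 2>&1" >>/tmp/crontab
    crontab /tmp/crontab
}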

 

This is why I originally wrote the ftwd daemon.

To walk the file tree and stat each file, without producing output that just gets written to /dev/null for no reason.

 

 

Link to comment

...

 

This is why I originally wrote the ftwd daemon.

To walk the file tree and stat each file, without producing output that just gets written to /dev/null for no reason.

 

Have you released this? I would be keen to help you test it.

Link to comment

Sorry, I must have glanced over that bit.

 

I've been continuing my reading, and a couple of other kernel tuning parameters look interesting:

 

block_dump

 

block_dump enables block I/O debugging when set to a nonzero value. If you want to find out which process caused the disk to spin up (see /proc/sys/vm/laptop_mode), you can gather information by setting the flag.

 

When this flag is set, Linux reports all disk read and write operations that take place, and all block dirtyings done to files. This makes it possible to debug why a disk needs to spin up, and to increase battery life even more. The output of block_dump is written to the kernel output, and it can be retrieved using "dmesg". When you use block_dump and your kernel logging level also includes kernel debugging messages, you probably want to turn off klogd, otherwise the output of block_dump will be logged, causing disk activity that is not normally there.

 

laptop_mode


 

laptop_mode is a knob that controls "laptop mode". When the knob is set, any physical disk I/O (that might have caused the hard disk to spin up, see /proc/sys/vm/block_dump) causes Linux to flush all dirty blocks. The result of this is that after a disk has spun down, it will not be spun up anymore to write dirty blocks, because those blocks had already been written immediately after the most recent read operation. The value of the laptop_mode knob determines the time between the occurrence of disk I/O and when the flush is triggered. A sensible value for the knob is 5 seconds. Setting the knob to 0 disables laptop mode.
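
Both knobs live under /proc/sys/vm, so trying them is just an echo. A hedged sketch of catching what is touching the disks (remember the klogd caveat quoted above):

#!/bin/sh
# Sketch only: block_dump output goes to the kernel log, and if klogd writes
# that log to disk you create the very activity you are trying to trace.
echo 5 > /proc/sys/vm/laptop_mode    # flush dirty blocks ~5 s after any I/O
echo 1 > /proc/sys/vm/block_dump     # report reads/writes/dirtyings via dmesg

sleep 60                             # let some activity accumulate
dmesg | grep -E "READ|WRITE|dirtied" | tail -n 40

echo 0 > /proc/sys/vm/block_dump     # turn block I/O debugging back off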

 

A more radical approach could be to monitor /proc/meminfo and, when some condition is met, issue an "echo 1 > /proc/sys/vm/drop_caches", which frees up all page cache and, if I am reading this correctly, does not affect dentry and inode entries.

 

More can be found here: http://www.linuxinsight.com/proc_sys_vm_hierarchy.html

 

Update: Even more interesting reading here http://kerneltrap.org/node/4462

Update 2: The stats we care about are detailed in /proc/slabinfo.
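
A hedged sketch of trending those numbers over time; the exact slab cache names vary with kernel version and filesystem, so the grep patterns here are assumptions.

#!/bin/sh
# Sketch only: dentry and inode objects live in the slab caches, so
# /proc/slabinfo plus the Cached/Buffers/Slab lines of /proc/meminfo are
# the numbers to watch while the NAS is in use.
while true; do
    date
    grep -E "dentry|inode_cache" /proc/slabinfo
    grep -E "^(Cached|Buffers|Slab):" /proc/meminfo
    echo
    sleep 300
done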

Link to comment
