cache_dirs - an attempt to keep directory entries in RAM to prevent disk spin-up



I don't know if you recall my testing, but I have 4GB of RAM and cache_dirs couldn't handle even one of my shares (either Movies or Music)...

 

Have you tested it with 64bit?

 

I have unRAID 5.x (so no, 32-bit only). When trying out cache_dirs I was using:

# cache_dirs -w -i "Movies" -i "TV" -F

 

Should I have been doing something to limit the depth or changing ulimit?

 

Per: http://lime-technology.com/forum/index.php?topic=4500.msg290565#msg290565

---

I understand cache_dirs wants to keep the file list (folder list?) in RAM, but almost 4GB of RAM for ~940GB of movies seems a bit excessive.

 

:/mnt/user/Movies# du -h --max-depth=1

936G ./Movies-SD

44G ./Movies-Asian

68G ./Movies-Ghibli

9.4G ./Movies-Stand-Up

 

:/mnt/user/Movies# du -a | cut -d/ -f2 | sort | uniq -c | sort -nr

  6101 Movies-SD

    899 Movies-HD

    179 Movies-Asian

    60 Movies-Ghibli

    41 Movies-Stand-Up

--

Link to comment

No, fundamentally something is wrong if you are hitting problems with those kinds of numbers.

 

I don't have any v5 systems anymore, but when I did I used cache_dirs with 10-20 times more files than that and 4GB of RAM.

 

The problem you face is whether you try to look into this on v5 with its native PAE limitations, or accept that you specifically have an issue and wait until you move to v6, which has no PAE limitations.

 

I know which one I would do, but I don't know your full use case.

Link to comment

Is there a chance to have an option that spins up disks when the directory is accessed ?

 

i.e. I have a couple of directories where, when I go there to view a directory listing, 4 times out of 5 I want to open a file.  If this could be set up so that when certain cached directories are read, they spin the drive up, that would be the best of both worlds: an instant directory listing of a spun-down drive, but by the time I need to open the file, the drive is spun up and waiting.

 

We've discussed this earlier in the thread and there was a hack for it, but I was wondering if it could be a supported feature?

Link to comment

You would probably be better off moving all the files that need to be accessed in this way onto one specific disk, and either never letting it spin down or setting its spin-down delay to be far longer.

 

I don't think what you are asking for here fits the purpose of cache_dirs... IMHO anyway :)

Link to comment

You would probably be better off moving all the files that need to be accessed in this way onto one specific disk, and either never letting it spin down or setting its spin-down delay to be far longer.

 

I don't think what you are asking for here fits the purpose of cache_dirs... IMHO anyway :)

 

I disagree.  I'll look back, but it was discussed in this thread and a few liked the idea.  It's really the best of both worlds: store the directory in cache so I can navigate down to where I need to see if new files have arrived, then play them if they are there.  This feature would let me hit play on my little Xios box without delays or timeouts while waiting for the disks to spin up.

 

John himself provided the hack, but it needs re-applying every time a new cache_dirs script is released.
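For anyone curious what such a hack might look like, here is a rough, hypothetical sketch (not the original version): it assumes inotify-tools (inotifywait) is installed, and the watch path and trigger file are placeholders you would adapt to your own shares and physical disks.

#!/bin/bash
# Hypothetical spin-up-on-browse helper (sketch only, not the original hack).
# Requires inotify-tools; watching a huge tree recursively may need the
# fs.inotify.max_user_watches limit raised.
WATCH_DIR="/mnt/user/Movies"              # cached directory the media player browses (example)
TRIGGER_FILE="/mnt/disk1/Movies/.spinup"  # any small file on the physical disk (example)

while inotifywait -r -e open -e access "$WATCH_DIR" >/dev/null 2>&1; do
    # Reading a file from the physical disk (bypassing the page cache with
    # O_DIRECT) forces the drive to spin up, so it is ready by the time a file is opened.
    dd if="$TRIGGER_FILE" of=/dev/null bs=4k count=1 iflag=direct >/dev/null 2>&1
done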

Link to comment

No, fundamentally something is wrong if you are hitting problems with those kinds of numbers.

 

I don't have any v5 systems anymore, but when I did I used cache_dirs with 10-20 times more files than that and 4GB of RAM.

 

The problem you face is whether you try to look into this on v5 with its native PAE limitations, or accept that you specifically have an issue and wait until you move to v6, which has no PAE limitations.

 

I know which one I would do, but I don't know your full use case.

 

Are there only benefits when moving to 64-bit unRAID, i.e. unRAID 6 beta 6? Not running out of memory and not having my server crash or become inaccessible is reason enough for me. I have 16 GB of memory in my server and use several plugins such as SABnzbd, MiniDLNA, Serviio and apcupsd. Are these all 64-bit compatible?

 

Downloading with SABnzbd while running a parity check sometimes leaves my server inaccessible, probably because emhttp crashed?

 

 

Is it as easy as replacing bzroot and bzimage to try unraid 6?

Link to comment

I have 16 GB of memory in my server and use several plugins such as SABnzbd, MiniDLNA, Serviio and apcupsd. Are these all 64-bit compatible?

 

Downloading with SABnzbd while running a parity check sometimes leaves my server inaccessible, probably because emhttp crashed?

 

 

Is it as easy as replacing bzroot and bzimage to try unraid 6?

I would guess you are running out of "low memory" with all those applications running and cache_dirs too.  It sounds like emhttp is being killed by the kernel in an attempt to free memory, since it was detected as having been idle the longest.
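If you want to confirm that low memory (rather than total RAM) is the problem, a 32-bit kernel exposes separate low/high counters; a quick sketch, assuming the stock procps tools:

# 'free -l' adds Low:/High: rows on a 32-bit HIGHMEM kernel; if Low is nearly
# exhausted while High still has plenty free, you are hitting the low-memory limit.
free -lm
# the same counters are available in /proc
grep -iE 'lowtotal|lowfree|hightotal|highfree' /proc/meminfo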

 

For most people, upgrading to 6.X is as simple as copying over the new bzroot and bzimage.  You'll probably also want the new syslinux.cfg to be able to boot into some of the new options.  (The release notes for 6.X have the details on how to upgrade.)
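As a rough sketch of what that swap looks like (the release notes are the authoritative procedure; the source path for the extracted v6 files is a placeholder):

# on the running server the flash is mounted at /boot
cp /boot/bzimage /boot/bzimage.v5.bak      # keep backups so you can roll back
cp /boot/bzroot  /boot/bzroot.v5.bak
cp /path/to/unraid6/bzimage /boot/bzimage  # copy in the new 64-bit kernel and root fs
cp /path/to/unraid6/bzroot  /boot/bzroot
# optionally bring over the new syslinux.cfg as well, after saving your old one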

 

Unfortunately, NONE of your existing plugins will work.  They are 32 bit executables, and 6.X does not support them in any way.  You need the 64 bit versions.

 

Prior to upgrading, you'll need to disable/uninstall the 32-bit versions; then, after upgrading, you'll need to install and configure their equivalent 64-bit versions.  (I do not know whether 64-bit equivalents exist yet for the minidlna or serviio plugins you've installed, as I've not been following those threads at all.  Pretty sure they exist for most of the others.)

 

Joe L.

Link to comment

So I have placed cache_dirs in the root of my flash and added /boot/cache_dirs -w to my go script. Now for some reason I'm getting error 404, and I can only think it is related to cache_dirs, as I have not seen it before. Can anyone tell me if it is, and what I can do about it?

 

Thanks

[screenshot of the errors attached]

Link to comment

Nothing to do with cache_dirs at all.

 

The error messages are saying that the "ssl" package you are attempting to download does not exist on the server you are trying to download it from.

 

Probably one of your plugins is attempting to download openssl for installation.  You'll need to find an updated plugin, as the download URL in the ones you are attempting to load are out-of-date and no longer valid.

 

Joe L.

Link to comment

That's great, thanks Joe; sorry for the blame. I only have APC UPS, Dynamix and Dynamix System Temp installed, so I will try to find out which one is causing the problem. With regard to the memory issues with 32-bit cache_dirs, I presume I'm unlikely to have any worries about running out of memory with my four plugins, including cache_dirs, and 16GB of RAM?

Link to comment

Unfortunately, NONE of your existing plugins will work.  They are 32-bit executables, and 6.X does not support them in any way.  You need the 64-bit versions. [...]

Joe L.

 

Thanks Joe, very helpful. I might move to v6 64-bit. One question is important to me: is there a 64-bit version of the apcupsd plugin, or can it be configured through unMENU?

Link to comment


Thanks Joe, very helpful. I might move to v6 64-bit. One question is important to me: is there a 64-bit version of the apcupsd plugin, or can it be configured through unMENU?

I don't know which plugins are available in 64 bit versions, but I'll guess the most popular are.

 

There is a 64-bit version of apcupsd available through unMENU.  unMENU is 64-bit-OS aware (as long as you use its "Check-for-updates/Install Updates" buttons on its user-scripts page and are running the more recent package manager within it).  Once on a 64-bit OS, it will only show the 64-bit versions of the packages it manages.  There may also be a 64-bit apcupsd plugin... again, I have not been following the 64-bit plugin activity, as neither of my unRAID servers is running the 64-bit unRAID OS.

Link to comment

I run cache_dirs and now see unnamed folders appearing inside folders on my shares. Is cache_dirs doing this and using these folders?

Never seen (or heard of) anything like this.  cache_dirs is a read-only process, so it never creates files or folders.  Are you running any other plugins?  If so, it is more likely that one of those is the culprit.

Link to comment

I've installed cache_dirs 1.6.9 on unRAID 6b6, and I can't get logging to work, and similarly can't tell if it's doing anything.

 

I invoked the script with

 

/boot/config/plugins/cache_dirs -w

 

while my disks were sleeping, and over a couple of minutes all of them spun up, which seems right.  After putting the disks back to sleep, loading the directory contents in Mac Finder or Path Finder still seems slow, but once the directories are open the disks (for the most part) stay spun down.  OK.

 

Then I tried to bring the process to the foreground, to see what was happening.

 

First, the /boot/config/plugins/cache_dirs -F command gave no screen output for several minutes, and no cursor.

Ctrl-C to cancel, and /boot/config/plugins/cache_dirs -q says no process is running.

 

Trying out the logging function:

root@Tower:/boot# /boot/config/plugins/cache_dirs -l on

Logging enabled to /var/log/cache_dirs.log

root@Tower:/boot# ps

  PID TTY          TIME CMD

3578 pts/4    00:00:00 ps

31741 pts/4    00:00:00 bash

root@Tower:/boot# /boot/config/plugins/cache_dirs -q

cache_dirs not currently running

root@Tower:/boot# cat /var/log/cache_dirs.log

 

root@Tower:/boot#

 

It looks like the process is not started, and there is no log written.

 

While the standard invocation

root@Tower:/boot# /boot/config/plugins/cache_dirs -w

cache_dirs process ID 3868 started, To terminate it, type: cache_dirs -q

root@Tower:/boot# ps

  PID TTY          TIME CMD

3868 pts/4    00:00:00 cache_dirs

3884 pts/4    00:00:00 cache_dirs

3904 pts/4    00:00:00 find

3908 pts/4    00:00:00 ps

31741 pts/4    00:00:00 bash

root@Tower:/boot#

 

It seems to start up the process(es).

 

Maybe I shouldn't worry, since it seems to work; I just can't get the evidence for it.  Plus, it hasn't done much for my Mac Finder experience.

 

Despite my issues, I am grateful for this.

 

Dennis

 

 

 

Link to comment

I just installed 1.6.9 and posted some issues I have with 6b6, but I had a more conceptual question.

 

I see the point of moving the directory lists off the drives to prevent platter spin-up, but I wonder if having the option to save the directory data to an SSD instead of memory might be a big advantage for those of us who have an SSD cache.

 

Personally I have maxed out my RAM at 4 GB but have a 256 GB SSD sitting there doing nothing most of the time.  If cache_dirs wrote to the cache drive instead of memory, I would be able to use more of my memory for important stuff, like...

 

Maybe it's not as simple as it sounds, but I haven't seen discussion of that.

 

Thanks,

 

Dennis

 

Link to comment

Maybe it's not as simple as it sounds, but I haven't seen discussion of that.

 

It's not that simple. The dentry table is an internal kernel table that cannot be swapped out.
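You can see how much RAM those dentries are actually occupying via the kernel's slab statistics; a sketch, assuming slabtop (from procps) is available:

# interactive view sorted by cache size; look for the 'dentry' row
slabtop -s c
# or pull the raw dentry/inode numbers straight from /proc
grep -E 'dentry|inode_cache' /proc/slabinfo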

 

The biggest possible advantage would be to have the rootfs changed to tmpfs, which could swap out unused parts of the RAM filesystem. That's a pretty big internal change to unRAID, and I do not think it is going to happen.

 

Having swap space would allow memory-hungry programs, or /var/log, to be swapped out.  However, a normal unRAID system does not have swap space, so this is not the case.

Link to comment

 

... Plus, it hasn't done much for my Mac Finder experience.

 

 

An update following further experimentation.  I haven't been able to get logging to work, but from a post I have since lost track of I learned that the -d flag, which limits the depth of the directory scan, is essential, at least for large directory trees.

 

I now launch the script with /.../cache_dirs -w -d 4

 

In Finder or Path Finder (OS X), large directories now open in fractions of a second rather than tens of seconds, and the drives are not spun up.  This is transformational and finally makes unRAID workable for me.

 

I think the issue is that without specifying -d, the default depth is something like 999 levels of subdirectories, so the scan essentially never completes.  For me, -d 4 is enough to browse my media files.  It's telling that opening a fifth level of directories now takes tens of seconds again.
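If I understand the script correctly, the -d value ends up as a depth limit on the underlying find scans, so -d 4 is roughly equivalent to something like the following (a sketch only; the real script adds more options and loops over every included share):

find /mnt/user/Movies -maxdepth 4 -noleaf >/dev/null 2>&1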

 

Another point: last night I went through a powerdown and one of the disks wouldn't unmount because it was busy.  The culprit was a 'find' process that I think was part of cache_dirs trying to complete its 999-depth survey.  I'm not sure about that, but today, with -d 4, there is no long-running 'find' process.

 

Thanks again for this script.

 

Dennis

 

Link to comment

Might be time for a simple explanation of CacheDirs, for newer users.

 

In the beginning was UnRAID, and it was good.  And there was a vast expansion of space.  And drives and User Shares were brought forth, and made shareable by all creatures.  And a way to spin down drives was made, for days of rest.  But our media players and our needs insisted on constant access, and the drives had no rest.  So Joe created CacheDirs to keep directories loaded, and let the drives rest.  (With apologies to any biblical scholars!)

 

CacheDirs is a script of simple shell commands to periodically access selected directories, thereby keeping them loaded in kernel buffers.  In DOS(1) and Windows, you could do the same in a batch file of 'dir' commands(2), looping back again and again to repeat the same dir commands, but without displaying anything on the screen.  Why would you do that?  The kernel has limited buffer space, used for all disk I/O (both the folder info and other metadata, and the actual file data buffers).  When more buffer space is needed, the kernel drops anything that has not been accessed recently.  By CacheDirs constantly accessing the drive directories, we indirectly force the kernel to keep them loaded in RAM, not drop them.
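As a side note (not something CacheDirs itself configures, as far as I know), the kernel also exposes a tunable that controls how eagerly it reclaims the dentry/inode cache relative to ordinary data buffers; a sketch of inspecting and lowering it:

# show the current reclaim preference (the kernel default is 100)
sysctl vm.vfs_cache_pressure
# values below 100 make the kernel prefer to keep dentries/inodes cached;
# this only lasts until reboot, so a persistent change would go in the go script
sysctl -w vm.vfs_cache_pressure=10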

 

Apart from a PID file in a single system folder (to keep more than one copy of CacheDirs from running at once), CacheDirs does not write anything anywhere.  It also does not directly manage memory, but depending on how it is configured it may cache a few small directories or many large ones, thereby using a little memory or a lot.

 

UnRAID version 6 is 64-bit, with almost no limit on the memory available to CacheDirs.  UnRAID version 5 and lower use 32-bit kernels, which divide memory into LOWMEM and HIGHMEM.  LOWMEM is typically a little less than 900MB, no matter how much RAM is installed.  The directory buffers are limited to a small portion of LOWMEM, and that explains why CacheDirs can appear to cause "out of memory" issues in UnRAID v5 and lower if too many files and folders are cached (use the Excludes and depth parameters to limit them).

 

(1) A simple DOS version of CacheDirs, CacheDirs.bat:

@echo off
:top
dir folder_1 >nul
dir folder_2 >nul
dir folder_3 >nul
...
dir folder_n >nul
rem wait a few seconds between passes ('timeout' on Windows; classic DOS had no sleep command)
timeout /t 3 >nul
goto top

 

(2) Instead of the 'dir' command, CacheDirs uses the Linux 'find' command, similarly to the way 'dir' works.
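A minimal Linux sketch of the same idea, to make the footnote concrete (this is not the actual cache_dirs script, which adds excludes, adaptive sleep intervals, locking and much more; the share names and depth are examples):

#!/bin/bash
# Re-read the directory trees of selected shares forever, so their dentries
# stay "recently used" and the kernel keeps them cached in RAM.
while true; do
    for share in Movies TV Music; do
        find "/mnt/user/$share" -maxdepth 4 >/dev/null 2>&1
    done
    sleep 10   # pause between passes; the real script varies this interval
done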

 

Written fairly quickly, with little checking; please inform me of any inaccuracies, and I'll correct them.

 

Link to comment

Might be time for a simple explanation of CacheDirs, for newer users.

 

...

 

By CacheDirs constantly accessing the drive directories, we indirectly force the kernel to keep them loaded in RAM, not drop them.

 

I would have thought that would make the disks spin up, but you must mean accessing the directories that were previously stored in RAM.

 

Apart from a PID file in a single system folder (to keep CacheDirs from installing itself more than once), CacheDirs does not write anything anywhere.

 

Yes, this I didn't understand at all.

 

Thanks for this explanation; and for other newbies: cache_dirs has made a large difference in my day-to-day usage of unRAID.

 

Dennis

Link to comment
