cache_dirs - an attempt to keep directory entries in RAM to prevent disk spin-up



Word of warning: cache_dirs does not currently work on unRAID 6.0.

 

# /boot/cache_dirs -w -i Movies -i Music -i TV

ps: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

/boot/cache_dirs: xmalloc: make_cmd.c:100: cannot allocate 365 bytes (98304 bytes allocated)

 

nr_pdflush_threads exported in /proc is scheduled for removal

sysctl: The scan_unevictable_pages sysctl/node-interface has been disabled for lack of a legitimate use case.  If you have one, please send an email to [email protected].

Link to comment

It will work on 6.0 if you comment out the "ulimit -v 5000" line.  Use this suggestion at your own risk; there's probably a better solution than commenting the line out entirely.

 

Aha! Interesting how attempting to limit cache_dirs to 5MB of virtual memory causes the errors.

 

EDIT: It seems setting that value to anything less than 15 MB (15360 KB) causes things to break.
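
If you would rather keep a safety cap than remove the line entirely, here is a minimal sketch of the kind of edit being described; the 30720 KB value is only an example, not a tested figure:

#ulimit -v 5000     # original 5 MB cap in /boot/cache_dirs; too small, triggers the xmalloc/xrealloc errors above
ulimit -v 30720     # example: a 30 MB cap; per the note above, anything below roughly 15360 KB still breaks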

Link to comment

Running 5.0.5; the only things I have installed are unmenu, vim, smtp, and the shutdown + status script. I do have a cache drive.

 

I launched cache_dirs with:

# cache_dirs -w -i "Movies" -i "TV" -F

 

After a few hours it stopped; looking at that terminal, the tail end of the output showed:

Executed find in 0.015807 seconds, weighted avg=0.015852 seconds, now sleeping 10 seconds
Executed find in 0.015729 seconds, weighted avg=0.015835 seconds, now sleeping 10 seconds
Executed find in 0.015959 seconds, weighted avg=0.015840 seconds, now sleeping 9 seconds
Executed find in 0.019165 seconds, weighted avg=0.016150 seconds, now sleeping 8 seconds
Executed find in 0.015761 seconds, weighted avg=0.016119 seconds, now sleeping 9 seconds
Executed find in 0.015785 seconds, weighted avg=0.016091 seconds, now sleeping 10 seconds
Executed find in 0.015797 seconds, weighted avg=0.016064 seconds, now sleeping 10 seconds
Executed find in 0.015786 seconds, weighted avg=0.016047 seconds, now sleeping 10 seconds
Executed find in 0.015710 seconds, weighted avg=0.016023 seconds, now sleeping 10 seconds
Executed find in 0.015828 seconds, weighted avg=0.016011 seconds, now sleeping 10 seconds
Executed find in 0.015751 seconds, weighted avg=0.015991 seconds, now sleeping 10 seconds
Executed find in 0.015857 seconds, weighted avg=0.015982 seconds, now sleeping 10 seconds
Executed find in 0.015817 seconds, weighted avg=0.015969 seconds, now sleeping 10 seconds
Executed find in 0.015824 seconds, weighted avg=0.015957 seconds, now sleeping 10 seconds
Executed find in 0.015797 seconds, weighted avg=0.015941 seconds, now sleeping 10 seconds
Executed find in 0.015780 seconds, weighted avg=0.015924 seconds, now sleeping 10 seconds
Executed find in 0.015825 seconds, weighted avg=0.015912 seconds, now sleeping 10 seconds
Executed find in 0.015814 seconds, weighted avg=0.015898 seconds, now sleeping 10 seconds
Executed find in 0.015816 seconds, weighted avg=0.015883 seconds, now sleeping 10 seconds
Executed find in 0.015808 seconds, weighted avg=0.015868 seconds, now sleeping 10 seconds
Executed find in 0.015743 seconds, weighted avg=0.015846 seconds, now sleeping 10 seconds
Executed find in 0.015774 seconds, weighted avg=0.015828 seconds, now sleeping 10 seconds
Executed find in 0.015788 seconds, weighted avg=0.015811 seconds, now sleeping 10 seconds
./cache_dirs: xrealloc: subst.c:658: cannot allocate 256 bytes (901120 bytes allocated)
Executed find in  seconds, now sleeping 10 seconds
./cache_dirs: xrealloc: subst.c:658: cannot allocate 256 bytes (901120 bytes allocated)
Executed find in  seconds, now sleeping 10 seconds
./cache_dirs: xmalloc: subst.c:7606: cannot allocate 112 bytes (901120 bytes allocated)
./cache_dirs: xmalloc: subst.c:7606: cannot allocate 112 bytes (901120 bytes allocated)
./cache_dirs: xrealloc: subst.c:658: cannot allocate 256 bytes (901120 bytes allocated)
Executed find in  seconds, now sleeping 10 seconds
./cache_dirs: xmalloc: subst.c:7606: cannot allocate 112 bytes (901120 bytes allocated)
./cache_dirs: xmalloc: subst.c:7606: cannot allocate 112 bytes (901120 bytes allocated)
./cache_dirs: xmalloc: subst.c:7606: cannot allocate 112 bytes (901120 bytes allocated)
./cache_dirs: xmalloc: subst.c:7606: cannot allocate 112 bytes (901120 bytes allocated)
./cache_dirs: xmalloc: subst.c:7606: cannot allocate 112 bytes (901120 bytes allocated)
./cache_dirs: xmalloc: subst.c:7606: cannot allocate 112 bytes (901120 bytes allocated)
./cache_dirs: xmalloc: subst.c:7606: cannot allocate 112 bytes (901120 bytes allocated)
./cache_dirs: xmalloc: subst.c:7606: cannot allocate 112 bytes (901120 bytes allocated)
./cache_dirs: xmalloc: subst.c:7606: cannot allocate 112 bytes (901120 bytes allocated)
./cache_dirs: xmalloc: subst.c:7606: cannot allocate 112 bytes (901120 bytes allocated)
./cache_dirs: xmalloc: subst.c:7606: cannot allocate 112 bytes (901120 bytes allocated)
./cache_dirs: xmalloc: subst.c:7606: cannot allocate 112 bytes (901120 bytes allocated)
./cache_dirs: xmalloc: subst.c:7606: cannot allocate 112 bytes (901120 bytes allocated)
./cache_dirs: xmalloc: subst.c:7606: cannot allocate 112 bytes (901120 bytes allocated)
./cache_dirs: xmalloc: subst.c:7606: cannot allocate 112 bytes (901120 bytes allocated)
./cache_dirs: xmalloc: subst.c:7606: cannot allocate 112 bytes (901120 bytes allocated)
./cache_dirs: xmalloc: execute_cmd.c:3599: cannot allocate 72 bytes (901120 bytes allocated)
./cache_dirs: line 462: [: : integer expression expected
./cache_dirs: xmalloc: stringlib.c:135: cannot allocate 120 bytes (901120 bytes allocated)

 

While the cache_dirs process was running (before it stopped), here is what top showed:

~# top
top - 16:40:32 up 3 days, 21:29,  2 users,  load average: 0.63, 0.46, 0.28
Tasks: 132 total,   1 running, 131 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.6%us,  0.7%sy,  0.0%ni, 97.8%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4026904k total,  3897424k used,   129480k free,   246844k buffers
Swap:        0k total,        0k used,        0k free,  3441640k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                 
  626 root      20   0     0    0    0 S    0  0.0   1:00.04 kworker/3:1                              
18212 root      20   0  4208  920  564 S    0  0.0   0:00.30 cache_dirs                               
18260 root      20   0  4408 1084  528 S    0  0.0   0:03.47 cache_dirs                               
    1 root      20   0   828  304  264 S    0  0.0   0:06.41 init                                     
    2 root      20   0     0    0    0 S    0  0.0   0:00.00 kthreadd                                 
    3 root      20   0     0    0    0 S    0  0.0   0:01.57 ksoftirqd/0                              
    5 root       0 -20     0    0    0 S    0  0.0   0:00.00 kworker/0:0H                             
    7 root       0 -20     0    0    0 S    0  0.0   0:00.00 kworker/u:0H                             
    8 root      RT   0     0    0    0 S    0  0.0   0:00.19 migration/0                              
    9 root      20   0     0    0    0 S    0  0.0   0:00.00 rcu_bh                                   
   10 root      20   0     0    0    0 S    0  0.0   0:04.60 rcu_sched                                

 

Now, since it stopped:

~# top
top - 19:50:19 up 4 days, 39 min,  2 users,  load average: 0.00, 0.01, 0.05
Tasks: 109 total,   1 running, 108 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.2%us,  0.0%sy,  0.0%ni, 99.8%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4026904k total,  3888912k used,   137992k free,   243876k buffers
Swap:        0k total,        0k used,        0k free,  3435500k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                                                             
3970 root      20   0  2464 1000  756 R    0  0.0   0:00.01 top                                                                                                                                                  
    1 root      20   0   828  304  264 S    0  0.0   0:06.46 init                                                                                                                                                 
    2 root      20   0     0    0    0 S    0  0.0   0:00.00 kthreadd                                                                                                                                             
    3 root      20   0     0    0    0 S    0  0.0   0:02.60 ksoftirqd/0                                                                                                                                          
    5 root       0 -20     0    0    0 S    0  0.0   0:00.00 kworker/0:0H                                                                                                                                         
    7 root       0 -20     0    0    0 S    0  0.0   0:00.00 kworker/u:0H                                                                                                                                         
    8 root      RT   0     0    0    0 S    0  0.0   0:00.23 migration/0                                                                                                                                          
    9 root      20   0     0    0    0 S    0  0.0   0:00.00 rcu_bh                                                                                                                                               
   10 root      20   0     0    0    0 S    0  0.0   0:05.11 rcu_sched            

# free -l
             total       used       free     shared    buffers     cached
Mem:       4026904    3888628     138276          0     243876    3435500
Low:        867104     740020     127084
High:      3159800    3148608      11192
-/+ buffers/cache:     209252    3817652
Swap:            0          0          0

 

Are my folders too large for cache_dirs?

 

folder size:

:/mnt/user/TV# du -h --max-depth=1
2.9T	./TV
79G	./TV-Misc
65G	./TV-Food & Travel
311G	./TV-Canceled
9.4G	./TV-Mini Series
52G	./TV-Docu
21G	./TV-Anime
48G	./TV-Asian
3.4T	.

:/mnt/user/Movies# du -h --max-depth=1
936G	./Movies-SD
44G	./Movies-Asian
68G	./Movies-Ghibli
9.4G	./Movies-Stand-Up
1.1T	./Movies-HD
2.1T	.

 

number of unique files:

:/mnt/user/TV# du -a | cut -d/ -f2 | sort | uniq -c | sort -nr
   8232 TV
   1902 TV-Canceled
    258 TV-Anime
    254 TV-Misc
    177 TV-Asian
    148 TV-Food & Travel
     73 TV-Docu
     58 TV-Mini Series

:/mnt/user/Movies# du -a | cut -d/ -f2 | sort | uniq -c | sort -nr
   6101 Movies-SD
    899 Movies-HD
    179 Movies-Asian
     60 Movies-Ghibli
     41 Movies-Stand-Up
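
Note that du -a lists directories as well as files, so the numbers above are entries rather than files. A rough way to count only regular files per top-level folder (paths taken from the listings above) would be:

for d in /mnt/user/TV/*/ /mnt/user/Movies/*/ ; do
  printf '%7d %s\n' "$(find "$d" -type f 2>/dev/null | wc -l)" "$d"
done | sort -nr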

 

# ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 31201
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 40960
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 31201
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

 

# slabtop -s c

Active / Total Objects (% used)    : 757906 / 861036 (88.0%)
Active / Total Slabs (% used)      : 19478 / 19478 (100.0%)
Active / Total Caches (% used)     : 54 / 99 (54.5%)
Active / Total Size (% used)       : 103664.34K / 113958.34K (91.0%)
Minimum / Average / Maximum Object : 0.01K / 0.13K / 16.12K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME                   
419896 355813  84%    0.05K   5752       73     23008K buffer_head
47063  47063 100%    0.41K   2477       19     19816K reiser_inode_cache
58136  47590  81%    0.30K   2236       26     17888K radix_tree_node
37710  37046  98%    0.44K   2095       18     16760K fuse_inode
121728 109541  89%    0.12K   3804       32     15216K dentry
  1488   1488 100%    2.58K    124       12      3968K unraid/md
57216  55937  97%    0.06K    894       64      3576K kmalloc-64
  7900   7900 100%    0.31K    316       25      2528K inode_cache

Link to comment

Feature request: it would be nice if cache_dirs printed a one-time file count by directory into syslog at startup, covering all files cached by the script. Display the normal script startup information, then a per-directory file count list for what it is trying to cache based on the startup parameters. No need to repeat the counts; just once at startup.
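
A rough sketch of what that startup report could look like if it were added to the script; the include_dirs variable is assumed for illustration and is not the script's actual internal name:

# hypothetical: log a one-time per-directory file count to syslog at startup
for dir in "${include_dirs[@]}"; do
  count=$(find /mnt/disk*/"$dir" -type f 2>/dev/null | wc -l)
  logger -t cache_dirs "startup: caching '$dir' ($count files)"
done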

 

Link to comment

Running 5.0.5; the only things I have installed are unmenu, vim, smtp, and the shutdown + status script. I do have a cache drive.

...

Are my folders too large for cache_dirs?

basically, you ran out of memory.
Link to comment

Do we believe this is a bug (unlikely), or is cache_dirs really eating up multiple gigs of RAM? I thought the last time we did the calculations for this, we worked out that RAM usage should never be that high.

 

 

Joe, perhaps you can shed some light.

 

Personally I have 10 disks plus a cache drive. I am running cache_dirs with the only change being commenting out the ulimit -v 5000 line.

 

I have 16 GB of RAM; all told, all plugins (nzbget, Sickbeard, CouchPotato, Plex) plus cache_dirs are using just over 2 GB of RAM total.

Link to comment

basically, you ran out of memory.

 

So cache_dirs took up all 4 GB? Is there a way to limit how much it can use?

I tried running it again on just my Movies share and it did the same thing.

 

I was being a bit blind, not reading the actual data.

 

47063  47063 100%    0.41K   2477       19     19816K reiser_inode_cache

 

Unless I fundamentally don't understand something, cache_dirs is not even using 1% of available RAM.
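
For scale, a rough back-of-envelope from the CACHE SIZE column of the slabtop output posted earlier:

reiser_inode_cache  19816K
dentry              15216K
fuse_inode          16760K
                    ------
                    ~51 MB total, i.e. on the order of 1% of the 4 GB of RAM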

 

 

Link to comment

Just started a new cache_dirs process (as root) at the top of the hour; this time I'm doing: cache_dirs -w -i "Movies"

 

Running top (as root) with 30sec delay, sorted by memory (M), showing threads (H).

 

After 11mins in:

top - 11:11:36 up 4 days, 39 min,  1 user,  load average: 0.68, 0.85, 0.49
Tasks: 139 total,   1 running, 138 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.2%us,  0.4%sy,  0.0%ni, 98.4%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4026904k total,  2868600k used,  1158304k free,   148744k buffers
Swap:        0k total,        0k used,        0k free,  2521436k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND 
2973 zoggy     20   0 22208 9068 7360 S    0  0.2   2:13.65 smbd 
3603 root      20   0 21184 7940 6616 S    0  0.2   0:59.29 smbd 
2533 root      20   0 16552 3872 3140 S    0  0.1   0:00.31 smbd 
2933 root      20   0  4820 3536  940 S    0  0.1   0:00.10 awk 
1253 root      20   0  8196 2924 2240 S    0  0.1   0:06.99 ntpd 
3175 root      20   0 14232 2328  576 S    0  0.1   0:00.00 shfs 
3176 root      20   0 14232 2328  576 S    0  0.1   0:44.63 shfs 
3177 root      20   0 14232 2328  576 S    0  0.1   0:44.87 shfs 
3183 root      20   0 14232 2328  576 S    0  0.1   0:42.44 shfs 
5017 root      20   0 14232 2328  576 S    0  0.1   0:38.95 shfs 
2516 root      20   0 10004 2036 1480 S    0  0.1   0:01.96 nmbd 
17536 root      20   0  6876 1948 1496 S    0  0.0   0:00.08 in.telnetd 
2579 root      20   0 16552 1932 1200 S    0  0.0   0:00.33 smbd 
17537 root      20   0  4304 1720 1296 S    0  0.0   0:00.02 bash 
2572 avahi     20   0  3052 1588 1348 S    0  0.0   0:00.03 avahi-daemon 
1288 root      20   0  5768 1492 1164 S    0  0.0   0:00.46 emhttp 
2606 root      20   0  5768 1492 1164 S    0  0.0   0:39.45 emhttp 
3166 root      20   0 10284 1264  564 S    0  0.0   0:00.00 shfs 
3167 root      20   0 10284 1264  564 S    0  0.0   0:24.55 shfs 
3168 root      20   0 10284 1264  564 S    0  0.0   0:28.35 shfs 
6465 root      20   0 10284 1264  564 S    0  0.0   0:28.14 shfs 
22744 root      20   0  2472 1100  824 R    0  0.0   0:00.51 top 
  799 root      16  -4  2476 1032  496 S    0  0.0   0:00.04 udevd 
17592 root      20   0  4208  920  564 S    0  0.0   0:00.17 cache_dirs 
17593 root      20   0  4208  920  564 S    0  0.0   0:00.17 cache_dirs 
17595 root      20   0  4208  920  564 S    0  0.0   0:00.17 cache_dirs 
17597 root      20   0  4208  920  564 S    0  0.0   0:00.17 cache_dirs 
17599 root      20   0  4208  920  564 S    0  0.0   0:00.18 cache_dirs 
17601 root      20   0  4208  920  564 S    0  0.0   0:00.17 cache_dirs 
17602 root      20   0  4208  920  564 S    0  0.0   0:00.17 cache_dirs 
17603 root      20   0  4208  920  564 S    0  0.0   0:00.17 cache_dirs 
17604 root      20   0  4208  920  564 S    0  0.0   0:00.17 cache_dirs 
17605 root      20   0  4208  920  564 S    0  0.0   0:00.17 cache_dirs 
17625 root      20   0  4268  900  484 S    0  0.0   0:00.73 cache_dirs 

 

-- trimmed the top output to the first 0.0 mem item to reduce unneeded lines --

 

After 1h in:

top - 12:01:18 up 4 days,  1:29,  1 user,  load average: 0.88, 0.53, 0.50
Tasks: 139 total,   1 running, 138 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.5%us,  0.5%sy,  0.0%ni, 97.8%id,  0.0%wa,  0.0%hi,  0.2%si,  0.0%st
Mem:   4026904k total,  2869684k used,  1157220k free,   148744k buffers
Swap:        0k total,        0k used,        0k free,  2521436k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND 
2973 root      20   0 22208 9068 7360 S    0  0.2   2:14.03 smbd 
3603 root      20   0 21184 7940 6616 S    0  0.2   0:59.30 smbd 
2533 root      20   0 16552 3872 3140 S    0  0.1   0:00.31 smbd 
2933 root      20   0  4820 3536  940 S    0  0.1   0:00.10 awk 
1253 root      20   0  8196 2924 2240 S    0  0.1   0:07.06 ntpd 
3175 root      20   0 14232 2328  576 S    0  0.1   0:00.00 shfs 
3176 root      20   0 14232 2328  576 S    0  0.1   0:44.70 shfs 
3177 root      20   0 14232 2328  576 S    0  0.1   0:44.95 shfs 
3183 root      20   0 14232 2328  576 S    0  0.1   0:42.54 shfs 
5017 root      20   0 14232 2328  576 S    0  0.1   0:39.07 shfs 
2516 root      20   0 10004 2036 1480 S    0  0.1   0:01.99 nmbd 
17536 root      20   0  6876 1948 1496 S    0  0.0   0:00.10 in.telnetd 
2579 root      20   0 16552 1932 1200 S    0  0.0   0:00.33 smbd 
17537 root      20   0  4304 1720 1296 S    0  0.0   0:00.02 bash 
2572 avahi     20   0  3052 1588 1348 S    0  0.0   0:00.03 avahi-daemon 
1288 root      20   0  5768 1492 1164 S    0  0.0   0:00.46 emhttp 
2606 root      20   0  5768 1492 1164 S    0  0.0   0:39.82 emhttp 
17625 root      20   0  4692 1324  484 S    0  0.0   0:09.42 cache_dirs 
3166 root      20   0 10284 1264  564 S    0  0.0   0:00.00 shfs 
3167 root      20   0 10284 1264  564 S    0  0.0   0:24.55 shfs 
3168 root      20   0 10284 1264  564 S    0  0.0   0:28.35 shfs 
6465 root      20   0 10284 1264  564 S    0  0.0   0:28.14 shfs 
22744 root      20   0  2472 1100  824 R    0  0.0   0:00.97 top 
  799 root      16  -4  2476 1032  496 S    0  0.0   0:00.04 udevd 
17592 root      20   0  4244  960  568 S    0  0.0   0:00.88 cache_dirs 
17593 root      20   0  4244  960  568 S    0  0.0   0:00.87 cache_dirs 
17595 root      20   0  4244  960  568 S    0  0.0   0:00.89 cache_dirs 
17597 root      20   0  4244  960  568 S    0  0.0   0:00.89 cache_dirs 
17599 root      20   0  4244  960  568 S    0  0.0   0:00.90 cache_dirs 
17601 root      20   0  4244  960  568 S    0  0.0   0:00.88 cache_dirs 
17602 root      20   0  4244  960  568 S    0  0.0   0:00.88 cache_dirs 
17603 root      20   0  4244  960  568 S    0  0.0   0:00.90 cache_dirs 
17604 root      20   0  4244  960  568 S    0  0.0   0:00.88 cache_dirs 
17605 root      20   0  4244  960  568 S    0  0.0   0:00.88 cache_dirs 

 

After 7 hours in:

top - 18:11:06 up 4 days,  7:39,  1 user,  load average: 0.73, 0.75, 0.71
Tasks: 140 total,   2 running, 138 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.5%sy,  1.2%ni, 98.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4026904k total,  2940248k used,  1086656k free,   148860k buffers
Swap:        0k total,        0k used,        0k free,  2591688k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND 
2973 zoggy     20   0 22208 9068 7360 S    0  0.2   2:17.05 smbd 
3603 root      20   0 21184 7940 6616 S    0  0.2   0:59.37 smbd 
2533 root      20   0 16552 3872 3140 S    0  0.1   0:00.34 smbd 
2933 root      20   0  4820 3536  940 S    0  0.1   0:00.10 awk 
15503 root      20   0 16884 3360 2592 S    0  0.1   0:00.02 smbd 
1253 root      20   0  8196 2924 2240 S    0  0.1   0:07.61 ntpd 
3175 root      20   0 14228 2144  576 S    0  0.1   0:00.00 shfs 
3176 root      20   0 14228 2144  576 S    0  0.1   0:45.33 shfs 
3177 root      20   0 14228 2144  576 S    0  0.1   0:45.63 shfs 
3183 root      20   0 14228 2144  576 S    0  0.1   0:43.16 shfs 
5017 root      20   0 14228 2144  576 S    0  0.1   0:39.68 shfs 
2516 root      20   0 10004 2036 1480 S    0  0.1   0:02.17 nmbd 
17536 root      20   0  6876 1948 1496 S    0  0.0   0:00.23 in.telnetd 

 

After 12 hours in:

top - 23:15:17 up 4 days, 12:43,  1 user,  load average: 0.21, 0.52, 0.50
Tasks: 140 total,   2 running, 138 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.3%sy,  1.3%ni, 98.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4026904k total,  3894748k used,   132156k free,   146048k buffers
Swap:        0k total,        0k used,        0k free,  3541400k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
2973 zoggy     20   0 22208 9092 7384 S    0  0.2   2:20.72 smbd
3603 root      20   0 21184 8136 6812 S    0  0.2   1:15.47 smbd
2533 root      20   0 16552 3872 3140 S    0  0.1   0:00.35 smbd
2933 root      20   0  4820 3536  940 S    0  0.1   0:00.10 awk
15503 root      20   0 16884 3360 2592 S    0  0.1   0:00.05 smbd
3175 root      20   0 15416 3232  576 S    0  0.1   0:00.00 shfs
3176 root      20   0 15416 3232  576 S    0  0.1   0:48.71 shfs
3177 root      20   0 15416 3232  576 S    0  0.1   0:48.87 shfs
3183 root      20   0 15416 3232  576 S    0  0.1   0:46.52 shfs
5017 root      20   0 15416 3232  576 S    0  0.1   0:43.15 shfs
1253 root      20   0  8196 2924 2240 S    0  0.1   0:08.06 ntpd
2516 root      20   0 10004 2036 1480 S    0  0.1   0:02.33 nmbd
17536 root      20   0  6876 1948 1496 S    0  0.0   0:00.34 in.telnetd

 

After 12 hours in (slabtop):

# slabtop -s c
Active / Total Objects (% used)    : 335034 / 590252 (56.8%)
Active / Total Slabs (% used)      : 12385 / 12385 (100.0%)
Active / Total Caches (% used)     : 54 / 99 (54.5%)
Active / Total Size (% used)       : 56141.15K / 71910.02K (78.1%)
Minimum / Average / Maximum Object : 0.01K / 0.12K / 16.12K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME 
337771  95197  28%    0.05K   4627       73     18508K buffer_head
41522  38448  92%    0.30K   1597       26     12776K radix_tree_node
20216  20216 100%    0.41K   1064       19      8512K reiser_inode_cache
61088  59784  97%    0.12K   1909       32      7636K dentry
13032  13032 100%    0.44K    724       18      5792K fuse_inode
  1488   1488 100%    2.58K    124       12      3968K unraid/md
  7875   7875 100%    0.31K    315       25      2520K inode_cache

 

After 15 hours in:

top - 02:29:32 up 4 days, 15:57,  2 users,  load average: 0.40, 0.30, 0.42
Tasks: 143 total,   1 running, 142 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  1.0%sy,  2.6%ni, 96.4%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4026904k total,  3896212k used,   130692k free,   147324k buffers
Swap:        0k total,        0k used,        0k free,  3534348k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
2973 root      20   0 22208 9092 7384 S    0  0.2   2:22.01 smbd
3603 root      20   0 21184 8228 6904 S    0  0.2   1:19.72 smbd
2533 root      20   0 16552 3872 3140 S    0  0.1   0:00.35 smbd
15503 root      20   0 21172 3748 2972 S    0  0.1   0:00.08 smbd
2933 root      20   0  4820 3536  940 S    0  0.1   0:00.10 awk
3175 root      20   0 15432 3000  576 S    0  0.1   0:00.00 shfs
3176 root      20   0 15432 3000  576 S    0  0.1   0:50.77 shfs
3177 root      20   0 15432 3000  576 S    0  0.1   0:50.89 shfs
3183 root      20   0 15432 3000  576 S    0  0.1   0:48.40 shfs
5017 root      20   0 15432 3000  576 S    0  0.1   0:45.09 shfs
1253 root      20   0  8196 2924 2240 S    0  0.1   0:08.35 ntpd
2516 root      20   0 10004 2036 1480 S    0  0.1   0:02.41 nmbd
8889 root      20   0  6876 1952 1496 S    0  0.0   0:00.03 in.telnetd
...

 

After 15 hours in (slabtop):

Active / Total Objects (% used)    : 334612 / 591238 (56.6%)
Active / Total Slabs (% used)      : 12390 / 12390 (100.0%)
Active / Total Caches (% used)     : 54 / 99 (54.5%)
Active / Total Size (% used)       : 56181.47K / 72017.39K (78.0%)
Minimum / Average / Maximum Object : 0.01K / 0.12K / 16.12K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
337552  93578  27%    0.05K   4624       73     18496K buffer_head
40976  38717  94%    0.30K   1576       26     12608K radix_tree_node
20349  20349 100%    0.41K   1071       19      8568K reiser_inode_cache
61184  60592  99%    0.12K   1912       32      7648K dentry
13176  13176 100%    0.44K    732       18      5856K fuse_inode
  1488   1488 100%    2.58K    124       12      3968K unraid/md
  7875   7875 100%    0.31K    315       25      2520K inode_cache

 

You can see the free memory dwindling away... but where is it going? Just a memory leak in cache_dirs?

 

After 27 hours in:

top - 14:16:19 up 5 days,  3:44,  2 users,  load average: 1.01, 1.18, 1.16
Tasks: 142 total,   1 running, 141 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.7%sy,  1.5%ni, 97.8%id,  0.0%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:   4026904k total,  3887944k used,   138960k free,   147324k buffers
Swap:        0k total,        0k used,        0k free,  3534360k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
2973 zoggy     20   0 22208 9188 7480 S    0  0.2   2:30.72 smbd
3603 root      20   0 21184 8232 6908 S    0  0.2   1:19.86 smbd
2533 root      20   0 16552 3872 3140 S    0  0.1   0:00.38 smbd
15503 root      20   0 21172 3752 2976 S    0  0.1   0:00.15 smbd
2933 root      20   0  4820 3536  940 S    0  0.1   0:00.10 awk
3175 root      20   0 15436 3000  576 S    0  0.1   0:00.00 shfs
3176 root      20   0 15436 3000  576 S    0  0.1   0:52.56 shfs
3177 root      20   0 15436 3000  576 S    0  0.1   0:52.85 shfs
3183 root      20   0 15436 3000  576 S    0  0.1   0:50.27 shfs
5017 root      20   0 15436 3000  576 S    0  0.1   0:46.92 shfs
1253 root      20   0  8196 2924 2240 S    0  0.1   0:09.39 ntpd
2516 root      20   0 10004 2036 1480 S    0  0.1   0:02.75 nmbd
8889 root      20   0  6876 1952 1496 S    0  0.0   0:00.06 in.telnetd

 

Active / Total Objects (% used)    : 336508 / 591608 (56.9%)
Active / Total Slabs (% used)      : 12397 / 12397 (100.0%)
Active / Total Caches (% used)     : 54 / 99 (54.5%)
Active / Total Size (% used)       : 56640.92K / 72104.58K (78.6%)
Minimum / Average / Maximum Object : 0.01K / 0.12K / 16.12K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME 
337552  93578  27%    0.05K   4624       73     18496K buffer_head
40976  38717  94%    0.30K   1576       26     12608K radix_tree_node
20349  20349 100%    0.41K   1071       19      8568K reiser_inode_cache
61408  60724  98%    0.12K   1919       32      7676K dentry
13194  13194 100%    0.44K    733       18      5864K fuse_inode
  1488   1488 100%    2.58K    124       12      3968K unraid/md
  7875   7875 100%    0.31K    315       25      2520K inode_cache

 

So this is the longest I've seen it last. The only difference this time is that I did NOT use the -F option, so perhaps -F is causing it to use more memory than it should, or to leak. My mover script ran at 07:30, which might have helped as well.

 

After 45 hours in:

top - 08:17:00 up 5 days, 21:45,  2 users,  load average: 0.27, 0.32, 0.35
Tasks: 145 total,   1 running, 144 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.9%sy,  2.3%ni, 96.8%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4026904k total,  3085816k used,   941088k free,   182972k buffers
Swap:        0k total,        0k used,        0k free,  2678436k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
2973 zoggy     20   0 22208 9272 7504 S    0  0.2   3:10.45 smbd
3603 root      20   0 21184 8344 7016 S    0  0.2   2:02.43 smbd
2533 root      20   0 16552 3872 3140 S    0  0.1   0:00.44 smbd
15503 root      20   0 21172 3752 2976 S    0  0.1   0:00.26 smbd
2933 root      20   0  4820 3536  940 S    0  0.1   0:00.10 awk
3175 root      20   0 20152 3128  576 S    0  0.1   0:00.00 shfs
3176 root      20   0 20152 3128  576 S    0  0.1   1:04.68 shfs
3177 root      20   0 20152 3128  576 S    0  0.1   1:04.62 shfs
3183 root      20   0 20152 3128  576 S    0  0.1   1:02.17 shfs
5017 root      20   0 20152 3128  576 S    0  0.1   0:58.80 shfs
9873 root      20   0 20152 3128  576 S    0  0.1   0:11.37 shfs
1869 root      20   0 20152 3128  576 S    0  0.1   0:07.75 shfs
1253 root      20   0  8196 2924 2240 S    0  0.1   0:11.01 ntpd
2516 root      20   0 10004 2060 1504 S    0  0.1   0:03.47 nmbd
8889 root      20   0  6876 1952 1496 S    0  0.0   0:00.12 in.telnetd

 

Active / Total Objects (% used)    : 651626 / 722360 (90.2%)
Active / Total Slabs (% used)      : 14941 / 14941 (100.0%)
Active / Total Caches (% used)     : 54 / 99 (54.5%)
Active / Total Size (% used)       : 79185.53K / 85171.41K (93.0%)
Minimum / Average / Maximum Object : 0.01K / 0.12K / 16.12K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME 
426028 375540  88%    0.05K   5836       73     23344K buffer_head
45110  41700  92%    0.30K   1735       26     13880K radix_tree_node
27455  27455 100%    0.41K   1445       19     11560K reiser_inode_cache
74912  72059  96%    0.12K   2341       32      9364K dentry
16452  16452 100%    0.44K    914       18      7312K fuse_inode
  1488   1488 100%    2.58K    124       12      3968K unraid/md
  7875   7875 100%    0.31K    315       25      2520K inode_cache

 

You can see it's getting closer and closer to running out of memory...

 

Link to comment

Just started a new cache_dirs process (as root) at the top of the hour; this time I'm doing: cache_dirs -w -i "Movies"

You can see the free memory dwindling away... but where is it going? Just a memory leak in cache_dirs?

Since cache_dirs is just a shell script, it is basically doing this in your example:

 

create a semaphore file   (removing it, via "cache_dirs -q", causes cache_dirs to stop)

while the semaphore file exists
do
    s=$(date +%s)                           # start time, to track how long the "find" pass takes
    find /mnt/disk1/Movies >/dev/null 2>/dev/null
    find /mnt/disk2/Movies >/dev/null 2>/dev/null
    find /mnt/disk3/Movies >/dev/null 2>/dev/null
    find /mnt/disk4/Movies >/dev/null 2>/dev/null
    f=$(date +%s)                           # finish time of the find pass
    d=$((f - s))                            # duration
    sleep for X seconds, based on the calculated duration
done

 

If the shell had a memory leak, it could be the culprit as it loops.  (Although possible, it is more likely that something else has the memory leak, since so many others use the shell, and such bugs typically get found and fixed by the Linux community.)

At one point (long ago) there was a limit on the number of child processes a shell could "fork."  That was addressed long ago by using a sub-shell.

 

It could just as easily be an issue in the kernel, or in the user-share file system (although the user shares are not normally being scanned by the "find" commands).

 

In any case, a memory leak would normally show as increased memory size of a given process.  Sorting as you have, by "M", should show the culprit.

 

Joe L.

Link to comment

Normally I do not concern myself with cache_dirs. It has nothing to do with the code or anything like that.

 

It's that I find it only partially able to do what it's intended to do: it works until you reach a certain number of files.

With a 32-bit Linux kernel there is only so much low memory you can use before you start having issues.

 

This manifests itself in a number of ways, which sometimes look like a memory leak.

 

I remember a while back a member was having trouble when running the mover script.

In that case we added directives to drop the cache before and after the mover ran.

It resolved his problem.

 

If you are using a cache disk with the mover, this may help. The downside is that all your disks will spin up once a day.

It is a good time to do a collective file list for each disk.

Then drop the cache, then let cache_dirs reload it and let the disks spin down.
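
A minimal sketch of that idea, assuming the file lists are written to a folder on the flash drive (the paths are illustrative):

mkdir -p /boot/filelists
for d in /mnt/disk*; do
  find "$d" > "/boot/filelists/$(basename "$d").txt" 2>/dev/null   # one collective file list per data disk
done
sync && echo 1 > /proc/sys/vm/drop_caches   # then drop the cache, as described above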

 

In my own case, I had so many small files on so many disks that I would get an out-of-memory error every night.

cache_dirs aggravated that.  Once I turned off cache_dirs and dropped the cache after my nightly rsyncs, the problem went away.

If I happened to execute two of those massive rsyncs in parallel, I would get out-of-memory errors again.

 

I see that a du listing was posted with the folder sizes. It's not the size of the data, but the count of the files.

I had millions and millions of files.  While many people here collect movies, I collect music.

That results in many more files of a smaller size.

 

Hundreds of thousands of music files.

I also collect source code for review: expand it, review it, learn from it, leave it there.

 

My rsync_linked_backup solution worked really well, but since every dated directory had files hardlinked to the previous backup, it looked like many more files than actually existed.  You can imagine the growing count of source code files that are constantly linked to prior directories every day.

 

So it's more about the number of files and how many times you wade through them, versus what other daily activities are caching data.

Link to comment

Just started a new cache_dirs process (as root) at the top of the hour; this time I'm doing: cache_dirs -w -i "Movies"

...

In any case, a memory leak would normally show as increased memory size of a given process.  Sorting as you have, by "M", should show the culprit.

Joe L.

 

Just updated my post above with the values at 27 hours in.

 

So right now, where has all the memory gone? (I don't see any process whose reported memory actually accounts for the allocation.)

I don't run other apps/plugins (besides unmenu; I do have the status/powerdown scripts installed, but obviously those aren't running).

 

Link to comment

Base (45 hours running cache_dirs):

top - 08:17:00 up 5 days, 21:45,  2 users,  load average: 0.27, 0.32, 0.35
Tasks: 145 total,   1 running, 144 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.9%sy,  2.3%ni, 96.8%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4026904k total,  3085816k used,   941088k free,   182972k buffers
Swap:        0k total,        0k used,        0k free,  2678436k cached

 

Base (47 hours running cache_dirs):

Active / Total Objects (% used)    : 648683 / 717451 (90.4%)
Active / Total Slabs (% used)      : 14877 / 14877 (100.0%)
Active / Total Caches (% used)     : 54 / 99 (54.5%)
Active / Total Size (% used)       : 79111.36K / 84939.12K (93.1%)
Minimum / Average / Maximum Object : 0.01K / 0.12K / 16.12K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME                   
426028 375540  88%    0.05K   5836       73     23344K buffer_head
45110  41699  92%    0.30K   1735       26     13880K radix_tree_node
27455  27455 100%    0.41K   1445       19     11560K reiser_inode_cache
74912  72241  96%    0.12K   2341       32      9364K dentry
16452  16452 100%    0.44K    914       18      7312K fuse_inode
  1488   1488 100%    2.58K    124       12      3968K unraid/md
  7875   7875 100%    0.31K    315       25      2520K inode_cache

 

Ran:

# cat /proc/sys/vm/drop_caches
0
echo 1 > /proc/sys/vm/drop_caches
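
For reference, the kernel accepts three values here: 1 frees the page cache, 2 frees dentries and inodes (the entries cache_dirs tries to keep warm), and 3 frees both. A sync first flushes dirty pages so nothing pending is lost:

sync                                  # flush dirty pages first
echo 1 > /proc/sys/vm/drop_caches     # free the page cache only (what was run above)
# echo 2 > /proc/sys/vm/drop_caches   # free dentries and inodes instead
# echo 3 > /proc/sys/vm/drop_caches   # free both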

 

After:

top - 10:48:10 up 6 days, 16 min,  2 users,  load average: 0.16, 0.69, 0.55
Tasks: 145 total,   2 running, 143 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.4%sy,  1.1%ni, 98.5%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4026904k total,   491452k used,  3535452k free,    38608k buffers
Swap:        0k total,        0k used,        0k free,   237576k cached

 

Active / Total Objects (% used)    : 260470 / 589736 (44.2%)
Active / Total Slabs (% used)      : 13075 / 13075 (100.0%)
Active / Total Caches (% used)     : 54 / 99 (54.5%)
Active / Total Size (% used)       : 51250.83K / 77518.22K (66.1%)
Minimum / Average / Maximum Object : 0.01K / 0.13K / 16.12K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME                   
299884  13896   4%    0.05K   4108       73     16432K buffer_head
43602  14627  33%    0.30K   1677       26     13416K radix_tree_node
27455  27455 100%    0.41K   1445       19     11560K reiser_inode_cache
74912  71904  95%    0.12K   2341       32      9364K dentry
16452  16452 100%    0.44K    914       18      7312K fuse_inode
  1488   1488 100%    2.58K    124       12      3968K unraid/md
  7875   7875 100%    0.31K    315       25      2520K inode_cache

 

Notice that memory usage dropped quite a bit... so if this is the 'magic' fix, why doesn't cache_dirs just do this around the time the mover script runs?

Link to comment

Notice that memory usage dropped quite a bit... so if this is the 'magic' fix why doesn't cache_dirs just do this during the mover script time?

 

 

It probably doesn't belong there.

It belongs in the mover script and only belongs there if you are running cache_dirs AND/OR you have a huge number of files.
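
As a rough sketch of that placement (the mover path is illustrative and varies by unRAID version):

sync && echo 1 > /proc/sys/vm/drop_caches   # drop caches before the nightly move
/usr/local/sbin/mover                       # run the normal mover (path assumed)
sync && echo 1 > /proc/sys/vm/drop_caches   # and drop them again afterwards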

 

 

I had issues without running cache_dirs. So did another member.

Link to comment

In simple terms, where do we believe the RAM is going?

 

I understand cache_dirs wants to keep the file list (folder list?) in RAM, but almost 4 GB of RAM for ~940 GB of movies seems a bit excessive.

 

:/mnt/user/Movies# du -h --max-depth=1
936G	./Movies-SD
44G	./Movies-Asian
68G	./Movies-Ghibli
9.4G	./Movies-Stand-Up

:/mnt/user/Movies# du -a | cut -d/ -f2 | sort | uniq -c | sort -nr
   6101 Movies-SD
    899 Movies-HD
    179 Movies-Asian
     60 Movies-Ghibli
     41 Movies-Stand-Up

Link to comment

In simple terms, where do we believe the RAM is going?

 

I understand cache_dirs wants to keep the file list (folder list?) in RAM, but almost 4 GB of RAM for ~940 GB of movies seems a bit excessive.

 

It's not how large the data set is, it's how many files.

When the kernel tries to throw away the dentries, cache_dirs is there forcing them to stay active.

It's doing what it was intended to do.

I don't have the code in front of me, but chances are it also adjusts the kernel so that dentries are preferred to stay in RAM.
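
The usual knob for that preference is vm.vfs_cache_pressure; whether cache_dirs actually sets it is only a guess here:

sysctl vm.vfs_cache_pressure          # show the current value (the kernel default is 100)
sysctl -w vm.vfs_cache_pressure=10    # example: make the kernel far more reluctant to reclaim dentries/inodes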

 

I think it's an architecture limitation. It's an issue with low memory. When you run out of low memory, you start to get OOM errors.

It's kernel tunings along with the md driver tunings.

I'm hoping the x64 architecture will resolve this issue.

 

I had 8GB and I had the same issues with a huge number of files.

 

Another person had problems with an app and running out of memory. Here's the thread; you can also see what I put in my rsync script.

http://lime-technology.com/forum/index.php?topic=28920.msg258077#msg258077

 

And here's another person who put the drop_caches line in the mover script.

http://lime-technology.com/forum/index.php?topic=29798.msg267760#msg267760

Link to comment

It's not how large the data set is, it's how many files.

...

I think it's an architecture limitation. It's an issue with low memory. When you run out of low memory, you start to get OOM errors.

 

Right, which is why I posted how many unique files were in the folders; there aren't that many folders or files.

I just checked again, and per the share it says there are 6,054 files and 1,252 folders.
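
For scale, using the per-object sizes from the slabtop output earlier in the thread, those entries would pin roughly:

7,306 entries x (0.12K dentry + 0.41K reiser inode + 0.44K fuse inode) ≈ 7,306 x 0.97K ≈ 7 MB

which is nowhere near 4 GB.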

Wasting so much memory and increasing CPU load non-stop, with a chance of OOM (no swap), just does not seem worth it in the end.

 

Oh well, maybe 6.x will bring some optimizations so this sort of hack isn't needed anyway. Thanks for all the feedback, but I'm giving up on this for now.

Link to comment

...

I think it's an architecture limitation. It's an issue with low memory. When you run out of low memory, you start to get OOM errors.

...

 

Right, that's the missing fact that had me head-scratching. Nice one, WeeboTech.

 

zoggy, the obvious next step is for you to try the 64-bit beta and see if the issue can be replicated. Is this viable?

Link to comment
