look into min_free_kbytes



Unfortunately, my test system (a new ESXi build that my live server is going to move to) is based on 2 M1015's in passthrough with 2 HDs, 1 data and 1 parity, and the only plugin is unMENU.  I don't run NFS or AFP, as my setup doesn't require them, so I can only test the things I use it for.  One thing I must note, though I'm really not sure if I'm just seeing things: the system is much more responsive, and it goes offline and online much quicker.

 

One modification that I cannot test: on my live system I need to run this command in my go file, or I get "page allocation failure" messages flooding the syslog when preclearing drives:

echo 65536 > /proc/sys/vm/min_free_kbytes

 

Has the "page allocation failure" been an issue with the later builds when doing preclears?  I don't have a spare disk to put into the system and run a test preclear to find out.
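For reference, a sketch of how that workaround might sit in the go file. The 65536 value is the one quoted above; whether it suits a given system is something each setup would need to verify, and the setting does not persist across reboots, which is why it lives in the go file:

```shell
# Excerpt for the unRAID 'go' file (the 65536 value comes from this
# thread; tune it for your own system).

# Show the current reserve before changing it:
cat /proc/sys/vm/min_free_kbytes

# Raise the kernel's minimum free-memory reserve to 64 MB (65536 kB) so
# atomic allocations during heavy I/O (e.g. preclears) stop failing with
# "page allocation failure":
echo 65536 > /proc/sys/vm/min_free_kbytes

# Equivalent sysctl form:
# sysctl -w vm.min_free_kbytes=65536
```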


LOL, well I had this one-liner in the issue list I maintain:

- look into min_free_kbytes

But I couldn't remember what it was for... until now.  This change has not been made in -rc15, but I'll put it into the next release (which hopefully will be 'final').


 

Hahahaha, well at least I can now say I helped a little :)  The issue really only showed up for me when preclearing a drive while also doing other file transfers to and from the server.


What I've decided to do is not make this change in the 5.0 release.  Very few people run into this problem, and it's pretty simple to fix with a custom line in the 'go' file.  If I change this tunable right now and it causes some other issue, it's going to delay 5.0 even more.  So for now this is marked as an "enhancement request".


I had problems with a VM on VirtualBox (under unRAID) and had to change vm.min_free_kbytes to 128 MB, because every time I tried to copy files across the network I got an error from the VM's network adapter saying it couldn't copy because there was not enough memory for the network buffer:

 

sysctl -w vm.min_free_kbytes=131072

 

The unRAID default was 3806 kB (about 3.7 MB).
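For scale, the two values in this post work out as follows (simple kB-to-MB arithmetic; the 3806 default is the figure quoted above):

```shell
default_kb=3806     # unRAID default reported in this post
raised_kb=131072    # value set with sysctl above

# min_free_kbytes is in kB, so divide by 1024 to get MB:
echo "raised:  $((raised_kb / 1024)) MB"    # prints "raised:  128 MB"
awk -v kb="$default_kb" 'BEGIN { printf "default: %.2f MB\n", kb/1024 }'
```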

 


That makes sense about leaving the change out of the 5.0 release.  I put it here for tracking purposes.
