CHBMB Posted November 16, 2016

> Just thought I'd show what happens on my system when copying a lot of small files. As expected, atrocious speeds.
> Yep, I get it now... Thanks!

It wasn't so much driving home a point, more like "I understand your suffering".
jeffreywhunter (Author) Posted November 16, 2016

> It wasn't so much driving home a point, more like "I understand your suffering"

Yep, while I'd like it to be faster, there's comfort in knowing "it is what it is" and that this behavior is normal.
CHBMB Posted November 16, 2016

FWIW, it actually got down to about 100 KB/s at some point, damn Plex metadata. I abandoned it and just recreated my library from scratch...
RobJ Posted November 16, 2016

For those suffering from these slowdowns, I'd be interested to know whether adjusting the kernel disk-caching parameters (the vm.dirty_* settings) makes a positive difference, especially for users with very large amounts of RAM. It may be over-caching: queues filling with high numbers of requests and not flushing out as quickly as needed. I'd try decreasing the numbers from 10 and 20 to 2 and 5, or similar, and test again. You can use the Tips and Tweaks plugin to experiment with the numbers. For reference, see this post and its reference links.
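For anyone who prefers the command line over the Tips and Tweaks plugin, a minimal sketch of the same experiment using the standard `sysctl` tool (run as root; the 2/5 values are just the test numbers suggested above, not recommendations):

```shell
# Show the current kernel disk-cache thresholds (stock defaults are 10/20).
# These are percentages of total RAM that may hold dirty (unflushed) pages.
sysctl vm.dirty_background_ratio vm.dirty_ratio

# Lower them for a test run so dirty pages are flushed to disk sooner
# instead of piling up in RAM. This change does not survive a reboot.
sysctl -w vm.dirty_background_ratio=2
sysctl -w vm.dirty_ratio=5
```

If the copy speeds improve, the values can be re-tested at 5/10 to find a middle ground; reverting to 10/20 (or rebooting) restores the defaults.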
jeffreywhunter (Author) Posted November 17, 2016

What numbers would you recommend for a 24GB system? My stats look like this:
http://my.jetscreenshot.com/12412/20161117-1rud-18kb.jpg
http://my.jetscreenshot.com/12412/20161117-qthp-29kb.jpg

Here's my Tips and Tweaks. I don't do anything with VMs at the moment. Would vm.dirty_background_ratio and vm.dirty_ratio not have an effect then?
http://my.jetscreenshot.com/12412/20161117-jyiq-36kb.jpg

And if it's any use, vmstat...
http://my.jetscreenshot.com/12412/20161117-8uc4-16kb.jpg
RobJ Posted November 17, 2016

> What numbers would you recommend for a 24GB system?

I'd try changing 10 and 20 to 2 and 5 for one test, and to 5 and 10 for another test. But I could be wrong, and it may not affect anything relevant to your slowness.

> Here's my Tips and Tweaks. I don't do anything with VMs at the moment. Would vm.dirty_background_ratio and vm.dirty_ratio not have an effect then?

The vm.dirty_* variables don't have anything to do with VMs; they control the caching buffers for regular Linux disk I/O.
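To see why these ratios matter more on large-memory boxes, a quick back-of-the-envelope sketch: the settings are percentages of total RAM, so on a 24 GB system the defaults allow gigabytes of writes to sit in cache before the kernel forces them to disk (the 24 GB figure is taken from the question above; shell integer arithmetic truncates the fractions):

```shell
# How much dirty (unflushed) write cache each ratio permits on a 24 GB box
ram_mb=$((24 * 1024))                 # 24576 MB of RAM

bg_default=$((ram_mb * 10 / 100))     # vm.dirty_background_ratio=10 (default)
hard_default=$((ram_mb * 20 / 100))   # vm.dirty_ratio=20 (default)
bg_test=$((ram_mb * 2 / 100))         # suggested test value 2
hard_test=$((ram_mb * 5 / 100))       # suggested test value 5

echo "defaults:  background flush at ${bg_default} MB, writers stall at ${hard_default} MB"
echo "test vals: background flush at ${bg_test} MB, writers stall at ${hard_test} MB"
```

With the defaults, roughly 2.4 GB of small-file writes can accumulate before background writeback even starts, which is consistent with copies that look fast at first and then crawl once the cache drains to slow disks.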