IamSpartacus Posted November 15, 2016
Strange issue here. I can't get SMB transfers from a single disk on UnRAID01 to the cache on UnRAID02 to exceed 50MB/s. Cache to cache I'm exceeding 400MB/s, and if I run, say, 4 simultaneous transfers from 4 different disks on UnRAID01 to the cache on UnRAID02, I get 50MB/s per disk (200MB/s overall). With iperf3 between servers I get over 8GB/s (I'll troubleshoot this at a later date). At first I thought maybe a bad disk (my disks are rated at 190MB/s reads), but since all disks, including some different models, are behaving the same, I doubt it's a disk issue. Any ideas?
P.S. I'm getting identical behavior both ways, whether I go UnRAID01 -> UnRAID02 or UnRAID02 -> UnRAID01. I've confirmed all shares on both servers are configured to use the cache. Both servers are running 6.2.4.
EDIT: I just tried rsyncing /mnt/user/Share to /mnt/user/Share and I'm getting even worse performance (sub 20MB/s).
TSM Posted November 16, 2016
If the 4 data drives in UnRAID01 are connected to a slow drive controller, that could explain everything. I'm talking about SATA speed: SATA III, SATA II, SATA I, etc. Is the cache drive connected to the same controller as the other drives? Even if the drives are connected to SATA ports on the motherboard, it's very common for some ports to run at a slower speed than others; this is true even of expensive high-end boards sometimes. And if you are using a plug-in card, you will also want to check your motherboard documentation to be sure the slot is running at full speed. For example, a lot of manufacturers have a slot that physically looks like it's for a card of a certain PCIe bandwidth, but internally it runs at a slower speed.
gubbgnutten Posted November 16, 2016
You're not really providing us with anything to work with. Diagnostics would be useful.
> With iperf3 between servers I get over 8GB/s (I'll troubleshoot this at a later date).
What's to troubleshoot with the 8GB/s? Too fast? Too slow? You're not accidentally confusing Gb(it)/s for GB(yte)/s?
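As a unit sanity check (an illustrative sketch, not from the thread): dividing by 8 converts link bits to bytes, so a 10Gb/s link tops out near 1250MB/s, and an iperf3 reading of ~8Gb/s is plausible where 8GB/s would not be.

```shell
# Hedged sketch: theoretical ceiling in MB/s for a link speed in Gbit/s
# (decimal units; 8 bits per byte).
gbit_to_mbs() {
    echo $(( $1 * 1000 / 8 ))
}

gbit_to_mbs 10   # prints 1250 -> ceiling for a 10Gb/s link
gbit_to_mbs 8    # prints 1000 -> what an 8Gb/s iperf3 run implies
```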
JorgeB Posted November 16, 2016
> Strange issue here. I can't get SMB transfers from a single disk on UnRAID01 to cache on UnRAID02 to exceed 50MB/s.
I've been noticing this too for some time now. I need to investigate further, but it affects most/all of my servers, though it doesn't always happen, and sometimes the speed fluctuates from normal to slow during big transfers.
IamSpartacus Posted November 16, 2016
> If the 4 data drives in UnRAID01 are connected to a slow drive controller that could explain everything. [...]
All my drives are connected to an on-board LSI 2116 controller (SuperMicro Xeon D board) via SAS3-to-SATA3 breakout cables. I doubt the controller is my issue, especially since the cache drives are connected to the same controller and aren't seeing these limitations.
> What's to troubleshoot with the 8GB/s? Too fast? Too slow? You're not accidentally confusing Gb(it)/s for GB(yte)/s?
Yes, that was a typo, I meant 8Gb/s. With jumbo frames enabled I'm hoping to squeeze more performance out of my 10Gb connection at some point.
Diagnostics attached.
spe-unraid-diagnostics-20161116-0846.zip
JorgeB Posted November 20, 2016
I'm not 100% clear whether your issue happens when transferring unRAID to unRAID directly or when using a Windows desktop to make the transfer. If it's the latter, see this, and for a possible workaround try:
Settings -> SMB -> Samba extra configuration:
max protocol = SMB2_02
Click Apply, stop and restart the array, and see if the speed improves.
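For clarity, the workaround line goes into that box verbatim; a minimal sketch of what the extra configuration would contain (just the one parameter named above, which caps the negotiated SMB dialect at 2.0.2 for that server's shares):

```
# Settings -> SMB -> Samba extra configuration
max protocol = SMB2_02
```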
IamSpartacus Posted November 21, 2016
> I'm not 100% clear if your issue happens when transferring unRAID to unRAID directly or using a Windows desktop to make the transfer [...]
I've tried unRAID to unRAID directly using rsync and get even worse speeds (<20MB/s). But yes, mainly I'm using a Windows desktop to make the transfer. I will add those Samba extra settings, test, and report back. Thanks.
IamSpartacus Posted November 21, 2016
I set those SMB extra settings on both my servers but it did not help. Pretty frustrating to have 10Gb networking and not even be able to transfer at my disks' rated read speeds.
JorgeB Posted November 21, 2016
It works for me, so it's probably a different issue.
IamSpartacus Posted November 21, 2016
> It works for me so it's probably a different issue.
What Windows desktop version are you using to initiate the transfers?
JorgeB Posted November 21, 2016
See this post by Tom: http://lime-technology.com/forum/index.php?topic=53689.msg516544#msg516544
It helped me troubleshoot, as unRAID to unRAID worked at normal speed.
IamSpartacus Posted November 21, 2016
> See this post by Tom: http://lime-technology.com/forum/index.php?topic=53689.msg516544#msg516544 [...]
Thanks for the link. I will fool with this some more this week. However, if I can't make any progress I'm going to start testing SnapRAID to see if I get better performance with it. I can't live with such poor performance from my arrays given the amount of money I've put into my network.
JorgeB Posted November 21, 2016
Also, since your symptoms look so similar to mine, double-check that the workaround is working: connect to an unRAID share, open Windows PowerShell, and type:
Get-SmbConnection
Confirm that the SMB version in use (dialect) with unRAID is indeed 2.0.2.
IamSpartacus Posted November 21, 2016
> Confirm that SMB version in use with unRAID is indeed 2.0.2 [...]
Yup, confirmed 2.0.2.
IamSpartacus Posted November 22, 2016
> See this post by Tom: http://lime-technology.com/forum/index.php?topic=53689.msg516544#msg516544 [...]
Just did the test mentioned in the post you linked by Tom, and I got 62MB/s in the first test (network copy) vs. twice that speed (120MB/s) via the local storage copy.
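That local-vs-network comparison can be roughed out from the unRAID shell with a timing helper along these lines (an illustrative sketch, not from the thread: the paths are hypothetical, GNU stat/date are assumed, and page caching makes the numbers approximate):

```shell
# Hedged sketch: read a file sequentially and report approximate MB/s.
# Run it against a file on an array disk, then against the same file
# through an SMB mount, to compare local vs network read speed.
read_speed() {
    f=$1
    bytes=$(stat -c %s "$f")            # size in bytes (GNU stat)
    start=$(date +%s%N)                 # nanoseconds (GNU date)
    dd if="$f" of=/dev/null bs=1M 2>/dev/null
    end=$(date +%s%N)
    echo $(( bytes * 1000 / (end - start) ))  # bytes/ns * 1000 = MB/s
}

# self-contained demo on a 10MB scratch file
dd if=/dev/zero of=/tmp/readtest.bin bs=1M count=10 2>/dev/null
read_speed /tmp/readtest.bin            # prints an approximate MB/s figure
rm -f /tmp/readtest.bin
```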
JorgeB Posted November 22, 2016
So your issue is definitely different from mine; I was getting full speed on the unRAID-to-unRAID copy. Your problem looks like a network issue.
IamSpartacus Posted November 22, 2016
> So your issue is definitely different from mine [...] your problem looks like a network issue.
I don't see how this can be a networking issue when the slowness only happens when transferring data that has been moved to the protected array. If I only transfer data between my two cache pools I get much higher speeds.
JorgeB Posted November 23, 2016
It can also be a user share issue (user shares are usually slower than disk shares, due to FUSE overhead I assume, but they shouldn't be this slow). You can try copying from a disk share: enable disk shares and copy from e.g.:
\\tower\disk1
You can also do the direct unRAID test; use one of the disks as a share, e.g.:
mount //tower2/disk1 /x -o user=nobody