Drewster727

Members
  • Posts: 42
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed

Drewster727's Achievements

Rookie (2/14)

Reputation: 1

  1. I am rolling back to 6.9.x - not seeing any traction on this issue.
  2. tower-diagnostics-20220704-1022.zip
  3. Can confirm this issue on 6.10 -- I've also upgraded to each of the latest hotfix releases and am still having NFS issues. One of my client servers basically loses access to half the data, with empty folders in lots of places.
  4. @zandrsn hey dude -- I believe that's in the glances.config file that you reference.
  5. @klamath @johnnie.black I'm also running into this issue. I've got 22 drives in my array, most 4-6TB, plus one 8TB, and dual 8TB drives in parity. Your last message is confusing -- did you make a change that made a difference? My NFS file access is basically *useless* when running a parity check. I also seem to be capped at 75MB/s throughout the whole check, so it's slowwwww. My CPU never goes above 40% either. Also, this is on 6.4.1, so the UI is still responsive during checks; it's just that NFS access is atrocious. Thanks
  6. Try this: Slightly different issue, but could be the same solution.
  7. I had a similar issue; however, I was seeing a clear out-of-memory error in the logs (see the log-scan sketch after this list). You can check out the following post, get the Tips & Tweaks plugin, and try what fixed it for me: Worth a shot! -Drew
  8. Ok, so just to clarify: (1) turn off the array, (2) switch to maintenance mode (ensures no writes?), (3) swap the parity disks in the GUI, (4) let it rebuild, (5) once complete, exit maintenance mode. If anything fails, pop the old 6TB parity disks back in to resolve the issues. Is this correct?
  9. Ok, figured that was probably the case. I may get risky and just do a full rebuild on both of them to minimize the time I'm putting pressure on the array.
  10. Well, the only reason I wasn't considering doing them one at a time is that parity checks are slow and cause performance issues with my array during the sync, which is what I'm trying to avoid. Question -- if I do it one at a time, is unRAID smart enough to rebuild parity from the existing parity disk, or does it still have to read from the entire array during the sync process?
  11. I've currently got 2x6TB WD Red drives as my parity disks (dual parity). I recently purchased 2x8TB HGST Deskstar drives to replace them (so that I can start adding 8TB drives to my array). I've never had to rebuild dual parity before, let alone replace the disks. I assume it's exactly the same process as for a single disk. In other words, my plan to upgrade them is: (1) preclear the new 8TB disks (already done), (2) stop the array, (3) shut down the server, (4) swap the current parity disks with the new ones, (5) boot up the server, (6) re-assign the parity slots to the new drives, (7) turn on the array and just let it rebuild. Is this the correct procedure for dual parity rebuilds? Thanks!
  12. @johnnie.black hey man -- after adjusting those vm.dirty_ values down to 1 and 2 (see the sysctl sketch after this list), everything has been very stable. I have kicked off the mover several times over the past week and have had zero issues. Thanks again!
  13. Anyone know if this is normal behavior for the cache amount? It was using this much both before and after tweaking my 'dirty' cache settings (see the /proc/meminfo sketch after this list)... just curious.
  14. @johnnie.black well, it crashed again when the mover ran, with the same out-of-memory exception. I'm pushing those values down to 1 and 2.
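
Item 7 mentions seeing a clear out-of-memory sign in the logs. Below is a minimal sketch of scanning the system log for OOM-killer activity; the log path and the match strings are assumptions (a standard Linux syslog location), not something taken from the posts above.

    # Hypothetical helper: scan the system log for out-of-memory events.
    # LOG_PATH is an assumption; adjust it to wherever your syslog lives.
    LOG_PATH = "/var/log/syslog"

    with open(LOG_PATH, errors="replace") as log:
        for line in log:
            if "Out of memory" in line or "oom-killer" in line:
                print(line.rstrip())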
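
The vm.dirty_ values mentioned in items 12 and 14 are the Linux writeback thresholds. A minimal sketch of applying the "1 and 2" settings by writing to /proc/sys follows, assuming the pair is vm.dirty_background_ratio=1 and vm.dirty_ratio=2 (the posts don't spell this out) and that you have root shell access; the Tips & Tweaks plugin mentioned in item 7 appears to expose the same knobs through the GUI.

    # Minimal sketch (assumption: the two values are vm.dirty_background_ratio=1
    # and vm.dirty_ratio=2). Writes to /proc/sys, so it must run as root, and the
    # settings revert on reboot unless reapplied.
    settings = {
        "/proc/sys/vm/dirty_background_ratio": "1",
        "/proc/sys/vm/dirty_ratio": "2",
    }

    for path, value in settings.items():
        with open(path, "w") as f:
            f.write(value)
        with open(path) as f:
            print(path, "->", f.read().strip())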
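
Item 13 asks whether the amount of memory shown as cache is normal. A minimal sketch of reading the relevant counters straight from /proc/meminfo (standard Linux procfs, nothing Unraid-specific); the choice of fields is my own, not from the posts above.

    # Report total RAM, how much is used by the page cache, and how much of it is
    # dirty or currently being written back. All values come from /proc/meminfo.
    fields = {"MemTotal", "Cached", "Dirty", "Writeback"}

    with open("/proc/meminfo") as meminfo:
        for line in meminfo:
            name, value = line.split(":", 1)
            if name in fields:
                print(f"{name}: {value.strip()}")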