dave_m

Members
  • Posts: 99
  • Gender: Undisclosed

dave_m's Achievements

Apprentice (3/14)

Reputation: 4

  1. I started getting these errors after upgrading to Windows 11, or at least that's when I noticed them. Changing the SMB settings to enable multi channel and disable enhanced macOS interoperability seems to have resolved it for me. (A rough sketch of what those two toggles correspond to on the Samba side is included after this list.)
  2. Now that I'm finally upgrading all my drives after seeing a catastrophic server failure at work (5 out of 8 drives in two servers), here are the first drives removed from mine. They had data and were being used until this last week:
     Samsung HD103SJ (8/2010)
     Samsung HD103SI (undated)
     2x Seagate Barracuda Green (8/2011)
     2x Hitachi HDS7230 (8/2011 and refurbished 2/2011)
     Samsung HD204UI (3/2011)
     3x WD15EARX (recertified, from 2012)
     3x WD20EZRX (recertified, from 2013)
     There are a bunch of other random (and some recertified) drives to go after the preclearing finishes. I'm impressed with how long it all lasted; most were the cheapest green drives I could find, and they survived being taken out and put in storage for half a year during a move in 2016 as well. (After adding the drive dates, I realized they were all older than I thought, as I expected to see more dates from 2013/2014.)
  3. I upgraded from 6.6.7 to 6.7 at the same time I made some other changes, and the system would reliably stop responding within 1 to 8 hours. One of the other changes was replacing a failing drive and rebuilding the array, so I eventually backed out every other change but that one. Each time I brought up the server, it would try to rebuild the replaced drive but stop responding before it completed. It wasn't crashing, as the lights were still on, but there was no disk activity. I tried to rebuild the drive at least 8 times on 6.7, but it never completed. I finally rolled back to 6.6.7 last night; the rebuild completed and the server is running normally. There were never any errors reported on 6.7, and one of the rebuilds was with all plugins disabled. It passed Memtest multiple times, and the VM and docker apps were not running during the rebuild. Here are the build details; the hardware isn't especially new:
     M/B: ASRock 970 Extreme4
     CPU: AMD FX-8320E
     RAM: 16GB DDR3 1600
     Case: Norco 4224
     Controllers: LSI SAS1068E & SAS2008
     Drives: 16 data + dual parity, cache + 2 outside the array for docker/VM
     Apps: MythTV VM and Plex docker
     NICs: onboard Realtek RTL8111E + PCIe BCM5721 (bonded), and PCI Intel PRO/1000 (VM)
     I waited before rolling back because I had initially added another SAS1068E that might be bad and had accidentally reset the BIOS settings, but the system hangs continued after correcting both of those.
  4. I am running 6.3.5 with dual parity and have an empty disk that is assigned to the array and already formatted with RFS... what's the easiest way to switch it to XFS? The disk was being used, but it was trivial enough to move the files off of it.
  5. I just wanted to post a success story about how well Unraid worked for me again. I've been an Unraid user since 2011, and rebuilt in 2013 with the current Norco RPC-4224 case. I moved in September of 2016, taking the drives out of the server and placing it all in a storage unit for 8 months. Then it was moved to the new house but not reassembled until yesterday. There were a few minor hiccups with some cheap (but not needed) PCIe controllers, but I was able to get it up and running on 6.1.9 without any real drama. Today it completed the parity check with no errors, I'm listening to music on Plex again, and I've upgraded to 6.3.5 and am currently adding the second parity disk, all without any drama. Still a happy customer :->
  6. I found the updated Global SMART Settings section, but unchecking the 188 Command time-out box and then clicking Apply doesn't actually save the new setting. When the page reloads, the box is checked again. The per-disk setting has the same behavior.
  7. I see the same behavior as well, regardless of which browser I use. However, it might be related to the array disks being spun down. If the majority of the disks in my system are spun down, it's sometimes impossible to get the preclear plugin popup to appear. If the disks are spun up, then it's usually only one or two clicks to get the popup.
  8. Thanks for the update, sounds like Maintenance Mode is the way to go.
  9. That all makes sense, but I'm wondering whether the system being in Maintenance Mode is a suggestion or a requirement. With two drives to clear, it would be nice to keep it running, and a cache drive will keep it in basically read-only mode if the mover is temporarily disabled...
  10. Stock Unraid, no plugins at all. The new motherboard is an ASRock 970 Extreme4 with 16GB of memory. Both use a Realtek onboard NIC, so it could be that driver as well. I should have a PCI NIC around somewhere; maybe I'll test that out.
  11. I'm not sure if anyone else has run into this issue, but I swapped out the motherboard and memory and the 5.0.x series still has slower write speeds. The 6.0 beta2 and beta5a builds have normal write speeds.
  12. It will work on 6.0 if you comment out the "ulimit -v 5000" line. Use this suggestion at your own risk; there's probably a better solution than completely commenting the line out. (A sketch of the edit is included after this list.)
  13. On the Shares page, the reported free space includes the cache drive if that share has files on the cache drive. So in my case, several shares appear to have 2TB more free space than they really do.
  14. And the speed is back to normal in 6.0 beta 1. Not sure what to do here. Should I assume 5.0.x is just a bad match for my hardware and use 5.0RC10 or 6.0? syslog-6-beta1.zip
  15. I tried 5.0.4 with the mem=4095M option, and no change. I've also tried swapping out the cache drive for a different one and switching the controller it was attached to, and no change after either of those. (The syslinux.cfg sketch after this list shows where a boot option like that goes.)
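
A note on item 1: my understanding is that Unraid's SMB settings page just writes the corresponding Samba options into its smb.conf fragments, so the two toggles roughly amount to the snippet below. The mapping is an assumption on my part (the exact lines Unraid generates may differ); this is only a sketch of the Samba side of those settings.

    [global]
        # "Enable SMB Multi Channel" should boil down to this Samba option:
        server multi channel support = yes

        # "Enhanced macOS interoperability" normally means loading the vfs_fruit
        # module; with that toggle off, a line like the following would not be
        # present (shown commented out here):
        # vfs objects = catia fruit streams_xattr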
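For item 12, the change is just prefixing that one line with a comment character in whatever script the thread is about. A minimal sketch in shell, with a hypothetical path (adjust it to wherever your copy of the script actually lives):

    # Hypothetical location of the script; point this at the real file.
    SCRIPT=/boot/custom/the_script.sh

    # "ulimit -v 5000" caps the shell's virtual memory at 5000 KiB, and commenting
    # it out is what makes the script behave on 6.0. This assumes the line is not
    # indented in the file.
    sed -i 's/^ulimit -v 5000/# ulimit -v 5000/' "$SCRIPT"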
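And on item 15: boot parameters such as mem=4095M are passed to Unraid by adding them to the append line of syslinux.cfg on the flash drive. The excerpt below only sketches where the option sits; the label name and the rest of the append line vary between installs and versions, so don't treat the surrounding lines as exact.

    label unRAID OS
      menu default
      kernel /bzimage
      append mem=4095M initrd=/bzroot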