MDEnce

Members
  • Posts
    45
  • Joined

  • Last visited

About MDEnce

  • Birthday 04/12/1968

  • Gender
    Male

MDEnce's Achievements

  • Rank
    Rookie (2/14)

  • Reputation
    1

  1. JorgeB - TY. Ran a filesystem check, then finally converted the cache to XFS. Working again!
  2. Had some data drive failures. Did a parity rebuild, but then the cache drive got locked: the mover won't run, and I can't move anything off the cache drive with Samba or mc. A couple of Dockers also say they need to update, but the update won't take until the cache drive is unlocked (a sketch of the relevant console checks is appended after this list). Please help. The last time I had an issue and needed to post a diagnostics file I did it wrong, so this time I ran diagnostics, rebooted (the array won't start automatically), ran diagnostics again, then started the array and ran diagnostics once more. Please disregard the ones that don't matter. unraid-diagnostics-20210321-2109.zip unraid-diagnostics-20210321-2100.zip unraid-diagnostics-20210321-2052.zip
  3. That was my original post. I was in the middle of rebuilding Parity 2 and data disk 14 when Parity 1 went too. Now I have 3 bad disks. My whole point was: how can I get the array to start with the 3 new disks? I'd like to get some parity protection back, even though I realize disk 14 may be lost. I just don't want to make anything worse if I do a "New Config" or something.
  4. I can't start the array with either parity drive or data drive 14 installed. If I remove them, I can start it; it's running unprotected and without data drive 14. I shut down the array, reinstalled the 2 parity drives and data drive 14, and then ran diagnostics when it failed to start with them. unraid-diagnostics-20210111-1930.zip
  5. Will this one work? unraid-diagnostics-20210110-1030.zip
  6. Here you go - TIA unraid-diagnostics-20210106-2239.zip
  7. I'm on Unraid 6.9.0-beta35, Pro key. I had 2 parity disks, a cache disk, and 19 data disks (all XFS except the cache, which is still ReiserFS). Disk 14 went bad, as did Parity 2 at the same time. I shut down the array, unassigned the 2 bad drives, and restarted the array. Then I shut down the array again, reassigned each drive, restarted it, and started a parity rebuild. About a day into the rebuild (40-50%), Parity 1 went bad too. I stopped the rebuild, shut down the array again, unassigned the 2 parity drives, and discovered that disk 14 had disappeared. I restarted without any parity, then immediately shut down the array again and tried to restart it, but I keep getting a "Start error" message at the bottom of the Main page; the Unraid GUI starts to load the array, hangs, and then dumps me back to the array being stopped. I've tried restarting; shutting down and then restarting; formatting the drives; I even ran preclear on them. But every time I try to start the array, it looks like it's starting and then dumps me back to the array being stopped. If I remove all 3 disks, the array will start. Help.
  8. This happens frequently. Not sure whether I should post here (since I'm on 6.9.0-beta25) or elsewhere, but every few days the memory just fills up (99-100%) and everything locks up. I can (eventually) stop the array and reboot, and everything is fine - for a while. Fix Common Problems says: "Your server has run out of memory, and processes (potentially required) are being killed off. You should post your diagnostics and ask for assistance on the unRaid forums." Attached is the diagnostics file. Please help. TIA unraid-diagnostics-20200913-1032.zip
  9. THX! Here's the diagnostics. I'll work on the cache file-system check (sketched after this list) and let you know if I run into a problem. unraid-diagnostics-20190327-0301.zip
  10. Had a failing 2 TB drive and replaced it with an 8 TB drive. Decided to do the "serverlayout" plug-in while I was at it, and removed each of the drive bays to note the serial numbers. Meanwhile I'd grown tired of the lag and 100% CPU utilization of my dual-core Pentium processor, so I'd ordered a Xeon E3-1271; it showed up in that day's mail, so I shut the system down, swapped the CPU, and adjusted the BIOS on the initial startup. On startup unRAID rebuilt the bad (empty) 2 TB drive onto the new 8 TB. Also, Docker apps won't write to the cache, so SAB won't download, Plex won't play, etc. Then I restarted and discovered that 2 different disks now had errors. I did another parity rebuild of those, but when it finished, "Fix Common Problems" gives me the following errors: disk6 (ST31500541AS_6XW03QXX) is disabled - Begin Investigation Here; disk16 (ST5000DM000-1FK178_W4J0BDZF) is disabled - Begin Investigation Here; Unable to write to cache - Drive mounted read-only or completely full - Begin Investigation Here. Attaching the log. Please help. unraid-syslog-20190326-0619.zip
  11. I did NOT reformat. I DID run xfs_repair: first with -v (since I'd deleted the log with the -vL), then with -vL. Both stopped after phase 2. Then I ran with -v again, and it went through to phase 8. Then I shut down the array, and when I restarted, it SAID Disk 7 was back. Not sure I trust it. The wiki on re-doing a drive says I should move the data, reformat as ReiserFS, then reformat back to XFS. I'm moving the data now (the copy is sketched after this list). Should I do the rest, or is that a waste of time? BTW, I really appreciate the assistance.
  12. OK, but now it is telling me "Unmountable disk present: Disk 7 - WDC_WD30EFRX-68EUZN0_WD-WCC4N6ZVE5A3 (sdj)". I'm letting parity rebuild my replaced Disk 4, but when that finishes, am I going to have to format Disk 7 and either lose what's on it or let parity rebuild it too (I didn't think unRAID could rebuild 2 failed disks without a second parity drive), or is there something else I should do?
  13. xfs_repair -v /dev/md7 reports: Phase 1 - find and verify superblock... - block cache size set to 709856 entries Phase 2 - using internal log - zero log... zero_log: head block 463014 tail block 462128 ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this. Not sure what this is telling me (a sketch of the recommended sequence is appended after this list). Do I just stop maintenance mode in the GUI, and then what?
  14. Does that mean I'll need to connect a monitor and keyboard to the unRAID box (I run headless), or is there some SSH command to restart in maintenance mode? (A headless approach is sketched after this list.)
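
A minimal sketch of the console checks behind post 2, assuming the cache is mounted at /mnt/cache (paths and the exact mover invocation vary between Unraid releases):

    df -h /mnt/cache                 # is the cache completely full?
    grep /mnt/cache /proc/mounts     # "ro" in the options means it remounted read-only

    # try starting the mover by hand from the console
    mover                            # on some releases this is invoked as "mover start"

If the cache shows as read-only, that usually points at file-system trouble on the cache device itself rather than at the mover.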
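
For the cache file-system check mentioned in post 9, the cache was still ReiserFS at that point (per post 7). A rough sketch, assuming the cache device is /dev/sdb1 (a placeholder; confirm the real device first) and that the cache is not mounted while checking:

    # read-only check; reports problems without changing anything
    reiserfsck --check /dev/sdb1

    # only if the check tells you to, run the repair it recommends, e.g.
    # reiserfsck --fix-fixable /dev/sdb1
    # reiserfsck --rebuild-tree /dev/sdb1   # last resort; back up first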
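
Post 11 mentions moving data off a disk before reformatting it. A sketch of how that copy is commonly done from the console, assuming disk 7 is being emptied onto disk 8 (disk numbers are placeholders, and the target needs enough free space):

    # copy everything from disk 7 to disk 8, preserving permissions and timestamps
    rsync -av /mnt/disk7/ /mnt/disk8/

    # sanity-check the totals before deleting anything from the source
    du -sh /mnt/disk7 /mnt/disk8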
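
The xfs_repair output quoted in post 13 is asking for the log to be replayed by a normal mount before the repair runs. A sketch of the usual sequence, assuming the affected disk is md7 and the array is in maintenance mode:

    # dry run first: report problems without writing anything
    xfs_repair -nv /dev/md7

    # if it complains about a dirty log, start the array normally once so the
    # mount replays the log, stop it, go back to maintenance mode, then:
    xfs_repair -v /dev/md7

    # only if the disk cannot be mounted at all, zero the log
    # (recent metadata changes may be lost):
    # xfs_repair -vL /dev/md7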
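
On the headless question in post 14: no monitor or keyboard is needed. The Unraid web GUI, where maintenance mode is started from the Main page, is reachable from a browser on any other machine, and the repair itself can be run in an SSH session. A sketch, assuming the server answers to the hostname "tower" (substitute the real hostname or IP) and disk 7 is the one being repaired:

    # from another machine on the LAN
    ssh root@tower

    # with the array started in maintenance mode from the web GUI:
    xfs_repair -v /dev/md7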