Die_piggy

Members

  • Posts: 36
  • Gender: Undisclosed


Die_piggy's Achievements

Noob (1/14) · Reputation: 0

  1. All good. I figure I have a higher chance of the disk failing while building parity, seeing as that would read 100% of the drive. Thanks, I'll give it a try.
  2. So, long story short, I have spent the last week or so moving data off array disks, reformatting some old drives from ReiserFS to XFS, and moving the data back. Because I had 16 array drives and a buttload of data to move around, I disabled my 2 parity drives and used them in Unassigned Devices to store data while doing this. In the process, I noticed that one of my array drives is VERY slow (maybe 10-15MB/s), while every other drive gets copied to at about 183MB/s (with parity disabled). Usually, if I wanted to swap a drive out, I would replace it and let parity rebuild it; however, with parity currently disabled, I wanted to know if I can install a new drive, use New Config to swap it into the array, and mount the old drive using Unassigned Devices to copy the data back (see the rsync sketch after this list). I have tried, and the slow drive is dragging out parity rebuild times, so I thought this might be a more efficient way to go about it. Obviously I have all VMs and Dockers disabled while doing this to ensure nothing else is writing to the array. So, is this something I can do?
  3. Got it. So this should be the procedure: 1. Move data off the emulated drive; 2. Bring the array offline; 3. Replace the parity drive; 4. Shrink the array; 5. Rebuild parity; 6. Add the old parity drive into the array. Sound right?
  4. I should mention I would rather not have the array unavailable for a couple of days if I can avoid it (i.e. I don't want to do the parity swap procedure outlined at https://wiki.unraid.net/The_parity_swap_procedure if possible).
  5. Long story short, I just moved home and found an old 8TB drive which had been in a PC on extremely light duties. I ran a 2-pass check and everything looked good, so I added it to my array. A couple of days later, I came back to hear the dreaded clicking, with the array degraded and the 'new' drive emulated. This was about 4 days ago; I have been away from home since, so I am only getting around to sorting something out now. My parity drive is currently 12TB, and the replacement drive I have ordered is 16TB. The emulated drive only has 50-odd GB of data stored on it, mostly Linux files; worst case, I can redownload what's on it. What's my best course of action? Can I move the data off the emulated disk and shrink the array before replacing my parity drive? Or is there a better idea which I haven't thought of? How would I go about emptying the emulated drive; can unBALANCE achieve this (see the rsync sketch after this list)? Thanks in advance for the help.
  6. Thanks. I just reseated everything and it seems to be doing the read check without errors now.
  7. I currently have the system up, but have disabled any downloading to try and keep the data as static as possible until I hear back from the gurus here.
  8. Last night/this morning my server popped up an error and disabled one of my disks because it couldn't be written to. I took the drive out of service, ran an extended SMART test which suggested the drive was OK, and went to re-enable it in the array. This evening, when I went to add it back to the array, the system went into a "read check" and spat out thousands of errors (over 2 million in a few minutes) on 'disk0'. I have since cancelled the read check, as it seemed to have stalled the system. Am I correct that the most likely problem is a cable/set of cables, or possibly the SAS controller or port multiplier? I have been rejigging the server for a few days now, and am not sure which drive is where, physically or on the controllers (see the smartctl sketch after this list). wotbox-diagnostics-20220303-1927.zip
  9. For anyone randomly searching: enabling 8x/8x bifurcation did the trick, and everything is working perfectly now (see the link-width check sketch after this list).
  10. Honestly, no, I haven't spent much time on it yet. I had a different issue which caused 4 drives to become "Unmountable: Unsupported partition layout", which I am working through right now. From memory, the only thing I saw which could fit was a bifurcation setting.
  11. They are both Crucial P2s. Just for clarity, in case I didn't say it right: the M.2 drives are available and working; it's when I put a 2nd PCIe card in that I fail to POST.
  12. So, feel free to tell me if there is a better place for me to post this question. I have just built myself a new 12600K system, currently housing 14 HDDs (using SATA ports 0-5 and 8 ports on a SilverStone ECS04 SAS controller) and 2 M.2 drives. Everything so far seems to be working well, at least after I updated to the F6 BIOS; before that, none of my M.2 drives were showing in the BIOS. My PCIe lanes are populated like this: 1TB M.2 drive in M2a_CPU (top slot); HBA card in the top 16x slot; M2P_SB (mid slot) unpopulated; 500GB M.2 drive in M2Q_SB (bottom M.2 slot). I tried to populate the 2nd 16x PCIe slot (running at 4x) with an IBM 46m0997 SAS expander, as well as a couple of other random PCIe cards I had lying around, and the system was unable to POST with anything in the slot. Anyone have any ideas on why I can't get a POST? Or, alternatively, would an M.2 to 4x PCIe adaptor work in the spare M.2 slot?
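
For the drive-swap and emulated-drive questions in posts 2 and 5, here is a minimal sketch of copying data off a disk from the Unraid console. The mount points /mnt/disk5 (source) and /mnt/disk9 (destination) are placeholders; substitute your actual disk or Unassigned Devices paths:

```bash
# Dry run first: list what would be copied from the emulated/old disk
# to another array disk, without writing anything.
# /mnt/disk5 and /mnt/disk9 are example mount points only.
rsync -avn /mnt/disk5/ /mnt/disk9/

# Real copy, preserving permissions and timestamps, with progress output
rsync -av --progress /mnt/disk5/ /mnt/disk9/

# Sanity-check the copy before shrinking the array or wiping the source
diff -rq /mnt/disk5/ /mnt/disk9/
```

Copying disk-to-disk (rather than through /mnt/user) keeps it explicit which drive receives the data; unBALANCE is essentially a friendly wrapper around the same kind of rsync run.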
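For the disabled-disk troubleshooting in post 8, a sketch of the SMART checks involved, assuming the suspect drive appears as /dev/sdb (check your own device name first):

```bash
# Kick off an extended (long) self-test; it runs in the drive's background
smartctl -t long /dev/sdb

# Once the test finishes, review the results and key attributes:
# reallocated sectors, pending sectors, and UDMA CRC errors
smartctl -a /dev/sdb

# Drives behind some SAS HBAs or bridges may need the device type forced
smartctl -a -d sat /dev/sdb
```

A climbing UDMA_CRC_Error_Count alongside a passing self-test is the classic signature of a bad cable or backplane connection rather than a failing drive, which fits the cable/controller suspicion in the post.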
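And for the bifurcation issue resolved in post 9, a sketch for confirming from the console that both cards trained at the expected link width once 8x/8x bifurcation is enabled. The PCI address 01:00.0 is an example; take the real one from the listing:

```bash
# List PCIe devices to find the HBA and SAS expander addresses
lspci | grep -i -e sas -e raid

# Show the negotiated link width/speed for one device:
# LnkCap is what the card supports, LnkSta is what it actually trained at
lspci -s 01:00.0 -vv | grep -i -e lnkcap -e lnksta
```

If LnkSta reports x8 on both slots after enabling bifurcation, the lanes are being split as intended.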