wildwolf

Members
  • Posts: 31
  • Joined
  • Last visited

wildwolf's Achievements

Noob (1/14)

1 Reputation

  1. Currently running SickChill on a Synology, but it presents a message that it's no longer supported. I have an unRAID server, and found this thread. Is this a similar SickChill? The same SickChill? Running it in a Docker container on unRAID seems desirable to me; I think I could then get rid of my Synology. Is there a good howto/video for installing SickChill on unRAID Docker from scratch? I'm assuming I can't easily port anything over and would just need to start from scratch. (A rough sketch of the container setup is after this list.)
  2. Okay, a little more searching and I think I've learned this is known/expected (unfortunately) behavior: it's the unRAID server that can't keep up with Plex streaming when file transfers occur.
  3. Hi, maybe I just have higher expectations than reality, so I need to ask how this should work or if I potentially have something wrong. I have a pretty decent system, albeit a few years old (64GB RAM, i7-6500K). I have a 2-port 10G card connected to a UniFi 48-port Enterprise PoE switch, managed by a UniFi Dream Machine Pro: one line wired to the 10G switch for my LAN network (192.168.1.1/24), and the 2nd line wired to the switch for my WiFi network (192.168.30.1/24).
     If I transfer a large amount of data (currently, 1TB) via the 192.168.1.1/24 network, it does the typical SMB thing - runs at about 38-50 MB/sec (unRAID is encrypted, so I assume this slows things down), then eventually runs out of buffer(?), slows down, and catches back up for a few more minutes. The Windows file-transfer dialog tells me about 4 hours to finish the transfer (the rough math is in the sketch after this list). In the meantime, my daughter is trying to watch some kids' shows on Plex. Almost immediately, it buffers like crazy and can't keep up - in short, it is unplayable.
     I was hoping that by upgrading both lines to the switch to 10G, I would be able to transfer files at max speed on one line (wired) and Plex would still be able to feed the WiFi clients a decent amount of speed on the 2nd 10G line. Is it because they're still on the same switch, even though they're on different lines and on different VLANs, and the transfers over that switch are still the limiting factor? Or is the limiting factor the CPU and the effort of encrypting 1TB of data to store on a share, not network throughput?
  4. Found my problem. Didn't complete step 2 correctly. Disks are reordered, parity 2 is currently resyncing. Thanks, everybody, for the help & patience.
  5. Something isn't right. After step 5, Start is not an option and the array says "Stopped. Invalid expansion."
  6. Okay, hopefully the last question, to make sure I have the reorder work correct. Here is the current layout (see pic). I am trying to remove the gap in numbering in the GUI now. From my understanding, these are my steps:
     1. Stop the array.
     2. Tools --> New Config.
     3. Click "Preserve current configuration", click "All", then click "Close".
     4. Go to the Main page.
     5. Un-assign 3AYC from disk 8; assign it to disk 5.
     6. Un-assign 5PNC from disk 9; assign it to disk 6.
     7. Un-assign Parity 2.
     8. Check the box "Parity is valid".
     9. Start the array.
     10. After it starts up, shut down again and reassign the parity 2 drive to the parity 2 slot.
     11. Start the array - and the parity 2 rebuild will commence.
     Is that accurate?
  7. They're already labeled by the manufacturer (WD drives are awesome for this!). Just wondering if there's any protection or performance benefit to separating the parity drives one per cable; if not, I'll just slot them to match the new config - labeled top to bottom: 2 parity, then 6 data.
  8. Thanks for all the help, guys. Card removed, down to 1 card and 2 breakout cables, and all is looking a little better to me.
     Now, an off-the-wall question. My card (9211) has 2 connected cables with 2 drives connected each. Is there any benefit to having 1 parity drive on each connected breakout cable, either for performance or for parity/data safety? Or is there no real-world difference, and I should just slap the drives wherever it's most convenient/easiest to maintain and to know which drive is which? When I originally built the system, I put 1 parity drive per cable, thinking it might be safer. However, I now realize that may not be true and wanted to ask if it matters.
     The system has 8 drives mounted vertically (Fractal Define R5); it might be easier to know which drive is which if I put Parity at the top, Parity 2 below that, Data 1 3rd, Data 2 4th, and on down the line. But is there any real detriment to doing that? Or would it be better to still separate the parity drives across the card's cable bundles?
  9. I think I finally understand. I can remove my other card and rearrange the physical disks as I want, but my setup will still remain P1 P2 D1 D2 D3 D4 D8 D9 until I do New Config, which will then require rebuilding a new parity 2 to 'close the gap' in the GUI to P1 P2 D1 D2 D3 D4 D5 D6. Is that correct?
  10. This is where I'm confused, then. I want to accomplish both: I want to physically rearrange the 8 disks that remain, by swapping cables and such around so they are all on 1 controller card instead of 2, and I also hope to "close the gap" in the GUI layout. Or will physically rearranging things (eliminating the 3 empty slots by removing all the extra slots from that 2nd card when it's pulled and consolidating down to 1 card) automatically cause the drives to show up in all the right places, as long as I have parity & parity 2 identified as the correct 2 drives? Are you/others saying that I/anybody could (in my view below) move disk 9 to the cable connector that is disk 5 (a different controller card), and it'd still boot up with all the data intact, without having to run New Config or do anything else?
  11. Yes, it was checked, so I assume it was correcting parity errors, and not some other type of errors.
  12. Yes, I'm pretty sure, but have no way to know? So, as long as I rearrange the SAS cards and put the drives back in the same order (1-2 parity in serial-number order, 3-8 in serial-number order) in the new slots 1-8, I should be good? Added a screenshot of the parity checks - this is the only time I've ever received errors on a parity check, so my assumption is they are 'correcting parity' errors.
  13. Correct, and I want to reduce from using 2 SAS cards down to 1 SAS card. Also, when I 'cleared' the last two 5TB drives, the script just stopped working - on both instances (I ran them separately). I went ahead and removed the drives and changed the configuration each time, because I had already moved the data off each drive using unBALANCE. I assumed the clear had failed, and that my parity, even though it said valid, wasn't valid. After I got the 2nd one out, I ran another parity check to be sure. It just finished:
      Last check completed on Fri 19 Nov 2021 11:39:53 AM EST (today)
      Finding 760363348 errors
      Duration: 1 day, 12 hours, 51 minutes, 44 seconds. Average speed: 105.5 MB/sec
      I assume that's because I was right: I removed the drives, and parity didn't finish/update along the way when I tried to clear them. However, I still appear to have the same amount of used space in the array, all my larger drives are in place, and the parity check just finished, so it should all be good now. I just need to consolidate the cards/cables down to the 8 ports of a single SAS card. (The duration/speed numbers are worked through after this list.)
  14. Thanks again, Jonathan & trurl. This has indeed sped up the process. I can't seem to find the "yes I'm sure" checkbox anywhere, but so far everything seems to be working smoothly. Sorry - that's in the "Replacing a Data Drive" unRAID wiki: https://wiki.unraid.net/Replacing_a_Data_Drive. I have noticed a small discrepancy in some of the wiki text (very little, a key word or two), but made my way through it. I am almost done, and I'm sure someone more experienced might have done all this faster. I do have more questions, though. Currently, I'm sitting here:
      P1 14TB
      P2 14TB
      D1 14TB
      D2 14TB
      D3 8TB
      D4 8TB
      D5 (removed)
      D6 (removed)
      D7 (removed)
      D8 8TB
      D9 8TB
      I have 2x SAS 9211 cards. Card 1 has 2x SFF-8087-to-4xSATA cables; I can't look right now, but I believe the 1st cable has 4 drives (1 of which is parity) and the 2nd cable has 2 drives. Card 2 has 1x SFF-8087-to-4xSATA cable; its only cable has 1 drive (I think this is the other parity drive). I know for a fact that I can trace my cables to drives, look at my serial numbers, and determine which 2 drives are the 2x 14TB (and which is #1 and which is #2).
      I'd like to remove card 2 from the system and hook up all 8 drives to card #1. I think (someone tell me if I'm wrong?) it would be smart to have:
      1st cable - probably 1st drive parity, other 3 data drives
      2nd cable - probably 1st drive parity, other 3 data drives
      Is there a safe/easy way to do this? I understand I'll have to rebuild parity drive #2, per this thread: https://forums.unraid.net/topic/54221-reorder-disks/. Is it as simple as disconnecting everything, removing my 2nd card, attaching the drives as I've indicated to the 8 ports that are left (in the order I choose), identifying both parity drives and the other 6 as data drives, and starting the array?
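
A note on post 1: on unRAID, SickChill is normally installed through the Community Applications plugin rather than by hand. As a minimal sketch of what the resulting container boils down to - assuming the lscr.io/linuxserver/sickchill image, the default web-UI port 8081, and typical unRAID share paths, all of which should be adjusted to the actual setup - the equivalent in Python via the docker-py library would be:

    # Minimal sketch using docker-py (pip install docker).
    # Assumptions: lscr.io/linuxserver/sickchill image, web UI on 8081,
    # and typical unRAID host paths - adjust to your own shares.
    import docker

    client = docker.from_env()
    container = client.containers.run(
        "lscr.io/linuxserver/sickchill:latest",
        name="sickchill",
        detach=True,
        ports={"8081/tcp": 8081},  # web UI
        volumes={
            "/mnt/user/appdata/sickchill": {"bind": "/config", "mode": "rw"},
            "/mnt/user/tv": {"bind": "/tv", "mode": "rw"},
            "/mnt/user/downloads": {"bind": "/downloads", "mode": "rw"},
        },
        environment={"PUID": "99", "PGID": "100", "TZ": "America/New_York"},
        restart_policy={"Name": "unless-stopped"},
    )
    print(container.short_id)

Whether a Synology SickChill config can simply be copied into /config is untested here; starting from scratch, as the post assumes, is the safe path.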
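On the numbers in post 3: the quoted transfer estimate is in the right ballpark for the observed SMB speeds, and those speeds are well below what even gigabit can carry, which supports the CPU/array explanation over a network bottleneck. A small Python check using only the figures quoted in the post:

    # Transfer-time check for the figures quoted in post 3.
    size_mb = 1.0 * 1_000_000          # 1 TB in MB (decimal units)

    for speed_mb_s in (38, 50):        # observed SMB range
        hours = size_mb / speed_mb_s / 3600
        print(f"{speed_mb_s} MB/s -> {hours:.1f} h")

    # Prints ~7.3 h at 38 MB/s and ~5.6 h at 50 MB/s; Windows' ~4 h estimate
    # implies the faster bursts continuing. Either way, 38-50 MB/s is below
    # even gigabit saturation (~110 MB/s), so the 10G links and the switch
    # are unlikely to be the limiting factor.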
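And the numbers in post 13: the reported duration and average speed multiply out to roughly the size of the 14TB parity drives, a reasonable sanity check that the correcting parity check read the full parity size:

    # Sanity-check the parity-check report quoted in post 13.
    duration_s = 1 * 86_400 + 12 * 3_600 + 51 * 60 + 44  # 1 day, 12:51:44
    avg_speed_mb_s = 105.5

    total_mb = duration_s * avg_speed_mb_s
    print(f"{total_mb / 1_000_000:.1f} TB scanned")       # ~14.0 TB

    # ~14 TB matches the largest (parity) drives, and the large error count
    # is consistent with the incomplete 'clear' runs described in the post.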