papnikol

Members
  • Posts: 341
  • Joined
  • Last visited

Community Answers

  1. First, thank you very much for your willingness to help. I also learned about the concept of rebuilding a drive onto itself; I would usually do a New Config and accept the array, so I hope to exploit this trick, hopefully, for one last time. So, I removed drive 8 and its emulated counterpart seems to be working. I also mounted drive 8 as unassigned and it seems to work, but, even though I changed cables, the SMART test still gets stuck at 90%, while it completes for the other drives (a sketch for monitoring the self-test follows below). This is a bit strange, so I am thinking of using another drive for the rebuild. Does that make sense? Also, would it hurt if I simultaneously allowed the new 2nd parity to rebuild? That would let me be protected from another failure sooner...
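     For anyone wanting to reproduce the check: a minimal sketch of starting and polling the short self-test via smartmontools, wrapped in Python. The device node is hypothetical (adjust /dev/sdh to the disk in question) and it must run as root:

     ```python
     import subprocess
     import time

     DEVICE = "/dev/sdh"  # hypothetical node for disk 8 -- adjust to your system

     # Start a short SMART self-test (requires smartmontools, run as root).
     subprocess.run(["smartctl", "-t", "short", DEVICE], check=True)

     # Poll until the test finishes; a healthy drive completes a short test
     # in a couple of minutes, so a percentage that never advances is a bad sign.
     while True:
         out = subprocess.run(["smartctl", "-c", DEVICE],
                              capture_output=True, text=True).stdout
         for line in out.splitlines():
             if "Self-test execution status" in line or "remaining" in line:
                 print(line.strip())
         if "in progress" not in out.lower():
             break
         time.sleep(30)
     ```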
  2. Thanks a lot. Disks 1, 2, 3 (along with 4) share both the same SATA splitter and miniSAS, so I will have to investigate. Just a few clarifications: What do you mean by that? Since disk 8 is red-balled, wouldn't the only way to mount it be to take it out of the array? Should I take it out of the array and try mounting it with UD? (BTW, I changed the SATA cable and it still gets stuck at 90% of the short SMART test.) What exactly do you mean by "rebuild on top"? Accept disk 8 as good and rebuild the new parity drive?
  3. Hi everyone, I posted this question to r/unraid but I am posting it here too because it is somewhat urgent. I have an array with dual parity. A few days ago I decided to replace both parity drives with larger ones (16TB). The first parity drive replacement went smoothly. The second parity sync got stuck at some point and I had to reboot. When I came back, one data drive was red-balled. I ran a short SMART test but it gets stuck at 90% (I tried other drives and they completed their short SMART tests), which indicates that the drive might be problematic. So, my situation is: one of the 2 parity drives is un-synced and one data drive might need to be replaced. I also have a feeling that another disk might be on the verge of failing. What is the optimal solution?
     1. Replace the red-balled disk with my previous parity drive (which was obviously removed from the array) and have the array simultaneously rebuild the new parity drive and the failed drive.
     2. First rebuild the new parity drive and then the failed data drive (so, 2 runs).
     3. Remove the new parity drive, accept the array as correct, and check if the drive works. Then, if it works, add the second parity drive again for a parity sync.
     4. Some other solution I have not thought of...
     Any input is welcome. towerp-diagnostics-20240314-1206.zip
  4. No, you were clear, I get it now. So, a viable solution would be to use a splitter with only 2 SATA connectors? Again, do you think I could use 1 splitter on every connector of the PSU cable? Like this (where every "[" is a 1->2 splitter):
     PSU
      |----[
      |----[
      |----[
      |----[
  5. I see. Yet, my Corsair has 4 SATA connectors on one cable. Maybe it is specifically made...
  6. And they are easier to find, but, generally, from all the comments in various forums, the common wisdom is that they should be avoided (although that is what I have been using up to now).
  7. Interesting, can you please point me to the source?
  8. There are a few but they are relatively expensive: https://de.pcpartpicker.com/products/power-supply/#A=550000000000,2050000000000&D=20&sort=price&page=1&E=5,14 So, going back to the original question(s):
  9. No, I am talking about this: Thanks for the info, I am aware of that, but even many good-quality 750W PSUs usually do not have enough connectors to accommodate 20 drives. Whenever I build a PC, the last thing I use to cut costs is the PSU, but, obviously, I am trying to avoid buying something that I don't need, hence the power extenders (I also happen to have them at hand).
  10. Hi everyone, I am looking for a PSU that will power many HDDs/SSDs (up to 20). I can buy a PSU with 6 cables, with 4x and 2x SATA connectors, but those are higher-wattage PSUs that are unnecessarily expensive for my purpose. Trying to avoid IDE -> 4x SATA cables, I wanted to try SATA splitters for the first time. So, I have 2 questions (a rough power-budget sketch follows below):
      1. Is it safe to connect a 4x SATA splitter to a 4x SATA cable (thus having 7 drives per cable in total)?
      2. Is it better to connect the splitter at the 1st or the last connector of the SATA cable, or is it of no importance?
      Thanks in advance for your answers.
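      As a sanity check on question 1, a rough power-budget estimate for one fully loaded cable. All per-drive figures are assumptions about typical 3.5" HDDs, not measurements -- check the drives' data sheets:

      ```python
      # Rough load on one SATA power cable feeding 7 drives
      # (3 native connectors + a 4x splitter on the 4th).
      IDLE_W_PER_DRIVE = 13.0   # assumed: ~5 W on the 5 V rail + ~8 W on 12 V
      SPINUP_A_12V = 2.0        # assumed: typical 12 V surge during spin-up

      drives = 7
      steady_w = drives * IDLE_W_PER_DRIVE
      surge_a = drives * SPINUP_A_12V   # worst case: all drives spin up at once

      print(f"steady-state load on the cable: {steady_w:.0f} W")
      print(f"12 V spin-up surge: {surge_a:.0f} A (~{surge_a * 12:.0f} W)")
      ```

      Under these assumptions the cable carries ~91 W steady and a ~14 A surge on the single 12 V wire feeding it, which is why daisy-chaining that many drives per cable is usually discouraged; staggered spin-up reduces the surge but not the steady-state load.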
  11. So I tried various proposed things (and was away for a bit, sorry for taking so many days to get back to you after your prompt inputs). What I found out is that disabling Docker fixed the problem. Additionally, I noticed in the dashboard that Docker was using quite a lot of memory when enabled: 67% of my 6GB, which is almost 4GB. I noticed that on my other Unraid machine Docker was also using ~4GB. I do not understand why that would be, since all my containers were stopped. Maybe the memory is reserved by Docker?
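      One way to check whether the memory is held by the Docker daemon itself rather than by any container is to sum the resident memory of the Docker processes. A minimal sketch, assuming a standard Linux /proc layout and the usual dockerd/containerd process names:

      ```python
      import os

      # Sum resident memory (VmRSS) of Docker-related processes.
      NAMES = ("dockerd", "containerd")  # startswith() also matches containerd-shim*

      total_kb = 0
      for pid in filter(str.isdigit, os.listdir("/proc")):
          try:
              with open(f"/proc/{pid}/comm") as f:
                  comm = f.read().strip()
              if not comm.startswith(NAMES):
                  continue
              with open(f"/proc/{pid}/status") as f:
                  for line in f:
                      if line.startswith("VmRSS:"):
                          total_kb += int(line.split()[1])  # value is in kB
          except FileNotFoundError:
              pass  # process exited while scanning

      print(f"Docker-related RSS: {total_kb / 1024:.0f} MiB")
      ```

      If this number is small while the dashboard still shows ~4GB, the dashboard figure likely includes page cache for the docker.img loopback rather than memory the daemon actually reserves.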
  12. Thanks a lot for the input. I will boot normally, and when the problem occurs again (I am guessing soon), I will post the diagnostics. I might try that too, afterwards. A plugin might be the problem, but I seriously doubt it has anything to do with Docker, since I am not automatically running any of the installed plugins.
  13. Hi everyone, I am using Unraid v6.11.3. Suddenly (without my having made any significant changes), the user shares have started disappearing, and they do not reappear until I reboot. Initially it happened once in a while, but it keeps happening more and more often (3 times today). Here is what I tried (having also searched the forum for suggestions):
      1. Tried stopping and restarting the array - didn't work
      2. Changed Settings > Global Share Settings > Tunable (support Hard Links) to "No" - didn't work
      3. I have some Docker containers, but they are not running, so I don't think they could be the problem
      4. Updated all apps - didn't work
      5. Checked the XFS filesystem on all disks
      EDIT: I just noticed an error in the logs: Could this be the problem? My server has 6GB and I am running cache_dirs (folder caching). I also have a LOT of files on the server; I always had, but they have been slowly increasing. Could I have reached a limit where the memory is somehow filled? (A quick memory check is sketched below.) Any help would be very welcome, as my server has been rendered unusable.
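      A minimal sketch of the memory check mentioned above: it reads /proc/meminfo and flags low available memory, since cache_dirs keeps directory entries in RAM and a very large file count on a 6GB box could plausibly exhaust it. The 5% threshold is an arbitrary assumption:

      ```python
      # Flag memory pressure that could starve shfs (the Unraid user-share
      # process) or cache_dirs. Reads the standard Linux /proc/meminfo.
      def meminfo_kb():
          info = {}
          with open("/proc/meminfo") as f:
              for line in f:
                  key, value = line.split(":", 1)
                  info[key] = int(value.split()[0])  # values are in kB
          return info

      m = meminfo_kb()
      total, avail = m["MemTotal"], m["MemAvailable"]
      print(f"available: {avail / 1024:.0f} MiB of {total / 1024:.0f} MiB "
            f"({100 * avail / total:.0f}%)")
      if avail / total < 0.05:  # assumed threshold
          print("warning: under 5% available -- possible memory exhaustion")
      ```

      Running this periodically (e.g., from cron) and checking the syslog for out-of-memory messages when shares disappear would confirm or rule out the memory theory.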