c3

Members
  • Posts

    1,175
  • Joined

  • Last visited

Retained

  • Member Title
    nooB

Converted

  • Gender
    Undisclosed

c3's Achievements

Collaborator (7/14)

13 Reputation

  1. Durp... I have been blindly updating the docker from time to time and never knew to do the upgrade from the GUI... Now I get "This version of Nextcloud is not compatible with PHP 7.2. You are currently running 7.2.14." My docker containers (mariadb-nextcloud and nextcloud) are up to date, but I guess that does not help. Without a GUI, how can I proceed?
  2. Most controllers can RAID 1 across n drives, n > 1. The odd blocks go to drive+1, the even blocks to drive-1. They do this because rebuild and performance are better, and any non-adjacent drives can fail. Mirroring stripes is also a performance advantage; striping mirrors is the lowest-performance config. The result is striping optimized for both rebuild and performance (see the block-placement sketch after this list).
  3. You need to decide which factor is your primary concern: data durability (data loss) or data availability. As mentioned, backups dramatically improve data durability. But if you are after data availability, you'll need to handle all the hardware factors: power supplies (as mentioned), memory (ECC and DIMM fail/sparing), cooling, and probably networking (LACP, etc.). The sparing process can be scripted; as a subject matter expert with your vast experience, this will be straightforward. Perl and Python are available in the Nerd Tools (see the SMART-watcher sketch after this list). This may allow you to worry less while working. However, I am not sure it would be "hot", as the array must shut down to reassign the drive. You could implement NetApp's maintenance garage function to test, and then resume or fail the drive.
  4. c3

    Managed Switches

    Hard to beat the used Dell market for home switches, but they can be loud. I ran one with the fans unplugged for 5+ years; it never died, I just upgraded to PoE.

    24-port, $52: https://www.ebay.com/itm/Dell-PowerConnect-5424-24-Port-Gigabit-Ethernet-Switch-4SFP-M023F/372102804664?epid=1808131533&hash=item56a30e34b8:g:WFUAAOSw1JVZ5j6Z

    24-port + PoE + 10G, $150: https://www.ebay.com/itm/DELL-PowerConnect-5524P-POE-24-port-Gigabit-Managed-Layer-3-Switch-2-SFP-ports/152999060231?epid=1430486060&hash=item239f746307:g:xFMAAOSwdIBa4lRQ
  5. That is a great idea! It would be a simple plugin to gather, anonymize, and ship the data (see the drive-stats sketch after this list). I already have a system which tracks millions of drives, so a baby clone of that could be used. It could be fronted with a page showing population statistics. I wonder how many would opt in to share drive information from unRAID servers?
  6. Yeah, I too have a bunch of these and do not have any bad reports. The fans fail, but I have had them a long time, so that is to be expected. I have never had a speed problem. I stopped using them because of cost. If you are putting together lots of drives, the 24/48/60 drive units are cheaper than bolting these in, and the backplane is nicer than lots of cables. If you just need 5-10 drives, these are great.
  7. Naw, I am still thinking the config needs to be changed to remove the old directory names and include the new ones if you want them.
  8. Wow, the same thing you reported on March 1, 2017; you might want to try the same fix this year and see if it works.
  9. Thanks for the XFS recommendation; as a dev I can not really come here and promote it myself. 1% of 8 TB is 80 GB; your 120 GB is 1.5%, which you probably chose to avoid the warning at 99% (see the quick calculation after this list). I :+1: your request for the setting to allow decimals, as I am constantly dealing with people who claim they are 100% out of space yet have hundreds of GB available. When you have room for thousands or millions more files of your average file size, you are not out of space just because you see 100% from df.
  10. It depends... If the data is written and never deleted, changed, or grown, you can fill a filesystem very full. However, once you begin making changes (called making holes, or fragmenting), things can become very ugly. Depending on the filesystem, you may notice it as soon as 80% full, but certainly in the 90+% range the filesystem begins doing a lot of extra work if files are being changed. Also, almost all filesystems keep working to improve this. That said, if you want to put in 8 TB of cold storage (what about atime, etc.?), you can go very full, 99%, just be sure it is truly cold. I would even single-thread the copy to avoid fragments (see the sequential-copy sketch after this list), but that is really just a read-speed thing.
  11. Similar experience with 5TB years ago. Not sure unRAID will ever use the f2fs, but once the kernel upgrades (4.10 and beyond), using the dm-zoned device will mitigate this further.
  12. For drives 5 and 6 (or 3 and 4): at the end you will need to change the config and rebuild parity.
  13. Recently, Lime Tech has done a great job of keeping drivers updated. And you can always ask to get a driver added/updated if you find a need.
  14. Yes, this limited free space will cause issues. My advice: buy an additional drive so the archive drives are not forced to garbage collect so much.
  15. I am sorry for the delay; I was busy working on things like data checksum on the XFS roadmap. Everyone there understood why, it was just the timing and who would do the work.

    I did take time to have discussions with several disk drive manufacturers about the ECC performance, which remains RS. Two of them indicated I might get my hands on the details, under NDA, of a non-current device (like from 2TB). We spent a lot of time talking about the impact adding read heads (TDMR) will have on this whole process. There was a pretty good joke about how going from helium to vacuum drives would help stabilize the head, but then how do you fly in vacuum? Maglev. I guess you had to be there.

    Since I was told RS is still being used, Lemma 4 ends with (emphasis is mine): "If that occurs, then Lemma 4 implies that there must have been more than e errors. We cannot correct them, but at least we have detected them. Not every combination of more than e errors can be detected, of course—some will simply result in incorrect decoding". This is the foundation of UDE (paywalled), which drives the need for filesystem-level checksums; the standard Reed-Solomon bounds behind it are sketched after this list. UDE is a larger set of conditions, especially when talking spinning rust. You can see where TDMR will help.

    To improve your chances of avoiding anything that might have my ideas, work, or decisions in it: use Microsoft products, but only the newer filesystems.

    In other news, quietly slipped into unRAID 6.3.2 was f2fs, which is very exciting (or worrisome), especially post 4.10, probably the next roll of unRAID. f2fs now has SMR support (it took 2 years). But the stuff I work on takes a long time to do and even longer to get rid of. SMR is just the beginning; the whole concept of decoupling the write head from the read head/track width was fundamental to driving density. Other filesystems will follow, and/or use dm-zoned. Doing so will have benefits for flash write durability. Best of luck avoiding it. But things like filesystem checksum will be needed outside the spinning-rust world, and the OP is probably grateful.
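
Block-placement sketch (for item 2): a minimal illustration of the rotating-mirror layout described there, assuming a simple RAID 1E-style interleave. The exact placement is controller-specific; the function name and the n = 5 example are just for illustration.

```python
# Illustrative sketch of a rotating-mirror ("RAID 1E"-style) layout across n drives.
# Placement is controller-specific; this only shows why any two NON-adjacent
# drives can fail safely: a block and its copy always land on adjacent drives.

def placement(block: int, n_drives: int):
    """Return (primary_drive, mirror_drive) for a logical block."""
    primary = block % n_drives
    # Odd blocks mirror to the next drive, even blocks to the previous one.
    mirror = (primary + 1) % n_drives if block % 2 else (primary - 1) % n_drives
    return primary, mirror

if __name__ == "__main__":
    n = 5
    for blk in range(10):
        p, m = placement(blk, n)
        print(f"block {blk:2d}: primary=drive{p}  mirror=drive{m}")
```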
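
SMART-watcher sketch (for item 3): a rough example of how the sparing decision could be scripted with the Python available in the Nerd Tools. It assumes smartctl (smartmontools) is installed; the device list and threshold are placeholders, and the actual reassignment still requires stopping the array, as noted in the post.

```python
#!/usr/bin/env python3
# Rough sketch of a drive-health check that could feed a sparing script.
# The device list and threshold below are placeholders.

import subprocess

DEVICES = ["/dev/sdb", "/dev/sdc"]   # placeholder device list
REALLOC_THRESHOLD = 10               # placeholder threshold

def reallocated_sectors(dev: str) -> int:
    """Return the raw Reallocated_Sector_Ct (SMART attribute 5) for a device."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True, check=False).stdout
    for line in out.splitlines():
        if "Reallocated_Sector_Ct" in line:
            return int(line.split()[-1])   # RAW_VALUE is the last column
    return 0

for dev in DEVICES:
    count = reallocated_sectors(dev)
    if count > REALLOC_THRESHOLD:
        print(f"{dev}: {count} reallocated sectors -- candidate for sparing")
    else:
        print(f"{dev}: looks healthy ({count} reallocated sectors)")
```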
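
Drive-stats sketch (for item 5): one way the gather/anonymize/ship plugin idea could look. It assumes smartctl 7.0+ for JSON output (-j); the collector URL is hypothetical and the field selection is just an example.

```python
#!/usr/bin/env python3
# Sketch of the gather/anonymize/ship idea: read a few SMART fields, hash the
# serial number so the drive cannot be identified, and POST the result.

import hashlib
import json
import subprocess
import urllib.request

ENDPOINT = "https://example.invalid/drive-stats"   # hypothetical collector

def collect(dev: str) -> dict:
    """Gather an anonymized record for one drive."""
    raw = subprocess.run(["smartctl", "-i", "-A", "-j", dev],
                         capture_output=True, text=True, check=False).stdout
    info = json.loads(raw)
    return {
        "serial_hash": hashlib.sha256(
            info.get("serial_number", "").encode()).hexdigest(),
        "model": info.get("model_name"),
        "power_on_hours": info.get("power_on_time", {}).get("hours"),
        "temperature_c": info.get("temperature", {}).get("current"),
    }

def ship(record: dict) -> None:
    """POST one record to the (hypothetical) collector -- strictly opt-in."""
    req = urllib.request.Request(ENDPOINT,
                                 data=json.dumps(record).encode(),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

if __name__ == "__main__":
    print(collect("/dev/sdb"))   # inspect locally before deciding to ship()
```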
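
Quick calculation (for item 9): the minimum-free-space numbers from that post, worked out in decimal units.

```python
# Minimum free space as a percentage of an 8 TB drive (decimal units, as in the post).
drive_gb = 8 * 1000          # 8 TB = 8000 GB
min_free_gb = 120
print(f"1% of 8 TB  = {drive_gb * 0.01:.0f} GB")                          # 80 GB
print(f"120 GB free = {min_free_gb / drive_gb * 100:.1f}% of the drive")  # 1.5%
```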
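
Sequential-copy sketch (for item 10): a minimal example of "single-threading the copy" when filling cold storage, so each file is written on its own and the filesystem can lay it out contiguously. The paths are placeholders; a single rsync process achieves the same thing.

```python
#!/usr/bin/env python3
# Files are copied strictly one at a time, so writes never interleave.

import shutil
from pathlib import Path

SRC = Path("/mnt/user/source")        # placeholder source share
DST = Path("/mnt/disks/archive8tb")   # placeholder cold-storage mount

for src_file in sorted(SRC.rglob("*")):
    if src_file.is_file():
        target = DST / src_file.relative_to(SRC)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src_file, target)   # one file at a time, no concurrency
```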
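
Reed-Solomon bounds (for item 15): the textbook numbers behind the quoted Lemma 4, stated here for reference and not taken from the paywalled source.

```latex
% An RS(n, k) code is MDS, so its minimum distance is
\[ d_{\min} = n - k + 1 \]
% and a bounded-distance decoder corrects up to
\[ e = \left\lfloor \frac{n - k}{2} \right\rfloor \]
% symbol errors while detecting up to n - k. Some patterns with more errors
% than that decode to the wrong codeword: the undetected/miscorrected case
% (UDE) that filesystem-level checksums are meant to catch.
```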