cj0r

Members

  • Posts: 291
  • Joined: Converted
  • Gender: Undisclosed

cj0r's Achievements

  • Contributor (5/14)
  • Reputation: 0

Recent Posts

  1. Boom nailed it. Close the thread! Next song!!!
  2. Frankly, that user should be banned from the community. That was such an unprofessional and unethical move. Limetech would have made its announcement when it was ready. If the user was truly concerned, they could have approached Limetech before setting off a five-alarm fire with incomplete information. They are no friend to this community.
  3. That is far from accurate when you look at the server software space. Unraid baking in a pricing model that ensures a steady cash flow is more than acceptable considering how long they've gone without any form of fee or licensing system like this. Even more so considering how much they've given over the years when you compare the original offering to how far it has evolved into its modern-day state. They're also allowing customers to keep using the product even after the annual license expires... that's incredibly generous considering most licensed software becomes useless once expired.

     Yes, you can build a similar free, open-source alternative, but it will be a hell of a lot more work to get set up, won't be nearly as modular, and will never provide the same easy-to-use experience. Frankly, all those touting free alternatives should be donating $$$ to those devs as well, but I'm sure they are not, and that's just sad and a big problem across the free/open-source software community.

     I personally have bought multiple licenses from Unraid and recommended it to many over the years. I have absolutely no regrets and fully support their decision to create a reasonable revenue stream that they more than deserve.
  4. If you're not booting beyond the BIOS, you have something else going on. You should create a separate support thread, since I don't think this is related to the release. My guess is your BIOS settings have reset so you're no longer booting from the USB key, the key itself has died, or the motherboard has a hardware issue such as the USB port the key is plugged into going bad. Could just be bad luck/timing paired with the update.
  5. +1 Would love to see f2b (fail2ban) built into Unraid. As mentioned above, as soon as the My Servers plugin started becoming a thing, I felt fail2ban should also have had a native rollout. There are a lot of users out there, and a whole bunch of them aren't aware of the security risks involved in exposing any part of their server to the outside world. f2b would be one more layer to protect them (a rough sketch of the idea is included after this list).
  6. Oh baby, you're embarking on a fun project. I did the same kind of consolidation between April and May on my primary server: 23 drives down to 7. It took a week or so of shuffling data between the old and new drives, but I came out unscathed in the end, and you will too. Measure twice and cut once! Be sure to sell your old drives after properly clearing them. I recovered all the upgrade costs and then some thanks to drive shortages and Chia demand... a nice bonus for the effort.
  7. Yeah, I caved late last night and recreated the docker image. That corrected the error, but yes, I agree something is up with the cache drive (it's a fairly old drive). I might swap it out for a new drive and replace the cable while I'm at it. For reference, and for future people who may stumble on this thread, the error line started with "BTRFS error (device loop2)". You can see the full log in the zip. It repeated over time and was definitely related to my docker image (Plex in this case). I was able to fix it by recreating the image, but I did so on a different drive. As I said above, I will be replacing the drive in question along with the cable just to be safe; it's time to retire that drive anyway. Thanks for looking into it! (A quick way to scan a syslog for these lines is sketched after this list.)
  8. Wondering if someone could take a look at my diagnostic output and give me an idea of what is going on and how I can fix it? It just started happening last night, both times mid-stream in Plex. I was forced to restart earlier today and everything seemed OK, but it just happened again. Thanks! tower-diagnostics-20181125-2151.zip
  9. Well, almost one month and one drive failure later, I've finally migrated all my drives over to XFS (52TB of data across 13 drives). I can safely say that since I started the move I have not had a single hard lockup like before. The future will be the real test, though, since I was constantly stopping the array etc. to change the drives over after moving their data off. Thanks for the help. Hopefully it works.
  10. As I mentioned, after this morning's freeze I'm 90% sure this is a totally different disk from the one involved previously. I'm going to go with the scorched-earth policy and just move my data and convert all the disks. It shouldn't take "too" long, and you're right, I should do it anyway (not knocking it; it treated us fairly well over the years).
  11. Didn't even think of that, thank you for the suggestion. I bought a new drive anyway, so I'll drop it in, format it as XFS, move data over to it, format the next drive, and so on. Gonna be a pain in the ass but easy enough to accomplish since I have an empty drive to work with. Thanks again!
  12. I've had this happen a few times in the past few months, and it's now frustrating enough to reach out about it. Occasionally I find that I can no longer access the server web interface or the shares on my server. Plex, for instance, stops functioning as well, and I can no longer cleanly shut down the server. The login prompt for the web interface will pop up and I can try to log in, but the actual WebUI itself never loads. I can connect to my VMs and they're functioning fine other than their ability to connect to their shares. The shares are no longer reachable and neither is the flash drive. I can, however, connect via PuTTY.

      I did a little exploration within Midnight Commander during one of the freezes a while back. I can usually access all drives directly (not the shares themselves) except for one rogue disk during whatever this event is. This time it was disk Z4D22HWK, but I'm pretty sure in the past it's been a different disk, W501BSFF or WXL1H641J5RY (pretty sure it's the first one). If I try to access one of these culprit disks, the entire PuTTY session will freeze.

      I have uploaded a syslog and am hoping someone can figure out what is happening. It's very sporadic and I haven't noticed a pattern to it... it seems random. This is Unraid 6.3.5, and if you need a list of my hardware I can provide it later. Thanks! syslog912.txt
  13. This is no longer available; it sold for $350, for future reference. Included in the bundle:
      • Supermicro X9SCM-F server motherboard
      • Supermicro X9SCM-F I/O shield
      • Intel Xeon E3-1230 v2 processor (3.3 GHz)
      • Intel CPU heatsink/cooler and fan
      • SK Hynix 32GB (4x8GB) PC3-12800E memory (1600MHz)
      All in perfect condition, pulled directly from my ESXi build; it has served me extremely well. Plenty of power and highly efficient, and it runs vanilla Unraid without issue. IPMI has saved me a lot of frustration many times over, and the internal USB port is also an excellent feature to have. Asking $350 via PayPal. I'll cover shipping within the US; international buyers will need to cover shipping costs.
  14. ... If you need AD now and only need 7 drives, send me an email and we can work something out: [email protected] Heh heh heh, pretty much opening the floodgates with that one.
  15. I'm with the dirtysanchez on this one. I use the 3TB 7200RPM 1TB-platter drives for parity and data on servers where I need fast read/write access, and the 4TB 5900RPM 1TB-platter drives for archival storage. I do not use the Red & NAS drives. By the time a drive starts having issues, I've either outgrown it or the latest technology is better and worth the upgrade for me. I probably don't consume space as rapidly as others. In complete agreement. It's not worth it for me to pay the premium for the NAS drives because I get rid of my drives so quickly. I try to swap my drives out every 2 years or less; less often lately because they've gotten so large and I don't run out as quickly. Almost time to get rid of my 3TB drives, I must say...
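
Regarding the f2b request in item 5, here is a minimal sketch of what that extra layer does: watch an auth log, count failed logins per source IP, and block repeat offenders. It is an illustration only, not fail2ban itself or anything Unraid ships; the log path, the log format, and the failure threshold are assumptions, and real fail2ban adds jails, filters, time windows, and automatic unbanning.

    # Minimal sketch of the fail2ban idea (illustrative assumptions noted above).
    import re
    import subprocess
    from collections import Counter

    LOG_PATH = "/var/log/auth.log"   # assumption: standard sshd-style auth log
    MAX_FAILURES = 5                 # assumption: ban after this many failures

    # Typical sshd failure lines contain "Failed password for ... from <ip>"
    FAIL_RE = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

    def scan_failures(log_path):
        """Count failed login attempts per source IP."""
        counts = Counter()
        with open(log_path, errors="ignore") as log:
            for line in log:
                match = FAIL_RE.search(line)
                if match:
                    counts[match.group(1)] += 1
        return counts

    def ban(ip):
        """Drop all traffic from the offending IP (requires root)."""
        subprocess.run(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"], check=True)

    if __name__ == "__main__":
        for ip, failures in scan_failures(LOG_PATH).items():
            if failures >= MAX_FAILURES:
                print(f"banning {ip} after {failures} failed logins")
                ban(ip)

Run it as root so the iptables rule can actually be inserted; the point is simply that a small amount of log watching buys a meaningful layer of protection for anything exposed to the outside world.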
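
For item 7, a quick, hypothetical way to check whether the docker image's loop device is logging BTRFS errors. The syslog path is an assumption (the live log is typically /var/log/syslog; a syslog pulled from a diagnostics zip works too), and this only surfaces the lines; recreating docker.img and checking the cache drive and cable, as described above, is still the actual fix.

    # Scan a syslog for BTRFS errors/warnings on loop devices (docker image).
    import re
    import sys

    # Path is an assumption; pass a different log file as the first argument.
    SYSLOG = sys.argv[1] if len(sys.argv) > 1 else "/var/log/syslog"

    # Kernel lines look like: "BTRFS error (device loop2): ..."
    BTRFS_ERR = re.compile(r"BTRFS (error|warning) \(device loop\d+\)")

    hits = 0
    with open(SYSLOG, errors="ignore") as log:
        for line in log:
            if BTRFS_ERR.search(line):
                hits += 1
                print(line.rstrip())

    print(f"{hits} BTRFS loop-device error/warning lines found")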