JonathanM

Moderators
  • Posts: 16120
  • Joined
  • Last visited
  • Days Won: 65

JonathanM last won the day on November 11 2023

JonathanM had the most liked content!

  • Member Title: unAdvanced Member

  • Gender: Undisclosed

Reputation: 2.3k

Community Answers (204)

  1. Not using bridge network.
  2. Obviously, before messing with it, make a backup. Stop your HA VM and click the 32GB under CAPACITY. Change it to 42G, or whatever floats your boat, and apply the change. Set up a new VM with your favorite live utility OS as the ISO; https://gparted.org/livecd.php is a good option. Add the existing haos vmdk vdisk file as a disk on the new VM. Boot the new VM; it should start the utility OS, where you can use gparted to expand the partition to fill the expanded vdisk image.
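For anyone who prefers the command line over the GUI capacity field, the resize step can be sketched in shell. This is a dry run against a tiny dummy image (the path and sizes are hypothetical stand-ins for the real 32G vdisk), and growing with truncate only applies to raw images; as noted above, back up first.

```shell
# Create a small dummy "vdisk" standing in for the real image (hypothetical path).
dd if=/dev/zero of=/tmp/haos.img bs=1M count=1 status=none

# Keep a backup before resizing, as the post says.
cp /tmp/haos.img /tmp/haos.img.bak

# Grow the raw image to the new size; the added space stays sparse until written.
truncate -s 42M /tmp/haos.img

# Confirm the new size in bytes (42 * 1024 * 1024 = 44040192).
stat -c %s /tmp/haos.img
```

The partition inside the image still has to be expanded afterwards, which is what the gparted live CD step above handles.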
  3. Which is why the regular Unraid container startup has customizable delays between containers. A black start from nothing is easier; a partially running start during a backup sequence is more complex and needs even better customization. Shutdown and startup conditionals and/or delays would be ideal. As an example, for my nextcloud stack I'd like nc to be stopped, wait for it to close completely, stop collabora, then stop mariadb. Back up all three. Start mariadb, start collabora, wait for those to be ready to accept connections, then start nextcloud. The arr stack is even more complex: the arrs and yt dl need to be stopped, then the nzb, then the torrent and VPN. Startup should be exactly the reverse, with ping conditionals ideal and blind delays acceptable.
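The ordering described for the nextcloud stack can be sketched as a script. This is a dry run: DOCKER is set to echo the commands instead of executing them, the container names come from the example above, and wait_ready is a stub where a real readiness check (polling a port or URL) would go.

```shell
DOCKER="echo docker"   # drop the echo to actually run the commands

# Stub readiness check; a real one might be: until curl -sf "$1"; do sleep 5; done
wait_ready() { true; }

# Shutdown: the app first, then its dependencies.
$DOCKER stop nextcloud
$DOCKER stop collabora
$DOCKER stop mariadb

# ...back up all three appdata shares here...

# Startup: exact reverse order, waiting for each dependency to be ready.
$DOCKER start mariadb && wait_ready mariadb
$DOCKER start collabora && wait_ready collabora
$DOCKER start nextcloud
```

Blind sleeps can replace wait_ready, as the post says, but a real conditional check keeps the startup from racing a slow database.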
  4. I think that is backwards. emhttp was the only web engine in the past, currently nginx is the web server, and emhttp takes care of the background tasks.
  5. Sorry, I didn't mean to imply that there are properly working boards that don't run with all slots full. If the manufacturer says their board will run with model XXXX RAM, it should run it fine, but that doesn't mean boards don't fail. I just wanted to let you know that this could be a failure symptom: you can have a board where all the slots are fine and all the DIMMs are fine, but all 4 at once isn't. I personally had a board that ran fine with all 4 DIMMs for years, until it didn't. The only failure mode was random errors when all 4 slots were full; it ran perfectly on any 2 of the DIMMs, but put all 4 in and memtest would fail every time.
  6. Are you positive nothing else was trying to access the drive during the test?
  7. Some motherboards just won't run with all slots filled.
  8. Yeah, but they hawk the ability to easily daisy chain them in the same system, even have pinout and diagrams to show how. I can see how stacking these as you add drives could be a good way to go, assuming they work as promised.
  9. It's more correct to think of the USB stick as firmware with space for storing changed settings. Unraid loads into and runs from RAM; it only touches the USB stick when you change settings. Container appdata and executables should live on an SSD, or multiples for redundancy, separate from the main Unraid storage array. Legacy documentation and videos will refer to that storage space as "cache"; now it's more properly referred to as a "pool", of which you can create as many as make sense for the desired speed and redundancy.
  10. Hoopster summed it up quite well, but I wanted to stick my $.02 into the discussion to hopefully clear this up a little more. Parity doesn't hold any data. Period. It's not a backup. Period. It contains the missing bit in the equation formed by adding up the bits in an address row. Pick any arbitrary data offset: say drive1 has a 0, drive2 has a 1, drive3 has a 1, and drive4 has a 1, so parity would need to be a 1 to make the column add up to 0. Remove any SINGLE drive, do the math to make the equation 0 again, and you know what bit belongs in that column of the missing drive. So you can protect ANY number of drives, and as long as you only lose 1 drive, the rest of the drives PLUS PARITY can recreate that ONE missing drive. Lose 2 drives and you lose the content of both, but since Unraid doesn't stripe across drives, you only lose the failed drives. Unraid has the capability to use two parity drives, so you can recover from 2 simultaneous failures. However, the second parity is a much more complex math equation that takes into account which position the drives are in, so it's a little more computationally intensive. The extra math is trivial for almost all modern processors.
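The bit arithmetic in that example is just XOR, and it's easy to verify in a few lines of shell. Using the same made-up column of bits: parity is the XOR of all the data bits, and any single missing bit is the XOR of everything that's left.

```shell
# One bit column across four data drives (the values from the example above).
d1=0; d2=1; d3=1; d4=1

# Parity is chosen so the whole column XORs to zero.
parity=$(( d1 ^ d2 ^ d3 ^ d4 ))

# "Lose" drive 2: XOR the surviving drives with parity to rebuild its bit.
rebuilt=$(( d1 ^ d3 ^ d4 ^ parity ))

echo "parity=$parity rebuilt_d2=$rebuilt"   # prints: parity=1 rebuilt_d2=1
```

This only covers single parity; the second parity drive the post mentions uses a different, position-aware equation, which is why it costs a bit more CPU.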
  11. New, unproven, expensive? Advertising looks great, do you have any links to third party real tests?
  12. That's not a thing. Unraid will quite happily continue to use a disk slot even if the drive fails a write and is disabled.
  13. Strange. I'm out of things to try at this point. Maybe someone else will have some ideas.
  14. Probably because you are using a custom network for delugevpn instead of the default bridge. Binhex doesn't support anything but plain bridge. Doesn't mean you can't make it work, but it can be challenging. Maybe the radarr port was added while you were in plain bridge mode?