-Daedalus

Advanced Member · 135 posts · Community Reputation: 5 (Neutral) · Gender: Undisclosed
  1. CPU allocation: Sanity check

    Hi all, Having recently moved to a 1950X, and experimenting with clustered VMs and a few other things, I find myself needing to move core assignments around more than before. Previously, with only a VM or two, I'd use 'isolcpus' to leave a couple of cores free for VM use.

    I'm wondering, though: can I take this a bit further, isolate everything bar, say, one or two cores, and assign everything manually? I'd leave a core or two (whatever is needed) for unRAID itself and the lighter Docker containers - download clients, monitoring, etc. - and manually assign the heavier containers - Plex and MineOS, chiefly - as well as any VMs I'm running.

    This would mean the machine wouldn't need a reboot every time I wanted to assign more isolated cores to a VM, and only Plex itself would need a restart if I needed to give it more resources. Anyone else do this? Does it work well?
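    The manual-assignment scheme described above can be sketched roughly as follows. This is a hedged example, not an official procedure: the core numbers, container name, and VM name are all illustrative, and the isolcpus layout assumes a 16-core/32-thread 1950X with cores 0/16 left for unRAID.

    ```shell
    # /boot/syslinux/syslinux.cfg - add isolcpus to the append line so the
    # kernel scheduler leaves those cores free (illustrative layout):
    #   append isolcpus=1-15,17-31 initrd=/bzroot

    # Pin a heavy container (e.g. Plex) to a subset of the isolated cores:
    docker run -d --name plex --cpuset-cpus="1-4,17-20" plexinc/pms-docker

    # Pin a running VM's vCPUs to isolated cores via libvirt (vCPU -> host core):
    virsh vcpupin MyVM 0 5
    virsh vcpupin MyVM 1 21
    ```

    With everything isolated up front, moving a container to different cores only means recreating that container, and moving a VM's cores only needs virsh (or an XML edit) - no reboot, which is the point of the exercise.
    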
  2. Agreed. I've brought things like this up before, but the comments are usually: 1) there's a plug-in to do this; 2) there's a (relatively involved) procedure for this, check the wiki; 3) we don't need this built in, see the first two points. In the same vein, unRAID could do with more disk management features in general. Having a "Remove" button for a drive that does what's being suggested here would be nice. Having a "Replace" button that does this, but also moves the data to a new disk, would be nice as well.
  3. unRAID® by country

    I never knew this was a thing! How many in Ireland?
  4. Double or Triple "Cache" pools

    Yes, although (AFAIK) snapshots aren't implemented in the GUI yet, and the whole idea of this is to not have the VMs on the same storage as all the Docker images, constantly reading/writing things all over the place. I think I might have to look at other solutions, to be honest. unRAID does lots of things pretty well, but nothing amazingly. ESXi has much better VM management; ZFS has (arguably) much better storage. If Limetech were in the habit of giving even a rough roadmap of the direction they're thinking of going, that might help, but we don't really hear about features until they show up in release builds, and for something like this - which is typically more of a longer-term investment - I don't think that serves the community well.
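    As a stop-gap while snapshots aren't in the GUI, BTRFS snapshots can be taken by hand from the console. A minimal sketch, assuming the cache pool is BTRFS and /mnt/cache/domains is a subvolume (paths are illustrative):

    ```shell
    # One-time: create somewhere to keep snapshots
    btrfs subvolume create /mnt/cache/snaps

    # Read-only snapshot of the VM vdisk share, stamped with today's date
    btrfs subvolume snapshot -r /mnt/cache/domains "/mnt/cache/snaps/domains-$(date +%Y%m%d)"

    # List existing subvolumes and snapshots
    btrfs subvolume list /mnt/cache
    ```
    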
  5. Protected Flash Storage (other than cache)?

    There's a feature request for multiple cache pools. I'd really like something other than cache - that isn't as slow as the main array, but is protected - for VMs and the like.
  6. unRAID OS version 6.4.1-rc1 available

    Haven't updated from 6.3.5 yet, but planning to today or tomorrow. For those of us on Threadripper, are there any manual steps - edits to the go file or config files, or similar - that are still needed?
  7. Dark theme?

    I know this has been raised before, around the time of the original launch, and I think Limetech said they would look into it at the time, but (unless I'm missing something) it hasn't been implemented. I recently invested in a new monitor, and I now find this forum's theme especially retina-searing. Any chance we could get something going on this?
  8. Anybody planning a Ryzen build?

    Cheers. BIOS 2.00 was released not too long ago for the Taichi. I'm going to try that this weekend with RC15e (released today) assuming you don't beat me to it. If it's still not working I'll flash 1.70 and see what happens.
  9. unRAID OS version 6.4.0-rc15e available

    Awesome! Will try this release at the weekend, along with a new BIOS update for my X399 system. Anything worthwhile in here for Ryzen/Threadripper?
  10. Thanks for the responses, guys. Limetech: I'll give that workaround a go next chance I get, though it probably won't be for a little while. Yup, checked - the card is on the latest firmware. Charlie: I'm using 1.80. Figured things would naturally get better with BIOS revisions on a new platform, but what do I know? Same as above. Helldriver: Correct, I didn't flash the boot ROM for this card either, so I can't see if HDDs are detected, but as stated, the H310 I was using saw drives fine, then didn't here. I'd expect the same is true of the 9211.
  11. Anybody planning a Ryzen build?

    Similar to what John_M asked: if you try with one card and it's detected, do you see your logs full of the same errors I do? You don't always see it by opening the live log, so I usually go Tools -> System Log. Which slots have you tried the card in? I've tried it in all of them, with the same result. I have a 9207-8i on the way as well. I'll give things a go with that at some point, but the exact same behavior was present with an IT-flashed H310 as well, which makes me think it's not the card.
  12. AMD Ryzen update

    Thanks for adding a specific topic for this. I've experienced one crash so far on Ryzen (Threadripper) due to this, but that's on 6.3.5. I haven't been able to move to the RCs because my HBA isn't detected. Is there a possibility this can be looked at? I'm willing to pay for a troubleshooting session to facilitate data gathering, testing, etc. if need be. Relevant reading: https://forums.lime-technology.com/topic/61500-64rc101112-it-flashed-lsi-9211-8i-drives-not-detected-on-x399/
  13. [Support] Linuxserver.io - Unifi

    Thanks, guys. I had to set the adapter to host mode for the container, and had to manually force an inform via SSH, but it's all up and running now. Man, does regular consumer stuff suck in comparison.
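    For anyone else hitting this, the manual inform step is roughly as follows. The addresses are illustrative - use your device and controller IPs; on an unadopted UniFi device the default SSH credentials are ubnt/ubnt:

    ```shell
    # SSH into the UniFi device, then point it at the controller:
    ssh ubnt@192.168.1.10
    set-inform http://192.168.1.100:8080/inform
    # If the device sits at "Adopting" in the controller UI,
    # run set-inform again after triggering the adoption.
    ```
    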
  14. [Support] Linuxserver.io - Unifi

    Hi all, Running this container on 6.3.5. I get to the setup in the webUI, but it can't detect any devices. All settings are default; no ports changed. The server is at a static 1.100 address. Installing the controller software on a Windows machine at 1.8 works as expected: the USG, AP and switch are all seen in the configuration wizard. Any ideas?
  15. Anybody planning a Ryzen build?

    Using Chrome.

Copyright © 2005-2017 Lime Technology, Inc. unRAID® is a registered trademark of Lime Technology, Inc.