frevan

Members
  • Posts: 13
  • Gender: Undisclosed
  • Achievements: Noob (1/14)
  • Reputation: 0
  1. Apparently using a Windows To Go drive with a GPT partition scheme lets me boot it from USB in the VM.
  2. I've been trying to get a VM in unRAID to boot from a USB device. More specifically, I have a Windows To Go installation on an external SSD and use it to boot the same (work) system on various computers. I would like to use it in a VM as well. The first thing I tried was to pass the physical device through to the VM, using "sata" as the disk type. This lets me boot from the drive, but there are problems with the GPU passthrough and with overall speed (it's sluggish for some reason, and takes a long time to log in and start the various startup applications). So that doesn't seem to be the way to go. Now I'm using a passed-through PCIe USB controller with the drive attached to it (roughly the XML shown below). Neither OVMF nor SeaBIOS seems to be able to boot from the drive this way. They don't seem to see it, as far as I can tell. What am I missing?
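     A minimal sketch of what I mean by that: the PCI address is a placeholder for my controller (the real values are under Tools > System Devices in unRAID), and whether an explicit <boot order='1'/> would make the firmware consider the controller at all is exactly the kind of thing I'm unsure about:

        <!-- passed-through PCIe USB controller; address values are placeholders -->
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </source>
          <!-- ask the firmware to try this device first when booting -->
          <boot order='1'/>
        </hostdev>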
  3. I have a bunch of Windows VMs that I currently use for testing our software. Moving them all to KVM would involve a bit of work (not too much, I guess), but more importantly I'd have to reactivate Windows on all of them. Furthermore, I can easily run them on other (Windows) computers if need be, as long as I install VMware Player on them.
  4. I'm trying to run VMware Workstation inside a VM that in turn runs in unRAID 6.2 beta 23. While I can run nested VMs, they run extremely slowly. As far as I can tell, this is because I've been unable to use hardware-assisted virtualization (the "Virtualize Intel VT-x" option in Workstation). When I enable the option, VMware complains that this isn't available on the processor. How would I make sure this is available in the VM? I found some information here (hiding KVM from the VM, setting vmport to off, some other things) but none of it has worked so far; the relevant XML bits are sketched below.
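     The sketch, assembled from what I found rather than a known-good config: my assumption is that nested VT-x first has to be enabled on the host (the file /sys/module/kvm_intel/parameters/nested should read Y or 1), and that the guest then needs the host CPU model passed through so the vmx flag is visible to it:

        <features>
          <!-- hide the KVM signature from the guest -->
          <kvm>
            <hidden state='on'/>
          </kvm>
          <!-- disable the VMware IO port, as suggested in those threads -->
          <vmport state='off'/>
        </features>
        <!-- expose the host CPU, including vmx, to the guest -->
        <cpu mode='host-passthrough'/>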
  5. Another thread mentioned that core assignment means that a VM can only use the assigned cores, not the others. So now I'm thinking of assigning all cores to all VMs, though I guess I'll just have to experiment with it and try to find the optimal configuration. Good to know about unRAID and core 0, however; I'll take that into account.
  6. Indeed, I have considered that and I'll also experiment with it once the system is built.
  7. I'm going to build an unRAID system with a quad-core i7. I'd like to give one VM 3 cores (plus the associated hyperthreaded cores), which only leaves one actual core for the second VM. This is less than ideal, so what I have in mind is:
     - VM1 gets cores 1-3 (plus hyperthreaded cores)
     - VM2 gets cores 3-4 (plus hyperthreaded cores)
     I wouldn't really have both computers doing a lot of work at the same time, but they would be running together most of the day. Would this have any downsides? I assume it's not ideal for VM performance, but I've done similar things in VMware before without really noticing it. (A sketch of the pinning I have in mind is below.)
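     Concretely, assuming 0-based numbering where the four cores are 0-3 and core n's hyperthread sibling is n+4 (the real pairing should be checked with lscpu -e first):

        <!-- VM1: six vCPUs on cores 0-2 plus their HT siblings 4-6 -->
        <vcpu placement='static'>6</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='0'/>
          <vcpupin vcpu='1' cpuset='4'/>
          <vcpupin vcpu='2' cpuset='1'/>
          <vcpupin vcpu='3' cpuset='5'/>
          <vcpupin vcpu='4' cpuset='2'/>
          <vcpupin vcpu='5' cpuset='6'/>
        </cputune>

        <!-- VM2: four vCPUs on cores 2-3 plus siblings 6-7;
             core 2 (and its sibling 6) is shared with VM1 -->
        <vcpu placement='static'>4</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='2'/>
          <vcpupin vcpu='1' cpuset='6'/>
          <vcpupin vcpu='2' cpuset='3'/>
          <vcpupin vcpu='3' cpuset='7'/>
        </cputune>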
  8. Thanks for the replies. The loss of the console doesn't matter to me; I'll just be running a couple of Windows VMs.
  9. It seems to reboot fine with SeaBIOS instead of OVMF. I'll have to try it a bit more to be certain, but it looks good so far. Does anyone know if this would be a bug in OVMF or something I did wrong?
  10. I've now tried with Remote Desktop, and it looks like the computer crashes when I remove the GPU (the RDP connection is lost and can't be restored).
  11. I didn't know that was possible. I'm not sure what happens in this case, as I don't see anything on the screen. This is what I do and what (seems to) happen:
      1. I initiate a delayed reboot with the command "shutdown /r /t 10" (and click away the warning message that appears)
      2. I safely remove the graphics card (but not the corresponding sound card, if that matters)
      3. the screen goes black
      4. the computer's hdd has some activity for a bit, so I assume it's rebooting
      5. the screen never comes back on
  12. I forgot to mention I'm using the latest 6.2 beta.
  13. Hello, I configured some virtual machines with Windows 10 and GPU passthrough, and they work very well so far. There's only one problem at this point, but it's a bit annoying: when I reboot Windows, it shuts down, I see the POST screen, and then I get a black screen with just a text cursor (which doesn't blink) on it. I then have to force stop the VM and start it again, which works fine. Has anyone seen this problem before? Is there a solution? I did search the forum, but I didn't find anything so far.
      Some information about the system:
      - Gigabyte motherboard with Z170 chipset
      - Skylake Pentium CPU (the cheapest I could find, because I'm just testing at this point)
      - tried GeForce GT610, Radeon R9 270X and Quadro M2000 cards
      I tried with and without breaking up the IOMMU groups, and with just one and two cards in the system. (The passthrough part of my XML is sketched below, for completeness.)
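      The PCI addresses in the sketch are placeholders (unRAID lists the real ones under Tools > System Devices); the GPU and its HDMI audio function go in as separate hostdev entries:

         <!-- GPU (function 0); address values are placeholders -->
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
           </source>
         </hostdev>
         <!-- the card's HDMI audio device (function 1) -->
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
           </source>
         </hostdev>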