jwegman

Members
  • Posts: 108
  • Joined: Converted
  • Gender: Undisclosed


jwegman's Achievements

Apprentice (3/14) · 1 Reputation

  1. @uaeproz I realize that I'm late to the party; however, has it been considered that, instead of real-time transcoding for each client stream request (from clients not capable of Direct Play), you pre-transcode once ahead of time (perhaps using Plex's Optimized Versions feature) for those users? I suspect that attempting to real-time transcode such large/high-bitrate h265 content to as many users as you anticipate may end in tears (and frustration) regardless of how much money/hardware you throw at it (see the ffmpeg sketch after this list). https://support.plex.tv/articles/214079318-media-optimizer-overview/
  2. ...to answer my own question, the issue was resolved by changing the bridge interface's model type from 'vmxnet3' to 'e1000-82545em' (as Gridrunner uses in his example vm.xml). The original element:

     <interface type='bridge'>
       <mac address='52:54:00:51:66:48'/>
       <source bridge='br0'/>
       <model type='vmxnet3'/>
       <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
     </interface>

     ...after which Unraid altered the bridge interface element to the following (it automatically added the target dev and alias):

     <interface type='bridge'>
       <mac address='52:54:00:51:66:48'/>
       <source bridge='br0'/>
       <target dev='vnet0'/>
       <model type='e1000-82545em'/>
       <alias name='net0'/>
       <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
     </interface>

     (See the virsh notes after this list for one way to make this edit from the command line.)
  3. Are people able to log into the App Store on a fresh/clean Sierra 10.12.6 VM? I've tried several VM creation attempts and they've all resulted in "Verification Failed. There was an error connecting to the Apple ID server". Upon finalizing the Fusion VM, I had verified that I could log into the App Store with a valid Apple ID; however, after converting to KVM and running it in Unraid, I hit the above error. Here's my VM xml (see the virsh note after this list for a quick way to verify the attached NIC model):

     <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
       <name>Sierra_10.12.6</name>
       <uuid>190d9ca4-3e22-8b21-bcfe-eba9a9a53d31</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Ubuntu" icon="ubuntu.png" os="ubuntu"/>
       </metadata>
       <memory unit='KiB'>8388608</memory>
       <currentMemory unit='KiB'>8388608</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>8</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='10'/>
         <vcpupin vcpu='1' cpuset='26'/>
         <vcpupin vcpu='2' cpuset='11'/>
         <vcpupin vcpu='3' cpuset='27'/>
         <vcpupin vcpu='4' cpuset='12'/>
         <vcpupin vcpu='5' cpuset='28'/>
         <vcpupin vcpu='6' cpuset='13'/>
         <vcpupin vcpu='7' cpuset='29'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-q35-2.9'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/190d9ca4-3e22-8b21-bcfe-eba9a9a53d31_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
       </features>
       <cpu mode='host-passthrough' check='none'>
         <topology sockets='1' cores='4' threads='2'/>
       </cpu>
       <clock offset='utc'/>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>destroy</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/disks/OCZ_VERTEX460_A22BF061439003586/VM_Images/MacOS_Sierra_10.12.6/sierra_10.12.6_new.img'/>
           <target dev='hdc' bus='sata'/>
           <boot order='1'/>
           <address type='drive' controller='0' bus='0' target='0' unit='2'/>
         </disk>
         <controller type='usb' index='0' model='piix3-uhci'>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
         </controller>
         <controller type='sata' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='pci' index='1' model='dmi-to-pci-bridge'>
           <model name='i82801b11-bridge'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
         </controller>
         <controller type='pci' index='2' model='pci-bridge'>
           <model name='pci-bridge'/>
           <target chassisNr='2'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:51:66:48'/>
           <source bridge='br0'/>
           <model type='vmxnet3'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
         </interface>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x046d'/>
             <product id='0xc52b'/>
             <address bus='1' device='6'/>
           </source>
           <address type='usb' bus='0' port='1'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <seclabel type='none' model='none'/>
       <qemu:commandline>
         <qemu:arg value='-cpu'/>
         <qemu:arg value='Penryn,vendor=GenuineIntel'/>
         <qemu:arg value='-device'/>
         <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1'/>
         <qemu:arg value='-device'/>
         <qemu:arg value='vfio-pci,host=02:00.0,bus=pcie.0,multifunction=on,x-vga=on'/>
         <qemu:arg value='-device'/>
         <qemu:arg value='vfio-pci,host=02:00.1,bus=pcie.0'/>
         <qemu:arg value='-device'/>
         <qemu:arg value='vfio-pci,host=00:1d.0,bus=root.1,addr=00.0'/>
       </qemu:commandline>
     </domain>
  4. The lowest he went for me (last week) was $60 (with free shipping).
  5. FYI, this is back as of today; $159 @ BestBuy.com. My local BestBuy had a mix of the Thailand-produced units (which use the larger cache drives) and Chinese-produced ones: https://www.bestbuy.com/site/wd-easystore-8tb-external-usb-3-0-hard-drive-black/5792401.p?skuId=5792401
  6. ...I myself haven't yet assigned a parity drive on my secondary Unraid box (which backs up my primary)... so I don't find that as amazing. However, my use-case might be on the fringe.
  7. LXC was always about containers (if you recall, Docker sprouted from LXC). LXD switched to the image concept (vs LXC's tarball approach) and scrapped the various C and shell-script user-space tools for a REST-based client (which, confusingly, is named lxc)... There are public (and private) image servers from which you can share and pull an image (just like Docker). While LXC touted the idea of application containers, it was primarily focused on system containers (para-virtualized Linux instances). With LXD, they are 100% focused on system containers and defer application containers to the likes of Docker. Heck, they tout hosting Docker within an LXD system container. Docker's application-container approach is great for single-use things like Plex, nzbget, sickrage, etc. However, it sucks if you need an instance that runs more than a single application, such as a LAMP environment, where you don't want to orchestrate a spin-up of an individual Docker instance for each component. LXD is great if you need a Linux instance where you can run your suite of daemons/applications without the overhead of HVM (hardware-virtualized) QEMU/KVM; see the lxc sketch after this list. You can even 'pass through' devices such as graphics/audio cards for a desktop instance (although I have no experience with GPU passthrough on LXC/LXD). I would love to see the addition of LXD to Unraid for the Linux VM use-case where you don't want the bloat or performance implications of HVM and the purpose of the VM is more sophisticated than a single-use Docker instance.
  8. I'd be much obliged. The funny thing is, I'm in an IBM town with a moderately large plant (which has been in decline the past decade or two), yet it's seemingly dry of old surplus equipment like these keyboards... Regards, Jake
  9. Good day all, I'm in the US (midwest) and on the hunt for a serviceable IBM keyboard... The 'clicky' kind. I'm open to any of the Model F's (from the PC, XT, and AT era) and Model M's (from the PS/2 era). I'm not particular about the cable interface; I'll work with any type, including the RJ45 terminal kind... If you have one that you'd like to send to a good home, please let me know and we'll discuss price (based on condition, etc)! Regards, Jake
  10. Have you enabled MSI interrupts? I had to do that for every single win10 VM that uses HDMI audio (see the registry one-liner after this list): http://lime-technology.com/wiki/index.php/UnRAID_Manual_6#Enable_MSI_for_Interrupts_to_Fix_HDMI_Audio_Support
  11. > Yeah, I have two s2600cp2j's each with (2) gt730's (for VMs), and a third s2600cp2j with (1) gtx1080 (win 10 on bare metal).

      > nice, so using the gt730's for gaming vm's? I'm wondering if you plan to virtualize your third s2600cp2j with the gtx1080 with unraid. my setup goal would be to run possibly two gtx1080's in sli (if possible) for a virtual editing box with adobe premiere. how much did your s2600cp2j's run you?

      I'm mainly using the gt730's for VM GPU passthrough. I had dicked around with 4 VMs (each using a gt730) with X-Plane 10 (flight sim), all networked to cooperate with a master system to slave cockpit views etc. It was fun; I was using a MacOS VM as the master, a win10 VM as a slave, and two other Linux VMs as slaves (all on the first two dual e5-2670 boxes, before I built the third). Now I'm only using the 3rd system with the gtx1080 on bare-metal win10 with a head tracker, so I don't really need the other networked systems for views... I do have an Unraid license for the third system, and I've done some simple benchmarks between bare metal and VM with the gtx1080, and as that system will only be used for 'gaming' (sims), I'll most likely leave it bare-metal/win10 (not going to keep it powered up when not in use, unlike the first two Unraid VM hosts). Concerning the cost of the s2600cp2j's, they were all from natex, so $175 ea.
  12. > I haven't ordered it but was going to. You can get it on ebay for like $10 or something. I don't have the link handy but if you can't find it I'll look it up, as I saved it with the intent of ordering it. Probably still will.

      > Thanks, yeah they're not cheap for such a small, thin piece of metal. I'll probably skip it..

      The Intel IO plates are well built (not just the typical stamped metal). I've paid $5 ea (plus shipping) via Best Offer from http://www.ebay.com/itm/182162637018?_trksid=p2057872.m2749.l2649 --edit: shit, just saw that they were sold out...
  13. > Yeah, I have two s2600cp2j's each with (2) gt730's (for VMs), and a third s2600cp2j with (1) gtx1080 (win 10 on bare metal).

      > I'm assuming the 1080 is in the x16 slot (since there is only one). Does this hit the memory tabs? Looking at pictures, it looks like it would easily hit those.

      Nope, in my rig the 1080 is in slot #3 (from the left; the first blue open-ended slot). As I only have one CPU in at the moment, the other blue slot (#5) can't be populated, as it's only available when using a 2nd CPU.
  14. Yeah, I have two s2600cp2j's each with (2) gt730's (for VMs), and a third s2600cp2j with (1) gtx1080 (win 10 on bare metal).
  15. *AND* I consider multiple posts from the same individual(s) in a single thread, hammering on a single point, to be just as meaningless. I'm not calling anyone out here; you all know who you are. Say your piece and let others respond. Consider what has been stated by others; then, if you have something new to say, contribute it. Otherwise, chill out. Easy concept, right?
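
Sketches referenced above

Re item 1: a rough sketch of the one-time pre-transcode idea. Plex's Optimized Versions feature does this from the web UI; outside of Plex, an equivalent standalone pass with ffmpeg might look like the following (filenames and quality settings here are hypothetical examples, not taken from the original discussion):

    # One-time transcode of a high-bitrate h265 source into a widely
    # Direct Play-able h264 version; audio and subtitles are copied as-is.
    ffmpeg -i movie_hevc.mkv \
        -map 0 \
        -c:v libx264 -crf 20 -preset medium \
        -c:a copy -c:s copy \
        movie_h264.mkv

Clients that can Direct Play the h264 version then never trigger a real-time transcode in the first place.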
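Re item 2: a minimal way to make that same model-type edit from the command line, assuming the domain name from item 3 (this is the stock libvirt workflow; Unraid's VM editor should let you make the same change from its XML view):

    # Open the domain XML in an editor; inside the <interface> element,
    # change <model type='vmxnet3'/> to <model type='e1000-82545em'/>.
    virsh edit Sierra_10.12.6

    # Cycle the guest so the new NIC model takes effect.
    virsh shutdown Sierra_10.12.6
    virsh start Sierra_10.12.6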
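Re item 3: a quick way to confirm which NIC model libvirt actually attached to the running guest (same domain name assumed):

    # Lists each attached interface with its type, source, model, and MAC.
    virsh domiflist Sierra_10.12.6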
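Re item 7: to make the LXD workflow concrete, here's roughly what a system-container spin-up looks like with the lxc client (container name and package set are arbitrary examples; syntax as of the LXD 2.x era):

    # Pull an Ubuntu image from the public 'images:' remote and start a
    # system container from it (the image concept, vs LXC's tarballs).
    lxc launch images:ubuntu/xenial lamp-box

    # Install several services into the one instance -- the multi-daemon
    # use-case that single-application Docker containers are awkward for.
    lxc exec lamp-box -- apt-get update
    lxc exec lamp-box -- apt-get install -y apache2 php libapache2-mod-php

    # The same REST-based client lists and manages everything.
    lxc list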
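Re item 10: as I understand it, the fix that wiki page walks you through boils down to a single registry value inside the win10 guest, under the HDMI audio device's instance path (the VEN/DEV path below is a made-up placeholder; pull your device's real instance path from Device Manager first):

    rem From an elevated prompt inside the win10 guest; substitute the
    rem real device instance path for the placeholder segments.
    reg add "HKLM\SYSTEM\CurrentControlSet\Enum\PCI\VEN_XXXX&DEV_XXXX&SUBSYS_XXXXXXXX&REV_XX\0&0000\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties" /v MSISupported /t REG_DWORD /d 1 /f

Then reboot the guest for the change to take effect.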