bennymundz

Members
  • Posts

    111
  • Joined

  • Last visited

Converted

  • Gender
    Undisclosed


bennymundz's Achievements: Apprentice (3/14) · Reputation: 0

  1. Hi Unraid team, I had a disk die (disk 4 in my array). While I was in the process of replacing that dead data disk, a second data disk (disk 1) decided to also give up. I pulled both out of the Unraid server and connected them to a USB-to-SATA adapter to check how dead they were with some tools. One is completely dead and will not even detect; the second (more recent) failure was actually readable via Paragon in Windows, though it isn't healthy. My question is: is there a way to force Unraid to believe data disk 1 is okay, so I can use parity to rebuild data disk 4, and then, once that is rebuilt, use parity to rebuild data disk 1 onto a new disk? Data disk 4 simply won't detect, so there are no SMART reports; data disk 1 fails a SMART test at either 10% or 90% (a smartctl sketch for checking the drives follows after this list). I run one parity disk and both failures are on data disks; the parity drive is good.
  2. Sorry, my fault, I should have explained... It just hangs at the last line and never progresses. I am going to look into what is in PCI slot 9, where it hangs, since clearly that card doesn't want to play nice, and then work backwards from there today (see the lspci sketch after this list). Something is up with the 6.1.x Linux kernel builds in those later versions.
  3. 6.12.0 and 6.12.1 both killed my Unraid box. I manually downgraded to 6.11.5 and I'm back up and running fine. I tried disabling power management in the BIOS and upgrading to the latest BIOS, to no avail.
  4. Binhex Sonarr/Radarr, along with UniFi Video, UNMS, and the UniFi Controller, were all present.
  5. VMs stable after 1 week. The corrupted dockers, which had all run without issue, were the problem. The dockers were deleted and everything started working as expected.
  6. @jonp I will do. I've successfully had 3 VMs running for 2 days now; before deleting those dockers I would be lucky to get 2 hours out of them. After a week I will consider this resolved. For now I'm putting this down to a corrupted docker config causing pain, perhaps locking resources and causing KVM to kill the VMs.
  7. I had a similar issue. It looks like some dockers were causing a headache for me. If you want to give it a try, I'd recommend deleting all your dockers and their associated configs, then trying to run your VMs and seeing how it goes. Be sure to make any necessary backups of your docker configs before you delete them (a backup sketch follows after this list).
  8. @jonp - I will do that and report back. However, I think I might have fixed the issue: there were some dockers which were working fine but for some reason could not be updated (I realise my issue was with VMs). I ditched all the dockers and deleted the docker configs, and the VMs have all been stable since. I did notice the dockers were using an unusual amount of CPU, which led me to trash all of them; perhaps the dockers' config was corrupted.
  9. @John_M mrblack-diagnostics-20190305-2219.zip New diags with log files populated. Please let me know if you need anything else.
  10. @John_M that's weird, I will do that and post back. Thanks.
  11. I'm exactly the same: my Unraid box was fine running multiple VMs for months, then I upgraded and now no VM works longer than 24 hours.
  12. I've since updated back to 6.7 RC5 trying to fix this, to no avail. I have attempted the following:
      - deleted /mnt/user/system/libvirt/libvirt.img and let it be recreated - no resolution
      - increased the size of /mnt/user/system/libvirt/libvirt.img from 1 GB to 2 GB
      - created new VM XMLs
      - fully power cycled my system
      Again this morning I woke up and a VM had crashed. Libvirt log:
      2019-03-04 13:00:00.362+0000: 6701: info : libvirt version: 4.10.0
      2019-03-04 13:00:00.362+0000: 6701: info : hostname: mrblack
      2019-03-04 13:00:00.362+0000: 6701: warning : qemuDomainObjTaint:7831 : Domain id=1 name='AMS01' uuid=61fa935c-ce3b-6c32-3dcd-cea3cece8ee1 is tainted: high-privileges
      2019-03-04 14:03:07.467+0000: 6697: error : qemuMonitorIO:718 : internal error: End of file from qemu monitor
      Would love it if anyone had any suggestions at all. After a VM crashes, and until I reboot, I get this error (see the resource-limit checks sketched after this list):
      Execution error: internal error: process exited while connecting to monitor: qemu: qemu_thread_create: Resource temporarily unavailable
      mrblack-syslog-20190304-2238.zip
      mrblack-diagnostics-20190304-2248.zip
  13. Hello all, I am hoping someone might be able to assist. I recently upgraded to the latest RC5 and noticed weirdness on my Unraid box, so I decided to downgrade back to the latest stable release, 6.6.7, to fix the issue. However, my problem is that my VMs are all still crashing. Until I did the upgrade everything had been running perfectly with no issues for 60+ days, humming along nicely, and now I cannot even get 60 minutes before my VMs crash. This is the log in libvirt, though I don't specifically know what it means:
      2019-03-02 10:30:03.592+0000: 6591: info : libvirt version: 4.7.0
      2019-03-02 10:30:03.592+0000: 6591: info : hostname: mrblack
      2019-03-02 10:30:03.592+0000: 6591: warning : qemuDomainObjTaint:7640 : Domain id=1 name='UTD01' uuid=8ca2aaf4-c5ec-a8a3-774d-9fde82c3d944 is tainted: high-privileges
      2019-03-02 10:30:03.592+0000: 6591: warning : qemuDomainObjTaint:7640 : Domain id=1 name='UTD01' uuid=8ca2aaf4-c5ec-a8a3-774d-9fde82c3d944 is tainted: host-cpu
      2019-03-02 10:30:03.780+0000: 6591: warning : qemuDomainObjTaint:7640 : Domain id=2 name='AMS01' uuid=d80df609-ca7b-33e2-ab90-59de51a176af is tainted: high-privileges
      2019-03-02 10:30:03.992+0000: 6591: warning : qemuDomainObjTaint:7640 : Domain id=3 name='DLH01' uuid=c02d5c00-ad2c-c6e9-6be2-ca553682a971 is tainted: high-privileges
      2019-03-02 10:57:37.319+0000: 6575: error : qemuMonitorIO:718 : internal error: End of file from qemu monitor
      2019-03-02 11:33:18.005+0000: 6575: error : qemuMonitorIO:718 : internal error: End of file from qemu monitor
      Really hoping someone can point me in the right direction to fix this annoying issue. Thanks
  14. Oh my, this would be insanely annoying to me if it were implemented... But who puts their Unraid box on the open internet? Seems like a rookie error to me. Spin up a jump box and SSH to that, or, as someone else said, use a VPN (see the ProxyJump sketch after this list).
  15. Changed Status to Solved
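
A minimal sketch of the drive-health checks referred to in post 1, assuming the disks show up as /dev/sdX through the USB-to-SATA adapter (the device name is a placeholder) and that smartmontools is installed:

    # Identify the attached drives first; the device name below is an example
    lsblk -o NAME,SIZE,MODEL,SERIAL

    # Full SMART report for the readable-but-unhealthy disk (old data disk 1);
    # some USB-to-SATA bridges need "-d sat" added for SMART to pass through
    smartctl -a /dev/sdX

    # Run a short self-test, then read the result a few minutes later
    smartctl -t short /dev/sdX
    smartctl -l selftest /dev/sdX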
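
For the boot hang in post 2, a hedged sketch of how the card in the offending slot might be identified from a successful boot (for example after downgrading); slot-to-device mapping is motherboard specific, so treat this only as a starting point:

    # List every PCI device with vendor and device IDs
    lspci -nn

    # Verbose view, including which kernel driver has claimed each device
    lspci -vvnn | less

    # If a syslog from the failed boot was captured, look at the last devices initialised
    grep -iE 'pci|error' /var/log/syslog | tail -n 50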
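
A minimal sketch of the kind of backup suggested in post 7, assuming the container configs live in the usual Unraid appdata share (/mnt/user/appdata) and that a backups share exists; both paths are assumptions, so adjust them to your setup:

    # Stop all running containers before copying their configs
    # (assumes at least one container is currently running)
    docker stop $(docker ps -q)

    # Copy the appdata share to a dated backup directory
    mkdir -p /mnt/user/backups
    rsync -a /mnt/user/appdata/ /mnt/user/backups/appdata-$(date +%Y%m%d)/

    # Only after verifying the backup, remove the containers themselves
    docker rm $(docker ps -aq)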
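
The "qemu_thread_create: Resource temporarily unavailable" error in post 12 generally means QEMU could not spawn another thread, which points at process/thread limits or exhausted memory rather than libvirt.img itself. A hedged sketch of checks that could be run while the host is in the failed state:

    # Domains as libvirt currently sees them
    virsh list --all

    # Per-process limit on user processes/threads for the current shell
    ulimit -u

    # System-wide thread ceiling versus the number of tasks that currently exist
    cat /proc/sys/kernel/threads-max
    ps -eLf | wc -l

    # Thread creation also fails when memory is exhausted
    free -m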
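
For the jump-box approach in post 14, a minimal sketch using OpenSSH's ProxyJump; the hostnames and user names are placeholders:

    # One-off connection: hop through the jump box to reach the Unraid host on the LAN
    ssh -J user@jumpbox.example.com root@unraid.lan

    # Equivalent entry for ~/.ssh/config so that "ssh unraid" does the hop automatically
    Host unraid
        HostName unraid.lan
        User root
        ProxyJump user@jumpbox.example.com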