sloob

Members
  • Posts: 79
  • Joined
  • Last visited

  • Gender: Undisclosed


sloob's Achievements

Rookie (2/14)
Reputation: 2

  1. Here is the output (see the note after this list for picking out the partitions with no filesystem signature):
     /dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="272C-EBE2" BLOCK_SIZE="512" TYPE="vfat"
     /dev/loop1: TYPE="squashfs"
     /dev/sdf1: UUID="c0ee8f00-ac85-4429-b5d8-88d34c051dff" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="fc78c683-3ba7-4102-be1b-c212445fc81b"
     /dev/nvme0n1p1: UUID="c18f6710-9ec2-483c-81a6-c7deac32d0ba" UUID_SUB="3874402b-fc82-44bc-bb02-527c75fb804c" BLOCK_SIZE="4096" TYPE="btrfs"
     /dev/sdb1: UUID="7f1e82ce-7080-4992-b551-353df6501f1c" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="1becd917-01"
     /dev/sdk1: UUID="a5bd460c-eb00-4f3d-a950-fdf48a2b01df" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="1becd915-01"
     /dev/sdi1: UUID="cbec1d86-292e-4613-8a9e-c4e690c89bff" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="1becd91b-01"
     /dev/sdg1: UUID="6efb0319-0e15-4c0e-829c-28d0aa0903dc" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="861719a4-b949-36c8-40c8-76be2832510d"
     /dev/loop0: TYPE="squashfs"
     /dev/sde1: UUID="8be95965-635e-4f81-a519-15adc525b5e2" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="b47a4e39-e902-4b6d-aa04-64cf297a6374"
     /dev/sdj1: UUID="3fa58017-2146-498f-a76f-232ce4351270" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="e67564e4-dea8-4690-95e6-6d9baeeec3db"
     /dev/md6: UUID="3fa58017-2146-498f-a76f-232ce4351270" BLOCK_SIZE="512" TYPE="xfs"
     /dev/md4: UUID="6efb0319-0e15-4c0e-829c-28d0aa0903dc" BLOCK_SIZE="512" TYPE="xfs"
     /dev/md2: UUID="cbec1d86-292e-4613-8a9e-c4e690c89bff" BLOCK_SIZE="512" TYPE="xfs"
     /dev/md7: UUID="7f1e82ce-7080-4992-b551-353df6501f1c" BLOCK_SIZE="512" TYPE="xfs"
     /dev/md5: UUID="c0ee8f00-ac85-4429-b5d8-88d34c051dff" BLOCK_SIZE="512" TYPE="xfs"
     /dev/md3: UUID="a5bd460c-eb00-4f3d-a950-fdf48a2b01df" BLOCK_SIZE="512" TYPE="xfs"
     /dev/md1: UUID="8be95965-635e-4f81-a519-15adc525b5e2" BLOCK_SIZE="512" TYPE="xfs"
     /dev/sdd1: PARTUUID="234fbfeb-63f9-4ce3-a966-de04d57cf69c"
     /dev/loop2: UUID="512b0675-05b0-46e2-972f-c7ce0fa5c066" UUID_SUB="37250b6a-f4df-437c-b20a-a2435317c12c" BLOCK_SIZE="4096" TYPE="btrfs"
     /dev/sdc1: PARTUUID="33a77cd4-f4f3-48bd-abca-c8aed03d1d14"
     /dev/loop3: UUID="3c98e999-4985-49ef-b99d-7b46dcdddf9e" UUID_SUB="0881c0f6-f8b3-4003-a57c-0b35d1e4d75f" BLOCK_SIZE="4096" TYPE="btrfs"
     /dev/sdh1: PARTUUID="edab0d4b-f755-4f4f-b896-ea4654b155af"
  2. Should I format the drive? Is there a way to see if I'll lose data if I do so? I'm guessing that if that drive never made it to the array, it contained no data.
  3. Alright, it has been running for more than 12 hours now and it still hasn't found anything; it now says it is exiting. One thing I should mention is that I can't be 100% sure that disk was ever truly part of the array. What happened was: I installed the disk into the array (the array had 7 disks before, and I added this one as the 8th), it started to initialize it, and I let it run on its own and went about my day. I'm not sure I ever saw that disk online, initialized, and part of the array; I only remember it saying "Unmountable: Unsupported or no file system", so the failure might have occurred while the disk was being initialized. I've posted a new diagnostic: unraid-diagnostics-20230416-0752.zip
  4. The command I tried was "xfs_repair /dev/md8". Setting the FS did make the section appear! But so far it's doing the same thing:
     Phase 1 - find and verify superblock...
     bad primary superblock - bad magic number !!!
     attempting to find secondary superblock...
     It's only been running for a few minutes this time though; should I let it run for a few hours, or shouldn't it take that long? (A read-only check is sketched after this list.)
  5. I attempted to, but I do not have the Check Filesystem Status section for that disk. I have it for every other disk but not this one. Running the command gives me this error:
     Phase 1 - find and verify superblock...
     bad primary superblock - bad magic number !!!
     attempting to find secondary superblock...
     and then it runs forever without ever finding anything.
  6. I had a power failure and one of my disks now says Unmountable: Unsupported or no file system. I've tried replacing the drive and it rebuilds in the same state. I'm unsure how to proceed. I've attached a log: unraid-diagnostics-20230415-1324.zip
  7. I understand, I think I will buy another case then. If another drive fails while rebuilding after that, can I force Unraid to trust the drive that is mostly good but has a few read errors, as I explained above, or is there absolutely no other way and I will lose that drive?
  8. Hi! Thanks for the answer.
     "Did you change out SATA power splitters and data cables? Is it always the same drives? Are these drives in the same slots?"
     Yes, I changed the power supply, SATA cables, SAS controller, and drives, and it's not always the same slot (although recently it's been happening more with disk 5).
     "I had a look at the SMART folder and noticed that you only had one disk disabled (disk 5) at the time when you captured the Diagnostics. Did a second disk fail after you did that? (There is another disk (sdj) that appears to be unassigned. Is that the missing parity 1 disk?)"
     I'm not sure why the logs don't show it, but here is what happened: yesterday I upgraded one of my drives from 2TB to 4TB. While upgrading, disk 5 failed (write error). I let Unraid finish the data rebuild on the upgraded disk, and then today I started my usual procedure for when one of my drives fails due to a write error (stop the array, assign the failed disk to "none", start the array, stop the array, assign the now-empty slot to the old drive, and restart the array; Unraid then rebuilds the drive using the parity). Except this time disk 5 (the failed drive) re-failed almost immediately while rebuilding itself. A few minutes later one of my parity drives also showed the red X, so I immediately took a diagnostic and stopped the system before a third drive failed.
     "What is the wattage of that unit and does it have a single 12V rail?"
     I believe it's a 650W hot-swappable redundant server power supply. I'm not sure if it's a single 12V rail or not. It's an old Chenbro RM31408; I couldn't find anyone else having this issue.
  9. I've been plagued with drive failures for a while now. The only component I haven't changed yet is my case (it uses a SATA backplane), but before pulling the trigger on an expensive rackmount case I'd like to get some input and maybe some pointers. It's not my first time trying my luck on the Unraid forums with this issue, but unfortunately I never got clear answers on the following points. So here it goes: right now the server is shut off because 2 drives are disabled (I have 2 parity drives) and I don't trust the system to rebuild a drive without another one going bad. If worse comes to worst and another drive fails because of write errors, can I force Unraid to trust it? I understand write errors occurred, so some files are incomplete/corrupted, but most of the files on the drive are still good, no? I felt like I was pretty secure with 2 parity drives, but it sounds like if your SAS controller or your backplane goes bad and causes write errors, you can fail every single drive in your system in a few minutes/seconds? Thank you for reading this. I am slowly going insane. unraid-diagnostics-20221125-2105.zip
  10. Ok, so I rebooted and it's now saying the drive is normal and the dot is green. Maybe it was a GUI issue only? About the random write errors: do you have any idea what can cause this other than the motherboard, RAM, CPU, PSU, cables, disks, or SAS card? I'm starting to suspect my server backplane might be faulty. Thank you. unraid-diagnostics-20221125-0730.zip
  11. I've been plagued with random disk write errors for a while now (I've tried changing everything from the SAS controller, RAM, CPU, motherboard, PSU, SATA cables, power cables, etc., but nothing will fix it; it seems to occur whenever I'm writing heavily to a particular drive), and now it happened while I was rebuilding a disk after a disk upgrade. I don't know if the data rebuild failed when the drive failure occurred, since it happened during the night, but it's been stuck at 58.9% for about 12 hours now. I'm wondering what the proper procedure is, since I now technically have 2 failed drives and can't afford to mess anything up. PS: if the unimaginable happens and another drive fails while I'm rebuilding, is it possible to put the old one back (even though it's 2TB instead of the new 4TB) and make Unraid "trust" it and not lose the files that are on it? Thank you. unraid-diagnostics-20221124-1947.zip
  12. Sorry to revive such an old thread, but I was never able to start the VM back up. Even if I uncheck every device on the "edit" screen for that VM, it crashes my whole server every time. Is there any other way to force-remove the controller from the VM while still passing through my capture card, ideally without deleting and re-installing the VM? (A virsh sketch is included after this list.) Here is the XML for that VM:
     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm'>
       <name>Capture Card</name>
       <uuid>17e53729-4f3b-ecb0-642e-50cb56087e3c</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>4194304</memory>
       <currentMemory unit='KiB'>4194304</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>7</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='0'/>
         <vcpupin vcpu='1' cpuset='1'/>
         <vcpupin vcpu='2' cpuset='7'/>
         <vcpupin vcpu='3' cpuset='3'/>
         <vcpupin vcpu='4' cpuset='9'/>
         <vcpupin vcpu='5' cpuset='5'/>
         <vcpupin vcpu='6' cpuset='11'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-i440fx-5.1'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/17e53729-4f3b-ecb0-642e-50cb56087e3c_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv mode='custom'>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='7' threads='1'/>
         <cache mode='passthrough'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/CaptureCard/vdisk1.img'/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/Softwares/ISO/Microsoft Windows 10 Home and Pro x64 Clean ISO/en_windows_10_multiple_editions_x64_dvd_6846432.iso'/>
           <target dev='hda' bus='ide'/>
           <readonly/>
           <boot order='2'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/Softwares/ISO/virtio-win-0.1.141-1.iso'/>
           <target dev='hdb' bus='ide'/>
           <readonly/>
           <address type='drive' controller='0' bus='0' target='0' unit='1'/>
         </disk>
         <controller type='pci' index='0' model='pci-root'/>
         <controller type='ide' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </controller>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:a6:d2:e6'/>
           <source bridge='br0'/>
           <model type='virtio-net'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </interface>
         <serial type='pty'>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
           <listen type='address' address='0.0.0.0'/>
         </graphics>
         <audio id='1' type='none'/>
         <video>
           <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </video>
         <memballoon model='virtio'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
         </memballoon>
       </devices>
     </domain>
     And here is the device I'm trying to pass through (see attachment). Thanks!
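
Note on item 1: a minimal sketch, assuming a standard shell on the Unraid host. In blkid output, any line without a TYPE= field has no recognizable filesystem signature, which is how the never-formatted partitions (sdc1, sdd1, sdh1 above) stand out.

    # Hedged sketch: print partitions whose blkid line has no TYPE= field,
    # i.e. partitions on which no filesystem signature was found.
    blkid | awk -F: '!/TYPE=/ {print $1}'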
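Note on items 4 and 5: a minimal sketch, assuming the array is started in maintenance mode so the Unraid array device /dev/md8 exists (repairing /dev/mdX rather than the raw disk keeps parity in sync). The -n flag is a read-only check; nothing is written.

    # Read-only check first; reports problems without modifying the disk.
    xfs_repair -n /dev/md8
    # When the primary superblock is bad, xfs_repair scans the whole partition for a
    # secondary superblock; on a multi-TB disk that can take hours, and if it never
    # finds one the partition most likely never held an XFS filesystem at all.
    xfs_repair /dev/md8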
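Note on item 12: a hedged sketch using libvirt's virsh from the Unraid console, assuming the problem controller appears as a <hostdev> (or <controller>) entry in the VM definition. "Capture Card" is the domain name from the XML above; which entry to delete depends on which PCI address belongs to the controller versus the capture card.

    # Dump the current definition and look at any passed-through devices.
    virsh dumpxml "Capture Card" | grep -A 6 '<hostdev'
    # Open the definition in an editor: remove the offending entry, keep the
    # capture card's, save, then try starting the VM again.
    virsh edit "Capture Card"
    virsh start "Capture Card"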