unRAID Server Release 6.2.0-beta21 Available



I had the same issues on every beta I've tried for 6.2 :P

 

I'm now steering clear of the betas for a while haha ;)

 

Just scans for LABEL=UNRAID and then fails to find it somehow

 

Bus 003 Device 003: ID 0781:5571 SanDisk Corp. Cruzer Fit

 

Could you try another Flash Drive and see if you have the same problem?  Just to see if it boots.  With two of you having virtually the same problem, I can't believe it is cockpit error! 


For those having problems with the flash drive: I had the same issue with one of my servers. It booted fine with v6.1 but not with v6.2, and reformatting the flash drive didn't help, but this did, if you have a Windows PC.

 

Back up your flash drive, then open a command prompt window as administrator and type the following, in this order:

 

diskpart
list disk
select disk x    (where x is your flash drive's number)
clean
create partition primary
format fs=fat32 label=UNRAID quick
active
assign
exit

 

Then close the cmd window, copy the unRAID files back onto the flash drive, and run make_bootable as administrator.

 

It should work now.
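If you want to double-check the result before rebooting the server, the label unRAID scans for at boot can be verified from any Linux machine (a quick sketch; /dev/sdX1 is a placeholder for your flash partition):

blkid /dev/sdX1    # the output should include LABEL="UNRAID" and TYPE="vfat"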

 


I may have discovered why my Windows 10 Pro fresh install on 6.2.0-beta21 was crashing.

 

Hyper-V is being turned on by default in the Windows 10 template. I think that may be the source of the problem.

 

I noticed that when I downgraded to 6.1.9 and did a fresh install, Hyper-V was turned off by default in the Windows 8 template and everything worked fine; when I turned it on, the VM crashed while booting.

 

I'm passing a GTX 970 through to the guest VM. Any thoughts on Hyper-V causing these VM crashes, which also take down the whole array?
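For anyone who wants to test this theory on an existing VM rather than recreating it from the template, the Hyper-V enlightenments can also be toggled by hand in the domain XML (a sketch only; the VM name is an example, and the elements shown are standard libvirt syntax):

virsh edit Windows10

  <features>
    <hyperv>
      <!-- set these to 'off' (or remove the whole <hyperv> block) to boot without the enlightenments -->
      <relaxed state='off'/>
      <vapic state='off'/>
      <spinlocks state='off'/>
    </hyperv>
  </features>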


I had the same issues on every beta I've tried for 6.2 :P

 

I'm now steering clear of the betas for a while haha ;)

 

Just scans for LABEL=UNRAID and then fails to find it somehow

 

Bus 003 Device 003: ID 0781:5571 SanDisk Corp. Cruzer Fit

Could you try another Flash Drive and see if you have the same problem?  Just to see if it boots.  With two of you having virtually the same problem, I can't believe it is cockpit error!

 

The issue is the cruzer fit.  I have the same thing.

 

Here's how to manually test it.

 

When it scans for the UNRAID label, remove and reinsert the USB key. You'll see it picked up right away, and it'll boot normally.

 

That's obviously not a longer-term solution, but it will allow you to boot manually.

 

 


Is this safe to use on a main server?

 

 

Sent from my iPhone using Tapatalk

 

No. It's a beta, not even an RC. The disclaimers say not to use it on anything but test servers.

 

In addition to this, the 6.2 betas REQUIRE an Internet connection before unRAID will start the array. No internet, no array.


Is this safe to use on a main server?

 

 

Sent from my iPhone using Tapatalk

 

No. It's a beta, not even an RC. The disclaimers say not to use it on anything but test servers.

 

In addition to this, the 6.2 betas REQUIRE an Internet connection before unRAID will start the array. No internet, no array.

 

I find that if you wait two or three public betas, most critical issues will have been identified and resolved, and anything left is a nuisance.

 

Please note the emphasis on "most".


Is this safe to use on a main server?

 

 

Sent from my iPhone using Tapatalk

 

No. It's a beta, not even an RC. The disclaimers say not to use it on anything but test servers.

 

In addition to this, the 6.2 betas REQUIRE an Internet connection before unRAID will start the array. No internet, no array.

 

I find that if you wait two or three public betas, most critical issues will have been identified and resolved, and anything left is a nuisance.

 

Please note the emphasis on "most".

 

What you really have to do is wait at least a few days and read the posts to see what the issues are with each beta release. (There will always be issues with betas!) If you see that the issues, as an example, mostly involve VMs and you don't use VMs, you would be safer installing that beta than if a VM was an absolute necessity and had to work flawlessly.

 

The second thing you have to evaluate is your tolerance for risk. If you expect your install to work without any issues, then you should avoid using a beta. In fact, you should probably wait a week or two after the release of 6.2.0 Final before upgrading. Conversely, if you love solving problems, finding solutions to issues, and generally playing around with software to see what makes it tick, you are a prime candidate to become a beta tester.


To daigo and anyone else that has had system stability issues relating to VMs, array vdisks, etc.

 

I just wanted to touch base on this issue because we have been trying to recreate this on our systems here and have been unsuccessful. There are multiple people reporting issues like this, but it's definitely not affecting everyone (nor would I say even the majority of users). I've tried copying large files to and from both array- and cache-based vdisks. I've tried bulk copies to and from SMB shares. I've tried mounting ISOs inside Linux VMs and bulk-copying data from them to the VM's vdisk. No matter what, the systems here remain solid and stable, with no crashes or log events of any kind.

 

What this means is that we are still investigating and we are continuing to patch QEMU and the kernel so we can see if this issue is better addressed in a future beta release.

 

I wish I had more to say on this issue but for now, until we can recreate it, it's going to be an ongoing research problem.
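For anyone trying to help narrow this down, the kind of bulk-copy test described boils down to something like this from the unRAID console (paths and sizes are examples; watch the syslog in a second session while the copy runs):

dd if=/dev/urandom of=/mnt/cache/testfile bs=1M count=10240    # create a 10 GB test file on the cache
cp /mnt/cache/testfile /mnt/user/SomeShare/                    # copy it to an array-backed share
tail -f /var/log/syslog                                        # in another shell: watch for errors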


I think I am going to downgrade to 6.1.9, but I wanted to post the issue that I am having. I saw some reference to it in the 6.2 beta 18 or 19 thread. I copied over about 6 TB of files and now I have a tremendous slowdown, basically unusable, when trying to connect to the Samba shares with Windows 10 and Windows Server 2008. It just takes forever for the green bar to finish. The content is quite a number of folders (MP3s). I did one reboot, but that didn't help. I am posting diagnostics in case they are of any use.

The beta looks amazing with the new features - keep up the good work!

 

pipe-diagnostics-20160415-1409.zip


I think I am going to downgrade to 6.1.9, but I wanted to post the issue that I am having. I saw some reference to it in the 6.2 beta 18 or 19 thread. I copied over about 6 TB of files and now I have a tremendous slowdown, basically unusable, when trying to connect to the Samba shares with Windows 10 and Windows Server 2008. It just takes forever for the green bar to finish. The content is quite a number of folders (MP3s). I did one reboot, but that didn't help. I am posting diagnostics in case they are of any use.

The beta looks amazing with the new features - keep up the good work!

Hard to diagnose, because there are no details in your post.


I am having VM-specific issues. Basically, the VM will hang the entire system, including Docker containers and the webGUI. I can still SSH into the system. It appears to happen most often when I run something on the VM in administrator mode (I cannot reproduce it every time).

 

VM LOG

LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none /usr/local/sbin/qemu -name SONARR -S -machine pc-i440fx-2.5,accel=kvm,usb=off,mem-merge=off -cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=none -drive file=/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd,if=pflash,format=raw,unit=0,readonly=on -drive file=/etc/libvirt/qemu/nvram/541f5dce-aa11-e639-d52a-0e1529ce4afc_VARS-pure-efi.fd,if=pflash,format=raw,unit=1 -m 1024 -realtime mlock=off -smp 2,sockets=1,cores=2,threads=1 -uuid 541f5dce-aa11-e639-d52a-0e1529ce4afc -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-SONARR/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-hpet -no-shutdown -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x7.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x7 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,buget/domain-SONARR/org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0 -vnc 0.0.0.0:0,websocket=5700 -k en-us -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vgamem_mb=16,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on
Domain id=1 is tainted: high-privileges
Domain id=1 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 0]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 1]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 2]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 3]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 4]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 5]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 6]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 7]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 8]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 9]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 12]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 13]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 14]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 15]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 16]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 17]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 23]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 24]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 0]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 1]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 2]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 3]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 4]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 5]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 6]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 7]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 8]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 9]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 12]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 13]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 14]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 15]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 16]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 17]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 23]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 24]

 

Server log

Apr 15 19:32:46 Icarus emhttp: nothing to sync
Apr 15 19:32:46 Icarus kernel: Ebtables v2.0 registered
Apr 15 19:32:47 Icarus kernel: device virbr0-nic entered promiscuous mode
Apr 15 19:32:47 Icarus avahi-daemon[2559]: Joining mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1.
Apr 15 19:32:47 Icarus avahi-daemon[2559]: New relevant interface virbr0.IPv4 for mDNS.
Apr 15 19:32:47 Icarus kernel: virbr0: port 1(virbr0-nic) entered listening state
Apr 15 19:32:47 Icarus kernel: virbr0: port 1(virbr0-nic) entered listening state
Apr 15 19:32:47 Icarus avahi-daemon[2559]: Registering new address record for 192.168.122.1 on virbr0.IPv4.
Apr 15 19:32:47 Icarus dnsmasq[3366]: started, version 2.75 cachesize 150
Apr 15 19:32:47 Icarus dnsmasq[3366]: compile time options: IPv6 GNU-getopt no-DBus i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
Apr 15 19:32:47 Icarus dnsmasq-dhcp[3366]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
Apr 15 19:32:47 Icarus dnsmasq-dhcp[3366]: DHCP, sockets bound exclusively to interface virbr0
Apr 15 19:32:47 Icarus dnsmasq[3366]: reading /etc/resolv.conf
Apr 15 19:32:47 Icarus dnsmasq[3366]: using nameserver 192.168.2.1#53
Apr 15 19:32:47 Icarus dnsmasq[3366]: using nameserver 8.8.8.8#53
Apr 15 19:32:47 Icarus dnsmasq[3366]: using nameserver 8.8.4.4#53
Apr 15 19:32:47 Icarus dnsmasq[3366]: read /etc/hosts - 2 addresses
Apr 15 19:32:47 Icarus dnsmasq[3366]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Apr 15 19:32:47 Icarus dnsmasq-dhcp[3366]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Apr 15 19:32:47 Icarus kernel: virbr0: port 1(virbr0-nic) entered disabled state
Apr 15 19:32:47 Icarus sshd[3134]: Accepted none for root from 192.168.2.237 port 52784 ssh2
Apr 15 19:33:18 Icarus kernel: device vethb30baa4 entered promiscuous mode
Apr 15 19:33:19 Icarus kernel: eth0: renamed from veth86924de
Apr 15 19:33:19 Icarus kernel: docker0: port 1(vethb30baa4) entered forwarding state
Apr 15 19:33:19 Icarus kernel: docker0: port 1(vethb30baa4) entered forwarding state
Apr 15 19:33:20 Icarus ntpd[1821]: Listen normally on 3 docker0 172.17.0.1:123
Apr 15 19:33:20 Icarus ntpd[1821]: new interface(s) found: waking up resolver
Apr 15 19:33:34 Icarus kernel: docker0: port 1(vethb30baa4) entered forwarding state
Apr 15 19:43:51 Icarus kernel: device vnet0 entered promiscuous mode
Apr 15 19:43:51 Icarus kernel: br0: port 3(vnet0) entered listening state
Apr 15 19:43:51 Icarus kernel: br0: port 3(vnet0) entered listening state
Apr 15 19:43:56 Icarus kernel: kvm: zapping shadow pages for mmio generation wraparound
Apr 15 19:43:56 Icarus kernel: kvm: zapping shadow pages for mmio generation wraparound
Apr 15 19:44:06 Icarus kernel: br0: port 3(vnet0) entered learning state
Apr 15 19:44:21 Icarus kernel: br0: topology change detected, propagating
Apr 15 19:44:21 Icarus kernel: br0: port 3(vnet0) entered forwarding state

 

When I try to kill the VM in the webGUI, it will not load and it hangs the webGUI. When I try to kill the VM from the CLI:

root@Icarus:~# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # list
Id    Name                           State
----------------------------------------------------
1     SONARR                         running

virsh # destroy 1
error: Failed to destroy domain 1
error: Failed to terminate process 7775 with SIGKILL: Device or resource busy

virsh #

 

HTOP Output

http://i.imgur.com/XyOSfUA.png

 

 

Hopefully this will help get the bugs worked out. If I go slow and am extra careful not to stress the VM, it appears to be fine.
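When SIGKILL fails with "Device or resource busy" like that, the qemu process is usually stuck in uninterruptible sleep waiting on I/O; a quick way to confirm from the console (7775 is the PID from the virsh error above):

ps -o pid,stat,wchan:30,cmd -p 7775    # a STAT of "D" means uninterruptible sleep; WCHAN shows where it is blocked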

 


 

 

To daigo and anyone else that has had system stability issues relating to VMs, array vdisks, etc.

 

I just wanted to touch base on this issue because we have been trying to recreate this on our systems here and have been unsuccessful. There are multiple people reporting issues like this, but it's definitely not affecting everyone (nor would I say even the majority of users). I've tried copying large files to and from both array- and cache-based vdisks. I've tried bulk copies to and from SMB shares. I've tried mounting ISOs inside Linux VMs and bulk-copying data from them to the VM's vdisk. No matter what, the systems here remain solid and stable, with no crashes or log events of any kind.

 

What this means is that we are still investigating and we are continuing to patch QEMU and the kernel so we can see if this issue is better addressed in a future beta release.

 

I wish I had more to say on this issue but for now, until we can recreate it, it's going to be an ongoing research problem.

 

Not directly related to VMs, but I used to have an issue where copying to my cache drive (an SSD) would start off fast, then slow down, eventually hang, and then fail. I initially thought it only happened when writing from Windows-based PCs and VMs, but after further testing I found it was happening with any sort of device (e.g. USB-attached HDD, wired gigabit connection, etc.). Writing directly to the array without a cache drive was fine.

 

This issue didn't really manifest unless I copied larger files, e.g. larger than 1 or 2 GB.

 

What I discovered was that after enabling TRIM as a scheduled daily task, the issue went away, and it has been fine since.

 

Anyway, I'm wondering if this could be a similar issue for VM stability: once some SSDs have had data written across the whole drive, performance can take a real nosedive or cause hangs while the controller finds and clears space to write to, because these drives don't have automatic garbage collection and need TRIM run regularly to maintain performance. I imagine this would be more prevalent where people tend to fill their SSD to (or near) capacity on a regular basis.
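For reference, the scheduled TRIM boils down to something like the following (a sketch; on unRAID the Dynamix SSD TRIM plugin sets this up for you, and /mnt/cache is assumed to be the SSD's mount point):

#!/bin/sh
# e.g. saved as /etc/cron.daily/fstrim -- discard unused blocks on the cache SSD once a day
fstrim -v /mnt/cache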

 

Sent from my LG-D802T using Tapatalk

 

 


I think I am going to downgrade to 6.1.9, but I wanted to post the issue that I am having. I saw some reference to it in the 6.2 beta 18 or 19 thread. I copied over about 6 TB of files and now I have a tremendous slowdown, basically unusable, when trying to connect to the Samba shares with Windows 10 and Windows Server 2008. It just takes forever for the green bar to finish. The content is quite a number of folders (MP3s). I did one reboot, but that didn't help. I am posting diagnostics in case they are of any use.

The beta looks amazing with the new features - keep up the good work!

Hard to diagnose, because there are no details in your post.

 

* He has the issue of numerous smbd threads being started. I don't recall the issues and workarounds for that problem.

 

* You need to check for a BIOS upgrade, due to issues (and age: 2008). The syslog indicates, among other issues, an early spurious IRQ 19, which caused that IRQ to be disabled early on. However, a different kernel module did not know that and assigned IRQ 19 anyway, to your onboard SATA ports, causing them to fail. This was fatal for your array. If there is no BIOS upgrade, or it doesn't fix the issue, you may need a new motherboard. (There's a quick syslog check after this list.)

 

* On the next boot, go into the BIOS settings and change the SATA mode to AHCI. It's currently set to an IDE-emulating mode.
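As mentioned above, a quick way to check whether the IRQ 19 problem is biting on a given boot is to search the syslog for the kernel's complaints (the exact message text varies a little between kernels):

grep -iE "irq 19|nobody cared|Disabling IRQ" /var/log/syslog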


My array had behaved itself for a number of days running 6.2.0-beta21, and I was able to copy files to it OK.

 

However, this afternoon, whilst copying some files to a mapped drive, access to the drive suddenly disappeared and TeraCopy / Windows Explorer both hung.

 

Please find the diagnostics file attached.

 

I fully appreciate that LimeTech are looking into this issue, as I understand that other people are having similar problems too.

 

I am seriously considering stepping back to 6.1.9, but I would lose access to the second parity drive which I have just installed.

tower-diagnostics-20160416-1619.zip


Here is another screenshot, taken after the array hung and I tried to stop it.

 

The GUI reports that the drives are being spun down; however, the GUI is not refreshed once the array has been stopped, with the result that I have to reboot the server manually.

Unraid1.jpg


Minor little dockerMan v2 bug

 

It seems to me that if the template is not a v2 template (no Version="2" attribute), then dockerMan should ignore any v2 sections if they happen to be present.

 

Consider this template:

<?xml version="1.0" encoding="UTF-8"?>
<Container>
  <Beta>true</Beta>
  <Category>HomeAutomation: Status:Beta</Category>
  <Name>home-assistant-dev</Name>
  <Overview>Home Assistant is a home automation platform written in Python 3. Track and control all devices at home and automate control.</Overview>
  <license>MIT</license>
  <Project>https://home-assistant.io</Project>
  <Support>https://home-assistant.io/help/</Support>
  <Description>
    [b]This is the development channel of Home Assistant - use with caution.[/b][br]
    Home Assistant is a home automation platform written in Python 3.[br][br]
    Home Assistant will run its dashboard on port 8123. It will run in demo mode if no configuration is found.[br][br]
    [b][span style='color: #E80000;']Directions:[/span][/b][br]
    [b]/config[/b] : in this path, Home Assistant will store it's configuration files.
  </Description>
  <Registry>https://registry.hub.docker.com/u/homeassistant/home-assistant-dev/</Registry>
  <GitHub>https://github.com/balloob/docker-home-assistant</GitHub>
  <Repository>homeassistant/home-assistant-dev</Repository>
  <BindTime>true</BindTime>
  <Privileged>false</Privileged>
  <Environment/>
  <Networking>
    <Mode>host</Mode>
    <Publish></Publish>
  </Networking>
  <Data>
    <Volume>
      <HostDir>/mnt/cache/app_config/home-assistant/</HostDir>
      <ContainerDir>/config</ContainerDir>
      <Mode>rw</Mode>
    </Volume>
  </Data>
  <WebUI>http://[iP]:[PORT:8123]</WebUI>
  <Banner>https://raw.githubusercontent.com/balloob/unraid-docker-templates/master/balloob/home-assistant-banner.png</Banner>
  <Icon>https://raw.githubusercontent.com/balloob/unraid-docker-templates/master/balloob/home-assistant-icon.png</Icon>
  <Config Name="Config Directory" Target="/config" Default="/mnt/cache/app_config/home-assistant" Mode="rw" Description="This path is where Home Assistant will store it's configuration files." Type="Path" Display="always" Required="true" Mask="false"/>
  <Forum>http://lime-technology.com/forum/index.php?topic=36535.msg339480.0</Forum>
  <Repo>Balloob's Repository</Repo>
  <Base>unknown</Base>
  <TemplatePath>http://tools.linuxserver.io/xml/5ff4bf5510dde833534951824f5c48b8a7d07c14.xml</TemplatePath>
  <Updated>1460786401</Updated>
  <sha>43a66bbed21e0301a80c5d26f8ac385eaaa43a36</sha>
  <downloads>7</downloads>
  <stars>0</stars>
  <Compatible>true</Compatible>
  <MinVer>6.0</MinVer>
  <MaxVer></MaxVer>
</Container>

This template is valid (v1) XML with an extra <Config> section. A v1 template should only populate dockerMan with the entries that are valid for v1 (i.e. no <Config>). However, this particular one (admittedly, the author of the template made a mistake and didn't copy over the Version="2" attribute) causes some strange things in dockerMan: notably, the command passed through to docker run will contain two /config path mappings, and this appears on the screen:

http://i46.photobucket.com/albums/f109/squidaz/Untitled_zpsz0tl556c.png

 

If this happened once, it will happen again, and dockerMan should be intelligent enough to parse only the sections that are relevant to the version of the template.
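The gate being asked for amounts to checking the template's Version attribute before honouring any v2 sections. A minimal sketch of that check using xmllint from the shell (illustrative only, not dockerMan's actual code):

xmllint --xpath 'string(/Container/@Version)' template.xml    # prints "2" for a v2 template, empty for v1 -- in the v1 case, ignore <Config>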


Another issue I have just found: after rebooting following the initial hanging issue this afternoon, I tried moving files in smaller amounts.

I copied the first batch of files, TeraCopy confirmed that they had copied OK, and I manually checked that they had moved to the array.

I then tried to move the remainder of the files, and the array hung once more.

On rebooting, I found that the first batch is now not shown?

Just to confirm, the share which I am copying to is not using a cache drive, so I can't see it being a cache drive problem.

PS: Now, just to confuse matters, I have made Windows Explorer the default for moving files instead of TeraCopy.

Touch wood and cross fingers: so far I have been able to move files without the array crashing.


Another issue I have just found: after rebooting following the initial hanging issue this afternoon, I tried moving files in smaller amounts.

I copied the first batch of files, TeraCopy confirmed that they had copied OK, and I manually checked that they had moved to the array.

I then tried to move the remainder of the files, and the array hung once more.

On rebooting, I found that the first batch is now not shown?

Just to confirm, the share which I am copying to is not using a cache drive, so I can't see it being a cache drive problem.

PS: Now, just to confuse matters, I have made Windows Explorer the default for moving files instead of TeraCopy.

Touch wood and cross fingers: so far I have been able to move files without the array crashing.

And if you try moving some test files to the cache, will they transfer?

Another issue I have just found: after rebooting following the initial hanging issue this afternoon, I tried moving files in smaller amounts.

I copied the first batch of files, TeraCopy confirmed that they had copied OK, and I manually checked that they had moved to the array.

I then tried to move the remainder of the files, and the array hung once more.

On rebooting, I found that the first batch is now not shown?

Just to confirm, the share which I am copying to is not using a cache drive, so I can't see it being a cache drive problem.

PS: Now, just to confuse matters, I have made Windows Explorer the default for moving files instead of TeraCopy.

Touch wood and cross fingers: so far I have been able to move files without the array crashing.

 

I have had problems in the past with TeraCopy copying files. It was usually related to files which required administrative privileges, and TeraCopy just failed to finish in some manner (I don't remember the circumstances). The Windows Explorer copy would display a popup which had a 'button' to use administrator privileges.


I have just copied an 8GB file to the share that uses the cache drive

 

The file moves from the cache drive to the array share drive correctly, and the array is running normally

 

I have used TeraCopy for years with no problems, and I have seen the array crash using TeraCopy on two different PCs, so it is not a problem with a single copy misbehaving.

 

To be honest, I don't know if the problem with TeraCopy is a red herring, as I see other people having problems moving files too.


I just wanted to touch base on this issue because we have been trying to recreate this on our systems here and have been unsuccessful. There are multiple people reporting issues like this, but it's definitely not affecting everyone (nor would I say even the majority of users). I've tried copying large files to and from both array- and cache-based vdisks. I've tried bulk copies to and from SMB shares. I've tried mounting ISOs inside Linux VMs and bulk-copying data from them to the VM's vdisk. No matter what, the systems here remain solid and stable, with no crashes or log events of any kind.

 

What this means is that we are still investigating and we are continuing to patch QEMU and the kernel so we can see if this issue is better addressed in a future beta release.

 

I wish I had more to say on this issue but for now, until we can recreate it, it's going to be an ongoing research problem.

At least for me, it's not a big deal, because I know how to recreate it and can avoid it. Others are not so lucky, it seems.

If I couldn't run a VM on the array anymore, I would have no problem with it.

I am testing so much stuff and trying to give as much feedback as possible, because it's part of the fun.

 

The thing is, even on my system I can't see any unusual log events at all: not before, not during, and not after the crash occurs.

I tried to increase the log_level of libvirt (to 1), but the settings Eric pointed me to didn't add any messages (not even one...), neither to the syslog nor to libvirtd.log.

I would like to configure logging to be more verbose for libvirt, Samba, and maybe some kernel events (if you think it may be related).
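For reference, the knobs in question live in /etc/libvirt/libvirtd.conf; a sketch of a more verbose setup (the filter list is only an example, and libvirtd has to be restarted to pick the changes up):

# /etc/libvirt/libvirtd.conf
log_level = 1
log_filters = "1:qemu 1:libvirt"
log_outputs = "1:syslog:libvirtd 1:file:/var/log/libvirt/libvirtd.log"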

 

Anyway, I think you are doing a great job; using bleeding-edge technology always comes with a price.

And if you try to please every single unRAID user (which is how it seems to me), it's not going to be easy ;)

I started with version 4.x and never had any issues that weren't taken care of or were my own fault.

 

If I find anything on my end, I'll post it here, and of course after every new beta I'll see if anything changes.

Until then, enjoy your weekend :)

 

What I discovered was that after enabling TRIM as a scheduled daily task, the issue went away, and it has been fine since.

I'll add that to the list of things to test.

For the last few weeks at least, I have only used my old SSD to test, because I don't have to wait as long for a VM to boot.

I guess trimming or even secure erasing might be an option, or I'll just see what happens when I use a normal HDD instead of an SSD.

 

 


My array had behaved itself for a number of days running 6.2.0-beta21, and I was able to copy files to it OK.

 

However, this afternoon, whilst copying some files to a mapped drive, access to the drive suddenly disappeared and TeraCopy / Windows Explorer both hung.

 

The syslog shows a fine system, no apparent issues at all. I have to wonder, therefore, if the issue you had isn't on the Windows side. If it happens again, you may want to try a third machine and see if the unRAID server appears correctly, just to determine whether your problem is on the unRAID or the Windows side. I *can* say that your user share system is using the lion's share of the CPU by far, but I don't know the significance of that.
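If you want to check that yourself the next time it happens, the busiest processes are easy to list from the console; shfs is the user share filesystem, so seeing it at the top points at the user share layer:

ps aux --sort=-%cpu | head -n 10    # ten busiest processes; look for shfs and smbd near the top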


Not directly related to VMs, but I used to have an issue where copying to my cache drive (an SSD) would start off fast, then slow down, eventually hang, and then fail. I initially thought it only happened when writing from Windows-based PCs and VMs, but after further testing I found it was happening with any sort of device (e.g. USB-attached HDD, wired gigabit connection, etc.). Writing directly to the array without a cache drive was fine.

This issue didn't really manifest unless I copied larger files, e.g. larger than 1 or 2 GB.

What I discovered was that after enabling TRIM as a scheduled daily task, the issue went away, and it has been fine since.

Anyway, I'm wondering if this could be a similar issue for VM stability: once some SSDs have had data written across the whole drive, performance can take a real nosedive or cause hangs while the controller finds and clears space to write to, because these drives don't have automatic garbage collection and need TRIM run regularly to maintain performance. I imagine this would be more prevalent where people tend to fill their SSD to (or near) capacity on a regular basis.

 

I tried to trim the SSD I moved to the array. From what I read, it should be "fstrim [options] <mountpoint>".

 

So "fstrim /mnt/disk5" in my case. I got an error, that said something like "discard not supported".

Tried the same command on "/mnt/cache" and it worked.

 

It seems that TRIM on XFS filesystems (which all of my disks are) only works if the disk is mounted with the "-o discard" parameter (source)

 

After stopping the array and mounting the SSD with that option, trim worked.

However, it did not help in any way with the issue. I moved the VM from the SSD in the array to another disk (an HDD), with the same result.
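For anyone repeating the experiment, the sequence described above amounts to the following (device and mount point are examples; this only applies when mounting the disk by hand with the array stopped):

mount -o discard /dev/sdX1 /mnt/disk5    # mount the XFS SSD with online discard enabled
fstrim -v /mnt/disk5                     # now the trim request is honoured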


Not directly related to VMs, but I used to have an issue where copying to my cache drive (an SSD) would start off fast, then slow down, eventually hang, and then fail. I initially thought it only happened when writing from Windows-based PCs and VMs, but after further testing I found it was happening with any sort of device (e.g. USB-attached HDD, wired gigabit connection, etc.). Writing directly to the array without a cache drive was fine.

This issue didn't really manifest unless I copied larger files, e.g. larger than 1 or 2 GB.

What I discovered was that after enabling TRIM as a scheduled daily task, the issue went away, and it has been fine since.

Anyway, I'm wondering if this could be a similar issue for VM stability: once some SSDs have had data written across the whole drive, performance can take a real nosedive or cause hangs while the controller finds and clears space to write to, because these drives don't have automatic garbage collection and need TRIM run regularly to maintain performance. I imagine this would be more prevalent where people tend to fill their SSD to (or near) capacity on a regular basis.

 

I tried to trim the SSD I moved to the array. From what I read, it should be "fstrim [options] <mountpoint>".

So "fstrim /mnt/disk5" in my case. I got an error that said something like "discard not supported".

Tried the same command on "/mnt/cache" and it worked.

It seems that TRIM on XFS filesystems (which all of my disks are) only works if the disk is mounted with the "-o discard" parameter (source)

After stopping the array and mounting the SSD with that option, trim worked.

However, it did not help in any way with the issue. I moved the VM from the SSD in the array to another disk (an HDD), with the same result.

My SSD is formatted with btrfs, and I just used the Dynamix SSD TRIM plugin. This seems to have fixed my issue with file copies hanging and failing.

This topic is now closed to further replies.