9p sharing speed not what I expected...


johnodon


I thought I had 9p implemented correctly, but shouldn't I be seeing much faster speeds?  The screenshot below shows copying an MKV from /mnt/user/Movies (mounted as /storage/movies) to /tmp inside the XBMCbuntu VM.  My VMs are located on /mnt/cache/VMs.

 

[screenshot: file copy in progress, showing the transfer speed]
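
For reference, an equivalent way to time the same copy from a shell inside the guest would be something like this (the filename is just a placeholder for one of my MKVs; dd prints a MB/s figure when it finishes):

time dd if=/storage/movies/sample.mkv of=/tmp/sample.mkv bs=1M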

 

Here is the relevant section of my XML:

 

    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/mnt/user/Movies'/>
      <target dir='movies'/>
      <alias name='fs0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
    </filesystem>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/mnt/user/Music'/>
      <target dir='music'/>
      <alias name='fs1'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/>
    </filesystem>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/mnt/user/TV'/>
      <target dir='tvshows'/>
      <alias name='fs2'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x07' function='0x0'/>
    </filesystem>

 

And here is my fstab on the VM:

 

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/vda1 during installation
UUID=7fae00e5-f843-44ff-b9bd-c3910fc7b98f /               ext4    errors=remount-ro 0       1
# swap was on /dev/vda5 during installation
UUID=fc14b9fe-717a-4b18-8ae3-c6d8e3310571 none            swap    sw              0       0
movies  /storage/movies    9p  rw,dirsync,_netdev,relatime,trans=virtio,version=9p2000.L,posixacl,cache=loose 0 0
tvshows /storage/tvshows        9p rw,dirsync,_netdev,relatime,trans=virtio,version=9p2000.L,posixacl,cache=loose 0 0
music   /storage/music    9p  rw,dirsync,_netdev,relatime,trans=virtio,version=9p2000.L,posixacl,cache=loose 0 0

 

Am I doing something wrong?  Am I possibly missing a driver on the VM side?
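
If it is a driver issue, a rough way to check from inside the guest is to confirm the virtio 9p modules are present (module names as used by mainline kernels):

lsmod | grep 9p                     # 9p, 9pnet and 9pnet_virtio should show up once a share is mounted
sudo modprobe -a 9p 9pnet_virtio    # load them manually if they are missing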

 

Here is my full XML in case there is something configured incorrectly outside of the 9p stuff:

 

<domain type='kvm' id='45' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>HTPCLIVRM</name>
  <uuid>c43b7542-b40b-495f-90ab-aaa4eec68e8b</uuid>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-2.1'>hvm</type>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='1'/>
  </cpu>
  <clock offset='localtime'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none' io='native'/>
      <source file='/mnt/cache/VMs/HTPCLIVRM.qcow2'/>
      <backingStore/>
      <target dev='vdc' bus='virtio'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/Images/xbmcbuntu-13.0~gotham_amd64.iso'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb0'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb0'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb0'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x2'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='sata0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
    </controller>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/mnt/user/Movies'/>
      <target dir='movies'/>
      <alias name='fs0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
    </filesystem>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/mnt/user/Music'/>
      <target dir='music'/>
      <alias name='fs1'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/>
    </filesystem>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/mnt/user/TV'/>
      <target dir='tvshows'/>
      <alias name='fs2'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x07' function='0x0'/>
    </filesystem>
    <interface type='bridge'>
      <mac address='52:94:00:d0:c0:bb'/>
      <source bridge='br0'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
    </interface>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1784'/>
        <product id='0x0011'/>
        <address bus='8' device='9'/>
      </source>
      <alias name='hostdev0'/>
    </hostdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='none' model='none'/>
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=85:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=85:00.1,bus=pcie.0'/>
  </qemu:commandline>
</domain>

Link to comment

Have you tried changing some of your options in fstab? I'm not sure all of your options are valid or optimal. Have a look at this link for the available mount options: https://www.kernel.org/doc/Documentation/filesystems/9p.txt.
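
If you want to experiment without editing fstab and rebooting each time, you can also mount one of the tags by hand and vary the options; the mount point below is just an example:

sudo mkdir -p /mnt/9ptest
sudo mount -t 9p -o trans=virtio,version=9p2000.L,rw movies /mnt/9ptest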

 

This is what I have in my fstab, and my speed in a test I just ran was 73.62MB/s:

recordings /mnt/recordings 9p trans=virtio,version=9p2000.L,nobootwait,rw,_netdev    0   0

Link to comment

I built a vanilla Ubuntu Server 14.04 VM and updated/upgraded it to the latest packages.  No improvement.

 

What type of drive do you guys have your VMs stored on?  Platter or SSD?

 

And are you using qcow2 for the image?

 

saarg, I changed my fstab entry to your format and got similar results.

 

movies /storage/movies  9p trans=virtio,version=9p2000.L,nobootwait,rw,_netdev    0   0

Link to comment

Just to chime in here, VirtFS (9p) should deliver performance that's on par with or better than going through a block device / virtual network device.  While I doubt anyone here will read the paper in its entirety, IBM researchers published a paper in 2010 about VirtFS and the advantages it provides, and here is a powerful quote from the conclusions section of that document:

 

...we have shown that our initial work has superior performance to the use of conventional distributed file systems and even reasonable performance when compared to the use of a paravirtualized disk...

 

The paper also has a section specifically about performance, where they used dd commands to compare VirtFS, NFS, and CIFS at various IO block sizes.  The results showed that VirtFS performed extremely well, and that the biggest performance gains are seen with larger block sizes.
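
Not the paper's exact methodology, but a quick way to see the block-size effect from inside a guest is something like this (the target path and sizes are just examples; it writes and then deletes a ~1GB temp file on the share):

dd if=/dev/zero of=/storage/movies/ddtest bs=4k count=262144 conv=fdatasync   # 1GB written in 4k blocks
dd if=/dev/zero of=/storage/movies/ddtest bs=1M count=1024 conv=fdatasync     # 1GB written in 1M blocks
rm /storage/movies/ddtest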

 

As for the issues in your tests, there are many options you can tweak to fine-tune the performance of VirtFS (both on the host and the guest).  We will be experimenting with them more in the future to ensure solid performance.  The same goes for security and permissions.

 

As for the others who ran tests, I'd like to know more about those tests:

 

1)  Where was the source located (array disk / cache disk / non-array partition)?

2)  What type of source device was used (SSD or HDD)?

3)  Was the source referenced in your XML through the user shares (/mnt/user) or directly through a disk device (/mnt/disk1 or /mnt/cache)?

4)  Same questions for the target location / device.

5)  And for the target device, if referenced in your XML using a user share, was that user share cache enabled?

 

Please let me know, as I am curious to see how this works out.

Link to comment

6)  Target filesystem (i.e. cache / non-array / array drive: BTRFS, XFS, etc.)?

 

Important?

After viewing the other thread's results, my own, and some Googling, I suspect the performance issues lie in btrfs and qcow2 images.  In the other thread, none of the people with good results use btrfs.  I had already planned to convert my cache and VM SSD to XFS.  I think I will do that sooner, and I bet that's where the problem is.

Link to comment

I am using btrfs on my cache drive where my VMs are stored. I do not remember which VM I used or what I wrote in the other thread.

I'll check it tomorrow.

Well that may ruin my theory.  I just saw the ext4.  I'm gonna try xfs anyway.

And it did ruin your theory ;)

I just tested with both raw image and qcow2 and the speed is almost the same.

Raw: 74.38 MB/s

Qcow2: 75.10 MB/s

 

Both VMs run Ubuntu 14.04 with ext4.
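
For anyone who wants to repeat the raw vs. qcow2 comparison, an existing qcow2 image can be converted with qemu-img; the output filename here is just an example, and the <driver> type in the VM's XML needs to be changed from 'qcow2' to 'raw' afterwards:

qemu-img convert -p -f qcow2 -O raw /mnt/cache/VMs/HTPCLIVRM.qcow2 /mnt/cache/VMs/HTPCLIVRM.img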

Link to comment

I'm gonna try xfs anyway.

 

Hi dmacias,

It would be great if you could share the steps you will follow to convert your disks to XFS.

Rgds.

Basically:

1)  Copy everything off my cache and VM drives to the array (rsync sketch below), then stop the array.

2)  Click on the cache drive in the main unRAID web GUI and change the format to XFS.

3)  Start the array; the cache will show as unformatted, so format it, then stop the array again.

4)  Remove that drive from the cache slot, add the VM drive to the cache slot, and format it to XFS as well.

5)  Switch the drives back, start the array, and move everything back.
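
The copy steps themselves are just rsync from the command line; something like this, where the backup folder name is only an example:

rsync -avh --progress /mnt/cache/ /mnt/disk1/cachebackup/     # copy everything off the cache before reformatting
# ...reformat the cache to XFS via the web GUI, then copy it all back...
rsync -avh --progress /mnt/disk1/cachebackup/ /mnt/cache/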

Link to comment

 

 


Oh well.  But until I test it on my system I won't know for sure.  I want to move off btrfs. I'll post my results then.  I get about 25 MB/s right now.

Link to comment

Well, I was bored, so I copied everything off my cache drive and reformatted it as XFS (it was BTRFS before), moved my VMs etc. back across, and performed exactly the same test I posted in the speed results thread. My speed has gone from 25MB/s to 45MB/s, so I think there is something to the theory that the filesystem of the cache drive hosting the VMs affects the speed of the 9p transfers. My next test is to replace the cache drive (750GB HDD) with an SSD to see if that will increase it further.

Link to comment


Have you tried msize=262144 in your fstab line?  It gave me another 10-15MB/s.  I converted my cache to XFS; no change in speed, but my VM drive is still btrfs.  I have a Mythbuntu VM that I can't copy; it does this on a fresh install on btrfs too.  It gives an I/O error even though qemu-img check shows the image is fine.  I'll try to reinstall Myth on the XFS partition.
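
For reference, the full line with msize added would look something like this (same format as earlier in the thread):

movies /storage/movies 9p trans=virtio,version=9p2000.L,msize=262144,nobootwait,rw,_netdev    0   0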
Link to comment

As I posted here http://lime-technology.com/forum/index.php?topic=34301.15, it says not to use btrfs on the host for your image files... I will check for this in my setup; I just haven't had time to do it yet.

Storage

 

QEMU supports a wide variety of storage formats and back-ends. Easiest to use are the raw and qcow2 formats, but for the best performance it is best to use a raw partition. You can create either a logical volume or a partition and assign it to the guest:

 

qemu -drive file=/dev/mapper/ImagesVolumeGroup-Guest1,cache=none,if=virtio

 

QEMU also supports a wide variety of caching modes. If you're using raw volumes or partitions, it is best to avoid the cache completely, which reduces data copies and bus traffic:

 

qemu -drive file=/dev/mapper/ImagesVolumeGroup-Guest1,cache=none,if=virtio

 

As with networking, QEMU supports several storage interfaces. The default, IDE, is highly supported by guests but may be slow, especially with disk arrays. If your guest supports it, use the virtio interface:

 

qemu -drive file=/dev/mapper/ImagesVolumeGroup-Guest1,cache=none,if=virtio

 

Don't use the linux filesystem btrfs on the host for the image files. It will result in low IO performance. The kvm guest may even freeze when high IO traffic is done on the guest.

Link to comment

I want to comment here about the use of BTRFS and VMs.  I have been running Windows 8.1 as my primary workstation at work as what I'm calling a "localized virtual machine."  I pass through a number of PCI devices (GPU, video capture card, USB controller, and on-board audio), and I actually have two virtual disk images that I pass through to the VM (I'll explain why some other time, but there is a reason I did it this way).  These vdisks live on an unRAID cache pool (BTRFS RAID 1) made up of 3 x SSDs (2 x 512GB SanDisks and 1 x 240GB Corsair).  Both vdisks are in the qcow2 image format.

 

In my many months now of day-in, day-out usage of this VM, which was running on unRAID 6 the whole time, I haven't noticed any major performance issues.  I want to be clear that there is a huge difference between "noticing a performance impact" and "measuring a performance impact."  I haven't measured the actual differences in write speed between a qcow virtual disk on a BTRFS filesystem with COW enabled vs. COW disabled vs. other non-COW filesystems, etc.  However, I've been using my localized VM as my primary workstation for months now for all sorts of both basic and complex tasks.  The basics like browsing the web and writing e-mail are obviously not a good measure of storage performance, as they are not performance-heavy tasks, but I have also been capturing, encoding, and editing video, creating animations, and, when the workday is over, enjoying some great 4K gaming with Eric.  I've done a number of large file transfers between the Windows VM and unRAID user shares (both to and from) and performance has always been great.

 

All of this said, there is potential for an improvement to IO performance by disabling the use of Copy on Write specifically on vdisks.  Our theory is that having a QCOW virtual disk image, which has its own native Copy on Write capability (that's what the "cow" in qcow stands for), on a filesystem that also supports Copy on Write could mean some duplication of effort.  The good news is that there is a pretty simple way to disable COW on specific files / folders on BTRFS without disabling it on the entire filesystem.  In addition, we also suspect that the performance impact of qcow on a COW filesystem like BTRFS may only be noticed when you're copying large quantities of small files into the VM itself.  Again, "suspect," not "tested and proven."
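
For anyone who wants to experiment, the usual way to do that on BTRFS is the No_COW file attribute; note that it only affects files created after the attribute is set, and the directory name below is just an example:

mkdir /mnt/cache/vdisks-nocow          # example directory to hold vdisks
chattr +C /mnt/cache/vdisks-nocow      # new files created in here will have COW disabled
lsattr -d /mnt/cache/vdisks-nocow      # verify: the 'C' flag should be listed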

 

I wanted to provide these comments because I think there are some folks here who are googling this subject and getting outdated information.  Note that we are using a very modern Linux kernel, for which lots of patches related to BTRFS and KVM have been applied.  A lot of the documentation out there relates to older kernel versions or software packages, and it is very probable that the issues reported over the last few years have been resolved, or that they simply don't impact the typical use cases for virtualization on unRAID.

 

Anyway, I hope this post was helpful!

Link to comment


I added msize=262144 to my fstab line and re-ran the same test plus a few more, and now I'm seeing an average of 90.22MB/s. I can live with those speeds :)

 

Link to comment


Nice.  Is that on the spinner or the SSD?

Link to comment


Spinner; I haven't had a chance to put the SSD in yet.

Link to comment
