vdisk across multiple drives in array?



I have just gotten started with unRAID and have so far enjoyed the experience; however, I seem to be running into an issue with a vdisk in a Windows 10 guest. It seems that a vdisk image on the array cannot grow beyond the size of the physical disk it was created on without pausing the VM.

 

The vdisk is located on the array in /mnt/user/domains/, which has ~4TB free; however, the image seems to reside only on disk 3 of the array (a 640GB HDD; the vdisk is 1TB, and the VM paused once it filled disk 3). I assumed that since the image is in the user share it would span across all the disks as needed. After playing with it for a while I have come to the conclusion that unRAID cannot span a single file across multiple disks like a more conventional array.
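For what it's worth, you can see exactly where the image landed, because unRAID mounts the pooled user share and each physical disk side by side (paths below are from my setup; your share and VM names may differ):

    # the same file, seen through the user share and through the per-disk mounts
    ls -lh /mnt/user/domains/Windows10/vdisk1.img
    ls -lh /mnt/disk*/domains/Windows10/vdisk1.img   # only the /mnt/disk3 copy exists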

 

Am I correct in my assumption? If so, are there any solutions other than assigning multiple vdisks inside the guest?

 

The main reason for giving unRAID a shot was to utilize an old gaming machine and a hodgepodge of disks to make a media server and headless Steam streamer in one box. It has worked pretty well so far, but if file size is limited to the size of a physical disk I may have to go back to Proxmox and try it with LVM or some other kind of JBOD.

Link to comment

unRAID does not span disks; each array disk is an independent XFS filesystem. That way, if one disk dies and your parity dies while rebuilding, you don't lose the entire enchilada, just the data on the disks that died.
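You can confirm this from the console if you like: every array member is mounted as its own filesystem, so a single file can never outgrow the disk it sits on (disk numbers below are just an example):

    df -hT /mnt/disk1 /mnt/disk2 /mnt/disk3   # each one is a separate xfs filesystem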

 

You "could" run a striped btrfs cache pool to achieve what you are after, but that scares some folks.
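Under the hood that is just a multi-device btrfs filesystem. unRAID builds it for you when you assign more than one cache device, but the hand-rolled equivalent would look something like this, with hypothetical device names:

    # stripe data across two devices, keep the metadata mirrored
    mkfs.btrfs -d raid0 -m raid1 /dev/sdX /dev/sdY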

 

I use unassigned SSDs for smaller main VM images, 20-50GB each (which are backed up to the array), and then add to them any other vdisks I need for space via unassigned devices for better speed. For OS X I have a dedicated vdisk with common apps so I don't have to reinstall them each time I create a new OS setup. It keeps the main VMs nice and tidy.
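Side note: since unRAID's VM manager is libvirt underneath, an extra vdisk can also be attached from the console rather than the web form. The domain name and image path here are just examples:

    # attach a second raw vdisk to the "Windows10" domain as /dev/vdb
    virsh attach-disk Windows10 /mnt/disks/ssd1/games.img vdb \
        --driver qemu --subdriver raw --persistent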

Link to comment

Most people dedicate an SSD to a VM and do not deploy it to the array; this avoids I/O contention and logically separates the VM from everything else.

 

I run a Windows 10 VM which resides on its own dedicated SSD as an unassigned device, and it runs as well as bare metal.

 

Ultimately I will install the VM on an 80GB SSD that I am currently using in a makeshift LXD server; regardless, I expected to be able to use as much of the total array's space as I wanted for additional storage in the VM. That said, before it filled the physical disk (an old 640GB WD Blue) it was running surprisingly well: normal desktop tasks were not incredibly fast, as expected, but gaming was very impressive.

Link to comment

unRAID does not span disks; each array disk is an independent XFS filesystem. That way, if one disk dies and your parity dies while rebuilding, you don't lose the entire enchilada, just the data on the disks that died.

 

That makes sense. I am very new to unRAID and thought it worked more like LVM or JBOD when not using parity (this is just a fun project for some extra hardware; I don't intend to store anything I'd mind losing on this server).

 

You "could" run a striped btrfs cache pool to achieve what you are after, but that scares some folks.

 

I use unassigned SSDs for smaller main VM images, 20-50GB each (which are backed up to the array), and then add to them any other vdisks I need for space via unassigned devices for better speed. For OS X I have a dedicated vdisk with common apps so I don't have to reinstall them each time I create a new OS setup. It keeps the main VMs nice and tidy.

 

I think that will be the best solution for me. I really only need about 2TB of additional storage, so I guess I will stripe my two 1TB drives and use the spare SSD as an unassigned device for the OS once I migrate everything from my LXD server. Can caching be disabled so the "cache" pool is essentially a normal storage pool?

 

At this point I am starting to realize that I am only using unRAID as a hypervisor (they just make it so damn easy to deploy VMs with PCI passthrough), so maybe I would be better off sticking with Proxmox or just Ubuntu Server.

Link to comment

 

Thanks for the link, I came across that soon after your first post. For now I have a 1.5TB vdisk on a 2TB striped btrfs cache pool, and so far it seems to be working great. I am transferring my Steam library now, so we will see what happens once it exceeds 1TB (although I am fairly certain it will be fine now that the data is striped). Once everything is stable I will dismantle my old LXD server and use its SSD for the VM's OS, keep the cache pool for games, and use the rest of my disks for less sensitive data storage (mostly infrequently accessed Plex media and temporary storage). Not exactly what I intended, but it accomplishes the same thing.
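For anyone wanting to replicate this, I made the image on the cache pool with qemu-img, which ships with unRAID (path and size are from my setup):

    # sparse raw image on the striped cache pool -- it grows as it fills
    qemu-img create -f raw /mnt/cache/domains/Windows10/vdisk1.img 1500G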

 

BTW, I did notice that each share can independently enable/disable the cache, which is quite useful for me.

 

Thanks for your help; I think I'll stick with unRAID if everything looks good over the next few weeks. I am using the trial now, but it looks like a license may be in my future (never thought I would pay for a Linux distro, but Lime Tech seems deserving enough).

Link to comment

 


 

Just came across your post and thought of your problem. You don't want multiple vdisks, correct? I guess that's because you want one large drive in Windows.

Well, since you are not using parity, writes to your array will not be slowed by parity updates. So here is what you could do:

Create a vdisk on each drive you have in your array, e.g. on disks 1, 2 and 3.

Attach all the vdisks to your Windows VM.

Go to Disk Management in Windows and, using those attached vdisks, create a striped or spanned volume.

This way Windows sees one large disk, but the data is spread across multiple vdisks.

I just tried it on mine and it seems to work OK :)
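For the record, here is roughly what I did; paths, sizes and disk numbers are from my test box, so adjust to suit. First one image per physical array disk, deliberately bypassing the user share:

    qemu-img create -f raw /mnt/disk1/domains/span1.img 500G
    qemu-img create -f raw /mnt/disk2/domains/span2.img 500G
    qemu-img create -f raw /mnt/disk3/domains/span3.img 500G

Then, after attaching them to the VM, the same thing Disk Management does can be scripted inside the guest with diskpart (run as Administrator; "disk 1" through "disk 3" are whatever numbers Disk Management shows for the vdisks):

    select disk 1
    convert dynamic
    select disk 2
    convert dynamic
    select disk 3
    convert dynamic
    create volume stripe disk=1,2,3

Bear in mind a striped volume is lost if any one member dies, while a spanned volume (create volume simple, then extend) just fills the disks in order.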

 

Edit: Any reason you are not just using a mapped drive in Windows to the array? You could install the OS on your SSD, then map a network drive to your array for data storage. Shares on the array span across drives; a vdisk will not, as it is just one file.
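Mapping it from inside the guest is a one-liner; "Tower" is unRAID's default hostname, and the share name here is made up:

    net use S: \\Tower\media /persistent:yes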

Link to comment

 


 

Smart, I hadn't thought of that. Might have to try it for fun now!

Link to comment


 

I had considered the multi-vdisk approach but wanted to keep the flexibility of a single vdisk outside of the guest. That way, when I move the OS to an SSD, I can just add the vdisk to the new guest and avoid having to transfer almost 2TB of data (of course I will delete all the OS data and use it as a normal data drive). For me, portability is one of the biggest advantages of virtualization, and I didn't want to limit myself to software RAID running on a guest OS.

 

It's easy to mount a single disk image on pretty much any OS (Windows/OSX/Linux/etc.), while reconstructing a RAID array from multiple independent vdisks is not as trivial.
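For example, a raw vdisk mounts on Linux with nothing more than a loop device (image name and partition number are illustrative):

    # expose the image's partitions, then mount the data partition read-only
    losetup --find --show --partscan vdisk1.img    # prints e.g. /dev/loop0
    mkdir -p /mnt/vdisk
    mount -o ro /dev/loop0p1 /mnt/vdisk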

Link to comment
