App Drive Class



Currently Unraid 6's drive scheme uses three drive classes: Cache, Data, and Parity. Given the massive expansion in functionality with the release of Unraid 6, it seems like a bit of an oversight that Unraid can only store its containers on the main array or the cache drives. For several reasons that are unimportant to enumerate here, I prefer not to store my applications on my array. That leaves only the cache drive for app storage. The problem arises when you want to use an SSD for app storage and an HDD for cache. I write a lot of large files to my array, and the write-speed penalty is noticeable on the tail end of the larger files. Using the SSD for cache would alleviate this, but would wear down the SSD with lots of large writes. One can argue that the longevity of the SSD is enough that it wouldn't be a problem, but for me that's irrelevant: I don't want the SSD for cache. The write speed to an unpooled HDD is more than enough for my needs.

 

The proposal: create a fourth drive class (App) that offers the same storage features (BTRFS pool) as the Cache class, without the array attempting to store temp data there. The Docker & VM pages would default to using this drive class, and its availability would be decoupled from the main array, so that shutting down the array would not require the apps to halt.

 

Note that the Unassigned Devices plugin fills some (but not all) of the above requirements. However, a built-in feature with the same management features and capabilities as suggested above is desired.

 

I'm a new user to Unraid, so if any of the above doesn't make sense please let me know.


I just manually mount an SSD outside of my array with the go script. Works perfectly; the only thing is there's no drive redundancy, though the VMs could be backed up if I wanted.

 

 

Would be handy, though, to have redundancy capability.

If you are mounting the drive via the go script, then there is no reason you cannot set up a BTRFS cache pool instead of a single drive.  Are you using an entry in the 'stop' script to do a tidy umount of this drive?
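For anyone following along, the go/stop approach being discussed looks roughly like this. This is a minimal sketch only; the filesystem label (appdisk) and mount point are made-up examples, not anything Unraid defines, and your device paths will differ.

```shell
# Appended to /boot/config/go -- runs at boot, before the array starts.
mkdir -p /mnt/appdisk
# Mount the standalone SSD (single drive or BTRFS pool) by filesystem label.
mount -t btrfs /dev/disk/by-label/appdisk /mnt/appdisk

# Appended to /boot/config/stop -- runs at shutdown, for a tidy umount.
umount /mnt/appdisk
```

The stop-script umount is what makes the shutdown tidy: without it, the volume is still mounted when the system powers down, which risks an unclean BTRFS filesystem on the next boot.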

I just manually mount an SSD outside of my array with the go script. Works perfectly; the only thing is there's no drive redundancy, though the VMs could be backed up if I wanted.

 

 

Would be handy, though, to have redundancy capability.

If you are mounting the drive via the go script, then there is no reason you cannot set up a BTRFS cache pool instead of a single drive.  Are you using an entry in the 'stop' script to do a tidy umount of this drive?

 

LOL nah, maybe I should, right?


I wholeheartedly agree with this request. I've been an Unraid user for about 6 years now and I really love it. I usually don't write much in the forums; mostly I just read what everyone else posts. In 6.1.9 I've been using Unassigned Devices to mount my 250GB SSD for 2 VMs and about 12 or so dockers. After upgrading to 6.2, I have to say I was really disappointed that I could no longer use my UD SSD and instead would have to use either my HDD cache or my small SSD as a cache. Since buying a new SSD cache drive isn't really in my budget right now, I'm forced to go back to 6.1.9. It would really be nice if there were another drive class implemented in Unraid.  :)


If there is a decision to go down this route then there are other features I would like to see implemented:

  • Possibly allow for more than one such drive.  I could see using one for docker purposes, and some more for VMs.
  • The ability to specify that such drives can be mounted at system startup and remain mounted even when the array is not started.  At the moment I have an SSD that is mounted in the system 'go' file and umounted in the system 'stop' file, but it would be better to have built-in support for such functionality.
  • The ability to specify VMs that can run even when the array is not started.  If set to autostart, such VMs should be started as part of the system startup process.  I certainly have VMs that have no need to access the normal array data drives, and I believe a significant proportion of those who use VMs would welcome such a capability.
  • Ideally this model should also be used for docker containers.  Having said that, I am not sure what proportion of users' dockers have mappings to the array data drives and so need the array started in order to work as desired.


If there is a decision to go down this route then there are other features I would like to see implemented:

  • Possibly allow for more than one such drive.  I could see using one for docker purposes, and some more for VMs.
  • The ability to specify that such drives can be mounted at system startup and remain mounted even when the array is not started.  At the moment I have an SSD that is mounted in the system 'go' file and umounted in the system 'stop' file, but it would be better to have built-in support for such functionality.
  • The ability to specify VMs that can run even when the array is not started.  If set to autostart, such VMs should be started as part of the system startup process.  I certainly have VMs that have no need to access the normal array data drives, and I believe a significant proportion of those who use VMs would welcome such a capability.
  • Ideally this model should also be used for docker containers.  Having said that, I am not sure what proportion of users' dockers have mappings to the array data drives and so need the array started in order to work as desired.

Add me to the list of those wanting this feature in general, and the items above specifically.  Especially the ability to stop and start the array without turning off the VMs that are not using the array or cache drive.

Especially the ability to stop and start the array without turning off the VMs that are not using the array or cache drive.

 

+1

 

I remember Tom saying this wouldn't be done/easy to do, because stopping the array stops all services and unRAID would need a lot of changes, but we can hope.

I'm sure you are right, but as you said, we can hope.  And if we bug them enough and offer to pay an upgrade fee (I would for my 5 servers, even for all 7 of my licenses, though I retired 2), maybe it will be more likely.

Especially the ability to stop and start the array without turning off the VMs that are not using the array or cache drive.

 

+1

 

I remember Tom saying this wouldn't be done/easy to do, because stopping the array stops all services and unRAID would need a lot of changes, but we can hope.

If certain services such as libvirt could be started at system startup and closed at system shutdown (rather than being linked to array start/stop), then those prepared to do a little work at the command-line level could probably achieve running VMs independently of the array.  It might then allow some real experimentation to see whether other problems occur when trying this, as a precursor to a friendlier implementation supported via the GUI.
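As a sketch of that command-line experiment, something like the following could go in the 'go' file. Heavy caveats apply: the rc script path, the mount details, and the VM name ("Firewall") are all assumptions for illustration, and what actually works will vary by Unraid release since the webGUI expects libvirt to be tied to array start.

```shell
# Mount the standalone SSD that holds the VM vdisks (example label/path).
mkdir -p /mnt/appdisk
mount -t btrfs /dev/disk/by-label/appdisk /mnt/appdisk

# Start the libvirt daemon at boot instead of at array start
# (assumed init-script path; check your Unraid release).
/etc/rc.d/rc.libvirt start

# Start a VM that has no dependency on the array drives.
# "Firewall" is a placeholder domain name.
virsh start Firewall
```

The matching teardown (virsh shutdown, stop libvirt, umount) would belong in the 'stop' file so the drive is released cleanly.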

Especially the ability to stop and start the array without turning off the VMs that are not using the array or cache drive.

 

+1

 

I remember Tom saying this wouldn't be done/easy to do, because stopping the array stops all services and unRAID would need a lot of changes, but we can hope.

 

Here's Tom's comment from the 6.2 beta 20 thread:

https://lime-technology.com/forum/index.php?topic=47875.msg460468#msg460468

This is not going to happen anytime soon, if ever.  It interferes with features we have planned for the future.  "Array Start/Stop" is really a misnomer.  In this context "array" refers to the entire set of devices attached to the server, not just the ones that are assigned to the parity-protected devices.


+1

I run a similar setup to the OP and want similar things.

 

This topic seems to come up every couple of months.

Last time I saw it.

 

I used to have my docker.img on my UD SSD, but as of 6.2 that is no longer possible. So at the moment we are going the wrong way.

 

If there is a decision to go down this route then there are other features I would like to see implemented:

  • Possibly allow for more than one such drive.  I could see using one for docker purposes, and some more for VMs.
  • The ability to specify that such drives can be mounted at system startup and remain mounted even when the array is not started.  At the moment I have an SSD that is mounted in the system 'go' file and umounted in the system 'stop' file, but it would be better to have built-in support for such functionality.
  • The ability to specify VMs that can run even when the array is not started.  If set to autostart, such VMs should be started as part of the system startup process.  I certainly have VMs that have no need to access the normal array data drives, and I believe a significant proportion of those who use VMs would welcome such a capability.
  • Ideally this model should also be used for docker containers.  Having said that, I am not sure what proportion of users' dockers have mappings to the array data drives and so need the array started in order to work as desired.

I would really like to see this as well, though I would be happy if these were added later, given the complexity.


Especially the ability to stop and start the array without turning off the VMs that are not using the array or cache drive.

 

+1

 

I remember Tom saying this wouldn't be done/easy to do, because stopping the array stops all services and unRAID would need a lot of changes, but we can hope.

 

Here's Tom's comment from the 6.2 beta 20 thread:

https://lime-technology.com/forum/index.php?topic=47875.msg460468#msg460468

This is not going to happen anytime soon, if ever.  It interferes with features we have planned for the future.  "Array Start/Stop" is really a misnomer.  In this context "array" refers to the entire set of devices attached to the server, not just the ones that are assigned to the parity-protected devices.

I remember that now too - darn it.  Hopefully we can change his mind.  This one feature is why I'm still considering switching back to ESXi, except my version of ESXi only supports a single CPU (from what I remember of 5.0, anyway).  Maybe the free version of 6.0 will support multiple physical CPUs?

I remember that now too - darn it.  Hopefully we can change his mind.  This one feature is why I'm still considering switching back to ESXi, except my version of ESXi only supports a single CPU (from what I remember of 5.0, anyway).  Maybe the free version of 6.0 will support multiple physical CPUs?

Looks like there are no more physical CPU or RAM restrictions on ESXi 6.

See here for example: https://communities.vmware.com/thread/535228?start=0&tstart=0


Especially the ability to stop and start the array without turning off the VMs that are not using the array or cache drive.

 

+1

 

I remember Tom saying this wouldn't be done/easy to do, because stopping the array stops all services and unRAID would need a lot of changes, but we can hope.

 

Here's Tom's comment from the 6.2 beta 20 thread:

https://lime-technology.com/forum/index.php?topic=47875.msg460468#msg460468

This is not going to happen anytime soon, if ever.  It interferes with features we have planned for the future.  "Array Start/Stop" is really a misnomer.  In this context "array" refers to the entire set of devices attached to the server, not just the ones that are assigned to the parity-protected devices.

 

You know, that's kind of a BS response. I mean, if you were to go back in time to UR 3.0/4.0 days and ask for VM functionality integrated into the core, you'd have been laughed at. Dismissing an important feature because it's difficult to implement is a weak excuse. The bottom line is the parity based storage functions need to be modularized and divorced from the rest of the storage to allow room for the other aspects of the OS to grow. I'm not talking in terms of product, but just in operation. Array misnomer or not, people need the ability to alter the core protected storage (adding/replacing drives) without disrupting elements that don't strictly depend on the array's operation. Given the lousy write speeds to the array, I don't know why anyone would plan to store VM containers directly on that array. So, if you are adding in VM functionality, why limit storage to just the cache/array drives?

 

Seems to me that a docker/VM (i.e. App) class (RAID 1 or RAID 0, depending on risk/performance profile) is exactly what is needed for proper support of that feature.


I remember that now too - darn it.  Hopefully we can change his mind.  This one feature is why I'm still considering switching back to ESXi, except my version of ESXi only supports a single CPU (from what I remember of 5.0, anyway).  Maybe the free version of 6.0 will support multiple physical CPUs?

Looks like there are no more physical CPU or RAM restrictions on ESXi 6.

See here for example: https://communities.vmware.com/thread/535228?start=0&tstart=0

I went checking on that myself, but thanks.  The only limiting thing now is a max of 64GB RAM, since it limits you to 32GB per CPU (I have 128GB loaded now).  But I don't expect that to really be a problem.  I'm really thinking about moving back again.

 

 

Edit: Reading your link tells me the memory is NOT the limiting factor, just the vCPU count.  Definitely going to think about switching back now.

