What is the current status of NVMe support?


dAigo

Recommended Posts

EDIT:

As of March 11, 2016, with unRAID Server Release 6.2.0-beta18, NVMe is *supported* for cache and array usage.

 

THIS IS BETA SOFTWARE

--------------------

While every effort has been made to ensure no data loss, **use at your own risk!**

 

 

Original Post:


 

In the last 5 years, I usually found a "good enough" answer on the forums to any question I could not answer for myself, but not this time. And so I hope this thread, if it goes anywhere, may help other people in the future. ;) Distant future probably, as the topic is not that hot even now, with Intel/Samsung pushing NVMe to consumers/enthusiasts.

 

I found various posts regarding NVMe disks, but they seem to go nowhere:

1) Probably off-topic, therefore no answer

2) Same guy, new thread -> still no answer

3) PCIe: AHCI vs. NVMe

 

From the 6.1.2 release notes I took:

linux: enable kernel options: CONFIG_BLK_DEV_NVME: NVM Express block device

My guess so far is that PCIe SSDs with AHCI should work, because in the end they are recognized as SATA disks (/dev/sdX).

But since unRAID was always developed with the SATA protocol in mind, if we are lucky NVMe functionality is "unknown", and if we are not, it's "not working"?
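For what it's worth, a quick way to check from the console whether the running kernel actually has the NVMe driver available (assuming the build exposes /proc/config.gz; lsmod/modinfo should work either way):

zcat /proc/config.gz | grep -i NVME   # should list CONFIG_BLK_DEV_NVME=y or =m
lsmod | grep nvme                     # is the module loaded?
modinfo nvme                          # driver details, if present as a module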

 

Enough theory, here is my current situation.

I installed an Intel 750 NVMe PCIe SSD and was hoping that it could be selectable in the GUI as a cache/data disk, but it's not.

No big deal, hoping doesn't mean expecting.

It is, however, correctly recognized as a PCI device and even as a "disk" (probably thanks to the update in 6.1.2?).

root@unRAID:~# lspci -k

04:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01)

Subsystem: Intel Corporation Device 370e

Kernel driver in use: nvme

Kernel modules: nvme

root@unRAID:~# lsblk

NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT

[...]

nvme0n1 259:0    0 372.6G  0 disk

And as it is correctly identified as a "disk", I would assume that it's possible to add it as an additional drive besides the array/cache until further support is implemented (which I am trying to avoid).

Normally I use the plugin "unassigned devices" from gfjardim to manage devices outside of the array, but it seems he has had no reason so far to add support for NVMe. I may ask him if he thinks he could add that (maybe just add /dev/nvme* as a search pattern?).
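For reference, the kernel creates several nodes per NVMe drive, so a bare /dev/nvme* pattern would also match the controller's character device; roughly:

ls -l /dev/nvme*
# /dev/nvme0      -> the controller (character device)
# /dev/nvme0n1    -> namespace 1, the actual block device
# /dev/nvme0n1p1  -> a partition on that namespace (once one exists)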

And if I were to run the disk outside the array, are there recommendations for the filesystem?

In a cache pool it would obviously be btrfs, but it seems XFS would be the more common way and is at least mentioned in some spec sheets: Intel - NVMe Filesystem Recommendations (last paragraph: "Be sure to turn off the discard option").
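If I read that Intel note correctly, "turning off discard" just means not mounting with -o discard and, if wanted, running fstrim occasionally instead; something like this (device name is my guess for how it will show up):

mount -o noatime /dev/nvme0n1p1 /mnt/nvme   # no "discard" mount option -> no inline TRIM
fstrim -v /mnt/nvme                         # optional: occasional TRIM by hand or via cron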

 

So, what is the "officall" state of NVMe in unRAID? (as cache/data/parity)

Does it work? (as in "Report a Bug if it does not work.")

Could it work? (as in "It's new tech, send us additional logs/debugs and we might get it to work.")

Should it work? (as in "unRAID makes no distinction between SATA/AHCI and NVMe.")

 

If it works, but not as a part of the array, the next questions would be:

Would you think using an NVMe disk is "safe" in unRAID, or should I wait for newer kernel/modules/filesystem versions?

 

And if it doesn't work, is there any planned roadmap for further support or should I just create a "Feature request"?

 

I would understand if that topic is not very high on the list, especially if you would need to recode the whole storage system.

And I know that "PCIe AHCI SSDs" and even "normal SATA SSDs" are usualy more than enough, my two old Intel X25-M SSDs are still good enough for normal "desktop use", but as a tech nerd, I want to see what the new stuff could do.

It's unbelievable what you guys packed into v6 of unRAID, so many shiny toys (Docker, KVM, cache pools). And to top that, Intel (Skylake) and NVIDIA (Maxwell 2) are getting so much more efficient that it's hard to push it all to the limits. (But I can't keep myself from pushing ::))

I may need a "little" Microsoft (Exchange) and Cisco (UCM) Lab in the near future (for work), would be nice to keep that running while playing Fallout4 at 1440p/144Mhz, while listening to music from the local media server, while monitoring everything via snmp, while ... oh, its getting off topic ...  :-[ Well its the first post, some things needed to be said  :-*

 

That being said, any and all feedback would be appreciated :)

Link to comment

Great first post, welcome! However, the forum is probably not the quickest or best way to get your question fully answered. I would email support, [email protected] with pretty much an exact copy of your post, maybe trim it down a little to accentuate the points that only Tom and the team at limetech can answer.

 

The forum is great for user to user support, but your core question can really only be answered by limetech, and their preferred contact is the email for this sort of thing.

Link to comment

Current state of NVMe is that the kernel drivers have been loaded for it, but that's it.  We haven't added support for assigning those devices to the cache or the array yet, and from reading the tech spec you linked, it looks like some extra work must be done at the filesystem level to safely support those devices.  That said, I know that there is some interest here, but can't say for certain just yet what release would contain support for this.

Link to comment
I would email support, [email protected] with pretty much an exact copy of your post, [...]
Thanks, I'll do that.

 

The forum is great for user to user support, but your core question can really only be answered by limetech
I know, but even if my questions won't get answered, others' may be. I kept the title very broad for that reason.

 

That said, I know that there is some interest here, but can't say for certain just yet what release would contain support for this.
Would it be useful to open a "Feature request" so that we can keep track of any progress or test things out if it seems useful?

 

 

On the question of NVMe support: can I still assign an NVMe M.2 SSD to a VM and use it as a boot drive?
I think that depends on your hardware.

I chose the Intel SSD over the faster and cheaper Samsung 950 Pro for compatibility and safety reasons. I mean, Samsung doesn't even have a datasheet for the 950... Who knows what features might be missing to support passthrough, RAID and whatever a "normal" consumer won't need.

 

And I chose the ASUS mainboard because it has support for the Intel SSD (with the "Hyper-Kit").

I have an option in the BIOS to specify whether I use the Hyper-Kit in the M.2 slot (and therefore an NVMe SSD) or not.

It may otherwise run in some M.2/AHCI legacy mode, who knows.

As far as I know, the mainboard/PCIe/BIOS can have an impact on PCI passthrough, so your results may vary.

 

In my case, the answer is yes, I can.

Just for you, I installed Win10 x64 on the mentioned Intel 750 NVMe SSD (M.2 -> Hyper-Kit -> U.2), in under 5 minutes I might add :)

I used UEFI/OVMF, the rest is default. And of course I edited the .xml to add the SSD passthrough (see the snippet below), since only GPU/USB passthrough is supported through the web interface.

No additional drivers were needed (but Intel recommends installing them at some point).

I had some trouble with the boot manager and UEFI; I needed to select the Windows EFI boot file manually.

But I have the same issue on other VMs as well, so it's probably related to that and not NVMe-specific.
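For reference, the part I added to the VM's XML was a PCI hostdev entry roughly along these lines (reconstructed from memory, so double-check the exact syntax against an existing GPU passthrough entry; the 04:00.0 address is my SSD from lspci and will differ on your system):

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
</hostdev>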

 

Win7 has no native NVMe support (AFAIK), so a hotfix/driver would probably be needed during installation, I assume.

Linux should work if the kernel has NVMe support.

Mac I don't know.

 

I also can't say how safe it is to pass through the SSD; if there are issues with the passthrough, the filesystem on your system drive could be damaged. It's your own risk, I would say.

It may or may not be safer to format/mount the drive in unRAID and place your virtual disks there. At least you could place (and boot) any OS on there, even one with no native NVMe support (Win7).
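If someone goes that route, the vdisk on the NVMe mount is created like on any other disk; a rough example with made-up paths:

mkdir -p /mnt/nvme/domains/win10
qemu-img create -f raw /mnt/nvme/domains/win10/vdisk1.img 100G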

 

If I find the time, I may run some benchmarks to compare passthrough vs. unRAID mounted.

So until the disk leaves the testbench and goes live, I could try things out if there are questions.

Link to comment
  • 1 month later...

Current state of NVMe is that the kernel drivers have been loaded for it, but that's it. [...]

I would really appreciate an update on the status. As a first time UnRAIDer I am in the process of compiling my purchase list and would like to know if I can purchase the Intel 750 with the intention of solely running a VM on it through UnRAID. This drive will NOT need to be in an array or be used as a cache disk. If so, can I create this VM through UnRAID or would I have to do a passthrough? Thanks in advance for your help and I'm so excited to get started. :)

Link to comment

I have no update to report at this time.  NVMe support will require us to procure NVMe devices for testing which we have not done yet. While this demand is starting to grow, it still represents a small subset of the user base, so other features that benefit everyone are being worked on first.

Link to comment

I would really appreciate an update on the status. As a first time UnRAIDer I am in the process of compiling my purchase list and would like to know if I can purchase the Intel 750 with the intention of solely running a VM on it through UnRAID. This drive will NOT need to be in an array or be used as a cache disk. If so, can I create this VM through UnRAID or would I have to do a passthrough? Thanks in advance for your help and I'm so excited to get started. :)

 

Short answer:

Yes, you can run a VM on that drive outside of the array. At least I can (and I currently do), and I see no reason why other systems should behave differently.

You could also use passthrough for better performance, but passthrough always depends on more components. In my case it worked fine, same procedure as a GPU.

 

Longer answer:

You won't be able to do everything through the web interface, so a basic understanding of SSH/CLI commands and/or Linux in general will help.

You would need to identify the name of your device, be able to format it as XFS (or ext4), and mount it somewhere under /mnt.

Once you do that, you can choose the drive like any other disk on your server when creating VMs through the GUI.

It would definitely help to know how to automate the mounting procedure for every reboot of your server.
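Compressed into a few commands it is roughly this (device name and mount point are just what I used; adjust to your system):

lsblk                          # find the device, e.g. /dev/nvme0n1
gdisk /dev/nvme0n1             # create one big partition (wipes the disk!)
mkfs.xfs -f /dev/nvme0n1p1     # format it as XFS
mkdir -p /mnt/nvme
mount /dev/nvme0n1p1 /mnt/nvme
# plus the mkdir/mount lines in /boot/config/go so it comes back after a reboot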

 

If you are able to do all of that, you are done in 5-20 minutes and can start your VM.

But since you are a "first timer", I would asume, that these steps are not exactly familiar to you?

 

I could post the way I did it; that does not mean it's safe or the best/easiest way, but it worked for me (and others I spoke to).

I definitely had a very good learning experience. I still don't know everything I should know about what I have done, but I know more than before ;)

 

You would still be doing it at your own risk, so create backups of whatever is on your SSD or don't put any valuable data on it.

 

I have no update to report at this time.  NVMe support will require us to procure NVMe devices for testing which we have not done yet. While this demand is starting to grow, it still represents a small subset of the user base, so other features that benefit everyone are being worked on first.

I dont think "official" NVMe support is something we will see in the next 4-6 Month, maybe some beta features or plugins, but from what I have seen so far, there are a lot of things that needs to be evaluated before it can be supported.

But that is just my personal opinion; I have no knowledge of how fast lime-tech can get to wherever it is they want to be before they support new stuff.

Link to comment
  • 2 weeks later...

I could post the way I did it; that does not mean it's safe or the best/easiest way, but it worked for me. [...]

I'm reasonably familiar with Terminal use from my server days, but a walkthrough of your process would be an invaluable reference. Thanks in advance for your help!

Link to comment

Just a quick update on this, we have ordered an NVMe drive for testing so….......

 

ITS HAPPENING!!!!!!!!!!

 

Great :) my system can support two so looking forward to adding these to a cache pool.

 

Be interested to know if SATA AHCI SSDs and PCIe NVMe SSDs can be mixed in a pool.

Link to comment

 

 

Just a quick update on this, we have ordered an NVMe drive for testing so….......

 

ITS HAPPENING!!!!!!!!!!

 

Great :) my system can support two so looking forward to adding these to a cache pool.

 

Be interested to know if SATA AHCI SSDs and PCIe NVMe SSDs can be mixed in a pool.

 

Yes, that should work, but you would be losing all the performance benefits of PCIe/NVMe because you will be bottlenecked to the speed of the SATA devices. It'd be like having a Ferrari but you're always stuck behind a Camaro. Sure Camaros are fast, but your Ferrari could smoke it if it just got outta the damn way!! Even mixing PCIe AHCI SSDs with NVMe SSDs would remove all the benefits of NVMe and bottleneck you to AHCI (at least there it'd be more like a Ferrari behind a Corvette ;-).

Link to comment

+1 to the request for NVMe support. I just got one of the 950 Pros and would love to make use of it. I think NVMe will become MUCH more prevalent in the years to come, especially as the price/GB continues to become more and more reasonable.

 

Also this thread is the diamond in the rough on this subject. Could this info be stickied or something to make it show up in more searches?

 

Good read and great news for a new unraid-er.

Link to comment

I'm reasonably familiar with Terminal use from my server days, but a walkthrough of your process would be an invaluable reference. Thanks in advance for your help!

Ok, I will see what I can do.

 

Yes, that should work, but you would be losing all the performance benefits of PCIe/NVMe because you will be bottlenecked to the speed of the SATA devices.

Out of curiosity, can unRAID/btrfs give a higher priority to the "copy" of the data that is on a specific disk in case of a read request? Or is it random/round-robin?

That could remove the SATA/AHCI penalty while reading data (booting VMs/starting programs).

But it would probably still be a waste I think :)

 

Even mixing PCIe AHCI SSDs with NVMe SSDs would remove all the benefits of NVMe and bottleneck you to AHCI (at least there it'd be more like a Ferrari behind a Corvette ;-).

Definitely correct; however, I would argue that NVMe and AHCI PCIe are both somewhat bottlenecked by the qemu/kvm overhead and btrfs, at least in non-enterprise environments like unRAID.

 

According to Intel, NVMe has "only" 3 µs lower latency, which is great (50%!) but in the end only a portion of the I/O wait time... If you compare the "average service time" under "Heavy Load" of the Samsung SM951 NVMe/AHCI versions, they are at 278 µs (NVMe) vs 281 µs (AHCI), which is exactly what Intel told us 3 years earlier :) (and both are faster than the 256GB NVMe 950 Pro).

And those were all bare-metal tests; add virtualization latency with consumer hardware on top and you probably won't notice any difference between AHCI/NVMe as long as they both use the same PCIe lanes.

The protocols differ in many ways, and if performance is similar, compatibility could make AHCI PCIe SSDs the better choice right now. Which is why those AHCI PCIe SSDs were mostly OEM: high speed, high compatibility, but only short term (3-5 years is a good timeframe for an OEM like HP/Lenovo).

It's just that NVMe is the obvious winner in the long run, with PCIe 4.0, 3D NAND/3D XPoint, and better/faster controllers -> 320 Gbit/s ~ DDR3/DDR4 speed for storage... (5+ years)

 

Interesting things to know about PCIe/M.2 on "older" chipsets (X99/Z97/...), with some quick numbers after the list:

- all the onboard goodies (storage/audio/LAN) are connected to the chipset

- the connection to that chipset varies, but is shown on Wikipedia

- the X99 chipset uses DMI 2.0 to connect to the CPU, so everything that is NOT in a PCIe slot will share a max of 20 Gbit/s

- the M.2 slot can either be shared with SATA Express (or SATA 9&10) or directly connected (through an additional controller); as an example, M.2 on the Asus X99WSIPMI is shared, but the Asus Z97DELUXE has an additional "ASMedia® SATA Express controller" that is connected directly to the CPU, and the M.2 port shares its bandwidth

- DMI 2.0 uses PCIe 2.0, which has a theoretical max of 380k IOPS and could therefore bottleneck a PCIe 3.0 (x4) SSD that is rated for 460k IOPS

- SATA SSDs have 6 Gbit/s each (so 24 Gbit/s total with 4)

- the Intel 750 is listed at 2.4 GByte/s max, but benchmarks show up to 2.7 GByte/s (1 Byte = 8 Bit -> 19.2 Gbit/s up to 21.6 Gbit/s)

- PCIe 3.0 is not only faster but also uses a better optimized "encoding", which results in less overhead and higher IOPS
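Some rough numbers on that shared link (back-of-the-envelope; the per-lane rates are the usual published PCIe figures, bash is just doing the multiplication):

# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding -> ~500 MByte/s usable per lane
echo $((4 * 500))   # DMI 2.0 = x4 PCIe 2.0 link -> ~2000 MByte/s for everything behind the chipset
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~985 MByte/s usable per lane
echo $((4 * 985))   # a direct PCIe 3.0 x4 slot -> ~3940 MByte/s, enough for the 750's 2400+ MByte/s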

 

What does that mean?

1) An NVMe M.2 SSD like the Samsung 950 or Intel 750 could be bottlenecked to "only" 380k IOPS when used on PCIe 2.0, as in DMI 2.0.

2) If you consider other components like LAN (1-2 Gbit), SATA (6 Gbit) and USB3 (5 Gbit), it's probably safe to say that it would be rare to see an NVMe disk using its full potential while sharing 20 Gbit/s with other components.

3) AHCI or NVMe on an M.2 slot that is connected through DMI 2.0 would probably make no difference at all.

*) basically Ferrari vs. Corvette with a lot of traffic ;)

 

All Skylake chipsets (Z/H/Q170, Q/B150, B110) use DMI 3.0 (so PCIe 3.0), and "newer/better" X99/Z97 mainboards have the needed additional controller to make sure PCIe 3.0 is used and no lanes are shared. In that case NVMe should be "slightly" faster than AHCI, depending on the workload and the system.

 

But yes, "in general" you are right, I just wanted to share some insight into the NVMe/AHCI/PCIe/Chipset dilemma :D

At least that is what I think is true about nvme today, I could be wrong and there will be changes in the future.

Link to comment

According to Intel, NVMe has "only" 3 µs lower latency, which is great (50%!) but in the end only a portion of the I/O wait time... [...]

Thorough post and food for thought. My info on this is from Kingston directly, who more simply stated that IOPS potential is far greater with NVMe than AHCI and that in their own testing, the difference in performance in general is fairly substantial; but that was from a casual chat at CES, not exhaustive research or testing.

Link to comment

Just a quick update on this, we have ordered an NVMe drive for testing so….......

 

ITS HAPPENING!!!!!!!!!!

 

Looking forward to it. Just bought my new system with a 950 Pro, and the only thing that was holding me back from going unRAID was not having that, so +1 for people willing to pay once it's working.

Link to comment

I'm reasonably familiar with Terminal use from my server days, but a walkthrough of your process would be an invaluable reference. Thanks in advance for your help!

 

So here is what I did. Not saying it's the only way, but it worked for me.

I am by no means a Unix guy; most of the things I did here were a first for me, it's usually only Windows for me.

It's like I said: someone who already knows everything I am about to write just types 2-3 lines into the CLI and is done in under a minute.

If anything goes wrong, don't blame me! If there is anything unclear, ask. Somebody can probably help before something bad happens.

 

1) Make sure the device is correctly identified as an NVMe device and is using the correct driver/modules:

root@unRAID:~# lspci -k -v
04:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01) (prog-if 02 [NVM Express])
Subsystem: Intel Corporation Device 370e
Flags: bus master, fast devsel, latency 0, IRQ 16
Memory at f7110000 (64-bit, non-prefetchable) [size=16K]
Expansion ROM at f7100000 [disabled] [size=64K]
Capabilities: [40] Power Management version 3
Capabilities: [50] MSI-X: Enable+ Count=32 Masked-
Capabilities: [60] Express Endpoint, MSI 00
Capabilities: [100] Advanced Error Reporting
Capabilities: [150] Virtual Channel
Capabilities: [180] Power Budgeting <?>
Capabilities: [190] Alternative Routing-ID Interpretation (ARI)
Capabilities: [270] Device Serial Number XX-XX-XX-XX-XX-XX-XX-...
Capabilities: [2a0] #19
Kernel driver in use: nvme
Kernel modules: nvme

 

2) The rest is basically the same as for any other block device (HDD/SSD); just use the correct device path, which should be /dev/nvme(X)n(Y) instead of /dev/sd(X):

root@unRAID:~# lsblk | grep nvme
nvme0n1     259:0    0 372.6G  0 disk

Unfortunately, there is no /dev/disk/by-id link (which would include the serial number) for NVMe devices, so if you have multiple identical disks, you would need to identify the exact device by knowing which disk has which serial number (in my case 04:00.0):

root@unRAID:~# udevadm info -q all -n /dev/nvme0n1
P: /devices/pci0000:00/0000:00:1d.0/0000:04:00.0/nvme/nvme0/nvme0n1
N: nvme0n1
S: disk/by-path/pci-0000:04:00.0
E: DEVLINKS=/dev/disk/by-path/pci-0000:04:00.0
E: DEVNAME=/dev/nvme0n1
E: DEVPATH=/devices/pci0000:00/0000:00:1d.0/0000:04:00.0/nvme/nvme0/nvme0n1
E: DEVTYPE=disk
E: ID_PART_TABLE_TYPE=dos
E: ID_PATH=pci-0000:04:00.0
E: ID_PATH_TAG=pci-0000_04_00_0
E: MAJOR=259
E: MINOR=0
E: SUBSYSTEM=block
E: UDEV_LOG=3
E: USEC_INITIALIZED=9333489

So the PCI device with the serial number that can be found through "lspci -k -v" is actually "/dev/nvme0n1".

You could probably skip that and just use the device link with the PCI slot, "/dev/disk/by-path/pci-0000:04:00.0", instead of "/dev/nvme0n1".

But those names/links may change when you have many NVMe disks or add/remove/change their place on the mainboard (like sda/sdb...).

Which is probably part of the reason unRAID only uses /dev/disk/by-id, which includes the serial number and is therefore unique across reboots.

I guess you could create the "/by-id" link yourself with customized udev rules, but I would not recommend that unless lime-tech approves.

The only unique thing I found would probably be the UUID of the partition that gets created in the next step.
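Once the partition and filesystem exist (next steps), blkid is the quickest way I know to read that UUID:

blkid /dev/nvme0n1p1
# prints something like: /dev/nvme0n1p1: UUID="2d5e7ce0-41e6-47b5-80d2-70df40a8c1da" TYPE="xfs"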

 

3) Once you have the correct device name, use gdisk (for GPT) or fdisk (for MBR) or any other partitioning tool to create a partition (the following procedure wipes all data on the disk, be careful):

root@unRAID:~# gdisk /dev/nvme0n1
GPT fdisk (gdisk) version 0.8.7

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): o
This option deletes all partitions and creates a new protective MBR.
Proceed? (Y/N): Y

Command (? for help): n
Partition number (1-128, default 1): 
First sector (34-156301454, default = 2048) or {+-}size{KMGTP}: 
Last sector (2048-156301454, default = 156301454) or {+-}size{KMGTP}: 
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): 
Changed type of partition to 'Linux filesystem'

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): Y
OK; writing new GUID partition table (GPT) to /dev/nvme0n1.
The operation has completed successfully.

"o" -> clear the disk

"n" -> creates a new partition table (to create one big partition Just hit enter everytime for the default values)

"w" -> writes the changes to the disk

 

4) Format the partition:

From what I read, XFS/ext4 are recommended for NVMe devices.

I don't know in what state btrfs is right now, but I heard there were issues in some versions with really slow qemu/kvm access through btrfs.

I have only 1 drive, so I won't need btrfs pools and therefore did not invest any time testing it.

 

I went ahead and created the filesystem with default options, with the addition of "-K" (recommended by Intel for my disk; as far as I can tell it skips the block discard that mkfs would normally issue, which matches the "turn off the discard option" advice, but I don't know if it's good or bad on other disks...) and "-f" to force the creation. After creating the partition, for some reason a vfat signature is detected, so the new format must be forced.

mkfs.xfs /dev/nvme0n1p1 -f -K

 

5) Mount the new partition:

Create an empty folder where the NVMe disk should be mounted. I went with /mnt/nvme, but I guess it does not matter.

It seems that any folder under /mnt can be used through the web interface to create VMs or to use with Docker, so basically the stuff I want to put on my NVMe disk.

# MOUNT SSD
mkdir /mnt/nvme
mount /dev/nvme0n1p1 /mnt/nvme

But like I said earlier, I think it's possible that the device name "nvme0n1" may change when you add more NVMe disks or rearrange their PCI slots.

To be sure, you could use the UUID of the partition when mounting it, which should never change unless it's reformatted.

 

I added the mount command to my unRAID "go" script and never had any issues auto-starting VMs on array start. I do not know the exact order of go/array-start/kvm-start/docker-start, but I know lime-tech changed some things in the past to support gfjardim's unassigned-devices plugin and its automount feature. Maybe I am just lucky, but it seems the go script works for now.

In my case:

root@unRAID:~# udevadm info -q all -n /dev/nvme0n1p1 | grep uuid
S: disk/by-uuid/2d5e7ce0-41e6-47b5-80d2-70df40a8c1da
E: DEVLINKS=/dev/disk/by-path/pci-0000:04:00.0-part1 /dev/disk/by-uuid/2d5e7ce0-41e6-47b5-80d2-70df40a8c1da

root@unRAID:~# more /boot/config/go | grep nvme
mkdir /mnt/nvme
mount /dev/disk/by-uuid/2d5e7ce0-41e6-47b5-80d2-70df40a8c1da /mnt/nvme

 

6) After a reboot it looks like this in my case

root@unRAID:~# mount | grep nvme
/dev/nvme0n1p1 on /mnt/nvme type xfs (rw)

root@unRAID:~# lsblk | grep nvme
nvme0n1     259:0    0 372.6G  0 disk 
└─nvme0n1p1 259:1    0 372.6G  0 part /mnt/nvme

 

While it is still not part of the array/cache, I created a symbolic link on the cache drive that points to the NVMe mount and holds part of my Steam library and some other games. Works for me until NVMe is officially supported as a cache device.
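In case it is not obvious, that is just a plain symlink from the cache drive into the NVMe mount, for example (paths are made up, adjust to your shares):

ln -s /mnt/nvme/games /mnt/cache/games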

Link to comment
