
Unraid 24 Disk Limitation



All,

 

I really don't know if this has ever been asked, but why is unRAID limited to 24 hard drives?

 

I understand the physical limitations of the case, power supply, and cooling. Let's just suppose you had a power supply as big as necessary and a case that could hold 50 hard drives, placed in a refrigerator. Why would unRAID be limited to only 24 drives?

 

The Linux kernel does not have a practical limit at 24 devices. In fact, you can have devices named sdaa-sdzz, sdaaa-sdzzz, sdaaaa-sdzzzz, and so on. Is it because of the parity drive?

 

Sincerely,

 

Sideband Samurai


Because you only have 26 letters in the alphabet... /sda through /sdz.

 

I believe it has been stated that a future version will be redesigned to accommodate more drives, but I daresay that's a fair way off yet.

 

You might not have had a chance to read my edited post. The Linux kernel has limits, but they are so high that for all purposes of this discussion it has none. If you refer to my post above (which I was editing when you answered), you can have devices like sdaa-sdzz and sdaaa-sdzzz (etc.). So it's not just because there are 26 letters in the alphabet; devices beyond 26 are already supported by Slackware and the Linux kernel.
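For the curious: the kernel's sd names are just a bijective base-26 numbering - sda-sdz, then sdaa-sdzz, and so on. A quick Python sketch of the enumeration, purely illustrative (not anything unRAID or the kernel actually runs):

```python
import string

def sd_name(index: int) -> str:
    """Map a 0-based disk index to its sd name (0 -> sda, 25 -> sdz, 26 -> sdaa)."""
    letters = ""
    index += 1  # bijective base-26 is easiest with 1-based indexing
    while index > 0:
        index, rem = divmod(index - 1, 26)
        letters = string.ascii_lowercase[rem] + letters
    return "sd" + letters

# 26 drives exhaust sda-sdz; the naming just keeps going.
for i in (0, 25, 26, 701, 702):
    print(i, "->", sd_name(i))   # sda, sdz, sdaa, sdzz, sdaaa
```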

 

-- Sideband Samurai


Yes it is in unRAID's case - do some searching on the forum and you'll see the full story. As I understand it, the way unRAID has been coded means it can't make use of /sdaa etc., or anything beyond the first 26. I should have been more specific and mentioned it was an unRAID limitation as opposed to a Linux one.

 

As I hinted at, I believe this redesign is on the to-do list.


 

Sorry, yes, I did see your hint. I just thought it was odd that it only allowed 26 from the start. I figured it was because of the Parity drive and its size.

 

-- Sideband Samurai


 

This, folks, is why you always code with upgrades in mind: once you've written something, it's a bitch and a half to modify it after all the other functions/classes are tied in with it. It's like trying to change the color of a single thread in a sweater.


 

To quote from a certain BG: "Who could ever need more than 640kB?"


In its early years, unRAID supported only half the drives it supports today; one or two drives were added at various releases over the years.

Part of the reason for not fully expanding to 24 drives right away is the practical limitation of protecting so many drives with one parity drive.
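For background, that one parity drive works like RAID 4's: a single XOR across all the data disks, which can rebuild any one failed drive no matter how many it covers (but only one). A toy Python sketch of the idea, assuming plain XOR parity:

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"disk1ABC", b"disk2DEF", b"disk3GHI"]  # toy 8-byte "drives"
parity = xor_blocks(data)                       # what the parity drive stores

# Lose disk 2, then rebuild it from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Add more drives and the scheme still works, but the odds that a second drive dies (or throws a read error) mid-rebuild keep climbing.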

 

 

Even as I expanded larger and larger, I kept having the desire for multiple smaller arrays.


There are two dynamics in array size: one is the number of drives in the array, the other is the size of the drives.

 

Increase the number of drives and the frequency of drive failures within the array increases. You'll want a hot spare with auto-rebuild to address this. This feature is on the list.

 

Increase the size of the drives and the time to rebuild increases, enlarging the window for a double disk failure. You'll want RAID-6/double parity to address this. This feature is also on the list.

 

Support for multiple arrays is on the list as well, and the handy ability to virtualize already provides this, for the price of another license.


Yes, but today's machines are so much more powerful than machines from 10 years ago. Systems today are really underutilized given how fast they process requests.

 

I can see how adding more drives can introduce more problems. How about a parity drive for every 26 letters? Since /sda-sdz has one parity drive, why not /sdaa-sdzz protected by a second parity drive? It keeps each array stripe at a manageable size and reuses the code already developed for the first 26 letters. I guess this is the "multiple arrays" that Tom had mentioned before. Anyway, I was just wondering why only 26 letters and not the "unlimited" capability that Linux offers.

 

Virtualizing mitigates that somewhat, but at the price of a license for each array you want to have on the network. Then you have multiple shares on the network, replicating user accounts if they exist, etc., etc.

 

-- Sideband Samurai

  • 4 weeks later...

 

... I figured it was because of the Parity drive and its size.

 

-- Sideband Samurai

 

The size of the parity drive has no bearing on how many drives it can protect. What DOES matter is the likelihood of a 2nd failure during the rebuild of a failed drive. With the size of today's drives (2TB, 3TB, 4TB or even larger) and a typical unrecoverable read error rate of 1 in 10^14 bits, if you have a large number of them the likelihood of a failure during a rebuild starts to be fairly high. Even a 20-drive array is considered VERY large for a single parity drive. That's one reason (probably the most important reason) why RAID-6 is becoming very common on high-end RAID systems.

 

Most storage professionals limit RAID-5 arrays to about 8 disks ... beyond that they use RAID-6 or multiple arrays. And that's with enterprise-class drives, which have an order of magnitude better error performance [e.g. 1 error in 10^15 bits].
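To put rough numbers on that, here's a back-of-the-envelope Python sketch (assuming the 1-in-10^14 consumer and 1-in-10^15 enterprise URE specs quoted above, 4TB drives, and counting only read errors during the rebuild itself):

```python
def p_rebuild_error(n_drives: int, tb_per_drive: float, bits_per_error: float) -> float:
    """Chance of at least one unrecoverable read error while rebuilding one
    failed drive; every surviving drive must be read end to end."""
    bits_read = (n_drives - 1) * tb_per_drive * 1e12 * 8
    return 1 - (1 - 1 / bits_per_error) ** bits_read

for n in (8, 12, 20):
    print(f"{n:2d} x 4TB   consumer 10^14: {p_rebuild_error(n, 4, 1e14):4.0%}"
          f"   enterprise 10^15: {p_rebuild_error(n, 4, 1e15):4.0%}")
```

Crude numbers, but they show why ~8 disks is the usual comfort zone for single parity with consumer drives.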

 


 

This is why the p+q parity feature needs to be implemented. But even if it is not, it shouldn't be impossible to create multiple unRAID arrays on the same system as an alternative feature. This discussion highlights the difference between corporate enterprise best practices based on experience and lower-budget home use based on theory. Personally, I built my system to house only 12 drives total, based on the increasing probability of multiple drive failures in a single-parity array measured against the longer recovery times for large drives (4TB can take days to restore). If I need more capacity, I'll build another system. The internal hardware is cheap enough, and it adds layers of redundant hardware beyond the storage subsystems.


I've also limited how large I'll let my arrays grow for the same reason. My original setup can grow to 14 drives; my newest one is limited to 6 drives. I've built several systems for others with 12-drive limits [3 Cooler Master 4-in-3 cages].

 

That's PLENTY for a single-parity setup ... the capacity can be quite good using 3 or 4 TB drives.

 

Note that a 4TB drive holds about 3.2 x 10^13 bits, so a 12-drive array of 4TB drives holds roughly 3.8 x 10^14 bits. Modern drives have an error specification on the order of 1 error in 10^14 bits ... so statistically it becomes quite likely that an error will occur during a rebuild of a system with that many drives!!

 

That's the primary reason RAID-6 is beginning to be very prevalent in enterprise systems.

 

