If time & money were no object...



If you could build yourself a new unRAID server (or two, should you feel the need) and cost wasn't an issue, what would you build? How would you set up your server(s)? What is that project you have always wanted to do but can never seem to afford, get wife-approval for, or find the time for? Would you also upgrade your network and workstation to handle this new beast?

 

Perhaps we should also split these into two categories of builds now: would you go for PERFORMANCE or GREEN?

 

Let's assume that you will be paying the running costs of this "theoretical" server (electricity etc.), and that the setup should still be relatively good value for money.


and cost wasn't an issue;

 

A pair of Areca 1882-24 4GB cache controllers, and 48 Samsung 830 512GB SSDs.

 

Intel 10Gb Ethernet card

 

With an Antec Earthwatts 530 80+ Platinum PSU.  ;)

 

Ok, you caught me out :P

 

Updated original post to include the words "relatively good value for money."

 

Just thinking about an all SSD build makes me hot though! :D


I think I would build Atlas as it is now... maybe put it in a 36-bay Supermicro chassis.

 

That, or I would build on one of those Ivy Bridge server boards with 6+ PCIe slots:

3 chassis: 1 head with an NFS array, plus 2x 24-bay DAS units, each with a full unRAID guest.


If money is no object, unRAID is gone.  unRAID is all about saving money: allowing mixed drive sizes, buying at the lowest $/TB, and powering down unneeded spindles. There are many far more reliable and flexible ways to store data: stripes of raidz2/3 with hot spares, auto rebuild, etc.


If money is no object, unRAID is gone.  unRAID is all about saving money: allowing mixed drive sizes, buying at the lowest $/TB, and powering down unneeded spindles. There are many far more reliable and flexible ways to store data: stripes of raidz2/3 with hot spares, auto rebuild, etc.

 

This is a legitimate argument with merit, but one element left out is disaster recovery.  unRAID is tops when it comes to disaster recovery.  Got 12 drives and lose 3? The data on the other nine is still fine.  That is a big reason I stay with unRAID rather than move to a RAID-6 monster.

 

I use unRAID the way near-line storage used to be used.  It was cheaper, safer, and slower than online storage... but still a big part of data management.

 

IMHO, most people use unRAID without backing it up (or only back up small portions).  This is partly because a lot of unRAID installs are used to store media files that can (relatively) easily be reacquired.  Or, in some cases, unRAID is being used as the backup target for other devices (in which case it is actually "better" than striped RAID in a lot of situations, even ignoring cost, since backup, offline, or near-line storage almost always needs room to expand).

 

I'm moving in the direction of a hybrid server, with unRAID drives reserved for "static" data that does not get written to or need high-speed access, and a 4-drive RAID-0 cache where my "active" work resides for speed.  Nightly, the cache will be synced to regular unRAID drives rather than using the mover, so the "fast" files remain on the RAID-0.  Periodically, I move cache files no longer in need of fast access onto the unRAID drives.  In the future, I may split the hybrid server into two separate boxes.

 

My desktop box has 4x 256GB Samsung 830 SSDs in RAID-0 for speed.  This gets synced to a spinning WD Green drive nightly and backed up to unRAID regularly.

 

I also use a 16GB write-back RAM cache (FancyCache) on my desktop, again for speed.

 

So I layer speed vs. capacity, and migrate data to the slower (but higher capacity) media with time.


This is a legitimate argument with merit, but one element left out is disaster recovery.  unRAID is tops when it comes to disaster recovery.  Got 12 drives and lose 3? The data on the other nine is still fine.  That is a big reason I stay with unRAID rather than move to a RAID-6 monster.

 

+1

 

Absolutely. I've thought at times: if I had a large budget and wanted to upgrade my server, what would I go with? The obvious answers are RAID 5, 6 or 10. All lose their appeal when you think about what can happen when you a) lose more than one drive, or b) lose a drive X years down the track and need to find a replacement that won't cause the RAID controller to have a cry.

 

A mate of mine, who is also an unRAID user, had to help his father-in-law replace a bad drive in his RAID 10 system. Good luck trying to find that particular 120GB Seagate that was so readily available back in the day. He ended up replacing it with a different model of drive, and the controller rebuilt the array but then had a hissy-fit. While the array was still accessible (although slower) with one drive down, the headaches you can run into trying to repair these setups make unRAID look like an even better choice.

 

Before using unRAID I was using RAID 1, and I still feel that is the best option if you don't need speed or much storage. Anything else just seems too risky when you're talking about 10, 20, 30, 40TB+ of data.


My raidz2 stripes support any 5 drives failing at once, plus special cases of losing up to 12 drives. I have retired several hundred thousand disk drives; true multiple-drive failures are very rare. Much more common is a second drive failing during rebuild. Brand X with contaminated platters and spiffy firmware gave us a run, but with work, even those didn't take data offline with multiple drive failures. Sure, a power supply or cable or card can take out 24 drives, but the drives themselves haven't failed. With SATA 3 SSDs for ZIL and L2ARC, 250MB/s is a normal write speed (2 streams), and that is only because of networking. I have backups, but don't like restore times: a full stripe restore would be 24-30 hours, basically a 2-3 day outage.

 

Drives can be replaced with anything the same size or larger.

 

You could do better with raidz3.

 

Does ReiserFS have bit rot protection?

  • 2 weeks later...

Does ReiserFS have bit rot protection?

 

It does not, but a parity check should detect bit rot. That is one of the reasons why running a monthly parity check is so important.

 

Monthly, or even daily, parity checks do not help with bit rot, since a RAID 5-style parity check cannot indicate which data has rotted. Assuming the parity block is the defective one goes against probability, since most stripes contain more data than parity.
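To make the detect-vs-locate distinction concrete, here is a minimal Python sketch of single-parity (XOR) behaviour; the stripe layout and byte values are illustrative, not unRAID internals:

```python
# Minimal sketch of single-parity (XOR) behaviour, as used by RAID 5 /
# unRAID-style parity. Stripe layout and values are illustrative only.
from functools import reduce
from operator import xor

def parity(stripe):
    """XOR parity across the data bytes of one stripe."""
    return reduce(xor, stripe)

data = [0x5A, 0x3C, 0x77, 0x10]   # one byte from each of 4 data drives
p = parity(data)                  # the byte stored on the parity drive

rotted = data.copy()
rotted[2] ^= 0b0000_0100          # simulate bit rot on drive 2

assert parity(data) == p          # clean stripe: the check passes
assert parity(rotted) != p        # rotted stripe: mismatch is DETECTED

# But the mismatch looks exactly the same no matter which single drive
# (data or parity) held the flipped bit, so a parity check alone cannot
# LOCATE the rotted byte, let alone repair it.
```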


Does ReiserFS have bit rot protection?

 

It does not, but a parity check should detect bit rot. That is one of the reasons why running a monthly parity check is so important.

 

Monthly, or even daily, parity checks do not help with bit rot, since a RAID 5-style parity check cannot indicate which data has rotted. Assuming the parity block is the defective one goes against probability, since most stripes contain more data than parity.

 

Statistically speaking, wouldn't it be more likely that you could repair an array where a data drive has flipped a bit? E.g. with 10 data drives and 1 parity drive, it is already 10x more likely that bit rot would occur on a data drive. If bit rot occurred on the parity drive, though, what effect would this have on a rebuild? Would unRAID just determine that the parity is invalid and repair the parity? Could it affect more than one drive during a rebuild (have the reverse effect)? That said, bit rot usually only occurs on your oldest drives, and your parity drive should in most cases be fairly new relative to the data drives, since it has to be upgraded more often than any of the data drives.
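The "10x more likely" intuition checks out as back-of-envelope arithmetic; a quick sketch, assuming equal per-drive rot rates and the 10-data / 1-parity drive counts from the example above:

```python
# Back-of-envelope check: with equal per-drive rot rates, the chance a
# flipped bit lives on a data drive vs. the parity drive is just the
# ratio of drive counts. Numbers follow the 10-data / 1-parity example.
data_drives, parity_drives = 10, 1
total = data_drives + parity_drives

p_data = data_drives / total      # probability rot hit some data drive
p_parity = parity_drives / total  # probability rot hit the parity drive

print(f"data: {p_data:.1%}, parity: {p_parity:.1%}")  # → data: 90.9%, parity: 9.1%
# 10x more likely on a data drive -- but a bare parity check still cannot
# tell the two cases apart, so the odds alone do not make "repair" safe.
```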


Statistically speaking, wouldn't it be more likely that you could repair an array where a data drive has flipped a bit? E.g. with 10 data drives and 1 parity drive, it is already 10x more likely that bit rot would occur on a data drive. If bit rot occurred on the parity drive, though, what effect would this have on a rebuild? Would unRAID just determine that the parity is invalid and repair the parity? Could it affect more than one drive during a rebuild (have the reverse effect)? That said, bit rot usually only occurs on your oldest drives, and your parity drive should in most cases be fairly new relative to the data drives, since it has to be upgraded more often than any of the data drives.

 

Guessing will not help, which is why filesystems with CRC checksums are used as protection against bit rot.
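A short sketch of why per-block checksums beat bare parity here, using zlib.crc32 as a stand-in for the stronger per-block hashes real filesystems such as ZFS and Btrfs use; the block data is illustrative:

```python
# Sketch: per-block checksums localize corruption. Each block stores its
# own CRC, so a mismatch names the exact block that rotted -- unlike a
# whole-stripe parity check. zlib.crc32 stands in for the stronger
# hashes (fletcher, sha256) real filesystems use; data is illustrative.
import zlib

blocks = [b"drive0-block", b"drive1-block", b"drive2-block"]
crcs = [zlib.crc32(b) for b in blocks]   # stored alongside each block

# Simulate bit rot: flip the low bit of the first byte of block 1.
blocks[1] = bytes([blocks[1][0] ^ 0x01]) + blocks[1][1:]

bad = [i for i, (b, c) in enumerate(zip(blocks, crcs)) if zlib.crc32(b) != c]
print(bad)  # → [1]: the checksum pinpoints the rotted block, so the
            # redundancy (parity/mirror) can rebuild exactly that block
```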
