Hardware Recommendations for a New Server


twok


Hi Guys,

 

First things first -

I've been using unRAID (6) privately for a few years already, just for some Docker containers and for testing VMs before doing installations at work - and I'm loving it!

Now I've got a new position in a new company and besides many other things the hardware situation is also one of my responsibilities.

 

At the moment we are using a 4+ year old server that runs Windows Server 2008 R2 and is used for:

  • user shares
  • Exchange
  • antivirus endpoint manager
  • a few more things - WampServer, etc.

 

That server runs an Intel Xeon E5649 with 30 GB of RAM and an HP Smart Array P410i using 4x 300 GB SAS disks (RAID 1+0), and it's accessed by 20 users 24/5 and another 40 users on a part-time basis. We are currently lacking performance and storage space, and I would also like to split parts of the server into separate VMs - which is when I started considering unRAID.

 

I don't consider myself a sysadmin, and therefore I spend a lot of my spare time on research and private trial and error. Since our budget doesn't allow outsourcing and is always tight, I was hoping to get some recommendations, or at least some "look for ..." pointers, from you guys.

 

So far I know that I'd like to go with 2 dedicated physical machines - one for production, one for backup purposes.

 

Storage-wise our requirements are minor (10 TB would be enough for the next 5 years, I'd say).

As for CPU and RAM requirements, I don't know how to scale accordingly.

 

Also, rack-mountable is a must-have, and I'd like to go with a redundant power supply.

 

Looking forward to your recommendations and tips. Also, please be nice to me - it's quite a challenge :-)

 

Thank you and best regards,

Matt


Just curious about the recommendations that will follow.

 

While I consider unRAID very good for media server purposes (storage), I figure it might be a little overburdened by the I/O of 40-50 users at times!?

Maybe with SSD drives it might cope, but with regular spinners you probably won't get past a solid hardware RAID solution.

 

What is the nature of the performance issues that you are referring to?

 


Hi Fireball3,

 

I thought so myself and figured a shared pool of SSDs and a few faster disks should be enough to handle our load - at least, that's my thought.

Performance in the sense of:

a) the application servers installed are already bringing CPU and RAM to 80% of their total, resulting in unhappy users

b) poor read/write speeds on the user shares

 

Not exactly performance, but directly related:

c) limited storage space

d) limited options for backup and restore (currently 7x 500 GB RDX media, rewritten every 7 days) - and if the responsible person (now me) forgets to swap the media or is out of the office, there is no backup at all from the previous days

 

 


I wouldn't recommend unRAID for a business; it's really not designed for that.

 

If you want to look at virtualization, I would suggest either Hyper-V or VMware; both have solid solutions. For backups, I would suggest you look into Veeam - it's the leading software for backing up and replicating virtual environments.

 

For hardware, I would look at at least a single-socket Xeon system with six cores (no less), 64 GB of RAM, and eight or ten drives in RAID 10. It's not going to be cheap, and you can certainly look at SSDs if you want, but they are more expensive than the 15K SAS drives you are likely going to have to settle for in the 900 GB capacity.


Well, 4x 300 GB SAS is not that much nowadays.

As a first step, you could replace them with 1 TB SSDs, for example.

You'd get a good increase in capacity and I/O on that end - given some compatibility with your existing hardware!

 

The CPU is also not the top of that line.

If the board supports a faster one, you could replace it as well.

 

For backup purposes, you could set up an unRAID box to regularly rsync (or similar) the data to.
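For illustration only, here's what that could look like as a cron entry on the backup box - the host name, share paths, and schedule are hypothetical placeholders, not anything from this thread:

```shell
# Hypothetical /etc/cron.d entry on the backup unRAID box:
# pull the user shares from the production server every night at 02:30.
# -a preserves permissions/ownership/timestamps, --delete mirrors deletions,
# and the trailing slashes copy the directory contents rather than the
# directory itself.
30 2 * * * root rsync -a --delete root@prod-server:/mnt/user/shares/ /mnt/user/backup/shares/
```

A pull from the backup side (rather than a push from production) keeps the backup box in control of its own copies, so a misbehaving production server can't overwrite the backups.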

 

 


I wouldn't recommend unRAID for a business; it's really not designed for that.

As a hypervisor, I strongly agree. As a NAS, I strongly disagree. There are many business use cases that unRAID fills well: long-term storage of rarely accessed files, a daily backup destination for a few workstations... anything a typical NAS does, unRAID does better, given the right hardware.

 

The OP's use case is a general application server, and you are right - I don't consider unRAID a good fit for a business application server. The NAS part of unRAID is mature, feature-filled, and stable. The hypervisor part is growing, changing, adapting - great for tinkering and home use, not so good for business. Trying to shoehorn a Windows domain controller into an unRAID server for a primary business application would be suicide.


I don't necessarily disagree that you could make a business use case for unRAID as a NAS, but my main concern is support. I don't like to recommend solutions to clients that have only forum-based support. I know you can set up a support case with Lime Tech for a fee, and that is certainly better than forum-only support, but typically for a business you want better, more immediately available support.

 

As a backup NAS, sure - just not something that is going to be part of production and relied upon by many people who might be affected if there were a problem.


Hi - thanks already for your feedback.

 

Last night I did some more research on ClearOS and their pre-configured hardware solutions. I requested a quote, since both the price and the options sound suitable.

I'm still considering unRAID as a backup and maybe also as a test station for new installations/VMs, since the license is much cheaper than MS-related stuff.


As a hypervisor, I strongly agree. As a NAS, I strongly disagree. There are many business use cases that unRAID fills well: long-term storage of rarely accessed files, a daily backup destination for a few workstations... anything a typical NAS does, unRAID does better, given the right hardware.

 

Would you be kind enough to either list or point me to a list of recommended hardware?

 

 

I want to believe that unRAID will work in a business environment - my environment, specifically - but there was a hard drive failure, and once that was fixed, another problem happened. Although these were individual issues, my boss lumps them all together and 'wants a solution that works!' That means two things: a solution that keeps a mirror of the live data, synced via the internet to either a second unRAID box or a NAS, and a data server that communicates well with Macs. So I am tasked with finding that solution - either unRAID or something else.

 

My feeling is that the drives are the culprit, which brings me to my next question.

 

Would WD Red or WD Black drives be better? Sites seem to say that Black is basically Red+, but that Red is designed for NAS while Blacks are just faster - and yet WD Blacks (https://www.newegg.com/Product/Product.aspx?Item=N82E16822236971) are cheaper on Newegg than WD Reds (https://www.newegg.com/Product/Product.aspx?Item=9SIABZ053G6539). Are WD Blacks ALSO good for NAS devices?

 

Another point worth mentioning is that we currently have over 50 TB of data; I'd like a solution with about 80 TB. Also, we will never have many users connecting simultaneously, but we will have one Windows box and up to 5 Macs connecting. The Windows box could be taken out of the picture if I could get excellent transfer rates with a Mac.

 

Any thoughts are eagerly welcomed!

 

 


Connectivity with Macs:

You got a qualified reply from John_M in your other thread.

If it is still an issue, I would open a ticket with LT to get that solved first.

I'm with your users: while failures can occasionally occur, there's no problem as long as nothing gets lost.

An unreliable or unavailable storage system, on the other hand, is not acceptable.

 

With regard to the drives:

Blacks are good, but run quite hot.

If your server has good cooling, you're fine with them.

80 TB of space - that's a challenge.

Probably best to go with 8 or 10 TB drives; although quite expensive, they keep the drive count low, so you're still good with single parity.

The simpler, the better.

 


Connectivity with Macs:

You got a qualified reply from John_M in your other thread.

If it is still an issue, I would open a ticket with LT to get that solved first.

 

Thanks for that reminder. I've got a few threads I'm going to have to revisit once my restore is done. I can't really proceed to the next step until that is done, and that just takes... time!

 

With regard to the drives:

Blacks are good, but run quite hot.

If your server has good cooling, you're fine with them.

 

They are running at about 40°C - that's a bit higher than the other drives.

 

80 TB of space - that's a challenge.

Probably best to go with 8 or 10 TB drives; although quite expensive, they keep the drive count low, so you're still good with single parity.

The simpler, the better.

 

That brings up a REALLY good point! Hard to believe there are 10 TB drives out there! (Yes, I am a few years behind...) It looks like Seagate has more options in the 6-10 TB range. I am guessing that the Seagate IronWolf 6TB NAS is comparable to the WD Red. Since I am going to have to get quite a few new drives, I might as well get what I need in the long run. Am I correct that the currently $200.00 Seagate IronWolf 6TB NAS drives (https://www.newegg.com/Product/Product.aspx?Item=N82E16822179004) are a good choice? I like to stay one or two models back if possible - that way the hardware is already tested, plus the cost is usually reasonable for those older items.

 

Hmmm, I was meaning to look into multiple parity drives, as I just saw that it's available in version 6 (I only recently upgraded). My understanding is that with two 8 TB parity drives, up to two drives could fail without losing data - that would be a good thing, no? I take it your idea is that the more drives I have, the greater the chance that more than one will fail; thus the 10 TB drives... So I can either have fewer, more costly drives with 1 parity drive, or more, smaller drives with 2 parity drives.

 

So back to the Red vs. Black question... If starting new, would Red or Black be better? It seems that Red (Seagate or WD) would be best. Is that correct thinking?


That brings up a REALLY good point! Hard to believe there are 10 TB drives out there! (Yes, I am a few years behind...) It looks like Seagate has more options in the 6-10 TB range. I am guessing that the Seagate IronWolf 6TB NAS is comparable to the WD Red. Since I am going to have to get quite a few new drives, I might as well get what I need in the long run. Am I correct that the currently $200.00 Seagate IronWolf 6TB NAS drives (https://www.newegg.com/Product/Product.aspx?Item=N82E16822179004) are a good choice? I like to stay one or two models back if possible - that way the hardware is already tested, plus the cost is usually reasonable for those older items.

My understanding is that fewer parts mean fewer potential failures - less stress on the whole system (temps, power draw, etc.).

Using 6 TB drives leaves you with 14 drives.

Using 8 TB drives brings you to an 11-drive array.

Using 10 TB drives gets you down to 9 drives.

Of course, costs go up the more drives you save.
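Those counts are just capacity arithmetic - data drives to land roughly on 80 TB usable, plus one parity drive. A quick sketch (the 80 TB target and single parity come from the thread; treating 78 TB from 13x 6 TB as "about 80" is my reading of the numbers):

```python
def drives_needed(drive_tb, target_tb=80, parity=1):
    """Data drives to roughly reach the target capacity, plus parity drives."""
    data = round(target_tb / drive_tb)  # 13 x 6 TB = 78 TB counts as "about 80"
    return data + parity

for size in (6, 8, 10):
    print(f"{size} TB drives -> {drives_needed(size)} drives total")
# 6 TB drives -> 14 drives total
# 8 TB drives -> 11 drives total
# 10 TB drives -> 9 drives total
```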

 

Hmmm, I was meaning to look into multiple parity drives, as I just saw that it's available in version 6 (I only recently upgraded). My understanding is that with two 8 TB parity drives, up to two drives could fail without losing data - that would be a good thing, no? I take it your idea is that the more drives I have, the greater the chance that more than one will fail; thus the 10 TB drives... So I can either have fewer, more costly drives with 1 parity drive, or more, smaller drives with 2 parity drives.

You can always have 2 parity drives, no matter how many drives your array actually has. The higher the drive count, the more reasonable a second parity drive becomes.

 

So back to the Red vs. Black question... If starting new, would Red or Black be better? It seems that Red (Seagate or WD) would be best. Is that correct thinking?

Given the fact that they advertise them explicitly as NAS drives, yes.

With adequate cooling, I wouldn't hesitate to use Blacks either.

 

I assume you know the "rules" you should follow when buying multiple drives for use in an array!?


Using 6 TB drives leaves you with 14 drives.

Using 8 TB drives brings you to an 11-drive array.

Using 10 TB drives gets you down to 9 drives.

Of course, costs go up the more drives you save.

However, the cost of the whole system may be comparable once you add in the cost of expanding beyond 10 SATA ports. I wonder if a decent-quality HBA, cabling, a step up in power supply, a larger case, and extra drive cages cost more than the premium for fewer, larger drives.

 

I'm too lazy to do the math, but my WAG is that the total system cost for 14 SATA slots with 6 TB drives is probably pretty comparable to 9 slots with 10 TB drives.
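For what it's worth, a back-of-envelope sketch of that comparison - the $200 per 6 TB drive comes from the IronWolf price quoted earlier in the thread, while the HBA/cage add-on and the $350 per 10 TB drive are hypothetical placeholders, not prices from this thread:

```python
# 14-slot build with 6 TB drives: needs an HBA and an extra cage beyond
# typical onboard SATA.
drives_6tb = 14 * 200         # $200/drive, per the IronWolf 6 TB price above
extra_ports = 100 + 150       # assumed: HBA + cables (~$100), hot-swap cage (~$150)
build_6tb = drives_6tb + extra_ports

# 9-slot build with 10 TB drives: fits on most boards' onboard SATA ports.
price_10tb = 350              # assumed street price, not from the thread
build_10tb = 9 * price_10tb

print(build_6tb, build_10tb)  # 3050 3150 -- pretty comparable indeed
```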


Thank you for your feedback!

 

I will probably go with the 8 TB drives. Not sure if Red or Black yet.

 

I assume you know the "rules" you should follow when buying multiple drives for use in an array!?

 

"Rules"? What "rules"?!? Sorry, I really can't think of what you might be referring to.

 

I'm too lazy to do the math, but my WAG is that the total system cost for 14 SATA slots with 6TB drives is probably pretty comparable to 9 slots with 10TB drives.

 

Perhaps, but you lose the expandability of the server.

 


Thank you for your feedback!

 

I will probably go with the 8 TB drives. Not sure if Red or Black yet.

 

I assume you know the "rules" you should follow when buying multiple drives for use in an array!?

 

"Rules"? What "rules"?!? Sorry, I really can't think of what you might be referring to.

When purchasing large quantities of drives to use in a fault tolerant array, you want to minimize the chances that multiple drives will fail simultaneously. That means diversifying your purchase as much as possible. Ideally, even if you decide on a specific brand and model, you would want to purchase from different factories, manufacturing weeks, vendors, and shipping methods. That way if someone in shipping drops your box of drives damaging them, it's only a couple at most that are at risk. Poor handling is a much bigger risk factor for premature drive failure than manufacturing defects, so if you buy from different vendors it should be enough to diversify the handling.

 

I'm too lazy to do the math, but my WAG is that the total system cost for 14 SATA slots with 6TB drives is probably pretty comparable to 9 slots with 10TB drives.

Perhaps, but you lose the expandability of the server.

Not really. If you purchase a case and PSU with enough empty space and power to add a hot-swap drive cage, you can always add the HBA and drive cage at a later date. Rough numbers: $100 for the HBA and cables, and between $75 and $200 for hot-swap cages.

 

Plus, if you have enough capacity now, swapping to larger drives one at a time may be cheaper than adding ports when it comes time to expand. Buying unused capacity is seldom a good deal in the computer world; by the time you need it, it will be cheaper.


When purchasing large quantities of drives to use in a fault tolerant array, you want to minimize the chances that multiple drives will fail simultaneously. That means diversifying your purchase as much as possible. Ideally, even if you decide on a specific brand and model, you would want to purchase from different factories, manufacturing weeks, vendors, and shipping methods. That way if someone in shipping drops your box of drives damaging them, it's only a couple at most that are at risk. Poor handling is a much bigger risk factor for premature drive failure than manufacturing defects, so if you buy from different vendors it should be enough to diversify the handling.

Excellent point! I never really thought about that. As it turns out, I have never purchased more than 3 drives in one shot, so I have been adhering to this without knowing I should - thank you for that excellent pointer!

 

Not really. If you purchase a case and PSU with enough empty space and power to add a hot-swap drive cage, you can always add the HBA and drive cage at a later date. Rough numbers: $100 for the HBA and cables, and between $75 and $200 for hot-swap cages.

 

Plus, if you have enough capacity now, swapping to larger drives one at a time may be cheaper than adding ports when it comes time to expand. Buying unused capacity is seldom a good deal in the computer world; by the time you need it, it will be cheaper.

 

Good tips there as well! I'm trying to decide whether to upgrade our current system and get a whole new system for backup, or get a new system and use the current one for backup. I'll know by the end of the week.

