Looking for some build clarification


Spyderturbo007


I think it might be easiest for me to explain what I want and perhaps someone can point me in the right direction.  I currently have a very cheap unRAID build with an old AMD single-core chip and 2GB of RAM.  It currently supports six HDDs.  As my needs begin to change, I want to offload my NzbDrone and SABnzbd to the server.  I also want to set up MySQL, so a single-core chip just isn't going to cut it.  I have a GIGABYTE GA-Z97X-UD5H mobo lying around that I was going to use.

 

http://www.newegg.com/Product/Product.aspx?Item=N82E16813128707&cm_re=GA-Z97X-UD5H-_-13-128-707-_-Product

 

I was thinking about an Intel Core i3-4150 dual-core chip and 16GB of G.Skill Ripjaws RAM.

 

http://www.newegg.com/Product/Product.aspx?Item=N82E16819116995&cm_re=intel_lga1150-_-19-116-995-_-Product

 

http://www.newegg.com/Product/Product.aspx?Item=N82E16820231429&cm_re=g._skill_ripjaw_1600-_-20-231-429-_-Product

 

I found this Cooler Master V850 power supply that has a single +12V 70A rail.

 

http://www.newegg.com/Product/Product.aspx?Item=N82E16817171079

 

Is any of that overkill for what I want?  As far as drives are concerned, this is where I'm really getting confused.  I would like a case where I can install the drive docks I see everyone posting about.  I suspect I need something with as many 5.25" drive bays as possible?  It also looks like I need a dock that replaces those bays and lets me install the HDDs in it.  My confusion is on the "back end" of that setup.  In most cases, it looks like the drives plug into a backplane, and then I would need a breakout cable that connects the drives to a RAID card?

 

If so, does anyone have any suggestions about where to start as far as hardware is concerned?

 

Thanks!

 

 

Link to comment

I wouldn't go with that RAM; instead I would use a pair of 8GB modules.

http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007611%20600006050%204017%20600006072&IsNodeId=1&name=16GB%20%282%20x%208GB%29&Order=PRICE&Pagesize=20

 

The benefit of such high-specced RAM is highly overrated, IMHO.

I tend to stay with value RAM and conservative timings, and I have a stable rig.

 

You don't tell us how many drives you want to have in the end.

 

The PSU looks OK in terms of current (A) output - it will be sufficient for 20-33 drives.

If you run green drives only, it might be a bit overpowered.

 

In addition to the 6 SATA ports on your mainboard you need host bus adaptors for more SATA ports.

The board has 2x PCIe x8 slots and 1x PCIe x4 slot.

SAS2008-based cards can be found for reasonable prices.

You end up with 2x8+6=22 ports.

(The board also has a SATA Express port that you could possibly use? I'm not sure.)

 

If you need even more SATA ports, you can add one more HBA in the x4 slot.

PCIe x8 cards won't use their full potential but they will work in the x4 slot.

The card in this slot might show bandwidth issues during parity check if fully populated.

Use it to connect the cache drive or other drives that run outside the array.

Let's say 50% populated should be OK --> leaving you with 26 drives... 24 is the license limit.
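Just to make that port math explicit, here is a quick sketch - the figures (8-port SAS2008 cards, 6 usable onboard ports) are assumptions for this particular layout:

```python
# Rough port count for the layout described above (assumed numbers: two
# 8-port SAS2008 HBAs in the x8 slots, 6 usable onboard SATA ports).
onboard_sata = 6
ports_per_hba = 8
full_speed_hbas = 2

total = onboard_sata + full_speed_hbas * ports_per_hba
print(f"Ports without using the x4 slot: {total}")          # 22

# A third HBA in the x4 slot, only half-populated to keep parity checks fast:
x4_usable = ports_per_hba // 2
print(f"With a half-populated card in the x4 slot: {total + x4_usable}")  # 26
```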

 

But with the current single-parity solution in place (v5.0.x or v6.x), I don't recommend arrays of that size. Too many drives increase the risk of a second failure.

 

Case:

There are only a few cases with that many 5.25" slots.

To accommodate 4x 5-in-3 bays you already need 12 slots. That's probably the maximum you'll find on a big tower case. Some cases have additional 3.5" bays internally.

 

Drive bays can be connected either to the mainboard or to the HBA using plain SATA-to-SATA cables or MiniSAS SFF-8087 forward breakout cables. (Perhaps there are other/better sources?)

Link to comment

I wouldn't go with that RAM; instead I would use a pair of 8GB modules.

http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007611%20600006050%204017%20600006072&IsNodeId=1&name=16GB%20%282%20x%208GB%29&Order=PRICE&Pagesize=20

 

The benefit of such high-specced RAM is highly overrated, IMHO.

I tend to stay with value RAM and conservative timings, and I have a stable rig.

 

I guess I'm just used to building workstations.  Is 16GB overkill for running MySQL + SABnzbd + NzbDrone, or would that be the recommended amount of memory?  Is the Intel i3 a good choice?

 

You don't tell us how many drives you want to have in the end.

 

To be honest, I don't quite know.  I currently have 5 data drives and one parity drive, and I want to add a cache/apps drive.  I just don't want to pigeonhole myself down the road if I want to add 4 or 5 more drives.

 

In addition to the 6 SATA ports on your mainboard you need host bus adaptors for more SATA ports.

The board has 2x PCIe x8 slots and 1x PCIe x4 slot.

SAS2008-based cards can be found for reasonable prices.

You end up with 2x8+6=22 ports.

(The board also has a SATA Express port that you could possibly use? I'm not sure.)

 

It looks like the SAS2008-based cards are much more expensive than something like the AOC-SASLP-MV8.  I saw the AOC-SASLP-MV8 on the hardware compatibility list in the wiki.  Is there a big difference?  I'm looking for something that would be plug and play.

 

If I'm reading this right, adding one of the controller cards would give me a total of 8 SATA ports, which would handle what I have now and a little room to grow. 

 

Case:

There are only a few cases with that many 5.25" slots.

To accommodate 4x 5-in-3 bays you already need 12 slots. That's probably the maximum you'll find on a big tower case. Some cases have additional 3.5" bays internally.

 

I was looking at this Cooler Master Storm Trooper case.  It says that there are 9 x 5.25" External Bays.

 

 

Drive bays can be connected either to the mainboard or to the HBA using plain SATA-to-SATA cables or MiniSAS SFF-8087 forward breakout cables. (Perhaps there are other/better sources?)

 

Then theoretically, I wouldn't actually need the RAID controller *yet* since the motherboard lists 8 available SATA ports?  I'm assuming, since some are different colors, that they might use a different controller, but I can't say for sure.  Is there any issue using different controllers, or is that something that isn't recommended?

 

Thanks so much for your help!

Link to comment

Have a look at this case:

 

  http://www.newegg.com/Product/Product.aspx?Item=N82E16811129021&cm_re=antec_nine_hundred-_-11-129-021-_-Product

 

It also has nine external 5.25" bays and costs a bit less.  I have two of them and really like them.

 

If you want a few more bays, have a look at this case:

 

  http://www.newegg.com/Product/Product.aspx?Item=N82E16811129100&cm_re=antec_nine_hundred-_-11-129-100-_-Product

Link to comment

I agree you should use 2 x 8GB modules instead of 4 x 4GB => not because of the specifications of the modules, but because with unbuffered RAM the memory subsystem will be FAR more reliable with only 2 modules installed (due to bus loading).    You can still use high performance modules if you want to maximize the memory performance:  http://www.newegg.com/Product/Product.aspx?Item=N82E16820233538

 

For the number of drives you've indicated (no more than perhaps 10), a 650w supply is PLENTY ... actually a 500w unit would be good, but the extra "headroom" a 650w unit will provide is reasonable.  You do NOT want an 850w unit -- it will just reduce your efficiency and provide no tangible benefit.
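A rough back-of-the-envelope estimate shows why; every per-component figure below is an assumption, not a measurement:

```python
# Worst-case draw happens when all drives spin up at once (assumed figures).
drives = 10                 # "no more than perhaps 10"
spinup_w_per_drive = 30     # typical 7200rpm spin-up peak, approx.
cpu_w = 54                  # i3-4150 TDP
board_ram_fans_w = 40       # motherboard, RAM, fans, misc. (ballpark)

peak_w = drives * spinup_w_per_drive + cpu_w + board_ram_fans_w
print(f"Estimated worst-case draw: ~{peak_w} W")   # ~394 W on these guesses
# A quality 500-650W unit covers this easily; an 850W unit just sits further
# below its efficiency sweet spot most of the time.
```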

 

This would be a good choice:  http://www.newegg.com/Product/Product.aspx?Item=N82E16817139012

 

With 8 SATA ports on the motherboard, you certainly don't need an add-in SATA controller ... at least not for a while.  In fact, you could use an M.2 SSD for your cache drive (if you want to use one) and that would allow you to have parity plus 7 data drives before you'll need an add-in controller.  With modern high-capacity drives you may never need one  :)

 

As for a case ... you can buy a case with a lot of 5.25" bays and then buy add-on drive cages;  or you can simply buy a case with a prodigious number of internal bays, such as the excellent Fractal Define R4:

http://www.newegg.com/Product/Product.aspx?Item=N82E16811352020&cm_re=Fractal_R4-_-11-352-020-_-Product    This case has 8 internal drive bays; plus 2 5.25" bays that could hold a 3-in-2 cage; plus room for a drive expansion cage that holds 4 more drives ... so it can easily go to 15 drives total.  It's a bit large (check the dimensions), but it's a superb case with excellent cooling.

 

Link to comment

By the way, here's the add-in drive cage that fits between the PSU and current cage and holds 4 more drives:  http://www.caselabs-store.com/standard-hdd-cage-assy/

 

It provides for an extra 120mm fan to cool those drives; and, together with a 3-in-2 cage, would let you support 15 drives (plus an M.2 unit) ... which should certainly be enough for the foreseeable future  :)

 

[With 4TB drives, that would be 56TB of storage; with 6TB drives it would be 84TB  8) ]
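For anyone checking the math behind those totals (15 bays with one drive given to parity - rough numbers only):

```python
# 15 drive slots, one used for parity, leaves 14 data drives.
data_drives = 15 - 1
for size_tb in (4, 6):
    print(f"{data_drives} x {size_tb}TB = {data_drives * size_tb}TB")
# 14 x 4TB = 56TB, 14 x 6TB = 84TB
```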

 

Link to comment

For the number of drives you've indicated (no more than perhaps 10), a 650w supply is PLENTY ... actually a 500w unit would be good, but the extra "headroom" a 650w unit will provide is reasonable.  You do NOT want an 850w unit -- it will just reduce your efficiency and provide no tangible benefit.

 

This would be a good choice:  http://www.newegg.com/Product/Product.aspx?Item=N82E16817139012

 

Any idea what the maximum number of drives the 650w power supply would support?  All my drives (6 of them) are 7200rpm drives.  If I add a cache / apps drive, that will be 7.  I want to make sure that if I decide to add 4 or 5 more that I'll have the ability without needing to upgrade the power supply.

 

With 8 SATA ports on the motherboard, you certainly don't need an add-in SATA controller ... at least not for a while.  In fact, you could use an M.2 SSD for your cache drive (if you want to use one) and that would allow you to have parity plus 7 data drives before you'll need an add-in controller.  With modern high-capacity drives you may never need one  :)

 

That's a great idea of using the M.2 slot on the motherboard!  It would be the cache / apps drive.  What do you think would be appropriate for size in that case?  I know it has to do with how much you normally write to the array in a 24h time period, but mine is sporadic.  Most of the time it's just a few TV shows, but other times, I'll buy a few movies at the store and rip them 1:1.  If I get 3 or 4, it could easily be 100GB depending on the movie.

 

Would 128GB be sufficient?  From what I see, if there isn't space on the cache drive, then unRAID is smart enough to just write the data to the array.  Is that correct?  If so, then 128GB should give me plenty of space for both cache and apps?  Or am I missing the overhead of the temp storage for SABnzbd while it downloads and unpacks everything?
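Trying some rough math on my own numbers (every size below is a guess):

```python
# SABnzbd roughly doubles its footprint while unpacking (archive + extracted
# files), so size the cache for that plus whatever is waiting for the mover.
largest_download_gb = 30     # assumed biggest single NZB
rips_waiting_gb = 100        # a few 1:1 rips done in one evening
app_data_gb = 10             # MySQL, NzbDrone, SABnzbd data (assumed)

peak_gb = 2 * largest_download_gb + rips_waiting_gb + app_data_gb
print(f"Peak cache usage on these guesses: ~{peak_gb} GB")   # ~170 GB
# 128GB could get tight on a busy day; shares can be set to fall back to the
# array when cache free space drops below a floor, so it degrades gracefully.
```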

 

As for a case ... you can buy a case with a lot of 5.25" bays and then buy add-on drive cages;  or you can simply buy a case with a prodigious number of internal bays, such as the excellent Fractal Define R4:

http://www.newegg.com/Product/Product.aspx?Item=N82E16811352020&cm_re=Fractal_R4-_-11-352-020-_-Product    This case has 8 internal drive bays; plus 2 5.25" bays that could hold a 3-in-2 cage; plus room for a drive expansion cage that holds 4 more drives ... so it can easily go to 15 drives total.  It's a bit large (check the dimensions), but it's a superb case with excellent cooling.

 

Is there a good reason to use 5 in 3 cages other than being able to jam more drives in a case and still have it looking "sexy"?  Dropping the two cages would save me $160 on the build which I could use to start throwing HDDs at my old build and use that for a backup.  I really need a backup.

 

[With 4TB drives, that would be 56TB of storage; with 6TB drives it would be 84TB  8) ]

 

I could totally download the Internet......the entire thing!  :P

Link to comment

Any idea what the maximum number of drives the 650w power supply would support?

 

I have 3 servers;  the closest to yours has 14 drives (most are 7200rpm units) using a 500w Seasonic PSU, which is PLENTY.    The 650w Corsair unit I suggested will easily handle all the drives you could put in the Fractal R4.

 

 

That's a great idea of using the M.2 slot on the motherboard!  It would be the cache / apps drive.  What do you think would be appropriate for size in that case?

 

I'd use a 256GB M.2 unit.  They're quite reasonably priced these days:

http://www.newegg.com/Product/Product.aspx?Item=N82E16820148798&cm_re=256GB_M.2-_-20-148-798-_-Product

 

 

Is there a good reason to use 5 in 3 cages other than being able to jam more drives in a case and still have it looking "sexy"?

 

There are two reasons folks use 5-in-3 cages:  (1)  to increase the number of drives a case can support; and (2) to make it easier to swap drives in the event of failure.

 

I don't think #1 is needed if you use an R4 case; and it's VERY simple to change drives in the R4 ... not quite as simple as the hot-swap cages; but not a difficult task at all.    And the drives are better cooled in the internal cage with a large fan blowing directly over them than they are in any of the 5-in-3 cages.

 

 

... I really need a backup.

 

Absolutely agree !!  You'd be surprised how many folks go to all the trouble of building a fault-tolerant server; and collecting prodigious amounts of data ... and then DON'T back it up !!

 

 

Link to comment

QUOTE from OP:  "... Is there a good reason to use 5 in 3 cages other than being able to jam more drives in a case and still have it looking "sexy"? ..."

 

Once you get beyond five or six drives, you really want your drives in cages where drives can be removed without disturbing the power and SATA cables at the back.  What can very easily happen is that when you work on one drive, the cable movement displaces a cable on another drive.  Working on that new issue can then cause a similar problem on yet another drive.

 

Link to comment

Absolutely agree !!  You'd be surprised how many folks go to all the trouble of building a fault-tolerant server; and collecting prodigious amounts of data ... and then DON'T back it up !!

 

I think I'm on to something with your help, Gary.  It looks like I can do the upgrade for about $630.

 

Fractal Design Define R4

 

CORSAIR HX series HX650 650W

 

G.SKILL Trident X Series 16GB (2 x 8GB) 240-Pin DDR3 SDRAM DDR3 1600

 

Crucial M550 M.2 SSD

 

Intel Core i3-4150 Haswell Dual-Core 3.5GHz

 

 

The other thing I've been kicking around is upgrading my desktop computer with an X99 motherboard I have lying around (not the Gigabyte we've been discussing) and transferring the i5 chip and motherboard currently in my desktop to the unRAID server.  But DDR4 RAM and the LGA2011-v3 chips are freakin' expensive.  It would cost me about $1,400 to do that cascade upgrade.

Link to comment

You keep some mighty nice stuff just "lying around"  :)

 

That X99 board would make an awesome system ... but I certainly agree that a nice Socket 2011-v3 CPU can get expensive, and DDR4 is equally pricey.    You could, of course, settle for a hex-core Core i7, but the new 8-core is Oh-So-Nice  :)

Link to comment
It looks like the SAS2008-based cards are much more expensive than something like the AOC-SASLP-MV8.  I saw the AOC-SASLP-MV8 on the hardware compatibility list in the wiki.  Is there a big difference?  I'm looking for something that would be plug and play.

 

The big difference is where you live. In Europe, Supermicro equipment is not as common as it is in the US.

I got my controllers from the bay for 30-50€ each - all pulled from Dell servers and crossflashed to IT mode.

Of course you're perfectly fine with the AOC-SASLP-MV8 - it is the controller that Limetech also uses in their builds.

 

Other things that come to mind:

- Use locking SATA cables!

- Use green drives in the 3-in-2 bay, if you ever use one and noise is an issue. Proper cooling of these 3-in-2 bays is noisy, as there is almost no room left between the drives.

Link to comment

Once you get beyond five or six drives, you really want your drives in cages where drives can be removed without disturbing the power and SATA cables at the back.  What can very easily happen is that when you work on one drive, the cable movement displaces a cable on another drive.  Working on that new issue can then cause a similar problem on yet another drive.

 

I think I might skip them for right now, just to save the $180.  It's something I can always add down the road as I add more drives.

Link to comment

It looks like things might not be as smooth as I thought they were going to be.  According to the manual for the Gigabyte GA-Z97X-UD5H, when you use the M.2 port, you lose SATA ports 4 & 5.

 

Chipset:

 

- 1 x M.2 PCIe connector

 

- 1 x SATA Express connector

 

- 6 x SATA 6Gb/s connectors (SATA3 0~5)

(M.2, SATA Express, and SATA3 4/5 connectors can only be used one at a time.  The SATA3 4/5 connectors will become unavailable when an M.2 SSD is installed.)

 

- Support for RAID 0, RAID 1, RAID 5, and RAID 10

Marvell® 88SE9172 chip:

- 2x SATA 6Gb/s connectors (GSATA3 6~7)

- Support for RAID 0 and RAID 1

 

 

The board has a total of 8 SATA connectors; minus the 2 lost when I install the M.2 SSD as the cache drive, that leaves me with 6 available ports.  I currently have 5 data drives and one parity drive, so the motherboard will be full.  That would mean I need to add a controller card when I want to upgrade.  So this isn't an issue today, but it will be when I want to add the next drive, which led me to look at the PCIe slots.  The motherboard shows 3 x PCI Express x16 slots, but it looks like they share bandwidth.

 

1 x PCI Express x16 slot, running at x16 (PCIEX16)

    *For optimum performance, if only one PCI Express graphics card is to be installed, be sure to install it in the PCIEX16 slot.

 

1 x PCI Express x16 slot, running at x8 (PCIEX8)

    *The PCIEX8 slot shares bandwidth with the PCIEX16 slot.  When the PCIEX8 is populated, the PCIEX16 slot will operate at up to x8 mode.

 

1 x PCI Express x16 slot, running at x4 (PCIEX4)

    *The PCIEX4 slot shares bandwidth with the PCIEX8 and PCIEX16 slots.  When the PCIEX4 is populated, the PCIEX16 slot will operate at up to x8 mode and the PCIEX8 will operate at up to x4 mode.

 

2 x PCI Express x1 slots

 

 

I remember reading somewhere that you need "x" lanes per drive attached to the PCIe bus, but I can't seem to find the correlation right now.
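Trying to reconstruct that correlation myself - assuming PCIe 2.0 and typical spinner speeds, so these numbers are guesses rather than specs:

```python
# How much bandwidth each drive gets on an 8-port card in the shared x4 slot
# (assumed figures: ~500 MB/s usable per PCIe 2.0 lane, ~180 MB/s per HDD).
lane_mb_s = 500
lanes = 4
drives_on_card = 8
hdd_seq_mb_s = 180

per_drive = lane_mb_s * lanes / drives_on_card
print(f"~{per_drive:.0f} MB/s available per drive at x4")     # ~250 MB/s
print("Slot is the bottleneck:", per_drive < hdd_seq_mb_s)    # False
```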

 

Also, do I need to be concerned that there are two different SATA controller chipsets?  Will that lead to issues?

 

Thoughts before I place an order? 

 

Link to comment

That's unfortunate.  I knew that using the SATA Express connection would cost a SATA port (that's common) ... but didn't realize the M.2 slot also did.

 

I wonder if that is a function of whether you use a SATA M.2 device or a PCIe device ...

 

I'll have a look at the manuals for a couple of boards and see if that's discussed.  [The Crucial M.2 you ordered is a SATA device; but there are others that use the PCIe interface on the M.2 slot -- they're a bit faster, but also pricier]

Link to comment

Apparently that's a chipset limitation.  I recently built a system using a Z97-based mini-ITX motherboard; and I wondered why they had only brought out 4 SATA ports, since the chipset supports 6.  I now know why => the board also has an M.2 slot; so they simply didn't bother to include the 2 other SATA ports that would have been disabled by using the M.2  :)

 

I had read about the SATA Express limitation ... but didn't (until now) realize that was also true with the M.2 slot.

 

I still like the M.2 devices, however => in addition to the ultra-compactness of not needing a drive bay for your system drive, they are much faster than SATA if you use the PCIe versions of the SSDs ... in fact there are now units that use the full PCIe x4 bandwidth available in the newer M.2 slots.
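Rough interface ceilings, just to put numbers on it (approximations after protocol overhead, not quoted specs):

```python
# Why the PCIe flavours of M.2 matter for a system/boot drive.
interfaces = {
    "SATA III (6Gb/s)": 550,    # MB/s, approx. usable
    "M.2 PCIe 2.0 x2":  800,    # Plextor M6e class
    "M.2 PCIe 3.0 x4": 3900,    # the newest slots and SSDs
}
for name, mb_s in interfaces.items():
    print(f"{name}: ~{mb_s} MB/s")
# For a cache drive sitting behind gigabit Ethernet (~115 MB/s), none of this
# extra headroom matters.
```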

 

Link to comment

It looks like the choice is either to live with the M.2 limitation and lose two ports, or use a traditional SSD and lose one bay in the case.  How does the PCI Express bandwidth sharing affect me moving forward as I add drives?  I have some smaller drives in my array that could be upgraded to larger drives if needed, so that's an option as well.

 

 

Link to comment

I don't think the bandwidth of the PCIe bus will have any impact on your drive speeds => the only time that's ever an issue is if you're using a very high end graphics card (or two) which can saturate the PCIe bus.  Not likely the case in any UnRAID system :)    [Although with virtualization, some may want to use high-end cards to virtualize a gaming machine -- not something I'd do, as if I wanted a high-end gaming system (I don't) I'd build a dedicated gaming box.]

 

I'd use the M.2 and lose (effectively) one SATA port.

 

Link to comment

I can get a 256GB traditional SSD for about $40 cheaper which is why I was considering that route.  I see that the Fractal Design case allows for mounting a SSD on the back side of the motherboard tray.  Neat idea.  :)

 

So if I installed a RAID controller in the x16 slot and maxed it out with drives, the PCI bus wouldn't be a bottleneck for the controller?

Link to comment

I can get a 256GB traditional SSD for about $40 cheaper which is why I was considering that route.  I see that the Fractal Design case allows for mounting a SSD on the back side of the motherboard tray.  Neat idea.  :)

 

Yes, I've used that case with an SSD mounted on the back.  It does indeed work nicely.  But I still prefer the M.2 slot, since it can interface so much quicker than SATA.    In the system I built with one, I used a Plextor M6e, which interfaces at PCIe x2 [ http://www.neweggbusiness.com/Product/Product.aspx?Item=9B-20-249-046&nm_mc=KNC-GoogleBiz-PC&cm_mmc=KNC-GoogleBiz-PC-_-pla-_-Internal+SSDs-_-9B-20-249-046&gclid=CMjm4MrsvsECFQiIaQod5wEAbQ ]    The system boots in < 10 seconds  :)

... and there are now motherboards with the new M.2 slots that can interface at PCIe x4 -- and a few SSDs are becoming available that support those speeds as well !!

 

 

So if I installed a RAID controller in the x16 slot and maxed it out with drives, the PCI bus wouldn't be a bottleneck for the controller?

 

Not a problem at all.

 

Link to comment

I guess I was thinking that since this is just a cache/apps drive, the increased access speed wouldn't be worth the additional $40 price tag.  I've never used an M.2 SSD interface, so I really don't know.  You obviously know what you're talking about and have been a tremendous help, so I'll take your advice if you think it is worth the additional cost?

Link to comment

For use as an UnRAID cache drive I'd agree it's NOT worth the extra cost => there's NO advantage to the potentially higher speeds when used as a cache (since you're network limited anyway); and the apps really don't need the extra speed either.    I do think it's worth the extra cost if you're using it as a system drive ... but that's not the case here.

 

In your case, just using a traditional SSD mounted on the back of the motherboard would provide plenty of performance, and not cost you a drive bay -- so that's probably your best choice.

 

Link to comment

Excellent.  I'll put that extra $40 towards the UPS that I really need to buy and have been putting off.  You don't have a specific model of UPS you like, while I'm harassing you with questions, do you?  I always use the CyberPower line when I install PowerEdge servers for clients, but the ones I buy would be overkill.  I checked the hardware compatibility list, but it seems to be infrequently updated.

Link to comment
