Ultra Low Power 24-Bay Server - Thoughts on Build?


Pauven


Hey guys,

 

I'm looking to build an energy efficient and fast 24 bay server, and I could use your feedback.  I'm not sure if I'm about to make a big mistake.

 

Background: 

I've been running an 18-drive unRAID server in a Norco 4020 enclosure for about 4 years now (and luvin' it).  I primarily use the server for music and movie playback (Blu-ray ISOs), plus PC backups.  I don't do any extra stuff like web/db serving, torrenting, or encoding, so the horsepower requirements are on the low side.  The most challenging thing the server ever does is rebuild data or run a parity check.

 

I used to sleep the server (I like to conserve energy whenever possible), but over the years I have come to find that problematic for various reasons.  Lately I've been letting the server run 24-7, and I love the always-on convenience.  Unfortunately it draws 108 watts at idle, and 270 during parity checks.  I'm looking to build a replacement server, and want to make it more energy efficient this time around.  My goal is to idle at 20 watts or less.  I'm also looking to improve parity check/rebuild performance, something that seems limited in my current box by the four Adaptec 1430SA cards.

 

Here's the build:

  • Processor: Core i5-3320M (Mobile G2 Socket) - $240
  • Motherboard:  JetWay JNF9G-QM77 (Mobile G2 Socket) - $200
  • Memory: 4GB total (2×204-pin SO-DIMM, DDR3-1333) - $35
  • HD Controller:  HighPoint RocketRAID 2760A (24-drive SAS, PCIe 2.0 x16) - $620
  • Enclosure:  Norco RPC-4224 ($450) or X-Case RM424 Pro ($900).  Either way, this means SAS backplanes.

 

Total cost is about $1.5-$2k.  This is obviously not a value server, but I'm okay with that, as my goals are energy efficiency and performance.

 

How I chose the components:

Starting with the processor, I've never noticed much CPU activity on my current server (AMD Athlon 64 X2 5600+, 2.9 GHz), and I don't anticipate I need a high horsepower CPU.  Looking at energy efficient options, I'm leaning towards an Intel mobile processor (Core i5-3320M) in the G2 socket.  This is the type of processor that would normally go in a laptop, and it is rated at only 35W TDP, not bad for a dual core that runs at 2.6 GHz and has video built in.  I'm expecting this alone would save 50-60 watts over my current build. 

 

Am I making a mistake in choosing too weak a CPU?  I do see my Athlon 64 X2 5600+ hit 100% utilization during 18-drive parity checks, but only momentarily.  I also think my current performance is bottlenecked by the 4 Adaptec 1430SA cards - if I end up with a faster HD controller card, the CPU might become the bottleneck. I think the Core i5-3320M is similar in performance, but I'm not really sure.  There is a 35W mobile quad-core, the Core i7-3840QM, but at over $600 I don't think the price premium is worth it.

 

There aren't many desktop motherboards that have a G2 socket, so the easy option is the JetWay JNF9G-QM77.  It only has one PCIe slot, but it's a nice, fast PCIe 3.0 x16 slot, so there's tons of bandwidth on that one slot.

 

Of course, since the motherboard only has one slot, that means I need a single SAS adapter that can drive all 24 bays.  I think having one adapter, instead of my current four, should also reduce power consumption a bit, helping me reach my 20-watt idle goal. 

 

Most of the 24-drive adapters (LSI/Areca/3ware) only have a PCIe 2.0 x8 connection, which is technically slower per drive than my current Adaptec 1430SA solution (about 167 MB/s per drive across 24 drives vs. 250 MB/s per drive across 4 drives), and they are all super expensive, so I'm not considering them.  There's one option, the HighPoint RocketRAID 2760A, that has a PCIe 2.0 x16 connection, which could theoretically deliver 333 MB/s to each of 24 drives - a 33% increase over my current per-drive bandwidth - and it is also much cheaper.  I would really like a PCIe 3.0 x16 adapter, which would give about 657 MB/s per drive, but I didn't find any.
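To make the arithmetic behind those numbers explicit, here's a quick sketch (my own, not from any vendor spec sheet), using the usual rough figures for usable PCIe throughput per lane:

```python
# Rough per-drive bandwidth when a controller's PCIe link is shared
# evenly across all drives. Per-lane figures are the usual approximate
# usable rates: PCIe 1.x ~250 MB/s, 2.x ~500 MB/s, 3.x ~985 MB/s.
MB_PER_LANE = {"1.0": 250, "2.0": 500, "3.0": 985}

def per_drive_mb_s(gen: str, lanes: int, drives: int) -> float:
    return MB_PER_LANE[gen] * lanes / drives

print(per_drive_mb_s("1.0", 4, 4))    # Adaptec 1430SA, 4 drives: 250 MB/s
print(per_drive_mb_s("2.0", 8, 24))   # typical 24-port card: ~167 MB/s
print(per_drive_mb_s("2.0", 16, 24))  # RocketRAID 2760A: ~333 MB/s
print(per_drive_mb_s("3.0", 16, 24))  # hypothetical PCIe 3.0 x16 card: ~657 MB/s
```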

 

Any thoughts on the minimum bandwidth per drive needed for the drives to perform at their best?  I know I see a parity check slowdown if I set my Adaptec 1430SA to x1 mode instead of x4, but I've never been able to test higher bandwidth to see if they speed up any.

 

I don't see anyone using the HighPoint RocketRAID 2760A on the forums, and I'm hesitant to be the first, but it seems to be the only card that fits the bill.  If this card won't work with unRAID, then my alternatives require multiple SAS adapter cards instead of one, and that would force a different motherboard with more PCIe slots, which would force a regular processor instead of a G2 socket mobile processor.  From everything I've looked at, the 2760A with an Intel mobile CPU is probably going to be the most energy-efficient 24-bay server I can build, but I have no idea if it will even work.

 

Does anyone have unRAID experience with the 2782/2760/2744/2740 family of HighPoint products? 

 

Anyone else build an ultra-low power server?  If so, how'd you do it?

 

Thanks for any help!  I need it!

 

Paul

 

 

 

EDIT JUN 9, 2013 with the results from the actual build:

 

THE BUILD

 

  • Power Supply: Kingwin Lazer Platinum LZP-650 - SPCR tested this power supply at 80+ efficiency even below 10% utilization; this server will be idling at 12.5% utilization
  • Motherboard: Foxconn H61S - Mini-ITX Intel socket 1155 motherboard with minimal features, delivering excellent performance at high efficiency
  • CPU: Intel Celeron G1610 - Ivy Bridge CPU with low-power HD 2500 graphics; very affordable, energy efficient, and great performance
  • Memory: G.SKILL ECO 4GB (2 x 2GB) 240-pin DDR3-1600 (PC3-12800) - Low voltage (1.35V) for this memory size/speed
  • SAS Controller: HighPoint RocketRAID 2760A - 24-drive SAS controller with 3x Marvell 88SE9485 HD controllers and a PCIe 2.0 x16 connection; very high per-drive bandwidth
  • 24-bay Server Chassis: X-Case RM-424 - Excellent build quality, great SAS backplanes, incredible cooling performance from 3x 120mm fans (MB connection, controllable via Linux)

 

 

THE NUMBERS

 

Tested under unRAID 5.0 RC13, Linux kernel 3.9.3.

 

  • Foxconn H61S + Intel Celeron G1610 + 4GB RAM (1.35V): 17.50 W (cumulative: 17.5 W) - New Intel power-saving features in Linux 3.9.3 saved about 0.5W over 3.4.36 (5RC12a)
  • Above + X-Case RM-424 SAS backplanes: 1.00 W (cumulative: 18.5 W) - Measured with no drives inserted
  • Above + X-Case RM-424 3x 120mm case fans: 6.50 W (cumulative: 25.0 W) - Measured with PWM set to 70, the lowest fan idle speed **
  • Above + HighPoint 2760A SAS controller: 28.00 W (cumulative: 53.0 W) - Measured connected to the SAS backplanes with no drives attached
  • Above + 16 hard drives: 1.15 W each (cumulative: 71.0 W) - Standby wattage for 2x Samsung F2 1.5TB, 3x Samsung F3 2TB, 11x WD Red 3TB; all drives averaged about the same wattage
  • Estimated with 24 drives: (cumulative: 81.0 W) - Estimated power consumption of the above build when all 24 drives are installed - UNTESTED

 

**NOTE:  I was not able to lower the case fan speed under RC12a for two reasons: (1) the IT8772E chip (fan control) on the H61S motherboard didn't have support added to the it87.ko driver until Linux kernel 3.8, and (2) unRAID 5RC12a was delivered without many kernel module drivers, including the it87.ko driver.  My weak Linux skills were insufficient to figure out how to get a newer it87 driver compiled onto RC12a; besides, it's a waste of time now that Tom has taken unRAID onto Linux 3.9.x.  I was able to load the it87 driver under 5RC13 and control the case fan speeds from Linux.
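For anyone fighting the same fan-control battle: once the it87 module loads (modprobe it87), the fans are just files under the standard hwmon sysfs interface.  A minimal sketch of what I mean - the chip name and PWM channel here are assumptions for my H61S, so check /sys/class/hwmon on your own board (and run as root):

```python
# Sketch: set a case-fan PWM through the Linux hwmon sysfs interface
# exposed by the it87 driver. "it8772" and "pwm1" are board-specific
# assumptions; verify them on your own hardware.
import glob, os

def find_hwmon(chip: str) -> str:
    """Return the /sys/class/hwmon dir whose 'name' file matches chip."""
    for path in glob.glob("/sys/class/hwmon/hwmon*"):
        with open(os.path.join(path, "name")) as f:
            if f.read().strip() == chip:
                return path
    raise RuntimeError(f"{chip} not found - is the it87 module loaded?")

hwmon = find_hwmon("it8772")                     # name the driver reports (assumed)
with open(os.path.join(hwmon, "pwm1_enable"), "w") as f:
    f.write("1")                                 # 1 = manual PWM control
with open(os.path.join(hwmon, "pwm1"), "w") as f:
    f.write("70")                                # 0-255; 70 = lowest stable idle here
```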

 

Idle wattage on 5 RC12a, with identical hardware configuration, is 75W.  The 4W savings under RC13 primarily come from the ability to control the case fan speed on my particular motherboard, with small additional power savings from newer Intel CPU power saving features.  There was no observable difference in 2760A idle wattage between RC12a and RC13.

 

 

Additional numbers coming soon: Idle Wattage with all drives spinning, Wattage with 1-drive spinning playing a Blu-Ray ISO, Wattage while copying data to array, Max Cold Boot Wattage, Max unRAID All-Drive Spin-Up Wattage, Max Parity Check Wattage, Parity Check Speed/Time with only WD Red 3TB drives


20W is a challenging goal for idle power consumption.

 

My 2nd UnRAID server achieves that, using a SuperMicro X7SPA-H-D525 motherboard.

http://www.superbiiz.com/detail.php?name=MB-X7SPA5

 

This is a mini-ITX board, but would mount in your case with no problem (as you obviously know, since the board you listed is also mini-ITX).    Like the board you listed, it only has one PCIe x16 slot; but you could use the same RAID controller you listed (which seems like serious overkill for UnRAID, since all you need is the SATA interfaces);  or simply put an 8-port controller in it and use 2 or 3 port multipliers to support 24 drives.

 

I don't have the add-in controller or port multipliers, so they'd add a small amount of idle consumption (but almost certainly less than a 2760A); but with 6 3TB WD Reds (15TB total storage) my system draws ~ 20W on idle and 48W during parity checks, which take right at 8 hrs.    I'm VERY happy with this setup [in a Lian-Li PC-Q25B case].

 

The components you've listed MAY get fairly close to that -- but I doubt they'll drop below 30W plus whatever the controller draws when idle.    There's a very similar build to mine on this forum using an H77 chipset mini-ITX board and an i3 that draws ~ 33W on idle ... and that's without any add-in boards.

 


  well... I was thinking about using an Intel Atom, like Garycase listed.  I just noticed yesterday that Tom has new server builds listed too!  INCLUDING a low power Intel Atom based one, also using the same board Garycase is using!  :-)

 

Supermicro X7SPA-HF-D525 (or equivalent)

Intel Atom D525 Pineview-D 1.8GHz 13Watt dual-core processor

4GB DDR-3/1333 DRAM

Supermicro AOC-SASLP-MV8 controller

 

  The motherboard even has Intel Gb LAN!  :-)  sooooo cool!  It also looks like it has an internal USB port for booting unRAID!  :-)

 

  It would be interesting to know if the HighPoint card would work for sure or not with unRAID, but it does look like the card should work with the low power Atom board too!

 

 


The SuperMicro Atom board is indeed an outstanding choice.  I had ordered an "HF" version, but it was out of stock at the time, so I just got the "H" version, which I'm very happy with.  The only difference is the HF version has IPMI ... a NEAT feature that I'd have liked to have; but not a compelling reason to wait for the board when I wanted it  :)    ... and yes, there's an internal USB port where you can put the USB flash drive  8)

 

I don't think there's any doubt about whether the HighPoint card will work with the board -- although I don't know whether it works with UnRAID.

 

An alternative is to use the AOC-SASLP-MV8 (same as Tom's using in the new server) and a couple of port multipliers, IF they work with the MV8.    Its predecessor clearly did [http://lime-technology.com/forum/index.php?topic=6590.0 ], but I don't know for sure about this card.    Without a multiplier, you'd have 14 ports.    With 5-port multipliers ($65 each) you could add up to 32 more !! [http://www.addonics.com/products/ad5sarpm-e.php ]    There's clearly plenty of room in the case to mount a few of the multipliers if you're using a mini-ITX motherboard  :)

 

The #1 question I'd have is whether or not the AOC-SASLP-MV8 supports port multipliers.


I found a couple posts on HardForum that confirm that this card works fine with the Addonics port multipliers:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816215322

 

What I do NOT know for certain is whether the Sil 3124 works with UnRAID.

It's listed as working under "Motherboard SATA controllers" ... but is not listed under the add-on controllers, except for the generic statement "... Other addon cards using any Silicon Image or other chipset mentioned above."    http://lime-technology.com/wiki/index.php/Hardware_Compatibility

 

Bottom line:  The X7SPA-HF-D525 (or "H" version) would be a great choice for a very-low-power system; but you might have to experiment a bit to determine how to best support 24 drives.  Choices would include simply using the 2760A (in my view, too expensive and probably draws more power than you'd like);  or using an 8-port controller card with port multipliers.      Of course, with 4TB drives readily available these days, you may find that the 14 ports you can support without any multipliers is all you need  :)  [You could build a 52TB system with 14 drives !!]

 


The SuperMicro Atom board is indeed an outstanding choice.  I had ordered an "HF" version, but it was out of stock at the time, so I just got the "H" version, which I'm very happy with.  The only difference is the HF version has IPMI ... a NEAT feature that I'd have liked to have; but not a compelling reason to wait for the board when I wanted it  :)

 

....just note that the -F version with IPMI includes a small GPU and BMC, and requires at least one NIC to be powered up all the time.

I've read somewhere that this would draw/add another 3-5W.

 

The non-F version comes with another GPU, but I don't know if this is an IGP, which would most likely have an idle mechanism, like the CPU.

AFAIR the Atom will not idle/sleep... it will just run at the same power state when idle and when under load.

The 1155 CPUs, including IGP,  are able to idle at very low rates.

 

Maybe these Celeron 847 based mobos are a better choice when it comes to idle power, because they have a low power chipset like the Atoms?

http://lime-technology.com/forum/index.php?topic=27265.0

 

...anyone tested it already?

 

Edit: I just got word from a colleague who uses the MSI mobo (http://www.msi.com/product/mb/C847MS-E33.html#/?div=Basic) in a non-unraid build.

        With 2x SATA disks and an "old" 250W PSU it draws 35W when not in idle/spindown ... looks promising.


I've got a friend who built an identical system with the HF version ... when we measured the idle power on his unit (with my Kill-a-Watt, so both systems were using the same meter), it was 1 watt more than mine -- mine measured 20W, his measured 21W.

 

Obviously there's some tolerance on the Kill-a-Watt, but I'm confident it's pretty close.  In any event, at that level, it's kinda in the "who cares" range  :)

 

I don't know what a Celeron 847 system would idle at, but agree that with a low-power chipset it should be pretty close to the same range.  HOWEVER, it would be totally unacceptable for the system the OP wants to build here => it's only got 3 SATA ports (plus an eSATA); and has one PCI (NOT PCIe) expansion slot;  so there's no reasonable way to use it for a 24 drive system.

 


I've got a friend who built an identical system with the HF version ... when we measured the idle power on his unit (with my Kill-a-Watt, so both systems were using the same meter), it was 1 watt more than mine -- mine measured 20W, his measured 21W.

...cool....are the PSUs identical as well?

 

I am running two of the X7SPA-HF-D510 in a SM 1U case with an 80+ Gold PSU... I cannot get them to go below 28W with one SSD attached.

...but I also cannot get them to go higher than 35W (which is peak during boot) ;D

 

I don't know what a Celeron 847 system would idle at, but agree that with a low-power chipset it should be pretty close to the same range.  HOWEVER, it would be totally unacceptable for the system the OP wants to build here => it's only got 3 SATA ports (plus an eSATA); and has one PCI (NOT PCIe) expansion slot;  so there's no reasonable way to use it for a 24 drive system.

 

...agreed as far as that mITX board is concerned...but with a Norco case, I don't see this as a major constraint  :D

There are others with more PCIe slots available.


Yes, identical PSUs.

 

I'm surprised the D510 board won't get lower than you've noted.  What PSU are you using?

 

With a low-power system like this you want to use the smallest quality power supply possible.  Remember that an 80+ supply is rated for that level of efficiency at 20% and higher loads (the specific certification measurements are done at 20%, 50%, and 100% load).    Efficiency falls off rapidly at loads lower than 20%.

 

So, for example, if you're using a 600W supply, then you're NEVER running anywhere near peak efficiency -- in fact you'd be WAY below the 20% point, so could easily be running at 60% efficiency or worse.  I'm using a 300W 80+ unit, and I'm still always below the 20% mark.
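As a rough illustration of why oversizing hurts - the efficiency points below are assumed, plausible values on a typical 80+ curve, not measurements:

```python
# Illustrative wall draw for the same ~20W DC load on different PSU
# sizes. Efficiency points are assumptions: 80+ certification only
# covers 20/50/100% load, and efficiency collapses below ~10% load.
def wall_watts(dc_load: float, psu_rating: float) -> float:
    pct = dc_load / psu_rating
    if pct >= 0.20:
        eff = 0.85        # inside the certified range
    elif pct >= 0.10:
        eff = 0.80
    elif pct >= 0.05:
        eff = 0.70
    else:
        eff = 0.60        # deep below the certification range
    return dc_load / eff

for rating in (160, 300, 650):
    print(f"{rating}W PSU: ~{wall_watts(20, rating):.0f}W at the wall")
# 160W PSU: ~25W    300W PSU: ~29W    650W PSU: ~33W
```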

 

There's no space constraint in the Norco case ... if there's an 847-based board with PCIe expansion slots it might be worth a try.    Personally, I'm such a HUGE fan of the SuperMicro X7SPA-H-D525 that there's simply no way I'd use anything else UNLESS I needed more CPU "horsepower" for additional add-ons.

 


I'm surprised the D510 board won't get lower than you've noted.  What PSU are you using?

It is this case: http://www.supermicro.nl/products/chassis/1U/503/SC503L-200.cfm but with a 200W 80 Plus variant of that PSU (PWS-201-1H, see: http://www.supermicro.com/support/resources/pws/)

 

And I have two... purchased approx. 1 year apart, and both give the same results... so most likely not a glitch in one of the PSUs.

I cannot tell if it is my metering device, though.

 

There's no space constraint in the Norco case ... if there's an 847-based board with PCIe expansion slots it might be worth a try.    Personally, I'm such a HUGE fan of the SuperMicro X7SPA-H-D525 that there's simply no way I'd use anything else UNLESS I needed more CPU "horsepower" for additional add-ons.

 

yepp.... the SM board is real server grade, whilst the 847 mobos are not.


Wow, everyone's contributing some really great info, thanks!

 

I guess I didn't expect my sub-20W goal to be so... challenging.  Last year I built an always-on HTPC using an AMD A10-5700, and I hit a 23W idle out of the box, no problem (though adding in the CableCard adapter pulled it up to about 35W).

 

I guess I was expecting that the use of a mobile processor would somehow magically idle lower, and I have read (unconfirmed) reports that Intel's mobile processors idle at 800 MHz, whereas their desktop processors idle at 1600 MHz.  Anyone know if this is true?  This could have a significant impact on idle wattage.

 

 

Why don't you go with a 1155 socket and an I3 2120T?  They have a 35W TDP as well..

 

That's a great suggestion, BetaQuasi.  From looking at the specs, this looks almost like a mobile processor in a desktop socket.  Any idea at what MHz these idle?  I'm not as familiar with Intel's offerings, so your link got me researching.  On the desktop processor scene, the T processors denote their low-power offerings.  I see many other T processors, many quite a bit newer and faster, all within the same 35W TDP envelope.  The only downside I see is that most socket 1155 motherboards have more features, and from an energy-efficiency standpoint, more features = more power draw.  Any suggestions on a power-efficient 1155 motherboard?  It doesn't have to be mini-ITX, but that form factor would probably be among the most energy efficient.

 

 

20W is a challenging goal for idle power consumption.

 

My 2nd UnRAID server achieves that, using a SuperMicro X7SPA-H-D525 motherboard.

 

Congrats garycase, awesome achievement!  I think your post was the most eye-opening for me, as in my mind the Atom processors are a significant step down in horsepower and power consumption, and yet you just barely achieved 20W with no add-in boards.  I looked at the specs for the HighPoint 2760A, and it has a 45W Max power consumption, but I didn't see any idle power consumption figures.  I'm guessing it might... might idle around 10W if I'm lucky.  Maybe this needs to be the sub 30W 24-drive server build...

 

I hadn't even considered these Atom processors.  This does bring up several questions for me:

 

Does adding more drives increase CPU utilization during parity checks and rebuilds?  I ask because you are getting great performance with 6 drives, but what happens when you have 24?  Wouldn't the processor be seeing a much larger workload?  As electron286 pointed out, Tom is using the same platform (thanks for the tip!), but that chassis maxes out at 14 drives.  I really have no idea how much the processor impacts parity check/rebuild performance, but I'm concerned this platform may not have the grunt for 24 drives.

 

As a quick note, the SuperMicro X7SPA-H-D525 motherboard's PCIe x16 connection is the physical size only, the electrical connection is PCIe 2.0 x4 (the x8 and x16 pins are not wired up).  That means 2GB/s max throughput.  The board I listed, the JetWay JNF9G-QM77, has a PCIe 3.0 x16 electrical connection, which is 15.76GB/s max throughput, or roughly 8x faster (but only if used with both a processor and an add-in card that support PCIe 3.0 x16).  This PCIe 2.0 x4 connection is still twice as fast as the AOC-SASLP-MV8 needs, but it's not going to match up well to the HighPoint 2760A.

 

Your 8-hour parity check on 3TB drives is fantastic, and I would be happy with that, but I would say you currently have the best possible scenario - all drives have a direct, dedicated connection to the motherboard.  As you correctly point out, adding the AOC-SASLP-MV8 doesn't get you to 24 drives, so you would still need some port multipliers.  I have two concerns/questions with this approach:

  • Does the use of port multipliers affect performance?  I think it has to, because you're taking dedicated bandwidth for 1 drive and sharing it with 5 drives.  We're talking IDE levels of bandwidth per drive, but I think this would only rear its ugly head during parity checks/rebuilds.  Does anyone know?

  • The AOC-SASLP-MV8 only has a PCIe 1.0 x4 connection, so 1GB/s max throughput.  The X7SPA-H-D525 motherboard only has 6 SATA ports, so that means 18 drives would need to be connected through the AOC-SASLP-MV8, and during a parity check each drive's share of the 1GB/s max throughput would be 55.5 MB/s, at best!  I'm fairly certain the performance will be horrible!  When I ran one of my Adaptec 1430SA's at PCIe 1.0 x1, each of the 4 drives was allocated only 62.5 MB/s, and I was getting 3TB parity checks closer to 30 hours!  Unacceptable!

 

On second thought - it would be better to use the port multipliers on the motherboard SATA ports, and not on the AOC-SASLP-MV8 (is this possible?).  That means only 8 drives are sharing that PCIe slot's 1GB/s bandwidth, for about 125MB/s each, so not as bad, but this still goes back to my question about the performance impact of port multipliers.

 

For comparison, the HighPoint 2760A would allocate 333MB/s of bandwidth per drive (on a compatible motherboard), probably way faster than rotating hard drives need, but it certainly wouldn't be holding them back.
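To put the three layouts side by side, here's the same even-split arithmetic as a quick sketch (my numbers; it assumes the motherboard ports are SATA-2 at roughly 300 MB/s and ignores protocol overhead):

```python
# Even-split per-drive bandwidth for the topologies discussed above
# (sketch; assumes all drives are read at once, as in a parity check).
def share(link_mb_s: float, drives: int) -> float:
    return link_mb_s / drives

print(share(1000, 18))  # 18 drives on the MV8's PCIe 1.0 x4 link: ~56 MB/s
print(share(1000, 8))   # only 8 drives on the MV8: 125 MB/s
print(share(300, 5))    # 5-port multiplier on a SATA-2 port: 60 MB/s each
print(share(8000, 24))  # 2760A on PCIe 2.0 x16: ~333 MB/s per drive
```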

 

 

Maybe these Celeron 847 based mobos are a better choice when it comes to idle power, because they have a low power chipset like the Atoms?

 

I think you're on to something, Ford Prefect (love the name, btw)!  In years past, Celerons were horrible processors to be avoided at all costs, but the current offerings are basically just the i3 minus a few features.  Some people are calling them Atom killers, since they are faster than Core 2 Duos and they idle like an Atom: I read a report of one person with a Celeron G1610 Ivy Bridge 2.6GHz 55W processor seeing a 20W idle, with an 860W power supply (good grief!).  With a good, efficient 1155 motherboard and a super-efficient power supply, this might be the right ticket.

 

 

here is a link to ppl who build systems that use 10 watts while idle.

 

Thanks b0ssman.  I had stumbled onto some of the same info (in German, too) during my searching.  I need to do some more research to fully understand how they do it, but it seems one of the main things they are doing is using a PicoPSU - think of a laptop power supply - which is much more efficient.  I'm not sure if these are physically compatible with the Norco/X-Case backplane power connections, and even if they were, do they have enough power to spin up 24 drives?  The WD Red 3TB peaks at 1.73 A @ 12VDC, so I think that is a 21-watt max spin-up power draw.  Hopefully a staggered spin-up would work with an add-on controller card, but I'm not sure if the motherboard SATA ports do a staggered spin-up in all scenarios (including boot, before unRAID is loaded).  Normal read/write power for the Red drives is 4.4 W, so for 24 drives that would be 106W, plus whatever the motherboard, processor, and add-in card draw.  I'm not sure you could do a 24-drive server with less than a 225W power supply (and keep in mind, there are minimum requirements on the 12V rails, so even 225W may not be big enough with all those drives).  The biggest PicoPSU I see is rated for 160W.
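Here's that spin-up budget worked out as a sketch, using the WD Red figures above; the 45W base figure for board + CPU + controller is just my guess:

```python
# Worst-case PSU load for 24 WD Red 3TB drives, using the quoted specs:
# 1.73 A @ 12 V peak spin-up (~21 W), 4.4 W active read/write.
SPINUP_W = 1.73 * 12   # ~20.8 W per drive while spinning up
ACTIVE_W = 4.4         # W per drive, read/write
BASE_W   = 45          # assumed board + CPU + controller draw (a guess)

def psu_load(drives: int, spinning_up: int) -> float:
    """Draw with `spinning_up` drives starting at once, the rest active."""
    return BASE_W + spinning_up * SPINUP_W + (drives - spinning_up) * ACTIVE_W

print(psu_load(24, 24))  # all at once: ~543 W - no PicoPSU survives that
print(psu_load(24, 4))   # staggered, 4 at a time: ~216 W
print(psu_load(24, 0))   # steady state, all reading: ~151 W
```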

 

Thanks for all the info!  It's nice to see positive, constructive responses to responsible computing - I was actually expecting many comments along the line of this being a waste of money with no ROI.

 

I'd love to hear some more thoughts and ideas.

 

Paul

 

 


You're correct about the bandwidth restrictions you'd see due to both the 2GB/s limit on the x4 port (I knew that, and should have noted it when I listed the specs of the board);  and with port multipliers -- clearly they share the individual SATA channel's bandwidth.    Few on-board ports work with port multipliers ... I don't know if the ones on this board do or not (it would certainly be a good test; since if they DO work, you could do this without any add-in boards !!).

 

It's certainly true that an x4 slot has limited bandwidth ... but assuming 125MB/drive is enough bandwidth, this isn't really a factor until you have more than 16 drives connected.    You'd want to connect 18, so your bandwidth would drop to about 111MB/drive.    I don't think that would cause a major drop in parity check times, but it would have SOME impact.    Port multipliers would be more of a concern, although if you use a SATA-3 card (with 600 MB speed) even those wouldn't be too bad, as an x5 multiplier would still have 120MB/drive of available bandwidth.

 

The CPU utilization is very low during parity checks -- I think it's safe to say that's essentially not a factor.  Given enough SATA bandwidth, I think the Atom could do a parity check on 24 3TB drives in very close to the same time it's doing it now.  I noticed virtually no difference in parity check speeds as I migrated my initial build from 3 to 6 drives.

 

I think the reason my system consumes 20W at idle is multi-faceted:  the Atom CPU itself is clearly consuming much less (its max TDP is 13W);  but I have 6 WD Red drives that consume about 0.6W each in sleep (3.6W total);  the motherboard components (SATA controller, GPU, etc.); plus whatever the Atom is actually drawing.  I'm not at all unhappy with 20W -- although I suppose less would be even nicer  :)

 

On the system you want to build, I'd think the add-in cards; port multipliers (if used); 24 drives; etc. is going to make it VERY difficult (or impossible) to get your idle consumption in the 20W range !!

 


... by the way, you're correct that the mobile CPUs have slower idle states, and in fact run at reduced clocks relative to their desktop "cousins."    They're clearly designed to minimize power utilization.

 

Note that the "T" series desktop units don't idle at significantly lower power consumption than their full-power versions ... they're simply designed to never exceed a lower TDP.    If you'd rather have "full power" then you can safely use a higher-power version without notably impacting the idle power.  I replaced an i5-3570T (45W TDP) in an HTPC with an i5-3570 (77W TDP) and noticed NO difference in its idle power consumption ... but notably better graphics and quicker processing speeds for CPU-intensive activities.

 

I know your goal here is 24 drives -- but is that really the requirement, or do you have a capacity goal?    With 5TB drives coming out later this year (WD Reds), you could build a 65TB system with 14 drives  :)    It won't idle at 20W ... even using the same board I used, 8 more drives would add 4.8W to the idle power, and an add-in controller would probably add 5-10 more.    But you should be able to achieve ~ 35W idle, and ~ 100W max, which I'd consider very good for 65TB of storage !!  :)


Why don't you go with a 1155 socket and an I3 2120T?  They have a 35W TDP as well..

 

That's a great suggestion, BetaQuasi.  From looking at the specs, this looks almost like a mobile processor in a desktop socket.  Any idea at what MHz these idle?  I'm not as familiar with Intel's offerings, so your link got me researching.  On the desktop processor scene, the T processors denote their low-power offerings.  I see many other T processors, many quite a bit newer and faster, all within the same 35W TDP envelope.  The only downside I see is that most socket 1155 motherboards have more features, and from an energy-efficiency standpoint, more features = more power draw.  Any suggestions on a power-efficient 1155 motherboard?  It doesn't have to be mini-ITX, but that form factor would probably be among the most energy efficient.

 

If you're considering going that route, I'd suggest the newer Ivy Bridge i3-3220T over the older Sandy Bridge i3-2120T. In general, the Ivy Bridge CPUs consume less power due to the die shrink to 22nm. I have the i3-3220T in my build and it idles at ~33W. Full power consumption specs are in the build thread, which is linked in my sig.

 

 


The Asus P8H77-I dirtysanchez used in his build is an excellent choice for a system with more "horsepower" than an Atom ... and has another significant advantage:  the PCIe slot is both x16 AND v3 ... so bandwidth is NOT an issue for adding all your other drives  :)

 

The P8H77 with an i3 is going to idle ~ 12W higher than the SuperMicro Atom board;  but you're getting a lot of "bang" for that small wattage "buck"  :)


It's certainly true that an x4 slot has limited bandwidth ... but assuming 125MB/drive is enough bandwidth, this isn't really a factor until you have more than 16 drives connected.

 

Keep in mind that PCIe 1.x x4 (1GB/s) is half the bandwidth of PCIe 2.x x4 (2GB/s), which is in turn half the bandwidth of PCIe 3.x x4 (3.94 GB/s).  In this case, it is the AOC-SASLP-MV8 that is the limiting factor, as (unless I'm mistaken) it is only PCIe 1.x capable.  If 125MB/drive is enough bandwidth, that is a limit of 8 drives on that card.

 

You're absolutely right about the idle power of these drives, so that's something I've neglected in my calcs.

 

Assuming the motherboard+processor are ~20W, and 24 Red drives @ .6W each are another 14.4W, this is already a 35 watt server at idle, at best.  Add in the HighPoint 2760A, which I'm guessing might idle ~10W, and we're at 45W.  That's still a significant improvement over my current system, even if it doesn't come close to my original goal.  But like you said, a sub 35W 24 drive server probably isn't possible, at least not without customized circuitry.
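Summing those estimates up (the 2760A idle figure is, as noted, just a guess):

```python
# Best-case idle stack-up from the estimates above (2760A idle is a guess).
idle_w = {
    "motherboard + CPU":       20.0,
    "24x WD Red in standby":   24 * 0.6,   # 14.4 W
    "HighPoint 2760A (guess)": 10.0,
}
print(round(sum(idle_w.values()), 1))      # ~44.4 W idle, best case
```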

 

I don't really have a specific capacity goal as much as an expandability goal.  My personal perspective is that a 24-bay server is cheaper, per drive and per gigabyte, than a small 7-drive server like what you are running, as there are shared hardware costs that are amortized over a larger number of drives.  I expand, as needed, using the highest capacity NAS rated drives available at the time.  Adding drives ends up cheaper than replacing drives, and 24 bays allows me to keep adding far longer.

 

I am looking forward to 5TB Reds, though... that will help.  A 115TB server, now we're talking!  It seems to me that we've been stuck at the current 4TB HD capacity for way too many years, and I have little faith that 5TB Red drives will be out this year.


we've been stuck at the current 4TB HD capacity for way too many years, and I have little faith that 5TB Red drives will be out this year.

 

I'm hopeful you're wrong about the 5TB availability.  Western Digital has announced that 4TB Reds are expected in the "last quarter" this year, with 5TB to follow within 90 days.    So it may not be by the end of the year, but it should certainly be very early next year.    But even with 4TB drives, a 24 drive system can expand to 92TB ... not exactly "small"  :)
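For reference, all the capacity figures we keep quoting are just the unRAID usable-space formula -- (bays - 1 parity drive) x drive size.  A trivial sketch (assumes single parity, as in the builds discussed here):

```python
# unRAID usable capacity: one bay goes to parity (single parity assumed).
def usable_tb(bays: int, drive_tb: float, parity: int = 1) -> float:
    return (bays - parity) * drive_tb

print(usable_tb(14, 4))  # 52 TB
print(usable_tb(14, 5))  # 65 TB
print(usable_tb(24, 4))  # 92 TB
print(usable_tb(24, 5))  # 115 TB
```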


If you're considering going that route, I'd suggest the newer Ivy Bridge i3-3220T over the older Sandy Bridge i3-2120T. In general, the Ivy Bridge CPUs consume less power due to the die shrink to 22nm. I have the i3-3220T in my build and it idles at ~33W. Full power consumption specs are in the build thread, which is linked in my sig.

 

Thanks DirtySanchez, that's pretty close to what I'm now thinking.  Instead of the i3-3220T, I'm looking at a Celeron G1610.  It has weaker graphics (which is actually good from a power perspective), is on the same Ivy Bridge 22nm process, and seems to idle slightly lower than the i3-3220T.  Performance wise, it is very similar.  If I went this path, I would run it on something like a H77 based mini-ITX, and your ASUS P8H77-I is on the short list of options I'm considering.

 

Another board I'm considering is the ASRock H61MV-ITX.  It's slightly older, but still supports that Celeron G1610, has one PCIe 3.0 x16, and only costs $60.  The G1610 is only $50.  Add in 4GB of RAM, and I'm out the door for under $150 with an energy efficient and powerful server.  I think this combo is gonna be pretty hard to beat.

 

Add in $60 for a 300W 80+ Gold power supply, $620 for the HighPoint 2760A controller card, $450 for a Norco RPC-4224, and another $50 for some SAS cables, and my total is around $1330.  That's a cost of only $56 a drive bay, which is not bad at all, especially considering each drive gets dedicated bandwidth of 333MB/s, and the server should have a very low idle power consumption (for a 24 drive server).

 

Still... does no one have any confirmation that the HighPoint 2760A will work with unRAID?  I haven't found any other single controller cards that I feel to be an acceptable alternative to the 2760A for a 24-bay server.  The port multipliers split bandwidth, which I don't like.  The other 24 port controller cards are both more expensive and slower.  If the 2760A won't work, I need to take a very different path.


Agree that would be a very good system with very low power consumption for a 24-drive setup.  The biggest unknown in terms of idle power consumption is the HighPoint 2760A ... and, like you, I'd hope that it isn't above 10W when all drives are spun down and it's effectively inactive.    And in any event, it's PROBABLY still going to use less wattage than an 8-port card plus 3 port multipliers.

 

Just to muddy the waters a bit ... you could also look at the low-power server boards using C2xx chipsets and low power Xeons  :)      These are lower power consumers than their desktop i-series cousins, but still have plenty of power.    SuperMicro makes a couple of nice micro-ATX boards for these ... and that would also give you more expansion slots, thus expanding the options for adding the SATA ports you need.  You'd also get ECC memory support -- not as good as buffered RAM, but nevertheless a nice bump in reliability.

 

But overall I like the Asus P8H77-I with a 2760A for what you're wanting to do -- it simply requires confirmation that the 2760A will work with UnRAID.    Not sure how to get that unless either (a) Tom knows;  or (b) you simply try it [might cost you a restock fee  :) ].    Note that the Marvell 88xx chipsets are supported by UnRAID (listed in several products on the compatibility page);  but there's no mention of the Marvell 9485 used in the 2700 series RocketRAID products.    My best guess is it WILL work ... but the only way to know for sure is to either try it, or confirm that Tom has tested it.

 


In regards to the possibility that port multipliers might work with the Atom mb: looking at the Intel forums, the ICH9R was designed for use with port multipliers, though at the time of the posts I read, no Intel drivers had been written to allow their use.  Since a couple of years have now passed, and we are talking about Linux, not Windows, there seems to be a good "chance" that unRAID may have proper drivers to allow the ICH9R chipset to work with port multipliers.  But I have not found anything yet from anyone posting that it works as designed at all, let alone in an unRAID Linux build.

 

  It sounds like it would be worth a try to play with however!  :-)  So I think I will start saving for a new hardware build!  using the Atom board, and SAS-mv8 and a port multiplier to play with!  :-)  I will let everyone know what the results are afterwards, but it might be about two months... :-(

 

  planned build...

 

    1.  MBD-X7SPA-H-O -  http://www.newegg.com/Product/Product.aspx?Item=N82E16813182233

 

    2.  AOC-SASLP-MV8 - http://www.newegg.com/Product/Product.aspx?Item=N82E16816101358

 

    3.  AD5SAPM - http://www.shopaddonics.com/itemdesc.asp?ic=AD5SAPM

 


  planned build...

    1.  MBD-X7SPA-H-O -  http://www.newegg.com/Product/Product.aspx?Item=N82E16813182233

    2.  AOC-SASLP-MV8 - http://www.newegg.com/Product/Product.aspx?Item=N82E16816101358

    3.  AD5SAPM - http://www.shopaddonics.com/itemdesc.asp?ic=AD5SAPM

 

Nice board, but buy this instead:  http://www.superbiiz.com/detail.php?name=MB-X7SPA5

 

The one listed at Newegg uses a D510;  the newer board uses a D525.    There's an even newer version coming out very shortly, but it's got fewer SATA ports, so I'd stay with the D525 board.

 

Note that IF the ICH9R supports port multipliers, you don't need the -MV8 board ... you can just use the Addonics port multipliers  :)    It may be more convenient internally to mount this version of the port multiplier:  http://www.shopaddonics.com/itemdesc.asp?ic=AD5SARPM%2DE&eq=&Tp= (depends on what case you're using -- if your case has available slots, the slot-mounted version is better; if not, the bare-board version might be easier to attach at an arbitrary spot or in an available bay).

 


Are you sure the HighPoint 2760A isn't using a built-in expander to increase the port count to 24?  I know another 24-port controller was essentially an 8-port controller with a built-in expander, and would therefore just split the bandwidth, similar to using an 8-port controller and an external expander - at least based on a review written by a purchaser.  The HighPoint would be almost certain to use less power, however.
