Ultra Low Power 24-Bay Server - Thoughts on Build?


Pauven


I'm close to wrapping up my testing, and will soon publish my final results on the "Ultra Low Power 24-Bay Server" build.  Here are the tests I have outstanding:

 

  • Test G1610 CPU idle power consumption under 5RC13, which has Linux kernel 3.9.3 and built-in Ivy Bridge power saving support
  • Test 2760A idle power consumption under 5RC13, to see if any mvsas driver changes have miraculously unlocked a power saving mode (doubtful)
  • Test X-Case RM-424 backplane wattage without any drives
  • Test the exact idle wattage of the WD Red 3TB drives
  • Test the exact idle wattage of the Samsung F2 1.5TB & F3 2TB drives
  • Determine a way to lower the fan speed on the X-Case RM-424 120mm chassis fans, as they seem to idle somewhat fast and consume 10W total at idle.
  • Swap out remaining Samsung drives to WD Red 3TB drives.

 

I have 4 Samsung drives left to upgrade, and I haven't even ordered the new drives yet, so this last item will push my final results out a couple weeks. 

 

My goal in replacing the Samsungs is three-fold.  I'd like to measure Parity Check performance with all identical drives.  I'd like to get my Parity Check time down to 7h32m (I'm calling you out garycase ;)  ).  And the WD Red 3TB drives will further reduce my idle wattage.  The Samsung drives I have are rated at 1W idle on the spec sheets, but my hunch is that spec is rounded down from something a little higher (maybe 1.1W-1.4W).  The Reds, with a 0.6W idle spec, are about half the wattage.  Choosing the "right" drives with a very low idle wattage adds up over 24 drives, something I gave no consideration to when I started this build.  (Ahh, I see garycase is on the same brainwave.)

 

The X-Case RM-424 has a very sophisticated fan controller built in.  Each of the 3 120mm fans has a quick release and pulls out of the case in a split second (really nice feature), and if a fan fails the controller sounds an alarm (annoying, but in a wonderful sort of way).  It also has a fan speed header that attaches to the MB.

In the video demo on X-Case's web site, the 3 case fans sped up when the CPU fan stopped.  I tried this test and it did not work for me, but I think that depends on a motherboard feature mine doesn't have.  I also set the motherboard fan speed to its lowest level, but I'm not sure the X-Case fan controller is responding to the input.  I never notice the fans speed up during a Parity Check; they seem to stay at the same idle speed all the time, and this idle speed has been sufficient to keep all drives under 40 degrees C even during Parity Checks.

Is there a way for me to monitor fan speed in Linux/unMENU?  I'm a little concerned about under-volting the fan controller, since it has on-board logic beyond just fan speed.  I find the fan noise just a touch loud for my preference (I have really sensitive hearing), so I'm hoping to slow the fans down to both save power and lower noise.
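
As a minimal sketch of what I'd be looking for, assuming the lm-sensors package and a matching hwmon driver for the board's fan-monitoring chip were loaded (the sysfs path below is just an example; it varies by board):

sensors                                          # prints fan RPMs alongside temperatures
cat /sys/class/hwmon/hwmon1/device/fan2_input    # raw tachometer reading for one fan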

 

Lower than 17w ??  :)

 

Indeed an admirable goal ... although as you know, your PSU efficiency is probably relatively poor at those power levels, so I'm not sure it's worth the bother, as there are other places where it's a lot easier to "find" a few watts to save.

 

I will only be performing the measurement at those levels to isolate the effects of Ivy Bridge power saving features.  Technically, my server will be idling in the 70W range, inside my particular power supply's efficient range, so a watt saved is a watt saved, regardless of where it comes from.

 

-Paul


I never notice the fans speed up during a Parity Check; they seem to stay at the same idle speed all the time, and this idle speed has been sufficient to keep all drives under 40 degrees C even during Parity Checks.

 

While you're trying to shave a second off my parity check results [ :)], you can work on your temps as well ==> my temps range between 32 and 35 during parity checks (and are lower at all other times)  :)

 


... by the way, I've been looking around trying to find a good multi-fan thermal controller that would let you basically turn off the fans automatically when the drives are spun down, but haven't found one yet.    If you could do that, and wired it in to all of the fans -- case and drive cages -- you could shave off ~ 10w of idle power.    I'll let you know if I find one (I plan to buy it and test it if I do).

 


I never notice the fans speed up during a Parity Check; they seem to stay at the same idle speed all the time, and this idle speed has been sufficient to keep all drives under 40 degrees C even during Parity Checks.

 

While you're trying to shave a second off my parity check results [ :)], you can work on your temps as well ==> my temps range between 32 and 35 during parity checks (and are lower at all other times)  :)

You can't post temps for HDs without also posting the room temp the unRAID server is in.

 


I never notice the fans speed up during a Parity Check; they seem to stay at the same idle speed all the time, and this idle speed has been sufficient to keep all drives under 40 degrees C even during Parity Checks.

 

While you're trying to shave a second off my parity check results [ :)], you can work on your temps as well ==> my temps range between 32 and 35 during parity checks (and are lower at all other times)  :)

You can't post temps for HDs without also posting the room temp the unRAID server is in.

 

Fair enough => Ambient is 75F

 


I never notice the fans speed up during a Parity Check; they seem to stay at the same idle speed all the time, and this idle speed has been sufficient to keep all drives under 40 degrees C even during Parity Checks.

 

While you're trying to shave a second off my parity check results [ :)], you can work on your temps as well ==> my temps range between 32 and 35 during parity checks (and are lower at all other times)  :)

You can't post temps for HDs without also posting the room temp the unRAID server is in.

 

Fair enough => Ambient is 75F

 

You also have to consider drive layout in the chassis.  A single row of drives will run cooler than 6 rows of 4 drives.  The drives in the middle get surrounded on all sides, so you get this "localized warming" effect.  The drives are also packed in so tightly in 20/24 drive cases that there's hardly any room for air to pass between the drives.

 

Also, beyond the consideration of ambient temp is server location.  Mine?  It's under an IKEA sofa/bed that has a 'skirt' all the way around.  Technically, the server breathes in some of the exhaust air that stagnates under the sofa.  Certainly not ideal.

 

Under 40 was also a generalization; most of my drives run in the range you mention.

 

All in all, a very impressive result, and more than 10 degrees cooler than my Norco 4020 in the same basic setup and same location.  I have no complaints.

 

... by the way, I've been looking around trying to find a good multi-fan thermal controller that would let you basically turn off the fans automatically when the drives are spun down, but haven't found one yet.    If you could do that, and wired it in to all of the fans -- case and drive cages -- you could shave off ~ 10w of idle power.    I'll let you know if I find one (I plan to buy it and test it if I do).

 

Those types of solutions won't work in the RM-424 case, which uses a proprietary cooling connection.

 

Besides, I don't want to turn off the fans, as I need airflow for the 2760A.  I only want to slow them down another 10%-25%.

 

Something I have to consider is that, if the MB is truly deciding how fast to spin the fans, I'm trusting ambient air temps at the MB to control cooling for my HDs, which are 12"+ away.  I could also have one drive in thermal runaway, and that alone might not warm the case enough to trigger the MB to spin up the fans.

 

I wouldn't mind having some software control/override: since the HD temp can be read through SMART, if even a single drive's temp spiked it could tell the MB BIOS to raise the case fan speed.  Anything like this for Linux?
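
For context, reading an individual drive's temperature is one smartctl call away (this assumes smartmontools is installed; the -n standby flag keeps the query from waking a spun-down drive):

smartctl -n standby -A /dev/sdb | awk '/Temperature_Celsius/ {print $10}'

So the pieces exist; it's the glue between that reading and the case fan speed that I'm missing.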


You also have to consider drive layout in the chassis.  A single row of drives will run cooler than 6 rows of 4 drives.  The drives in the middle get surrounded on all sides, so you get this "localized warming" effect.  The drives are also packed in so tightly in 20/24 drive cases that there's hardly any room for air to pass between the drives.

 

Also, beyond the consideration of ambient temp is server location.  Mine?  It's under an IKEA sofa/bed that has a 'skirt' all the way around.  Technically, the server breathes in some of the exhaust air that stagnates under the sofa.  Certainly not ideal.

 

Under 40 was also a generalization; most of my drives run in the range you mention.

 

All in all, a very impressive result, and more than 10 degrees cooler than my Norco 4020 in the same basic setup and same location.  I have no complaints.

ALL very good points; too tired today to type too much.  I interjected as I felt that, sh^t, his box could be outside in the cold for all we know, so some detail is required.  But you nailed it, all these points matter; getting my Norco 4224 running cool was not easy or cheap.  39C is the high when all drives are spun up and being written to in that chassis.

 

Those types of solutions won't work in the RM-424 case, which uses a proprietary cooling connection.

 

Besides, I don't want to turn off the fans, as I need airflow for the 2760A.  I only want to slow them down another 10%-25%.

 

Something I have to consider is that, if the MB is truly deciding how fast to spin the fans, I'm trusting ambient air temps at the MB to control cooling for my HDs, which are 12"+ away.  I could also have one drive in thermal runaway, and that alone might not warm the case enough to trigger the MB to spin up the fans.

 

I wouldn't mind having some software control/override: since the HD temp can be read through SMART, if even a single drive's temp spiked it could tell the MB BIOS to raise the case fan speed.  Anything like this for Linux?

 

Have you seen this thread: Temperature based fan speed control?

http://lime-technology.com/forum/index.php?topic=5548.msg110988#msg110988

http://lime-technology.com/forum/index.php?topic=5548.msg52398#msg52398


Well, when I get a bit past the novice stage maybe I'll learn a bit more  :) I've only been doing this stuff for ~ 50 years or so  :)

[ :) all you want ... but,] on the subject of disk subsystem performance (measurement/analysis), you are at the novice stage!  You can continue to deny that, or you can embrace it and endeavor to educate yourself on the subject.  While you might not like my approach (think of it as a corollary to "tough love"), it is my way of helping you.  Consider ... isn't it better to learn that the premise upon which you are basing your claims, and even technical advice to others, is incorrect?  (Rather than continue to misinform others and (ultimately) embarrass yourself.)

 

As for ~ 50 years ...

So what!!

 

I'm an old fart myself (received my first software royalties in 1969, and was supporting Internet development when there were 8 machines connected [1970]). But something I learned back then was the extreme importance of knowing what you don't know ... and the value of admitting your mistakes.

I didn't mean to imply Limetech's "new Atom-based server" used new technology;  it's simply a new server in Limetech's offerings.

Sounds like back-pedalling to me. You had stated

I don't think 11 or 12 is an issue at all.  As I noted in an earlier post, I'm confident it would perform very well with a SASLP-MV8 and 8 more drives ... total of 14.  Beyond that, there would clearly be bandwidth issues.  Note that this is the exact configuration used in Limetech's new Atom-based server

in an attempt to bolster your (flawed) argument that

My X7SPA does a parity check in 8:07 with six 3TB WD Reds ... and I suspect would do it in ~ the same time with 14 of them if I added an 8-port controller.

and

I think it would work every bit as good as it does now with up to 14 drives [the 6 it supports natively plus one fully-loaded 8-port card].

by using the (presumed) authority of Limetech choosing the same (antiquated) [chip-level] components for their "new" product.

Parity checks averaging over 110MB/s are certainly good enough for me  8)

Then (like I said), stick to 11 or 12 drives maximum, because the 14-drive config (that you are so sure of!) will drop that average by 20+%.

 

--UhClem

 


I wish I had any inkling of what any of that meant, because I'm the type of guy who can't back down from a challenge.  Since you know, UhClem, I'm surprised you don't take your own challenge.  Think about it-- everyone would benefit!

Take my own challenge?? Where's the fun in that? :)

 

Maybe you don't think I already know the answer [but I do!]

or

Maybe you're trying to goad me into spilling the beans ...

 

Which would deprive someone who (like you) loves a challenge, but also does have an inkling.  That person should get the satisfaction (of the solution) and the appreciation from their community. [Remember, I don't use unRAID--think of me as a "stranger from a strange land" :)]

 

--UhClem "Welcome to the Future Fair--a fair for all, and no fair to anybody"

 

 


I wish I had any inkling of what any of that meant, because I'm the type of guy who can't back down from a challenge.  Since you know, UhClem, I'm surprised you don't take your own challenge.  Think about it-- everyone would benefit!

Take my own challenge?? Where's the fun in that? :)

 

Maybe you don't think I already know the answer [but I do!]

or

Maybe you're trying to goad me into spilling the beans ...

 

Which would deprive someone who (like you) loves a challenge, but also does have an inkling.  That person should get the satisfaction (of the solution) and the appreciation from their community. [Remember, I don't use unRAID--think of me as a "stranger from a strange land" :)]

 

--UhClem "Welcome to the Future Fair--a fair for all, and no fair to anybody"

 

I believe you know the answer, and I don't care if you spill any beans.  While I may have an inkling, honestly, I'm not your guy.  I'm spending all my inklings designing products and writing code for my own company, with the hope of one day turning a buck or two.  I have no inklings to spare.  I simply suggested that, since you know the answer and believe the outcome will be so wonderful, you could simply go ahead and code it, for the benefit of all.  Your dskt.sh script is a wonderful help, and I'm not the only person who appreciates your contributions.  I hope you continue to contribute and make unRAID a better product and a better community.

 

As for the rest of your "conversation" with garycase, please take it somewhere else.  It has gone off-topic and carries a negative tone, and is not appropriate or appreciated in this thread, which is supposed to be about power-efficient high drive count servers.  I have made a couple of posts trying to get back on target, and value any insight or links you may have on the topic at hand. 

 

Thank you.

 

-Paul

 


... I'd like to repeat a "challenge" I made ...

 

[Same exact functionality/results as now.]

 

Think about it-- several hours saved per Preclear cycle for a 2TB drive.

I'd put this in the "so what" category => the time it takes to do a pre-clear is very much irrelevant.

Maybe to you ... but how about the rest of unRAID users:

1. What about the saving in power (and heat) [Pardon me for getting back on-topic :)]

2. Reduction in array downtime (for those who don't keep a spare)

2a. Reduced time for running unprotected, because spouse/kids need to watch a flick

3. [unproductive] wear-and-tear on the new drive and the rest of the system

4. (All other things being equal) Faster is better!!

 

[And speaking of the (...) in #4] ...

I believe Joe L intentionally does a bunch of additional seeking during the post-read phase to intentionally "work out" the drive ... these seeks are obviously what causes the notably longer timing, but are also adding to the "confidence level" you can have in the drive.  To quote from Joe L's description:  "... I purposely keep the disk head moving a LOT more than if just reading each block in turn.  This is to identify hardware that is marginal. "

To start

these seeks are obviously what causes the notably longer timing

is absolutely false. As I originally pointed out, they contribute a very negligible 0.5% overhead, AND

I am not removing that from the procedure. [Note (above): same exact functionality] In addition to taking minimal time, it also doesn't impact the memory usage (because it is accomplished using Direct I/O).
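
(If "Direct I/O" is unfamiliar: it bypasses the kernel's page cache entirely, so the reads consume no buffer memory.  With GNU dd it's a single flag; an illustrative seek-and-read, with the device name and offset made up for the example:

dd if=/dev/sdX of=/dev/null bs=1M count=1 skip=12345 iflag=direct

Each such read goes straight to the platters and leaves the cache untouched.)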

 

Something else you failed to comprehend was that the "head dislocation dance" does NOT contribute to "confidence level" (read the original explanation). But it is harmless (and takes negligible time), and keeps the drive from getting bored :).

Bottom line:  Certainly not worth the effort to speed it up.

Sour grapes ?? Obviously, you don't have the knowledge or the creativity/ingenuity to take/solve the challenge.

 

But I think someone else will. And, once that light bulb flashes, it's only about 20 lines of sh and less than an hour of time.

 

Pretty damn good ROI !! [not that I'm trying to get back on-topic, and attempting to put a damper on all the "fun" you Green is god fans seem to be having :). I know Paul gets that one--anyone else?]

 

--UhClem

 

 


 

Awesome links!  Thank you very much madburg!  I have lots of reading to do.

 

Okay, so I spent some time on this today, but I've hit a few challenges.  When I run pwmconfig, it tells me I have no pwm-capable sensors installed.  Okay, fine, let's install them, right?  I run sensors-detect (after installing perl, of course), and it finds both an unknown ITE sensor and i2c bus sensors, but otherwise tells me it can't help me.

 

Running sensors shows only my core temps, not fan speeds.

 

So, since I can't seem to find the right module to load to support my PWM fans (both fan headers on the motherboard are the 4-pin PWM type), I can't interact with them at all.  This is under 5RC13.

 

I checked the lm-sensors version, and it is 3.1.2 (from Feb 2010!!!).  On the lm-sensors.org website, the current version is 3.3.4, released a couple of weeks ago, and looking at the changelog, I think this is my problem.

So, how do I actually upgrade lm-sensors?  I'm guessing I have to download it, move it onto the server, and compile it with make, right?  Do I need to first install the "C" compiler & development tools that are listed in unMENU?  I have almost no experience compiling, so I'm pretty lost here.  I really appreciate any help at all, and a step-by-step for dummies is really what I need.

 

-Paul

 

EDIT:  Never mind, I just figured out how to upgrade.  I searched Google for "slackware 14 lm_sensors" and found the 3.3.2 package (link below).  I downloaded the package and ran the install instructions, and now I'm at 3.3.2.

 

http://pkgs.org/slackware-14.0/slackware-i486/lm_sensors-3.3.2-i486-1.txz.html
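
In case it helps anyone else: once the .txz is copied onto the server (e.g. into /boot/packages on the flash), the install itself is a single command using Slackware's native package tool:

installpkg /boot/packages/lm_sensors-3.3.2-i486-1.txz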

 

Now sensors-detect works properly, but I get the following:

 

Now follows a summary of the probes I have just done.
Just press ENTER to continue:

Driver `to-be-written':
  * ISA bus, address 0xa30
    Chip `ITE IT8772E Super IO Sensors' (confidence: 9)

Driver `coretemp':
  * Chip `Intel digital thermal sensor' (confidence: 9)

Note: there is no driver for ITE IT8772E Super IO Sensors yet.
Check http://www.lm-sensors.org/wiki/Devices for updates.

 

So it looks like I need a driver for the ITE IT8772E.  Looking at the changelog for lm-sensors, 3.3.4 has a note that the IT8772E is mapped to the it87 driver.

 

Running modprobe it87 gave me a device busy error; the fix for that is to edit the syslinux.cfg file in the boot directory, changing this:

 

label unRAID OS
  menu default
  kernel bzimage
  append initrd=bzroot 

 

...to this...

 

 

label unRAID OS
  menu default
  kernel bzimage
  append initrd=bzroot acpi_enforce_resources=lax

 

Then reboot.  Now I can do a modprobe it87 and get sensor data that includes the fans:

 

root@Tower:/boot/packages# sensors
coretemp-isa-0000
Adapter: ISA adapter
Physical id 0:  +32.0 C  (high = +85.0 C, crit = +105.0 C)
Core 0:         +32.0 C  (high = +85.0 C, crit = +105.0 C)
Core 1:         +29.0 C  (high = +85.0 C, crit = +105.0 C)

it8772-isa-0a30
Adapter: ISA adapter
in0:          +0.74 V  (min =  +1.96 V, max =  +0.35 V)  ALARM
in1:          +1.58 V  (min =  +1.36 V, max =  +1.21 V)  ALARM
in2:          +2.83 V  (min =  +0.23 V, max =  +1.25 V)  ALARM
in3:          +2.87 V  (min =  +0.56 V, max =  +1.06 V)  ALARM
in4:          +2.98 V  (min =  +2.33 V, max =  +0.05 V)  ALARM
in5:          +2.98 V  (min =  +1.20 V, max =  +2.50 V)  ALARM
in6:          +2.98 V  (min =  +0.24 V, max =  +0.84 V)  ALARM
3VSB:         +3.26 V  (min =  +0.26 V, max =  +4.73 V)
Vbat:         +3.17 V
fan1:           0 RPM  (min =   20 RPM)  ALARM
fan2:        1310 RPM  (min =   62 RPM)
fan3:        1418 RPM  (min = 2576 RPM)  ALARM
temp1:         -8.0 C  (low  = +52.0 C, high = +127.0 C)  sensor = thermistor
temp2:        +32.0 C  (low  = +10.0 C, high = +127.0 C)  sensor = thermistor
temp3:        +24.0 C  (low  =  +0.0 C, high = +127.0 C)  sensor = Intel PECI
intrusion0:  ALARM


SUCCESS!! ... sorta

 

After jumping through the hurdles above to get modprobe it87 loaded, I was able to run pwmconfig and configure the case fans.  But something is buggy in this version of pwmconfig, so it didn't quite create the config file right.  Also, when I used fancontrol to actually control the fans, the case fans simply ramped up to 100% and stayed there.

 

I finally found that if I set /sys/class/hwmon/hwmon1/device/pwm2_enable to 1 instead of 0 (fancontrol was setting this to zero, don't know why), I could then set /sys/class/hwmon/hwmon1/device/pwm2 to any pwm value I wanted from 0 to 255, and the fans responded.
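
In shell terms, that boils down to two writes (the hwmon1/pwm2 path is what it is on my board; the hwmon numbering varies per system):

echo 1 > /sys/class/hwmon/hwmon1/device/pwm2_enable    # 1 = manual control; 0 reverts to full speed
echo 100 > /sys/class/hwmon/hwmon1/device/pwm2         # duty cycle, 0 to 255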

 

The default speed was 120.  When I set it to 100, the fans got noticeably quieter and 1W+ dropped off the power meter.  When I set it to 80, another half watt was saved.  From there, I tried all the way down to pwm value 1, but I didn't see any more power savings, and the fans never stopped.  Setting it to 0 would ramp them to full speed.

 

I also found the fan control script "unraid-fan-speed.sh" which xamindar wrote here: http://lime-technology.com/forum/index.php?topic=5548.msg52398#msg52398

 

This script takes the place of pwmconfig and fancontrol, which is great because those tools weren't working for me.

 

You simply adjust a few parameters in the script, set it to a cron job every so many minutes, and it reads the temps of all your drives, picks the highest temp, and sets the fan speed based upon that.  This is exactly what I wanted!
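
To give a flavor of the approach, here is a stripped-down sketch of the same idea (not xamindar's actual script): it assumes smartmontools is available, and the hwmon path, drive glob, and temperature thresholds are examples that need adjusting per system.

#!/bin/sh
# Read every drive's SMART temperature, keep the hottest, map it to a PWM value.
PWM=/sys/class/hwmon/hwmon1/device/pwm2
HOTTEST=0
for d in /dev/sd?; do
  # -n standby skips spun-down drives so the check doesn't wake them
  T=$(smartctl -n standby -A "$d" 2>/dev/null | awk '/Temperature_Celsius/ {print $10}')
  [ -n "$T" ] && [ "$T" -gt "$HOTTEST" ] && HOTTEST=$T
done
echo 1 > ${PWM}_enable            # take manual control of the fan header
if [ "$HOTTEST" -ge 40 ]; then
  echo 255 > $PWM                 # hot: full speed
elif [ "$HOTTEST" -ge 35 ]; then
  echo 160 > $PWM                 # warm: medium
else
  echo 80 > $PWM                  # cool: quiet idle
fi

Run from cron every few minutes (e.g. */5 * * * * /boot/custom/fan-speed.sh, where the path is just an example), it keeps the fans tracking the hottest drive.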

 

The only shortcoming is that I wasn't able to actually turn the fans off.  By my measure, they are consuming about 6.5W now, so a little better than the 8.5W they previously consumed at idle.  These three fans consume 19W total at full speed.

 

The best part is the silence!


Nicely done.  6.5w isn't bad for fans.  ... and clearly that's what it will idle at.

 

So ...

 

motherboard/CPU    ~ 18w
2760A              ~ 29w
Fans               ~ 6.5w
24 x 3TB Reds      ~ 14.4w

Total:             ~ 67.9w

 

... Looks like a sub 70w idle is very likely once you get your drives swapped out  :)

 

Very nice -- I doubt there's anything you could do to reduce that UNLESS the Adaptec controller we talked about earlier turns out to be viable ... and that would be a lot of extra $$ for about a 10w savings in idle consumption.  Clearly not worth switching to !!

 


Looked at another way ...

 

24 drives with 67.9w idle = 2.83 w/drive idle power

 

6 drives with 20w idle = 3.33 w/drive idle power

 

So your system is more efficient than my trusty little Atom  :)

 

And if I bumped it up by 8 drives and an SASLP-MV8, which would add about 12w for the SASLP-MV8 (according to another thread here) and another 4.8w for the additional drives, and probably 5w for a cooling fan for the drives, I'd still be a bit above your per drive average [20w plus an additional 21.8w = 41.8w for 14 drives = 2.99 w/drive]

 

In any event, we're both running systems that idle at around 3w/drive with 3TB drives ... so that's about 1 watt/TB of idle power consumption -- I'd say that's not at all bad  8)


... Looks like a sub 70w idle is very likely once you get your drives swapped out  :)

 

Your figures are missing the power consumption from the SAS backplanes in the case.  I'll be measuring that this morning.  I believe it is in the 3W-6W range, so a sub 70W idle probably won't happen.

 

Interestingly enough, if the 2760A had idled around 5W, like we hoped, this would have been a 45W server build.  So close...

 

Looked at another way ...

 

24 drives with 67.9w idle = 2.83 w/drive idle power

 

I really like your way of thinking!  Watts per drive should be the benchmark, and <3W per drive is pretty good.

 

Had it been a 45W idle, then we would be talking about <2W/drive.  I think 2W/drive could be the green target to strive for.

 

I'm curious about the power consumption of the new ASRock Z87 motherboard with 22 SATA ports.  I don't imagine the LSI SAS 3008 controller chip is magically more energy efficient than other chips, but being integrated onto the motherboard may save a few watts here and there.  Of course, with all that extra circuitry on the feature-laden motherboard, I don't really expect it to be energy efficient, but who knows...


... by the way, I've been looking around trying to find a good multi-fan thermal controller that would let you basically turn off the fans automatically when the drives are spun down, but haven't found one yet.    If you could do that, and wired it in to all of the fans -- case and drive cages -- you could shave off ~ 10w of idle power.    I'll let you know if I find one (I plan to buy it and test it if I do).

 

  hmmm... I had started a project a few years ago to make my own fan controller, since I did not like what was on the market for my wants...  Perhaps I need to pick it up again... :-)  Originally I was going to use a Basic Stamp... now I am thinking about using the cool little Raspberry Pi...  Either way, it could be made to do EXACTLY what the user wants it to do...


I think 2W/drive could be the green target to strive for.

 

Now THAT is a real challenge.  Clearly we need a VERY efficient SATA controller card to have any real hope of hitting that => remember that even with the most efficient drives you can get (WD Reds) you have 0.6w/drive of idle consumption just from the drive itself !!  I agree, however, that this is a good goal -- the only thing really standing in the way is the relatively high power consumption of the SATA card even when the drives aren't active.

 

It would, however, be VERY interesting to see power consumption details for the 22-port AsRock board.  I agree that putting the LSI chip onboard should provide SOME power advantages, but I'm just not sure how much that might be.

[Of course you'd still need an add-in card to get those last 2 ports for a maxed-out system  :) ]

 

I'm also interested in the power consumption results of the Adaptec 72405 card, although the specs on Adaptec's site aren't encouraging => they show an "operating" consumption of 21.93w [1.28A @ 12v + .1A @ 3.3v] and make no mention of any power-saving idle state.      I wonder if Khelm has got his card yet ... [http://lime-technology.com/forum/index.php?topic=27671.msg246025#msg246025 ]

 

In any event, even with the backplane consumption, you'll probably still be no higher than 72w idle ... so you should hit the 3w/drive target  8)


remember that even with the most efficient drives you can get (WD Reds) you have 0.6w/drive of idle consumption just from the drive itself

 

Unfortunately, this is not true...

 

Today I tested both spinning and idle drive consumption of my drives.  For the Samsung drives, I tested 4 drives simultaneously of each model, for the WD Red drives, I tested 10 drives simultaneously.  I took the total power increase and divided it by the number of drives tested to come up with the drive average.

  • Samsung F2 1.5TB:  1.20W Standby, 4.75W Spinning Idle
  • Samsung F3 2.0TB:  1.30W Standby, 5.00W Spinning Idle
  • WD Red NAS 3TB:    1.25W Standby, 4.50W Spinning Idle

I was very disappointed.  I'm not sure if some of the extra power consumption for the WD Reds is related to SAS backplane connectivity, or even to the 2760A maintaining connectivity, but I expected the Samsung drives to be about 0.4 watts higher than the Reds, as they are rated at 1W standby versus the Reds' 0.6W.

 

My numbers aren't exact, due to measuring with a Kill-A-Watt meter, but the WD Red numbers are pretty solid.  10 WD Red drives saw an increase of 13W at standby (spun-down); had the 0.6W standby number been true, I would have seen only a 6W increase with 10 drives.

 

I guess the good news is that the Samsung Eco F2/F3 drives are just as efficient as the WD Reds.

 

Using 1.25W instead of 0.6W, anticipated total power consumption is now 83W for a 24-drive server when idle with all drives spun-down (standby), and that is after my fan tweaking that saved a few more watts.

 

That also means... 3.46W/drive.  :-\

 

You're winning in efficiency again.  ;)  Which actually makes sense, as the 2760A idle power consumption is soooo high.

 

I'm now thinking a 60W server might be the "best possible" efficiency, assuming you found the right controller card and mobile-class motherboard/cpu.  And that would give you a 2.5W/drive efficiency score.

 

I know some are complaining that the 22-port AsRock board has SATA ports and not SAS ports, but I'm now wondering if there is a power consumption penalty with SAS.  Perhaps that AsRock board has the right stuff to build an efficient 20-bay Norco 4020 server, using all SATA cables.

 

-Paul


Very interesting.    One of these days I'll have to disconnect my WD Reds and measure the system's power without them.  I suspect you're right and your consumption is also influenced by the backplane, but it'd be interesting to confirm that.

 

By the way, I think even a 2.5w/drive goal would be VERY good !!

 

Also, I note that the LSI support page indicates that the 2308 chip used on that board supports "Advanced Power Management" -- so I wonder if it has a low-power idle mode !!

 

I've not got any plans for a large-capacity system anytime this year;  but I occasionally build systems for friends ... I think the next person who wants one is going to use the 22-port AsRock board  8)  [They pretty much use whatever I tell them to !!]

 

... likely won't get populated with 22 drives => but at least I'll get a chance to measure the board's idle power.

 


Whoops !!  Forget that idea  8)

 

The AsRock board's "younger brother" (a 14-port version with only one of the LSI chips) was included in the following test, which shows the idle power for the board.  The 14-port version of the board IDLES at 57.6 watts !!  I'm sure the 22-port version is even worse ... so it's no competitor for what you've already built.

 

http://us.hardware.info/reviews/3555/5/asrock-z77-extreme11-review-with-workstation-features-energy-consumption

 

I think your 2760A setup is likely to win the 24-drive power consumption battle UNLESS the Adaptec 72405 that Khelm is testing proves to have better numbers.

 

Sounds like 3W/drive is about as good as it's going to get with what's currently available.

 


Today I tested both spinning and idle drive consumption of my drives.  For the Samsung drives, I tested 4 drives simultaneously of each model, for the WD Red drives, I tested 10 drives simultaneously.  I took the total power increase and divided it by the number of drives tested to come up with the drive average.

  • Samsung F2 1.5TB:  1.20W Idle, 4.75W Spinning
  • Samsung F3 2.0TB:  1.30W Idle, 5.00W Spinning
  • WD Red NAS 3TB:    1.25W Idle, 4.50W Spinning

I think you may be using the wrong terms.

 

I believe:

 

idle = spinning, but not transferring data

standby = powered up, but spun down

 

e.g. specs for a ST3000DM001:

 

Power - Operating 8.0W, Idle 5.4W, Standby .75W


Today I tested both spinning and idle drive consumption of my drives.  For the Samsung drives, I tested 4 drives simultaneously of each model, for the WD Red drives, I tested 10 drives simultaneously.  I took the total power increase and divided it by the number of drives tested to come up with the drive average.

  • Samsung F2 1.5TB:  1.20W Idle, 4.75W Spinning
  • Samsung F3 2.0TB:  1.30W Idle, 5.00W Spinning
  • WD Red NAS 3TB:    1.25W Idle, 4.50W Spinning

I think you may be using the wrong terms.

 

I believe:

 

idle = spinning, but not transferring data

standby = powered up, but spun down

 

e.g. specs for a ST3000DM001:

 

Power - Operating 8.0W, Idle 5.4W, Standby .75W

 

Yes & No.  You're correct that the manufacturers show "Idle" as the power consumption of a spinning but not-being-accessed drive, and use Standby and Sleep for the modes when the drive isn't spinning but the PC is either active or in the S3 state; but No, the power consumption figures listed are indeed correct for the INTENT of this discussion -- which is the "Standby" state with drives not spinning.

 

In any event, what Pauven's recent tests show is that the actual consumption of the drives is higher than the specs show for the Standby state.

 

