60TB, Cool, Quiet, Living Room Friendly



60TB Storage Array

 

OS: unRAID 5b10 (but all testing done with 5b9)

CPU: Intel Xeon X3430 2.4GHz quad-core

Motherboard: Supermicro X8SIL-F-O

RAM: Kingston KVR1333D3E9SK2/4GI DDR3-1333 4GB (2x 2GB) ECC Memory Kit - 2 kits, 8GB total

Case: iCute Super18 5G1-BB ATX Tower

Drive Cage(s): 4 x Norco SS-500 5-in-3 Hot Swap Racks

Power Supply: Corsair CMPSU-850TX 850W

SATA Expansion Card(s) and cabling (rough topology sketched after this list):

  • 1 x HP SAS Port Expander PCIe (468406-B21) ... everything you need to know is here: http://hardforum.com/showthread.php?t=1431652
  • 1 x Supermicro AOC-SASLP-MV8 SAS Controller
  • 1 x Skymaster S508 30cm Mini-SAS SFF8087 to Mini-SAS SFF8087 cable (joins HP SAS expander to MV8 SAS controller)
  • 5 x 3Ware SFF8087 SATA 1m cables (1->4 splitter cables) (joins each port on the HP SAS Expander to 4 individual SATA ports on the back of a Norco SS-500 cage)
  • 2 x 1->1 SATA Cables (came with motherboard) (joins 2 onboard SATA ports (6 available) to 2 internal HDDs (parity + online spare))
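
In other words, the drive-side topology looks roughly like this (a simplified sketch, not a wiring diagram):

onboard SATA (2 of 6 ports) ------------------------------------------> parity drive + online spare (internal)
AOC-SASLP-MV8 --SFF8087--> HP SAS Expander --5 x SFF8087-to-4xSATA--> 20 data drives in the 4 Norco cages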

 

Fans:

  • 1 x Arctic Cooling AC-ALPINE-11GT-REV2 CPU fan to suit the Xeon X3430 (LGA1156) (2065 RPMs, very quiet) - the stock Intel fan is extremely noisy! - motherboard fan header 1
  • 1 x 250mm Side-panel fan (comes with case) - large = low RPM (unknown RPM) = very quiet but lots of cooling - running off Molex
    power connector (not off motherboard fan header)
  • 1 x Noctua NF-P14-Flx 140mm Rear Case Fan - set to run at maximum RPMs (1140 RPMs, very quiet) - motherboard fan header 5
  • 2 x Arctic Cooling AC-F8-PRO-PWM 80mm Side-Panel fans (to cool 2 x internal HDDs) - set to run at maximum RPMs
    (2065 RPMs, very quiet) - daisy-chained together using PST and connected to motherboard fan header 4
  • 4 x Arctic Cooling AC-F8-PWM 80mm fans to replace fans in 4xNorco SS-500 cages (very noisy!) - set to run at maximum RPMs
    (2065 RPMs, very quiet) - 1st pair daisy-chained using PST to fan header 2; 2nd pair daisy-chained using PST to fan header 3

 

Parity Drive: Hitachi 3TB OS3230 Green HDD

Data Drives: 20 x Hitachi 3TB OS3230 Green HDDs

Cache Drive: not used

Online Spare Drive: Hitachi 3TB OS3230 Green HDD

Total Drive Capacity (usable): 60TB

 

Primary Use: Media Storage and Private Cloud Backups

Likes: extremely quiet, very low power (not measured yet), living-room friendly (tower not rack, and looks good in black next to AV equipment), massive capacity, online spare drive ready to manually replace failed drive even when I'm travelling away from home (thanks to remote control built into motherboard)

Dislikes: nothing so far

Add Ons Used: only Unmenu so far

Future Plans: depends on what features Tom adds later; I'd like to see the 20-drive limit increased. May add more plug-ins, although I prefer to use my media centre server for extras and Unraid just for the storage.

 

Testing and burn-in:

I had some weird issues in the early part of testing, but after re-seating all the data cables and upgrading to 5b9 (now running 5b10), it has been ROCK SOLID.

I simultaneously ran a full preclear (version 12 of the preclear script) on the 2 x motherboard-attached HDDs (parity + online spare) and 4 x data HDDs (all off the same SAS expander port/cable) ... it took around 39 hours to complete. I then added the parity + 4 x data HDDs to Unraid, formatted them, and ran a parity sync/parity check. All good. I then ran a full preclear SIMULTANEOUSLY on the remaining 16 x data HDDs, which took 115 hours but never missed a beat! Then I added those drives to the array, formatted, synced and parity checked. All perfect. Absolutely no issues reported by SMART or preclear or in the syslog.
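
If it helps anyone planning something similar, kicking off the simultaneous preclears looks roughly like this (a sketch only - it assumes the preclear script sits on the flash drive and that screen is installed, e.g. via Unmenu; the device names are placeholders, so triple-check yours first because preclear wipes the disk):

# one detached screen session per disk, so the preclears survive a dropped telnet session
screen -dmS preclear_sdb /boot/preclear_disk.sh /dev/sdb
screen -dmS preclear_sdc /boot/preclear_disk.sh /dev/sdc
# ...repeat for each remaining disk...
# re-attach to any session to check progress (Ctrl-A then D detaches again)
screen -r preclear_sdb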

 

Build notes:

I wouldn't have known where to start without the awesome support of the community forums. Nothing I've built is original; I've borrowed all the ideas from others (thank you!!). There are already great posts about the case (which is also called the Sharkoon Rebel 12), the Norco cages, the motherboard and the MV8 controller card, so I won't repeat that info here; I'll just add what insights I can to that body of knowledge.

 

The case is awesome if you want a living-room friendly tower (as opposed to a rack-mount build). It's fairly discreet (i.e. not massive, nor does it stand out). If the power LED is too bright it's easily covered or disconnected. I kinda like the blue and green LEDs on the Norco HDD trays (not very bright), but again I suppose they could be covered if required (lots of LEDs though!).

The case comes with 11 x 5.25 inch bays and 1 x 5.25 bay for a floppy drive. So the trick is to cut out the floppy bay to create 12 x 5.25 inch bays ready for the Norco cages to be added. For the cutting I purchased a Dremel and took a crash course in "Dremelling". Pretty easy, but my end result wasn't perfect (I can live with it). My advice - really take your time and slowly cut away the bits that need to be removed. One thing that was more difficult than I imagined was bending/hammering all the unneeded 5.25 inch tray holders out of the way (the little metal "feet" that, say, a DVD drive would normally sit on when inserted in the bay) - this needs to be done, otherwise you can't insert the Norco cages. I found that lots of swearing helps this process!

 

The Norco cages are quite cheap (around $100 each) and work very well, but the build quality isn't perfect (fair enough at that price). After replacing the cage fans (very very very noisy) with Arctic Cooling ones (beautifully quiet fans even at full RPMs), I found the screws that hold on the back plate of the cages seemed just a bit too narrow and short, and many kept falling out! Norco were great and I have replacement (longer) screws on the way for free (even though I'm in Australia).

 

I'm very happy with the noise levels of the fans. I bought mainly PWM fans so I could control the speed, but in the end it was just easier to run all fans at their maximum RPMs because they are so quiet anyway (and I hope the power consumption will be reasonable too, which I'll measure later). Low noise was essential given the design goal (to be living-room friendly).

Temperatures have ranged from the low 20s (Celsius) when all drives are spun down, to mid-to-high 30s when all drives are spun up and working hard (preclearing simultaneously, for example). My only concern is that it's the middle of winter here right now, so ambient temperatures in the room are around 10 degrees, and summer might be a challenge (with ambient temps rising to the high 20s or low 30s) - but fortunately most drives will stay spun down most of the time thanks to the way Unraid works, my own usage patterns and the drives being "green". Time will tell.

 

Other design considerations:

I chose not to use a cache drive because performance is a very low priority for me. If that changes I can easily add a cache drive later. I have a fully precleared drive waiting online as a spare in case a drive fails while I'm away travelling (which is a lot). I can easily connect remotely, add the spare drive in and rebuild the array; with the remote control feature of the motherboard (awesome, btw!) I can power the array off or on at any time remotely.

 

Boot (peak): not measured yet

Idle (avg):

Active (avg):

Light use (avg):

 

Images (cabling still needs to be tidied up after new Norco screws added):

http://photobucket.com/toby9999

 

Feel free to ask questions or point out omissions from this posting, thanks.

 

UPDATE: with the upgrade to 5b10, I've noticed some drive-related errors in the syslog (see below). I've seen others mention this same error in other postings. They don't appear to be causing any problems however.

 

Jul 29 00:48:18 Tower kernel: sas: command 0xee26f6c0, task 0xf7412f00, timed out: BLK_EH_NOT_HANDLED (Drive related)

Jul 29 00:48:18 Tower kernel: sas: Enter sas_scsi_recover_host (Drive related)

Jul 29 00:48:18 Tower kernel: sas: trying to find task 0xf7412f00 (Drive related)

Jul 29 00:48:18 Tower kernel: sas: sas_scsi_find_task: aborting task 0xf7412f00 (Drive related)

 

UPDATE 2: Just to be sure about those errors, I ran a complete parity check across the whole 60TB. It took 35 hours, but absolutely no errors were reported in the syslog or the GUI. My confidence is very high now, and I'm very happy to recommend this hardware and Unraid 5b10 to others.

 


 

future edits

 

Link to comment

Nice build!  I sure wish those iCute/Sharkoon cases were more readily available in the US, they look like nice cases.  I have a few questions about your build:

 

Why a Xeon and 8 GBs of RAM?  You say that you prefer to use your server just for storage and minimal add-ons, so why all the extra processing horsepower and memory?

 

Why the SAS port expander instead of a second SASLP card?

 

By 'remote control of the motherboard' I assume you are referring to IPMI.  How does IPMI help you replace a failed drive from afar?  If the warm spare drive is already installed in the system, you can just use the unRAID webGUI to stop the array, unassign the failed drive, assign the warm spare, and start the data-rebuild.  No need to access the system BIOS.

 

I'm not familiar with the errors in your syslog, so I apologize that I can't offer any advice there.

Link to comment

Wow, a 60TB monster. That's impressive.

Thanks.  Should take me about 5 minutes to fill all the space  ;D  Luckily I've got just about everything I need to build another one when required ... the only thing holding me back is the thought of doing more hammering and Dremelling on the case (aka gentle persuasion)!

Link to comment

Nice build!  I sure wish those iCute/Sharkoon cases were more readily available in the US, they look like nice cases.  I have a few questions about your build:

 

Why a Xeon and 8 GBs of RAM?  You say that you prefer to use your server just for storage and minimal add-ons, so why all the extra processing horsepower and memory?

 

Why the SAS port expander instead of a second SASLP card?

 

By 'remote control of the motherboard' I assume you are referring to IPMI.  How does IPMI help you replace a failed drive from afar?  If the warm spare drive is already installed in the system, you can just use the unRAID webGUI to stop the array, unassign the failed drive, assign the warm spare, and start the data-rebuild.  No need to access the system BIOS.

 

I'm not familiar with the errors in your syslog, so I apologize that I can't offer any advice there.

 

Thanks for the questions Raj.

 

I decided to go with a bit more horsepower just in case I ever wanted to use the array for more than just storage (e.g. download client, media centre server, transcoder, etc.).  I'm in two minds about which way to go, to be honest, but I'll take each requirement a step at a time and decide case by case which server to use.  It's all a journey of discovery and learning and experimentation, so what the build will look like one year from now is anyone's guess.

 

SAS ... my original design goal was to try to crack the 100TB mark with additional HDDs, but time ran out and I needed to travel for 2 months for business, so when I returned I decided I could live with 60TB for now - which is the current Unraid limit anyway (20 data drives).  I was going to have the additional HDDs sitting outside Unraid to be used for other purposes.

 

So while I could have used MV8s plus the 6 onboard SATA ports, I already had the HP SAS Expander so I used it (and I wanted to test/prove/qualify it for Unraid use - which so far looks like it works perfectly!).  Anyway, an MV8 gives you 2 x 4-HDD ports (i.e. 8 HDDs can be attached), whereas the HP SAS Expander has 9 of those 4-HDD ports.  You need to use one of the 9 ports to daisy-chain the expander to the MV8 (because unfortunately the expander only "expands", it doesn't do any controlling of the HDDs - so the MV8 needs to step in and act as the controller).  So this leaves 8 x 4-HDD ports (7 internal, 1 external) so you can easily attach up to 32 x HDDs.  If you look at my photos, you'll see that with just 2 x PCIe cards, you get LOTS of HDD connectivity (but beware, the expander is around $300 - $400 in the USA I believe - I paid $500 here in Australia).  For anyone considering the HP SAS Expander, a little trick - if you buy the next generation MV8 cards which are SAS2 not SAS1, you can apparently connect TWO of the MV8s to the same expander (leaving 7 x 4-HDD ports) resulting in a massive increase in bandwidth thanks to (a) double bandwidth of the new MV8s and (b) double cable connections/data paths between the MV8s and the expander card.  Refer to the hardforum link in my OP for more info on that.

 

Hot spare - I suppose I could have explained that a bit better.  You are absolutely correct.  With the spare drive already prepared and just sitting there, I simply need to VPN back into my home network, browse to the Unraid web GUI and swap in the spare drive (NEVER connect your Unraid server openly to the internet without a strong VPN and very strong password).  My reference to the remote control feature of the motherboard (yes, IPMI) was more to highlight the fact that I have full remote control over my server, including things like power control, console (including access to the BIOS if required), remote mapping of files/folders (the "Virtual Media" feature), temperature/fan RPM status info, etc.  I *really* like the Supermicro motherboard because of that feature, as well as the fact that it's well tested for Unraid, supports a good range of CPUs, has lots of fan headers and SATA ports, and has decent PCIe expandability.
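
For anyone who prefers a command line to the web interface, the same IPMI features can also be driven remotely with ipmitool (a rough sketch only - the IP address, username and password below are placeholders, and as above, only ever do this across the VPN):

# check and change the power state of the server from anywhere on the VPN
ipmitool -I lanplus -H 192.168.1.50 -U admin -P 'secret' chassis power status
ipmitool -I lanplus -H 192.168.1.50 -U admin -P 'secret' chassis power on
# graceful OS shutdown rather than a hard power-off
ipmitool -I lanplus -H 192.168.1.50 -U admin -P 'secret' chassis power soft
# read temperatures and fan RPMs from the BMC sensors
ipmitool -I lanplus -H 192.168.1.50 -U admin -P 'secret' sdr list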

 

 

Link to comment

Indeed very impressive build!

 

Slightly off-topic question: 10 degrees ambient temp? Is that in your living room? That would feel too cold to me for spending most of my time at home.

 

Thanks  :)  Yep, it's winter here and yes that's my living room (actually it's my whole apartment).  I like the cold, so 10 degrees is ok as long as I wear a coat ... I rarely need to turn on a heater.  But as I said in my OP, the real challenge will be to keep the server cool during the (very) hot summer months around xmas time, especially as I currently don't have any air conditioning at home. Let's see what summer brings...

Link to comment

60TB Storage Array

...

 

Very impressive build.  King of the hill in capacity! Would love some pics!

 

Only concern, as you mentioned, is temps in summertime operations.  If you have a temp rise of +30C over ambient, then with ambient at 25C you'd see 55C temps, and at 30C ambient you'd see 60C in the case.  That's just way too hot.

 

It'd be worth working to lower the temp rise over ambient to 15C-20C now, before the hot weather comes.

 

I don't have quite as many drives as you but otherwise a very similar build.  And I am in the northern hemisphere so it is the hottest part of summer now.  Temps in my basement, where the server is located, are 25C.  My drive temps have maxed out at just under 40C.  In winter my ambient is about 10C, and temps inside my case were maxing around 25C.

 

Make sure that all of your fans are exhausting out of the case except the ones in the drive cages, which should be pulling air through the drives and out the back of the cage.  Having one fan spinning in the wrong direction could really muck with the physics of the airflow.  Something seems terribly wrong to me.

 

After checking your fans, try putting an original fan back in one of the Norco cages and see if the temps fall for those drives.  If so, you might want to find a quiet but beefier fan for the cages.  That's the critical cooling point IMO.  I use the SuperMicro cages, which have 92mm rather than 80mm fans.  92mm fans can move more air at the same RPM and can therefore help when quiet operation and strong cooling are both important (although the SuperMicro default fans are on the loud side, they do cool well).

 

Good luck!

Link to comment

60TB Storage Array

...

 

Very impressive build.  King of the hill in capacity! Would love some pics!

 

Only concern, as you mentioned, is temps in summertime operations.  If you have a temp rise of +30C over ambient, then with ambient at 25C you'd see 55C temps, and at 30C ambient you'd see 60C in the case.  That's just way too hot.

 

It'd be worth working to lower the temp rise over ambient to 15C-20C now, before the hot weather comes.

 

I don't have quite as many drives as you but otherwise a very similar build.  And I am in the northern hemisphere so it is the hottest part of summer now.  Temps in my basement, where the server is located, are 25C.  My drive temps have maxed out at just under 40C.  In winter my ambient is about 10C, and temps inside my case were maxing around 25C.

 

Make sure that all of your fans are exhausting out of the case except the ones in the drive cages, which should be pulling air through the drives and out the back of the cage.  Having one fan spinning in the wrong direction could really muck with the physics of the airflow.  Something seems terribly wrong to me.

 

After checking your fans, try putting an original fan back in one of the Norco cages and see if the temps fall for those drives.  If so, you might want to find a quiet but beefier fan for the cages.  That's the critical cooling point IMO.  I use the SuperMicro cages, which have 92mm rather than 80mm fans.  92mm fans can move more air at the same RPM and can therefore help when quiet operation and strong cooling are both important (although the SuperMicro default fans are on the loud side, they do cool well).

 

Good luck!

 

There's a link to some photos in my OP (photobucket).

 

I just tried an experiment - I completely covered the 250mm side fan, thinking it might be messing with the front-to-back airflow, and sure enough, temps of both the hard drives and the CPU have dropped about 5C - nice!  So it looks like the side fan actually isn't so useful in this configuration ... I have a couple more cases without the side fan, so I'll substitute one of them tomorrow and also seal up any unused holes.

 

Thanks for getting me to think about this a bit more (I'm sure it's not perfect but it's a good start and overall I'm very happy with the whole server).

Link to comment

You mentioned two primary uses, one being private cloud backups. I would be very interested to know what you use to achieve this, and maybe toss in a bit of explanation on how you use it. Alas, I only have 1/10th the storage capacity; however, I have not even put a dent in 6TB yet.

Link to comment

You mentioned two primary uses, one being private cloud backups. I would be very interested to know what you use to achieve this, and maybe toss in a bit of explanation on how you use it. Alas, I only have 1/10th the storage capacity; however, I have not even put a dent in 6TB yet.

 

Sounds fancier than it is, and it's not implemented yet.  I use a public cloud provider (SpiderOak) who are pretty good, but others on here use Crashplan and Wuala, so there are a few good ones around.  But, as I have a highly-encrypted tunnel back to my home and lots of free space on my array, and as I travel for business a lot, my plan is to also sync my files via mobile internet back to my storage array at home.  I haven't decided which syncing software to use, or if I'll just script it (maybe using rsync).  Like I said, nothing fancy, especially given that it's private, so there's no need to encrypt the data (except as it moves through the internet, which is handled automatically by the VPN).
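
If I do end up scripting it with rsync, it would probably look something like this (just a sketch - the hostname and paths are made up, and it assumes SSH or an rsync daemon is reachable on the server across the VPN):

# dry run first to see what would be transferred
rsync -avzn ~/Documents/ root@tower:/mnt/user/backups/laptop/
# then the real sync, resuming partial transfers if the mobile link drops
rsync -avz --partial --progress ~/Documents/ root@tower:/mnt/user/backups/laptop/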

Link to comment

So while I could have used MV8s plus the 6 onboard SATA ports, I already had the HP SAS Expander so I used it (and I wanted to test/prove/qualify it for Unraid use - which so far looks like it works perfectly!).  Anyway, an MV8 gives you 2 x 4-HDD ports (i.e. 8 HDDs can be attached), whereas the HP SAS Expander has 9 of those 4-HDD ports.  You need to use one of the 9 ports to daisy-chain the expander to the MV8 (because unfortunately the expander only "expands", it doesn't do any controlling of the HDDs - so the MV8 needs to step in and act as the controller).  So this leaves 8 x 4-HDD ports (7 internal, 1 external) so you can easily attach up to 32 x HDDs.  If you look at my photos, you'll see that with just 2 x PCIe cards, you get LOTS of HDD connectivity (but beware, the expander is around $300 - $400 in the USA I believe - I paid $500 here in Australia).  For anyone considering the HP SAS Expander, a little trick - if you buy the next generation MV8 cards which are SAS2 not SAS1, you can apparently connect TWO of the MV8s to the same expander (leaving 7 x 4-HDD ports) resulting in a massive increase in bandwidth thanks to (a) double bandwidth of the new MV8s and (b) double cable connections/data paths between the MV8s and the expander card.  Refer to the hardforum link in my OP for more info on that.

 

Thanks for this info - very useful.

 

I'm guessing you only have one link between the MV8 and the expander at present?

 

Is there any benefit to connecting both ports of the MV8 into the expander? I have no idea how the bandwidth is distributed internally, or if this is something that would confuse matters in terms of paths to LUNs.

 

 

Link to comment

So while I could have used MV8s plus the 6 onboard SATA ports, I already had the HP SAS Expander so I used it (and I wanted to test/prove/qualify it for Unraid use - which so far looks like it works perfectly!).  Anyway, an MV8 gives you 2 x 4-HDD ports (i.e. 8 HDDs can be attached), whereas the HP SAS Expander has 9 of those 4-HDD ports.  You need to use one of the 9 ports to daisy-chain the expander to the MV8 (because unfortunately the expander only "expands", it doesn't do any controlling of the HDDs - so the MV8 needs to step in and act as the controller).  So this leaves 8 x 4-HDD ports (7 internal, 1 external) so you can easily attach up to 32 x HDDs.  If you look at my photos, you'll see that with just 2 x PCIe cards, you get LOTS of HDD connectivity (but beware, the expander is around $300 - $400 in the USA I believe - I paid $500 here in Australia).  For anyone considering the HP SAS Expander, a little trick - if you buy the next generation MV8 cards which are SAS2 not SAS1, you can apparently connect TWO of the MV8s to the same expander (leaving 7 x 4-HDD ports) resulting in a massive increase in bandwidth thanks to (a) double bandwidth of the new MV8s and (b) double cable connections/data paths between the MV8s and the expander card.  Refer to the hardforum link in my OP for more info on that.

 

Thanks for this info - very useful.

 

I'm guessing you only have one link between the MV8 and the expander at present?

 

Is there any benefit to connecting both ports of the MV8 into the expander? I have no idea how the bandwidth is distributed internally, or if this is something that would confuse matters in terms of paths to LUNs.

 

 

 

With the SAS1 version of the MV8, which most people (including me) have, you can only connect a maximum of one cable between the SAS expander card and the MV8.  The SAS2 version of the MV8 allows 2 connections, which effectively quadruples your bandwidth (double the connections, and double the bandwidth per connection due to it being SAS2 not SAS1), so there's a pretty big advantage.  However, having said all that, for me it's not about speed at all, so I'm very happy with the SAS1 version of the MV8 and I don't even use a cache drive at this stage.
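
For the curious, the rough numbers behind that claim (standard SAS figures, not something I've measured on this box): an SFF8087 link carries 4 lanes, and SAS1 runs each lane at 3Gb/s, which is roughly 300MB/s after encoding overhead, so a single link tops out around 4 x 300 = 1200MB/s shared by everything behind the expander.  Two SAS2 links would be roughly 2 x 4 x 600 = 4800MB/s - hence the "quadruple".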

Link to comment

With the SAS1 version of the MV8, which most people (including me) have, you can only connect a maximum of one cable between the SAS expander card and the MV8.  The SAS2 version of the MV8 allows 2 connections, which effectively quadruples your bandwidth (double the connections, and double the bandwidth per connection due to it being SAS2 not SAS1), so there's a pretty big advantage.  However, having said all that, for me it's not about speed at all, so I'm very happy with the SAS1 version of the MV8 and I don't even use a cache drive at this stage.

 

Thanks. My poor maths suggests, from the time frames given for your parity check, that you're seeing quite a low throughput when hitting all the disks at once.

 

I don't suppose you have a throughput figure you can attach to your parity check? I've figured it out to be ~24 megabytes per second across all drives.
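
(My working, using the 35-hour figure from the OP: a parity check reads each 3TB drive end to end, so roughly 3,000,000MB / (35 x 3600s) ≈ 3,000,000 / 126,000 ≈ 24MB/s per drive.)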

Link to comment

That's a really cool build, but it sure illustrates the range of what's acceptable when it comes to living room friendly. To me, no server is living room friendly - my HTPC runs just a single slow-speed 120mm fan to keep it acceptable. But it seems you're fine with it in the living room, and that's all that matters.

 

If you're seeing a 25C to 30C rise in HDD temps with a 10C ambient, then at a 30C ambient the drives will be hitting 50C, which is getting too hot for a HDD.

 

Link to comment

One thing I would be concerned about is vibration from my subwoofers - I tend to watch a lot of action/sci-fi movies pretty loud, with lots of bass.

 

I would be concerned with 20+ hard drives that close to my TV being heavily vibrated while in use.

 

I use a cheap 60GB SSD in my HTPC for the same reason (in addition to reduced heat and power consumption, along with increased boot speed).

 

Link to comment

I suppose my initial needs are pretty basic as well for syncing files remotely. What I've been trying (and getting nowhere fast) is looking at replicating a "Dropbox.com"-style sync. After much searching I ran across iFolder from Novell, but that requires more Linux knowledge than I currently possess and so is currently beyond my reach. It looked to be the most promising and the closest match to having that "Dropbox" experience. Anyhow, I then settled on trying to use vsftpd but realized that my router cannot re-map ports, so I dropped that idea. If you or anyone who reads this has any alternate suggestions for me to try, that would be fantastic! I am already streaming my audio to my workplace using Subsonic, which is really cool. I don't have to take anything with me.

Link to comment
Thanks. My poor maths suggests, from the time frames given for your parity check, that you're seeing quite a low throughput when hitting all the disks at once. I don't suppose you have a throughput figure you can attach to your parity check? I've figured it out to be ~24 megabytes per second across all drives.
Yep, that's about right ... I've seen it drop as low as 15MBps.  But that's only when hammering 20 drives at the same time of course.

 

That's a really cool build, but it sure illustrates the range of what's acceptable when it comes to living room friendly. To me, no server is living room friendly - my HTPC runs just a single slow-speed 120mm fan to keep it acceptable. But it seems you're fine with it in the living room, and that's all that matters. If you're seeing a 25C to 30C rise in HDD temps with a 10C ambient, then at a 30C ambient the drives will be hitting 50C, which is getting too hot for a HDD.
Actually, I'm currently living in a studio apartment where everything is basically in one big room, so I don't have a choice.  But seriously, my (gaming) laptop is much noisier than the storage array, so for me it's quite acceptable.  Later, once I move into my next (multi-room) place, I'll probably find a nice little closet for it.  As to the temps, I actually messed up the measurements and I'm only seeing about a 10C rise above ambient, so even at the height of summer I might see a few days of array temps around 40-45C worst case - much better than I first thought.

 

One thing I would be concerned about is vibration from my subwoofers - I tend to watch a lot of action/sci-fi movies pretty loud, with lots of bass. I would be concerned with 20+ hard drives that close to my TV being heavily vibrated while in use. I use a cheap 60GB SSD in my HTPC for the same reason (in addition to reduced heat and power consumption, along with increased boot speed).
Vibrations ... never even considered that.  I'll be sure to keep an eye (ear) on that.  Thanks.

 

I suppose my initial needs are pretty basic as well for syncing files remotely. What I've been trying (and getting nowhere fast) is looking at replicating a "Dropbox.com"-style sync. After much searching I ran across iFolder from Novell, but that requires more Linux knowledge than I currently possess and so is currently beyond my reach. It looked to be the most promising and the closest match to having that "Dropbox" experience. Anyhow, I then settled on trying to use vsftpd but realized that my router cannot re-map ports, so I dropped that idea. If you or anyone who reads this has any alternate suggestions for me to try, that would be fantastic! I am already streaming my audio to my workplace using Subsonic, which is really cool. I don't have to take anything with me.
iFolder ... I'll check it out when I get a chance, thanks.  If I come up with any useful solutions (even just a few scripts), I'll update this thread with them.

 

Nice build. Interested to see the power figures if you ever measure them.
Thanks - I'm very happy with the outcome.  I'm travelling at the moment, but when I get back home, I'll attach my power meter for a few days and report back the measurements.
Link to comment

Although this is a drool-worthy build, I do think the HP SAS expander is misused. The HP costs triple what another SASLP would cost, and slows down parity checks and disk rebuilds by a factor of three.

As I stated earlier in this thread, the expander was purchased for a slightly different design goal (more HDDs); I then decided to go with fewer HDDs, but I still wanted to test and qualify the expander for use with Unraid (which it's passed with flying colours). I have 2 extra MV8s sitting on the shelf, and if performance were a requirement for me, which it's not, I'd open the case and swap things around.  If and when I build my next array, I certainly wouldn't go out and buy another expander; there's no need, as the 2 MV8s would easily do the job, as you correctly say.  And I certainly wouldn't recommend the expander to others, also for the reasons you correctly state.  The expander is suited to larger numbers of HDDs, and if performance were a concern, you'd couple it with 2 x MV8 version 2 (SAS2) for a lot more bandwidth.

Link to comment
