PCI-X SATA Controller - 4TB/3TB drives


ozmillsy


Hi,  I'm looking to build a NAS server with unRAID. 

 

I've purchased a second-hand Supermicro 933T server.  It has an LSI MegaRAID controller in it, but this controller doesn't recognise drives over 2TB.

 

Can anyone recommend a multiport SATA controller card that will let me use the full capacity of 3TB and 4TB drives?  Are the Arecas the go?

 

 

Link to comment

Here you will find the controllers proven in use.

Thanks, it wasn't clear if all the SATA controllers on that list supported large drives.  I assume the SATA II ones do.  Not many PCI-X cards on the list.

 

Edit:

What LSI controller are you talking about?

A firmware update might solve your problem?

The 300-8X, and it appears to have the latest firmware.

My personal favorite PCI-X HBA is the AOC-SAT2-MV8.  I used it with unRAID and my X7SBE MB for a year or two before I upgraded to an X9SCM-F and its better virtualization capabilities.
Link to comment

The 300-8X looks like a "real" hardware RAID controller.

You need a host bus adapter or a software RAID controller.

Yeah, I don't need or want hardware RAID.  Just a card that will allow me to connect 3TB/4TB SATA drives.  The 300-8X came with the 933T that I bought (off eBay).

 

As you can see in the wiki, there are only two PCI-X controllers.

I remember reading some hardware offers around here...

Check this out. He has the AOC-SAT2-MV8.

Hopefully you're living in the US.

Thanks, I've sent a PM.  I'm down under.  I see some of those cards readily available on eBay.  Either way, I think they're the shot.

 

What Areca controller did you intend to use?

Hadn't picked one.  I read somewhere that they supported 3TB drives after a firmware flash.  But I posted here asking, as I wasn't sure what to get.

Link to comment

But that ARC-1110/1120 also looks like real "hardware" RAID.  :(

I don't know that it matters, does it, as long as the drives can be configured as JBODs?  I appreciate it may be overkill.  I just want a solution that works, without spending huge dollars.

 

I will be trying the Supermicro that has been recommended.

 

Those would be the HBAs.

But all PCI-e.

Yes, PCI-e, and this:

 

"48 bits LBA, support HDD partition larger than 137GB"

That is likely to be a problem for me.
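For context, those barriers are simple arithmetic: 137GB is the old 28-bit LBA limit, while the separate 2TB barrier comes from 32-bit sector counts in MBR partition tables and older RAID firmware. A quick check in Python:

```python
SECTOR = 512  # bytes per sector on these drives

print(2**28 * SECTOR / 1e9)   # 28-bit LBA: ~137.4 GB, the old barrier the spec mentions
print(2**32 * SECTOR / 1e12)  # 32-bit sector counts (MBR, older firmware): ~2.2 TB
print(2**48 * SECTOR / 1e15)  # 48-bit LBA: ~144.1 PB, plenty for 3TB/4TB drives
```

So a "48 bits LBA" line on a spec sheet only rules out the 137GB problem; whether the firmware tracks sector counts wide enough to get past 2TB is a separate question.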

Link to comment

I'm all for reusing old server hardware.  I'm running 10 drives so far on an Areca ARC-1170 in a Dell 1600SC server.  I got the server for free and the card for a reasonable price from a popular auction site.

 

The card has 24 physical SATA2 connectors and supports the 3TB and 4TB drives I have plugged into it.  I really like the web interface it serves through its built-in Ethernet port.

Link to comment

I'm all for reusing old server hardware.  I'm running 10 drives so far on an Areca ARC-1170 in a Dell 1600SC server.  I got the server for free and the card for a reasonable price from a popular auction site.

 

The card has 24 physical SATA2 connectors and supports the 3TB and 4TB drives I have plugged into it.  I really like the web interface it serves through its built-in Ethernet port.

That's good feedback, cheers.  Good to know that model supports the larger current drives.  I saw a 24-port 1170 on eBay, but it was $380.  A lot more than I'd like to spend.

 

A question on it though, which is probably relevant to the lower port-count models like the 1160.  When configuring JBODs, can you take a drive with data on it, add it to your system, and configure it as a JBOD without the card writing to the HDD?  Might sound like a silly question, but some of the old RAID controllers would configure a single HDD as its own logical drive, which can sometimes mess with what is on the disk.

 

Adding drives into the system with data on them is a key thing I want to do.  I'm new to unRAID, haven't played with it yet, and fear it doesn't let me do it seamlessly, i.e. put the drive straight into the array and sync the data.  But I could add a drive to the system, mount it, copy the data onto the array, and then incorporate the drive into the array.  A bit of a time-consuming pain, but it could be done that way.

 

Link to comment

I'm all for reusing old server hardware.  I'm running 10 drives so far on an Areca ARC-1170 in a Dell 1600SC server.  I got the server for free and the card for a reasonable price from a popular auction site.

 

The card has 24 physical SATA2 connectors and supports the 3TB and 4TB drives I have plugged into it.  I really like the web interface it serves through its built-in Ethernet port.

That's good feedback, cheers.  Good to know that model supports the larger current drives.  I saw a 24-port 1170 on eBay, but it was $380.  A lot more than I'd like to spend.

 

A question on it though, which is probably relevant to the lower port-count models like the 1160.  When configuring JBODs, can you take a drive with data on it, add it to your system, and configure it as a JBOD without the card writing to the HDD?  Might sound like a silly question, but some of the old RAID controllers would configure a single HDD as its own logical drive, which can sometimes mess with what is on the disk.

 

Adding drives into the system with data on them is a key thing I want to do.  I'm new to unRAID, haven't played with it yet, and fear it doesn't let me do it seamlessly, i.e. put the drive straight into the array and sync the data.  But I could add a drive to the system, mount it, copy the data onto the array, and then incorporate the drive into the array.  A bit of a time-consuming pain, but it could be done that way.

 

Not sure 100%, but I would think that if a controller is configured for JBOD, shouldn't it just accept a drive and pass it to the system?  Why would the controller be writing to the drive?

You are not creating a RAID array; it should just act as a hub, right?

 

On the second note, unRAID will not let you add drives with data on them, as it wants to format all drives when adding them to the array.

But you can configure your unRAID server and add SNAP.

Then you can add the drive to the system, mount it with SNAP, and copy the data onto the array.

When done, preclear the drive and add it to the array.
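A rough outline of that workflow, sketched in Python. The device name, mount point, and share path are hypothetical placeholders, and SNAP itself normally does the mounting from the unRAID web GUI; this just shows the moving parts:

```python
# Illustrative sketch only -- adjust the hypothetical names for your system.
import subprocess

OLD_DISK = "/dev/sdg1"           # hypothetical: the data-laden drive, outside the array
MOUNT_POINT = "/mnt/disk/old"    # hypothetical mount point
ARRAY_SHARE = "/mnt/user/media"  # hypothetical user share on the protected array

# 1) Mount the old drive read-only so nothing touches the source data
subprocess.run(["mount", "-o", "ro", OLD_DISK, MOUNT_POINT], check=True)

# 2) Copy everything onto the array; rsync -a preserves times and permissions
subprocess.run(["rsync", "-a", "--progress", MOUNT_POINT + "/", ARRAY_SHARE + "/"], check=True)

# 3) Unmount; then preclear the now-redundant drive (e.g. with the community
#    preclear script) and assign it to a new array slot in the web GUI
subprocess.run(["umount", MOUNT_POINT], check=True)
```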

 

Link to comment

What vl1969 said.  :)

 

I preclear my drives on a separate system, then add them to my server.  The server sees the drive as precleared, and it's a quick add to the array.  The Areca card does not modify the drive data in any way when I plug it in.  This is for JBOD.  I never checked whether, with the card in RAID mode, you would have to create an array for each individual disk.  I remember having to do this for SCSI RAID controllers.

Link to comment

I preclear my drives on a separate system, then add them to my server.

I'll be copying terabytes of data at a time, and I didn't want to do that between systems.  I want to put the HDD in my server with data on it.  It sounds like SNAP will let me copy the data off before I incorporate the drive into the array.  I'll play with it.

 

The Areca card does not modify the drive data in any way when I plug it in.  This is for JBOD. 

Thanks for confirming.

 

I never checked whether, with the card in RAID mode, you would have to create an array for each individual disk.  I remember having to do this for SCSI RAID controllers.

The silly LSI 300-8X I am using doesn't seem to let me boot into the system without configuring the HDDs connected to it.  I *have* to go into the card setup and configure the HDD.  It's a pain.  I look forward to getting something else.

 

Link to comment

@earthworm

Would you mind providing some information on that controller, like it is done in the wiki?

Not the bare specs - I can look those up on the internet.  I'm more interested in the way it works with unRAID.

e.g. crossflashed or out of the box, configuration issues (JBOD), parity sync speeds (it's only PCI-X), whatever.

I will include that card in the list.

 

Edit:

If the ARC-1170 is working with unRAID, it is likely that the others of that series will work also.

Manual

I'm very curious what the configuration has to look like.

Is it the pass-through disk they mention in the manual on page 86?

Link to comment

On the second note, unRAID will not let you add drives with data on them, as it wants to format all drives when adding them to the array.

But you can configure your unRAID server and add SNAP.

Then you can add the drive to the system, mount it with SNAP, and copy the data onto the array.

When done, preclear the drive and add it to the array.

Hey VL, I have a hypothetical query for you around the above.  I am building a 15-drive system with a mix of 3TB and 4TB SATA drives.  I need a minimum of 39TB usable space; I'll probably end up with a bit more than that.  And I am expecting to hold about 30TB of real data.

 

If I build this system with unRAID, I understand that my redundancy will be 1 parity drive plus a hot spare (cache drive).  If a drive fails, I expect that the rebuild of a 3TB SATA drive on this old system (legacy PCI-X) could take well over 24 hours.
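The capacity and rebuild numbers can be sanity-checked with quick arithmetic. A sketch, with the sustained transfer rate as the assumed variable (a rebuild streams the whole replacement drive, paced by the slowest disk, the controller, and the bus):

```python
# 15 slots: 1 parity + 1 cache leaves 13 data drives
print(13 * 3, "TB usable if every data drive is 3TB")  # 39 TB, right on the minimum

def rebuild_hours(drive_tb, mb_per_sec):
    """Hours to stream one full drive at a sustained rate."""
    return drive_tb * 1e12 / (mb_per_sec * 1e6) / 3600

for rate in (40, 70, 100):  # MB/s -- assumed sustained rates, not measurements
    print(f"3TB at {rate} MB/s: {rebuild_hours(3, rate):.1f} h")
# 3TB at 40 MB/s: 20.8 h
# 3TB at 70 MB/s: 11.9 h
# 3TB at 100 MB/s: 8.3 h
```

"Well over 24 hours" would imply a sustained rate below roughly 35MB/s; the parity-check speed earthworm reports further down (70.4 MB/sec on similar-era PCI-X hardware) suggests closer to half a day.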

 

You know what I am going to ask.  The scenario is that I lose a 2nd drive before the cache drive finishes rebuilding my redundancy.  Questions:

1) At the point of the 2nd drive failure, I expect the unRAID array is effectively down.  But the data on the remaining drives is intact, and I could access that data on another system?

2) What is the process to rebuild my array?  Is there a way to rebuild it without destroying the data on the remaining healthy disks?  I ask this because I'm worried about the formatting process of adding a drive to an array.  I suspect that I may have to create a new unRAID array with empty drives, and then copy data from the old disks into the new array.  Is that right?  With 25+TB of real data to copy into a new array, that could take a long, long time.

Link to comment

I am not sure, I am not an unRAID aficionado, but I would say you do not need to do a whole new setup.  If I am right, and more proficient members correct me if I am not, all you would need to do in scenario 2 is stop the array, unassign both failed drives, and rebuild parity.  You might need to do a New Config and carefully reassign all working drives and parity, then start the parity rebuild, but I am not 100% sure.

The question is, why are you expecting 2 drives to fail?  It is very rare for that to happen.

Instead, be proactive.  If you have drives that you do not trust that much, either do not use them, or make plans for the near future to replace them ASAP.  I have not had a fatal drive failure in years; my unRAID server has been up 24/7 for over 2 years now.  Just be vigilant.

 


 

 

Link to comment

The question is, why are you expecting 2 drives to fail?  It is very rare for that to happen.

I would like to understand what the process is in that situation.  It can happen (I've seen a thread on this forum where it has).

 

With over 30TB of real data in the array (eventually), and the length of time it takes to replace a failed (3 or 4TB) disk, it's potentially a nightmare if I have to build a fresh new array and move data around.

 

 

Link to comment

I'll add my .02.

 

If a second drive fails, your parity protection is useless, since parity can no longer be calculated correctly.

 

You WOULD lose any data not already recovered by parity at that point, as well as the data on the 2nd drive that fails.

 

You could unassign the parity drive, add in two new drives to replace the failed ones (or just one, since one would have SOME of your data recovered), and then reassign and calculate new parity.

 

Just my .02, you wouldn't need to rebuild/recopy the entire thing.

 

You should ask this question in the general forum though, to get a fully correct answer.
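The underlying reason is that unRAID's single parity is a plain XOR across the data disks: any one missing disk can be recomputed from parity plus the survivors, but two missing disks leave one equation with two unknowns. A toy sketch with hypothetical 4-byte "disks":

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR across equal-length disk blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"  # toy "disks"
parity = xor_blocks([d1, d2, d3])

# One failure: XOR parity with the survivors and the lost disk reappears
assert xor_blocks([parity, d2, d3]) == d1

# Two failures: parity ^ d2 equals d1 ^ d3 -- a single value that cannot be
# split back into d1 and d3, which is why the array is down at that point
```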

Link to comment

OK,  just so I understand you correctly, this is what I think you meant. 

 

1) Running unRAID array:  D1, D2, D3, D4, D5, D-Parity, D-Cache

 

2) Drive 3 fails:  D1, D2, X, D4, D5, D-Parity, D-Cache

 

3) Cache disk kicks in, and starts rebuilding D3:  D1, D2, X, D4, D5, D-Parity, D3(rebuilding)

 

3a) Remove failed drive, and race down to the local PC shop.

 

4) Drive 1 fails, before D3 rebuild completes:  X, D2, X, D4, D5, D-Useless, D3(partial)

 

4a) Curse !!!!!  :o  >:(  :'(

 

5) Action: Insert new replacement HDD.  Remove the now-useless parity drive from the unRAID configuration.  Configure a new parity drive (using the new HDD or the existing parity disk, doesn't matter), and calculate parity.

 

6) Assuming the parity process completes without further failures.

Outcome: data on D1 is lost in full, data on D3 is partially available (if lucky), and D2, D4, D5 are intact:  X, D2, D-Parity, D4, D5, D-Unused, D3(partial)

 

6a) Get another HDD from the shop.  :-[

 

7) Insert 2nd replacement drive, which is effectively a new drive to the array with no data on it, and reconfigure the unused drive as a new cache drive:  D-New, D2, D-Parity, D4, D5, D-Cache, D3(partial)

Link to comment

You would have to manually initiate the rebuild; unRAID does not automatically rebuild a drive.  I only use a cache drive on 1 of my arrays.  I keep cold spares (precleared disks ready to be added as a new drive or to replace a failed drive) around.  Your scenario still applies, but I wanted to make sure you knew it isn't automatic - at least not with the current version.

 

Most drive failures I've had do NOT mean I cannot get data off the drive if I take it out and put it into another box.  So if a drive fails (never had one red-ball in unRAID yet, but I have had many get SMART errors that made me consider them failed), I pull the old drive from the machine so that I can get data off it if needed, put in my cold spare, and rebuild.  That way, if a drive fails during a rebuild, I can pull the new failure and use another cold spare.  I end up with a new array where that data is missing, but I have the two failed drives that I can put into another computer to extract files from and transfer across the network back to the new drives.

 

Also, I plan on having additional protection with external backups.  I'm hoping to get the 120TB in my arrays down to about 80TB in size, so with the 40TB unused plus some old drives I retired, I should be able to have all 80TB backed up.  It will likely take me a couple of years to get this done because of the size and what I have to do to reduce my data down to 80TB.

Link to comment

OK,  just so I understand you correctly, this is what I think you meant. 

 

1) Running unRAID array:  D1, D2, D3, D4, D5, D-Parity, D-Cache

 

2) Drive 3 fails:  D1, D2, X, D4, D5, D-Parity, D-Cache

 

3) Cache disk kicks in, and starts rebuilding D3:  D1, D2, X, D4, D5, D-Parity, D3(rebuilding)

 

3a) Remove failed drive, and race down to the local PC shop.

 

4) Drive 1 fails, before D3 rebuild completes:  X, D2, X, D4, D5, D-Useless, D3(partial)

 

4a) Curse !!!!!  :o  >:(  :'(

 

5) Action: Insert new replacement HDD.  Remove the now-useless parity drive from the unRAID configuration.  Set a New Config that only includes the remaining working drives plus a parity drive (the new HDD or the existing parity disk, doesn't matter), and calculate parity.

 

6) Assuming the parity process completes without further failures.

Outcome: data on D1 is lost in full, data on D3 is also lost (not just partially available), and D2, D4, D5 are intact:  X, D2, D-Parity, D4, D5, D-Unused, X

 

6a) Get another HDD from the shop.  :-[

 

7) Insert 2nd replacement drive, which is effectively a new drive to the array with no data on it, and reconfigure the unused drive as a new cache drive:  D-New, D2, D-Parity, D4, D5, D-Cache, D3(partial)

 

Data from failed drives may be recoverable using reiserfsck or a Windows reiserfs recovery tool.
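For the recovery attempt on a pulled drive, something along these lines on another Linux box might apply; the device path is a hypothetical placeholder, and reiserfsck should be pointed at the data partition, not the whole disk:

```python
# Hedged sketch: run reiserfsck against the pulled drive's data partition.
# "/dev/sdb1" is a placeholder for wherever the old drive appears.
import subprocess

# Read-only consistency check first (reiserfsck's --check mode)
subprocess.run(["reiserfsck", "--check", "/dev/sdb1"])

# Only if --check recommends it, rebuild the filesystem tree (this writes to disk):
# subprocess.run(["reiserfsck", "--rebuild-tree", "/dev/sdb1"])
```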

Link to comment

OK, thanks for clarifying, guys.  One other question.

 

If I can create a New Config with the existing drives and their data intact, then calculate parity:

 

Why can't I add a new reiserfs-formatted drive (with data on it) to my array, delete my parity disk, and recalculate?  (Without reformatting or losing my data on the HDD.)
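Arithmetically, nothing stops that: parity is just recomputed over whatever bytes the data disks currently hold, so a drive that already carries a filesystem contributes like any other. In practice unRAID also cares about partition layout and filesystem, which seems to be where the format step comes from, but the parity calculation itself is indifferent. A self-contained toy continuation of the XOR sketch above:

```python
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"     # existing toy "disks"
d4 = b"DDDD"                               # newcomer, already full of data
new_parity = xor_blocks([d1, d2, d3, d4])  # parity recalculated; no disk rewritten

# The enlarged array is protected again without touching the newcomer's data
assert xor_blocks([new_parity, d2, d3, d4]) == d1
```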

 

 

Link to comment

@earthworm

Would you mind providing some information on that controller, like it is done in the wiki?

Not the bare specs - I can look those up on the internet.  I'm more interested in the way it works with unRAID.

e.g. crossflashed or out of the box, configuration issues (JBOD), parity sync speeds (it's only PCI-X), whatever.

I will include that card in the list.

 

Edit:

If the ARC-1170 is working with unRAID, it is likely that the others of that series will work also.

Manual

I'm very curious what the configuration has to look like.

Is it the pass-through disk they mention in the manual on page 86?

 

Here, I'll give you probably more than you're asking for.  First the board specs:

 

Controller Name ARC-1170

Firmware Version V1.49 2010-12-02

BOOT ROM Version V1.49 2010-12-02

Main Processor 500MHz IOP331

CPU ICache Size 32KBytes

CPU DCache Size 32KBytes/Write Back

System Memory 1024MB/333MHz/ECC 

 

 

My drives...so far:

Channel  Usage  Capacity   Model
Ch01     N.A.   N.A.       N.A.
Ch02     JBOD   4000.8GB   ST4000DM000-1F2168
Ch03     JBOD   3000.6GB   ST3000DM001-1E6166
Ch04     JBOD   3000.6GB   ST3000DM001-1E6166
Ch05     JBOD   3000.6GB   WDC WD30EZRX-00DC0B0
Ch06     JBOD   3000.6GB   ST3000DM001-1CH166
Ch07     JBOD   2000.4GB   ST2000DM001-9YN164
Ch08     JBOD   1500.3GB   WDC WD15EADS-00S2B0
Ch09     JBOD   500.1GB    Hitachi HDP725050GLA360
Ch10     JBOD   500.1GB    Hitachi HDP725050GLA360

 

There's no crossflashing involved.  I've never tried running in RAID mode but I'm assuming that's where passthrough would be used.

 

Drive temperatures aren't reported in unRAID but the card monitors them directly and emails me if there are any drive issues.  The Main tab in unRAID shows my first 2 drives as such:

Device  Identification     (dev)  Blocks      Temp  Size  Used     Free    Reads       Writes     Errors
Parity  20004d927859eb810  (sda)  3907018532  *     4 TB  -        -       12,866,224  2,296,820  0
Disk 1  20004d927859eb820  (sdb)  2930266532  *     3 TB  2.90 TB  100 GB  6,221,984   288        0

 

 

System Configuration:

System Beeper Setting  Enabled

Background Task Priority  High(80%)

JBOD/RAID Configuration  JBOD

Max SATA Mode Supported  SATA300

HDD Read Ahead Cache  Enabled

Volume Data Read Ahead  Normal

Empty HDD Slot LED  ON

HDD SMART Status Polling  Enabled

Auto Activate Incomplete Raid  Enabled

Disk Write Cache Mode  Auto

Disk Capacity Truncation Mode  No Truncation

 

I haven't really played with the above settings, other than disabling NCQ.  I tend not to tinker when things are working.

 

My last parity check:

Last checked on Thu Nov 7 05:33:52 2013 CST (yesterday), finding 0 errors.

> Duration: 15 hours, 46 minutes, 50 seconds. Average speed: 70.4 MB/sec

 

The 100MHz PCI-X bus speed is good for 768MB/s theoretically and I've seen over half that when preclearing 3 drives simultaneously.
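Those figures are self-consistent, as a quick check shows (the numbers are taken from the post above):

```python
# Parity check: a 4TB parity drive read end-to-end in 15h 46m 50s
duration_s = 15 * 3600 + 46 * 60 + 50
print(4e12 / duration_s / 1e6)  # ~70.4 MB/s -- matches the reported average

# PCI-X at 100MHz with a 64-bit (8-byte) wide bus
print(100e6 * 8 / 1e6)          # 800 MB/s decimal (~763 MiB/s), the quoted ~768MB/s ballpark
```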

 

My Dell PE 1600SC server is powered by 2 Xeon 3.2GHz processors. SL7AE if I remember correctly.

 

If you want to know anything else, please ask.

Link to comment
