More on port multipliers


toonz


So I'm possibly looking into upgrading my unRAID server. Currently I have 7 drives and my case is full, so I'm looking at upgrading to a Centurion 590 with 3x 5-in-3 boxes.

 

So with 15 drives, I was thinking a nice way to do this would be to use a motherboard with 6x on-board SATA ports, a 2-channel PCIe card (http://www.addonics.com/products/host_controller/adsa3gpx1.asp) and 2 internal port multipliers (http://www.addonics.com/products/host_controller/ad5sapm.asp).

 

I've read through the other port-multiplier threads but can't seem to tell whether this is likely to work or not. I'm in Australia, so all of this would probably be ordered from the USA, as pricing here in Aus is terrible!

 

Any thoughts?

Link to comment

There have been at least two instances where people with external port-multiplier boxes have reported success.

I have not seen any reports of the specific hardware mentioned.

The Addonics site used to have links for the drivers; now they are included in the kernel.

Therefore I would say there is a good chance they will run fine.

Link to comment

So I'm possibly looking into upgrading my unRAID server. Currently I have 7 drives and my case is full, so I'm looking at upgrading to a Centurion 590 with 3x 5-in-3 boxes.

 

So with 15 drives, I was thinking a nice way to do this would be to use a motherboard with 6x on-board SATA ports, a 2-channel PCIe card (http://www.addonics.com/products/host_controller/adsa3gpx1.asp) and 2 internal port multipliers (http://www.addonics.com/products/host_controller/ad5sapm.asp).

 

I've read through the other port-multiplier threads but can't seem to tell whether this is likely to work or not. I'm in Australia, so all of this would probably be ordered from the USA, as pricing here in Aus is terrible!

 

Any thoughts?

 

I'm using a 3-in-3 and two 4-in-3 boxes in my Centurion 590. The 4-in-3 drives ran a fair bit hotter than the 3-in-3 drives; I'd imagine that the 5-in-3 boxes would trap even more heat. I got my temperatures down by being less haphazard with my wiring, installing 120 mm fans in every slot available, including one on each of the drive boxes for a total of 8, and blocking other vents. My hottest drives now average 10-12 °C cooler than before. The regulars of these forums were invaluable in helping me, e.g. weebotech.

Link to comment

Thanks guys.

Need to get a bit more money up my sleeve before I start this rebuild of my server.

Can't wait - lower power, more HDD support and never having to open the case again.

 

I might even upgrade my USB flash drive from 256 MB to something more substantial.

Link to comment

Here's a question I always wonder about when looking at port multipliers, yet it never seems to be addressed in the info. Say I put 5 drives on a PM into RAID 5. How does the PM affect the read speed?

 

RAID 5 has the data striped over all the drives, so it has to read or write to all 5 drives to retrieve or store data. Since it has to multiplex reading 5 drives over a single SATA II port, wouldn't that make it 1/5 as fast as a RAID 5 configuration using 5 different SATA II ports? Well, in theory anyway, assuming the ports are all being fully utilized.
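
(As a rough sanity check of that 1/5 intuition - assumed, era-typical figures only: roughly 60 MB/s sustained per drive and about 300 MB/s usable on a 3 Gb/s SATA II link after 8b/10b coding.)

def striped_read_rate(n_drives, per_drive_mb_s, link_mb_s=None):
    """Aggregate sequential read rate (MB/s) for n striped drives."""
    aggregate = n_drives * per_drive_mb_s
    # A shared port-multiplier uplink caps whatever the drives could deliver together.
    return aggregate if link_mb_s is None else min(aggregate, link_mb_s)

print(striped_read_rate(5, 60))                   # 300 MB/s on five dedicated ports
print(striped_read_rate(5, 60, link_mb_s=300))    # 300 MB/s behind one PM uplink - no loss yet
print(striped_read_rate(5, 100, link_mb_s=300))   # 300 MB/s, capped: faster drives do hit the ceiling

In other words, the full 1/5 penalty only appears if a single drive could saturate the shared link by itself; with drives of that speed, five of them just about fit.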

 

I'm assuming the PM might slow down unRAID write speed further, but since unRAID reads from a single disk at a time it won't really affect read speed, right? But then, if there are multiple read accesses across the drives on the PM, it could affect overall read speed?

 

Peter

Link to comment

The biggest problem you are going to face is the parity drive. Should you RAID 5 a few disks, you're going to need a parity drive at least as large as the resulting array (e.g. three 1 TB disks in RAID 5 would present as a single 2 TB device, so parity would have to be 2 TB as well).

 

Assuming that's OK with you, I really can't see there being any significant speed improvement, and then you have to wonder whether the extra complication is just something else that can break.

 

Not pre-judging, just some things to consider.

 

FYI: I've said this before, but I hate RAID. I spent a lot of time experimenting with and buying some pretty high-end RAID cards, and there was not one that I couldn't break by doing something fairly trivial.

Link to comment

RAID 5 has the data striped over all the drives, so it has to read or write to all 5 drives to retrieve or store data. Since it has to multiplex reading 5 drives over a single SATA II port, wouldn't that make it 1/5 as fast as a RAID 5 configuration using 5 different SATA II ports? Well, in theory anyway, assuming the ports are all being fully utilized.

 

I would think that with a single RAID 5 array on a port multiplier, read and write speed would be pretty close to what it might normally be.

With RAID 5 (and a single array) you will be reading from at most one drive at a time anyway, and in small blocks too.

Now, if you had multiple arrays, or you were doing lots of reads and writes with many processes at the same time, you may see a bottleneck.

 

I would guess that for the average user a port multiplier will perform satisfactorily, but it will show a bottleneck during parity generation.

Heck, we see this bottleneck now with PCI cards and drives next to one another as devices in the array.

 

We have yet to see whether staggering drives across multiple controllers and port multipliers will alleviate this bottleneck.

 

 

In addition, FWIW, drives rarely max out the 3 Gb/s bandwidth available when doing long continuous reads, so there is probably bandwidth available for handling the multiplied drives. If not, I don't think the design would have made it into the mainstream.

 

 

 

More information available here: http://www.serialata.org/portmultiplier.asp

 

In particular, see the section on command-based versus FIS-based host support.

 

 

Link to comment

Just thought I would chip in a little here as I JUST set up something like this last night.

 

I got the AMS DS-2350S for my enclosure (5 drives, using a SiI4726 port multiplier) and the SYBA SD-SATA2-2E2I (2 external, 2 internal ports) for my PCI SATA controller (SiI3124 chipset).

 

The enclosure actually comes with a SiI3132-based SATA card (external ports only), but it is PCIe-based and I didn't have any of those slots.

 

So long as you use the latest version of unRAID (I used 4.3.3), it should detect the port multiplier just fine and see all of the disks. If you don't use a version based on the 2.6.24 kernel (you can see the version at the tower prompt), then it will not see anything beyond the first disk.

 

I'm not sure how other cards work with port multipliers, but the card I have requires OS driver-level support to recognize any disks beyond the first in the enclosure, so don't despair if you see only the first disk detected at boot. Let the OS load up and then check (the enclosure makes it very clear which drives are being recognized).
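
(For anyone wanting to verify this from the console, here's an illustrative sketch using standard Linux /proc files - nothing unRAID-specific - to confirm the running kernel is new enough and list the whole disks the OS ended up detecting.)

def kernel_release():
    with open("/proc/version") as f:         # e.g. "Linux version 2.6.24.x ..."
        return f.read().split()[2]

def detected_disks():
    disks = []
    with open("/proc/partitions") as f:
        for line in f.readlines()[2:]:        # skip the two header lines
            parts = line.split()
            if parts and parts[-1].startswith("sd") and not parts[-1][-1].isdigit():
                disks.append(parts[-1])       # whole devices only, e.g. sda, sdb
    return disks

print("kernel:", kernel_release())
print("disks :", detected_disks())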

Link to comment

OK, so a PM really does bottleneck 4 or 5 drives down into a single 3 Gb/s port.

 

With unRAID, I can see it affecting performance when generating parity and even sometimes when writing data to a drive (since a data write will read all the drives to generate parity). Also, to a lesser extent, when a number of different file accesses on the same PM are happening at the same time. Overall, though, it's likely not a big deal for the average user.

 

However, with something like RAID 5 (or RAID 0), where the data is striped across the 4 or 5 drives, it's got to hurt performance compared to having 4 or 5 separate ports. RAID 5 divides the data out over all the disks in the array, so every read or write accesses every drive. So, if you have 5 disks on a single 3 Gb/s port, then the PM has to be slower than having 5 disks each with their own 3 Gb/s port.

 

Either way, I've somewhat hijacked the thread but the OP seems to have the right idea. It sounds like that hardware will work.

 

Peter

 

Link to comment

With unRAID, I can see it affecting performance when generating parity and even sometimes when writing data to a drive (since a data write will read all the drives to generate parity).

That statement is NOT true.

 

In unRAID ONLY the drive being written to and the parity drive need be read to generate the new parity to be written.  The other drives are not accessed at all.

 

Joe L.

Link to comment

Joe - The software must read the old data from the disk being written to and the old parity for that data and then calculate the new parity based on that data, correct?

 

Actually, I should express the speeds more correctly. SATA II is 3 Gb/s.

 

I also didn't look at the PCIe speeds. A PCIe x1 interface is 2.5 Gb/s. Also, I believe an x1 PCIe link is about 4x as fast as the PCI interface, clearly showing why PCI is not desirable for SATA port cards or gigabit Ethernet.

 

Anyway, this is just me looking at the various interface speeds. I think either an x1 PCIe 1-port card or an x4 PCIe 3-port card would get you about all you can use on the motherboard side. Adding PMs would slow things down further, but staggering the drives should still give good performance for just about all operations except parity calculation.

 

Of course, I guess when you look at the fact that unRAID is typically connected via gigabit Ethernet, the on-board speeds really don't matter most of the time.  ;D

 

Peter

 

PS, I hope I got the speeds right.
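
(A rough sanity check on those figures, using nominal per-direction numbers and ignoring protocol overhead beyond 8b/10b line coding - approximate, not measured.)

nominal_mb_s = {
    "PCI, 32-bit / 33 MHz (shared bus)": 133,
    "Gigabit Ethernet":                  125,
    "SATA I  (1.5 Gb/s)":                150,
    "PCIe 1.x x1 (per direction)":       250,
    "SATA II (3 Gb/s)":                  300,
}
for bus, mb_s in sorted(nominal_mb_s.items(), key=lambda kv: kv[1]):
    print(f"{bus:<36} ~{mb_s} MB/s")

Per direction, PCIe x1 works out to roughly twice plain PCI (counting both directions gets you closer to the 4x figure), and gigabit Ethernet sits below even a single SATA port, which supports the point that the LAN is usually the real limit.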

Link to comment

Joe - The software must read the old data from the disk being written to and the old parity for that data and then calculate the new parity based on that data, correct?

True, but it needs to read only those two disks before writing them; it does not need to read any of the others in the array.

Of course, I guess when you look at the fact that unRAID is typically connected via gigabit Ethernet, the on-board speeds really don't matter most of the time.  ;D

Most of the network media players only have 100 Mbit Ethernet anyway, and over a wireless link, far less than that. The port multiplier is not the bottleneck.

 

Joe L.

Link to comment

OK, so a PM really does bottleneck 4 or 5 drives down into a single 3 Gb/s port.

I would say it multiplies, or multiplexes, rather than bottlenecks.

 

With unRAID, I can see it affecting performance when generating parity and even sometimes when writing data to a drive (since a data write will read all the drives to generate parity). Also, to a lesser extent, when a number of different file accesses on the same PM are happening at the same time. Overall, though, it's likely not a big deal for the average user.

 

When generating parity, performance can be affected, and if multiple processes and drives are being written to at the same time, performance can also be affected.

 

This also depends on whether the controller uses command-based switching (akin to multiple P-ATA drives on the same cable) or FIS-based switching, where the performance limitations subside.

 

Keep in mind, drives rarely do sustained transfers at 3 Gb/s anyway, so 5 drives at approximately 60 MB/s on a 3 Gb/s link is doable.

With command-based switching the controller has to wait for each command to complete; this is where performance can be affected.

The same goes for 2 P-ATA drives on the same cable.

 

 

To elaborate/clarify further on the subject matter (correct me if I am wrong):

 

During parity generation, all drives are read and parity is written.

During a parity update, the parity drive is read, the data drive is read, the old data block is XORed out of parity, the new data block is XORed into parity, the data block is written to the data drive, and the parity block is written to the parity drive. So only 2 drives are touched during a single block update. (The order may not be exact, but it should be representative of what happens.)
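
(A minimal sketch of that block arithmetic - pure Python, illustrative only, not unRAID's actual code; the real work happens block by block in the kernel driver.)

def generate_parity(data_blocks):
    # Full parity build: XOR together the same-offset block from every data drive.
    parity = bytes(len(data_blocks[0]))
    for block in data_blocks:
        parity = bytes(p ^ b for p, b in zip(parity, block))
    return parity

def update_parity(old_parity, old_data, new_data):
    # Single-block update: XOR the old data out of parity, XOR the new data in.
    # Only the parity block and the one data block are involved.
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

# Three data drives, one 4-byte block each.
d = [b"\x0f\x00\xff\x10", b"\xf0\x01\x0f\x20", b"\x00\x02\xf0\x30"]
parity = generate_parity(d)

new_d0 = b"\xaa\xbb\xcc\xdd"
assert update_parity(parity, d[0], new_d0) == generate_parity([new_d0, d[1], d[2]])

The assert just confirms both routes produce the same parity, which is why a normal unRAID write only has to touch two spindles.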

 

I only see an issue if data drives and parity drive are on the same port multiplier box.

I would always try to put parity on the fastest, quietest path possible. I use a separate controller and PCIe lane for mine.

Link to comment

I can't say that I've noticed much, if any, hit on my parity check times since going with a port multiplier.  Nothing that wouldn't be the result of the added drives, that is.

 

I did notice that the initial parity time estimate seemed very high to me (around 10 hours for 4x 1 TB drives, of which 1 is parity, and 5x 500 GB drives at around 50% capacity). However, at some point the speed must have picked up, because it took less than 8 hours (possibly only somewhere over 6, but I'm not 100% sure).

 

I have my parity drive running directly off of the motherboard's PCIe-based SATA controller.

Link to comment
I have my parity drive running directly off of the motherboard's PCIe-based SATA controller.

 

That's the way I would have done it also.

 

Do you remember what the KB/s value was when running the parity check?

I'm curious if that revealed a performance penalty.

 

On my Abit AB9 Pro, with 3x 1 TB (5400 rpm) data drives and 1x 1 TB (7200 rpm) parity drive, it takes a little under 5 hours.

It starts out at 71,000 KB/s, then goes down to about 68,000-69,000.

 

This is for a parity write (create) process, which may be faster since parity is only written and not read/compared (i.e. after a restore).

I'll have to measure it on a plain parity check as well.
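
(For what it's worth, a parity pass takes roughly the size of the largest drive divided by the sustained rate, so those numbers hang together - back-of-the-envelope only; real runs slow down toward the inner tracks.)

drive_bytes  = 1_000_000_000_000      # largest drive: 1 TB
avg_rate_bps = 68_000 * 1000          # ~68,000 KB/s, from the figures above
print(round(drive_bytes / avg_rate_bps / 3600, 1), "hours")   # ~4.1 hours, consistent with "a little under 5"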

Link to comment

I have my parity drive running directly off of the motherboard's PCIe-based SATA controller.

 

That's the way I would have done it also.

 

Do you remember what the KB/s value was when running the parity check?

I'm curious if that revealed a performance penalty.

 

On my Abit AB9 Pro, with 3x 1 TB (5400 rpm) data drives and 1x 1 TB (7200 rpm) parity drive, it takes a little under 5 hours.

It starts out at 71,000 KB/s, then goes down to about 68,000-69,000.

 

This is for a parity write (create) process, which may be faster since parity is only written and not read/compared (i.e. after a restore).

I'll have to measure it on a plain parity check as well.

 

My parity check was around 56,000 KB/s and never dipped to anything worth worrying about after going to the PM. I do have to run those drives at SATA150, since my board hangs during POST with them set at SATA300. Not sure whether that would account for a difference or not.

 

At any rate, I still average around 56,000 KB/s with the PM in place. All of my 500 GB PM drives can be set at SATA300 with no problems.
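
(A hedged back-of-the-envelope with nominal link speeds suggests the SATA150 setting shouldn't be what limits a parity check.)

sata1_usable_mb_s = 150    # 1.5 Gb/s after 8b/10b coding
sata2_usable_mb_s = 300    # 3.0 Gb/s after 8b/10b coding
per_drive_mb_s    = 56     # ~56,000 KB/s reported above
drives_behind_pm  = 5

print(per_drive_mb_s < sata1_usable_mb_s)                      # True: one drive is well under SATA150
print(drives_behind_pm * per_drive_mb_s <= sata2_usable_mb_s)  # True, barely: ~280 vs ~300 on the PM uplink

So the per-drive SATA150 links have plenty of headroom; the shared PM uplink is the tighter squeeze.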

Link to comment
  • 2 weeks later...

Well, I guess I'm the first to report on the biggest negative that PMPs have going against them. Some time yesterday evening my PMP module failed, dropping the 3 active drives of the 5 I had in my PMP enclosure. It appears to just go through a continuous series of PMP resets before giving up, and then I'm presented with absolutely no drives detected once unRAID is allowed to boot. Removing the PMP upon subsequent reboots shows all onboard drives available.

 

The drives that dropped off I can do without for a while, since they held only applications and my TV shows. The onboard drives were my movie drives. I'll just re-import the PMP drives when I get a replacement module.

 

I was able to confirm the PMP failure, as it presented the same continuous resetting when I installed it in my Windows box (without any drives... I was hoping to flash the firmware out of desperation). All I got were repeating 3-5 second freezes, forcing me to shut down and remove the hardware.

 

I just thought I should throw this out here in case someone else experiences the same thing. I'm just so happy about how unRAID is able to adapt - even in failure - and still provide me with my data. What's even better is that I'll be able to just re-add those PMP disks later for a full recovery.

Link to comment
