
Installed SuperMicro AOC-SASLP-MV8, Slow Parity Check



Just installed the SuperMicro AOC-SASLP-MV8 controller, running unRAID 5.0 RC5.

 

I figured if I put most of the drives on it, it would be faster than using the main board, since the controller is much newer than my ASUS P5B Deluxe motherboard.

 

Right now I'm seeing 26.57 MB/sec on the parity check.

 

Should I move the parity drive back?  I'd still like to use the card, though.

 

One thing I saw in syslog:

 

Jan 31 23:43:05 Tower kernel: mvsas 0000:05:00.0: mvsas: driver version 0.8.2

Jan 31 23:43:05 Tower kernel: mvsas 0000:05:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16

Jan 31 23:43:05 Tower kernel: mvsas 0000:05:00.0: setting latency timer to 64

Jan 31 23:43:05 Tower kernel: mvsas 0000:05:00.0: mvsas: PCI-E x1, Bandwidth Usage: 2.5 Gbps

 

 

Shouldn't that card be PCI-E x4?  It is sitting in a PCI-E x16 slot, the secondary one on my board.

 

I can attach any logs that may help.  I only saw one other post about the AOC, where it appears it may be RC-related.

 

Thanks,

Marcus

Link to comment

I'd cancel and upgrade.

 

Ok, upgraded.  And the parity check is the same speed. 

 

Moved the parity drive to the main board <Loving the Icy Dock Hot Swap 5-in-3's> and am now seeing around 30.8 MB/sec.

 

Now, I know the second PCI-E x16 slot in my ASUS P5B Deluxe board only runs at x4, which is what the AOC controller is rated at, so it bugs me that the syslog says it's x1.  The log also says 2.5 Gbps, while the card says "Up to 3.0 Gigabits/sec per port"; I'm not sure whether they mean each SATA port or each SAS port.  Either way, it seems like I have either a motherboard/BIOS setting issue or a driver issue.

 

Thanks!

 

 

Link to comment

AHA.

 

Found the motherboard setting to change the secondary PCI-E x16 slot to x4; it was set to x1.

 

Feb  1 01:36:52 Tower kernel: mvsas 0000:03:00.0: mvsas: PCI-E x4, Bandwidth Usage: 2.5 Gbps

 

That looks better.  Now I'm running at 77.4 MB/sec.

 

Still slower than I was running before, but this will do for now.
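
Side note: if you want to double-check the negotiated link width and speed from the running system rather than from the mvsas banner, something like the Python sketch below should work. It is only a sketch: it assumes lspci is installed, root privileges (so the PCIe capability block is readable), and uses the 0000:03:00.0 address from the syslog above, so adjust the address for your own slot.

# Print the PCIe link capability ("LnkCap") and live link status ("LnkSta")
# for the MV8. Assumes lspci is present, that we run as root, and that
# 03:00.0 is the card's PCI address (taken from the syslog line above).
import subprocess

out = subprocess.run(["lspci", "-s", "03:00.0", "-vv"],
                     capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    if "LnkCap:" in line or "LnkSta:" in line:
        print(line.strip())
# A healthy result here would read something like: LnkSta: Speed 2.5GT/s, Width x4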

 

So, should I put as many drives as possible on the AOC card, or utilize the onboard SATA controllers first?  I'm still thinking the AOC card may be faster than the onboard controller on my 5-year-old <maybe more> motherboard.

 

Thanks!

Link to comment

So, once the 2TB drives were done and just the 3TB data drive and the 3TB parity drive were finishing the parity check, I was at 130.65 MB/sec.

 

So the next question I have: with 8 drives running on the AOC-SASLP-MV8 controller, is there a chance that I'm maxing out the connection during the parity check?  The card is supposed to be rated at up to 3.0 Gigabits/sec per port, however the syslog entry says "Bandwidth Usage: 2.5 Gbps."

 

Is the driver version limiting my bandwidth usage?  With 3.0 Gigabits/sec per port, I would expect 6 Gbps, not 2.5.

 

My drives are SATA 2, 3 Gbps, so in theory, if the controller is only using 2.5 Gbps, one drive at full throughput could max out the bandwidth.  Do the onboard controllers not run into the same issue, since they are connected directly to the board?

 

I can live with 70-80 MBps, since that's only for parity checks.  Rarely will all drives be spun up at the same time, so I expect my write speeds to be the same as before.  And of course I have the cache drive too.

 

Thanks,

Marcus

Link to comment

Just a little perspective: even the old SATA I speed of 1.5 Gbps is far greater than your individual drives need.  That is 1500 Mbps (roughly 150 MB/s), and your fastest drive needs around 135 MB/s?  So even with multiple drives clamoring for attention, 2500 Mbps is a lot of headroom.  The reason they keep bumping the limit higher is for fast RAID arrays, and perhaps faster SSDs.

 

Edit: made corrections above; somehow I mixed up Gbps with GB/s...  most embarrassing...
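
For anyone who wants to sanity-check the Gbps-vs-MB/s arithmetic that keeps coming up in this thread, here is a minimal Python sketch.  The 8b/10b encoding factor on SATA and PCIe v1 links and the 135 MB/s drive figure are assumptions for illustration, not measurements.

# Rough link-payload math, assuming 8b/10b line encoding (about 10 line bits
# per payload byte) on SATA I/II and PCIe v1, and an assumed 135 MB/s drive.
def payload_mb_per_s(line_rate_gbps):
    """Approximate usable payload of an 8b/10b-encoded link, in MB/s."""
    return line_rate_gbps * 1000.0 / 10.0

print(f"SATA I  (1.5 Gbps): ~{payload_mb_per_s(1.5):.0f} MB/s")          # ~150
print(f"SATA II (3.0 Gbps): ~{payload_mb_per_s(3.0):.0f} MB/s")          # ~300
print(f"PCIe v1 x1 (2.5 Gbps/lane): ~{payload_mb_per_s(2.5):.0f} MB/s")  # ~250
print(f"PCIe v1 x4: ~{4 * payload_mb_per_s(2.5):.0f} MB/s")              # ~1000

drive = 135  # MB/s, assumed sustained rate for a fast drive of this era
print(f"One {drive} MB/s drive moves {drive * 8} Mbps of payload -- "
      f"well inside a single SATA II link, but eight of them add up.")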

Link to comment
The card is supposed to be rated at up to 3.0 Gigabits/sec per port, however the syslog entry says "Bandwidth Usage: 2.5 Gbps."

 

The MV8 (and your mobo?) are PCIe v1, which is limited to 2.5 Gbps per lane; I suspect that is where the syslog value is coming from.  However, the card is four-lane, so provided it is installed in an x4 or greater slot, the theoretical total bandwidth is 10 Gbps.

 

Is the driver version limiting my bandwidth usage?  With 3.0 Gigabits/sec per port, I would expect 6 Gbps, not 2.5.

 

The 3 Gbps transfer rate is a per-drive value, specified by SATA 2, so with 8 drives the theoretical maximum would be 24 Gbps; this does not imply that the data can be transferred to the host bus at that rate.

 

However, as RobJ has pointed out, no mechanical drive is capable of transferring the data on/off the platter at anything like those speeds.  Even SSDs have trouble achieving 3 Gbps.  Rest assured that the determining factor in your parity speeds is the rotational latency of the physical drives (ameliorated by the read-ahead caching implemented in the drive controller), and the more drives you have, the longer your parity check will take.

Link to comment

...

So the next question I have: with 8 drives running on the AOC-SASLP-MV8 controller, is there a chance that I'm maxing out the connection during the parity check?

Yes, it is very likely. (A simple measurement will tell--see below.)

The card is supposed to be rated at up to 3.0 Gigabits/sec per port, however the syslog entry says "Bandwidth Usage: 2.5 Gbps."

First, you should realize that both of those numbers are theoretical maximums.  "You can't get there from here." :)  That 2.5 Gbps indicates a PCIe v1 connection (vs 5 Gbps for v2), and that number is per lane [x4 in your now-corrected setup].  Both your motherboard and your AOC card are PCIe v1, so no bandwidth is being "wasted".

Is the driver version limiting my bandwidth usage?

No.

Do the onboard controllers not run into the same issue, since they are connected directly to the board?

Not the same issue, because real on-board SATA ports are in the southbridge and do not rely on the PCI Express mechanism. But they are subject to other upper-limit factors.

I can live with 70-80 MBps, since that's only for parity checks.

But wouldn't 120-140 be better? I.e., strive to let the actual disk drive performance be the limiting factor, not the controllers or the PCI Express mechanism. This can possibly be achieved by not pushing the limit of any one bandwidth pool with too much combined disk drive bandwidth consumption. Don't put too many eggs (drives) in any one basket (controller/SB).

 

But how much is too much? You can use the little shell script here [link], and peruse the related discussion in that thread. Given the questions you've asked, and the hardware/drives you have, I'm certain you will benefit.
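
The linked script isn't reproduced here, but the idea behind that kind of measurement is easy to illustrate. Below is a rough Python stand-in, not the script from that thread: it reads sequentially from several drives at once and reports per-drive and combined throughput. The device names are examples only; run it as root, and note that plain reads like this do not modify any data.

# Illustrative only -- NOT the dskt script referenced above. Reads a fixed
# amount from the start of each listed block device concurrently and prints
# MB/s figures, which shows whether a controller's combined bandwidth (rather
# than the drives themselves) is the bottleneck.
import threading, time

DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]   # example: drives on the MV8
CHUNK = 1024 * 1024                              # 1 MiB per read()
TOTAL = 512 * 1024 * 1024                        # 512 MiB per drive

def read_drive(dev, results):
    start, done = time.time(), 0
    with open(dev, "rb", buffering=0) as f:
        while done < TOTAL:
            block = f.read(CHUNK)
            if not block:        # stop early if we somehow hit end of device
                break
            done += len(block)
    results[dev] = done / (time.time() - start) / 1e6   # MB/s for this drive

results = {}
threads = [threading.Thread(target=read_drive, args=(d, results)) for d in DEVICES]
wall = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
wall = time.time() - wall

for dev, rate in sorted(results.items()):
    print(f"{dev}: {rate:6.1f} MB/s")
# Aggregate figure assumes every drive read the full TOTAL amount.
print(f"combined: {len(DEVICES) * TOTAL / wall / 1e6:.1f} MB/s")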

 

--UhClem

 

"If you push something hard enough, it will fall over." --Fud's First Law of Opposition

 

Link to comment

Thanks Peter and RobJ,

 

That actually makes a lot more sense.  So if I'm running with 3 data drives and 1 parity drive, the reads are faster because I'm only performing reads on 4 drives.  Now that I have 7 data drives and 1 parity drive, I'm essentially doubling the read I/O during the parity check, and as such my read numbers will go down.  Is that correct?

 

So, one last question.  Currently the parity drive is on the motherboard controller and all the data drives are on the MV8.  Should I move the parity drive back to the MV8, or keep it on the board?

 

Thanks for all the help. 

Link to comment

Ok, this makes a lot of sense as well.  My ultimate goal is to max out this Antec 1200 with 4 Icy Dock 5 in 3 cages for a total of 20 drives. 

 

Currently I have the MV8 controller with 8 ports, and 7 onboard ports.

 

Even if I split the data usage of the drives evenly across the MV8 and the onboard controllers, as I add more drives I will simply continue to hit the same data throughput limit, correct?  By optimizing now, I'm simply giving myself faster read/write speeds until I add more drives?

Link to comment
Just a little perspective, even the old SATA I speed of 1.5Gbps is far greater than your  individual drives need.  That is 1500Mbps, and your fastest drive needs around 135Mbps?

 

Err... 135MBps?  That's 1080 Mbps.

 

So even with multiple drives clamoring for attention, 2500Mbps is a lot of headroom.

 

... but with 4 lanes we have 10,000 Mbps, while eight physical drives (at 135 MBps) only need 8,640 Mbps.  That's a little bit of theoretical headroom.

Link to comment

Ok, this makes a lot of sense as well.  My ultimate goal is to max out this Antec 1200 with 4 Icy Dock 5 in 3 cages for a total of 20 drives.

Which will require an additional controller (to replace the Syba; the Syba has a max real-world bandwidth of 150-175 MB/s, being PCIe x1 v1). Think about a PCIe v2 controller (maybe an M1015), so that you can exploit the v2 if/when you upgrade the mobo or totally reconfigure.

Currently I have the MV8 controller with 8 ports, and 7 onboard ports.

 

Even if I split the data usage of the drives evenly across the MV8 and the onboard controllers, as I add more drives I will simply continue to hit the same data throughput limit, correct?  By optimizing now, I'm simply giving myself faster read/write speeds until I add more drives?

A couple of clarifications: we're only talking about read speed limitations (your write speed to the array is limited by the RAID4 methodology employed by unRAID). And, realistically, it is only during parity checks that you will push these limits, and only when you're in the first 30-50% of the check (the outer/faster drive zones). So this is not something to really sweat about. But there's no reason not to be optimal either.

 

I expect that you won't notice the "upper limit" until you exceed 6 drives on the MV8, or try using both of the JMicron ports. I think you can use all 5 of the Intel (real/SB) mobo ports without reaching their "tipping point", but dskt will tell you (I don't have any ICH8 experience). So, allocated optimally, 11 data drives (+ parity) should be able to "parity check" at max (with current hardware).
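
To put rough numbers on that allocation, here's a small Python sketch. All of the per-pool ceilings are assumptions in the spirit of this thread -- roughly 800 MB/s real-world for the MV8 on PCIe x4 v1, somewhere around 700 MB/s for the Intel southbridge ports, and 150-175 MB/s for a PCIe x1 v1 port like the JMicron/Syba -- not measured values.

# "Eggs per basket" check: how many ~140 MB/s drives can each bandwidth pool
# feed at full speed before the pool itself becomes the parity-check bottleneck?
# All ceilings below are assumptions for illustration, not measurements.
DRIVE_MB_S = 140                      # assumed full-speed (outer-zone) drive rate

pools = {                             # assumed real-world ceilings, in MB/s
    "MV8 (PCIe x4 v1)":      800,
    "Intel SB (ICH) ports":  720,
    "JMicron (PCIe x1 v1)":  160,
}

for name, ceiling in pools.items():
    print(f"{name}: saturates around {ceiling / DRIVE_MB_S:.1f} drives "
          f"at {DRIVE_MB_S} MB/s each")
# With numbers like these you land near the advice above: about 5-6 drives on
# the MV8, all 5 Intel ports, and only one JMicron port at full parity-check speed.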

 

You don't list your drive model #s, but either the Sgt or Hit 2TBs will be the slowest, and will place an inherent limit on the others (during a parity check), so factor that into your "tipping point" decision. Ie, the Sgt 3TB data drives do not need to use all of their max 170+ MB/s, only what the slowest drive's max is.

 

--UhClem

 

Link to comment

Adding more drives should not increase parity check time as long as there is enough bus capacity. PCI-e x4 has enough capacity to allow access of up to 8 drives concurrently with no bus contention. Adding a larger parity drive will increase check time proportionally. The maximum per drive access rate is about 156MBps. Drives are starting to approach this rate with the newest Seagate drives reaching 140MBps. These fast new drives should be connected to at least SATA 2 ports to allow unconstrained disk access. Once drive access rates exceed 150MBps then PCIe x4 will no longer be sufficient to support 8 drives.

Link to comment

Just a little perspective, even the old SATA I speed of 1.5Gbps is far greater than your  individual drives need.  That is 1500Mbps, and your fastest drive needs around 135Mbps?

 

Err... 135MBps?  That's 1080 Mbps.

 

Yeah, I blew that, didn't I !!! :-[

Link to comment

... Drives are starting to approach this rate with the newest Seagate drives reaching 140MBps.

You're a couple of years behind; the DL green 2TBs did that. The newest ones (DM, 7200 [ie, OP's 3TBs]) do 170-180.

Once drive access rates exceed 150MBps then PCIe x4 will no longer be sufficient to support 8 drives.

(access transfer) You're still talking theory. PCIe x4 (v1) can only sustain 6 drives @140 MB/s (and only @130 MB/s on lesser motherboards).
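
For the curious, the back-of-the-envelope version of that figure, assuming an extra ~20% of protocol overhead on top of the 8b/10b encoding (the exact efficiency varies by chipset):

# Why PCIe x4 v1 tops out around six fast drives. The 0.8 efficiency factor
# for TLP/DLLP protocol overhead is an assumption; real chipsets differ.
lanes = 4
payload_per_lane = 2.5 * 1000 / 10        # 2.5 Gbps with 8b/10b -> ~250 MB/s
usable = lanes * payload_per_lane * 0.8   # ~800 MB/s after protocol overhead
print(f"usable ~{usable:.0f} MB/s -> {usable / 140:.1f} drives @ 140 MB/s")
# ~800 / 140 is about 5.7, i.e. roughly six drives running flat out.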

 

Jus' keepin' it real ...

 

--UhClem

 

Link to comment

So... if I went with the ASRock Z77 Extreme4 LGA 1155 Z77 ATX Intel motherboard and an i3 3225, I should be set pretty well?

 

Onboard: 4 SATA 2, 2 SATA 3.

 

That would give me 6 onboard, plus the MV8 with 8.  And then, when I'm ready to go bigger, add the IBM M1015?  That board can hold 2 x16 cards at x8 each.

 

And I'm guessing the i3 3225 will blow the 2.4 GHz Core 2 Duo out of the water.

 

Thanks!

Link to comment

So.... I took a drive to Microcenter....

 

I picked up an ASRock Z77 Extreme4 LGA 1155 Z77 ATX Intel motherboard and an i3 3225 for $210 out the door.  I had 6GB of DDR3 left over from another project.  Connected both 3TB drives to 2 of the onboard SATA 3 connections.  The other drives are connected to the MV8 controller.  I went ahead and am running another parity check, and am happy to report a steady 99.4 MBps.  I've seen it as high as 115 MBps, and fully expect it to go much higher once the slower drives are passed.

 

Thanks everyone for your help, I appreciate it and have been enjoying this.  I now have another 6 onboard ports to use <2 SATA 3, 4 SATA 2>, and still have the Syba for 2 SATA 2 connections.  Between these 8 free ports I think I will be good for a while.  I'll need to order a couple more Icy Dock cages to fill it out, but I'm psyched.  Will post pictures soon.  =)

Link to comment

So.... I took a drive to Microcenter....

 

I picked up an ASRock Z77 Extreme4 LGA 1155 Z77 ATX Intel motherboard and an i3 3225 for $210 out the door.

Gee, I feel guilty--indirectly responsible for your expenditure... er, investment. Looking on the bright side, PC hardware is such an incredible bang-for-the-buck these days. [When I started programming, gasoline was $0.35/gallon and computer memory was $1/byte--I earned about $10k/yr--1968-69.]

I had 6GB of DDR3 left over from another project.  Connected both 3TB drives to 2 of the onboard SATA 3 connections.  The other drives are connected to the MV8 controller.  I went ahead and am running another parity check, and am happy to report a steady 99.4 MBps.  I've seen it as high as 115 MBps, ...

I would have expected to see a sustained 115-120 for the first 20% of the check. It may be that the chip on the MV8 (Marvell 88SE6480) doesn't have the processing crunch and/or data-handling throughput to saturate the PCIe x4 (v1). You might try moving one more drive from the MV8 to the Z77 and see if that speeds up the check. If so, move another (and repeat). If not, please do run that dskt script.  You might have an abnormally slow drive.

 

Enjoy your new toy(s).

 

--UhClem

 

Link to comment

I also disabled INT13H in the AOC's BIOS.  I came across THIS in the forum.

 

Did you mean to put a Forum link in here?

 

I did, crud, I'll have to find it.  I had been up for a while, and only got 4 hours of sleep lol.  I'll look for it while at work =)

 

 

Could I remind you about this?

 

 

Thanks

 

 

Link to comment

http://lime-technology.com/forum/index.php?topic=12404.msg117904#msg117904

 

Special Instructions:

First off, always disable the INT13 function on these cards.  During boot, hold Ctrl-M at the appropriate time (the screen that lists all the drives' serial numbers), then go to the second BIOS menu and disable INT13.  If you have more than one of these cards, you need to do this for each card (they share the same BIOS menu, so just select the second controller and disable INT13 for that one as well).

 

 

-Marcus

Link to comment
