Drive Placement


ajeffco

Recommended Posts

Hello,

 

From lurking on these forums, I've seen that for the best performance, drives should be on the motherboard SATA ports.  I've got an Atom board with 2 ports, and a Promise SATA300 TX4.

 

From what I've seen, the parity drive should be on the motherboard.  Should I place the largest data drive (2TB) on the last open motherboard port?  Or would it make sense to place my smaller cache drive (380GB) there?

 

Thanks,

 

Al

Link to comment

Place your parity drive and your largest data drive on your MB ports.

 

You want the largest and most-written-to drives on the MB ports, and the smallest and least-written-to drives on the PCI bus. Now I'll explain why this is the case.

 

You might not notice much impact on perceived performance since you're writing to a cache drive first. That delays the possible performance impact of writing to multiple drives until later, when you won't notice it. The performance impact comes into play during parity builds, parity checks, failed drive situations, drive rebuilds, simultaneous writes, and simultaneous reads.

 

The PCI bus has 133 MB/sec of maximum combined bandwidth for ALL devices on it. To determine the upper bound, you take the total available bandwidth for the bus and divide it by the number of active devices. With 3 array drives on it, your maximum upper bound during those situations is 44 MB/sec [ 133 / 3 ]. With 4 array drives, the maximum upper bound is 33 MB/sec [ 133 / 4 ].

 

Now, when moving data from the cache drive to an MB-connected array drive, your theoretical upper bound is 133 MB/sec [ 133 / 1 ]. When going from the cache drive to a PCI-connected array drive, the theoretical upper bound is 66 MB/sec [ 133 / 2 ].
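Just to make that arithmetic explicit, here's a small sketch of those ceilings (a rough model only; it ignores controller and protocol overhead):

```python
# Rough per-device ceiling when the 133 MB/s PCI bus is shared.
PCI_BUS_MB_S = 133

def per_drive_ceiling(active_pci_devices):
    """Upper bound per device when this many devices are active on the PCI bus."""
    return PCI_BUS_MB_S / active_pci_devices

print(per_drive_ceiling(3))  # 3 array drives active on PCI              -> ~44 MB/s each
print(per_drive_ceiling(4))  # 4 array drives active on PCI              -> ~33 MB/s each
print(per_drive_ceiling(1))  # cache -> MB-connected drive (1 PCI device)  -> 133 MB/s
print(per_drive_ceiling(2))  # cache -> PCI-connected drive (2 PCI devices) -> ~66 MB/s
```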

 

In reality, the speed of writing to the array is well below these limits even before the PCI bus becomes a factor, and it could be lower still if the parity and data drives were both on the PCI bus. To get a close approximation of the speed you'll see when writing to the array, you need to factor in the 4 I/O operations required: the time your array drives take to read the original data and parity, perform at least one disk rotation, and write the new data and parity.
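For illustration, here's a minimal sketch of the read-modify-write idea behind those four operations, assuming simple XOR parity (the function and sample values are just for illustration):

```python
# To change one block on a data drive, a parity-protected array must:
#   1) read the old data block, 2) read the old parity block,
#   3) write the new data block, 4) write the new parity block,
# with at least one platter rotation between the reads and the writes.

def updated_parity(old_parity: int, old_data: int, new_data: int) -> int:
    # XOR out the old data's contribution to parity, XOR in the new data's.
    return old_parity ^ old_data ^ new_data

# Single bytes standing in for whole disk blocks:
old_data, new_data, old_parity = 0b10100001, 0b01110110, 0b11001100
print(bin(updated_parity(old_parity, old_data, new_data)))
```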

 

In high-performance systems, the maximum observed write speed to the array is 43 MB/sec. Most users' systems are around 25-35 MB/sec for writes. Reads are mostly limited by the speed of the data drive. If you're moving the data across the network, then network limitations come into play.

 

Again, the bus limit comes into play during parity builds, parity checks, failed drive situations, drive rebuilds, simultaneous writes, and simultaneous reads.

 

Link to comment

The cache drive should be as large as "the most data you'll ever want to transfer to the array in a single day or two", excluding the initial loading of the array. For most people this probably means "a backup of their main system" or a few movies.

 

For this reason I'd buy a cheap IDE drive for cache duties, assuming you're using an early Atom board and it has an IDE port. I'd put the 380GB SATA drive to use as a data drive.

 

Keep your smallest, slowest, and least-used drives on the PCI bus for another reason: as a parity check progresses beyond their capacity, they'll no longer be accessed and the check will speed up.

 

Another consideration is what NIC the Atom board has: some have PCI 10/100 or PCI Gbit NICs, while others have PCI-e Gbit NICs.  If the board has a 10/100 NIC you're limited to 12.5 MB/s, which pretty much any unRAID system can keep up with, so there's no need for a cache disk.
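A quick sketch of those wire-speed ceilings (raw link speed only, ignoring protocol overhead):

```python
# Best-case throughput in MB/s = link speed in Mbit/s divided by 8 bits per byte.
for mbit in (100, 1000):
    print(f"{mbit} Mbit/s link -> {mbit / 8:.1f} MB/s best case")
# 100 Mbit/s  -> 12.5 MB/s  (almost any unRAID box keeps up without a cache drive)
# 1000 Mbit/s -> 125.0 MB/s (disk and bus limits show up first)
```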

 

 

 

 

Link to comment

This board has a gig card, and using TeraCopy I'm seeing transfers of up to 36 MB/s.  The only thing that struck me as odd is that the verify step in TeraCopy is showing 90-100 MB/s.

 

The 380GB WD is just an oddball drive; I figured I'd use it for cache, since I really doubt I'll be transferring more than that per day ;).  I'm planning on getting more 2TB drives and moving the 1TB drives down, which will eliminate the 500GB.  Just a matter of time, money, and the right deal :)

 

Thanks again for the answers.

Link to comment

Wow, you were dead on with the numbers, Brit.

 

I'm running the pre-clear on all the drives; since I was playing around the last few days, the drives had some old junk on them...

 

The two drives on the MB SATA ports are each reading at 100 MB/s.

 

The 4 drives on the Promise SATA300 TX4 card in the PCI slot are reading at 30-33.6 MB/s.

 

OK, question for the PC illiterate :).  Are PCI-X and PCI-e slots faster, resulting in better performance?

Link to comment

When there are more USB ports on a motherboard, do they all run at the same speed?  Or is there some point where they start sharing the bandwidth?

 

After watching this, I gotta start looking out for a new motherboard :)  I'm also assuming that if I change out mobos, then unRAID won't care as long as it's 1) supported and 2) booting from the same USB key that I registered with?

Link to comment

Are PCI-X and PCI-e slots faster, resulting in better performance?

 

Yes. Do not confuse PCI-X and PCI-Express; they are not the same. The trend on consumer motherboards is to provide more PCI-Express slots, replacing the normal PCI slots.

 

For a similar read on performance of PCI-Express, read this topic as well: http://lime-technology.com/forum/index.php?topic=7526.0

 

With PCI-Express you can have anywhere from 1 to 16 lanes. In the first revision (1.0) of PCI-Express, each lane provides 250 MB/sec of bandwidth in each direction. The common PCI-Express slots are 1x, 4x, 8x, and 16x, with corresponding maximum bandwidths of 250 MB/s, 1000 MB/s, 2000 MB/s, and 4000 MB/s.

 

There is also a revision 2.1 of PCI-Express, which provides double the bandwidth of the first. This allows 1x, 4x, 8x, and 16x slots to provide maximum bandwidths of 500 MB/s, 2000 MB/s, 4000 MB/s, and 8000 MB/s.

 

There's a proposed revision 3.x of PCI-Express, which is set to double the bandwidth of revision 2.1. That would allow 1x, 4x, 8x, and 16x slots to provide maximum bandwidths of 1000 MB/s, 4000 MB/s, 8000 MB/s, and 16000 MB/s.
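A small sketch that reproduces the figures above, using the per-lane numbers as quoted (real-world throughput is lower due to encoding and protocol overhead):

```python
# Per-lane bandwidth (MB/s, each direction) as quoted above for each revision.
PER_LANE_MB_S = {"1.0": 250, "2.1": 500, "3.x": 1000}

for revision, lane_bw in PER_LANE_MB_S.items():
    for lanes in (1, 4, 8, 16):
        print(f"PCI-Express {revision} x{lanes}: {lane_bw * lanes} MB/s")
```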

 

PCI-Express provides a variety of possible configurations for physical and electrical characteristics. The physical configuration is the actual connector on the motherboard; the electrical configuration is how many lanes are wired to it. Usually they're the same, but sometimes motherboard builders will use an 8x physical slot that's wired for 4x electrical.

 

Now for PCI-X: it is typically found on server motherboards only. It revised PCI (32-bit, 33 MHz) to expand the bus to 64 bits, thus doubling the bandwidth, and then expanded the clock from 33 MHz to 133 MHz, thus quadrupling it. With these improvements combined, the bandwidth of PCI-X (133 MHz, 64-bit) is 8 times that of standard PCI (33 MHz, 32-bit). A server-spec PCI-X slot (133 MHz, 64-bit) provides roughly 1064 MB/s.
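The raw-bus arithmetic behind those figures (clock times bus width, ignoring protocol overhead):

```python
# Peak bus bandwidth in MB/s = clock (MHz) * bus width (bits) / 8 bits per byte.
def bus_bandwidth_mb_s(mhz, bits):
    return mhz * bits / 8

print(bus_bandwidth_mb_s(33, 32))   # standard PCI (33 MHz, 32-bit)  -> 132 MB/s (~133)
print(bus_bandwidth_mb_s(133, 64))  # PCI-X (133 MHz, 64-bit)        -> 1064 MB/s
```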

 

 

Most of this information is summed up from Wikipedia entries: http://en.wikipedia.org/wiki/PCI-X and http://en.wikipedia.org/wiki/Pci-express

Link to comment
PCI-Express provides a variety of possible configurations for physical and electrical characteristics. The physical configuration is the actual connector on the motherboard; the electrical configuration is how many lanes are wired to it. Usually they're the same, but sometimes motherboard builders will use an 8x physical slot that's wired for 4x electrical.

 

They can be variable, too: say you have two physical x16 connectors; populating both may result in two x8 links (check with the MB manufacturer or chipset documentation). This is common on CrossFire and SLI boards and is due to a lack of PCI-e lanes available in the chipset. Also, MB manufacturers tend to fit only x1 and x16 slots. Supermicro boards still use x4 or x8 slots, but more mainstream consumer boards don't tend to use the x4 and x8 connectors anymore. Check with the MB manufacturer before purchasing that the x16 slot isn't designated as video-only. Some MB manufacturers won't support fitting RAID or I/O controllers in the x16 slot. That doesn't mean it won't work, but it does mean they won't help if it doesn't work. An x4 PCI-e slot's bandwidth is good for 8 SATA ports.

 

Lastly, PCI-E is full duplex (read and write at the same time), while PCI-X is half duplex (read or write).

 

 

Link to comment
