Proper way of using RocketRAID 2310 as a SATA controller with UnRAID?


henris


I recently ran out of SATA ports on my motherboard, so it was time to buy my very first SATA controller add-on card. I read through the forums and the Highpoint RocketRAID (RR) 2310 came out on top of the bunch, as I wanted PCIe and it was the only one of the "known-to-work" cards available at my local dealer. A bit pricey at 160 EUR, but compared to the overall system price it didn't really matter.

 

I didn't find exact configuration instructions in the UnRAID forums, but I read a few posts talking about non-RAID controller or JBOD modes. As I didn't find any hint of a non-RAID controller mode in the RR manual, I decided to try JBOD. I had two new Samsung F1 1TB disks, which I initialised with the RR BIOS utility. I then created two JBOD arrays with one disk in each. Everything went fine, so I proceeded to boot up UnRAID. The new disks were visible in the Devices tab. I decided to test a little bit by assigning one of the disks as the Cache disk. After formatting, the Cache disk was available for use and I copied 300GB of data to it successfully. No problems at all.

 

For some other reason I restarted the machine (with the proper Powerdown script) and briefly saw the text "legacy" displayed somewhere during the RR BIOS load. I restarted again and went into the RR BIOS utility. The disk I had assigned as the Cache disk had become "Legacy", while the unused disk still had its old, proper status. Also, the first JBOD array (with the legacy disk) I had created was gone; the other JBOD array (with the unused disk) was still there. At first I thought something had gone totally wrong, but luckily I decided to try booting UnRAID. The Cache disk was still there with all the data present and was otherwise functioning properly. I came to the conclusion that the UnRAID assignment and formatting had overwritten RR's own array definition. I then deleted the existing JBOD array and re-initialised both disks (no harm, as the data was only for test purposes). I left the disks in the Initialised state without creating any arrays and started UnRAID.

 

The disks were once again visible in the Devices tab, and once I assigned and formatted them they became "Legacy" in the RR BIOS utility. So the behaviour was consistent. Now I have a few questions:

 

1. Is the above conclusion about UnRAID overwriting RR's array-information correct?

 

2. Would it actually have required any configuration actions within the RR BIOS utility to make the new disks available to UnRAID (my assumption based on the above is "no")?

 

3. Does this also mean that you cannot use RR's RAID mode at all with UnRAID, since it always overwrites the array information when assigning and formatting the disks? I was considering using HW RAID1 for parity, but this would make it impossible, at least with the RR2310. How would I be able to see the RR arrays in UnRAID, as UnRAID seems to see only disks?

 

4. Is it safe to use the drives in "Legacy" mode? Is there any performance hit expected?

 

Thanks in advance for any replies!


I can't answer the other questions authoritatively.

 

3. Does this also mean that you cannot use RR's RAID mode at all with UnRAID, since it always overwrites the array information when assigning and formatting the disks? I was considering using HW RAID1 for parity, but this would make it impossible, at least with the RR2310. How would I be able to see the RR arrays in UnRAID, as UnRAID seems to see only disks?

 

I doubt you can use RAID mode reliably without using a Highpoint driver (and possibly some special configuration in unRAID).

In addition, there is little benefit to using RAID1 on your parity drive.

If anything, it will be a detriment to performance. If you are using a recent drive for parity (like a 1.5TB 32MB-cache Seagate) you will have good performance. Adding a RAID1 mirror may slow down writes, as both mirrored writes usually have to finish before the next operation.

 

4. Is it safe to use the drives in "Legacy" mode? Is there any performance hit expected?

 

I doubt there is a performance hit from using a drive in legacy mode. However, I would recommend that at some point you use the drive, then move it to another machine or test it on another port in the same machine.

 

I've seen cases where a RAID BIOS sets the partition up in a special way that does not allow the drive to be used directly outside of the RAID card.

I would expect that Legacy mode allows the drive to be used outside the card, but it is worth verifying.

 

This is a very important test: if your RAID card were to fail for some reason, you want to be sure the data is accessible on another controller, otherwise you will need to purchase the same controller again.

 


Q3.

I doubt you can use RAID mode reliably without using a Highpoint driver (and possibly some special configuration in unRAID).

In addition, there is little benefit to using RAID1 on your parity drive.

If anything, it will be a detriment to performance. If you are using a recent drive for parity (like a 1.5TB 32MB-cache Seagate) you will have good performance. Adding a RAID1 mirror may slow down writes, as both mirrored writes usually have to finish before the next operation.

I guess the only performance benefit would be during a parity check, since with RAID1 you get better read performance, but that is insignificant in the big picture. For the Cache disk, RAID1 would make sense by providing data protection until the next mover execution. I guess I'll just stick to non-RAID controller mode.

 

Q4.

I doubt there is a performance hit from using a drive in legacy mode. However, I would recommend that at some point you use the drive, then move it to another machine or test it on another port in the same machine.

 

I've seen cases where a RAID BIOS sets the partition up in a special way that does not allow the drive to be used directly outside of the RAID card.

I would expect that Legacy mode allows the drive to be used outside the card, but it is worth verifying.

 

This is a very important test: if your RAID card were to fail for some reason, you want to be sure the data is accessible on another controller, otherwise you will need to purchase the same controller again.

I will definitely test this. I will also try to attach a disk with existing partitions and data to the controller and see if it is recognised by UnRAID without any configuration in the RR BIOS utility. It would make ad hoc data transfers much easier and faster to perform (in my setup, local read/write speeds are much higher than over-the-network ones), as all the SATA ports on the motherboard are already in use.


Does the RocketRAID controller support the idea of multiple RAID methods on multiple drives?

 

I.e., a la SAFE 50, where two drives are used:

  50% of the two drives are in a RAID1 arrangement.

  50% of the two drives are in a RAID0 arrangement.

 

I was looking at the Areca controllers which allow you to configure drives and arrays in this sort of arrangement.

 

It would be a great way to have two drives supporting cache (protected) and parity (speed).

 

 


It sounds like that would introduce a lot of half-stroke seeking on the pair of drives when copying from the cache drive to the array. A simpler RAID1 of the cache drive and a standard parity drive might be better performance-wise, unless the files in question fit within unRAID's cache (probably not the case for DVDs/Blu-ray).

  • 4 months later...

2. Would it actually have required any configuration actions within the RR BIOS utility to make the new disks available to UnRAID (my assumption based on the above is "no")?

 

4. Is it safe to use the drives in "Legacy" mode? Is there any performance hit expected?

Just added another disk to the system. This time I didn't do anything in the RR BIOS. The drive was correctly seen by UnRAID and is now working perfectly. There have been no issues related to performance; in fact, the drives connected to the RocketRAID 2310 are performing on par with the ones connected to the motherboard SATA connectors.


Yes, it is fully working with stock 4.4.2. However, I stumbled across the topic below, which is making me a little worried:

http://lime-technology.com/forum/index.php?topic=3879.0

 

I'm using the drives without any initialisation from the RocketRAID BIOS, so the disks are reported there as legacy. I couldn't make UnRAID see initialised (JBOD or RAID1) arrays at all. I think the RocketRAID BIOS provides some sort of native, transparent interface for drives outside RR arrays, and this is why Linux is able to see them. There were some comments from Tom about RR2320 compatibility in the thread below:

http://lime-technology.com/forum/index.php?topic=2974.0

 

The RR2310 is priced here at around 150 EUR and the RR2320 (8-port, 4x) at around 275 EUR. Since I still have multiple PCIe 4x slots available, I will probably go with the safe bet and get two RR2310s in the near future.



It appears as if the RR2310 writes a "Legacy" indicator of some kind to sector 8 of a disk when used in the "Legacy" mode.

The big question is what is in sector 8/9 of the data disk in unRAID? Assuming 512-byte sectors, I think you might get lucky.

 

The first sector (sector 0) on the disk holds the master boot record and partition table. The first "partition" starts on the first whole "cylinder" as reported by the disk's geometry. In every case I've seen, the first partition starts at sector 63.

 

This leaves sectors 1 through 62 unused. In fact, in unRAID, parity calculations are not performed on the area before sector 63. (Tom has stated in a PM to me that he will eventually change this, as it is one of the items that would need to change to support file-system types other than reiserfs.)

 

For now, I think the use of sector 8 by the card is safe, since I know unRAID is not using it. In the future, I cannot see any file system starting earlier than the existing start point (sector 63), since that would break compatibility with older operating systems and their ability to mount the partition, so odds are good it will be OK then too.
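
If you want to see for yourself whether anything is sitting in that gap, something like the little script below will dump sector 8 so you can eyeball it. This is just a rough sketch, not a polished tool: it assumes 512-byte sectors, /dev/sdX is a placeholder for whichever drive hangs off the RocketRAID card, and it only ever reads.

#!/usr/bin/env python3
# Rough sketch: dump sector 8 of a drive so you can see whether the RR BIOS
# has written anything into the gap before the first partition.
# Assumes 512-byte sectors; /dev/sdX is a placeholder, pick the real device
# carefully. Read-only, run as root.
import os
import sys

SECTOR_SIZE = 512
SECTOR = 8

dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdX"
fd = os.open(dev, os.O_RDONLY)
try:
    os.lseek(fd, SECTOR * SECTOR_SIZE, os.SEEK_SET)
    data = os.read(fd, SECTOR_SIZE)
finally:
    os.close(fd)

if data.strip(b"\x00"):
    # print the first 64 bytes as hex so any signature is easy to spot
    for off in range(0, 64, 16):
        print("%04x  %s" % (off, data[off:off + 16].hex()))
else:
    print("sector %d of %s is all zeros" % (SECTOR, dev))

The same read works for any sector in the 1-62 gap, so it can also be used to confirm that nothing else in that area has been touched.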

 

Some history:

At one point I was testing the new "Verify Only" parity check feature. I purposely tried to create a parity error by writing to this unused area of the disk between sectors 1 and 62. I quickly learned that the parity "error" I created by writing directly to the disk was not detected.
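
For the curious, the "write" in that test was nothing fancier than the sketch below, give or take (from memory, not the exact commands I used). The device name and sector are placeholders, and it writes raw bytes to the disk, so only ever point it at a scratch drive you can afford to lose.

#!/usr/bin/env python3
# Sketch of the experiment described above: overwrite one of the unused
# sectors (1-62) ahead of partition 1, then run a parity check and see
# whether unRAID reports it. THIS WRITES RAW BYTES TO A DISK -- use a
# scratch drive only. /dev/sdX and sector 30 are placeholders.
import os

SECTOR_SIZE = 512
TEST_SECTOR = 30
DEVICE = "/dev/sdX"

payload = b"parity-test".ljust(SECTOR_SIZE, b"\x00")
fd = os.open(DEVICE, os.O_WRONLY)
try:
    os.lseek(fd, TEST_SECTOR * SECTOR_SIZE, os.SEEK_SET)
    os.write(fd, payload)
finally:
    os.close(fd)
print("sector %d overwritten; now run a parity check" % TEST_SECTOR)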

 

I wrote a quick PM to Tom stating that there might be an issue with the parity detection process, including some details of what I found, and he responded:

Thank you for the detailed report!  There is a simple explanation: parity is only calculated over partition 1 of each drive, so the MBR is excluded and what you saw is expected.

 

The history behind this is as follows.  Originally (like circa 2005), I wanted to let there be two partitions on each disk, p1 and p2.  p1 would exist following the MBR and extend to some point on the drive, then p2 would start from the end of p1 and extend to the end of the disk.

 

There would be two uses for p2. One would be to provide some storage outside the read/modify/write area of the disk, which would thus be much faster for writes.

 

The second use was to handle the case where you replaced a small data drive with one that was larger than Parity. In this case I would just make p1 on the new disk the same size as Parity, and p2 would be available as non-parity-protected space.

 

So I just went about writing the code to map the 'md' devices to partition 1 on the underlying disk devices, and never got around to implementing the 'partition 2' idea, mainly because it would prove to be a can of worms from a management and usage perspective.

 

Unfortunately, however, when the unraid driver starts, it is "hardcoded" to use partition 1, and if partition 1 does not exist or does not take up the full size of the hard disk, then bad things will happen. Hence, emhttpd ensures these rules are enforced and will not start the array otherwise.

 

Turns out this little code shortcut has been the source of a few problems and causes some issues for features I'd like to add, such as adding windows-formatted NTFS drives.

 

So what I want to do is make some driver changes so that the 'md' devices map to the entirety of the hard drive, and then define additional 'partition' md devices.  That is, now we have this:

 

  /dev/sda1 -> /dev/md1

  /dev/sdb1 -> /dev/md2

  :

 

I want to change to:

 

  /dev/sda -> /dev/md1

  /dev/sda1 -> /dev/md1p1

  /dev/sdb -> /dev/md2

  /dev/sdb1 -> /dev/md2p1

 

This change will eliminate complication in system management and permit some new features (should have done this from the start, oh well). Unfortunately, it will also require a parity sync run to generate parity for the MBRs (and then Parity really will not have an MBR). So this change will be in release 5.0.

He wrote that note to me last April, so who knows if it is still planned for version 5.0 of unRAID.  Even then, it sounds as if the first partition would still start where it currently does (sector 63), but that parity would just be calculated on the entire device instead of just the first partition.

 

The only "wrinkle" is if "Legacy" is written to sector 8 of the parity drive after Tom changes the parity routines to include the entire disk and not just the first partition.  It would then overwrite/corrupt whatever true "parity" data is stored there.  For that reason, I'd never put the parity disk on the RocketRaid controller.  It might not be an issue now, but it might be some day.

 

If the RR2310 does not write the "Legacy" indicator to sector 8, and instead writes it somewhere in the last two gigabytes of space on the drive, who knows what will happen as it clobbers your data. For that reason, do not configure the disks in the RR BIOS...

 

Joe L.


While Highpoint is working on this issue, I did some digging on my own. I noticed that some people have sata_mv listed in their syslog while some don't (I'm one of the latter). An example is in this post:

http://lime-technology.com/forum/index.php?topic=4390.msg41418#msg41418

 

I also tried to find information regarding AHCI and SATA in Linux in general. I stumbled across the following:

http://linuxmafia.com/faq/Hardware/sata.html

 

From the above I concluded (perhaps erroneously) that the AHCI driver and sata_mv work at the same level and only one of them is used at a time. With UnRAID it seems like the AHCI driver is used when AHCI is selected in the BIOS, and sata_mv is used in the other situations (IDE, enhanced ATA). There might be other factors which affect which driver is used, but this is beyond my current knowledge.

 

So this might explain why I'm not seeing the warning message; my setup does not utilise the sata_mv driver since I'm using AHCI. But this does not remove the original worry about possible data corruption, since it is only the warning message that is missing ;)
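
If you want to check which driver is actually in play on a given box, looking at the loaded modules is enough for a first pass. Below is a minimal sketch; the rr231x_00 module name is only my guess based on Highpoint's driver package naming and may differ, and a driver built into the kernel rather than loaded as a module will not show up in /proc/modules at all.

#!/usr/bin/env python3
# Quick check of which SATA drivers are loaded: ahci, sata_mv, or the
# proprietary Highpoint module. rr231x_00 is an assumption taken from the
# driver package naming; a built-in (non-module) driver will not appear
# in /proc/modules.
CANDIDATES = ("ahci", "sata_mv", "rr231x_00")

with open("/proc/modules") as f:
    loaded = {line.split()[0] for line in f}

for name in CANDIDATES:
    print("%-12s %s" % (name, "loaded" if name in loaded else "not loaded"))

Checking the syslog for which driver actually claimed the controller would be more definitive, but this is a quick sanity check.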

 

I also found an installation guide for the RocketRAID 23xx for SUSE which contains the following:

In the welcome screen, select "Installation", press F6 and select Yes to load the driver update medium. If the to-be-installed system is openSUSE 10.3, type in "brokenmodules=sata_mv" (without quotation marks) after the Boot Options; if the driver is on the floppy diskette and it is openSUSE 10.3 x86_64, add an extra "insmod=floppy" (without quotation marks) after the "brokenmodules=sata_mv", and then press Enter to start the installation.

 

This is somewhat related to the same issue, as the brokenmodules option disables the loading of the sata_mv driver, after which a proprietary driver from Highpoint is installed.

 

Link to the guide itself:

http://www.highpoint-tech.com/BIOS_Driver/rr231x_00/Linux/newformat/v2.4-090420/Install_openSUSE_RR231x_0x.pdf

 

 

 

 

  • 2 months later...

I finally received an answer from Highpoint:

Hello,

 

The card does not write any data to the disks at the BIOS level, unless they were configured as arrays, and an error was reported (status change).

 

The sata_mv driver should be disabled unless our driver is not installed.

 

Regards,

 

Customer Support Department

The first point is understandable and relieving; there is no writing to the disks by the controller itself when using legacy mode. But I don't fully understand the latter comment about disabling the sata_mv driver. I assume it only relates to using the sata_mv driver for the RR2310 controller itself, but you could have a system which makes use of both the sata_mv and RR2310 drivers. In that case you would get the warning message in the syslog, which you could ignore.

 

I currently have seven drives in my system and I have had zero problems with the RocketRAID 2310.

