Areca Controller Configuration for unRAID


Recently there have been some very well priced Areca controllers on ebay including:

 

ARC-1230, ARC-1231ML, ARC-1260, ARC-1261ML, ARC-1280, ARC-1280ML

 

I bought an ARC-1231ML and experienced 2 primary issues with drives connected to it.

 

1 - The drive names in the unRAID dropdown use a strange notation that identifies the slot on the Areca controller rather than the drive itself. Using this name would work, but if you ever moved the drive to a different slot, put in a replacement drive, or moved the drive to a different controller, unRAID, which uses the drive serial number to track these changes, would be oblivious. This is a very serious issue, and without a workaround it would make these controllers very difficult to use for us. Fortunately there is an easy fix courtesy of bubbaQ.

 

2 - Less critical, but the smartctl command doesn't work in the normal way. It is not at all obvious, but there is a way to coerce smartctl to work with the controller, again courtesy of bubbaQ. Newer controllers got a firmware update that fixed the smartctl problem and would probably be more compatible with unRAID's GUI. It may take some help from unRAID and Dynamix to better support the unusual way smartctl has to be called, but the built-in GUI works well enough to get the controller configured, and myMain has configuration options that make smart reporting easy until (if?) unRAID's GUI supports this configuration.

 

Although I was excited to get my $69 12-drive controller, I will say one thing on the negative side. It is a long controller board, and the connections are on the far edge, making the effective length even longer. If you are cramped in that dimension, which I was, you may have trouble fitting it in your case. As for me, I was able to make it just barely work. I only needed 8 ports for now, and will need to do some work to use the last 4, but it is possible. Do some measuring to make sure it will work for you! I wish they put the connectors on the top edge like LSI does.

 

I'll say one other negative thing - people running ZFS with this controller had complaints about drives dropping from the controller due to the drives' internal error-correcting behavior exceeding the controller's timeouts. I see there are configuration options to lengthen these timeouts, and I (personally) almost never see a reallocated sector, which is the main culprit. I don't think this is a big deal, but I'm trying to give all the pros and cons I know.

 

On the positive side, you should be able to configure a RAID0 parity set on the controller and pass through the remaining disks. I have been using a RAID0 parity for years with an ARC-1200 (2-port card). Being able to combine the capacities of 2 or even more older drives to make a parity big enough to allow a large new disk in your array is pleasing, and you get a little speed bump as well. I am using 2 3TB drives to make a 6TB parity on my main array and I am very happy with the solution. The alternative is having to buy 2 of the larger disks to get any benefit, kind of a downer when the new larger disks are pricey. Some will quickly point out that this impacts the reliability of your parity. Certainly take that into consideration and don't do it if you believe this is an undue risk.

 

INSTRUCTIONS

 

Here are the instructions to overcome the Areca's peculiarities. I am working on this in real time and will update these with more information as I continue.

 

1 - Copy bubbaQ's 60-persistent-storage.rules from the zip file in THIS post to your flash drive (e.g., \\tower\flash).

 

1.5 - Optionally you can bring up the unRAID GUI and see the strange identifications on the drives. Don't assign anything.

 

2 - Run the following commands from a telnet prompt:

        cp /boot/60-persistent-storage.rules /lib/udev/rules.d

        udevadm trigger

 

3 - Bring up / refresh the unRAID GUI. You should see the drives on the controller identified in the familiar drive naming convention. If this is an existing array, they should match up properly with slots in the array already configured for that drive.

 

4 - Run the command

        lsscsi -g|grep "Areca"

 

      It should produce output something like:

        [2:0:16:0]  process Areca    RAID controller  R001  -          /dev/sg9
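If you want to script this lookup, the device name is the last field of the matching line. Here is a minimal sketch that uses the sample output above in place of a live lsscsi call, so the parsing can be shown on its own:

```shell
#!/bin/sh
# Minimal sketch: pull the Areca generic SCSI device (/dev/sgN) out of
# lsscsi output. SAMPLE stands in for a live `lsscsi -g | grep "Areca"`
# call so this runs anywhere; on a real system, pipe lsscsi in instead.
SAMPLE='[2:0:16:0]  process Areca    RAID controller  R001  -          /dev/sg9'
DEV=$(printf '%s\n' "$SAMPLE" | awk '/Areca/ {print $NF}')
echo "$DEV"
```

On a live system the first two lines collapse to DEV=$(lsscsi -g | awk '/Areca/ {print $NF}').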

 

5 - Run the command:

        smartctl -a -d areca,{slot number}/{container number} {device id from the lsscsi output above}

 

      for example:

        smartctl -a -d areca,5/1 /dev/sg9

 

      Notes:

        - For me, the container number didn't matter, as long as it was in the range 1-8. It may be important on some controllers. The slot number is very important.

        - If you are using JBOD mode on the controller, the slot number will correlate to the slot on the controller board. For my ARC-1231ML, there are three SAS connectors. The bottom one is slots 1-4, the middle one is slots 5-8, and the top one is slots 9-12. I had some issues, due to motherboard obstructions, using the bottom connector, so I used the middle and top and my populated slots are 5-12. I'll have to do some redecorating to use the other SAS connector to use slots 1-4 in the future.

        - If you are using a RAID array and/or pass through mode, TBD
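To pull a report for every populated slot in one pass, the smartctl call can be looped over the slot numbers. A sketch assuming /dev/sg9 and slots 5-12 as in my setup (adjust both for yours); it echoes the commands as a dry run, so drop the echo on a live system:

```shell
#!/bin/sh
# Sketch: build the smartctl invocation for each populated Areca slot.
# /dev/sg9 and slots 5-12 match the example setup described above.
# The echo makes this a dry run; remove it to actually pull the reports.
DEV=/dev/sg9
for SLOT in 5 6 7 8 9 10 11 12; do
  echo smartctl -a -d "areca,${SLOT}/1" "$DEV"
done
```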

 

6 - Configure and start your array.

 

7 - Place the following commands into your go file just before loading emhttp so that this will happen every time you boot. Do a test reboot if you'd like to confirm the configuration is working across reboots.

        cp /boot/60-persistent-storage.rules /lib/udev/rules.d

        udevadm control --reload-rules

        udevadm trigger

        sleep 5

 

-- Remaining instructions are to configure myMain to pull smart reports. --

 

8 - Go into myMain (part of unmenu)

 

9 - Click on one of the drive ID columns for a drive connected to the Areca controller (the ID contains the last 4 characters of the drive's serial number). A configuration panel of drive attributes should come up. In the "custom attributes" section (under Other Notes), you need to configure two custom attributes.

 

      smartopt      -d areca,X/Y /dev/sgZ      (you need to specify the X, Y, and Z based on the instructions in step 5 above)

      spinind          -1

 

      Then click SAVE and close the tab.

 

10 - Refresh the myMain page, then click the "sm" hyperlink on the row of the drive you just set the attributes on. The smart report should come up in a new tab. Verify that the last 4 of the serial number in the smart report match the drive ID. If so, you are golden. If not, go back to step 9 and correct your smartopt.
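The same last-4 check can be done from the command line. A sketch using a made-up serial line in place of real smartctl output (the WD-WCC4N1234567 serial is purely illustrative):

```shell
#!/bin/sh
# Sketch: extract the serial number from a smart report and show its last
# 4 characters, for comparison against the myMain drive ID column.
# SAMPLE stands in for `smartctl -a -d areca,X/Y /dev/sgZ` output, and the
# serial shown is made up for illustration.
SAMPLE='Serial Number:    WD-WCC4N1234567'
SERIAL=$(printf '%s\n' "$SAMPLE" | awk -F': *' '/Serial Number/ {print $2}')
echo "last 4 of serial: ${SERIAL#"${SERIAL%????}"}"
```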

 

11 - Repeat steps 9 and 10 for each drive connected to the Areca controller.

 

 

Good luck. Reply in this thread with successes, failures, or questions. I will try to update the instructions as I get feedback and learn more.

Link to comment

Word of caution... the first time you change the udev naming rules, if you already have drives on an Areca controller in an unRAID array, they will not be found and you will need to reassign them.  Be sure you have a way to ID each drive and put it back into the correct slot number with its new udev name.

Link to comment

I noticed that. But even so, I definitely recommend that anyone using an Areca controller follow my instructions on replacing the rules file in the go file. It will change the drive naming, but you should be able to do a new config, reassign the disks, trust parity, and be back up and running in a matter of minutes.

 

I have an ARC-1200 in my prod box exposing a RAID0 parity that I've used for years. With your input I can now pull smart reports for the drives inside it. On my other (backup) array I replaced a BR10i with an ARC-1231ML. All of those disks are JBOD. Still need to experiment with a mixed setup (i.e., a RAID0 parity pair and the rest set up as "pass through" disks).

 

Have noticed a peculiarity. If you try to pull a smart report from a sleeping drive, the report comes back garbled (while the disk spins up). If you then run another immediately after, it works fine. (The exception is WD drives, which are able to provide a smart report without spinning up; they always work and don't spin up the drive.)

 

Working on some technology to aid me and others with these controllers. Stay tuned.

Link to comment

Swapped the Areca ARC-1231ML in today. Like you mentioned, it is a *tight* fit even in a mid-size tower. Good news is the card displays info & allows access to its BIOS during POST (my other card stopped doing that for some reason).

 

Can you explain what the purpose of this part is? I assumed you meant to run it for each slot (I did). What I got was a SMART report for each. Is this required or just informational? Adding the commands to the go file now and will reboot and run some benches. Hoping for a bump in speed.

 

5 - Run the command:

        smartctl -a -d areca,{slot number}/{container number} {device id from above}

      for example:

        smartctl -a -d areca,5/1 /dev/sg9

Link to comment

Disk names do not survive a reboot. Here are the contents of my go file:

#!/bin/bash
# Start the Management Utility

cp /boot/60-persistent-storage.rules /lib/udev/rules.d
udevadm trigger

/usr/local/sbin/emhttp &

#  cd /boot/packages && find . -name '*.auto_install' -type f -print | sort | xargs -n1 sh -c
/boot/unmenu/uu

cd /boot/packages && find . -name '*.auto_install' -type f -print | sort | xargs -n1 sh -c

Running the commands from telnet works for now.

Link to comment

FYI I am getting a nice speed boost from the ARC-1231ML over the AOC-SASLP-MV8 it replaced. Ran unraid-tunables-tester.sh in full auto and the speeds show roughly a 48% improvement (66.6 to 98.8 MB/s) AND use 7MB less RAM. I am a happy customer.

 

current run on ARC-1231ML

Completed: 2 Hrs 24 Min 48 Sec.

Best Bang for the Buck: Test 1 with a speed of 98.8 MB/s

     Tunable (md_num_stripes): 1408
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 512

These settings will consume 71MB of RAM on your hardware.


Unthrottled values for your server came from Test 40 with a speed of 90.9 MB/s

     Tunable (md_num_stripes): 1416
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 512

These settings will consume 71MB of RAM on your hardware.
This is -7MB less than your current utilization of 78MB.
NOTE: Adding additional drives will increase memory consumption

 

previous results on the AOC-SASLP-MV8...

 

Completed: 2 Hrs 14 Min 4 Sec.

Best Bang for the Buck: Test 2 with a speed of 66.6 MB/s

     Tunable (md_num_stripes): 1536
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 640

These settings will consume 78MB of RAM on your hardware.

Unthrottled values for your server came from Test 23 with a speed of 69.3 MB/s

     Tunable (md_num_stripes): 6024
     Tunable (md_write_limit): 2712
     Tunable (md_sync_window): 2712

These settings will consume 305MB of RAM on your hardware.
This is 240MB more than your current utilization of 65MB.
NOTE: Adding additional drives will increase memory consumption.

Link to comment

Awesome news!

 

Did you update the firmware to the latest?

 

Did you set the drives to spin down after an hour?

 

Are you an unmenu user? (If not, you might want to install it as I'm working on some enhancements for Areca controllers and can share with you to try them out).

Link to comment

Quoting: "Disk names do not survive a reboot. ... running commands from telnet works for now."

 

Hmmmm... not sure why that isn't working in the go file. Maybe put it after the load of emhttp?

 

And I see you are an unmenu user.

Link to comment

Quoting: "Did you update the firmware to the latest? ... Are you an unmenu user?"

 

I loaded the 1.49 BOOT & FIRM bins through the HTTP interface. Do I need to load the other two also (BIOS, MBR0)? I was hesitant since the upload only mentioned the 2 files, & once they were loaded, System Information inside the ARC BIOS showed the new firmware number for both.

 

Is the spindown setting in the BIOS? I did not change that but will if recommended.

Link to comment

Quoting: "I loaded the 1.49 BOOT & FIRM bins through the HTTP interface. Do I need to load the other two also (BIOS, MBR0)?"

 

Instructions I read said to update all 4 firmware files.

Link to comment

Firmware all updated, HDD timeout set to 60. Disk config is not surviving reboot. :( Current go file:

 

#!/bin/bash
# Start the Management Utility

/usr/local/sbin/emhttp &

cp /boot/60-persistent-storage.rules /lib/udev/rules.d
udevadm trigger

#  cd /boot/packages && find . -name '*.auto_install' -type f -print | sort | xargs -n1 sh -c 
/boot/unmenu/uu

cd /boot/packages && find . -name '*.auto_install' -type f -print | sort | xargs -n1 sh -c 

Link to comment

Going to try adding it at the end?

 

PS: okay, now it updates the configuration, but the array doesn't start on its own.

 

now current go file

 

#!/bin/bash
# Start the Management Utility

/usr/local/sbin/emhttp &

#  cd /boot/packages && find . -name '*.auto_install' -type f -print | sort | xargs -n1 sh -c 
/boot/unmenu/uu

cd /boot/packages && find . -name '*.auto_install' -type f -print | sort | xargs -n1 sh -c 

cp /boot/60-persistent-storage.rules /lib/udev/rules.d
udevadm trigger

Link to comment

Quoting: "now it updates the configuration but the array doesn't start on its own."

 

This is not something I tested. I just assumed that running those commands any time during the boot process would put the naming rules into effect.

 

I do not believe that the rules have anything to do with emhttp, or that they are impacted by the packages that get loaded based on the unmenu configuration. But I could be wrong. I am now wondering if moving the commands to the end works because the other activities add enough delay for something the OS is doing in the background to complete. I suggest adding a "sleep 30" (30 second delay) into the go file before running the copy and udevadm trigger commands, all before emhttp is loaded.

 

If that doesn't work, add the line "ls -l /dev/disk/by-id > /boot/debug.txt" right after the udevadm trigger command. The output of that command will let you know if the rules change took effect. Ultimately I'd want to know if the rules changed and then were reverted, or if the rules never changed.

 

If you continue to see the rules not taking effect, could you confirm that the updated rules file is, in fact, in the /lib/udev/rules.d directory, or whether it has been replaced with the original (or some other file) somehow? The command "cp /lib/udev/rules.d/60-persistent-storage.rules /boot/60rules.txt" will put the current file on your flash drive for comparison with bubbaQ's version.

 

I'll also play around on my array this weekend. But since I reboot so infrequently, and I never let the array start on its own, it's not a big deal for me.

Link to comment

Quoting the go file troubleshooting exchange above.

 

Just to be clear, the config DOES take effect. It's just that the array does not autostart. Like yours, reboots are infrequent here, and I can live with manually reviewing & starting the array if and when I do.

 

I will try to get this data for you, probably into next week, as I am replacing two drives this weekend, and with data rebuilds and parity checks it's going to be occupied for a while.

Link to comment

udev hasn't settled yet when you start emhttp. Put a sleep 5 between the trigger and starting emhttp.

 

added to go. will test next reboot :) Thanks for the help.

 

#!/bin/bash
# Start the Management Utility

cp /boot/60-persistent-storage.rules /lib/udev/rules.d
udevadm trigger

sleep 5

/usr/local/sbin/emhttp &

/boot/unmenu/uu

cd /boot/packages && find . -name '*.auto_install' -type f -print | sort | xargs -n1 sh -c 

Link to comment

Quoting: "udev hasn't settled yet when you start emhttp. Put a sleep 5 between the trigger and starting emhttp."

 

sleep 5 had no effect. Had to run the commands manually through telnet after reboot.

Link to comment

No change. Boots up to "invalid configuration" with drive labels garbled. Running the commands via telnet fixes it.

 

Go file:

#!/bin/bash
# Start the Management Utility

cp /boot/60-persistent-storage.rules /lib/udev/rules.d
udevadm trigger

sleep 30

/usr/local/sbin/emhttp &

/boot/unmenu/uu

cd /boot/packages && find . -name '*.auto_install' -type f -print | sort | xargs -n1 sh -c 

Link to comment
