unRAID Server release 4.5 "final" Available


limetech


unRAID Server 4.5.1 Release Notes

=================================

 

Changes from 4.5 to 4.5.1

-------------------------

 

Bug fixes:

- Fix JavaScript bug when checking valid settings on the Settings page.

- Fix bug where a disk can appear 'Unformatted' immediately after array Start.

- Increase unmount polling from 1 second to 5 seconds when Stopping the array while external extensions still have a disk or user share mounted.

 

Other:

- Updated linux kernel to version 2.6.31.12

- Updated Samba to version 3.4.5

- Added Areca driver.

- Added Marvell legacy PATA support.

- Added USB printer support.

 

Link to comment

- Updated linux kernel to version 2.6.31.12

 

For anyone trying to get unRAID working under the 2.6.32 series of kernels, there's been a bit of rework related to barriers, which causes a compile issue in the main unRAID driver (md.c). I haven't fully followed the Linux kernel changes on this, so I hit a bit of a speedbump in upgrading my system. I'll update as I find more info.

 

 

Link to comment

I found the same issue, and was really bummed, since I wanted to test the new flushing system.

 

The new flushing system in 2.6.32 does away with the pdflush threads and uses a dedicated kernel thread to flush the dirty memory of each storage device. Instead of one thread per CPU, you have a dedicated thread per device that can keep the device busier, and hopefully at a performance level closer to its potential. This could make a significant impact on unRAID write performance.
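For context (not from the post above): whichever threads do the flushing, the point at which dirty pages start being written back is governed by the same long-standing vm sysctls. A quick way to see the thresholds on a running system:

```shell
# Print the writeback thresholds (percent of memory) that decide when dirty
# pages get flushed to disk. These sysctls predate 2.6.32 and still drive
# the new per-device flusher threads.
grep -H . /proc/sys/vm/dirty_background_ratio /proc/sys/vm/dirty_ratio
```

Raising these lets more data buffer in RAM before writeback kicks in; lowering them smooths out write bursts.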

 

 

Link to comment

After I finish running a parity check, I'm going to be stupid and try out unRAID under the 2.6.32 series. I didn't do an extremely deep analysis of the changes the standard multi-disk drivers went through from 2.6.31 to 2.6.32, but I did find what seemed to be simple changes in bio.h.

 

removed in bio.h, under Linux 2.6.32:

/*
* Old defines, these should eventually be replaced by direct usage of
* bio_rw_flagged()
*/
#define bio_barrier(bio)       bio_rw_flagged(bio, BIO_RW_BARRIER)

 

So I modified unRAID 4.5.1's unraid.c, around line 1300, to be:

        /* barriers not supported */
        //      if (bio_barrier( bi)) {
        if (bio_rw_flagged( bi, BIO_RW_BARRIER)) {
                bio_endio( bi, -EOPNOTSUPP);
                return 0;
        }

 

This single change allowed the kernel and driver to compile. However, I'm not certain that is all that needs to be changed. I'm not familiar enough with the standard multi-disk drivers to know quickly whether the kernel changes were made for performance's sake (multiple barriers used instead of a single one) or out of necessity.
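One quick way to tell which side of the change a given kernel tree is on is to grep its headers for the removed macro. This is a sketch, not from the post; the KSRC path is an example:

```shell
# Probe a kernel source tree for the bio_barrier() macro, which was present
# through 2.6.31 and removed in 2.6.32. KSRC is an example path.
KSRC="${KSRC:-/usr/src/linux}"
if grep -q 'bio_barrier' "$KSRC/include/linux/bio.h" 2>/dev/null; then
    echo "bio_barrier present: driver should build unmodified"
else
    echo "bio_barrier removed: use bio_rw_flagged(bi, BIO_RW_BARRIER) instead"
fi
```

If the macro is gone, the one-line substitution shown above is the compatible spelling of the same check.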

 

If you try it out on a spare unraid system or under a dev instance, share the results.

Link to comment

NAS, I wouldn't say what I or other advanced users are trying to do falls into the realm of problems. We're pushing the envelope beyond what the software solution supports.

 

In that regard, I now have unRAID 4.5.1 installed on my Slackware-current system running kernel 2.6.32.7.

 

/var/log/messages:

Jan 31 13:50:37 Reaver emhttp: unRAID System Management Utility version 4.5.1
Jan 31 13:50:37 Reaver emhttp: Copyright (C) 2005-2009, Lime Technology, LLC
Jan 31 13:50:37 Reaver emhttp: Unregistered
Jan 31 13:50:37 Reaver emhttp: Device inventory:
Jan 31 13:50:37 Reaver emhttp: pci-0000:00:1f.2-scsi-0:0:0:0 host0 (sda) WDC_WD1001FALS-00J7B0_WD-WMATV1120303
Jan 31 13:50:37 Reaver emhttp: pci-0000:00:1f.2-scsi-1:0:0:0 host1 (sdb) WDC_WD20EADS-00R6B0_WD-WCAVY0211284
Jan 31 13:50:37 Reaver emhttp: pci-0000:00:1f.2-scsi-4:0:0:0 host4 (sdc) WDC_WD20EADS-00R6B0_WD-WCAVY0247937
Jan 31 13:50:37 Reaver emhttp: pci-0000:00:1f.2-scsi-5:0:0:0 host5 (sdd) WDC_WD20EADS-00R6B0_WD-WCAVY0252670
Jan 31 13:50:37 Reaver emhttp: pci-0000:03:00.0-ide-0:0 ide0 (hda) no id
Jan 31 13:50:37 Reaver emhttp: shcmd (1): rmmod md-mod >>/var/log/go 2>&1
Jan 31 13:50:37 Reaver /usr/sbin/gpm[3156]: *** info [mice.c(1766)]:
Jan 31 13:50:37 Reaver /usr/sbin/gpm[3156]: imps2: Auto-detected intellimouse PS/2
Jan 31 13:50:37 Reaver emhttp: shcmd (2): modprobe md-mod super=/boot/config/super.dat slots=8,16,8,32,8,48 >>/var/log/go 2>&1
Jan 31 13:50:37 Reaver kernel: xor: automatically using best checksumming function: pIII_sse
Jan 31 13:50:37 Reaver kernel:    pIII_sse  :  9084.000 MB/sec
Jan 31 13:50:37 Reaver kernel: xor: using function: pIII_sse (9084.000 MB/sec)
Jan 31 13:50:37 Reaver emhttp: Spinning up all drives...
Jan 31 13:50:37 Reaver emhttp: shcmd (3): mkdir /mnt/disk1
Jan 31 13:50:37 Reaver emhttp: shcmd (3): mkdir /mnt/disk2
Jan 31 13:50:37 Reaver emhttp: shcmd (4): mount -t reiserfs -o noacl,nouser_xattr,noatime,nodiratime /dev/md1 /mnt/disk1  >/dev/null 2>&1
Jan 31 13:50:37 Reaver kernel: REISERFS (device md1): found reiserfs format "3.6" with standard journal
Jan 31 13:50:37 Reaver kernel: REISERFS (device md1): using ordered data mode
Jan 31 13:50:37 Reaver emhttp: shcmd (5): mount -t reiserfs -o noacl,nouser_xattr,noatime,nodiratime /dev/md2 /mnt/disk2  >/dev/null 2>&1
Jan 31 13:50:37 Reaver kernel: REISERFS (device md1): journal params: device md1, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
Jan 31 13:50:37 Reaver kernel: REISERFS (device md1): checking transaction log (md1)
Jan 31 13:50:37 Reaver kernel: REISERFS (device md2): found reiserfs format "3.6" with standard journal
Jan 31 13:50:37 Reaver kernel: REISERFS (device md2): using ordered data mode
Jan 31 13:50:37 Reaver kernel: REISERFS (device md2): journal params: device md2, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
Jan 31 13:50:37 Reaver kernel: REISERFS (device md2): checking transaction log (md2)
Jan 31 13:50:37 Reaver kernel: REISERFS (device md1): Using r5 hash to sort names
Jan 31 13:50:37 Reaver kernel: REISERFS (device md2): Using r5 hash to sort names
Jan 31 13:50:37 Reaver emhttp: shcmd (7): rm /etc/samba/smb-shares.conf >/dev/null 2>&1
Jan 31 13:50:37 Reaver emhttp: shcmd (8): cp /etc/exports- /etc/exports
Jan 31 13:50:37 Reaver emhttp: shcmd (9): mkdir /mnt/user
Jan 31 13:50:37 Reaver emhttp: shcmd (10): /usr/local/sbin/shfs /mnt/user  -o noatime,big_writes,allow_other,default_permissions
Jan 31 13:50:38 Reaver emhttp: shcmd (11): killall -HUP smbd
Jan 31 13:50:38 Reaver emhttp: shcmd (12): /etc/rc.d/rc.nfsd restart | logger

 

/var/log/syslog:

Jan 31 13:50:37 Reaver emhttp: _shcmd: shcmd (1): exit status: 1
Jan 31 13:50:37 Reaver kernel: md: unRAID driver 0.95.4 installed
Jan 31 13:50:37 Reaver kernel: md: import disk0: [8,16] (sdb) WDC WD20EADS-00R WD-WCAVY0211284 offset: 63 size: 1953514552
Jan 31 13:50:37 Reaver kernel: md: import disk1: [8,32] (sdc) WDC WD20EADS-00R WD-WCAVY0247937 offset: 63 size: 1953514552
Jan 31 13:50:37 Reaver kernel: md: import disk2: [8,48] (sdd) WDC WD20EADS-00R WD-WCAVY0252670 offset: 63 size: 1953514552
Jan 31 13:50:37 Reaver kernel: mdcmd (2): set md_num_stripes 2784
Jan 31 13:50:37 Reaver kernel: mdcmd (3): set md_write_limit 1536
Jan 31 13:50:37 Reaver kernel: mdcmd (4): set md_sync_window 576
Jan 31 13:50:37 Reaver kernel: mdcmd (5): set spinup_group 0 0
Jan 31 13:50:37 Reaver kernel: mdcmd (6): set spinup_group 1 0
Jan 31 13:50:37 Reaver kernel: mdcmd (7): set spinup_group 2 0
Jan 31 13:50:37 Reaver kernel: mdcmd (8): spinup 0
Jan 31 13:50:37 Reaver kernel: mdcmd (9): spinup 1
Jan 31 13:50:37 Reaver kernel: mdcmd (10): spinup 2
Jan 31 13:50:37 Reaver kernel: mdcmd (12): start STOPPED
Jan 31 13:50:37 Reaver kernel: unraid: allocating 39628K for 2784 stripes (3 disks)
Jan 31 13:50:37 Reaver kernel: md1: running, size: 1953514552 blocks
Jan 31 13:50:37 Reaver kernel: md2: running, size: 1953514552 blocks
Jan 31 13:50:37 Reaver kernel: mdcmd (14): check
Jan 31 13:50:37 Reaver kernel: md: recovery thread woken up ...
Jan 31 13:50:37 Reaver kernel: md: recovery thread has nothing to resync

 

Link to comment

I'm in the process of doing a read-only parity check, so that will run the rest of today. At 46% done it's estimating 215+ minutes left, and increasing as it moves onto the slower portion of the disks. I didn't grab benchmarks under 2.6.31 before I upgraded to 2.6.32, mostly because I didn't expect 2.6.32 to work on the first go. I'll grab benchmarks tomorrow.

Link to comment

NAS, I wouldn't say what I or other advanced users are trying to do falls into the realm of problems.

 

It's not really the specifics, it's the release principle. unRAID has historically been all about slow, steady and stable, then... pop, a stable gets released with no community testing at all. By definition it shouldn't be called a stable.

 

Doesn't really affect me, as I am treating it as a beta, but I am here all the time and know to treat it this way.

 

Anyways food for thought.

Link to comment

I went ahead and built a new 2.6.32.7 kernel to test the new kernel buffer flushing system (no more pdflush).

 

Results were fantastic. 

 

- The "spikiness" when writing to unRAID over the LAN is gone.

- Writing over the LAN was 20% to 25% faster.

- Raw write speed using dd was 10% to 15% faster, with better CPU utilization (higher IO Waits, but also higher system time)

 

This post has a graph showing before and after:

 

    http://lime-technology.com/forum/index.php?topic=5146.msg48413#msg48413
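The exact dd command wasn't posted; a raw sequential-write test of this kind typically looks like the sketch below (the target path and sizes are examples, not from the post):

```shell
# Time a sequential write and force it to disk with fdatasync so the page
# cache doesn't flatter the number. TESTFILE is an example path; on a real
# array you would point it at a data disk, e.g. /mnt/disk1/testfile.
TESTFILE="${TESTFILE:-/tmp/dd-writetest.bin}"
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$TESTFILE"
```

For a meaningful number the write size should comfortably exceed RAM; otherwise the cache absorbs most of the write and the reported rate is optimistic.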

Link to comment

 

He mentioned he was gonna put in 24-disk support in 4.5.1. I guess he didn't ;(

 

Then he better put in 10 disk support in the Plus version while he's at it!  :)

 

Seriously, Pro used to be 12 disks, then 14 disks, then 18, then 20, now 24...

Plus is still stuck at 5 data disks.  He could show a little generosity here.

 

The Plus people are not the enemy, you know. They are still paying customers.

And the 70 bucks for the Plus license is not exactly pocket change.

 

Link to comment

Then he better put in 10 disk support in the Plus version while he's at it!  :)

 

Seriously, Pro used to be 14 disks, then 18, then 20, now 24...

Plus is still stuck at 5 data disks.  He could show a little generosity here.

 

The Plus people are not the enemy, you know. They are still paying customers.

And the 70 bucks for the Plus license is not exactly pocket change.

yeah, what he said.  (shameless Plus user bump  8) ).
Link to comment

 

He mentioned he was gonna put in 24-disk support in 4.5.1. I guess he didn't ;(

 

Then he better put in 10 disk support in the Plus version while he's at it!  :)

 

 

Sorry, but I don't see the need to upgrade the number of disks supported on the Plus version.

 

I can see the reason for adding the cache disk, as that provides a benefit to any system (whether it's really necessary with the new kernel is a separate question), and I can see the need to increase disk support in the top-of-the-range system as hardware and kernel changes make it possible to add more disks to the array.

 

I just cannot see the reason for increasing the number of disks in the Plus edition. The only benefit is to save money for people who would otherwise be paying for an upgrade.

Link to comment

I disagree. What about people buying a motherboard with no plans to ever use any extra cards or port multipliers? 6-8 SATA ports is the general range for most motherboards suited to this task, but people have to buy a license that supports 20 disks they will never use, or choose not to utilise some of their ports.

 

There is a definite need for either a new version or a Plus disk count upgrade.

Link to comment

I disagree. What about people buying a motherboard with no plans to ever use any extra cards or port multipliers? 6-8 SATA ports is the general range for most motherboards suited to this task, but people have to buy a license that supports 20 disks they will never use, or choose not to utilise some of their ports.

 

There is a definite need for either a new version or a Plus disk count upgrade.

 

But it already does that. I have a server with 6 SATA ports. Connected to it are:

1 parity disk

5 data disks.

 

To add a cache disk I need to either add an extender card or use an IDE disk as cache. I can see there is still expandability on motherboards with 8 SATA ports, but given the cost of those motherboards I don't think $30 on a Pro licence is much of a difference.

Link to comment
I don't think $30 on a Pro licence is much of a difference.
FYI: $119 (Pro) - $69 (Plus) = $50. A Pro upgrade from Plus = $59 ($49 with the current coupon). Show me a $30 upgrade and I will show you $30 leaving my wallet. $30 each only applies if you buy 2 keys... most only have one.

 

While I think it's great a cache drive was added to Plus, I'd rather have an extra data drive than a cache drive. Writes just aren't that painful, and my system writes slower than many.

 

(Note: I am not complaining about the cost of unRAID. While I don't think it's cheap, I don't think it's expensive either. I'm just pointing out that $30 is nowhere in the equation for a single-key Pro upgrade.)

Link to comment

I agree no one is complaining that unRAID is too expensive, but let's keep this in perspective. A single unRAID Pro key costs the same as TWO... yes, two copies of "Microsoft Windows Home Server w/Power Pack 1 - Licence and media - 1 server, 10 CALs - OEM - CD/DVD - 32-bit - English" over here in the EU. Anyway, we go OT.

Link to comment

Please take the "cost complaint/analysis" to another thread, perhaps in the lounge or address it with limetech directly. Tom was very direct in posting the rules for the announcement board.

 

The Announcement Board is used to announce the availability of new unRAID OS releases, and the reporting and discussion of bugs and/or features specific to the announced release only.  Any off-topic posts will be deleted without warning.

 

http://lime-technology.com/forum/index.php?topic=4640.0

Link to comment

To be fair, he also said everything to do with a release should be in the release thread. You are obviously correct, but everyone's following the rules; the rules just overlap, as all catch-all rules do.

 

Again also to be fair feature suggestions and Lounge threads almost never get Limetech responses.

 

I agree with both sides; Tom can decide with his delete button.

Link to comment
