unRAID Server Release 5.0-beta6 Available



Download

 

Please read: There's a bug that started with 5.0-beta4 where the driver is writing the 'config/super.dat' file in the wrong format.  If you are running -beta4, -beta5, -beta5a, or -beta5b, you must delete the file 'config/super.dat' before booting 5.0-beta6.  This means you must re-assign all your hard drives again (sorry).  If you are coming from 4.7, you will not need to do this.
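
For reference, deleting it from a console session looks like this (the flash drive is mounted at /boot on a running server, as the syslogs below also show; if you are doing it over the network instead, delete config/super.dat on the flash share):

rm /boot/config/super.dat    # stale superblock file; unRAID rebuilds it after you re-assign drives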

 

Please read the Release Notes located on the unRAID wiki.

 

In particular, uninstall or disable any 3rd party add-ons until the add-on author has verified correct operation with this release.

Link to comment

Small question about your installation notes.

 

It says:

 

  1. Prepare the flash: either shutdown your server and plug the flash into your PC or Stop the array and perform the following actions referencing the flash share on your network:

          * Copy the files bzimage and bzroot from the zip file to the root of your flash device, overwriting the same-named files already there.

          * Delete the file config/super.dat. You will need to re-assign all your hard drives.

  2. Start the array. All disks should mount and parity should be valid.

 

It doesn't mention a reboot if you are upgrading the flash from the network,

but I assume a reboot is always needed, as otherwise the new image will not be used.

 

For me, a reboot was always needed after upgrading the image;

only starting the array never brought up the new image, which is normal, I think?

 

I just think it might confuse people who are NEW to unRAID/Linux.

Link to comment

small question about your installation notes ... it doesn't mention a reboot if you are upgrading the flash from the network

Good point, I'll revise the instructions.
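
In the meantime, for anyone upgrading over the network, here is a minimal sketch of the sequence with the reboot made explicit (the source paths are illustrative; adjust to wherever you unzipped the release):

cp /path/to/unzipped/bzimage /boot/    # overwrite the old kernel image on the flash
cp /path/to/unzipped/bzroot /boot/     # overwrite the old root image
rm /boot/config/super.dat              # forces re-assignment of all drives (this release only)
reboot                                 # the new bzimage/bzroot are only loaded at boot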

Link to comment

Please read: There's a bug that started with 5.0-beta4 where the driver is writing the 'config/super.dat' file in the wrong format. ... you must delete the file 'config/super.dat' before booting 5.0-beta6.

 

I updated the Linux kernel unRAID drivers along with the other changed files, deleted the super.dat, and rebooted. After assigning the existing drives to the array and clicking START, 3 of 5 drives show up as UNFORMATTED. Stopping and restarting the array did not help.

 

I will try rebooting into a pure unRAID 5.0b6 environment to see if it's any different.

 

Mar  1 04:32:15 reaver kernel: read_file: error 2 opening /boot/config/super.dat

Mar  1 04:32:15 reaver kernel: md: could not read superblock from /boot/config/super.dat

Mar  1 04:32:15 reaver kernel: md: initializing superblock

Mar  1 04:32:15 reaver kernel: mdcmd (1): import 0 8,16 1953514552 ST32000542AS_6XW1PDHJ

Mar  1 04:32:15 reaver kernel: md: import disk0: [8,16] (sdb) ST32000542AS_6XW1PDHJ size: 1953514552

Mar  1 04:32:15 reaver kernel: md: disk0 new disk

Mar  1 04:32:15 reaver kernel: mdcmd (2): import 1 8,32 1953514552 WDC_WD20EADS-00R6B0_WD-WCAVY0247937

Mar  1 04:32:15 reaver kernel: md: import disk1: [8,32] (sdc) WDC_WD20EADS-00R6B0_WD-WCAVY0247937 size: 1953514552

Mar  1 04:32:15 reaver kernel: md: disk1 new disk

Mar  1 04:32:15 reaver kernel: mdcmd (3): import 2 8,48 1953514552 WDC_WD20EADS-00R6B0_WD-WCAVY0252670

Mar  1 04:32:15 reaver kernel: md: import disk2: [8,48] (sdd) WDC_WD20EADS-00R6B0_WD-WCAVY0252670 size: 1953514552

Mar  1 04:32:15 reaver kernel: md: disk2 new disk

Mar  1 04:32:15 reaver kernel: mdcmd (4): import 3 8,64 1953514552 WDC_WD20EADS-00R6B0_WD-WCAVY0211284

Mar  1 04:32:15 reaver kernel: md: import disk3: [8,64] (sde) WDC_WD20EADS-00R6B0_WD-WCAVY0211284 size: 1953514552

Mar  1 04:32:15 reaver kernel: md: disk3 new disk

Mar  1 04:32:15 reaver kernel: mdcmd (5): import 4 8,96 1953514552 ST32000542AS_5XW1J686

Mar  1 04:32:15 reaver kernel: md: import disk4: [8,96] (sdg) ST32000542AS_5XW1J686 size: 1953514552

Mar  1 04:32:15 reaver kernel: md: disk4 new disk

Mar  1 04:32:15 reaver kernel: mdcmd (6): import 5 8,112 1953514552 ST32000542AS_6XW0DCJ4

Mar  1 04:32:15 reaver kernel: md: import disk5: [8,112] (sdh) ST32000542AS_6XW0DCJ4 size: 1953514552

Mar  1 04:32:15 reaver kernel: md: disk5 new disk

Mar  1 04:32:15 reaver kernel: mdcmd (7): import 6 0,0

Mar  1 04:32:15 reaver kernel: mdcmd (8): import 7 0,0

Mar  1 04:32:15 reaver kernel: mdcmd (9): import 8 0,0

Mar  1 04:32:15 reaver kernel: mdcmd (10): import 9 0,0

Mar  1 04:32:15 reaver kernel: mdcmd (11): import 10 0,0

Mar  1 04:32:15 reaver kernel: mdcmd (12): import 11 0,0

Mar  1 04:32:15 reaver kernel: mdcmd (13): import 12 0,0

Mar  1 04:32:15 reaver kernel: mdcmd (14): import 13 0,0

Mar  1 04:32:15 reaver kernel: mdcmd (15): import 14 0,0

Mar  1 04:32:15 reaver kernel: mdcmd (16): import 15 0,0

Mar  1 04:32:15 reaver kernel: mdcmd (17): import 16 0,0

Mar  1 04:32:15 reaver kernel: mdcmd (18): import 17 0,0

Mar  1 04:32:15 reaver kernel: mdcmd (19): import 18 0,0

Mar  1 04:32:15 reaver kernel: mdcmd (20): import 19 0,0

Mar  1 04:32:15 reaver kernel: mdcmd (21): import 20 0,0

Mar  1 04:32:15 reaver kernel: mdcmd (22): set md_num_stripes 3840

Mar  1 04:32:15 reaver kernel: mdcmd (23): set md_write_limit 2304

Mar  1 04:32:15 reaver kernel: mdcmd (24): set md_sync_window 864

Mar  1 04:32:15 reaver kernel: mdcmd (25): set spinup_group 0 0

Mar  1 04:32:15 reaver kernel: mdcmd (26): set spinup_group 1 0

Mar  1 04:32:15 reaver kernel: mdcmd (27): set spinup_group 2 0

Mar  1 04:32:15 reaver kernel: mdcmd (28): set spinup_group 3 0

Mar  1 04:32:15 reaver kernel: mdcmd (29): set spinup_group 4 0

Mar  1 04:32:15 reaver kernel: mdcmd (30): set spinup_group 5 0

Mar  1 04:32:15 reaver kernel: mdcmd (31): spinup 0

Mar  1 04:32:15 reaver kernel: mdcmd (32): spinup 1

Mar  1 04:32:15 reaver kernel: mdcmd (33): spinup 2

Mar  1 04:32:15 reaver kernel: mdcmd (34): spinup 3

Mar  1 04:32:15 reaver kernel: mdcmd (35): spinup 4

Mar  1 04:32:15 reaver kernel: md: disk4: ATA_OP_SETIDLE1 ioctl error: -22

Mar  1 04:32:15 reaver kernel: mdcmd (36): spinup 5

Mar  1 04:32:15 reaver kernel: md: disk5: ATA_OP_SETIDLE1 ioctl error: -22

Mar  1 04:32:21 reaver kernel: mdcmd (37): start NEW_ARRAY

Mar  1 04:32:21 reaver kernel: unraid: allocating 106320K for 3840 stripes (6 disks)

Mar  1 04:32:21 reaver kernel: md1: running, size: 1953514552 blocks

Mar  1 04:32:21 reaver kernel: md2: running, size: 1953514552 blocks

Mar  1 04:32:21 reaver kernel: md3: running, size: 1953514552 blocks

Mar  1 04:32:21 reaver kernel: md4: running, size: 1953514552 blocks

Mar  1 04:32:21 reaver kernel: md5: running, size: 1953514552 blocks

Mar  1 04:32:21 reaver kernel: mdcmd (38): check NOCORRECT

Mar  1 04:32:21 reaver kernel: md: recovery thread woken up ...

Mar  1 04:32:21 reaver kernel: md: recovery thread syncing parity disk ...

Mar  1 04:32:21 reaver kernel: md: using 3456k window, over a total of 1953514552 blocks.

Mar  1 04:32:21 reaver emhttp: _shcmd: shcmd (62): exit status: 32

Mar  1 04:32:21 reaver kernel: REISERFS warning (device md1): sh-2021 reiserfs_fill_super: can not find reiserfs on md1

Mar  1 04:32:21 reaver kernel: REISERFS warning (device md2): sh-2021 reiserfs_fill_super: can not find reiserfs on md2

Mar  1 04:32:21 reaver emhttp: _shcmd: shcmd (60): exit status: 32

Mar  1 04:32:21 reaver kernel: REISERFS warning (device md3): sh-2021 reiserfs_fill_super: can not find reiserfs on md3

Mar  1 04:32:21 reaver emhttp: _shcmd: shcmd (61): exit status: 32

Mar  1 04:32:22 reaver kernel: can't shrink filesystem on-line

Mar  1 04:32:22 reaver kernel: can't shrink filesystem on-line

Mar  1 04:32:22 reaver emhttp: _shcmd: shcmd (80): exit status: 1

Mar  1 04:33:21 reaver kernel: mdcmd (39): nocheck

Mar  1 04:33:21 reaver kernel: md: md_do_sync: got signal, exit...

Mar  1 04:33:21 reaver kernel: md: recovery thread sync completion status: -4

Mar  1 04:33:31 reaver kernel: mdcmd (40): spinup 0

Mar  1 04:33:31 reaver kernel: mdcmd (41): spinup 1

Mar  1 04:33:31 reaver kernel: mdcmd (42): spinup 2

Mar  1 04:33:31 reaver kernel: mdcmd (43): spinup 3

Mar  1 04:33:31 reaver kernel: mdcmd (44): spinup 4

Mar  1 04:33:31 reaver kernel: md: disk4: ATA_OP_SETIDLE1 ioctl error: -22

Mar  1 04:33:31 reaver kernel: mdcmd (45): spinup 5

Mar  1 04:33:31 reaver kernel: md: disk5: ATA_OP_SETIDLE1 ioctl error: -22

Mar  1 04:33:32 reaver kernel: mdcmd (46): stop

Mar  1 04:33:32 reaver kernel: md1: stopping

Mar  1 04:33:32 reaver kernel: md2: stopping

Mar  1 04:33:32 reaver kernel: md3: stopping

Mar  1 04:33:32 reaver kernel: md4: stopping

Mar  1 04:33:32 reaver kernel: md5: stopping

Mar  1 04:33:32 reaver kernel: md: unRAID driver removed

Mar  1 04:33:32 reaver kernel: md: unRAID driver 2.1.0 installed

Mar  1 04:33:32 reaver kernel: mdcmd (1): import 0 8,16 1953514552 ST32000542AS_6XW1PDHJ

Mar  1 04:33:32 reaver kernel: md: import disk0: [8,16] (sdb) ST32000542AS_6XW1PDHJ size: 1953514552

Mar  1 04:33:32 reaver kernel: mdcmd (2): import 1 8,32 1953514552 WDC_WD20EADS-00R6B0_WD-WCAVY0247937

Mar  1 04:33:32 reaver kernel: md: import disk1: [8,32] (sdc) WDC_WD20EADS-00R6B0_WD-WCAVY0247937 size: 1953514552

Mar  1 04:33:32 reaver kernel: mdcmd (3): import 2 8,48 1953514552 WDC_WD20EADS-00R6B0_WD-WCAVY0252670

Mar  1 04:33:32 reaver kernel: md: import disk2: [8,48] (sdd) WDC_WD20EADS-00R6B0_WD-WCAVY0252670 size: 1953514552

Mar  1 04:33:32 reaver kernel: mdcmd (4): import 3 8,64 1953514552 WDC_WD20EADS-00R6B0_WD-WCAVY0211284

Mar  1 04:33:32 reaver kernel: md: import disk3: [8,64] (sde) WDC_WD20EADS-00R6B0_WD-WCAVY0211284 size: 1953514552

Mar  1 04:33:32 reaver kernel: mdcmd (5): import 4 8,96 1953514552 ST32000542AS_5XW1J686

Mar  1 04:33:32 reaver kernel: md: import disk4: [8,96] (sdg) ST32000542AS_5XW1J686 size: 1953514552

Mar  1 04:33:32 reaver kernel: mdcmd (6): import 5 8,112 1953514552 ST32000542AS_6XW0DCJ4

Mar  1 04:33:32 reaver kernel: md: import disk5: [8,112] (sdh) ST32000542AS_6XW0DCJ4 size: 1953514552

Mar  1 04:33:32 reaver kernel: mdcmd (7): import 6 0,0

Mar  1 04:33:32 reaver kernel: mdcmd (8): import 7 0,0

Mar  1 04:33:32 reaver kernel: mdcmd (9): import 8 0,0

Mar  1 04:33:32 reaver kernel: mdcmd (10): import 9 0,0

Mar  1 04:33:32 reaver kernel: mdcmd (11): import 10 0,0

Mar  1 04:33:32 reaver kernel: mdcmd (12): import 11 0,0

Mar  1 04:33:32 reaver kernel: mdcmd (13): import 12 0,0

Mar  1 04:33:32 reaver kernel: mdcmd (14): import 13 0,0

Mar  1 04:33:32 reaver kernel: mdcmd (15): import 14 0,0

Mar  1 04:33:32 reaver kernel: mdcmd (16): import 15 0,0

Mar  1 04:33:32 reaver kernel: mdcmd (17): import 16 0,0

Mar  1 04:33:32 reaver kernel: mdcmd (18): import 17 0,0

Mar  1 04:33:32 reaver kernel: mdcmd (19): import 18 0,0

Mar  1 04:33:32 reaver kernel: mdcmd (20): import 19 0,0

Mar  1 04:33:32 reaver kernel: mdcmd (21): import 20 0,0

Mar  1 04:33:32 reaver kernel: mdcmd (22): set md_num_stripes 3840

Mar  1 04:33:32 reaver kernel: mdcmd (23): set md_write_limit 2304

Mar  1 04:33:32 reaver kernel: mdcmd (24): set md_sync_window 864

Mar  1 04:33:32 reaver kernel: mdcmd (25): set spinup_group 0 0

Mar  1 04:33:32 reaver kernel: mdcmd (26): set spinup_group 1 0

Mar  1 04:33:32 reaver kernel: mdcmd (27): set spinup_group 2 0

Mar  1 04:33:32 reaver kernel: mdcmd (28): set spinup_group 3 0

Mar  1 04:33:32 reaver kernel: mdcmd (29): set spinup_group 4 0

Mar  1 04:33:32 reaver kernel: mdcmd (30): set spinup_group 5 0

Mar  1 04:33:32 reaver kernel: md: unRAID driver removed

Link to comment

In a pure unRAID 5.0b6 environment same results, 3 of the 5 disks show up as unformatted:

 

Mar  1 04:52:22 Reaver emhttp: Mounting disks...

Mar  1 04:52:22 Reaver emhttp: shcmd (50): mkdir /mnt/disk1

Mar  1 04:52:22 Reaver emhttp: shcmd (52): mkdir /mnt/disk4

Mar  1 04:52:22 Reaver emhttp: shcmd (51): mkdir /mnt/disk2

Mar  1 04:52:22 Reaver emhttp: shcmd (53): mkdir /mnt/disk3

Mar  1 04:52:22 Reaver emhttp: shcmd (54): mkdir /mnt/disk5

Mar  1 04:52:22 Reaver emhttp: shcmd (55): set -o pipefail ; mount -t reiserfs -o noatime,nodiratime /dev/md1 /mnt/disk1 2>&1 |logger

Mar  1 04:52:22 Reaver emhttp: shcmd (56): set -o pipefail ; mount -t reiserfs -o noatime,nodiratime /dev/md4 /mnt/disk4 2>&1 |logger

Mar  1 04:52:22 Reaver emhttp: shcmd (58): set -o pipefail ; mount -t reiserfs -o noatime,nodiratime /dev/md2 /mnt/disk2 2>&1 |logger

Mar  1 04:52:22 Reaver emhttp: shcmd (57): set -o pipefail ; mount -t reiserfs -o noatime,nodiratime /dev/md3 /mnt/disk3 2>&1 |logger

Mar  1 04:52:22 Reaver emhttp: shcmd (59): set -o pipefail ; mount -t reiserfs -o noatime,nodiratime /dev/md5 /mnt/disk5 2>&1 |logger

Mar  1 04:52:22 Reaver logger: mount: wrong fs type, bad option, bad superblock on /dev/md3,

Mar  1 04:52:22 Reaver logger: mount: wrong fs type, bad option, bad superblock on /dev/md1,

Mar  1 04:52:22 Reaver logger:        missing codepage or helper program, or other error

Mar  1 04:52:22 Reaver logger:        In some cases useful info is found in syslog - try

Mar  1 04:52:22 Reaver logger:        dmesg | tail  or so

Mar  1 04:52:22 Reaver logger:

Mar  1 04:52:22 Reaver logger:        missing codepage or helper program, or other error

Mar  1 04:52:22 Reaver logger:        In some cases useful info is found in syslog - try

Mar  1 04:52:22 Reaver logger:        dmesg | tail  or so

Mar  1 04:52:22 Reaver logger:

Mar  1 04:52:22 Reaver emhttp: _shcmd: shcmd (55): exit status: 32

Mar  1 04:52:22 Reaver emhttp: disk1 mount error: 32

Mar  1 04:52:22 Reaver emhttp: shcmd (60): rmdir /mnt/disk1

Mar  1 04:52:22 Reaver emhttp: _shcmd: shcmd (57): exit status: 32

Mar  1 04:52:22 Reaver emhttp: disk3 mount error: 32

Mar  1 04:52:22 Reaver emhttp: shcmd (61): rmdir /mnt/disk3

Mar  1 04:52:22 Reaver logger: mount: wrong fs type, bad option, bad superblock on /dev/md2,

Mar  1 04:52:22 Reaver logger:        missing codepage or helper program, or other error

Mar  1 04:52:22 Reaver logger:        In some cases useful info is found in syslog - try

Mar  1 04:52:22 Reaver logger:        dmesg | tail  or so

Mar  1 04:52:22 Reaver logger:

Mar  1 04:52:22 Reaver emhttp: _shcmd: shcmd (58): exit status: 32

Mar  1 04:52:22 Reaver emhttp: disk2 mount error: 32

Mar  1 04:52:22 Reaver emhttp: shcmd (62): rmdir /mnt/disk2

Mar  1 04:52:22 Reaver kernel: mdcmd (38): check NOCORRECT

Mar  1 04:52:22 Reaver kernel: md: recovery thread woken up ...

Mar  1 04:52:22 Reaver kernel: md: recovery thread syncing parity disk ...

Mar  1 04:52:22 Reaver kernel: md: using 3456k window, over a total of 1953514552 blocks.

Mar  1 04:52:22 Reaver kernel: REISERFS warning (device md3): sh-2021 reiserfs_fill_super: can not find reiserfs on md3

Mar  1 04:52:22 Reaver kernel: REISERFS warning (device md1): sh-2021 reiserfs_fill_super: can not find reiserfs on md1

Mar  1 04:52:22 Reaver kernel: REISERFS warning (device md2): sh-2021 reiserfs_fill_super: can not find reiserfs on md2

Mar  1 04:52:22 Reaver kernel: REISERFS (device md5): found reiserfs format "3.6" with standard journal

Mar  1 04:52:22 Reaver kernel: REISERFS (device md5): using ordered data mode

Mar  1 04:52:22 Reaver kernel: REISERFS (device md4): found reiserfs format "3.6" with standard journal

Mar  1 04:52:22 Reaver kernel: REISERFS (device md4): using ordered data mode

Mar  1 04:52:23 Reaver kernel: REISERFS (device md5): journal params: device md5, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30

Mar  1 04:52:23 Reaver kernel: REISERFS (device md5): checking transaction log (md5)

Link to comment

In a pure unRAID 5.0b6 environment same results, 3 of the 5 disks show up as unformatted ...

 

I don't see this happening with any of my test servers.

 

Some things to try:

 

- unassign parity disk and see what happens.  Perhaps some kind of interaction with parity sync, but nothing has changed in that area for a looong time.

 

- with array stopped, try to manually mount one of the failing disks:

 

mkdir /x

mount /dev/sdc1 /x

 

If no errors, spot check the disk, e.g.,

 

ls /x

 

You can un-mount using this:

 

umount /x    <-- "umount", not "unmount"

Link to comment

I'm now wondering if maybe something odd happened on my system right around the time of upgrading.

 

It seems only the disks that were on my motherboard controller were affected. They were Sector 63 aligned, but now they're showing as Sector 64 aligned! The output of fdisk -lu is identical for all 3 of those drives. I had unRAID configured with MBR 4K-aligned as the default value. The 2 drives on the LSI1068E were Sector 64 aligned and are still Sector 64 aligned. So somehow their partition tables got changed?

 

However the new output shows the following:

 

fdisk -lu /dev/sdc

 

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes

1 heads, 63 sectors/track, 62016336 cylinders, total 3907029168 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

  Device Boot      Start         End      Blocks   Id  System

/dev/sdc1              64  3907029167  1953514552   83  Linux

 

 

mkdir -p /tmp/x

 

mount /dev/sdc1 /tmp/x

mount: you must specify the filesystem type

 

dmesg:

REISERFS warning (device sdc1): sh-2021 reiserfs_fill_super: can not find reiserfs on sdc1

EXT3-fs (sdc1): error: can't find ext3 filesystem on dev sdc1.

FAT: bogus number of reserved sectors

VFS: Can't find a valid FAT filesystem on dev sdc1.

FAT: bogus number of reserved sectors

VFS: Can't find a valid FAT filesystem on dev sdc1.

NTFS-fs error (device sdc1): read_ntfs_boot_sector(): Primary boot sector is invalid.

NTFS-fs error (device sdc1): read_ntfs_boot_sector(): Mount option errors=recover not used. Aborting without trying to recover.

NTFS-fs error (device sdc1): ntfs_fill_super(): Not an NTFS volume.

VFS: Can't find a romfs filesystem on dev sdc1.

 

mount -t reiserfs /dev/sdc1 /tmp/x

mount: wrong fs type, bad option, bad superblock on /dev/sdc1,

      missing codepage or helper program, or other error

      In some cases useful info is found in syslog - try

      dmesg | tail  or so

 

dmesg:

REISERFS warning (device sdc1): sh-2021 reiserfs_fill_super: can not find reiserfs on sdc1

 

reiserfsck --check /dev/sdc1

 

Will read-only check consistency of the filesystem on /dev/sdc1

Will put log info to 'stdout'

 

Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes

 

reiserfs_open: the reiserfs superblock cannot be found on /dev/sdc1.

Failed to open the filesystem.

 

If the partition table has not been changed, and the partition is

valid  and  it really  contains  a reiserfs  partition,  then the

superblock  is corrupted and you need to run this utility with

--rebuild-sb.

Link to comment

I'm now wondering if maybe something odd happened on my system right around the time of upgrading. ... They were Sector 63 aligned, but now they're showing as Sector 64 aligned! ... reiserfs_open: the reiserfs superblock cannot be found on /dev/sdc1.

You can repair the MBR with a utility I wrote a while back.  It will create a correctly defined MBR for  a partition starting on sector 63.    The utility is attached here http://lime-technology.com/forum/index.php?topic=5072.msg47122#msg47122
Link to comment

You can repair the MBR with a utility I wrote a while back ...

 

Thank you for pointing that out. I remembered reading about it and was actually in the process of searching for it. I did have to fix 3 lines in it, changing the 'echo -ne' escapes to include the leading 0 (for octal notation), because of a difference in the echo command in the newer environment.

 

Reiserfsck is now running on /dev/sdc1.
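
For the curious, the difference is roughly this (illustrative lines only, not the utility's actual code): bash's builtin echo only interprets octal escapes written as \0NNN, while the external echo the script originally ran under also accepted \NNN.

echo -ne '\0252' | od -An -tx1    # bash builtin: emits the single byte aa
echo -ne '\252'  | od -An -tx1    # bash builtin: emits the literal four characters \252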

Link to comment

I'm now wondering if maybe something odd happened on my system right around the time of upgrading.

 

Would really like to see syslog from when you first booted -beta6.

 

Barring that, after your MBRs are restored, repeat the experiment, and if you get the same result, capture the system log.  Though nerve-wracking, as long as you don't Format the drive, the file system should remain intact.

 

Here's what happens in the code: when you click Start, it looks at the MBR of each disk marked "NEW" by the unraid driver. In this case, since you have to assign all the disks again, they will all be marked NEW.

 

When it looks at the MBR, it's checking for a specific unRAID signature, with partition 1 starting in either sector 63 or 64.  If neither signature is present, it's going to write the MBR.
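
As a rough illustration of that check (not the actual emhttp code): partition 1's entry in the MBR begins at byte offset 446, and its 4-byte little-endian starting-LBA field sits at offset 454, so on a little-endian box you can read a disk's start sector like this:

dd if=/dev/sdc bs=1 skip=454 count=4 2>/dev/null | od -An -tu4    # should print 63 or 64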

Link to comment

Thank you for pointing that out. ... I did have to fix 3 lines in it, changing the 'echo -ne' escapes to include the leading 0 ...

No problem.   I have fixed my local copy, and added the ability to deal with sector 64 partitions too.

 

It was the quickest way I knew to fix your MBR.

Link to comment

This is what's great about the community: being able to find help during critical times. I knew enough not to panic and do anything rash. I should have captured the full syslog at that time, though.

 

It's still there in the full Slackware Distro environment but separated into syslog and messages, so I can probably get the needed time range before trying the pure unRAID environment.

 

Here's my plan of attack:

0. Get the 3 drives' MBRs restored and working again.

1. Delete the config/super.dat file.

2. Shut down the system.

3. Unplug all but the parity drive and 1 Sector 63-aligned data drive.

4. Boot into the pure unRAID 5.0b6 environment.

5. Assign data drive 1.

6. Assign the parity drive.

7. Start the array.

8. Examine the webGui display.

9. Capture the full syslog and post it here if the drive shows as unformatted.

 

 

Step 0 is taking a while, as I'm running it 1 drive at a time and reiserfsck takes quite some time because of the number of files and nodes, but it's looking very promising. The first drive finished, and everything is fine after running the MBR fixer.
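
(A hypothetical way to script that step, one drive at a time; the device list and the fixer's filename are assumptions:)

for dev in sdc sdd sde; do
    sh ./mbr_fix.sh /dev/$dev                    # Joe L.'s MBR repair utility (filename assumed)
    echo Yes | reiserfsck --check /dev/${dev}1   # read-only check; feeds the Yes prompt
done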

 

reiserfsck --check started at Tue Mar  1 05:39:45 2011

###########

Replaying journal: Done.

Reiserfs journal '/dev/sdc1' in blocks [18..8211]: 0 transactions replayed

 

Checking internal tree.. finished

Comparing bitmaps..finished

Checking Semantic tree:

finished

No corruptions found

There are on the filesystem:

        Leaves 278807

        Internal nodes 1704

        Directories 162

        Other files 8069

        Data block pointers 281717918 (0 of them are zero)

        Safe links 0

###########

reiserfsck finished at Tue Mar  1 06:16:07 2011

###########

 

 

Link to comment

just booted into the new one, no issues

 

as expected I had to put the drives back in the drop-down boxes, and it seemed to take a bit longer for the array to start (could be my imagination :P ), but the time from clicking Start to returning to the normal menu seemed longer

 

if you want a syslog, Tom, I attached mine

 

it is running a parity check now

 

At the end of your syslog there is a media error being reported  :o

 

Also, there is a slight slowdown introduced in one of the -beta5's: the code now uses the 'smartctl' command to get the disk temperatures instead of doing a direct IOCTL to the drives.  This is considerably slower, and I will probably go back to the IOCTL method as soon as I get the ATA/SATA/SCSI ioctl methods sorted out.
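
(For reference, the smartctl approach amounts to something like this per disk; attribute naming varies by drive, and this is an illustration rather than the emhttp code:)

smartctl -A /dev/sdb | awk '/Temperature_Celsius/ {print $10}'    # raw value column holds degrees C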

Link to comment

At the end of your syslog there is a media error being reported  :o

Running a reiserfsck on that disk now ...

Not sure what a media error is?

I put a post in the general help section to ask what these errors are ...

 

Link to comment

Would really like to see syslog from when you first booted -beta6.

 

In the meantime, here are the pieces of my /var/log/syslog and /var/log/messages from the time range when I was running unRAID 5.0b6 on my Slackware Current distro. Apologies ahead of time for them being split into two files, as I know that makes it harder to see exactly what happened in what order, which is critical when troubleshooting this.

 

I do see it saying it was writing an MBR on Sector 64, but that might have been after a stop/start cycle, after I saw the webGui list the drives as unformatted; I cannot be certain.

 

Tomorrow (well, later today) I will attempt a fresh boot in a pure unRAID environment and limit my actions: assign 1 data drive first, assign the parity drive, start the array, and if the data drive shows as unformatted, do a single array stop and capture the syslog to post up here.

 

logs50b6.zip

Link to comment

After having my drives' MBRs repaired...

 

I then started emhttp with no config/super.dat, added the data drives to the array, then added the parity drive, then started the array. I resisted clicking 'refresh', navigating back to the 'Main' tab, stopping the array, or doing anything out of the ordinary.

 

I was also tailing the syslog and didn't see anything odd like the first go-around. The drives all show as mounted in the webGui and in 'df'. The parity is being calculated now.
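
(A quick way to spot-check that from the console; mount points follow the usual unRAID /mnt/diskN convention:)

df -h | grep /mnt/disk    # each data disk should appear with its size and mount point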

 

I do not see shfs running, but I'll get that going after the full parity sync completes in 400 minutes or so.

Link to comment
