
[SOLVED] Four drives not mounting after USB key swap


cybo

Recommended Posts

I had a really bad day yesterday. While moving my unRAID 4.7 box to my new place, I somehow managed to snap the USB key in half. I went ahead and bought a new Pro key, but I've run into an issue with 4 of my 10 data disks refusing to mount, and appearing unformatted when the array is brought online.

 

I've verified the power and data connections to the drives, and they are all detected by the BIOS and by unRAID. Interestingly, the four drives refusing to mount are jumpered WD10EARS/EADS drives (the only jumpered drives in my array). They were precleared a few years ago with the jumpers attached, and the web config reports them as "4K-aligned". I suspect the problem is that unRAID can't locate the ReiserFS superblock on these drives. The drives that are not mounting are disk2, disk3, disk5, and disk6.

 

Any thoughts? Thanks!

syslog-2011-10-12.txt

Link to comment

You don't have a syslog from the first boot when this occurred?

 

It sounds like unRAID moved the partition. If so it's a simple fix as long as you DO NOT format the disks. There is a utility that you can run to move it back. Search the 5.0beta threads or maybe Joe L. will post the commands to check and change them back if necessary.
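The fix amounts to rewriting the start-sector field in the MBR's partition table; the data on the disk is never touched. Here's a minimal sketch of the idea against a scratch image file (the image name is made up for illustration; on a real array use Joe L.'s script rather than hand-patching a live disk):

```shell
# Partition 1's entry begins at byte 446 of the MBR; its starting-LBA field
# is the 4-byte little-endian value at bytes 454-457. Rewriting only that
# field moves the partition boundary without touching any data.
IMG=disk.img                 # scratch image -- NEVER do this blindly to a real drive
dd if=/dev/zero of="$IMG" bs=512 count=1 2>/dev/null

start=63                     # desired starting sector
# Emit the four bytes little-endian and patch them into the MBR:
printf "$(printf '\\%03o' $((start & 255)) $((start >> 8 & 255)) \
                          $((start >> 16 & 255)) $((start >> 24 & 255)))" |
  dd of="$IMG" bs=1 seek=454 conv=notrunc 2>/dev/null

# Read the field back to confirm:
od -An -tu4 -j454 -N4 "$IMG"   # -> 63
```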

Link to comment

Sorry, no syslog from the first boot. Another mistake on my part!

Your symptoms are exactly the same as in this thread:

http://lime-technology.com/forum/index.php?topic=15385.0

 

You'll need to figure out where the partition table currently points and where the filesystem is actually located, but the fix is really easy.
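One quick way to see where the MBR currently points (a sketch; /dev/sdX is a placeholder for one of the affected drives, and `fdisk -lu /dev/sdX` will show the same information in context):

```shell
# The starting LBA of partition 1 lives at bytes 454-457 of the MBR,
# stored little-endian. Dump it as an unsigned integer; on these drives
# you should see either 63 or 64.
od -An -tu4 -j454 -N4 /dev/sdX
```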

 

Also, since this is the second occurrence of this same bug, send a PM to limetech, or an e-mail to [email protected], informing him of this bug in 4.7 and pointing him to this thread.

 

Joe L.

 

 

Link to comment

Odds are you want to change the partition so it starts on sector 63, which, in combination with the jumper, actually lands at physical sector 64 on the disk.
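The arithmetic behind that: on a 4 KiB-sector Advanced Format drive, a partition is aligned only when its starting byte offset is a multiple of 4096. A quick check:

```shell
# Logical sector 63 by itself is NOT on a 4 KiB boundary:
echo $(( 63 * 512 % 4096 ))        # -> 3584 (misaligned)

# The EARS/EADS jumper shifts every LBA by one sector, so logical
# sector 63 lands on physical sector 64, which IS aligned:
echo $(( (63 + 1) * 512 % 4096 ))  # -> 0 (aligned)
```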
Link to comment

Thanks Joe & lionel! That makes sense.

 

I'll verify the partitioning tonight using that thread as a reference, and update this post accordingly. I'll be sure to fire Tom off an e-mail as well.

In your case, since you want the partition to start at sector 63 (as I expect that is where your filesystem starts), you will not use the "-A" option to my unraid_partition_disk.sh script.

 

The other thread gives a command to dump the initial part of the disk to see the start of the ReiserFS filesystem. You can verify whether the MBR should point to sector 63 or 64.
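For reference, a ReiserFS 3.6 superblock sits 64 KiB (128 sectors of 512 bytes) past the start of the filesystem and contains the magic string "ReIsEr2Fs", so you can probe both candidate start sectors directly (a sketch; /dev/sdX is a placeholder for one of the affected drives):

```shell
DEV=/dev/sdX    # placeholder -- substitute one of the affected drives
# If the filesystem starts at sector 63, the superblock magic lands in
# sector 63+128 = 191; if it starts at sector 64, in sector 192.
for start in 63 64; do
  if dd if="$DEV" bs=512 skip=$((start + 128)) count=1 2>/dev/null |
       grep -aq 'ReIsEr2Fs'; then
    echo "filesystem starts at sector $start"
  fi
done
```

Whichever sector the magic turns up in is where the MBR's partition entry should point.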

 

I suspect that when there is no super.dat (because the flash drive was replaced or rebuilt new), the MBRs on the disks are rewritten to match the partition-format preference in your settings, even when that is wrong for where the partition already exists.

 

Joe L.

Link to comment

Joe,

 

Just finished up with your script, and it worked perfectly! I saw in that thread that the -A option is only used to set a sector-64 start, so I didn't use it. All drives have mounted and the parity sync has started.

 

I'll send that e-mail off to Tom now. Thanks again for all your help!

Link to comment

Excellent to hear you got it fixed.

 

Bummer to hear that unRAID is moving the partition on a good disk.

 

Peter

Yes... in my opinion, a fairly major bug in 4.7 that took a very long time to surface. Easy to fix, but potentially a disaster if you had pressed the "Format" button, as the new filesystem would have been created in a different place than the old one, making the lost files far harder to recover.
Link to comment

cybo

 

What was "Default partition format:" set to when this occurred:

 

MBR: unaligned

 

or

 

MBR: 4k-aligned

 

This seems like a particularly nasty bug, I'm wondering what conditions will trigger it.

 

I believe it was set to the default of unaligned, but I'm not 100% sure. I did change it to 4k-aligned, but I believe only after the drives failed to mount. I'm really regretting not saving the initial boot syslog.

Link to comment

Archived

This topic is now archived and is closed to further replies.
