jokazeek Posted August 12, 2016
I have rebuilt a new disk three times now and am still unable to mount and recover disk 3. My parity is good, but no matter what disk I put into slot 3 it won't mount, even known working disks! I have also run 2 preclears on the disks, and still, after waiting days, disk 3 is always unmountable. I have brought up the array with the disk unassigned, stopped the array, and shut down and restarted a dozen times by now; still no disk 3. I am hoping someone has seen this issue or has some useful advice on the matter?
Squid Posted August 12, 2016
No matter how many times you preclear the disk or rebuild it, it is always going to come up as unmountable, because there is probably some corruption on the disk (and the parity information is reflecting that). You're going to have to Check The File System. Posting diagnostics here wouldn't hurt either.
jokazeek Posted August 12, 2016
What diagnostics, Squid? I'm a newbie, man, and when I clicked the diagnostics script from the GUI it gave me a zipped file with a ton of txt files, more than I can upload here.
JorgeB Posted August 12, 2016
Upload the zipped file; if it's too big for the forum, use Dropbox or similar.
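Rough sketch, from memory, in case the GUI route is awkward: on 6.x there is also a command-line way to generate the same zip; the exact output path below is an assumption, so double-check it on your system.

# From an SSH/console session on the server:
diagnostics
# This should write something like tower-diagnostics-YYYYMMDD-HHMM.zip,
# typically under /boot/logs on the flash drive; upload that one file.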
jokazeek Posted August 12, 2016
https://drive.google.com/file/d/0B39X5nzPMiC0a2haOV9LZ1JwRFk/view?usp=sharing
jokazeek Posted August 12, 2016
I have 2 4TB drives that have been "rebuilt" now. Each had no issues mounting prior to moving into Disk 3; after assigning them to Disk 3 and rebuilding, the mounting problems occur. I'm not sure what else to try here, but I know I have a ton of content missing due to Disk 3 being unmountable. I want to invest in this unRAID array some more but am finding it difficult to justify with what's happening with Disk 3 right now. Highwater sounded cool before this... I sure appreciate any help, guys! Thank you for your time.
JorgeB Posted August 12, 2016
https://lime-technology.com/wiki/index.php/Check_Disk_Filesystems#Drives_formatted_with_XFS
You need to check disk3 (md3).
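For anyone reading along, a minimal sketch of what that wiki page boils down to for this case, assuming disk3 maps to /dev/md3; start the array in Maintenance mode first.

# Read-only pass: reports filesystem problems without writing anything
xfs_repair -n /dev/md3
# If it reports damage, re-run without -n to actually repair (per the
# wiki), still against the md device so parity stays in sync.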
jokazeek Posted August 12, 2016
As a last-ditch effort I started running that on the drive that's currently in Disk 3. I've started my array in maintenance mode and am running 'xfs_repair -L /dev/mdc3'. It's been running for nearly 12 hours now with dots slowly scrolling across the SSH window. So far I'm only seeing writes to Disk 3 and reads from all other disks in the array, including the parity disk. Should I let it complete? Heck, will it complete?
JorgeB Posted August 12, 2016
Let it complete; it will take some time because it's running on the emulated disk.
jokazeek Posted August 13, 2016
Still running... Out of curiosity, how many writes can a 4TB drive take?
jokazeek Posted August 14, 2016
2 days and 13 hours later, it's still running...
JorgeB Posted August 14, 2016
Looks like you're rebuilding the disk and running xfs_repair at the same time; if that's the case, you'll slow down both operations.
jokazeek Posted August 14, 2016
I'm not rebuilding via the GUI? I've only run the xfs_repair command at this point. Also, from the screenshot I just attached, my data appears to be invalid. Could that be why it's taking so long?
JorgeB Posted August 14, 2016
There's something wrong going on there. xfs_repair does mostly reads, and the disk is not rebuilding, so I don't know what all those writes are about; maybe it's because you have a disk assigned to a slot that is invalid at the moment. You should cancel, and either rebuild the disk first and then run xfs_repair, or unassign that disk and run xfs_repair on the emulated disk.
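If it helps, a rough sketch of cancelling cleanly and confirming nothing is rebuilding; note that unRAID's modified md driver reports its own fields in /proc/mdstat, so the exact output is an assumption and differs from stock Linux.

# Ctrl-C in the SSH session running xfs_repair, or from another shell:
pkill -INT xfs_repair
# Then check the array's sync/rebuild status before deciding the next step:
grep -i resync /proc/mdstat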
jokazeek Posted August 14, 2016
Thank you for replying. I'm gonna have to buy you a beer after all of this, JohnnieB. First, I hate to stop it when it's been running for 2 1/2 days and is almost complete. Second, I have already run a rebuild on this disk via the GUI, and after it completed the disk showed up as green but was still "unmountable". At this point in the repair/rebuilding process I'd like to let it complete. Do you see any major concerns with that?
JorgeB Posted August 14, 2016
If the disk was rebuilt, what happened to make it orange and need another rebuild?
jokazeek Posted August 14, 2016
I couldn't mount it after the rebuild; that's the only message I received. The CA 'Common Problems' utility said it encountered an error and advised me that Disk 3 was unmountable. It also pointed me to the xfs_repair manual and recommended I follow that instead of formatting. At that point I realized I was missing 300+ pieces of content from my array, so I freaked out and just started following the recommended manual. That's where xfs_repair came in via the CLI.
JorgeB Posted August 14, 2016
Unmountable and needing to be rebuilt are two different things. When a disk is only unmountable, it still has a green ball.
jokazeek Posted August 14, 2016
Right, and it did. But the space portion was completely empty, and like I said, I freaked when I saw I was missing almost half of my content. If not an xfs_repair, what should I have done? I tried shutting down the array and bringing it up again, and that's when the yellow warning icon appeared.
jokazeek Posted August 14, 2016
My specs, if it'll help:
Server: unRAID 6.1.9 Plus | Mobo: ASRock EP2C602-4L/D16 | CPU: Intel Xeon E5-2620 x 2 | RAM: 48GB DDR3 | Parity: WD Red 6TB | Data: WD Red 4TB x 4, WD Blue 4TB x 1, WD Red 6TB x 1 | Cache: Samsung 250GB SSD 850 EVO | Case: COSMOS II Ultra Tower
JorgeB Posted August 14, 2016
Whatever you did, disk3 needs to be rebuilt again. This won't make it mount, but it has to turn green; after the rebuild is done you'll need to run xfs_repair.
jokazeek Posted August 14, 2016
OK, will do! Should I run the same command as last time? And should it be done immediately after it turns green, while the array is still up in maintenance mode?
xfs_repair -L /dev/mdc3
JorgeB Posted August 14, 2016
- first rebuild disk3
- all disks have to be green
- only then start in maintenance mode and run xfs_repair -v /dev/md3
Only use -L if xfs_repair tells you to do it.
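Putting that together as a rough outline, assuming disk3 is /dev/md3 as above; the rebuild steps are the usual GUI procedure, not something specific to this case.

# 1. Rebuild disk3 from parity via the GUI: stop the array, re-assign
#    the disk to slot 3, start the array, and wait until every disk
#    shows a green ball.
# 2. Stop the array and restart it in Maintenance mode.
# 3. Check/repair the md device (not the raw sdX disk) so parity stays
#    in sync:
xfs_repair -v /dev/md3
# Only add -L (zero the log) if xfs_repair explicitly tells you to;
# it throws away whatever metadata is still sitting in the journal.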
jokazeek Posted August 14, 2016
After it completes I'll still be in maintenance mode, so once everything is green, no matter what error message I receive, I'll run the CLI command you mentioned. How do I send you a beer or coffee for your time?
JorgeB Posted August 14, 2016
You don't need to be in maintenance mode to rebuild, only to run xfs_repair.