Merijeek Posted January 21, 2017

Today I came home from work and tried to open my Sickrage UI and I'm getting a connection refused on http://192.168.1.44:8081/. I did a scrub and got this:

scrub status for 1d177fe9-4d58-4f9d-8a29-e94ebcecc842
        scrub started at Fri Jan 20 17:05:55 2017 and finished after 00:02:00
        total bytes scrubbed: 15.15GiB with 2 errors
        error details: csum=2
        corrected errors: 0, uncorrectable errors: 2, unverified errors: 0

I've also attached diagnostics. I assume this is due to power issues. I did have the Fix Common Problems troubleshooting mode running, so I've got loads of .zip files at 30-minute increments, but I'm not sure which one would be of interest, so I don't quite want to spam those in there yet.

unraid-diagnostics-20170120-1709.zip
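[Editor's note: the number to watch in a scrub report like the one above is the uncorrectable-error count. A minimal sketch of pulling it out of saved scrub output so a script can alert on it; the here-doc stands in for real `btrfs scrub status <mountpoint>` output, and the idea of alerting this way is an illustration, not an Unraid feature.]

```shell
#!/bin/sh
# Parse a saved btrfs scrub report and flag uncorrectable errors.
# The here-doc below reproduces the report from the post; in practice
# you would capture `btrfs scrub status <mountpoint>` instead.
scrub_report=$(cat <<'EOF'
scrub status for 1d177fe9-4d58-4f9d-8a29-e94ebcecc842
        scrub started at Fri Jan 20 17:05:55 2017 and finished after 00:02:00
        total bytes scrubbed: 15.15GiB with 2 errors
        error details: csum=2
        corrected errors: 0, uncorrectable errors: 2, unverified errors: 0
EOF
)

# Grab the number after "uncorrectable errors:".
uncorrectable=$(printf '%s\n' "$scrub_report" \
    | grep -o 'uncorrectable errors: [0-9]*' \
    | grep -o '[0-9]*$')

if [ "$uncorrectable" -gt 0 ]; then
    echo "scrub found $uncorrectable uncorrectable errors"
fi
```

With the report above this prints a warning for the 2 csum errors; a cron job could mail that line instead of echoing it.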
Merijeek Posted January 21, 2017

I've also run xfs_repair, but haven't found anything too damning.

root@UnRAID:/dev# xfs_repair -v /dev/md2
Phase 1 - find and verify superblock...
        - block cache size set to 1087816 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 3353606 tail block 3353606
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 3
        - agno = 0
        - agno = 2
        - agno = 1
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...

        XFS_REPAIR Summary    Fri Jan 20 20:06:44 2017

Phase           Start           End             Duration
Phase 1:        01/20 20:06:15  01/20 20:06:16  1 second
Phase 2:        01/20 20:06:16  01/20 20:06:17  1 second
Phase 3:        01/20 20:06:17  01/20 20:06:26  9 seconds
Phase 4:        01/20 20:06:26  01/20 20:06:26
Phase 5:        01/20 20:06:26  01/20 20:06:26
Phase 6:        01/20 20:06:26  01/20 20:06:35  9 seconds
Phase 7:        01/20 20:06:35  01/20 20:06:35

Total run time: 20 seconds
done

root@UnRAID:/dev# xfs_repair -v /dev/md4
Phase 1 - find and verify superblock...
        - block cache size set to 1072760 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 3556750 tail block 3556750
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 4
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...

        XFS_REPAIR Summary    Fri Jan 20 20:08:59 2017

Phase           Start           End             Duration
Phase 1:        01/20 20:07:17  01/20 20:07:18  1 second
Phase 2:        01/20 20:07:18  01/20 20:07:21  3 seconds
Phase 3:        01/20 20:07:21  01/20 20:08:06  45 seconds
Phase 4:        01/20 20:08:06  01/20 20:08:06
Phase 5:        01/20 20:08:06  01/20 20:08:06
Phase 6:        01/20 20:08:06  01/20 20:08:46  40 seconds
Phase 7:        01/20 20:08:46  01/20 20:08:46

Total run time: 1 minute, 29 seconds
done

root@UnRAID:/dev# xfs_repair -v /dev/md1
Phase 1 - find and verify superblock...
        - block cache size set to 1132584 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 174794 tail block 174794
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...

        XFS_REPAIR Summary    Fri Jan 20 20:09:09 2017

Phase           Start           End             Duration
Phase 1:        01/20 20:09:07  01/20 20:09:07
Phase 2:        01/20 20:09:07  01/20 20:09:07
Phase 3:        01/20 20:09:07  01/20 20:09:08  1 second
Phase 4:        01/20 20:09:08  01/20 20:09:08
Phase 5:        01/20 20:09:08  01/20 20:09:08
Phase 6:        01/20 20:09:08  01/20 20:09:08
Phase 7:        01/20 20:09:08  01/20 20:09:08

Total run time: 1 second
done

root@UnRAID:/dev# xfs_repair -v /dev/md3
Phase 1 - find and verify superblock...
        - block cache size set to 1102608 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 862818 tail block 862818
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...

        XFS_REPAIR Summary    Fri Jan 20 20:11:34 2017

Phase           Start           End             Duration
Phase 1:        01/20 20:09:15  01/20 20:09:15
Phase 2:        01/20 20:09:15  01/20 20:09:19  4 seconds
Phase 3:        01/20 20:09:19  01/20 20:10:14  55 seconds
Phase 4:        01/20 20:10:14  01/20 20:10:14
Phase 5:        01/20 20:10:14  01/20 20:10:14
Phase 6:        01/20 20:10:14  01/20 20:10:40  26 seconds
Phase 7:        01/20 20:10:40  01/20 20:10:40

Total run time: 1 minute, 25 seconds
JorgeB Posted January 21, 2017

Docker image is corrupt, delete and recreate.
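[Editor's note: on Unraid, "delete and recreate" the docker image is normally done from the GUI (Settings -> Docker: stop the service, delete the image, start it again). A dry-run sketch of the equivalent steps; the image path and the rc script location are assumptions — check your own Docker settings page before running anything for real.]

```shell
#!/bin/sh
# Sketch of recreating a corrupt Unraid docker image.
# DOCKER_IMG default is an assumption; override it for your system.
DOCKER_IMG="${DOCKER_IMG:-/mnt/user/system/docker/docker.img}"

# Dry-run guard: print each step instead of executing it.
# Remove the echo (run the commands directly) once you are sure.
run() { echo "+ $*"; }

run /etc/rc.d/rc.docker stop     # stop the Docker service first
run rm -f "$DOCKER_IMG"          # delete the corrupt image
run /etc/rc.d/rc.docker start    # a fresh image is created on restart
```

Containers then need to be re-added, but with templates and appdata intact that is quick.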
Merijeek Posted January 21, 2017

I seem to have problems with Docker more often than... everything else put together, really. Any idea why that would be?
trurl Posted January 21, 2017

> I seem to have problems with Docker more often than... everything else put together, really. Any idea why that would be?

Have you read the Docker FAQ?
Merijeek Posted January 22, 2017

Yes, but sadly "Why do my dockers spontaneously stop responding?" isn't on the list.
Squid Posted January 22, 2017

> Yes, but sadly "Why do my dockers spontaneously stop responding?" isn't on the list.

I think what trurl was getting at is that there are two main causes of corruption on btrfs file systems:
- unclean shutdowns (buy and install a UPS; it's a requirement for any server IMHO, regardless of the OS)
- the file system / image filling up; this is covered within the FAQ
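[Editor's note: the second cause, the image filling up, is easy to watch for. A minimal sketch that warns when a mount point crosses a usage threshold; mounting the docker image at /var/lib/docker is an assumption (verify with `mount`), and `df --output=pcent` requires GNU coreutils, which Unraid ships.]

```shell
#!/bin/sh
# Warn when a filesystem's usage crosses a percentage threshold.
check_usage() {
    mountpoint="$1"
    limit="$2"
    # Last line of `df --output=pcent` is the usage, e.g. " 42%";
    # strip everything but the digits.
    used=$(df --output=pcent "$mountpoint" | tail -n 1 | tr -dc '0-9')
    if [ "$used" -ge "$limit" ]; then
        echo "WARNING: $mountpoint is ${used}% full"
    else
        echo "OK: $mountpoint is ${used}% full"
    fi
}

# e.g. on Unraid: check_usage /var/lib/docker 90
check_usage / 101
```

Run from cron, the WARNING line gives notice before a full image starts corrupting container data.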
trurl Posted January 22, 2017

> Yes, but sadly "Why do my dockers spontaneously stop responding?" isn't on the list.

Maybe not that specific question, but just before that it was suggested that your docker img was corrupt, and there is a lot in the FAQ that might help with that.