Repeated parity sync errors after server upgrade (SAS2LP-MV8)


Rich


Hey all,

 

I'd appreciate some help diagnosing the repeated parity sync errors I am getting after upgrading my server  :'(

 

With the release of unRAID 6.2 I decided to upgrade to dual parity. I already had a 6TB parity drive, so I just added another 6TB drive. When adding the extra drive I also changed the case, upgraded the PSU to a Corsair RM750i modular 750W, and added another Supermicro SAS2LP-MV8 controller (please see my sig for all the hardware I'm using).

 

Before the upgrade I never had any parity sync errors at all, but now I get errors on every sync that follows data being written to the array since the last check. If no data has been written to the array, there don't seem to be any errors.

 

I have performed SMART checks on all drives and they all pass, and a 24-hour memtest came back with no errors as well. I haven't swapped out any cabling, but I have checked all the connections and everything seems OK.
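
For reference, this is roughly how I ran the SMART checks from the console (a sketch; /dev/sdb is just an example device path):

smartctl -H /dev/sdb   # quick overall pass/fail health verdict
smartctl -a /dev/sdb   # full attribute dump; watch the reallocated/pending sector counts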

 

I'm currently halfway through another parity check, which has thrown up errors again, so I am going to attach the syslog so far. I can see the errors in there, but I am not sure what they mean.

 

Any help would be really appreciated, as I'm out of my depth here and am unsure how to go about diagnosing the cause of the problem.

 

Thank you

 

syslog.txt


You are having issues with these 4 disks:

 

ata15.00: ATA-9: WDC WD10EZRX-00A3KB0,      WD-WCC4J3KXVHSP
ata16.00: ATA-9: WDC WD10EZRX-00A3KB0,      WD-WCC4J1VA33SR
ata17.00: ATA-9: WDC WD60EZRX-00MVLB1,      WD-WX31D55A45S3
ata18.00: ATA-8: WDC WD10EADS-00L5B1,      WD-WCAU4C515839

 

Start by checking/replacing what they have in common: controller, cable, etc.
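
If it helps, one way to see which drives sit behind which controller and port (a sketch; the exact path names will differ per system):

ls -l /dev/disk/by-path/
# entries like pci-0000:04:00.0-sas-... -> ../../sdb show which PCI device
# (controller) and phy each disk hangs off, so shared hardware is easy to spot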

 

You'll also need to check the filesystem on disk8:

 

Dec  6 09:04:31 unRAID kernel: XFS (md8): Unmount and run xfs_repair


I'm in the process of testing the components connected to the four drives; results so far indicate that it might be one of the ports on the controller.

 

When running further parity checks, however, I am seeing the below error in the syslog, repeated 157 times  :-\ at roughly the middle of the parity check.

 

Is this related, and can anyone tell me what it actually means?

 

Thank you

 

Dec  8 04:03:53 unRAID kernel: 68348 pages reserved
Dec  8 04:03:53 unRAID kernel: qemu-system-x86: page allocation failure: order:4, mode:0x260c0c0
Dec  8 04:03:53 unRAID kernel: CPU: 0 PID: 9253 Comm: qemu-system-x86 Tainted: G        W       4.4.30-unRAID #2
Dec  8 04:03:53 unRAID kernel: Hardware name: ASUS All Series/Z87-K, BIOS 1402 11/05/2014
Dec  8 04:03:53 unRAID kernel: 0000000000000000 ffff88034a117798 ffffffff8136f79f 0000000000000001
Dec  8 04:03:53 unRAID kernel: 0000000000000004 ffff88034a117830 ffffffff810bd527 000000010260c0c0
Dec  8 04:03:53 unRAID kernel: ffff88034ec0e000 0000000400000040 0000000000000010 0000000000000004
Dec  8 04:03:53 unRAID kernel: Call Trace:
Dec  8 04:03:53 unRAID kernel: [<ffffffff8136f79f>] dump_stack+0x61/0x7e
Dec  8 04:03:53 unRAID kernel: [<ffffffff810bd527>] warn_alloc_failed+0x10f/0x127
Dec  8 04:03:53 unRAID kernel: [<ffffffff810c0548>] __alloc_pages_nodemask+0x870/0x8ca
Dec  8 04:03:53 unRAID kernel: [<ffffffff810c074c>] alloc_kmem_pages_node+0x4b/0xb3
Dec  8 04:03:53 unRAID kernel: [<ffffffff810f4d58>] kmalloc_large_node+0x24/0x52
Dec  8 04:03:53 unRAID kernel: [<ffffffff810f7501>] __kmalloc_node+0x22/0x153
Dec  8 04:03:53 unRAID kernel: [<ffffffff810209b0>] reserve_ds_buffers+0x18c/0x33d
Dec  8 04:03:53 unRAID kernel: [<ffffffff8101b3fc>] x86_reserve_hardware+0x135/0x147
Dec  8 04:03:53 unRAID kernel: [<ffffffff8101b45e>] x86_pmu_event_init+0x50/0x1c9
Dec  8 04:03:53 unRAID kernel: [<ffffffff810ae7bd>] perf_try_init_event+0x41/0x72
Dec  8 04:03:53 unRAID kernel: [<ffffffff810aec0e>] perf_event_alloc+0x420/0x66e
Dec  8 04:03:53 unRAID kernel: [<ffffffffa00f958e>] ? kvm_dev_ioctl_get_cpuid+0x1c0/0x1c0 [kvm]
Dec  8 04:03:53 unRAID kernel: [<ffffffff810b0bbb>] perf_event_create_kernel_counter+0x22/0x112
Dec  8 04:03:53 unRAID kernel: [<ffffffffa00f96d9>] pmc_reprogram_counter+0xbf/0x104 [kvm]
Dec  8 04:03:53 unRAID kernel: [<ffffffffa00f992b>] reprogram_fixed_counter+0xc7/0xd8 [kvm]
Dec  8 04:03:53 unRAID kernel: [<ffffffffa03d0987>] intel_pmu_set_msr+0xe0/0x2ca [kvm_intel]
Dec  8 04:03:53 unRAID kernel: [<ffffffffa00f9b2c>] kvm_pmu_set_msr+0x15/0x17 [kvm]
Dec  8 04:03:53 unRAID kernel: [<ffffffffa00dba57>] kvm_set_msr_common+0x921/0x983 [kvm]
Dec  8 04:03:53 unRAID kernel: [<ffffffffa03d0400>] vmx_set_msr+0x2ec/0x2fe [kvm_intel]
Dec  8 04:03:53 unRAID kernel: [<ffffffffa00d8424>] kvm_set_msr+0x61/0x63 [kvm]
Dec  8 04:03:53 unRAID kernel: [<ffffffffa03c99c4>] handle_wrmsr+0x3b/0x62 [kvm_intel]
Dec  8 04:03:53 unRAID kernel: [<ffffffffa03ce63f>] vmx_handle_exit+0xfbb/0x1053 [kvm_intel]
Dec  8 04:03:53 unRAID kernel: [<ffffffffa03d0105>] ? vmx_vcpu_run+0x30e/0x31d [kvm_intel]
Dec  8 04:03:53 unRAID kernel: [<ffffffffa00e1f92>] kvm_arch_vcpu_ioctl_run+0x38a/0x1080 [kvm]
Dec  8 04:03:53 unRAID kernel: [<ffffffffa00dc938>] ? kvm_arch_vcpu_load+0x6b/0x16c [kvm]
Dec  8 04:03:53 unRAID kernel: [<ffffffffa00dc9b5>] ? kvm_arch_vcpu_load+0xe8/0x16c [kvm]
Dec  8 04:03:53 unRAID kernel: [<ffffffffa00d2cff>] kvm_vcpu_ioctl+0x178/0x499 [kvm]
Dec  8 04:03:53 unRAID kernel: [<ffffffffa00d5152>] ? kvm_vm_ioctl+0x3e8/0x5d8 [kvm]
Dec  8 04:03:53 unRAID kernel: [<ffffffff8111869e>] do_vfs_ioctl+0x3a3/0x416
Dec  8 04:03:53 unRAID kernel: [<ffffffff8112070e>] ? __fget+0x72/0x7e
Dec  8 04:03:53 unRAID kernel: [<ffffffff8111874f>] SyS_ioctl+0x3e/0x5c
Dec  8 04:03:53 unRAID kernel: [<ffffffff81629c2e>] entry_SYSCALL_64_fastpath+0x12/0x6d
Dec  8 04:03:53 unRAID kernel: Mem-Info:
Dec  8 04:03:53 unRAID kernel: active_anon:536036 inactive_anon:6133 isolated_anon:0
Dec  8 04:03:53 unRAID kernel: active_file:520606 inactive_file:1053929 isolated_file:0
Dec  8 04:03:53 unRAID kernel: unevictable:1754170 dirty:135 writeback:0 unstable:0
Dec  8 04:03:53 unRAID kernel: slab_reclaimable:64958 slab_unreclaimable:17450
Dec  8 04:03:53 unRAID kernel: mapped:37272 shmem:89672 pagetables:9137 bounce:0
Dec  8 04:03:53 unRAID kernel: free:69997 free_pcp:0 free_cma:0
Dec  8 04:03:53 unRAID kernel: Node 0 DMA free:15872kB min:132kB low:164kB high:196kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15956kB managed:15872kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
Dec  8 04:03:53 unRAID kernel: lowmem_reserve[]: 0 3299 15839 15839
Dec  8 04:03:53 unRAID kernel: Node 0 DMA32 free:97140kB min:28124kB low:35152kB high:42184kB active_anon:399664kB inactive_anon:4252kB active_file:452216kB inactive_file:884428kB unevictable:1560752kB isolated(anon):0kB isolated(file):0kB present:3525636kB managed:3515880kB mlocked:1560752kB dirty:8kB writeback:0kB mapped:41496kB shmem:73524kB slab_reclaimable:54680kB slab_unreclaimable:14348kB kernel_stack:2128kB pagetables:7436kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Dec  8 04:03:53 unRAID kernel: lowmem_reserve[]: 0 0 12540 12540
Dec  8 04:03:53 unRAID kernel: Node 0 Normal free:166976kB min:106908kB low:133632kB high:160360kB active_anon:1744480kB inactive_anon:20280kB active_file:1630208kB inactive_file:3331288kB unevictable:5455928kB isolated(anon):0kB isolated(file):0kB present:13105152kB managed:12841600kB mlocked:5455928kB dirty:532kB writeback:0kB mapped:107592kB shmem:285164kB slab_reclaimable:205152kB slab_unreclaimable:55452kB kernel_stack:14000kB pagetables:29112kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Dec  8 04:03:53 unRAID kernel: lowmem_reserve[]: 0 0 0 0
Dec  8 04:03:53 unRAID kernel: Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (U) 3*4096kB (M) = 15872kB
Dec  8 04:03:53 unRAID kernel: Node 0 DMA32: 5981*4kB (UME) 4632*8kB (UME) 1769*16kB (UME) 256*32kB (UME) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 97476kB
Dec  8 04:03:53 unRAID kernel: Node 0 Normal: 37123*4kB (UMEH) 1915*8kB (UMEH) 50*16kB (UMEH) 21*32kB (UH) 12*64kB (H) 8*128kB (H) 3*256kB (H) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 167844kB
Dec  8 04:03:53 unRAID kernel: 1664245 total pagecache pages
Dec  8 04:03:53 unRAID kernel: 0 pages in swap cache
Dec  8 04:03:53 unRAID kernel: Swap cache stats: add 0, delete 0, find 0/0
Dec  8 04:03:53 unRAID kernel: Free swap  = 0kB
Dec  8 04:03:53 unRAID kernel: Total swap = 0kB
Dec  8 04:03:53 unRAID kernel: 4161686 pages RAM
Dec  8 04:03:53 unRAID kernel: 0 pages HighMem/MovableOnly
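
Side note on reading the trace: "order:4" means the kernel wanted 2^4 = 16 contiguous pages (64kB with 4kB pages). How fragmented memory currently is can be checked with, for example:

cat /proc/buddyinfo
# one column per order: counts of free blocks of 4kB, 8kB, 16kB, ...;
# zeros in the higher-order columns mean large contiguous allocations will fail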


I did that earlier this week, and it ran for 24 hours, as I was trying to identify the cause of my sync errors  :(

It came back with zero errors.

 

Is there anything else I can check?

 

Also, would the kernel crash have had any impact? Everything seemed to be running OK; all the Dockers and the VM, plus the parity check, were and still are running fine.

 

Thank you


You are having issues with these 4 disks:

ata15.00: ATA-9: WDC WD10EZRX-00A3KB0,      WD-WCC4J3KXVHSP
ata16.00: ATA-9: WDC WD10EZRX-00A3KB0,      WD-WCC4J1VA33SR
ata17.00: ATA-9: WDC WD60EZRX-00MVLB1,      WD-WX31D55A45S3
ata18.00: ATA-8: WDC WD10EADS-00L5B1,      WD-WCAU4C515839

 

Start by checking/replacing what they have in common: controller, cable, etc.

 

You'll also need to check the filesystem on disk8:

Dec  6 09:04:31 unRAID kernel: XFS (md8): Unmount and run xfs_repair

 

All four disks (one of them is disk8) run off the same SAS port on the controller. I ran xfs_repair and it never highlighted anything, so I assume that fixed the problem?

I then swapped the SAS cable to the spare port on the controller and ran three parity checks (running the mover in between), all coming back with zero errors.

I have now swapped the cable back to its original port and am running another parity check, which is looking good so far.

 

Is it possible that the above filesystem problem could have caused the issues with the other four disks, which in turn caused the parity errors?


All four disks (one of them is disk8) run off the same SAS port on the controller. I ran xfs_repair and it never highlighted anything, so I assume that fixed the problem?

 

It should be fixed, as long as it was run without the -n option (read-only check).
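
For anyone finding this thread later, the usual sequence is roughly as follows (a sketch; md8 is the device for disk8, per the XFS log line above, and the array should be started in maintenance mode or the disk otherwise unmounted first):

xfs_repair -n /dev/md8   # -n = no modify: reports problems but fixes nothing
xfs_repair /dev/md8      # actual repair pass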

 

Is it possible that the above filesystem problem could have caused the issues with the other four disks, which in turn caused the parity errors?

 

No, most likely the errors on the 4 disks caused the filesystem corruption and parity sync errors.

 

Keep monitoring the syslog for a few days; if there are no more issues, it could have been a badly seated SAS cable.
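
A quick way to watch for these in real time, assuming the stock log location (a sketch):

tail -f /var/log/syslog | grep -Ei 'ata1[5-8]|sas|xfs|error'
# follows the log, showing only lines that mention the suspect ata ports,
# the SAS layer, or filesystem errors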


I spoke too soon! Just had over 1000 sync errors appear.

 

The SAS cable I'm using doesn't create any errors when used with the second port on the same controller, which to me suggests the first SAS port is the problem.

Does that sound like a reasonable cause for all the above faults?

 

Dec 10 12:04:00 unRAID afpd[3920]: child[11098]: asev_del_fd: 4
Dec 10 12:23:05 unRAID afpd[3920]: child[16112]: asev_del_fd: 5
Dec 10 13:26:43 unRAID kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
Dec 10 13:26:43 unRAID kernel: sas: trying to find task 0xffff8803d312f000
Dec 10 13:26:43 unRAID kernel: sas: sas_scsi_find_task: aborting task 0xffff8803d312f000
Dec 10 13:26:43 unRAID kernel: sas: sas_scsi_find_task: task 0xffff8803d312f000 is aborted
Dec 10 13:26:43 unRAID kernel: sas: sas_eh_handle_sas_errors: task 0xffff8803d312f000 is aborted
Dec 10 13:26:43 unRAID kernel: sas: ata15: end_device-8:0: cmd error handler
Dec 10 13:26:43 unRAID kernel: sas: ata15: end_device-8:0: dev error handler
Dec 10 13:26:43 unRAID kernel: ata15.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Dec 10 13:26:43 unRAID kernel: ata15.00: failed command: READ DMA EXT
Dec 10 13:26:43 unRAID kernel: ata15.00: cmd 25/00:00:b0:64:8f/00:04:32:00:00/e0 tag 18 dma 524288 in
Dec 10 13:26:43 unRAID kernel:         res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Dec 10 13:26:43 unRAID kernel: ata15.00: status: { DRDY }
Dec 10 13:26:43 unRAID kernel: ata15: hard resetting link
Dec 10 13:26:43 unRAID kernel: sas: ata16: end_device-8:1: dev error handler
Dec 10 13:26:43 unRAID kernel: sas: ata17: end_device-8:2: dev error handler
Dec 10 13:26:43 unRAID kernel: sas: ata18: end_device-8:3: dev error handler
Dec 10 13:26:43 unRAID kernel: sas: sas_form_port: phy0 belongs to port0 already(1)!
Dec 10 13:26:45 unRAID kernel: drivers/scsi/mvsas/mv_sas.c 1430:mvs_I_T_nexus_reset for device[0]:rc= 0
Dec 10 13:26:46 unRAID kernel: ata15.00: configured for UDMA/133
Dec 10 13:26:46 unRAID kernel: ata15: EH complete
Dec 10 13:26:46 unRAID kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
Dec 10 13:26:46 unRAID kernel: drivers/scsi/mvsas/mv_94xx.c 625:command active 000003DF,  slot [5].
Dec 10 13:26:46 unRAID kernel: md: recovery thread: PQ corrected, sector=848259200
Dec 10 13:26:46 unRAID kernel: md: recovery thread: PQ corrected, sector=848259208
Dec 10 13:26:46 unRAID kernel: md: recovery thread: PQ corrected, sector=848259216
Dec 10 13:26:46 unRAID kernel: md: recovery thread: PQ corrected, sector=848259224
Dec 10 13:26:46 unRAID kernel: md: recovery thread: PQ corrected, sector=848259232
[... 91 further identical "PQ corrected" lines, one every 8 sectors, from sector=848259240 through sector=848259960 ...]
Dec 10 13:26:46 unRAID kernel: md: recovery thread: PQ corrected, sector=848259968
Dec 10 13:26:46 unRAID kernel: md: recovery thread: PQ corrected, sector=848259976
Dec 10 13:26:46 unRAID kernel: md: recovery thread: PQ corrected, sector=848259984
Dec 10 13:26:46 unRAID kernel: md: recovery thread: PQ corrected, sector=848259992
Dec 10 13:26:46 unRAID kernel: md: recovery thread: stopped logging
Dec 10 13:26:54 unRAID kernel: drivers/scsi/mvsas/mv_94xx.c 625:command active 00000300,  slot [7].
Dec 10 13:27:02 unRAID kernel: drivers/scsi/mvsas/mv_94xx.c 625:command active 00000230,  slot [8].
Dec 10 13:27:10 unRAID kernel: drivers/scsi/mvsas/mv_94xx.c 625:command active 00000210,  slot [5].
Dec 10 13:27:18 unRAID kernel: drivers/scsi/mvsas/mv_94xx.c 625:command active 00000250,  slot [5].
Dec 10 13:27:26 unRAID kernel: drivers/scsi/mvsas/mv_94xx.c 625:command active 00000210,  slot [6].
Dec 10 13:27:26 unRAID kernel: sas: Enter sas_scsi_recover_host busy: 2 failed: 2
Dec 10 13:27:26 unRAID kernel: sas: trying to find task 0xffff8801d31d3a00
Dec 10 13:27:26 unRAID kernel: sas: sas_scsi_find_task: aborting task 0xffff8801d31d3a00
Dec 10 13:27:26 unRAID kernel: sas: sas_scsi_find_task: task 0xffff8801d31d3a00 is aborted
Dec 10 13:27:26 unRAID kernel: sas: sas_eh_handle_sas_errors: task 0xffff8801d31d3a00 is aborted
Dec 10 13:27:26 unRAID kernel: sas: trying to find task 0xffff8800090a7d00
Dec 10 13:27:26 unRAID kernel: sas: sas_scsi_find_task: aborting task 0xffff8800090a7d00
Dec 10 13:27:26 unRAID kernel: sas: sas_scsi_find_task: task 0xffff8800090a7d00 is aborted
Dec 10 13:27:26 unRAID kernel: sas: sas_eh_handle_sas_errors: task 0xffff8800090a7d00 is aborted
Dec 10 13:27:26 unRAID kernel: sas: ata16: end_device-8:1: cmd error handler
Dec 10 13:27:26 unRAID kernel: sas: ata15: end_device-8:0: dev error handler
Dec 10 13:27:26 unRAID kernel: sas: ata16: end_device-8:1: dev error handler
Dec 10 13:27:26 unRAID kernel: ata16.00: exception Emask 0x0 SAct 0x60 SErr 0x0 action 0x6 frozen
Dec 10 13:27:26 unRAID kernel: sas: ata17: end_device-8:2: dev error handler
Dec 10 13:27:26 unRAID kernel: sas: ata18: end_device-8:3: dev error handler
Dec 10 13:27:26 unRAID kernel: ata16.00: failed command: READ FPDMA QUEUED
Dec 10 13:27:26 unRAID kernel: ata16.00: cmd 60/10:00:a0:74:8f/00:00:32:00:00/40 tag 5 ncq 8192 in
Dec 10 13:27:26 unRAID kernel:         res 40/00:08:48:f5:36/00:00:2a:00:00/40 Emask 0x4 (timeout)
Dec 10 13:27:26 unRAID kernel: ata16.00: status: { DRDY }
Dec 10 13:27:26 unRAID kernel: ata16.00: failed command: READ FPDMA QUEUED
Dec 10 13:27:26 unRAID kernel: ata16.00: cmd 60/00:00:b0:74:8f/04:00:32:00:00/40 tag 6 ncq 524288 in
Dec 10 13:27:26 unRAID kernel:         res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Dec 10 13:27:26 unRAID kernel: ata16.00: status: { DRDY }
Dec 10 13:27:26 unRAID kernel: ata16: hard resetting link
Dec 10 13:27:26 unRAID kernel: sas: sas_form_port: phy1 belongs to port1 already(1)!
Dec 10 13:27:28 unRAID kernel: drivers/scsi/mvsas/mv_sas.c 1430:mvs_I_T_nexus_reset for device[1]:rc= 0
Dec 10 13:27:29 unRAID kernel: ata16.00: configured for UDMA/133
Dec 10 13:27:29 unRAID kernel: ata16: EH complete
Dec 10 13:27:29 unRAID kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 2 tries: 1
Dec 10 13:27:29 unRAID kernel: drivers/scsi/mvsas/mv_94xx.c 625:command active 0000022F,  slot [4].
Dec 10 13:27:29 unRAID kernel: drivers/scsi/mvsas/mv_94xx.c 625:command active 000005FF,  slot [9].

  • 2 weeks later...

Well, I waited a week and performed another check and sadly got errors :'( so it's not the port. I'm now in the process of swapping out components.

 

I've swapped out the SAS cable and have started another check. I'm a few hours in and this has come up in the syslog again; however, this time there were no corrected sectors afterwards (see reply #8), so no errors (yet). Could someone explain to me what the error indicates, please?

 

Dec 21 18:21:53 unRAID kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
Dec 21 18:21:53 unRAID kernel: sas: trying to find task 0xffff8800187da000
Dec 21 18:21:53 unRAID kernel: sas: sas_scsi_find_task: aborting task 0xffff8800187da000
Dec 21 18:21:53 unRAID kernel: sas: sas_scsi_find_task: task 0xffff8800187da000 is aborted
Dec 21 18:21:53 unRAID kernel: sas: sas_eh_handle_sas_errors: task 0xffff8800187da000 is aborted
Dec 21 18:21:53 unRAID kernel: sas: ata16: end_device-8:1: cmd error handler
Dec 21 18:21:53 unRAID kernel: sas: ata15: end_device-8:0: dev error handler
Dec 21 18:21:53 unRAID kernel: sas: ata16: end_device-8:1: dev error handler
Dec 21 18:21:53 unRAID kernel: sas: ata17: end_device-8:2: dev error handler
Dec 21 18:21:53 unRAID kernel: ata16.00: exception Emask 0x0 SAct 0x40 SErr 0x0 action 0x6 frozen
Dec 21 18:21:53 unRAID kernel: sas: ata18: end_device-8:3: dev error handler
Dec 21 18:21:53 unRAID kernel: ata16.00: failed command: READ FPDMA QUEUED
Dec 21 18:21:53 unRAID kernel: ata16.00: cmd 60/00:00:10:d6:37/04:00:2b:00:00/40 tag 6 ncq 524288 in
Dec 21 18:21:53 unRAID kernel:         res 40/00:1c:00:d8:1f/00:00:1d:00:00/40 Emask 0x4 (timeout)
Dec 21 18:21:53 unRAID kernel: ata16.00: status: { DRDY }
Dec 21 18:21:53 unRAID kernel: ata16: hard resetting link
Dec 21 18:21:53 unRAID kernel: sas: sas_form_port: phy1 belongs to port1 already(1)!
Dec 21 18:21:55 unRAID kernel: drivers/scsi/mvsas/mv_sas.c 1430:mvs_I_T_nexus_reset for device[1]:rc= 0
Dec 21 18:21:56 unRAID kernel: ata16.00: configured for UDMA/133
Dec 21 18:21:56 unRAID kernel: ata16: EH complete
Dec 21 18:21:56 unRAID kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
Dec 21 18:24:45 unRAID kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
Dec 21 18:24:45 unRAID kernel: sas: trying to find task 0xffff88010761e500
Dec 21 18:24:45 unRAID kernel: sas: sas_scsi_find_task: aborting task 0xffff88010761e500
Dec 21 18:24:45 unRAID kernel: sas: sas_scsi_find_task: task 0xffff88010761e500 is aborted
Dec 21 18:24:45 unRAID kernel: sas: sas_eh_handle_sas_errors: task 0xffff88010761e500 is aborted
Dec 21 18:24:45 unRAID kernel: sas: ata17: end_device-8:2: cmd error handler
Dec 21 18:24:45 unRAID kernel: sas: ata15: end_device-8:0: dev error handler
Dec 21 18:24:45 unRAID kernel: sas: ata16: end_device-8:1: dev error handler
Dec 21 18:24:45 unRAID kernel: sas: ata17: end_device-8:2: dev error handler
Dec 21 18:24:45 unRAID kernel: ata17.00: exception Emask 0x0 SAct 0x40000000 SErr 0x0 action 0x6 frozen
Dec 21 18:24:45 unRAID kernel: sas: ata18: end_device-8:3: dev error handler
Dec 21 18:24:45 unRAID kernel: ata17.00: failed command: READ FPDMA QUEUED
Dec 21 18:24:45 unRAID kernel: ata17.00: cmd 60/00:00:90:ac:c3/04:00:2c:00:00/40 tag 30 ncq 524288 in
Dec 21 18:24:45 unRAID kernel:         res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Dec 21 18:24:45 unRAID kernel: ata17.00: status: { DRDY }
Dec 21 18:24:45 unRAID kernel: ata17: hard resetting link
Dec 21 18:24:45 unRAID kernel: sas: sas_form_port: phy2 belongs to port2 already(1)!
Dec 21 18:24:47 unRAID kernel: drivers/scsi/mvsas/mv_sas.c 1430:mvs_I_T_nexus_reset for device[2]:rc= 0
Dec 21 18:24:48 unRAID kernel: ata17.00: configured for UDMA/133
Dec 21 18:24:48 unRAID kernel: ata17: EH complete
Dec 21 18:24:48 unRAID kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1

 

 


Ah, OK. Thank you.

 

As there were only two timeout cycles and no errors (plus no errors over the rest of the parity check), does that mean it eventually managed to do what it was trying to do? I assume that if it hadn't been able to complete the operation, that would have created an error?


I see. So as I've swapped out the SAS cable for a brand new one and haven't got any errors, but did get timeouts, does that suggest the problem is still there and it's probably not the cable? I'm assuming the timeouts previously led to the errors, so the catalyst is potentially still there despite the cable change?

 

Or could it be a coincidence and be unrelated? What are the odds? lol

 

Also, is it possible to ascertain what disk the timeouts relate to?
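
For what it's worth, the ataN labels can be tied back to a physical drive by searching the boot log for that port's identify line, which includes the model and serial (a sketch, assuming the default syslog path):

grep 'ata16' /var/log/syslog | grep 'ATA-'
# prints e.g. "ata16.00: ATA-9: WDC WD10EZRX-00A3KB0, WD-WCC4J1VA33SR",
# and the serial can then be matched to a slot in the unRAID webGUI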

  • 3 weeks later...

I'm still at it! lol.

 

Just received a new controller and swapped it over. I'm going to wait a week, then run a parity check and see what happens.

 

I checked the syslog after boot-up to make sure nothing strange appeared, and I noticed the below. These errors never appeared before, and I never changed anything in the BIOS as far as ACPI is concerned, so could they be another symptom of my problem?

 

Could these errors signify a problem with the motherboard?

 

Thank you,

 

Rich

 

Jan 12 21:07:47 unRAID kernel: ata1: SATA max UDMA/133 abar m2048@0xf7c1a000 port 0xf7c1a100 irq 28
Jan 12 21:07:47 unRAID kernel: ata2: DUMMY
Jan 12 21:07:47 unRAID kernel: ata3: SATA max UDMA/133 abar m2048@0xf7c1a000 port 0xf7c1a200 irq 28
Jan 12 21:07:47 unRAID kernel: ata4: SATA max UDMA/133 abar m2048@0xf7c1a000 port 0xf7c1a280 irq 28
Jan 12 21:07:47 unRAID kernel: ata5: SATA max UDMA/133 abar m2048@0xf7c1a000 port 0xf7c1a300 irq 28
Jan 12 21:07:47 unRAID kernel: ata6: SATA max UDMA/133 abar m2048@0xf7c1a000 port 0xf7c1a380 irq 28
Jan 12 21:07:47 unRAID kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 12 21:07:47 unRAID kernel: ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 12 21:07:47 unRAID kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 12 21:07:47 unRAID kernel: ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 12 21:07:47 unRAID kernel: ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 12 21:07:47 unRAID kernel: ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 12 21:07:47 unRAID kernel: ata4.00: ATA-9: WDC WD60EZRZ-00GZ5B1,      WD-WX21D36PPX7X, 80.00A80, max UDMA/133
Jan 12 21:07:47 unRAID kernel: ata4.00: 11721045168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Jan 12 21:07:47 unRAID kernel: ACPI Error: [DSSP] Namespace lookup failure, AE_NOT_FOUND (20150930/psargs-359)
Jan 12 21:07:47 unRAID kernel: ACPI Error: Method parse/execution failed [\_SB.PCI0.SAT0.SPT5._GTF] (Node ffff88040f0629b0), AE_NOT_FOUND (20150930/psparse-542)
Jan 12 21:07:47 unRAID kernel: ACPI Error: [DSSP] Namespace lookup failure, AE_NOT_FOUND (20150930/psargs-359)
Jan 12 21:07:47 unRAID kernel: ACPI Error: Method parse/execution failed [\_SB.PCI0.SAT0.SPT4._GTF] (Node ffff88040f062938), AE_NOT_FOUND (20150930/psparse-542)
Jan 12 21:07:47 unRAID kernel: ata6.00: supports DRM functions and may not be fully accessible
Jan 12 21:07:47 unRAID kernel: ata1.00: ATA-9: WDC WD10EZRX-00A3KB0,      WD-WCC4J2NH9DDT, 01.01A01, max UDMA/133
Jan 12 21:07:47 unRAID kernel: ata1.00: 1953525168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Jan 12 21:07:47 unRAID kernel: ata5.00: ATA-9: WDC WD5000LPVX-22V0TT0,      WD-WX21A152AR3V, 01.01A01, max UDMA/133
Jan 12 21:07:47 unRAID kernel: ata5.00: 976773168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Jan 12 21:07:47 unRAID kernel: ata3.00: ATA-9: WDC WD60EZRX-00MVLB1,      WD-WX21D9421D3V, 80.00A80, max UDMA/133
Jan 12 21:07:47 unRAID kernel: ata3.00: 11721045168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Jan 12 21:07:47 unRAID kernel: ata4.00: configured for UDMA/133
Jan 12 21:07:47 unRAID kernel: ata1.00: configured for UDMA/133
Jan 12 21:07:47 unRAID kernel: ACPI Error: 
Jan 12 21:07:47 unRAID kernel: scsi 2:0:0:0: Direct-Access     ATA      WDC WD10EZRX-00A 1A01 PQ: 0 ANSI: 5
Jan 12 21:07:47 unRAID kernel: [DSSP]<5>sd 2:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/932 GiB)
Jan 12 21:07:47 unRAID kernel: sd 2:0:0:0: [sdb] 4096-byte physical blocks
Jan 12 21:07:47 unRAID kernel: sd 2:0:0:0: Attached scsi generic sg1 type 0
Jan 12 21:07:47 unRAID kernel: sd 2:0:0:0: [sdb] Write Protect is off
Jan 12 21:07:47 unRAID kernel: sd 2:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Jan 12 21:07:47 unRAID kernel: sd 2:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 12 21:07:47 unRAID kernel: Namespace lookup failure, AE_NOT_FOUND (20150930/psargs-359)
Jan 12 21:07:47 unRAID kernel: ACPI Error: Method parse/execution failed [\_SB.PCI0.SAT0.SPT4._GTF] (Node ffff88040f062938), AE_NOT_FOUND (20150930/psparse-542)
Jan 12 21:07:47 unRAID kernel: ata5.00: configured for UDMA/133
Jan 12 21:07:47 unRAID kernel: ata3.00: configured for UDMA/133
Jan 12 21:07:47 unRAID kernel: scsi 4:0:0:0: Direct-Access     ATA      WDC WD60EZRX-00M 0A80 PQ: 0 ANSI: 5
Jan 12 21:07:47 unRAID kernel: sd 4:0:0:0: [sdc] 11721045168 512-byte logical blocks: (6.00 TB/5.46 TiB)
Jan 12 21:07:47 unRAID kernel: sd 4:0:0:0: Attached scsi generic sg2 type 0
Jan 12 21:07:47 unRAID kernel: sd 4:0:0:0: [sdc] 4096-byte physical blocks
Jan 12 21:07:47 unRAID kernel: scsi 5:0:0:0: Direct-Access     ATA      WDC WD60EZRZ-00G 0A80 PQ: 0 ANSI: 5
Jan 12 21:07:47 unRAID kernel: sd 4:0:0:0: [sdc] Write Protect is off
Jan 12 21:07:47 unRAID kernel: sd 4:0:0:0: [sdc] Mode Sense: 00 3a 00 00
Jan 12 21:07:47 unRAID kernel: sd 4:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 12 21:07:47 unRAID kernel: sd 5:0:0:0: [sdd] 11721045168 512-byte logical blocks: (6.00 TB/5.46 TiB)
Jan 12 21:07:47 unRAID kernel: sd 5:0:0:0: Attached scsi generic sg3 type 0
Jan 12 21:07:47 unRAID kernel: scsi 6:0:0:0: Direct-Access     ATA      WDC WD5000LPVX-2 1A01 PQ: 0 ANSI: 5
Jan 12 21:07:47 unRAID kernel: sd 6:0:0:0: [sde] 976773168 512-byte logical blocks: (500 GB/466 GiB)
Jan 12 21:07:47 unRAID kernel: sd 6:0:0:0: [sde] 4096-byte physical blocks
Jan 12 21:07:47 unRAID kernel: sd 6:0:0:0: Attached scsi generic sg4 type 0
Jan 12 21:07:47 unRAID kernel: sd 6:0:0:0: [sde] Write Protect is off
Jan 12 21:07:47 unRAID kernel: sd 6:0:0:0: [sde] Mode Sense: 00 3a 00 00
Jan 12 21:07:47 unRAID kernel: sd 6:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 12 21:07:47 unRAID kernel: ata6.00: disabling queued TRIM support
Jan 12 21:07:47 unRAID kernel: ata6.00: ATA-9: Crucial_CT512MX100SSD1,         15020E592B9D, MU01, max UDMA/133
Jan 12 21:07:47 unRAID kernel: ata6.00: 1000215216 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Jan 12 21:07:47 unRAID kernel: sd 5:0:0:0: [sdd] 4096-byte physical blocks
Jan 12 21:07:47 unRAID kernel: sd 5:0:0:0: [sdd] Write Protect is off
Jan 12 21:07:47 unRAID kernel: sd 5:0:0:0: [sdd] Mode Sense: 00 3a 00 00
Jan 12 21:07:47 unRAID kernel: sd 5:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 12 21:07:47 unRAID kernel: ACPI Error: [DSSP] Namespace lookup failure, AE_NOT_FOUND (20150930/psargs-359)
Jan 12 21:07:47 unRAID kernel: ACPI Error: Method parse/execution failed [\_SB.PCI0.SAT0.SPT5._GTF] (Node ffff88040f0629b0), AE_NOT_FOUND (20150930/psparse-542)
Jan 12 21:07:47 unRAID kernel: ata6.00: supports DRM functions and may not be fully accessible
Jan 12 21:07:47 unRAID kernel: ata6.00: disabling queued TRIM support
Jan 12 21:07:47 unRAID kernel: ata6.00: configured for UDMA/133
Jan 12 21:07:47 unRAID kernel: scsi 7:0:0:0: Direct-Access     ATA      Crucial_CT512MX1 MU01 PQ: 0 ANSI: 5
Jan 12 21:07:47 unRAID kernel: sd 7:0:0:0: Attached scsi generic sg5 type 0
Jan 12 21:07:47 unRAID kernel: ata6.00: Enabling discard_zeroes_data
Jan 12 21:07:47 unRAID kernel: sd 7:0:0:0: [sdf] 1000215216 512-byte logical blocks: (512 GB/477 GiB)
Jan 12 21:07:47 unRAID kernel: sd 7:0:0:0: [sdf] 4096-byte physical blocks
Jan 12 21:07:47 unRAID kernel: sd 7:0:0:0: [sdf] Write Protect is off
Jan 12 21:07:47 unRAID kernel: sd 7:0:0:0: [sdf] Mode Sense: 00 3a 00 00
Jan 12 21:07:47 unRAID kernel: sd 7:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 12 21:07:47 unRAID kernel: ata6.00: Enabling discard_zeroes_data
Jan 12 21:07:47 unRAID kernel: sdf: sdf1
Jan 12 21:07:47 unRAID kernel: ata6.00: Enabling discard_zeroes_data
Jan 12 21:07:47 unRAID kernel: sd 7:0:0:0: [sdf] Attached SCSI disk
Jan 12 21:07:47 unRAID kernel: sdb: sdb1
Jan 12 21:07:47 unRAID kernel: sd 2:0:0:0: [sdb] Attached SCSI disk
Jan 12 21:07:47 unRAID kernel: sdc: sdc1
Jan 12 21:07:47 unRAID kernel: sd 4:0:0:0: [sdc] Attached SCSI disk
Jan 12 21:07:47 unRAID kernel: sdd: sdd1
Jan 12 21:07:47 unRAID kernel: sd 5:0:0:0: [sdd] Attached SCSI disk
Jan 12 21:07:47 unRAID kernel: sde: sde1
Jan 12 21:07:47 unRAID kernel: sd 6:0:0:0: [sde] Attached SCSI disk
Jan 12 21:07:47 unRAID kernel: random: nonblocking pool is initialized


Yeah, I did that after reading other posts regarding the error, but I'm running the most up-to-date version. It's really odd that they never occurred before though!?  :-\

 

Harmless is good :)

I'll keep an eye on them and also see what happens after the next parity check then.

 

Thank you.

