Wally

Members
  • Posts

    16
  • Joined

  • Last visited

Converted

  • Gender
    Undisclosed


  1. Same problem here. I pre-cleared and formatted a drive on my test machine, but when I moved it to my main machine and assigned it to a slot, the array refuses to start.
  2. I just checked a WD Blue drive that I removed from an external drive, and it's the smaller size. I ran the program Active Kill Disk to make sure there were no partitions, with no change. The size shown on the Identity tab comes from the drive itself and is fixed by WD. This is bad, as these drives cannot be used to replace another 5TB drive of the normal size, and they should not be used as a parity drive unless all drives are the same. (A quick way to cross-check the size the drive itself reports is sketched after the post list.)
  3. Same problem here. The commands no longer run after wake-up, so the drive status gets out of sync and as a result some drives never spin down. I am on 6.1.2 and it hasn't worked since a few versions ago, but before that S3 Sleep worked perfectly, other than the drive temps sometimes not showing, which is a different problem. "Try it with a reference to /usr/local/sbin/mdcmd" Thanks Bonienl, that seems to have fixed it (a sketch of post-wake spin-up commands using the full mdcmd path follows the post list).
  4. Same problem here. The commands no longer run after wake-up, so the drive status gets out of sync and as a result some drives never spin down. I am on 6.1.2 and it hasn't worked since a few versions ago, but before that S3 Sleep worked perfectly, other than the drive temps sometimes not showing, which is a different problem.
  5. Justin, have you tried the latest unRAID version, 6.0.1? It seems to have fixed the problem for me. I believe the problem is with your Syba controller, which uses the flaky Marvell chipset. I had one in my system as well, with only my cache drive on it, and once I removed it the parity check errors disappeared. I am now running again with the Marvell controller and 6.0.1 and have run at least 5 parity checks with no errors. The problem with the Marvell controller showed up mainly when used with VT-d enabled, but I think it also caused glitches in some systems even without VT-d enabled.
  6. I never had any errors in my logs until I replaced my CPU with one that supported VT-d, and then the DMA errors mentioned in the other posts showed up. I believe the Marvell controller still had problems, as once I removed it the 5 parity check errors disappeared. Now, with unRAID version 6.0.1, the problem seems patched: even with the Marvell controller installed, the parity errors are gone.
  7. Here's the thread that explains the problem: http://lime-technology.com/forum/index.php?topic=40683.0. If you google "VT-d Marvell" there's a lot of talk about it and the patches required. I believe that even with VT-d disabled or not available at all, these Marvell controllers can cause problems with certain systems, as seen here (a quick way to check whether VT-d/IOMMU is actually active is sketched after the post list).
  8. Opentoe, I had the same exact problem at the same exact sectors as mentioned in the other thread here: http://lime-technology.com/forum/index.php?topic=38359.0. The problem is caused by the flaky Marvell controllers or their Linux drivers, which cause problems mainly when using VT-d, but in my case also when not. Try the latest unRAID 6.0.1, which seems to have fixed the problem in my system. My 5 errors were consistent and changed location when I upgraded to a larger parity drive.
     Apr 4 18:28:39 unRAID kernel: md: correcting parity, sector=3519069768
     Apr 4 18:28:39 unRAID kernel: md: correcting parity, sector=3519069776
     Apr 4 18:28:39 unRAID kernel: md: correcting parity, sector=3519069784
     Apr 4 18:28:39 unRAID kernel: md: correcting parity, sector=3519069792
     Apr 4 18:28:39 unRAID kernel: md: correcting parity, sector=3519069800
     Apr 7 07:57:11 unRAID kernel: md: correcting parity, sector=1177606472
     Apr 7 07:57:11 unRAID kernel: md: correcting parity, sector=1177606480
     Apr 7 07:57:11 unRAID kernel: md: correcting parity, sector=1177606488
     Apr 7 07:57:11 unRAID kernel: md: correcting parity, sector=1177606496
     Apr 7 07:57:11 unRAID kernel: md: correcting parity, sector=1177606504
     Wally
  9. johnny121b, I have this exact same problem with RC1 and RC2. For some reason the cache drive disappears and cannot be accessed by Plex or KVM, and when I try to restart to troubleshoot, I get the "Retry unmounting user share(s)" errors since the shares located on the cache are gone. I tried shutdown -r now but nothing happens, so I've had to hard reset, which then forces a parity check when restarted. I've gone back to the previous unRAID version for now.
  10. I don't think this is a memory problem. There's almost no way that Justinchase and I could have the exact same 5 errors in the same locations. My 5 errors changed location and have been consistent since I changed to a 5TB parity drive. I'm going to install a 4TB and see if my errors change back.
  11. Frank1940, Every time I ran a correcting parity check over the last year, about 10 times in total, I got the same invalid sectors. I even changed parity drives, and the location of the 5 changed but was still constant with every check. Earlier this year I upgraded to unRAID 6 and XFS and had to swap each drive out after copying its data to a newly XFS-formatted drive, running a parity calc and check to see if that drive was the bad one, but after doing all 10 drives, the problem still persists. I also replaced the RAM just in case. My system consists of an Intel DZ75ML-45K motherboard with an i5-2320 CPU, 8GB of RAM, a Dell PERC 310 flashed to LSI firmware in IT mode, 10 data drives in a 3TB and 4TB Western Digital and Hitachi mix, a 5TB Toshiba parity drive, and a SanDisk 480GB SSD cache drive. I'm now running the latest unRAID 6 beta. I just ran a parity check with the same 5 errors and am running it again to see what happens. It takes about an hour and a half for the errors to show up.
  12. Wow, this is bizarre. I have the exact same problem, but what's even more amazing is that my 5 errors were at the same exact 5 sectors! I thought I would have found the bad drive while upgrading from ReiserFS to XFS and upgrading my parity drive from a 4TB to a 5TB, but now I have 5 incorrect sectors in a different location, consistently. During my FS upgrade, I changed two drives at a time and did parity checks with the same errors each time. This is not a bad drive problem at all but some kind of error caused by the hardware or software.
     Before:
     Apr 4 18:28:39 unRAID kernel: md: correcting parity, sector=3519069768
     Apr 4 18:28:39 unRAID kernel: md: correcting parity, sector=3519069776
     Apr 4 18:28:39 unRAID kernel: md: correcting parity, sector=3519069784
     Apr 4 18:28:39 unRAID kernel: md: correcting parity, sector=3519069792
     Apr 4 18:28:39 unRAID kernel: md: correcting parity, sector=3519069800
     Now:
     Apr 7 07:57:11 unRAID kernel: md: correcting parity, sector=1177606472
     Apr 7 07:57:11 unRAID kernel: md: correcting parity, sector=1177606480
     Apr 7 07:57:11 unRAID kernel: md: correcting parity, sector=1177606488
     Apr 7 07:57:11 unRAID kernel: md: correcting parity, sector=1177606496
     Apr 7 07:57:11 unRAID kernel: md: correcting parity, sector=1177606504
     I have done at least 10 parity checks with the first set of bad sectors and about 5 with the second. I hope we can get to the bottom of this (a one-liner for pulling these lines out of the syslog is sketched after the post list).
  13. reggierat, My problem was exactly the same as you described, including the disk activity in the logs and the messed-up temp display. Before S3 Sleep activates, the drives are spun down, but when the server wakes, all drives are spun up as power is applied to them. The problem is that unRAID still thinks they are spun down, so it never starts its timer to spin them down, while S3 Sleep checks the drives directly and sees that some are spinning, hence the constant log entries of drive activity and the reset of its timer. In my case, drive sdb showed activity until I accessed a file on it and unRAID spun it down normally; then sdc began showing up as active in the logs. Either spinning all the drives up or down manually will sync the drives' actual status with unRAID and allow S3 Sleep to work properly, as you have noticed. The command you posted works perfectly as long as you let S3 Sleep do the sleeping, as doing it manually is direct and bypasses everything.
  14. I just tried the spinup command posted by reggierat and it works if you've let the S3 Sleep script do the sleeping, but not if sleep is done manually through the webpage, as expected. My server is now sleeping normally. Thanks all.
  15. reggierat, How did you modify the S3 Sleep script? I'm having a similar problem: my drives are spun up after coming out of S3 sleep, but unRAID thinks they are spun down since that's how they were just before S3. Because of this, my server won't go to sleep again unless I do a manual spin-up or spin-down to sync the status of the drives with unRAID. I tried adding post-S3 commands to spin up, with no success.
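
Related to the undersized WD Blue in post 2 above: the capacity shown on the Identity tab can be cross-checked against what the drive itself reports from the command line. A minimal sketch using standard Linux tools; /dev/sdX is a placeholder for the actual device, and none of these commands write to the disk:

    # Whole-device size in bytes as the kernel sees it
    blockdev --getsize64 /dev/sdX

    # Capacity lines straight from the drive's own identify data
    hdparm -I /dev/sdX | grep -i 'device size'

    # Check whether a Host Protected Area (HPA) is hiding sectors
    hdparm -N /dev/sdX

If hdparm -N showed fewer visible sectors than the native maximum, an HPA rather than the drive model itself would explain the smaller size; if the identify data already reports the smaller capacity, the size really is fixed by the manufacturer as described in the post.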
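For the S3 Sleep fix in posts 3, 13, 14 and 15: a minimal sketch of post-wake commands that spin the array drives up through unRAID's own mdcmd (full path), so the GUI's spin state matches the drives' real state again. The slot range is an assumption for illustration; 0 stands for parity and 1-10 stand in for the data slots:

    #!/bin/bash
    # Run after the server resumes from S3.
    # Going through mdcmd (rather than poking the drives directly) keeps
    # unRAID's internal spin status in sync with what the drives are doing.
    for slot in $(seq 0 10); do
        /usr/local/sbin/mdcmd spinup "$slot"
    done

Spinning everything down instead (mdcmd spindown) would resync the status just as well; the point is only that the command goes through unRAID rather than talking to the drives behind its back.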
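For the VT-d discussion in posts 5 through 7: whether the IOMMU is actually active on a given boot can be confirmed from the kernel log. A quick sketch, nothing unRAID-specific assumed:

    # DMAR / IOMMU messages only appear when VT-d is enabled in the BIOS
    # and picked up by the kernel at boot
    dmesg | grep -i -e DMAR -e IOMMU

If nothing comes back, VT-d is effectively off for that boot, which helps narrow down whether the Marvell errors really depend on it.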
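To compare the "correcting parity" sectors between runs, as in posts 8 and 12, the relevant lines can be pulled straight out of the syslog. A sketch assuming the default unRAID log location (/var/log/syslog):

    # Every parity correction logged since boot, sector numbers included
    grep 'correcting parity' /var/log/syslog

    # Just the sector numbers, deduplicated, for an easy diff between two checks
    grep -o 'sector=[0-9]*' /var/log/syslog | sort -u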