tgggd86

Members · 93 posts

  1. That is not what I intend to do, but I do have a backup of my most critical data. My understanding of dual parity is that you can still recover data after two drives fail. Why is this situation different?
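To illustrate the recovery logic the post above is relying on: a minimal sketch (not Unraid's actual code) of how single XOR parity rebuilds one missing disk. Dual parity adds a second, independently computed syndrome (the Q parity, calculated over a Galois field), which is what lets the array survive two simultaneous failures.

```python
def xor_parity(blocks):
    """Byte-wise XOR parity over equal-sized disk blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def rebuild_missing(surviving, parity):
    """Rebuild the one missing block: XOR of parity with all survivors."""
    return xor_parity(surviving + [parity])

disks = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
p = xor_parity(disks)

# "Lose" disk 1 and rebuild it from the other disks plus parity.
rebuilt = rebuild_missing([disks[0], disks[2]], p)
assert rebuilt == disks[1]
```

The catch in this thread's situation: parity can only rebuild drives faithfully if the parity data itself is consistent, which is why the responders were cautious about rebuilding before the root cause was found.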
  2. I do require encryption, but unfortunately did not back up my LUKS headers. I searched for that and will do so once my array has been rebuilt. If I can't mount my two drives due to encryption issues, I'm assuming I'll need to format them and then rebuild them using parity. I don't see how rebuilding parity is the best course of action. What am I missing?
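A hedged illustration of why the header backup matters: all of the key material needed to unlock a LUKS volume lives in its on-disk header, so a damaged header makes the data unrecoverable even with the correct passphrase. The sketch below only checks the LUKS1 magic bytes on fake device images; the real backup is done with `cryptsetup luksHeaderBackup` (and restored with `luksHeaderRestore`).

```python
import os
import tempfile

LUKS1_MAGIC = b"LUKS\xba\xbe"  # first 6 bytes of every LUKS1 header

def looks_like_luks1(path):
    """Return True if the file begins with the LUKS1 header magic."""
    with open(path, "rb") as f:
        return f.read(6) == LUKS1_MAGIC

workdir = tempfile.mkdtemp()
good = os.path.join(workdir, "good.img")
bad = os.path.join(workdir, "bad.img")

with open(good, "wb") as f:
    f.write(LUKS1_MAGIC + b"\x00" * 58)  # fake but recognizable header
with open(bad, "wb") as f:
    f.write(b"\x00" * 64)  # header wiped: the passphrase alone cannot help

assert looks_like_luks1(good)
assert not looks_like_luks1(bad)
```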
  3. Diagnostics attached tower-diagnostics-20211114-2029.zip
  4. I left on vacation for 10 days, and when I returned I noticed 2 of my disks in my array were disabled. I rebooted, and when starting the array, 2 of my disks could not be mounted with the "unmountable wrong encryption key" error (these were the same disks that were disabled). I googled the error and found the below thread, which hints at a memory issue: https://forums.unraid.net/topic/114191-solved-multiple-unmountable-disks-with-wrong-encryption-key-but-its-correct/ I've been running Memtest86+ on my server for 36 hours (7 passes) with 0 errors, so I assume memory is not the issue here. I have dual parity, so I'm assuming that I can just rebuild those 2 drives without any issues. But I want to make sure I prevent this issue from happening again and that I won't end up with data corruption if there's another underlying issue that hasn't revealed itself yet. So before I begin rebuilding my array, I'm looking for any advice/recommendations before I commit. Below are a couple of things that I've noticed or done that may be a contributing factor:
     - Upgraded my cache drive approx. 3 weeks ago. Did not notice any issues beforehand, and followed the "replace a cache drive" guide on the Unraid wiki.
     - Plex seemed to have "lost" many of my files located in various folders. Example: I had a playlist with 20 movies in it and it only remembered 2.
     - I forced Plex to rescan my folders and it began adding items that were already on the array as if they were "new".
     - The log file, when I first looked at it (before my initial restart), indicated a bunch of read errors on disks that were not disabled/emulated.
     - Those disk errors haven't popped up since restarting.
     Thanks in advance! Log files keep failing to upload; I'll try in a follow-on post.
  5. I don't know how, but I'm willing to learn. I was able to open radarr.db and find the blacklist table. I just don't know which portion I need to edit.
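For anyone in the same spot, a hypothetical sketch of editing a blacklist table with Python's built-in sqlite3 module (the same idea works from the sqlite3 CLI). The column names here are illustrative, not Radarr's exact schema, and you should only ever work on a copy of radarr.db.

```python
import sqlite3

# In-memory stand-in; against a real file you would use
# sqlite3.connect("radarr-copy.db") on a COPY of the database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Blacklist (Id INTEGER PRIMARY KEY, SourceTitle TEXT)")
conn.execute("INSERT INTO Blacklist (SourceTitle) VALUES (?)",
             ("Some.Release.1080p",))

# Inspect first, so you know which Id you are about to remove.
rows = conn.execute("SELECT Id, SourceTitle FROM Blacklist").fetchall()
print(rows)

# Remove one specific entry by its primary key.
conn.execute("DELETE FROM Blacklist WHERE Id = ?", (1,))
conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM Blacklist").fetchone()[0]
assert remaining == 0
```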
  6. I have been having a similar issue to the above users, although I have an InvalidCastException error which I'm not sure is the cause. Can't seem to figure out a solution.
     [Info] Bootstrap: Starting Radarr - /app/radarr/bin/Radarr.dll - Version 3.0.1.4259
     [Info] AppFolderInfo: Data directory is being overridden to [/config]
     [Info] Router: Application mode: Interactive
     [Info] MigrationController: *** Migrating data source=/config/radarr.db;cache size=-20000;datetimekind=Utc;journal mode=Wal;pooling=True;version=3 ***
     [Info] MigrationLoggerProvider: *** 154: add_language_to_files_history_blacklist migrating ***
     [Info] add_language_to_files_history_blacklist: Starting migration to 154
     [Error] MigrationLoggerProvider: System.InvalidCastException: Specified cast is not valid.
        at System.Data.SQLite.SQLiteDataReader.VerifyType(Int32 i, DbType typ)
        at System.Data.SQLite.SQLiteDataReader.GetInt32(Int32 i)
        at NzbDrone.Core.Datastore.Migration.add_language_to_files_history_blacklist.UpdateLanguage(IDbConnection conn, IDbTransaction tran) in D:\a\1\s\src\NzbDrone.Core\Datastore\Migration\154_add_language_to_file_history_blacklist.cs:line 143
        at FluentMigrator.Runner.Processors.SQLite.SQLiteProcessor.Process(PerformDBOperationExpression expression)
        at FluentMigrator.Expressions.PerformDBOperationExpression.ExecuteWith(IMigrationProcessor processor)
        at FluentMigrator.Runner.MigrationRunner.<>c__DisplayClass70_0.<ExecuteExpressions>b__1()
        at FluentMigrator.Runner.StopWatch.Time(Action action)
        at FluentMigrator.Runner.MigrationRunner.ExecuteExpressions(ICollection`1 expressions)
     [v3.0.1.4259] System.InvalidCastException: Specified cast is not valid.
        (same stack trace as above)
     [Fatal] ConsoleApp: EPIC FAIL!
     [v3.0.1.4259] NzbDrone.Common.Exceptions.RadarrStartupException: Radarr failed to start: Error creating main database ---> System.InvalidCastException: Specified cast is not valid.
        (same stack trace as above, then continuing:)
        at FluentMigrator.Runner.MigrationRunner.ExecuteMigration(IMigration migration, Action`2 getExpressions)
        at FluentMigrator.Runner.MigrationRunner.ApplyMigrationUp(IMigrationInfo migrationInfo, Boolean useTransaction)
        at FluentMigrator.Runner.MigrationRunner.MigrateUp(Int64 targetVersion, Boolean useAutomaticTransactionManagement)
        at FluentMigrator.Runner.MigrationRunner.MigrateUp(Boolean useAutomaticTransactionManagement)
        at FluentMigrator.Runner.MigrationRunner.MigrateUp()
        at NzbDrone.Core.Datastore.Migration.Framework.MigrationController.Migrate(String connectionString, MigrationContext migrationContext) in D:\a\1\s\src\NzbDrone.Core\Datastore\Migration\Framework\MigrationController.cs:line 67
        at NzbDrone.Core.Datastore.DbFactory.CreateMain(String connectionString, MigrationContext migrationContext) in D:\a\1\s\src\NzbDrone.Core\Datastore\DbFactory.cs:line 115
        --- End of inner exception stack trace ---
        at NzbDrone.Core.Datastore.DbFactory.CreateMain(String connectionString, MigrationContext migrationContext) in D:\a\1\s\src\NzbDrone.Core\Datastore\DbFactory.cs:line 130
        at NzbDrone.Core.Datastore.DbFactory.Create(MigrationContext migrationContext) in D:\a\1\s\src\NzbDrone.Core\Datastore\DbFactory.cs:line 79
        at NzbDrone.Core.Datastore.DbFactory.Create(MigrationType migrationType) in D:\a\1\s\src\NzbDrone.Core\Datastore\DbFactory.cs:line 67
        at NzbDrone.Core.Datastore.DbFactory.RegisterDatabase(IContainer container) in D:\a\1\s\src\NzbDrone.Core\Datastore\DbFactory.cs:line 45
        at Radarr.Host.NzbDroneConsoleFactory.Start() in D:\a\1\s\src\NzbDrone.Host\ApplicationServer.cs:line 95
        at Radarr.Host.Router.Route(ApplicationModes applicationModes) in D:\a\1\s\src\NzbDrone.Host\Router.cs:line 56
        at Radarr.Host.Bootstrap.Start(ApplicationModes applicationModes, StartupContext startupContext) in D:\a\1\s\src\NzbDrone.Host\Bootstrap.cs:line 77
        at Radarr.Host.Bootstrap.Start(StartupContext startupContext, IUserAlert userAlert, Action`1 startCallback) in D:\a\1\s\src\NzbDrone.Host\Bootstrap.cs:line 40
        at NzbDrone.Console.ConsoleApp.Main(String[] args) in D:\a\1\s\src\NzbDrone.Console\ConsoleApp.cs:line 41
     Press enter to exit... Non-recoverable failure, waiting for user intervention...
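The trace shows `SQLiteDataReader.GetInt32` throwing inside migration 154, which suggests some row holds a value that cannot be read as a plain integer (NULL, text, etc.). A hedged sketch of how you might scan a copy of the database for such rows; the table and column names are illustrative stand-ins, not Radarr's exact schema.

```python
import sqlite3

# Mock table standing in for a history/blacklist table in a COPY of
# radarr.db; SQLite's dynamic typing happily stores mixed types.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE History (Id INTEGER PRIMARY KEY, Language)")
conn.executemany("INSERT INTO History (Language) VALUES (?)",
                 [(1,), (None,), ("english",), (3,)])

# Collect every row whose value an integer cast (like GetInt32) would
# choke on.
bad = [
    (row_id, value)
    for row_id, value in conn.execute("SELECT Id, Language FROM History")
    if not isinstance(value, int)
]
print(bad)
```

Rows flagged this way are the ones to inspect or fix by hand before re-running the migration.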
  7. So I have Sonarr and Radarr set up with SABnzbd. When SABnzbd completes a download and puts it in the completed folder, it prevents Sonarr and Radarr from modifying or deleting the files, as it has created permissions for those files under user "nobody". In Windows I receive this error: "You require permission from Unix User\nobody to make changes to this file". I can run the "Docker Safe New Perms" tool from unRAID, but the issue returns whenever SABnzbd creates a new file. Any thoughts on what I can do to fix this? The share is set to public, and no other files on my server have this issue.
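The mechanism behind this, sketched in Python as an illustration: the creating process's umask decides whether group/other users get write access to new files, which is why "Docker Safe New Perms" only helps until SABnzbd creates the next file. (On Unraid the usual fix is setting the download container's UMASK / permissions option; that detail is an assumption about your setup, so check your container's docs.)

```python
import os
import stat
import tempfile

def create_with_umask(path, mask):
    """Create a file while a given umask is active; return its mode bits."""
    old = os.umask(mask)
    try:
        fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
        os.close(fd)
    finally:
        os.umask(old)  # always restore the previous umask
    return stat.S_IMODE(os.stat(path).st_mode)

d = tempfile.mkdtemp()
# 022 umask -> 0644: only the owning user (e.g. "nobody") may write.
restrictive = create_with_umask(os.path.join(d, "a"), 0o022)
# 000 umask -> 0666: other users (e.g. sonarr/radarr) may write too.
permissive = create_with_umask(os.path.join(d, "b"), 0o000)

assert restrictive == 0o644
assert permissive == 0o666
```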
  8. Hopefully you haven't run into the same problem I did, as explained here: Even after buying a new SATA controller card, my problems still occurred. I had to upgrade my entire mobo/CPU/RAM to resolve the issue. Of course, there were no issues using that hardware outside of unRAID.
  9. So I replaced the Supermicro card with the Marvell controller with a Dell H310 and flashed it to the P20 firmware. When I started my array and parity sync began, I encountered the exact same errors. Of note: while I was waiting for the Dell H310, I was running preclear on 2x new 8TB WD Red drives. The one attached to the mobo SATA ports ran at around 100MB/s, but the one attached through the Supermicro/Marvell controller crawled at around 2MB/s. Also, I ran a SMART extended test on all drives with the new controller, and they all passed with no issues. My next thought is to replace the SATA breakout cables and see if that fixes the problem. Anyone have any thoughts?
  10. Thanks for the quick fix and reply, Fireball3! Running 1e.bat came back with "bad command or filename" since the .bat looks for sas2flsh.exe in the same folder as the .bat, so I just copied sas2flsh.exe into that same folder. It should only take modding the path to point to any of the subfolders to make it dummy-proof. Otherwise everything went as planned. Really, I don't think there is a need for the SAS address, maybe unless you have multiple cards in your system. I used the SAS address already in the .bat, and after it booted up it recognized all my drives just fine. Hopefully my issues were due to the Supermicro controller and I'll be worry-free from here on out.
  11. Ok, got my Dell Perc H310 card. I started following the attached instructions and, on both my Win10 computers, could not get it to format two different flash drives. So I went to Rufus, which apparently worked the same (hint: it didn't), and followed all the steps up to Step 4, where I could not find my ADAPTERS.txt file. Back during Step 1, I had assumed a problem happened when I got the error that not enough memory was available and no ADAPTERS.txt file was in my root folder. At the time I assumed it wasn't a big deal... obviously it is. Apparently Rufus uses a version of FreeDOS which does not have HIMEM.SYS, which makes using FreeDOS on modern systems very problematic. So I forced Rufus to use FreeDOS 1.2 and didn't run into that error. The only problem is, since I wiped my old firmware out, my ADAPTERS.txt file only says "Exit Code: 0x01". I'm assuming that's because the card isn't being seen by the system since I wiped it. So should I go back and reflash the adapter with the file created in Step 2, or should I keep going and then jump back to Step 1 to get my hardware ID so I can complete Step 6 (if that's even possible)? I also do not have a sticker on the backside of my card showing the SAS address. Thanks in advance!
  12. Thanks, Fireball3! Just ordered a Dell H310 for just under $50 on the bay. Hopefully that will help get my server up and running again!
  13. Flashed new firmware on my card (Supermicro AOC-SASLP-MV8) and even updated my mobo BIOS to the latest version, and still have the same problems. I disabled virtualization and also modded the config file as mentioned in the recommended link, and still the same issues. This leads me to believe my card is bad and/or the Marvell controller just no longer works with unRAID. Hopefully a new controller card fixes the problem.
  14. Seems like there are a lot of people having issues with Marvell controllers and unRAID. I just ran into an issue I can't seem to resolve, and I think it's time to find a new SATA controller card. I currently have a Supermicro AOC-SASLP-MV8 which is throwing errors no matter what I do (change drives, change cables, change power source, update firmware), and I'm hoping to find something similar to replace it. Any recommendations? I'm hoping for something that will be a little future-proof, but I only need 8 ports. Also, if you're running unRAID v6.1.9 or higher and using this same controller without issues, please chime in. Thread about my server issues: https://lime-technology.com/forum/index.php?topic=54768.0
  15. Left my server on for a few days without starting the array, ran preclear on a new drive, and there were no issues. Started the array and got the same issue. Full syslogs capturing the hang are attached. Specifically, the ICRC ABRT error seems to be the main culprit. The only problem is that it is recorded as a different ATA device each time (ata7 x 2 and ata3); both of these ATA devices are on my Marvell SATA adapter (88SE63xx/64 BIOS: 3.1.0.15N). I'm assuming my Marvell SATA controller is the main problem here, but it has no issues running preclear or running extended SMART tests on my drives. The only thing I can point to is that 6.2.x broke my SATA controller, so reverting back to 6.1.8 might be my best option. Thoughts? syslog1.txt syslog2.txt syslog3.txt
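For triaging logs like the ones described above, a small sketch that counts ICRC ABRT errors per ATA port, to see whether the failures cluster on ports served by one controller. The sample lines are illustrative, not taken from the attached syslogs.

```python
import re
from collections import Counter

# Illustrative syslog excerpt (made up for this example).
sample = """\
kernel: ata7.00: error: { ICRC ABRT }
kernel: ata3.00: error: { ICRC ABRT }
kernel: ata7.00: error: { ICRC ABRT }
kernel: ata1.00: configured for UDMA/133
"""

# Count ICRC ABRT occurrences keyed by ATA port.
counts = Counter(
    m.group(1)
    for line in sample.splitlines()
    if "ICRC ABRT" in line
    for m in [re.search(r"(ata\d+)", line)]
    if m
)
print(counts)
```

If all of the flagged ports map to the same HBA (as ata7 and ata3 did here), the shared controller, its cables, or its PCIe slot becomes the prime suspect rather than any individual drive.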