dirtysanchez

Members
  • Posts

    949
Converted

  • Gender
    Male
  • Location
    SW Missouri

Recent Profile Visitors

1904 profile views

dirtysanchez's Achievements

Collaborator (7/14)

Reputation: 10

  1. Thanks @Squid. Appreciate your assistance.
  2. Long story short, the original Landfill was finally rebuilt with current hardware. All is running great, but I am occasionally getting Machine Check Events:

     Jun 26 18:56:00 Landfill root: Fix Common Problems Version 2021.05.03
     Jun 26 18:56:04 Landfill root: Fix Common Problems: Error: Machine Check Events detected on your server
     Jun 26 18:56:04 Landfill root: mcelog: Family 6 Model 165 CPU: only decoding architectural errors
     Jun 26 18:56:04 Landfill root: mcelog: warning: 8 bytes ignored in each record

     Anything to worry about? Thanks in advance for any assistance provided. landfill-diagnostics-20210626-1953.zip
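[Editor's note] A quick way to see whether events like those above recur is to filter the syslog for mcelog entries. A minimal Python sketch, using the lines quoted in this post as sample data (in practice you would read unRAID's /var/log/syslog instead):

```python
# Sketch: filter a syslog for Machine Check Event reports so you can see
# whether they recur. The sample lines come from the post above; swap in
# the contents of /var/log/syslog for real use.
sample_syslog = """\
Jun 26 18:56:00 Landfill root: Fix Common Problems Version 2021.05.03
Jun 26 18:56:04 Landfill root: Fix Common Problems: Error: Machine Check Events detected on your server
Jun 26 18:56:04 Landfill root: mcelog: Family 6 Model 165 CPU: only decoding architectural errors
Jun 26 18:56:04 Landfill root: mcelog: warning: 8 bytes ignored in each record
"""

def mce_lines(syslog_text):
    """Return the syslog lines that mention mcelog or Machine Check Events."""
    return [line for line in syslog_text.splitlines()
            if "mcelog" in line or "Machine Check Events" in line]

for line in mce_lines(sample_syslog):
    print(line)
```

If the same MCE lines show up repeatedly across days, that points at a recurring hardware event rather than a one-off.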
  3. Radarr stopped working. From what I can tell from the logs, Radarr updated to v3 but blew up during the migration. Any help would be appreciated; the relevant log section is below. EDIT: Rolled back to v0.2 and the container starts, but it has no config (no indexers, no movies, etc.). Tried restoring the db from backups and the container fails with "corrupted db". Long story short, there doesn't appear to be much support for all the v3 upgrade failures (not throwing shade; I understand you all do this for nothing but the love of the platform and the community), so I started from scratch and reconfigured a fresh v3 from the ground up. All good now.

     [Info] Bootstrap: Starting Radarr - /app/radarr/bin/Radarr.dll - Version 3.0.0.4204
     [Info] AppFolderInfo: Data directory is being overridden to [/config]
     [Info] Router: Application mode: Interactive
     [Info] MigrationController: *** Migrating data source=/config/radarr.db;cache size=-20000;datetimekind=Utc;journal mode=Wal;pooling=True;version=3 ***
     [Info] MigrationLoggerProvider: *** 165: remove_custom_formats_from_quality_model migrating ***
     [Info] remove_custom_formats_from_quality_model: Starting migration to 165
     [Error] MigrationLoggerProvider: Newtonsoft.Json.JsonReaderException: Unterminated string. Expected delimiter: ". Path 'protocol', line 16, position 1.
     at NzbDrone.Common.Serializer.Json.Deserialize[T](String json) in D:\a\1\s\src\NzbDrone.Common\Serializer\Json.cs:line 48
     at NzbDrone.Core.Datastore.Migration.remove_custom_formats_from_quality_model.AddIndexerFlagsToBlacklist(IDbConnection conn, IDbTransaction tran) in D:\a\1\s\src\NzbDrone.Core\Datastore\Migration\165_remove_custom_formats_from_quality_model.cs:line 102
     at FluentMigrator.Runner.Processors.SQLite.SQLiteProcessor.Process(PerformDBOperationExpression expression)
     at FluentMigrator.Expressions.PerformDBOperationExpression.ExecuteWith(IMigrationProcessor processor)
     at FluentMigrator.Runner.MigrationRunner.<>c__DisplayClass70_0.<ExecuteExpressions>b__1()
     at FluentMigrator.Runner.StopWatch.Time(Action action)
     at FluentMigrator.Runner.MigrationRunner.ExecuteExpressions(ICollection`1 expressions)

     [v3.0.0.4204] Newtonsoft.Json.JsonReaderException: Unterminated string. Expected delimiter: ". Path 'protocol', line 16, position 1.
     at NzbDrone.Common.Serializer.Json.Deserialize[T](String json) in D:\a\1\s\src\NzbDrone.Common\Serializer\Json.cs:line 48
     at NzbDrone.Core.Datastore.Migration.remove_custom_formats_from_quality_model.AddIndexerFlagsToBlacklist(IDbConnection conn, IDbTransaction tran) in D:\a\1\s\src\NzbDrone.Core\Datastore\Migration\165_remove_custom_formats_from_quality_model.cs:line 102
     at FluentMigrator.Runner.Processors.SQLite.SQLiteProcessor.Process(PerformDBOperationExpression expression)
     at FluentMigrator.Expressions.PerformDBOperationExpression.ExecuteWith(IMigrationProcessor processor)
     at FluentMigrator.Runner.MigrationRunner.<>c__DisplayClass70_0.<ExecuteExpressions>b__1()
     at FluentMigrator.Runner.StopWatch.Time(Action action)
     at FluentMigrator.Runner.MigrationRunner.ExecuteExpressions(ICollection`1 expressions)

     [Fatal] ConsoleApp: EPIC FAIL!

     [v3.0.0.4204] NzbDrone.Common.Exceptions.RadarrStartupException: Radarr failed to start: Error creating main database ---> Newtonsoft.Json.JsonReaderException: Unterminated string. Expected delimiter: ". Path 'protocol', line 16, position 1.
     at NzbDrone.Common.Serializer.Json.Deserialize[T](String json) in D:\a\1\s\src\NzbDrone.Common\Serializer\Json.cs:line 48
     at NzbDrone.Core.Datastore.Migration.remove_custom_formats_from_quality_model.AddIndexerFlagsToBlacklist(IDbConnection conn, IDbTransaction tran) in D:\a\1\s\src\NzbDrone.Core\Datastore\Migration\165_remove_custom_formats_from_quality_model.cs:line 102
     at FluentMigrator.Runner.Processors.SQLite.SQLiteProcessor.Process(PerformDBOperationExpression expression)
     at FluentMigrator.Expressions.PerformDBOperationExpression.ExecuteWith(IMigrationProcessor processor)
     at FluentMigrator.Runner.MigrationRunner.<>c__DisplayClass70_0.<ExecuteExpressions>b__1()
     at FluentMigrator.Runner.StopWatch.Time(Action action)
     at FluentMigrator.Runner.MigrationRunner.ExecuteExpressions(ICollection`1 expressions)
     at FluentMigrator.Runner.MigrationRunner.ExecuteMigration(IMigration migration, Action`2 getExpressions)
     at FluentMigrator.Runner.MigrationRunner.ApplyMigrationUp(IMigrationInfo migrationInfo, Boolean useTransaction)
     at FluentMigrator.Runner.MigrationRunner.MigrateUp(Int64 targetVersion, Boolean useAutomaticTransactionManagement)
     at FluentMigrator.Runner.MigrationRunner.MigrateUp(Boolean useAutomaticTransactionManagement)
     at FluentMigrator.Runner.MigrationRunner.MigrateUp()
     at NzbDrone.Core.Datastore.Migration.Framework.MigrationController.Migrate(String connectionString, MigrationContext migrationContext) in D:\a\1\s\src\NzbDrone.Core\Datastore\Migration\Framework\MigrationController.cs:line 67
     at NzbDrone.Core.Datastore.DbFactory.CreateMain(String connectionString, MigrationContext migrationContext) in D:\a\1\s\src\NzbDrone.Core\Datastore\DbFactory.cs:line 115
     --- End of inner exception stack trace ---
     at NzbDrone.Core.Datastore.DbFactory.CreateMain(String connectionString, MigrationContext migrationContext) in D:\a\1\s\src\NzbDrone.Core\Datastore\DbFactory.cs:line 130
     at NzbDrone.Core.Datastore.DbFactory.Create(MigrationContext migrationContext) in D:\a\1\s\src\NzbDrone.Core\Datastore\DbFactory.cs:line 79
     at NzbDrone.Core.Datastore.DbFactory.Create(MigrationType migrationType) in D:\a\1\s\src\NzbDrone.Core\Datastore\DbFactory.cs:line 67
     at NzbDrone.Core.Datastore.DbFactory.RegisterDatabase(IContainer container) in D:\a\1\s\src\NzbDrone.Core\Datastore\DbFactory.cs:line 45
     at Radarr.Host.NzbDroneConsoleFactory.Start() in D:\a\1\s\src\NzbDrone.Host\ApplicationServer.cs:line 95
     at Radarr.Host.Router.Route(ApplicationModes applicationModes) in D:\a\1\s\src\NzbDrone.Host\Router.cs:line 56
     at Radarr.Host.Bootstrap.Start(ApplicationModes applicationModes, StartupContext startupContext) in D:\a\1\s\src\NzbDrone.Host\Bootstrap.cs:line 77
     at Radarr.Host.Bootstrap.Start(StartupContext startupContext, IUserAlert userAlert, Action`1 startCallback) in D:\a\1\s\src\NzbDrone.Host\Bootstrap.cs:line 40
     at NzbDrone.Console.ConsoleApp.Main(String[] args) in D:\a\1\s\src\NzbDrone.Console\ConsoleApp.cs:line 41

     Press enter to exit...
     Non-recoverable failure, waiting for user intervention...
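[Editor's note] The exception above says migration 165 choked deserializing a JSON blob stored in radarr.db ("Unterminated string ... Path 'protocol'"), which suggests a truncated JSON value in one of the rows the migration reads. A hypothetical way to locate such rows before attempting the upgrade again — note the table and column names below are illustrative, not Radarr's actual schema:

```python
# Sketch: scan a SQLite table for rows whose JSON column no longer parses —
# the kind of truncated value that made migration 165 throw
# "Unterminated string. Expected delimiter". Table/column names here are
# made up for illustration; Radarr's real schema is not confirmed by this post.
import json
import sqlite3

def find_bad_json(conn, table, id_col, json_col):
    """Return the ids of rows whose json_col fails to parse as JSON."""
    bad = []
    for row_id, blob in conn.execute(f"SELECT {id_col}, {json_col} FROM {table}"):
        try:
            json.loads(blob)
        except (json.JSONDecodeError, TypeError):
            bad.append(row_id)
    return bad

# Demo against an in-memory database: one intact row, one truncated row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE History (Id INTEGER PRIMARY KEY, Data TEXT)")
conn.execute("INSERT INTO History VALUES (1, ?)", ('{"protocol": "usenet"}',))
conn.execute("INSERT INTO History VALUES (2, ?)", ('{"protocol": "usen',))
print(find_bad_json(conn, "History", "Id", "Data"))  # [2]
```

Against a real (copied!) radarr.db you would open the file instead of :memory: and point the scan at whichever table the failing migration reads.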
  4. Just love unRAID in general. Built my server in 2012 and it's been humming along ever since. The improvements in unRAID since then have been amazing, it has so much more functionality, and the unRAID community is second to none! Would like to see the improvements keep on coming.
  5. When I got the Xeon it was used and did not come with the stock cooler. I used a stock i5 cooler I had lying around for a bit before I upgraded to the Noctua cooler. Temps definitely went up compared to the i3, but nothing too drastic. I think it was idling around 42C or thereabouts. The server typically idles around that now as well, even with the Noctua cooler, but that's just what happens when you have a 77W CPU in such a small case. If you will be doing more transcodes, temps will definitely go up, so I'd put the biggest cooler on it you can fit. I'm using the small form factor Noctua because I'm pressed for space; I can't fit a much larger cooler in there. When I've seen 3 or 4 transcodes running at the same time, the CPU definitely spikes into the 60s.
  6. Congrats on the build, glad it has been working well for you. Yes, I just dropped in the E3-1245v2 and called it a day, no issues. Nothing special needed that I recall as long as the BIOS version supported the CPU, which you stated you already checked. Yes, the mobo supports VT-d; my build shows both HVM and IOMMU as enabled. The reason it shows as disabled for you is that the i3-3240 does not support VT-d. Once you drop in the Xeon it should show enabled, assuming you have it enabled in the BIOS. I did add a SATA expansion card, but totally forgot to add that to the OP. I used the IOCrest SY-PEX40039. It uses an ASM1061 chipset. https://www.amazon.com/gp/product/B005B0A6ZS/ref=oh_aui_search_asin_title?ie=UTF8&psc=1 Just FYI, before the IOCrest I ordered a StarTech 2 port PEXSAT32 card that uses the Marvell 88SE9230 chipset. It did not work. I don't recall if unRAID didn't detect the card, or if it detected the card but could not detect the attached drives.
  7. Still going strong. It's been running 24x7x365 for over 6 years now. There have been many changes to the server over the years, almost all of which have been detailed in this thread. The only changes since my last update were upgrading from 8GB RAM to 16GB RAM and finally getting around to converting all drives to xfs. My sig below reflects the current state of the server. I'll update the OP shortly with all the changes not already mentioned there.
  8. Everything working as it should. Marking this as solved.
  9. Nuked the docker.img and recreated containers from my templates. So far so good. I am able to stop containers successfully. I'll mark this solved in another 24 hours if all is still working. Thanks again for the help.
  10. Thanks for taking a look Squid. Yes, Plex crashed a few days ago (first time in years, if I recall correctly), and that is what led to the discovery of the issue, as I was unable to get it running again without a hard reset that caused an unclean shutdown. The following day I discovered it went beyond the Plex container when I was attempting to stop some containers to prepare for migrating all my drives to xfs. As for the containers refusing to stop (not the docker system per se): even if you turn Docker off, it doesn't kill the running containers. You can change Settings > Docker > Enable Docker to No, and the containers don't die, nor does Docker itself stop; docker ps still shows them running. Once the migrations are complete (tomorrow morning) I'll nuke the docker.img and reinstall all dockers per your suggestion and report back. Thank you for the assistance.
  11. Hello all, I recently started having an issue; I'm unsure exactly when it began, as I don't often have a reason to stop a docker container. The issue exists on 6.6.6 as well as 6.6.5. Unknown if it existed in prior versions or if it is even unRAID OS version related.

      I run the following containers, all the linuxserver.io versions with the exception of UniFi Video, which is pducharme: Plex, UniFi, UniFi Video, Radarr, Sonarr, Sabnzbd, Tautulli, Transmission.

      The problem is as follows. If you attempt to stop a running container, it does not stop. The GUI shows the spinning arrows forever, and once you finally refresh it still shows the container running. The container is in fact dead at that point and its WebGUI does not respond, but it does not finish exiting correctly. Issuing a docker stop containername or a docker kill containername from the cmd line does nothing and hangs until you ctrl-c out. I have not found a way to kill and/or restart the container successfully. Some of the container logs look as if the container exited successfully, while for others the last line in the logs is "[s6-finish] syncing disks". At this point the only way to get the container running again is to restart unRAID. The problem is that unRAID is then unable to stop the docker service, and therefore unable to stop the array. The only way to restart the server is a hard power cycle, and hence an unclean shutdown.

      In limited testing I have found that if only Plex and UniFi Video are running, you can stop the array and the containers will stop successfully. I have yet to start the containers one by one to find which ones are causing the issue. I am currently in the process of migrating all drives to xfs and so have not yet had time to test further. All that said, when the containers are automatically stopped/restarted weekly to update via the CA Auto Update settings, they do appear to stop and start correctly.

      I have searched the forums and have not found a similar issue with a resolution. Attached are diags from when the containers were hung. Any assistance would be appreciated. landfill-diagnostics-20190102-1851.zip
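[Editor's note] The usual stop-then-kill escalation this post is attempting can be sketched as below. In the situation described, even docker kill hung, so this is the normal-case pattern rather than a fix for a wedged daemon; the commands are passed in as parameters so the logic is demonstrable without Docker:

```python
# Sketch of a stop-then-kill escalation. For real use the commands would be
# primary=["docker", "stop", name] and fallback=["docker", "kill", name];
# they are injectable here so the control flow can be exercised anywhere.
# Note: in the hung-daemon case described in the post above, even `docker
# kill` blocked, so this pattern would not have helped there.
import subprocess

def run_with_escalation(primary, fallback, timeout=10):
    """Run `primary`; if it exceeds `timeout` seconds, kill it and run `fallback`."""
    try:
        subprocess.run(primary, timeout=timeout, check=True)
        return "primary"
    except subprocess.TimeoutExpired:
        subprocess.run(fallback, timeout=timeout, check=True)
        return "fallback"
```

For example, run_with_escalation(["docker", "stop", "plex"], ["docker", "kill", "plex"], timeout=30) tries a graceful stop for 30 seconds before escalating.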
  12. Hello all, Updated to 6.6.6 from 6.6.5. I am now unable to stop dockers. Most containers are linuxserver.io. Have also tried to stop from cmd line and it does not stop. Also rolled back to 6.6.5 and no change. It is possible this started before the upgrade and I just haven't had to manually stop a docker in a month or two. Did some searching and didn't find much. Not sure this is related to the unRAID version or some other issue. Diags attached. Many thanks for any assistance. landfill-diagnostics-20190102-1851.zip
  13. Also having this issue with 6.6.6, also with linuxserver containers. Reverted to 6.6.5, no change.
  14. If you've installed the LSIO UniFi container with the default tag, it is version 5.6.37 (the LTS version). If you need the 5.7.x branch, you'll need to change the repository to linuxserver/unifi:unstable in the Docker config. Once that is done it should be 5.7.23 and you should be able to import your existing backup. As for the forgetting of devices, there are two ways you can go about it. If you plan to give the docker the same IP as the previous Windows controller, then you just import the backup and the devices will show up. If you're migrating to another IP address, you'll either need to forget the devices and re-adopt them in the new controller, or SSH into the APs and re-run the set-inform command to point them to the new controller. There are other ways to do it as well, but they're a bit more convoluted (via the override setting in the controller).