JorgeB

Moderators

  • Posts: 61501
  • Days Won: 647

JorgeB last won the day on April 24

Community Answers (3.3k)

  1. It doesn't look like a device problem to me; it just dropped offline. You can try removing it to see if it's better with just the other one.
  2. If you have a key problem you need to contact support; the forum can't help with license issues.
  3. The diags posted only cover a rebuild, and there weren't any disk errors, but if it's, for example, a RAM problem, nothing would be logged anyway.
  4. No errors logged, and SMR disks should perform normally on reads, so it could be an issue with one of them. You can pause the sync and run the DiskSpeed docker to see if they perform normally there.
  5. Btrfs usually works fine with good hardware, but try zfs; if you also have issues with that, there could be an underlying hardware problem.
  6. See if there's a support thread for that container (Dashy).
  7. You could change them, but it's probably best to just wait.
  8. Apr 25 04:47:34 PLEX-PROD kernel: BTRFS: error (device nvme0n1p1: state A) in __btrfs_free_extent:3072: errno=-2 No such entry
     Apr 25 04:47:34 PLEX-PROD kernel: BTRFS info (device nvme0n1p1: state EA): forced readonly
     The cache pool went read-only; with btrfs I recommend backing up and reformatting the pool.
  9. Run it again without -n, and if it asks for -L, use it.
  10. If it's a RAM problem, all data can be affected, but since you don't have any other btrfs or zfs filesystems, only the docker image is detecting data corruption.
  11. Start by running memtest; it could be a RAM problem.
  12. The cache pool is not mounting; see if this helps, type:
      btrfs rescue zero-log /dev/nvme1n1p1
      then re-start the array.
  13. The Jellyfin container was killed twice because it was using a lot of RAM and making the server run out of memory (OOM); check its config or limit its RAM usage.
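
The backup-and-reformat advice in answer 8 can be sketched as below. All paths and device names are placeholders, and the rescue=all mount option requires a reasonably recent kernel; treat this as an outline, not a definitive procedure.

```shell
# Stop services using the pool, then mount it read-only in rescue mode
# (placeholders: /dev/nvme0n1p1 is the pool device, /mnt/recovery a
# temporary mount point, /mnt/disk1/backup the destination).
mkdir -p /mnt/recovery
mount -o ro,rescue=all /dev/nvme0n1p1 /mnt/recovery

# Copy everything off the pool.
rsync -a /mnt/recovery/ /mnt/disk1/backup/

# Unmount, then reformat the pool from the GUI and restore the data.
umount /mnt/recovery
```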
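
Answer 9 doesn't name the tool, but the -n/-L flags match the usual xfs_repair workflow on an unmountable XFS disk. A hedged sketch, assuming /dev/md1 as a placeholder for the affected array device, run with the array started in maintenance mode:

```shell
# Dry run first: -n reports problems without modifying anything.
xfs_repair -n /dev/md1

# Real repair: run it again without -n.
xfs_repair /dev/md1

# Only if it refuses to run because of a dirty log and suggests -L,
# zero the log as a last resort; the most recent metadata updates
# may be lost.
xfs_repair -L /dev/md1
```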
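
One way to cap a container's RAM, as suggested in answer 13, is Docker's --memory flag; on Unraid this can go in the container template's Extra Parameters field. The 4g limit and the jellyfin/jellyfin image name are illustrative choices, not values from the original thread.

```shell
# Limit the container to 4 GiB of RAM so a leak triggers the OOM killer
# inside the container instead of taking down the whole host.
# On Unraid, add the flag to the container's "Extra Parameters" field:
#   --memory=4g

# Equivalent plain docker run example:
docker run -d --name jellyfin --memory=4g jellyfin/jellyfin
```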
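
A quick manual alternative to the DiskSpeed docker mentioned in answer 4 can be sketched with plain dd. This is only a sketch: /dev/sdX is a placeholder for the disk to test, and the 256 MiB sample size is an arbitrary choice.

```shell
# Hedged sketch: measure sequential read throughput of a disk (or any file).
# /dev/sdX below is a placeholder - substitute the device you want to test.
read_speed() {
    # Read up to 256 MiB from the given path and discard it; GNU dd prints
    # the transfer throughput on its last status line.
    dd if="$1" of=/dev/null bs=1M count=256 2>&1 | tail -n 1
}

# Example (replace with a real device); a healthy disk should read well
# over 100 MB/s sequentially:
# read_speed /dev/sdX
```

Since SMR disks only slow down on sustained writes, a poor result on a pure read test like this would point at a genuinely struggling disk rather than SMR behavior.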