fmp4m

Members
  • Content count: 146
  • Days Won: 1

fmp4m last won the day on April 22
fmp4m had the most liked content!

Community Reputation: 7 Neutral

About fmp4m
  • Rank: Advanced Member
  • Birthday: September 26

Converted
  • Gender: Male
  • Personal Text:
    unRAID: 6.5.1 Pro License | CPU: AMD Ryzen Threadripper 1950x | MB: Asus Prime X399-A | Memory: 32GB DDR4
    Case: Norco RPC-4116 w/ 6x 2.5" hotswap in 5.25" tray | GPU: TBD | Network: 1GbE x2 Balance-RR
    Donation: https://cash.me/$fmp4m

Recent Profile Visitors

248 profile views
  1. fmp4m

    unRAID OS version 6.5.3 available

    Been running for 4 days with no ill effects. Parity check speed is the same for me; it completed in 9 hrs for 32 TB.
  2. fmp4m

    TRX - 1950x Compatible Memory

    Can anyone confirm or deny whether Patriot Viper Elite Series DDR4 32GB (2x16GB) 2666MHz PC4-21300 will work on the X399 Threadripper combo?
  3. sudo mount -a (the space was missed by the original suggester)
  4. fmp4m

    Uptime 10 days - /var/log full

    I ran another test from FCP and the alert went away. If this happens again, I will rerun all of the commands and post. It would be nice to know what caused it. Heh.
  5. fmp4m

    [Support] Linuxserver.io - Ombi

    May want to post this in the community apps support thread.
  6. fmp4m

    Uptime 10 days - /var/log full

    Error still present at 98% now.

    root@NAS:~# df -h
    Filesystem      Size  Used  Avail Use% Mounted on
    rootfs           16G  2.0G    14G  13% /
    tmpfs            32M  1.6M    31M   5% /run
    devtmpfs         16G     0    16G   0% /dev
    tmpfs            16G   20M    16G   1% /dev/shm
    cgroup_root     8.0M     0   8.0M   0% /sys/fs/cgroup
    tmpfs           128M   11M   118M   8% /var/log
    /dev/sda1        15G  926M    14G   7% /boot
    /dev/loop0      7.5M  7.5M      0 100% /lib/modules
    /dev/loop1      4.5M  4.5M      0 100% /lib/firmware
    /dev/md1        3.7T  2.8T   910G  76% /mnt/disk1
    /dev/md2        3.7T  2.8T   898G  76% /mnt/disk2
    /dev/md3        3.7T  2.8T   925G  76% /mnt/disk3
    /dev/md4        3.7T  2.8T   931G  76% /mnt/disk4
    /dev/md5        3.7T  1.9T   1.8T  52% /mnt/disk5
    /dev/md6        3.7T  1.9T   1.8T  51% /mnt/disk6
    /dev/md7        3.7T  1.9T   1.9T  51% /mnt/disk7
    /dev/md8        1.9T  1.9G   1.9T   1% /mnt/disk8
    /dev/md9        1.9T  1.9G   1.9T   1% /mnt/disk9
    /dev/sdq1       1.9T   74G   1.8T   4% /mnt/cache
    shfs             30T   17T    13T  57% /mnt/user0
    shfs             31T   17T    15T  54% /mnt/user
    /dev/sde1       120G   43G    77G  36% /mnt/disks/128gbssd-livetru
    /dev/sdd1       239G   86G   153G  36% /mnt/disks/DieFalse
    /dev/sdh1       466G  508M   466G   1% /mnt/disks/VMs
    /dev/sdc1       477G   80M   477G   1% /mnt/disks/512SSD-BTM
    /dev/loop2      100G   11G    87G  11% /var/lib/docker
    /dev/loop3      1.0G   17M   905M   2% /etc/libvirt
    shm              64M     0    64M   0% /var/lib/docker/containers/06f410f75932a0c47ec0150d3afce238f013a7649cf2c6526c6fc5793a4af922/shm
    shm              64M     0    64M   0% /var/lib/docker/containers/8d644de6ab63a99334cae60b3d4f2b94349b4d2a22212b9a96c962f9729b62f8/shm
    shm              64M     0    64M   0% /var/lib/docker/containers/4457fc94daf92d91a260fbe2d497ee3cf6b082072ad4b212597a2d8114673f3b/shm
    shm              64M     0    64M   0% /var/lib/docker/containers/ccd1a89e92ed58ab389694e2f4eb7cb8a5eacba69cebc566316a9bf10efe0308/shm
    shm              64M  8.0K    64M   1% /var/lib/docker/containers/e538288330b8395beba2d8c45de42b695d3624800ac8f2ee79fd0e6f6da4893e/shm
    shm              64M     0    64M   0% /var/lib/docker/containers/95f35c99fee2791858c0d5e73ed6b16b8141d42a8c309c3c125a93eed2ed3600/shm
    shm              64M     0    64M   0% /var/lib/docker/containers/626f99711bd2833f6758c871b4cac48596fd0d4a07839d5cd74543107931d813/shm
    shm              64M     0    64M   0% /var/lib/docker/containers/4c8ff2f4babf6085841bc8068a27306a5e8fe775c6ec96ec6e01368e19b35278/shm
    shm              64M     0    64M   0% /var/lib/docker/containers/f4445575984f15b3b939fbdc2a4d8fbf69eaf55707c7a349f77ce7fe79f72835/shm
    shm              64M  4.0K    64M   1% /var/lib/docker/containers/93baf072eaa2726e9cbde07d8b59e51fdb6057fa7b9ad74bc9d456cbfafa5899/shm
    shm              64M  4.0K    64M   1% /var/lib/docker/containers/abff09f67cd317d35588743b4c6f4c896d71d47487528dd0e96f59c4c53fc3b9/shm
    shm              64M     0    64M   0% /var/lib/docker/containers/b81145a74d939e26b60cfb43e563a1001bcc122a13a8d3ba7e7fcf087af48205/shm
    shm              64M     0    64M   0% /var/lib/docker/containers/7be1481fd411936836c5fbce890e910ee3ec33b3bce41fb0dd5435d8af17e5b1/shm
    shm              64M     0    64M   0% /var/lib/docker/containers/491ded4de6f151871526071602c635be4a205d4f612226b840a0bbf5a74f6b65/shm
    shm              64M  4.0K    64M   1% /var/lib/docker/containers/8de1a09cf25a260a1ac8ecdd82c95355517ef5eb171b11783ccd6cec782c76d8/shm
    /dev/sdb1       477G   80M   477G   1% /mnt/disks/512SSD-TOP
    shm              64M     0    64M   0% /var/lib/docker/containers/4fd3dcae7537d5c88977b30085eac3f59611079809ac1f1035debc092c21a5bb/shm

    This may very well be the issue. However, I would not know how to find that....
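    For anyone else chasing a filling /var/log like the post above, a generic way to find the largest consumer (a sketch, not specific to this diagnostics set) is to rank everything under the directory by size and also check for deleted-but-still-open files, which hold space without showing up in a listing:

    ```shell
    # Rank everything under /var/log by size, largest first.
    du -ah /var/log 2>/dev/null | sort -rh | head -n 10

    # A process can also hold space via a log file that was deleted
    # while still open; lsof +L1 lists open files with zero links.
    lsof +L1 2>/dev/null | grep '/var/log' || true
    ```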
  7. fmp4m

    Uptime 10 days - /var/log full

    root@NAS:~# du -h /var/log
    1.5M  /var/log/atop
    0     /var/log/setup/tmp
    4.0K  /var/log/setup
    508K  /var/log/scripts
    0     /var/log/samba/cores/winbindd
    0     /var/log/samba/cores/smbd
    0     /var/log/samba/cores/nmbd
    0     /var/log/samba/cores
    0     /var/log/samba
    4.0K  /var/log/removed_scripts
    52K   /var/log/removed_packages
    0     /var/log/plugins
    2.4M  /var/log/packages
    4.0K  /var/log/nginx
    0     /var/log/nfsd
    0     /var/log/libvirt/uml
    0     /var/log/libvirt/lxc
    0     /var/log/libvirt/qemu
    0     /var/log/libvirt
    4.4M  /var/log
  8. fmp4m

    Uptime 10 days - /var/log full

    I think it's a false positive from FCP....

    root@NAS:~# ls -lah /var/log
    total 124K
    drwxr-xr-x 13 root root    580 May 31 09:01 ./
    drwxr-xr-x 15 root root    340 May 18  2016 ../
    drwxr-xr-x  2 root root    140 May 31 08:13 atop/
    -rw-------  1 root root      0 May 18 11:58 btmp
    -rw-r--r--  1 root root      0 Mar  9 17:53 cron
    -rw-r--r--  1 root root      0 Mar  9 17:53 debug
    -rw-rw-rw-  1 root root   2.7K May 31 18:01 diskinfo.log
    -rw-rw-rw-  1 root root    89K May 20 23:35 dmesg
    -rw-rw-rw-  1 root root      0 May 20 23:36 docker.log
    -rw-r--r--  1 root root      0 Nov 21  2017 faillog
    -rw-rw-rw-  1 root root    617 May 21 08:23 fluxbox.log
    -rw-r--r--  1 root root      0 Apr  7  2000 lastlog
    drwxr-xr-x  5 root root    160 May 20 23:38 libvirt/
    -rw-r--r--  1 root root   9.0K May 31 04:40 maillog
    -rw-r--r--  1 root root      0 Mar  9 17:53 messages
    drwxr-xr-x  2 root root     40 May 15  2001 nfsd/
    drwxr-x---  2 nobody root    60 May 20 23:36 nginx/
    drwxr-xr-x  2 root root   8.1K May 24 14:18 packages/
    drwxr-xr-x  2 root root    540 May 25 17:45 plugins/
    -rw-rw-rw-  1 root root      0 May 20 23:36 preclear.disk.log
    drwxr-xr-x  2 root root     60 May 20 23:36 removed_packages/
    drwxr-xr-x  2 root root     60 May 20 23:36 removed_scripts/
    drwxr-xr-x  3 root root    160 May 24 14:14 samba/
    -rw-r--r--  1 root root     33 Feb 11 22:02 scan
    drwxr-xr-x  2 root root   1.2K May 20 23:36 scripts/
    -rw-r--r--  1 root root      0 Mar  9 17:53 secure
    drwxr-xr-x  3 root root     80 Aug 21  2012 setup/
    -rw-r--r--  1 root root      0 Mar  9 17:53 spooler
    -rw-rw-r--  1 root utmp  7.5K May 20 23:36 wtmp
  9. Received a random notification that /var/log is 94% full and haven't had time to look into it. Unraid 6.5.3-rc1. Diagnostics posted in case it's important to the RC. /var/log is getting full (currently 94% used) varlogfull.zip
  10. fmp4m

    (Solved) SSL Certificate Provisioning

    I use Ubiquiti myself; I have had similar issues with the CLI on some firmwares not fully committing unless I commit twice (kind of like the AP set-inform needing to be sent twice). FWIW.
  11. fmp4m

    Optional GUI resolution

    Ok, I have tried many plugs and even ran a VGA header off the card's optional VGA output to a dummy. I still cannot get 1080p to work; my max is 1024x768. Card with issue: AMD Radeon HD 7450. Card without issue: AMD RX 560.
  12. Since you got this working, can you explain the setup in detail for one of the dockers? I am going to begin moving from my.domain.ext/docker to docker.domain.ext soon, and this was one of my pre-planning concerns.
  13. fmp4m

    Get Fancy with Docker and CPU Pinning

    I personally would love to see this, especially with the Threadripper I have.
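    In the meantime, pinning can already be done by hand from the docker CLI; a minimal sketch (the container name, image, and core ranges here are made up for illustration):

    ```shell
    # Pin a container to host cores 8-15 at creation time.
    # --cpuset-cpus accepts a comma list or a range of core IDs.
    docker run -d --name handbrake --cpuset-cpus="8-15" jlesage/handbrake

    # Re-pin an already-running container without recreating it.
    docker update --cpuset-cpus="8-11" handbrake
    ```

    On a 16-core part like the 1950X this keeps a heavy transcode container on one die, away from cores reserved for VMs.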
  14. fmp4m

    unRAID Going Down After Installing PiHole

    Would need to see diags and the configuration of the network and the Pi-hole docker. My guess without those: the docker is set to br0 with an IP and you did not give the unRAID machine a static IP; since unRAID can't talk to br0 to get an IP, it loses connectivity when the lease expires.
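    The usual fix, sketched from the docker CLI (the network name br0 matches unRAID's custom bridge; the address is a placeholder for your LAN):

    ```shell
    # Give the Pi-hole container a fixed address on the br0 custom
    # network so it never depends on a DHCP lease.
    docker run -d --name pihole \
      --network br0 --ip 192.168.1.53 \
      pihole/pihole

    # The unRAID host itself should also be set to a static IP
    # (Settings -> Network Settings); the host cannot reach
    # containers on br0, so it must not rely on them for DHCP or DNS.
    ```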
  15. fmp4m

    [Support] Ninthwalker - NowShowing

    You can also add an .htpasswd file to the nginx config, or use a reverse proxy with a password.
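    A minimal sketch of the .htpasswd route (the user name, password, and the /config/nginx path are illustrative; htpasswd ships with apache2-utils, and openssl works where it is absent):

    ```shell
    # Create an htpasswd-compatible entry (apr1 hash) with openssl,
    # then move the file somewhere nginx can read it.
    printf 'admin:%s\n' "$(openssl passwd -apr1 'changeme')" > .htpasswd

    # nginx side, inside the server or location block:
    #   auth_basic           "Restricted";
    #   auth_basic_user_file /config/nginx/.htpasswd;
    ```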

Copyright © 2005-2018 Lime Technology, Inc.
unRAID® is a registered trademark of Lime Technology, Inc.