About Necrotic

  • Rank
    Advanced Member
  • Location
    East Coast, USA
  1. Preclear plugin

    No, I mean I get that spam without running preclear. It's just constantly adding to my log, all the time.
  2. Preclear plugin

    Hi everyone, for some reason preclear is using up most of my log space. Has anyone experienced this? PS: I'm running version 6.3.5, not updated yet since I wasn't so confident about stability and such.

    root@unRAID:~# df -h /var/log
    Filesystem      Size  Used Avail Use% Mounted on
    tmpfs           384M  356M   29M  93% /var/log

    root@unRAID:~# du -sm /var/log/*
      1  /var/log/PhAzE-Logs
      1  /var/log/
      1  /var/log/
      1  /var/log/
      1  /var/log/
      1  /var/log/
      0  /var/log/btmp
      0  /var/log/btmp.1
      0  /var/log/cron
      0  /var/log/debug
      1  /var/log/dmesg
      2  /var/log/docker.log
      1  /var/log/faillog
      1  /var/log/lastlog
      0  /var/log/libvirt
      0  /var/log/maillog
      0  /var/log/messages
      0  /var/log/nfsd
      3  /var/log/packages
      0  /var/log/plugins
    351  /var/log/preclear.disk.log
      1  /var/log/removed_packages
      1  /var/log/removed_scripts
      0  /var/log/samba
      1  /var/log/scripts
      0  /var/log/secure
      0  /var/log/setup
      0  /var/log/spooler
      1  /var/log/syslog
      2  /var/log/syslog.1
      1  /var/log/wtmp

    This is what I can see when I do a tail; it's adding entries every 10 seconds or so:

    Thu Apr 5 17:51:32 EDT 2018: get_content Finished: 0
    Thu Apr 5 17:51:43 EDT 2018: Starting get_content: 0
    Thu Apr 5 17:51:43 EDT 2018: Disks:
    + /dev/sdd => /dev/disk/by-id/ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T2574503
    + /dev/sdc => /dev/disk/by-id/ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T2277054
    + /dev/sdb => /dev/disk/by-id/ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T2737524
    + /dev/sdf => /dev/disk/by-id/ata-WDC_WD60EFRX-68L0BN1_WD-WX11DC57SFX7
    + /dev/sdg => /dev/disk/by-id/ata-WDC_WD50EFRX-68L0BN1_WD-WXB1HB4KUF1J
    + /dev/sde => /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N0303087
    + /dev/sdh => /dev/disk/by-id/ata-Samsung_SSD_850_EVO_250GB_S21NNXBGA75793K
    + /dev/sda => /dev/disk/by-id/usb-Kingston_DT_Micro_1C6F654E4910BD30C95403FF-0:0
    Thu Apr 5 17:51:43 EDT 2018: unRAID Serials:
    + 0951-168A-4910-BD30C95403FF
    + WDC_WD60EFRX-68L0BN1_WD-WX11DC57SFX7
    + WDC_WD30EFRX-68AX9N0_WD-WMC1T2277054
    + WDC_WD30EFRX-68AX9N0_WD-WMC1T2574503
    + WDC_WD30EFRX-68EUZN0_WD-WCC4N0303087
    + WDC_WD30EFRX-68AX9N0_WD-WMC1T2737524
    + WDC_WD50EFRX-68L0BN1_WD-WXB1HB4KUF1J
    + Samsung_SSD_850_EVO_250GB_S21NNXBGA75793K
    + Kingston_DT_Micro_1C6F654E4910BD30C95403FF-0:0
    Thu Apr 5 17:51:43 EDT 2018: unRAID Disks:
    + /dev/disk/by-id/ata-WDC_WD60EFRX-68L0BN1_WD-WX11DC57SFX7
    + /dev/disk/by-id/ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T2277054
    + /dev/disk/by-id/ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T2574503
    + /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N0303087
    + /dev/disk/by-id/ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T2737524
    + /dev/disk/by-id/ata-WDC_WD50EFRX-68L0BN1_WD-WXB1HB4KUF1J
    + /dev/disk/by-id/ata-Samsung_SSD_850_EVO_250GB_S21NNXBGA75793K
    + /dev/disk/by-id/usb-Kingston_DT_Micro_1C6F654E4910BD30C95403FF-0:0
    Thu Apr 5 17:51:43 EDT 2018: benchmark: get_unasigned_disks() took 0.004354s.
    Thu Apr 5 17:51:43 EDT 2018: benchmark: get_all_disks_info() took 0.004442s.
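    For anyone hitting the same thing before a plugin fix lands: since /var/log is a tmpfs, emptying the runaway file in place reclaims the space immediately. A minimal sketch of the technique, run here against a stand-in temp file rather than the real /var/log/preclear.disk.log:

    ```shell
    # Stand-in file; on the server this would be /var/log/preclear.disk.log.
    log=$(mktemp)
    head -c 1048576 /dev/zero > "$log"   # simulate a log that has grown to 1 MiB
    du -k "$log"                         # confirm which file is the space hog
    truncate -s 0 "$log"                 # empty it in place; any open writer handle stays valid
    du -k "$log"
    ```

    Using truncate -s 0 rather than rm matters here: a process still holding the file open would keep a deleted file's space allocated on the tmpfs.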
  3. [PhAzE] Plugins for Unraid 5/6

    No idea. I remember the first time it started it looked OK, and I had done the same thing as you, I think. But as soon as it tried to rescan, it basically wiped everything from the database, and I think it doubled up my appdata folder. I just didn't want to have it re-downloading everything.
  4. [PhAzE] Plugins for Unraid 5/6

    Make sure you restart and look at it again. When I first moved it over it seemed fine, but when I did a refresh it went haywire and wiped my database; it's why I had to do the whole process of editing the database...
  5. Did anyone else have issues with cachedirs going nuts and pegging one CPU at 100% forever? It happens rarely, but over the past year or so it has happened twice. I went into settings, disabled it, and re-enabled it, and that fixed it.
  6. Did you get a server to work? Which docker did you use, and what was your experience? Thanks! Edit: Never mind; it seems the reason it doesn't work is that steamcmd is 32-bit and Unraid is running 64-bit without 32-bit emulation enabled.
  7. [Support] - SABnzbd

    Thanks. I got the following:

    Done, had to relocate 11 out of 235 chunks

    The cache now says:

    Data, single: total=220.01GiB, used=68.10GiB
    System, single: total=4.00MiB, used=48.00KiB
    Metadata, single: total=2.01GiB, used=619.44MiB
    GlobalReserve, single: total=285.98MiB, used=0.00B

    Now the big question is: how do I get it back to xfs without messing everything up?
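    On getting the cache back to xfs: as far as I know there is no in-place conversion, so the usual route is copy off, reformat, copy back, with all writers stopped first. A rough outline under those assumptions (the paths are examples, and the reformat step is simulated on temp directories so the sketch is safe to run):

    ```shell
    # 1) Stop Docker and VMs so nothing writes to the cache.
    # 2) Copy everything off the cache to an array share.
    # 3) Stop the array, set the cache filesystem to xfs, and format it.
    # 4) Copy everything back.
    src=$(mktemp -d)   # stand-in for /mnt/cache
    bak=$(mktemp -d)   # stand-in for a backup share on the array
    echo config > "$src/appdata.cfg"
    cp -a "$src/." "$bak/"   # step 2: offload
    rm -rf "$src"/*          # step 3: stand-in for the reformat
    cp -a "$bak/." "$src/"   # step 4: restore
    cat "$src/appdata.cfg"
    ```

    The copy-off/copy-back dance is why it's worth doing this before the cache fills up with live appdata; with Docker and VMs stopped, nothing should change under you mid-copy.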
  8. [Support] - SABnzbd

    Well, shoot, I think this is what it did automatically. I am in the wrong place for this post then... Do you think I should go and repost, and if so, where? I did manage to go in and wipe the recycle bin; that cleared some space and seemed to give me some breathing room.
  9. [Support] - SABnzbd

    Edited my post. Seems like my entire system just went on the fritz....
  10. [Support] - SABnzbd

    Yes, which diagnostics in particular? The SMART report? If so, here it is: Samsung_SSD_850_EVO_250GB_S21NNXBGA75793K-20171216-0155.txt Edit: Just in case, here is the unraid one:
  11. [Support] - SABnzbd

    I am having some problems with this docker. All of a sudden I am getting the following error on the dashboard: "Too little diskspace forcing PAUSE". Both the incomplete and complete folders are bound to my cache drive, which has 100GB+ of free space, yet it's happening all the time now. I was able to force it to finish a download by pausing and unpausing multiple times, but it keeps happening over and over. Below is the section of the log where it pauses.

    2017-12-15 19:56:17,177::INFO::[directunpacker:265] DirectUnpacked volume 33 for aBtwz76srBc3AATP
    2017-12-15 19:56:23,205::WARNING::[assembler:77] Too little diskspace forcing PAUSE
    2017-12-15 19:56:23,205::INFO::[downloader:277] Pausing
    2017-12-15 19:56:23,205::INFO::[directunpacker:445] Aborting all DirectUnpackers
    2017-12-15 19:56:23,205::INFO::[directunpacker:372] Aborting DirectUnpack for aBtwz76srBc3AATP

    Does anyone have suggestions about what is going on? sabnzbd (2).log
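    One debugging step worth trying: SABnzbd pauses when the free space it measures at its download folder drops below its minimum-free-space setting, so check what the process actually sees at that path; a volume mapping that accidentally lands inside the docker image instead of on the cache will report far less space than the cache itself has. A minimal free-space check (the directory is a stand-in; inside the container it would be something like docker exec sabnzbd df -Pk /incomplete-downloads):

    ```shell
    # Query free space at the directory the downloader writes to.
    dir=${1:-/tmp}                              # stand-in for the incomplete-downloads path
    avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')   # -P keeps output on one line
    echo "free space at $dir: ${avail_kb} KiB"
    ```

    If the number here disagrees wildly with what the host reports for /mnt/cache, the container mapping (not the cache) is the thing to fix.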
  12. [Request] Docker not disappear on segfault

    Ok thanks! Still learning dockers
  13. Hi guys, I am running 6.3.5. I ran into an issue where I got a segfault while updating two docker containers (see below for one example). I was able to re-seat my RAM and fix the issue, but since the docker container is deleted as a step before the run command, it no longer showed on the unraid dashboard under Docker. Luckily I saved the errors, so I just had to get into the console and manually run the commands again, and they populated back into the dashboard. It would be good if, on error, it either rolled back to the previous image or still showed up somehow in the dashboard so the run command could be re-run with your settings preserved.

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="sabnzbd" --net="bridge" -e TZ="America/New_York" -e HOST_OS="unRAID" -e "PUID"="99" -e "PGID"="100" -p 8080:8080/tcp -p 9090:9090/tcp -v "/mnt/user/appdata/downloads":"/downloads":rw -v "/mnt/user/appdata/apps/sabnzbd/incomplete-downloads":"/incomplete-downloads":rw -v "/mnt/user/appdata/apps/sabnzbd":"/config":rw linuxserver/sabnzbd

    unexpected fault address 0x1502b10
    fatal error: fault
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x1502b10 pc=0x93101d]
  14. [Support] - SABnzbd

    Thanks. I got home, removed each RAM stick, and blew it off in case of dust. I am running a Memtest right now and will see how it goes. If there is no error, how do I recover my docker? Should I just manually run that first command in the command line? Otherwise they don't show in my list.
  15. [Support] - SABnzbd

    Guys, I got the error below while trying to update the docker, and the docker dropped from the list. Is this something specific to my computer? I recently upgraded the CPU and RAM, and I didn't have any issues at the time, but I figured it best to let you know.

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="sabnzbd" --net="bridge" -e TZ="America/New_York" -e HOST_OS="unRAID" -e "PUID"="99" -e "PGID"="100" -p 8080:8080/tcp -p 9090:9090/tcp -v "/mnt/user/appdata/downloads":"/downloads":rw -v "/mnt/user/appdata/apps/sabnzbd/incomplete-downloads":"/incomplete-downloads":rw -v "/mnt/user/appdata/apps/sabnzbd":"/config":rw linuxserver/sabnzbd

    unexpected fault address 0x1502b10
    fatal error: fault
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x1502b10 pc=0x93101d]

    goroutine 1 [running, locked to thread]:
    runtime.throw(0xa8029d, 0x5)
        /usr/lib64/go1.7.4/go/src/runtime/panic.go:566 +0x95 fp=0xc4200e3ce8 sp=0xc4200e3cc8
    runtime.sigpanic()
        /usr/lib64/go1.7.4/go/src/runtime/sigpanic_unix.go:27 +0x288 fp=0xc4200e3d40 sp=0xc4200e3ce8
    html.init()
        /usr/lib64/go1.7.4/go/src/html/entity.go:2208 +0x11fd fp=0xc4200e3da0 sp=0xc4200e3d40
    html/template.init()
        /usr/lib64/go1.7.4/go/src/html/template/url.go:106 +0x6d fp=0xc4200e3e28 sp=0xc4200e3da0
    /tmp/SBo/docker-1.12.6/vendor/src/ +0x53 fp=0xc4200e3e60 sp=0xc4200e3e28
    /tmp/SBo/docker-1.12.6/vendor/src/ +0x62 fp=0xc4200e3e90 sp=0xc4200e3e60
    /tmp/SBo/docker-1.12.6/.gopath/src/ +0x56 fp=0xc4200e3e98 sp=0xc4200e3e90
    /tmp/SBo/docker-1.12.6/.gopath/src/ +0x38 fp=0xc4200e3ea0 sp=0xc4200e3e98
    /tmp/SBo/docker-1.12.6/.gopath/src/ +0x7b fp=0xc4200e3ee0 sp=0xc4200e3ea0
    main.init()
        /tmp/SBo/docker-1.12.6/cmd/docker/usage.go:23 +0x67 fp=0xc4200e3f38 sp=0xc4200e3ee0
    runtime.main()
        /usr/lib64/go1.7.4/go/src/runtime/proc.go:172 +0x1bf fp=0xc4200e3f90 sp=0xc4200e3f38
    runtime.goexit()
        /usr/lib64/go1.7.4/go/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc4200e3f98 sp=0xc4200e3f90
    goroutine 17 [syscall, locked to thread]:
    runtime.goexit()
        /usr/lib64/go1.7.4/go/src/runtime/asm_amd64.s:2086 +0x1

    The command failed.

Copyright © 2005-2018 Lime Technology, Inc.
unRAID® is a registered trademark of Lime Technology, Inc.