All Activity


  1. Past hour
  2. Using my VM with a monitor

    Hello there, I've just created a Windows 10 VM on my unRAID server and wanted to connect a monitor for the VM to use, so I can use Windows 10 directly from the monitor. The only graphics card option I have in my VM settings is VNC. Any help would be greatly appreciated!
  3. How do I easily delete all these annoying "@eaDir" files on my unRAID drives? These files were created on my Synology NAS. Anyone?
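    A sketch of the usual answer, run from the unRAID console. The commands below use a throwaway /tmp tree for safety; a real run would point the find at /mnt/user (or the individual /mnt/diskN mounts) once the dry run output looks right:

    ```shell
    # Demo tree standing in for an unRAID share; a real run would target
    # /mnt/user (or an individual /mnt/diskN) instead of /tmp/eadir_demo
    mkdir -p /tmp/eadir_demo/photos/@eaDir/thumbs /tmp/eadir_demo/music/@eaDir
    touch /tmp/eadir_demo/photos/keep.jpg

    # Dry run: list every @eaDir directory first and eyeball the output
    find /tmp/eadir_demo -type d -name "@eaDir" -prune

    # Delete them once the list looks right; -prune stops find from trying
    # to descend into the directories it is about to remove
    find /tmp/eadir_demo -type d -name "@eaDir" -prune -exec rm -rf {} +
    ```

    Always do the dry run first: rm -rf with a mistyped pattern is unforgiving.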
  4. Same here with an RX 480; I've tried Arch Linux. With Windows 10 everything is fine with GPU passthrough... The first start works, but after a VM reboot I get the same error. unRAID 6.5 installed.
  5. Undo LetsEncrypt

    So I set up the Let's Encrypt options in unRAID 6.5.0. I would like to undo this and change everything back to the default way of accessing unRAID. I want to undo it so I can switch to using Organizr, so I have a single landing page to manage everything: unRAID and all my Dockers on it.
  6. Poor GPU performance

    Hello everyone, and thank you beforehand for any help provided. I have unRAID 6, and here is my two-gaming-VM setup:

    MOBO: Gigabyte GA-Z97X Gaming 3
    CPU: i5-4690K
    RAM: Corsair DDR3, 24 GB total
    PSU: EVGA G3 850W Gold Plus
    GPU: VM1 - EVGA 1070; VM2 - R9 270x Toxic
    Disks: VM1 - Samsung 960 NVMe SSD + 2TB 7200RPM; VM2 - Samsung 850 SSD + 2TB 7200RPM

    Before we get down to business: "Yes! I plan to upgrade to get better performance for the 2-VM setup."

    With that said: so far, so good. Everything is working properly, with one exception that is driving me crazy. My second VM, which has the R9 270x Toxic, shows the graphics card hitting 100% load for anything; literally watching a YouTube video ramps it up to 100%. My only guess is that this card is installed in a x8 PCIe slot. To be honest, I'm really lost on this matter. I need to figure this out in order to make a good hardware choice for the upgrade.
  7. Looking to hire someone to help with a down server

    I went with this configuration: SuperMicro 846BA-R920B 24x LFF 4U server with X9DRI-LN4F+. Quick question: do I run the unRAID make bootable app before I copy my original configuration folder to the USB drive, or after I copy it? Thanks, RS
  8. Rate my upgrade

    Hi MrLeek, out of curiosity, why not use a server board?
  9. Are you running the container in bridge or host mode? And yes, definitely odd. Also, for S's and G's, double-check your firewall to make sure it includes those ports. EDIT: I'd also look at the different config.json files of node0, node5, and a working one like node1 -- if these were imported from another container. But my gut feeling says firewall, or at least start there.
  10. Today
  11. [Support] Ninthwalker - NowShowing

    Glad you figured it out. The test cron is only for making sure the cron schedule goes off as planned, and it will only send an email to yourself, as you found out. It was left over from v1, when there was no GUI and a docker restart was required to change the cron. I left it in v2 for cron testing, but maybe I should remove it or make it clearer what it does. Thanks for letting me know, and glad you like the app!
  12. Webshop selling server chassis

    Thanks for sharing, guys! What I eventually did was order my chassis from Xcase. I already had good experiences with Xcase, so ordering from them again was the right thing to do. Good shop, good support, and nice people. If you have suggestions for good webshops selling server chassis, please post them in this thread, so we build up a good topic of advice and experiences!
  13. Another disk read error post...

    You might want to read this: You also have to realize that every HD manufacturer decides which parameters to report, how to format the results, and their own interpretation of what each parameter actually shows about the drive. As a result, most folks tend to focus on the reported parameters which can trigger a report to the user. (They are found under Settings >>> Disk Settings, in the GLOBAL SMART SETTINGS section.)
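    To see which of those reported parameters actually matter on a given drive, the smartctl attribute table can be filtered down to the IDs the notification system watches. A minimal sketch; the sample rows are illustrative, and the default-monitored ID list (5, 187, 188, 197, 198, 199) is an assumption to check against your own Global SMART Settings page:

    ```shell
    # Sample rows in smartctl -A attribute-table layout (values illustrative,
    # modeled on the kind of report discussed above)
    cat > /tmp/smart_sample.txt <<'EOF'
      5 Reallocated_Sector_Ct   PO--CK   200   200   140    -    0
    197 Current_Pending_Sector  -O--CK   200   200   000    -    1
    200 Multi_Zone_Error_Rate   ---R--   200   200   000    -    27
    EOF

    # Keep only the attribute IDs commonly monitored by default
    # (5, 187, 188, 197, 198, 199 -- this exact list is an assumption;
    # the authoritative list is on your Global SMART Settings page)
    awk '$1 ~ /^(5|187|188|197|198|199)$/' /tmp/smart_sample.txt
    # prints the 5 and 197 rows; Multi_Zone_Error_Rate (200) is filtered out
    ```

    On a live server the input would come from smartctl -A /dev/sdX instead of a sample file.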
  14. Upgrade ideas

    I can see no reason to think that it would not be up to doing what you are looking at, especially since you are looking at using the Win10 VM as an evaluation tool. That VM should be able to do most normal computing jobs without breaking a sweat. It gets a bit more dicey when one is looking to play high-end video games or do heavy-duty video editing. While I am not an expert in the use of VMs, you might be a little light on RAM; 32GB might be a better option, but only you know how serious you are about using a VM.
  15. The unRAID CPU is certainly slower, about 5000 vs 15000 PassMarks. However, the CPU load never goes over 15% when uploading via rclone. Also, I get similar speeds when doing a simple speed test on the server. It doesn't seem to me that this is the problem. Thanks for the idea!
  16. Rate my upgrade

    Thanks, both. I ended up with Crucial memory in the end - I just found it easier to find RAM that the MB stated it supported - but Crucial are just as good. Also the Platinum PSU (it was £8 more expensive... and it was in stock!). Insert shiny picture of goodies (chassis turns up on Monday)... wish me luck!
  17. Upgrade ideas

  18. Access my unRaid tower externally?

    Yes. I am running exactly that setup and it works just great for me.
  19. What is the processing power of your unRAID box compared to your Windows 10 box? I assume that whatever software is sending it to the cloud server is doing so securely and encrypting it, which might account for the different speeds if there is a significant difference in processing power and/or efficiency of the encryption software on the two systems.
  20. Access my unRaid tower externally?

    So... to summarise: run 1 openvpn-as, run the no-ip docker, do NOT configure any dynamic DNS at the router level, and port forward to just the 1 server running openvpn-as, yes? Sent from my LG-D855 using Tapatalk
  21. I just tried this and got 160Mb/s average on a 9GB file. I wouldn't expect MTU size to account for a 4-5x decrease in speed, but I'll play with it and see what comes about. Thanks for the suggestions, Frank!
  22. [Support] - Letsencrypt (Nginx)

    I've gotten everything working so far for use with Ombi on unRAID 6.1.9. (I'll eventually be moving everything over to the sister server on v6.5.0.) My questions now are really focused on security. I have this server on my home network, with a few business servers running on the same network, and some business data even on the same unRAID server. I'm hoping I can list my setup configuration here and someone may be able to answer a few questions. (All sensitive information fields have been redacted.)

    DDNS: DDNS is handled through [redacted], where [redacted] resolves to my home dynamically assigned IP provided by my ISP. The DDNS update is maintained by the DDNS updater in my DD-WRT flashed router.

    DNS: The domain I use is actually a split domain, as the base domain points to an external business mail server. I set up a separate sub-domain for use; we'll call it [redacted]. This is a CNAME using Cloudflare DNS that resolves to my DDNS domain provided by FreeDNS. The setup looks like so: [redacted] --> [redacted] --> external IP at home.

    Questions:

    1. I assume that because I am using a sub-domain of [redacted] and not the base domain, this is what limits me to only one letsencrypt/nginx site-conf file (default), instead of what I've seen in this thread about using multiple files, one for each subdomain? I've tried every possible way I could find, and think of, to make this work: with the default, without it, with the main server block in the default and separate ones in each site-conf. But every time I have more than one site-conf file, it kills the page and gives a connection refused. (The same happens if I try to list more than one sub-domain /location in the default file.) I'm just confirming a suspicion here; I know I can switch to [redacted] over subdomains, or just buy a dedicated base [redacted]. I just want to confirm that is the solution, or whether I'm doing something wrong.

    2. Also: I currently have a couple of [redacted] that resolve to the same DDNS destination. The problem is they all translate to [redacted]. I'd like, if possible, to only allow [redacted] specifically to resolve, and have any other valid [redacted] either time out or error. I tried but just couldn't get it; I played around with the server listen block, and I suspect I'm just missing something in there?

    Port Forwarding / LetsEncrypt Docker Container Setup

    Firewall port forwarding is standard: External --> Internal: 80 --> 81 | 443 --> 444

    /letsencrypt/nginx/site-confs/default:

        ## Source:
        server {
            listen 80;
            server_name [redacted];
            return 301 https://$host$request_uri;
        }

        server {
            listen 443 ssl;
            server_name [redacted];

            ## Set root directory & index
            root /config/www;
            index index.html index.htm index.php;

            ## Turn off client checking of client request body size
            client_max_body_size 0;

            ## Custom error pages
            error_page 400 401 402 403 404 /error.php?error=$status;

            ## SSL settings
            include /config/nginx/strong-ssl.conf;

            location / {
                ## Default <port> is 5000, adjust if necessary
                proxy_pass http://myipaddress:38084;
                ## Using a single include file for commonly used settings
                include /config/nginx/proxy.conf;
                proxy_cache_bypass $http_upgrade;
                proxy_set_header Connection keep-alive;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header X-Forwarded-Host $server_name;
                proxy_set_header X-Forwarded-Ssl on;
            }

            ## Required for Ombi 3.0.2517+
            if ($http_referer ~* /) {
                rewrite ^/dist/([0-9\d*]).js /dist/$1.js last;
            }
        }

    /letsencrypt/nginx/strong-ssl.conf:

        ## Source:
        ## READ THE COMMENT ON add_header X-Frame-Options AND add_header Content-Security-Policy IF YOU USE THIS ON A SUBDOMAIN YOU WANT TO IFRAME!

        ## Certificates from LE container placement
        ssl_certificate /config/keys/letsencrypt/fullchain.pem;
        ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

        ## Strong security recommended settings
        ssl_dhparam /config/nginx/dhparams.pem; # Bit value: 4096
        ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;
        ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
        ssl_session_timeout 10m;

        ## Settings to add a strong security profile (A+)
        add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
        # SET THIS TO none IF YOU DON'T WANT GOOGLE TO INDEX YOUR SITE!
        add_header X-Robots-Tag none;
        ## Use *, not * when using this on a sub-domain that you want to iframe!
        add_header Content-Security-Policy "frame-ancestors https://*.$server_name https://$server_name";
        ## Use *, not * when using this on a sub-domain that you want to iframe!
        add_header X-Frame-Options "ALLOW-FROM https://*.$server_name" always;
        add_header Referrer-Policy "strict-origin-when-cross-origin";
        proxy_cookie_path / "/; HTTPOnly; Secure";
        more_set_headers "Server: Classified";
        more_clear_headers 'X-Powered-By';
        # ONLY FOR TESTING!!! READ THIS!:
        add_header Expect-CT max-age=0,report-uri="";

    /letsencrypt/nginx/proxy.conf:

        client_max_body_size 10m;
        client_body_buffer_size 128k;

        # Timeout if the real server is dead
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

        # Advanced Proxy Config
        send_timeout 5m;
        proxy_read_timeout 240;
        proxy_send_timeout 240;
        proxy_connect_timeout 240;

        # Basic Proxy Config
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect http:// $scheme://;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_cache_bypass $cookie_session;
        proxy_no_cache $cookie_session;
        proxy_buffers 32 4k;

    This all gets me an A+ on [redacted], which is great and all, but what does that actually do for me? I'm concerned about brute force, DDoS, and some thug trying to muscle into my server. I've been trying to read up on fail2ban and its implementation; however, from what I've found, because I'm using this for Ombi, and it authenticates off of the user's Plex account, this bypasses fail2ban? I've tried monitoring the fail2ban status, but I get this error when I check it:

        docker exec -it LetsEncrypt bash
        root@d3fc185ce9d5:/$ fail2ban-client -i
        Fail2Ban v0.10.1 reads log file that contains password failure report and bans the corresponding IP addresses using firewall rules.
        fail2ban> status nginx-http-auth
        Failed to access socket path: /var/run/fail2ban/fail2ban.sock. Is fail2ban running?
        fail2ban>

    I know the first time I ran the command everything reported, though it reported no activity.

    /letsencrypt/fail2ban/jail.local:

        # This is the custom version of the jail.conf for fail2ban
        # Feel free to modify this and add additional filters
        # Then you can drop the new filter conf files into the fail2ban-filters
        # folder and restart the container

        [DEFAULT]
        ## "bantime" is the number of seconds that a host is banned.
        bantime = 259200

        ## A host is banned if it has generated "maxretry" failures during the last "findtime" seconds.
        findtime = 600

        ## "maxretry" is the number of failures before a host gets banned.
        maxretry = 3

        [ssh]
        enabled = false

        [nginx-http-auth]
        enabled = true
        filter = nginx-http-auth
        port = http,https
        logpath = /config/log/nginx/error.log
        # ignorip = myipaddress.0/24

        [nginx-badbots]
        enabled = true
        port = http,https
        filter = nginx-badbots
        logpath = /config/log/nginx/access.log
        maxretry = 2

        [nginx-botsearch]
        enabled = true
        port = http,https
        filter = nginx-botsearch
        logpath = /config/log/nginx/access.log

        ## Unbanning
        ## SSH into the container with:
        # docker exec -it LetsEncrypt bash
        ## Enter fail2ban interactive mode:
        # fail2ban-client -i
        ## Check the status of the jail:
        # status nginx-http-auth
        ## Unban with:
        # set nginx-http-auth unbanip
        ## If you already know the IP you want to unban you can just type this:
        # docker exec -it letsencrypt fail2ban-client set nginx-http-auth unbanip

    I know there's no such thing as perfectly 100% secure, but with ports 80 & 443 open, and only Ombi/Plex password security to rely on, I just feel like my ass is hanging in the wind. Any guidance would be most appreciated.
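    On the two nginx questions above (multiple site-confs, and making every other hostname fail), a common pattern is one server block per subdomain plus a catch-all default_server that drops requests for unmatched Hosts. A minimal sketch only; ombi.example.com is a placeholder, the include paths mirror the files quoted above, and none of this has been tested against the poster's setup:

    ```nginx
    # Catch-all: any request whose Host doesn't match a server_name below
    # is closed without a response (444 is an nginx-specific "drop" code)
    server {
        listen 443 ssl default_server;
        server_name _;
        ssl_certificate /config/keys/letsencrypt/fullchain.pem;
        ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
        return 444;
    }

    # One block like this per subdomain (ombi.example.com is a placeholder)
    server {
        listen 443 ssl;
        server_name ombi.example.com;
        include /config/nginx/strong-ssl.conf;

        location / {
            proxy_pass http://myipaddress:38084;
            include /config/nginx/proxy.conf;
        }
    }
    ```

    A "connection refused" with multiple site-confs is often two blocks both claiming default_server, or clashing listen directives, rather than anything DNS-related; running nginx -t inside the container should name the offending file and line.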
  23. Awesome, keep us posted.
  24. PHLEX

    I've gotten this to work by installing an instance of nginx and extracting a copy of the Phlex zip into the www folder.
  25. [Support] NodeLink

    Thanks @Squid, I just realized this thread was for NodeLink; I was actually trying to get Phlex working. Just FYI, this thread is linked from the Support Thread link for Phlex in CA. Changing the network mode didn't help Phlex, but it was worth a try. Anyone else looking for Phlex support: I haven't gotten a working docker for it, but I've gotten Phlex to work by installing an instance of nginx and extracting a copy of the Phlex zip into the www folder.
  26. Another disk read error post...

    Disks 4 and 1 have each logged errors several times.

    Disk4:

        200 Multi_Zone_Error_Rate   ---R--   200   200   000   -   30

        Commands leading to the command that caused the error were:
        CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
        -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
        25 00 00 02 40 00 00 01 62 62 80 e0 08   1d+08:54:38.735  READ DMA EXT
        25 00 00 04 00 00 00 01 62 5e 80 e0 08   1d+08:54:38.732  READ DMA EXT
        25 00 00 04 00 00 00 01 62 5a 80 e0 08   1d+08:54:38.729  READ DMA EXT
        25 00 00 04 00 00 00 01 62 56 80 e0 08   1d+08:54:38.725  READ DMA EXT
        25 00 00 04 00 00 00 01 62 52 80 e0 08   1d+08:54:38.614  READ DMA EXT

    Disk1 also shows bad signs:

        197 Current_Pending_Sector  -O--CK   200   200   000   -   1
        200 Multi_Zone_Error_Rate   ---R--   200   200   000   -   27

        Commands leading to the command that caused the error were:
        CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
        -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
        60 01 00 00 b8 00 00 01 58 09 c8 40 08   3d+11:03:52.607  READ FPDMA QUEUED
        60 01 00 00 b0 00 00 01 58 0a c8 40 08   3d+11:03:52.607  READ FPDMA QUEUED
        27 00 00 00 00 00 00 00 00 00 00 e0 00   3d+11:03:52.606  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
        ec 00 00 00 00 00 00 00 00 00 00 a0 00   3d+11:03:52.606  IDENTIFY DEVICE
        ef 00 03 00 46 00 00 00 00 00 00 a0 00   3d+11:03:52.606  SET FEATURES [Set transfer mode]
  27. After updating to the latest version, the container doesn't seem to lock up. (After 2 days, at least.)

Copyright © 2005-2018 Lime Technology, Inc.
unRAID® is a registered trademark of Lime Technology, Inc.