About Hoopster

  1. Access my unRaid tower externally?

    The domain name you established with No-IP is just to simplify access so you don't have to change the VPN config every time your ISP decides to change your public IP address. The No-IP docker or your router maintains the public IP address/domain name link with No-IP. That domain name is never used internally. Once connected via VPN, you access your local server just as you would from within your home network (either by IP address or by local server name). I use Windows on my laptop (the same concepts hold on a Mac), so once in via my VPN, I can browse the network in Windows Explorer. On my iPhone/tablets I have a file browsing app that lets me see my local network much as Windows Explorer does.
  2. Access my unRaid tower externally?

    Here's what it looks like on my Windows 10 laptop with the OpenVPN client and two user profiles (one for each of my two unRAID servers) installed: Since I have two servers, I pick the one to connect to from the OpenVPN client via the user profile downloaded from each server.
  3. Access my unRaid tower externally?

    You need to load a VPN user profile into your VPN client on Android. Log in to the VPN server as an admin user. When the screen below appears, download a user-locked profile. If you set up your OpenVPN server with your No-IP server name (which you should have done), this profile will be configured to connect to that server name with your admin user credentials. Import that profile into your Android OpenVPN client. On my iPhone, I had to email the profile to myself and open it from email. Android will probably let you import it directly from the client.
  4. Access my unRaid tower externally?

    The No-IP docker is only for updating the public IP address associated with your No-IP domain name. You don't need that docker for access to the server, and many routers have a built-in way to manage DDNS. I do not run the No-IP docker. Have you set up any port forwarding rules in your router? Is the VPN client software installed on your clients and configured for your No-IP domain name? It is not as simple as pointing your client via a browser to your domain name. You need to first make a VPN connection from a VPN client on your remote devices to the OpenVPN server on unRAID.
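    As a quick way to verify the port forwarding rules before suspecting the VPN client, you can test whether a forwarded TCP port answers at your domain name. A minimal sketch (the hostname and port below are placeholders, not real infrastructure; this only checks TCP forwards, such as the OpenVPN-AS web ports, and cannot verify a UDP forward):

```python
import socket

def vpn_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    A quick sanity check that a router port-forwarding rule actually
    reaches the server before debugging the VPN client itself.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hostname and port are placeholders for illustration only:
# vpn_port_reachable("myserver.ddns.net", 943)
```

    Run this from a device outside your LAN (e.g. a phone hotspot); from inside the LAN it may succeed even when the forward is broken, due to NAT hairpinning differences.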
  5. Access my unRaid tower externally?

    Yes, it does. I just mentioned Owncloud/Letsencrypt as an option since many prefer the "host my own cloud server" approach over the VPN approach. I did not mean to imply that he needed both, and it was not clear from the OP what level of remote access he was really trying to achieve. I can see how my wording may have been confusing on that point. I connect via VPN and have access to the entire home network.
  6. Access my unRaid tower externally?

    Assuming that "access your server from the outside world" means you primarily want access to the unRAID GUI in order to manage the server, a VPN is the best way to go about it. jonathanm has already explained your options. Personally, I have it set up this way:

    1 - OpenVPN-AS docker running on the unRAID server (server runs 24x7)
    2 - No-IP domain name assigned to the public IP of my router
    3 - Port forwarding rules in the router that forward the OpenVPN ports to the LAN IP:ports of my unRAID server
    4 - OpenVPN client software installed on my laptops, phones and tablets for remote access via the No-IP domain name assigned to the unRAID server

    Your router may have OpenVPN (or another VPN server) built in, which you could configure there; personally, I prefer to run it on unRAID. If your server has IPMI, you could set up a VPN on a Raspberry Pi and start up your server remotely (if it is not running 24x7) from the RPi after you VPN in to it. If you primarily want to access files, documents, etc., perhaps you want to go the Owncloud/Letsencrypt (reverse proxy)/DuckDNS (or No-IP) route, which is well documented in these forums.
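    On the client side, the pieces above come together in the .ovpn user profile downloaded from the OpenVPN-AS server. A rough sketch of the relevant directives only (the hostname and port are placeholders; a real profile is generated by the server and also embeds certificates and keys):

```conf
client
dev tun
proto udp
remote myserver.ddns.net 1194
# certificates and keys embedded by OpenVPN-AS omitted
```

    The `remote` line is why the No-IP domain name matters: the profile points at the DDNS name, so it keeps working when your public IP changes.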
  7. No, that I have not tried, but, it's easy enough to do. I will give that a shot and report results.
  8. I see the macvlan call traces seem to be associated with a macvlan broadcast. macvlan_process_broadcast seems to come just before each call trace and the broadcast is referenced in the trace. Is this because the docker INTERFACE variable is br0 instead of eth0 and it is listening on all interfaces instead of being restricted to eth0? UPDATE: I have gone ahead and changed INTERFACE and DNSMASQ_LISTENING variables from br0 to eth0. We'll see if that changes anything.
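    For reference, the change described above amounts to setting these two variables in the Pihole docker template (values mirror what I changed them to; check your own template for the exact variable names and accepted values):

```conf
INTERFACE=eth0
DNSMASQ_LISTENING=eth0
```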
  9. I am getting call traces only from Pihole because it is the only docker to which I currently have a separate IP address assigned. I got the same call traces when I assigned an IP address to UniFi and OpenVPN-AS. I have since removed the IP address assignments on those dockers and only Pihole has its own IP address.

    I am using the br0 network and have assigned an IP address to Pihole which is both the admin and DNS address. My router (Ubiquiti USG) is the DHCP server and it is configured to tell LAN clients that Pihole is the DNS server. Here is my Pihole config: The differences I see are the specified interface, which is br0 (not eth0, although they physically share the same NIC), and that Pihole is listening on all interfaces. Should I change those both to eth0? I followed the Spaceinvader One video guide for setting up Pihole on unRAID (as I suspect many have) and I do not believe changing those variables was mentioned; however, perhaps it is necessary?

    Again, some tweaks here may result in the elimination of Pihole-generated macvlan call traces, but I did experience them with other dockers as well. Admittedly, that was with unRAID 6.4.0/1 and perhaps they would be less prevalent in 6.5.0/1. Thanks for your assistance.
  10. @limetech @bonienl It is now clear that I am not the only one getting macvlan call traces on my server and that these are not just related to my specific hardware. Below are three reports of the same in the last couple of weeks/days. We all have very different hardware configurations, and I am sure there are others. On one occasion, the call traces came every hour and I eventually had to reboot the server. These only occur when an IP address is assigned to a docker container. This entire thread documents my efforts to isolate and resolve it on my own. The best I have been able to do is reduce the frequency of macvlan call traces by changing the MB NIC which unRAID/br0 uses. I am now running 6.5.1 RC6 on my server. Since you cannot reproduce, I am happy to keep digging around on my own, but if you have any guidance regarding things I can test or information I can provide, that would be helpful. Overall, the incidence of call traces for a variety of reasons seems to be increasing among the general unRAID user community. Additional reports of macvlan call traces when assigning an IP address to a docker container:
  11. It's been about 2 1/2 days since the last reboot and the server just experienced another macvlan call trace. It looks like the change of NIC and/or unRAID 6.5.1 RC6 is not a cure, although the frequency of call traces has diminished. UPDATE, Apr. 19: Another identical call trace has occurred less than 24 hours after the last one. It almost seems that once a call trace is generated, subsequent traces come with increasing frequency.
  12. Error: Call Traces found on your server

    I get these ip/macvlan call traces regularly. As noted, they are related to dockers with custom IP addresses. Doesn't matter what the docker is, just that one or more have their own IP address. I have never found a way to eliminate them completely, but changing the NIC used for unRAID and upgrading to unRAID 6.5.1 RC6 has reduced them to every 2-3 days rather than every 2-3 hours.

    Apr 18 17:35:59 MediaNAS kernel: CPU: 0 PID: 22366 Comm: kworker/0:2 Not tainted 4.14.34-unRAID #1
    Apr 18 17:35:59 MediaNAS kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./C236 WSI, BIOS P2.50 12/12/2017
    Apr 18 17:35:59 MediaNAS kernel: Workqueue: events macvlan_process_broadcast [macvlan]
    Apr 18 17:35:59 MediaNAS kernel: task: ffff8807ec62c880 task.stack: ffffc90008698000
    Apr 18 17:35:59 MediaNAS kernel: RIP: 0010:__nf_conntrack_confirm+0x97/0x4d6
    Apr 18 17:35:59 MediaNAS kernel: RSP: 0018:ffff88086dc03d30 EFLAGS: 00010202
    Apr 18 17:35:59 MediaNAS kernel: RAX: 0000000000000188 RBX: 0000000000004773 RCX: 0000000000000001
    Apr 18 17:35:59 MediaNAS kernel: RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffffffff81c093cc
    Apr 18 17:35:59 MediaNAS kernel: RBP: ffff8807470d5e00 R08: 0000000000000101 R09: ffff88079edce400
    Apr 18 17:35:59 MediaNAS kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffffffff81c8b080
    Apr 18 17:35:59 MediaNAS kernel: R13: 000000000000f97e R14: ffff8806b497cdc0 R15: ffff8806b497ce18
    Apr 18 17:35:59 MediaNAS kernel: FS: 0000000000000000(0000) GS:ffff88086dc00000(0000) knlGS:0000000000000000
    Apr 18 17:35:59 MediaNAS kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Apr 18 17:35:59 MediaNAS kernel: CR2: 0000146bfedda000 CR3: 0000000001c0a004 CR4: 00000000003606f0
    Apr 18 17:35:59 MediaNAS kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    Apr 18 17:35:59 MediaNAS kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    Apr 18 17:35:59 MediaNAS kernel: Call Trace:
    Apr 18 17:35:59 MediaNAS kernel: <IRQ>
    Apr 18 17:35:59 MediaNAS kernel: ipv4_confirm+0xac/0xb4 [nf_conntrack_ipv4]
    Apr 18 17:35:59 MediaNAS kernel: nf_hook_slow+0x37/0x96
    Apr 18 17:35:59 MediaNAS kernel: ip_local_deliver+0xab/0xd3
    Apr 18 17:35:59 MediaNAS kernel: ? inet_del_offload+0x3e/0x3e
    Apr 18 17:35:59 MediaNAS kernel: ip_rcv+0x311/0x346
    Apr 18 17:35:59 MediaNAS kernel: ? ip_local_deliver_finish+0x1b8/0x1b8
    Apr 18 17:35:59 MediaNAS kernel: __netif_receive_skb_core+0x6ba/0x733
    Apr 18 17:35:59 MediaNAS kernel: ? enqueue_task_fair+0x94/0x42c
    Apr 18 17:35:59 MediaNAS kernel: process_backlog+0x8c/0x12d
    Apr 18 17:35:59 MediaNAS kernel: net_rx_action+0xfb/0x24f
    Apr 18 17:35:59 MediaNAS kernel: __do_softirq+0xcd/0x1c2
    Apr 18 17:35:59 MediaNAS kernel: do_softirq_own_stack+0x2a/0x40
    Apr 18 17:35:59 MediaNAS kernel: </IRQ>
    Apr 18 17:35:59 MediaNAS kernel: do_softirq+0x46/0x52
    Apr 18 17:35:59 MediaNAS kernel: netif_rx_ni+0x21/0x35
    Apr 18 17:35:59 MediaNAS kernel: macvlan_broadcast+0x117/0x14f [macvlan]
    Apr 18 17:35:59 MediaNAS kernel: ? __switch_to_asm+0x24/0x60
    Apr 18 17:35:59 MediaNAS kernel: macvlan_process_broadcast+0xe4/0x114 [macvlan]
    Apr 18 17:35:59 MediaNAS kernel: process_one_work+0x14c/0x23f
    Apr 18 17:35:59 MediaNAS kernel: ? rescuer_thread+0x258/0x258
    Apr 18 17:35:59 MediaNAS kernel: worker_thread+0x1c3/0x292
    Apr 18 17:35:59 MediaNAS kernel: kthread+0x111/0x119
    Apr 18 17:35:59 MediaNAS kernel: ? kthread_create_on_node+0x3a/0x3a
    Apr 18 17:35:59 MediaNAS kernel: ? SyS_exit_group+0xb/0xb
    Apr 18 17:35:59 MediaNAS kernel: ret_from_fork+0x35/0x40
    Apr 18 17:35:59 MediaNAS kernel: Code: 48 c1 eb 20 89 1c 24 e8 24 f9 ff ff 8b 54 24 04 89 df 89 c6 41 89 c5 e8 a9 fa ff ff 84 c0 75 b9 49 8b 86 80 00 00 00 a8 08 74 02 <0f> 0b 4c 89 f7 e8 03 ff ff ff 49 8b 86 80 00 00 00 0f ba e0 09
    Apr 18 17:35:59 MediaNAS kernel: ---[ end trace 9c114a22f8d955d0 ]---

    I have seen 3-4 other users experiencing these macvlan call traces but devs have not been able to reproduce.
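    Since the useful signal here is how often the traces recur (every 2-3 hours vs. every 2-3 days), a small script can tally "Call Trace:" kernel entries per day from the syslog. A minimal sketch, assuming the syslog-style timestamp format shown above:

```python
import re
from collections import Counter

# Matches syslog-style kernel lines such as:
#   Apr 18 17:35:59 MediaNAS kernel: Call Trace:
TRACE_RE = re.compile(r"^(\w{3} +\d+) \d{2}:\d{2}:\d{2} \S+ kernel: Call Trace:")

def traces_per_day(lines):
    """Count 'Call Trace:' kernel entries per day in syslog lines."""
    counts = Counter()
    for line in lines:
        match = TRACE_RE.match(line)
        if match:
            counts[match.group(1)] += 1
    return counts

# Usage (path is illustrative):
# with open("/var/log/syslog") as f:
#     print(traces_per_day(f))
```

    Comparing the per-day counts before and after a change (NIC swap, unRAID upgrade, removing a docker's custom IP) gives a more objective read than memory of "it seemed less frequent."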
  13. How to Force Legacy Mode in BIOS [SOLVED]

    John_M is correct. It has been a while since I had a server that needed an HBA or PCIe SATA controller for additional drives. These days I tend to build smaller servers with fewer, larger disks and have only needed to use motherboard SATA ports. However, my first unRAID server had a PCIe SATA controller to add four additional disks. As I recall, the attached disks never showed up in the BIOS, and this is normal. I was focusing more on your desire to boot your board in legacy mode than on your actual reason for doing so. My bad. unRAID does not care where the disk is physically attached. The only thing that may be affected is the order in which it is detected and which sdX device name it gets assigned when the array is started. You could connect it to a motherboard SATA port to see if it is detected in the BIOS/UEFI. If not, it may be a bad disk; however, even if it is detected in the BIOS/UEFI, it still may have other issues. If it is detected, you can start the array and see what unRAID reports about the disk status and whether or not you can run a SMART report on it.
  14. How to Force Legacy Mode in BIOS [SOLVED]

    To change your signature, click on the upper right of forum pages: [your username] > Account Settings > Signature. I am not familiar with your particular motherboard, but many UEFI motherboards have a Compatibility Support Module (CSM) setting which must be enabled in order to boot in legacy BIOS mode. For example, to boot in legacy mode with my main server MB, I enable CSM and then specify USB:{flash drive} as the first boot device. To boot UEFI, I specify UEFI:{flash drive} as the first and only boot option and rename the EFI- folder on the unRAID flash drive to EFI. Since unRAID 6.5.0, my MB will only boot successfully in UEFI mode, but everything is there and works, so it is not the issue for me that it is for you. Have you tried the ASUS forums to see if they have any tips for booting your board in legacy mode or for resolving the issue in UEFI mode?
  15. After changing the NIC from LAN1 to LAN2, the system ran without ip/macvlan call traces for over 3 1/2 days. That's a new record; however, the old familiar call trace returned just after updating to 6.5.1 RC6. I have since rebooted the server and will monitor for the return of ip/macvlan call traces.

Copyright © 2005-2018 Lime Technology, Inc.
unRAID® is a registered trademark of Lime Technology, Inc.