Routing Docker container traffic through an OpenVPN client container


johnodon


I have been using the pipework container to give my Docker containers routable IPs.  This way I could route them through the OpenVPN client on my pfSense box (using my PIA subscription).  However, I am finding pipework to be too buggy for my liking.

 

Then I stumbled across this post on reddit:  https://www.reddit.com/r/docker/comments/3w0498/docker_containers_routed_through_openvpn_client/

 

So here's my situation. I'm running an UnRAID server, using Docker containers to run a variety of programs. I have one container running an OpenVPN client that is connected to my VPN provider. I have another container running Deluge. The Deluge container is linked to the VPN container, so that all traffic from the Deluge container goes through the VPN container. This is exactly what I want, and it works. I followed the instructions here: https://github.com/dperson/openvpn-client/

HOWEVER, the problem comes when trying to access the Deluge Daemon. As detailed, I set up an nginx container so that I am able to access the Deluge web interface as usual. Certain programs, like Couchpotato, can only interact with Deluge through the daemon, and as of now, I have no way of accessing it through the VPN. Does anyone have any ideas on how to accomplish this? I'm completely stumped D-:

 

So, here is an unRAID user who is successfully routing container traffic through a separate OpenVPN client container, and who also has the ability to connect to those containers via the local network to manage them.

 

This is EXACTLY what I want to do but I don't know where to begin.  :(

 

Can I use LSIO's OpenVPN-AS container to do the same thing, or do I need a pure OpenVPN client?  If the latter, the only OpenVPN containers I see in the app store are all servers...no clients.  I know that there is an OpenVPN client plugin, but I would rather use a Docker container.

 

Can someone get me started?

 

John


Docker Engine went through a significant update in 1.10, and it includes networking changes.  unRAID is behind at 1.7 for now, probably until 6.2 or perhaps another 6.1.x release, but my guess is most likely 6.2.  I've come across several posts like the one on Reddit, and there has never been complete success.  I'd suggest waiting, as the networking changes will probably make this easier.
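
To give a flavour of what 1.10 brings (unRAID isn't on it yet, and the network name and addresses below are made up for illustration), user-defined networks let you pin containers to fixed addresses:

# create a user-defined bridge with a known subnet (Docker 1.10+ syntax)
docker network create --subnet=172.20.0.0/16 vpnnet

# attach a container to it with a fixed IP
docker run -d --name=Deluge --net=vpnnet --ip=172.20.0.10 linuxserver/deluge

That still NATs out through the host, so truly LAN-routable addresses for pfSense rules will need more than this, but the building blocks are arriving.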

 

Pipework has been stable for me, what issues have you been having?


 


 

I'll see quite a few messages in Sonarr saying it can't reach SAB or Deluge (same for CP).  The real killer was when I was trying to watch a show that TVH recorded.  I would be watching fine in Kodi and then it would just quit with an error that it lost connection to the source.  As soon as I took pipework out of the mix for TVH, Kodi played the file without issue.


 


 

Got to ask, John: I get why you might want Deluge, Sonarr, CP, NZBGet etc. routing through a VPN, but why did you want TVHeadEnd running with pipework?  Am I missing something?  I'm not sure what you gain by having TVHeadEnd running with a different IP to unRAID.

 

Although, just a thought: would it have anything to do with running TVHeadEnd as host rather than bridge?  I don't see why bridge wouldn't work, though, if 9981 & 9982 were mapped...


 


 

Strange, I'm not experiencing that type of issue with pipework.  Are you using IP or hostname in the CP, etc. config (ex: IP:8080 vs sabnzbd:8080)?  Did you change something in the process of trying to suppress the ARP chatter/issues pipework was causing?  I do find pipework sometimes does not start properly the first time; I sometimes end up having to restart it several times before the last entry in the log says started ("...(from dreamcat4/pipework) start").  Pipework will soon be deprecated anyway...big changes in Docker are coming.


I just wanted every container to have its own IP so I could do whatever I wanted with it in pfsense.  No other reason.

 

So instead of using a docker to route your traffic through a VPN, why not setup pfSense with your VPN client and then route whichever traffic you want through pfSense?

 

That is what I do now.  But for this to work, you need to use pipework to assign br0 IPs to your containers.  Otherwise, all containers have only a private Docker IP and present as the IP of the unRAID server.  Without unique non-Docker IPs, you really can't create any usable routing rules in pfSense.

 

Basically, containers need to have routable IPs to use an external VPN solution.
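
For reference, this is the sort of command pipework runs under the hood (a sketch only, with made-up addresses; the dreamcat4 container wraps the same script):

# attach an extra interface in the container to the host's br0 with a LAN address
# 192.168.1.50 and the 192.168.1.1 gateway are examples, not anyone's real network
pipework br0 Deluge 192.168.1.50/24@192.168.1.1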

 

 



 

Sounds very complicated.  Why not just tell pfSense to route all the unRAID traffic through the VPN?


YAY!  I made some progress.  While not the most elegant (or secure) solution, all I had to do was create vpn.conf and vpn.auth files (also needed the cert) in the /vpn volume mapping.

 

Mon Feb 15 09:52:30 2016 OpenVPN 2.3.4 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [MH] [IPv6] built on Nov 12 2015

Mon Feb 15 09:52:30 2016 library versions: OpenSSL 1.0.1k 8 Jan 2015, LZO 2.08

Mon Feb 15 09:52:30 2016 UDPv4 link local: [undef]

Mon Feb 15 09:52:30 2016 UDPv4 link remote: [AF_INET]xxx.xxx.xx.xxx:1194

Mon Feb 15 09:52:30 2016 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this

Mon Feb 15 09:52:30 2016 [Private Internet Access] Peer Connection Initiated with [AF_INET]xxx.xxx.xx.xxx:1194

Mon Feb 15 09:52:33 2016 TUN/TAP device tun0 opened

Mon Feb 15 09:52:33 2016 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0

Mon Feb 15 09:52:33 2016 /sbin/ip link set dev tun0 up mtu 1500

Mon Feb 15 09:52:33 2016 /sbin/ip addr add dev tun0 local xx.xxx.x.x peer xx.xxx.x.x

Mon Feb 15 09:52:33 2016 Initialization Sequence Completed
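
Side note: the password-caching warning above can be silenced the way the log itself suggests, by adding one line to vpn.conf:

# keep credentials out of OpenVPN's in-memory cache
auth-nocache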

 

Now to start working on getting my deluge container to connect to the interwebs using this container as a bridge (for lack of a better term).  :)

 

John


Sounds very complicated.  Why not just tell pfSense to route all the unRAID traffic through the VPN?

 

I'd rather not send Plex or Emby traffic (among others) through there.  Just more overhead that could slow crap down.



 

I have seen it done where you make a rule that all traffic from a specific IP (unRAID in this case) gets sent through the VPN, except for traffic matching an alias group.  So you could make an alias with Plex's specifics [probably plex.tv] and Emby's [maybe Emby's port number?? I don't really know anything about Emby].  I guess this method starts to get complicated too.  Not trying to push you one way or the other; just thought that since you already have pfSense set up it might make sense.


One step closer...

 

I have my Deluge container using the OpenVPN client container as its network.  I have to start the container manually, since --net=container is not an option in the unRAID template (only none, host, and bridge).

 

docker run -d --name="Deluge" --net=container:OpenVPN-Client -e PUID="99" -e PGID="100" -e TZ="America/New_York" -v "/mnt/cache/Docker/deluge/":"/config":rw -v "/mnt/user/Downloads/":"/downloads":rw linuxserver/deluge

 

I exec'd into the Deluge container and checked /etc/resolv.conf to make sure it was using PIA's DNS servers:

 

root@a329f2399507:/# cat /etc/resolv.conf
nameserver 209.222.18.218
nameserver 209.222.18.222

 

And I also checked the routes, to make sure there were none other than the private Docker network, the private VPN network, and the VPN endpoint (xxx.xxx.xxx.xx below):

 

root@a329f2399507:/# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.168.1.5      128.0.0.0       UG    0      0        0 tun0
default         172.17.42.1     0.0.0.0         UG    0      0        0 eth0
10.168.1.1      10.168.1.5      255.255.255.255 UGH   0      0        0 tun0
10.168.1.5      *               255.255.255.255 UH    0      0        0 tun0
128.0.0.0       10.168.1.5      128.0.0.0       UG    0      0        0 tun0
172.17.0.0      *               255.255.0.0     U     0      0        0 eth0
xxx.xxx.xxx.xx  172.17.42.1     255.255.255.255 UGH   0      0        0 eth0
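
One more check worth doing is confirming the egress IP from inside that namespace (assuming wget exists in the image; the lookup service is just an example):

# should print the PIA address, not my WAN IP
docker exec Deluge wget -qO- http://ipinfo.io/ip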

 

Unless someone tells me otherwise this is looking good to me.  :)  Last thing to do is get the reverse proxy in order so I can manage Deluge from my regular network (192.168.1.0).

 

John

 


I think I now have a working solution.  I was finally able to get the reverse proxy going using the instructions in that reddit link.  In the end, this was my command to start the Nginx container:

 

docker run --name nginxtcp -d -p 8112:8112 -p 58846:58846 --link OpenVPN-Client:deluge -v /mnt/cache/Docker/nginxtcp/nginx.conf:/usr/local/nginx/conf/nginx.conf zack/nginx-tcp-proxy

 

I struggled with this for a day and a half, as I could never get nginx to start (and no error).  I finally checked the GitHub repo for that nginx package and saw a sample conf file.  So I copied that to my volume-mapped folder, stripped out the stuff I didn't care about, and added the two proxies in a single tcp block:

 

tcp {
    upstream delugedaemon {
        server deluge:58846;
        check interval=3000 rise=2 fall=5 timeout=1000;
    }

    server {
        listen 58846;
        proxy_pass delugedaemon;
    }

    upstream delugeweb {
        server deluge:8112;
        check interval=3000 rise=2 fall=5 timeout=1000;
    }

    server {
        listen 8112;
        proxy_pass delugeweb;
    }
}
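
A quick sanity check from another machine on the LAN (nc is just one way to do it):

# both proxied ports should accept connections
nc -zv unraid 8112
nc -zv unraid 58846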

 

So what does this mean...

 

All Deluge traffic is being routed through the OpenVPN-Client container (which is connected to my PIA VPN).

 

Tracker Status:
checkmytorrentip.net: Error: Success, Your torrent client IP is: xx.xxx.xx.xxx  <--- my PIA IP

 

I am able to manage Deluge via http://unraid:8112, and Sonarr/CP are able to talk to it on the usual port, 58846.

 

I'm now going to work on SAB.

 

One of the things I really like about this setup is that it has a built-in kill switch: if the OpenVPN container goes down or loses its connection to my VPN, the Deluge container has no route to the internet.

 

My last task is to try to move away from Nginx and use Apache for the reverse proxy, since I already have that container up and running (no need for two proxy containers).  I have already posted in the LSIO Apache support thread looking for help, but now that I was able to get Nginx working I may have some success with Apache.
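
For anyone who wants to try the same, this is roughly where I'd start for the web UI half (a sketch only: it assumes mod_proxy/mod_proxy_http are enabled, a matching Listen 8112 directive, and the Apache container linked with the alias deluge).  The daemon port is raw TCP, which plain mod_proxy doesn't handle, so 58846 may have to stay on the TCP-capable nginx:

# hypothetical vhost for the Deluge web UI only
<VirtualHost *:8112>
    ProxyPass        "/" "http://deluge:8112/"
    ProxyPassReverse "/" "http://deluge:8112/"
</VirtualHost>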

 

John


Peter,

 

I want to believe that I don't have a need for a kill switch in this scenario, since the linked containers will not have access to the outside world if the OpenVPN container goes offline.

 

I want to say the same holds true if the OpenVPN container loses its connection to the VPN, but how can I test that?  If I exec into the OpenVPN container, how do I break the connection with the VPN to see if the container falls back to a public connection?
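
Maybe something as blunt as killing the OpenVPN process would simulate a drop (a guess on my part: pkill may not exist in the image, and if openvpn is PID 1 the whole container will stop, which proves the point even more bluntly):

# drop the tunnel inside the VPN container...
docker exec OpenVPN-Client pkill openvpn
# ...then see whether Deluge can still reach out (it shouldn't)
docker exec Deluge wget -qO- --timeout=10 http://ipinfo.io/ip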

 

John


SAB webui not playing as nicely as Deluge with the proxy:

 

400 Bad Request

 

Illegal cookie name selected_filter

 

Traceback (most recent call last):

  File "/usr/share/sabnzbdplus/cherrypy/_cprequest.py", line 635, in respond

    self.process_headers()

  File "/usr/share/sabnzbdplus/cherrypy/_cprequest.py", line 737, in process_headers

    raise cherrypy.HTTPError(400, msg)

HTTPError: (400, 'Illegal cookie name selected_filter')

 

Any ideas?

 

John



Try deleting browser cache and cookies for sab.



 

Ha.  I just did that and came back to report it was a browser cache issue, since I was getting the same error even when not using the proxy with SAB at all.  I even deleted the container/image and config dir and had the same error.  Cleared the cache and the issue went away.

 

However, I still can't get SAB to play nicely with nginx.  As soon as I try to add it, I can't reach either Deluge or SAB.  If I take SAB out of the nginx mix, I can reach Deluge again.  Must be something misconfigured somewhere.
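
My first suspicion (purely a guess) is that I added the SAB entries as a separate tcp block; if the module only allows one tcp block, that alone would break the config.  Inside the one existing tcp block, and assuming SAB is also joined to the VPN container's network so it is reachable via the deluge alias, it would look something like:

# hypothetical: added inside the existing tcp { } block, not a new one
upstream sabweb {
    server deluge:8080;
    check interval=3000 rise=2 fall=5 timeout=1000;
}

server {
    listen 8080;
    proxy_pass sabweb;
}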

 

John

