bonienl

Network isolation in unRAID 6.4


By default, unRAID, its VMs, and its Docker containers all run on the same network. This is straightforward: it requires no special network setup, and for most users it is perfectly suitable.

 

Sometimes more isolation is required, for example letting VMs and Docker containers run in their own network environments, completely separated from the unRAID server. Setting up such an environment requires changes to the unRAID network settings, and it also requires a switch and router with the capabilities to support it.

 

The example here makes use of VLANs. VLANs let you split a single physical cable into two or more logical connections, which run fully isolated from each other. If your switch does not support VLANs, the same can be achieved by connecting multiple physical ports (this, however, requires more ports on the unRAID server).

 

The following assignments are used:

 

network 10.0.101.0/24 = unRAID management connection. It runs on the default link (untagged)

network 10.0.104.0/24 = isolated network for VMs. It runs on VLAN 4 (tagged)

network 10.0.105.0/24 = isolated network for Docker containers. It runs on VLAN 5 (tagged)

 

UNRAID NETWORK SETTINGS

We start with the main interface. Make sure the bridge function is enabled (this is required for VMs and Docker). In this example both IPv4 and IPv6 are used, but this is not mandatory; IPv4 only is a good starting choice. Here a static IPv4 address is used, but automatic assignment works too; in that case it is recommended that your router (DHCP server) always hands out the same IP address to the unRAID server. Lastly, enable VLANs for this interface.

[screenshot: main interface network settings]
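For those curious, enabling VLANs boils down to standard Linux VLAN subinterfaces with a bridge on top of each (br0.4, br0.5, ...). A rough sketch of the equivalent manual commands for VLAN 4 (interface names follow this example; the exact steps unRAID performs may differ):

    # 802.1Q subinterface carrying VLAN 4 on top of the physical NIC
    ip link add link eth0 name eth0.4 type vlan id 4

    # bridge on top of it, so VMs and containers can attach
    ip link add name br0.4 type bridge
    ip link set eth0.4 master br0.4
    ip link set eth0.4 up
    ip link set br0.4 up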

 

VM NETWORK SETTINGS

VMs will operate on VLAN 4, which corresponds to interface br0.4. Here again both IPv4 and IPv6 are enabled, but this may be limited to IPv4 only, even without any IP assignment for unRAID itself. DHCP can be configured on the router, which allows VMs to obtain an IP address automatically.

[screenshot: br0.4 (VLAN 4) network settings]
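In a VM's XML definition this selection shows up as an ordinary bridge interface. A minimal sketch (the MAC address is a placeholder):

    <interface type='bridge'>
      <mac address='52:54:00:00:00:01'/>
      <source bridge='br0.4'/>
      <model type='virtio'/>
    </interface>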

 

DOCKER NETWORK SETTINGS

Docker containers operate on VLAN 5, which corresponds to interface br0.5. An IP address must be assigned on this interface so that Docker "sees" it and offers it as a choice in a container's network selection. Assignment can be automatic if a DHCP server is running on this interface, or static otherwise.

[screenshot: br0.5 (VLAN 5) network settings]

 

VM CONFIGURATION

We can set interface br0.4 as the default interface for the VMs we are going to create (existing VMs need to be changed individually).

[screenshot: VM Manager default network interface]

 

Here a new VM gets interface br0.4 assigned.

[screenshot: new VM with interface br0.4 assigned]

 

DOCKER CONFIGURATION

Docker uses its own built-in DHCP server to assign addresses to containers operating on interface br0.5. This DHCP server, however, is not aware of any other DHCP servers (such as your router). Therefore it is recommended to give the Docker DHCP server an IP range outside the range used by your router (if any) to avoid conflicts. This is done in the Docker settings while the service is stopped.

[screenshot: Docker settings with DHCP pool]
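Behind the scenes Docker gets a macvlan network on this interface, and the DHCP pool corresponds to its --ip-range option. A sketch of the roughly equivalent command, with the gateway and pool values assumed for illustration:

    docker network create -d macvlan \
      --subnet=10.0.105.0/24 \
      --gateway=10.0.105.1 \
      --ip-range=10.0.105.128/25 \
      -o parent=br0.5 br0.5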

 

When a Docker container is created, the network type br0.5 is selected. This lets the container run on the isolated network. IP addresses can be assigned automatically out of the DHCP pool defined earlier; leave the field "Fixed IP address" empty in this case.

[screenshot: container with automatic IP assignment]

 

Alternatively, containers can use a static address. Fill in the field "Fixed IP address" in this case.

[screenshot: container with fixed IP address]
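From the command line the same two choices look roughly like this (image name and address are only examples):

    # automatic assignment from the pool defined earlier
    docker run -d --network br0.5 linuxserver/nzbget

    # fixed IP address
    docker run -d --network br0.5 --ip 10.0.105.10 linuxserver/nzbget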

 

This completes the configuration on the unRAID server. Next we have to set up the switch and router to support the new networks we just created on the server.

 

SWITCH CONFIGURATION

The switch must be able to assign VLANs to its different ports. Below is a screenshot of a TP-Link switch; other brands should have something similar.

[screenshot: TP-Link switch VLAN configuration]

 

ROUTER CONFIGURATION

The final piece is the router. Remember that all connections eventually terminate on the router, and this device makes communication between the different networks possible. If you want to allow or deny certain traffic between the networks, firewall rules need to be created on the router. That is, however, out of scope for this tutorial.

Below is an example of a Ubiquiti USG router; again, other brands should offer something similar.

[screenshots: USG router VLAN network configuration]

 

That's it. All components are configured and able to handle the different traffic flows. Now you can create VMs and containers that make use of them.

 

Good luck.

 



I didn't read all of this, but it looks fairly comprehensive, and I'm 100% sure that I will need it in the future.

 

Thanks!


I've added a link in this post to a full guide to setting up VLANs in pfSense.

 

 


Wait, in 6.4, an IP must be assigned to the Docker VLAN for it to function with the GUI? Hmm... Dunno if that will break things for me...


Is this going to be 'the new way' of doing things or just one possible way for 6.4?  I have no need or desire to segregate Dockers from VMs using VLANs and then juggle with rules on my router when I need communication between VLANs.  I do have a need and desire to be able to give each Docker and VM their own IP and MAC, which I can do now just fine in 6.3.5 and on the same subnet.

4 hours ago, unevent said:

Is this going to be 'the new way' of doing things or just one possible way for 6.4?  I have no need or desire to segregate Dockers from VMs using VLANs and then juggle with rules on my router when I need communication between VLANs.  I do have a need and desire to be able to give each Docker and VM their own IP and MAC, which I can do now just fine in 6.3.5 and on the same subnet.

The possible way.  Unless you decide to create VLANs, all Dockers and VMs are on the same subnet, and Dockers only have unique IPs if you assign them.

On 12/18/2017 at 8:29 AM, DZMM said:

The possible way.  Unless you decide to create VLANs, all Dockers and VMs are on the same subnet, and Dockers only have unique IPs if you assign them.

 

Correct, this is an addition, not a replacement. Everything defined under unRAID 6.3 continues to work under unRAID 6.4.

5 hours ago, bonienl said:

 

Correct, this is an addition, not a replacement. Everything defined under unRAID 6.3 continues to work under unRAID 6.4.

 

Thanks, and nice work.


I'm having some issues with this...

 

If I set up a VLAN 4 tagged interface as per the VM example, with address assignment set to None:

  • I can ping my VLAN gateway just fine from my unRAID CLI.
  • my Ubuntu VM on br0.4 cannot ping its gateway

 

When I try to follow the Docker example:

  • I can ping the VLAN IP set on the unRAID box, but not the gateway.
  • Docker containers on br0.4 are not reachable

 

I'm also using a UniFi router, and set up the VLAN network as "corporate" so inter-VLAN routing is enabled by default, just like in your example.  Pretty sure everything is in order; the only difference from your examples is that my main unRAID interface is a 2Gb LACP bond.


Do you have a switch in between, configured to allow VLAN-tagged traffic?

 

9 hours ago, bonienl said:

Do you have a switch in between, configured to allow VLAN-tagged traffic?

 

 

I do have a UniFi switch, and by default it trunks all VLANs to all ports.  I'll take some screenshots to illustrate where I'm at ;)

 

**edit** I basically nuked all the routes and started from scratch, and now at least the containers seem to work.  It's possible that my USG didn't get all the config changes, as it sometimes starts to provision before the changes are finished.  I also forced a provision, which may have helped.

 

Anyway, this is a super cool feature and I'm glad to have it working.  Now I just need to work backwards to open up all the ports I need for my containers, then slap a big DENY ALL at the end of the list to keep the Docker VLAN from accessing my main LAN.

 

I ended up taking all those screenshots anyway, so I might compile a UniFi-specific guide for users looking to do this, including how to harden the stack, as all VLANs are wide open by default.

 

Thanks!

6 hours ago, Dephcon said:

 

I might compile a UniFi-specific guide for users looking to do this, including how to harden the stack, as all VLANs are wide open by default.

 

I certainly would appreciate that!

 

I've been thinking of trying this for my Dockers but do not have a managed switch. However, I can connect my server directly to a port on my EdgeRouter X. That would remove the managed switch dependency, right?

18 hours ago, dave said:

I certainly would appreciate that!

 

I've been thinking of trying this for my Dockers but do not have a managed switch. However, I can connect my server directly to a port on my EdgeRouter X. That would remove the managed switch dependency, right?

 

Not sure about the ER line, but if you use the second LAN port on the USG Pro, I think it disables hardware offload or something, so I don't think that would be ideal.

3 hours ago, Dephcon said:

 

Not sure about the ER line, but if you use the second LAN port on the USG Pro, I think it disables hardware offload or something, so I don't think that would be ideal.

As far as I can tell, from the GUI I am able to assign a VLAN to any port. With that said, plugging my server directly into that port should work? I will have to watch some tutorials on setting up VLANs tonight to figure this out.


Just make sure that routing between physical ports on your ER doesn't bypass hardware offload; that would be less than ideal.

On 1/16/2018 at 5:40 PM, Dephcon said:

 

I ended up taking all those screenshots anyway, so I might compile a UniFi-specific guide for users looking to do this, including how to harden the stack, as all VLANs are wide open by default.

I would be very interested in that guide.


So it turns out the firewall in UniFi is awful: no bidirectional rules, no protocol specification in port groups.  I've defaulted back to wide open between my container VLAN and the main LAN for now.  I might dig into the ER config guide and see if it's easier to just configure the firewall via CLI and export to JSON.

 

However, I might be able to throw something together as an example of what needs to be done to permit one app and then deny all; I already had it working for Plex before I started piling more apps into the VLAN.
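Something like this in EdgeOS CLI, I think (rule set name and interface are only examples, assuming the Docker VLAN rides on eth1 vif 5; rule 20 is one sample hole, for DNS):

    # permit return traffic and one specific service, drop everything else
    set firewall name DOCKER-TO-LAN default-action drop
    set firewall name DOCKER-TO-LAN rule 10 action accept
    set firewall name DOCKER-TO-LAN rule 10 state established enable
    set firewall name DOCKER-TO-LAN rule 10 state related enable
    set firewall name DOCKER-TO-LAN rule 20 action accept
    set firewall name DOCKER-TO-LAN rule 20 protocol tcp_udp
    set firewall name DOCKER-TO-LAN rule 20 destination port 53
    # apply inbound on the Docker VLAN subinterface
    set interfaces ethernet eth1 vif 5 firewall in name DOCKER-TO-LAN
    commit
    save

Since this rule set only filters connections initiated from the Docker VLAN, LAN-initiated traffic to the containers (e.g. reaching Plex) should still work via the established/related rule.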

10 hours ago, Dephcon said:

configure the firewall via CLI and export to JSON

 

I had to do the same for IPv6 configuration, which is not available in the GUI, but the CLI allows (most of) what the EdgeRouter can do.

It is a pain to keep track of the JSON files though. Any customization in the CLI needs to be saved to JSON, otherwise it will be lost the next time the device is provisioned through the GUI.


@bonienl I was a bit surprised by the extra addresses set up for the unRAID server by the VLANs, which affected my transfer speeds. Is this normal?

 

 


Got VLAN 2 set up on my EdgeRouter X, added the VLAN in Network Settings for unRAID, and added br0.2 to the Docker settings, but when I edit a container only br0 is shown in the drop-down. There is no option for br0.2. Any ideas?


OK, got this working! Now, is there a way for my LetsEncrypt Docker to hand over traffic to another Docker? I have my firewall forwarding the port to LetsEncrypt, but now I can't figure out how to get it to pass traffic to the final destination. Previously this all worked because it was a single IP, passing across ports.

 

Thanks!

2 hours ago, dave said:

OK, got this working! Now, is there a way for my LetsEncrypt Docker to hand over traffic to another Docker? I have my firewall forwarding the port to LetsEncrypt, but now I can't figure out how to get it to pass traffic to the final destination. Previously this all worked because it was a single IP, passing across ports.

 

Thanks!

If you've assigned an IP to LE (e.g. mine is 192.168.50.80), then you have to assign an IP to all the Dockers you want it to connect to (e.g. I have 192.168.30.86 for nzbget).

 

Then you reference that IP in the config file:

 

	location /nzbget {
		proxy_pass http://192.168.30.86:6789;
		proxy_set_header Host $host;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	}

You can't use Dockers on bridge anymore; they need to have a unique IP to be able to communicate with each other.

1 hour ago, DZMM said:

If you've assigned an IP to LE, then you have to assign an IP to all the Dockers you want it to connect to, and reference that IP in the config file.

Ah, yes, you're right. I updated that IP in the config file and all is working! Thanks!


I went ahead and created a VLAN (30) for my UniFi Docker.  The VLAN is on a separate network from my main LAN.  The only issue I have with this setup is that the unRAID server gets an address and is accessible from the VLAN too!  I was hoping that by putting the Docker apps in a VLAN on a separate network, they would be segregated from my unRAID server.

Is there any way to prevent unRAID from being in the VLAN network too?

