Bonding in unRAID v6, with KVM



Hi,

 

I've recently bitten the bullet, ditched my ESXi-based 5.0.6 system, and upgraded to v6 beta 12. I've converted all my VMware VMs to KVM and all is well.  However, there's one thing I'd like to replicate, and I'm pretty sure it's possible.

 

I have 2 NICs on my motherboard (X9SCM) and 4 more on a quad-port Intel card.  One of the motherboard NICs is the one that unRAID is using primarily.  I've also created a bridge for KVM usage, and my understanding is that it will effectively bridge ALL the NICs by default.
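
(For anyone reproducing this, a quick way to see what that default bridge actually contains is to list the bridges and their member ports; both tools below are already present on unRAID v6 as far as I can tell.)

# List each bridge and the interfaces enslaved to it
brctl show

# List all interfaces and their current state
ifconfig -a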

 

What I want to do is break down the bridging and bonding so I can achieve:

 

- NIC #1 on the motherboard (eth0): unRAID traffic only

- NICs 1-4 on the quad-port card (eth1-eth4): bonded using balance-rr or 802.3ad (I'll test both; I understand the limitations of 802.3ad, but I still think there are advantages to multiple gigabit streams in my situation).  I have an HP 1810-G switch and was running this way previously under VMware.  A rough sketch of the layout I'm after follows below.
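
To make the goal concrete, this is the layout I'm aiming for (just a sketch using the interface names above; the bonding mode is whichever of the two ends up working best):

# eth0                   -> unRAID management and array traffic only (no bridge, no bond)
# eth1, eth2, eth3, eth4 -> slaves of bond0 (balance-rr or 802.3ad)
# bond0                  -> sole member of br1, the bridge the KVM guests attach to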

 

There are a couple of threads here that have MOST of the information; for example, it's clear enough how to set up multiple bridges based on this thread:

 

http://lime-technology.com/forum/index.php?topic=34872.0

 

What I've done is disable the default bridge from the web interface (and in network.cfg, to be sure) and make the following edits to network.cfg (I have tested BONDING_MODE="0" as well):

 

BONDING="yes"

BONDNAME="bond0"

BONDNICS="eth1 eth2 eth3 eth4"

BONDING_MODE="4"

BONDING_MIIMON="100"

BRIDGING="no"

BRNAME="br0"

BRSTP="yes"

 

...and then I'm using the following in the go file, as I believe we still need a bridge for the VMs.  I've excluded eth0, as I want that to remain separate for unRAID traffic only (no VM traffic).  I've also changed the VM XML file to point at the "br1" bridge.

 

brctl addbr br1

brctl stp br1 on

brctl addif br1 eth1

brctl addif br1 eth2

brctl addif br1 eth3

brctl addif br1 eth4

ifconfig eth1 up

ifconfig eth2 up

ifconfig eth3 up

ifconfig eth4 up
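
One thing I'm not sure about, and it may be the root of my problem: my understanding of Linux bonding is that once eth1-eth4 are enslaved to bond0, they shouldn't also be added to the bridge one by one; the bond itself should be the bridge member.  If that's right, the go file would look more like this (a sketch only, same interface and bridge names as above):

brctl addbr br1
brctl stp br1 on
# add the bond itself to the bridge rather than its slaves
brctl addif br1 bond0
ifconfig bond0 up
ifconfig br1 up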

 

I've tested at this point: no dice.  I've also tried giving the bridge interface an IP address and a default route (pointing at the gateway on my network):

 

ifconfig br1 192.168.43.210 broadcast 192.168.43.255 netmask 255.255.255.0 up

route add default gw 192.168.43.1 dev br1
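
For reference, these are the checks I've been running while testing (nothing unRAID-specific, just stock Linux tools):

# which interfaces each bridge actually contains
brctl show

# whether the 802.3ad aggregator formed with the switch
cat /proc/net/bonding/bond0

# whether the default route really points out of br1
route -n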

 

All of this results in the VMs on the bridge being able to ping each other, but nothing external.  Looking at the switch, it appears that, at least in bonding mode 4, the trunk is operational and working.

 

Any assistance would be appreciated; I'm not that great with Linux and networking. :)

 

EDIT: Removing the bonding and leaving the bridge as-is seems to work fine, and there is traffic across eth1 through eth4 according to iftop.  I don't feel this is the ideal scenario, though.

EDIT 2: I've just noticed that even though I set the bridge up manually, the web GUI shows it as configured, and traffic for br1 is actually travelling across all interfaces, including eth0, even though eth0 wasn't in my manual config.
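
If eth0 really has been pulled into a bridge behind my back, it should show up when listing bridge members, and it can be removed again by hand (this is a guess at what's going on rather than a confirmed fix):

# list every bridge and its member interfaces
brctl show

# pull eth0 back out so unRAID traffic stays on its own NIC
brctl delif br1 eth0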

 

 
