BetaQuasi Posted January 11, 2015

Hi,

I've recently bitten the bullet, ditched my ESXi-based 5.0.6 system and upgraded to 6 beta 12, converted all my VMware VMs to KVM, etc., and all is well. However, there's one thing I'd like to replicate, and I'm pretty sure it's possible.

I have two NICs on my motherboard (X9SCM) and four more on a quad-port Intel card. One of the motherboard NICs is the one unRAID is using primarily. I've also created a bridge for KVM usage, and my understanding is that this will effectively bridge ALL the NICs by default. What I want to do is break down the bridging and bonding so I can achieve:

- NIC #1 on the motherboard (eth0): unRAID traffic only
- NICs 1-4 on the quad-port card (eth1-eth4): bonded using round-robin or 802.3ad (I'll test both; I understand the limitations of 802.3ad, but I still think there are advantages to multiple gigabit streams in my situation)

I have an HP 1810-G switch and was running this way previously under VMware. There are a couple of threads here with MOST of the information; for example, it's clear enough how to set up multiple bridges, based on this thread: http://lime-technology.com/forum/index.php?topic=34872.0

What I've done is disable the default bridge from the web interface (and in network.cfg to be sure) and make the following edits to network.cfg (I've tested BONDING_MODE="0" as well):

BONDING="yes"
BONDNAME="bond0"
BONDNICS="eth1 eth2 eth3 eth4"
BONDING_MODE="4"
BONDING_MIIMON="100"
BRIDGING="no"
BRNAME="br0"
BRSTP="yes"

...and then I'm using the following in the go file, as I believe we still need a bridge for the VMs. I've excluded eth0, as I want that to remain separate for unRAID traffic only (no VM traffic). I've also changed the VM XML file to use the "br1" bridge.

brctl addbr br1
brctl stp br1 on
brctl addif br1 eth1
brctl addif br1 eth2
brctl addif br1 eth3
brctl addif br1 eth4
ifconfig eth1 up
ifconfig eth2 up
ifconfig eth3 up
ifconfig eth4 up

I've tested at this point: no dice. I've also tried giving the bridge interface an IP and a default route (the gateway on my network):

ifconfig br1 192.168.43.210 broadcast 192.168.43.255 netmask 255.255.255.0 up
route add default gw 192.168.43.1 dev br1

All of this results in the VMs on the bridge being able to ping each other, but nothing external. Looking at the switch, it appears that, at least in bonding mode 4, the trunk is operational and working.

Any assistance would be appreciated; I'm not that great with Linux and networking.

EDIT: Removing the bonding and leaving the bridge as is seems to work fine, and there is traffic across eth1 through eth4 according to iftop. I don't feel like this is the ideal scenario, though.

EDIT 2: I've just noticed that even though I set the bridge up manually, the web GUI shows it as configured, and traffic for br1 is actually travelling across all interfaces, including eth0, even though eth0 wasn't in the manual config.
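A rough, untested sketch of the alternative approach — enslaving the quad-port NICs into bond0 manually and then adding bond0 itself to br1, rather than the individual eth interfaces — would look something like this in the go file. The IP and gateway are just my values from above, and the sysfs paths assume the standard in-kernel bonding driver is available:

# load the bonding driver (creates bond0 by default) and set LACP mode
# before any slaves are added
modprobe bonding
echo 802.3ad > /sys/class/net/bond0/bonding/mode
echo 100 > /sys/class/net/bond0/bonding/miimon

# slaves must be down before being enslaved via sysfs
ifconfig eth1 down
ifconfig eth2 down
ifconfig eth3 down
ifconfig eth4 down
echo +eth1 > /sys/class/net/bond0/bonding/slaves
echo +eth2 > /sys/class/net/bond0/bonding/slaves
echo +eth3 > /sys/class/net/bond0/bonding/slaves
echo +eth4 > /sys/class/net/bond0/bonding/slaves
ifconfig bond0 up

# bridge the bond itself, not the individual NICs
brctl addbr br1
brctl stp br1 on
brctl addif br1 bond0
ifconfig br1 192.168.43.210 netmask 255.255.255.0 up
route add default gw 192.168.43.1 dev br1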
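And to check whether the bond and bridge are actually doing what I expect, these are the checks I'd run (again just a sketch, assuming the bond comes up as bond0):

# check that all four slaves joined and that the LACP aggregator formed
cat /proc/net/bonding/bond0

# confirm that only bond0 (and not eth0) is a member of br1
brctl show br1

# watch traffic on the bond while copying to/from a VM
# (iftop is what I used above; -i selects the interface)
iftop -i bond0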
JimPhreak Posted January 15, 2016

I realize this is a one-year-old thread, but I'm thinking of doing just what you did in switching from ESXi 5 to KVM in unRAID, and I have four NICs I'd like to segment similarly to what you were trying to do here. How did you ever make out with this?
BetaQuasi Posted April 15, 2016

I know this is a late response, but unfortunately I didn't get anywhere with this. I've since pulled the quad-port card and am now using the single onboard gigabit NIC, as some of my traffic-related needs no longer exist.