I’ve been playing around with NIC bonding on my storage server, which is currently running CentOS 6.7. To be honest, apart from the modes I can’t use because I don’t have decent switches (802.3ad, for example, needs a switch that supports LACP), I can’t really tell any difference between them; I’m guessing that’s the whole point.
In my current setup I’m using balance-rr [mode 0], but I have also tried balance-tlb [mode 5]. Here’s how the documentation describes the two (a sample config sketch follows the descriptions below):
0 — Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded slave interface beginning with the first one available.
5 — Sets a Transmit Load Balancing (TLB) policy for fault tolerance and load balancing. The outgoing traffic is distributed according to the current load on each slave interface. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed slave.
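For reference, here’s roughly what a two-NIC bond looks like on CentOS 6. This is a minimal sketch, not my exact config: the interface names (eth0/eth1), IP address, and miimon polling interval are example values, so adjust them for your own hardware.

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.1.10        # example address, use your own
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none
    USERCTL=no
    # mode=0 is balance-rr; swap in mode=5 for balance-tlb
    BONDING_OPTS="mode=0 miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none
    USERCTL=no

After writing those files out, a service network restart should bring the bond up with both slaves enslaved.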
Perhaps, as it’s only a home setup, I’m not passing the server enough work to notice a difference, but both modes do provide fault tolerance: if I unplug one of the network cables, traffic carries on over the remaining link. You can watch this happen from /proc, as shown below.
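The bonding driver reports the state of each slave in /proc/net/bonding/bond0. The output below is a trimmed, illustrative example; the driver version, speeds, and counters will obviously differ on your box, but the MII Status line for a slave flips to “down” the moment you pull its cable, and the Link Failure Count ticks up.

    cat /proc/net/bonding/bond0

    Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

    Bonding Mode: load balancing (round-robin)
    MII Status: up
    MII Polling Interval (ms): 100

    Slave Interface: eth0
    MII Status: up
    Speed: 1000 Mbps
    Link Failure Count: 0

    Slave Interface: eth1
    MII Status: up
    Speed: 1000 Mbps
    Link Failure Count: 0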
Are you using bonding in an enterprise environment with CentOS/RHEL/Fedora? If so, which mode are you using?