NIC Bonding

I’ve been playing about with NIC bonding on my storage server, which is currently running CentOS 6.7. To be honest, apart from the modes I can’t use because I don’t have decent switches, I can’t really tell any difference between them; I’m guessing that’s the whole point.

In my current setup I’m using balance-rr (mode 0), but I have also tried balance-tlb (mode 5). Both are described below, followed by a sample configuration:

0 — Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded slave interface beginning with the first one available.

5 — Sets a Transmit Load Balancing (TLB) policy for fault tolerance and load balancing. The outgoing traffic is distributed according to the current load on each slave interface. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed slave.
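For anyone who wants to try this, here’s roughly what the setup looks like on CentOS 6, where the bond is defined with ifcfg files under /etc/sysconfig/network-scripts. The interface names (eth0/eth1) and the address below are just placeholders for whatever your hardware uses; a minimal sketch:

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    # mode=0 is balance-rr; swap in mode=5 for balance-tlb
    BONDING_OPTS="mode=0 miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (and the same again for eth1)
    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=none
    MASTER=bond0
    SLAVE=yes

Running service network restart afterwards brings the bond up.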

Perhaps, as it’s only a home setup, I’m not giving the server enough work to notice a difference, but it does seem to provide fault tolerance: if I unplug one of the network cables, everything keeps working.
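If you want to see what the bond actually does when a cable comes out, the bonding driver reports its state through /proc (bond0 being the bond device name from the sketch above):

    # shows the bonding mode, link monitoring settings and each slave's link state
    cat /proc/net/bonding/bond0

When a slave loses link, its MII Status line flips from up to down and traffic carries on over the remaining slave.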

Are you using bonding in an enterprise environment on CentOS/RHEL/Fedora? What mode are you running?


One thought on “NIC Bonding”

  1. This is based mainly on source hashing, so you won’t see any performance increase unless you have a lot of clients connecting to the system. With only a single client source IP you won’t see any noticeable improvement in performance, only redundancy.
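For context, the hashing behaviour described above belongs to the hash-based modes such as balance-xor (mode 2) or 802.3ad (mode 4) rather than balance-rr itself: per the bonding driver documentation, the default layer2 transmit hash is roughly (source MAC XOR destination MAC) modulo slave count, so a single client/server pair always lands on the same link. A toy illustration, with made-up MAC bytes and a two-slave bond:

    # layer2 hash: XOR the MACs (last byte shown), modulo the number of slaves
    echo $(( (0x0a ^ 0x1b) % 2 ))   # same result for every packet between this pair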

