45 Drives Knowledge Base
KB450130 - Network Bonding Using NMTUI
Network bonding is a method of combining two or more network interfaces into a single logical interface. It increases network throughput and bandwidth and provides redundancy: if one interface goes down or is unplugged, the other keeps network traffic flowing. Network bonding is useful in any situation that calls for redundancy, fault tolerance, or load balancing.
Linux supports this through a special kernel module named bonding. The Linux bonding driver combines multiple network interfaces into a single logical “bonded” interface.
Round-robin policy: This is the default mode. It transmits packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.
Active-backup policy: In this mode, only one slave in the bond is active. The other becomes active only when the active slave fails. The bond’s MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance.
XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
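The XOR policy's slave selection can be sketched in shell arithmetic. This is only an illustration: the kernel's layer2 transmit hash works from the final octet of each MAC address, and the octet values and slave count below are made up.

```shell
#!/bin/bash
# Sketch of the XOR policy's layer2 hash:
#   slave index = (source MAC XOR'd with destination MAC) modulo slave count
# The octet values and slave count here are illustrative.
src_octet=0x1a      # last octet of the source MAC (example value)
dst_octet=0x2f      # last octet of the destination MAC (example value)
slave_count=2       # number of slaves in the bond

slave=$(( (src_octet ^ dst_octet) % slave_count ))
echo "frames to this destination MAC leave on slave $slave"
```

Because the hash depends only on the two MAC addresses, every frame to a given destination takes the same slave; the mode therefore balances load per peer rather than per packet.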
Broadcast policy: Transmits everything on all slave interfaces. This mode provides fault tolerance.
IEEE 802.3ad Dynamic link aggregation: Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification. Prerequisites:
Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.
Adaptive transmit load balancing: Channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave. Prerequisite:
Ethtool support in the base drivers for retrieving the speed of each slave.
Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.
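Although this KB walks through the nmtui screens, the same bond can be created from the command line with nmcli (both tools drive NetworkManager). The following is a sketch only: the bond name bond0 and the interface names enp1s0/enp2s0 are assumptions to be replaced with your own, and the mode shown (active-backup) is just one of the modes listed above.

```shell
# Create an active-backup bond named bond0 (name and mode are examples).
nmcli con add type bond con-name bond0 ifname bond0 \
    bond.options "mode=active-backup,miimon=100"

# Enslave two physical interfaces (replace enp1s0/enp2s0 with yours).
nmcli con add type ethernet con-name bond0-port1 ifname enp1s0 master bond0
nmcli con add type ethernet con-name bond0-port2 ifname enp2s0 master bond0

# Bring the bond up and verify its state.
nmcli con up bond0
cat /proc/net/bonding/bond0
```

The /proc/net/bonding/bond0 file reports the bonding mode, the MII status, and which slave is currently active, which is the quickest way to confirm the bond came up as intended.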
A server running Houston UI on CentOS 7.9 or Ubuntu 20.04
Make use of the Network Management TUI (nmtui) to create and configure the bond
To test the bond, we can download an application called iperf, which measures the maximum achievable bandwidth of a network link. iperf is a commonly used network testing tool that can create Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) data streams and measure the throughput of a network that is carrying them.
On CentOS:
yum install iperf -y    # install iperf
ip a                    # determine the IP addresses of the nodes that will be used to perform the test
iperf -s                # use one of the nodes as an iperf "server"; the bond on this unit is the one that will be stress tested
iperf -c <server IP>    # connect to the server as a client; this will initiate the bandwidth test
On Ubuntu:
apt install iperf -y    # install iperf
ip a                    # determine the IP addresses of the nodes that will be used to perform the test
iperf -s                # use one of the nodes as an iperf "server"; the bond on this unit is the one that will be stress tested
iperf -c <server IP>    # connect to the server as a client; this will initiate the bandwidth test
The output looks like this:
(NOTE: These example images were captured without a network bond and only demonstrate proper usage of the command. Results will vary.)
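If you want to capture the measured figure for a script or a log, the throughput can be pulled out of an iperf summary line with awk. The sample line below is hard-coded to mimic iperf's report format; the numbers are illustrative, not real results.

```shell
#!/bin/bash
# Sample summary line in iperf's report format (values are made up).
line='[  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec'

# The bandwidth is the last two whitespace-separated fields: value and unit.
bw=$(echo "$line" | awk '{print $(NF-1), $NF}')
echo "measured bandwidth: $bw"
```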