45 Drives Knowledge Base
KB450130 - Network Bonding Using NMTUI
https://knowledgebase.45drives.com/kb/kb450130-single-server-linux-network-bonding-setup/

Posted on February 26, 2019 by Rob MacQueen
Last modified: June 1, 2021


Network Bonding Using NMTUI

Scope/Description:

Network bonding is a method of combining two or more network interfaces into a single logical interface. It increases network throughput and bandwidth and provides redundancy: if one interface goes down or is unplugged, the remaining interfaces keep network traffic flowing. Network bonding is useful wherever you need redundancy, fault tolerance, or load balancing.

Linux allows us to bond multiple network interfaces into a single interface using a special kernel module named bonding. The Linux bonding driver provides a method for combining multiple network interfaces into a single logical “bonded” interface.
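The bonding driver ships with the mainline kernel and is normally loaded automatically when the first bond interface is created. As a quick sanity check before configuring anything, the following commands (a minimal sketch, no bond required yet) confirm the module is available:

modinfo bonding | head -n 3 #confirms the bonding driver is present for the running kernel
lsmod | grep bonding #shows whether the module is currently loaded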

Types of Network Bonding

mode=0 (balance-rr)

Round-robin policy: This is the default mode. It transmits packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

mode=1 (active-backup)

Active-backup policy: In this mode, only one slave in the bond is active. The other slave becomes active only when the active slave fails. The bond’s MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance.

mode=2 (balance-xor)

XOR policy: Transmit based on [(source MAC address XOR’d with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
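As a worked illustration of that formula, assume a two-slave bond and made-up address bytes: if the relevant byte of the source MAC is 0x1a and that of the destination MAC is 0x3c, shell arithmetic shows which slave index that pair maps to:

echo $(( (0x1a ^ 0x3c) % 2 )) #prints 0, so traffic to this destination always leaves on slave 0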

mode=3 (broadcast)

Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.

mode=4 (802.3ad)

IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.

Prerequisites:

Ethtool support in the base drivers for retrieving the speed and duplex of each slave.

A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.
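A quick way to check the speed/duplex prerequisite is to query each intended slave with ethtool; the interface names below are the ones used in the example later in this article and may differ on your system:

ethtool enp8s0f0 | grep -E "Speed|Duplex" #both slaves should report the same speed and full duplex
ethtool enp8s0f1 | grep -E "Speed|Duplex"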

mode=5 (balance-tlb)

Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

Prerequisite:

Ethtool support in the base drivers for retrieving the speed of each slave.

mode=6 (balance-alb)

Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, so that different peers use different hardware addresses for the server.
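Whichever mode you configure, you can confirm what an existing bond is actually running through the bonding driver’s sysfs interface (shown here for a bond named bond0, matching the example below):

cat /sys/class/net/bond0/bonding/mode #reports the active mode, e.g. "balance-alb 6"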

Prerequisites:

Server running Houston UI on CentOS 7.9 or Ubuntu 20.04

Steps:

Make use of the NetworkManager text user interface, nmtui

  • Type the command "nmtui"
  • Select "Edit a connection"

  • eno1 and eno2 are the onboard ports; the other two are the 10Gb NICs. Delete the existing connection profiles for the interfaces you will bond, to start fresh.

  • Here I deleted the two interfaces I want to use for the bond, enp8s0f0 & enp8s0f1

  • Add Bond

  • Profile name & Device = bond0
    Add the Bond Slaves: your two 10Gb NIC names (enp8s0f0 & enp8s0f1 in this example)

  • Mode = Adaptive Load Balancing (alb)
    IPv4 Config = Automatic if using DHCP, or IPv4 Config = Manual if you want a static address
    See the example below.

  • Scroll down to Back, and then go to Activate a connection
  • With bond0 highlighted, go over to <Deactivate> and hit Enter. The option will change to <Activate>; hit Enter again to bring the bond up with the new settings.

  • Then go down to Back, and then select Quit to return to the command line.
  • ip addr show bond0 will show the IP address of the bond, which you can ping from the other servers. (An equivalent nmcli sketch is shown below.)
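If you prefer to stay on the command line rather than use nmtui, the same configuration can be sketched with nmcli. This outline assumes the interface names from the example above (enp8s0f0 and enp8s0f1), DHCP addressing, and a bond named bond0; only the mode= value changes for the other bonding modes described earlier:

nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=balance-alb,miimon=100"
nmcli con add type ethernet con-name bond0-slave-1 ifname enp8s0f0 master bond0
nmcli con add type ethernet con-name bond0-slave-2 ifname enp8s0f1 master bond0
nmcli con mod bond0 ipv4.method auto #use "manual" plus ipv4.addresses and ipv4.gateway for a static IP
nmcli con up bond0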

Verification:

To test the bond, we can install an application called iperf, which measures the maximum achievable bandwidth between two hosts. iperf is a commonly used network testing tool that can create Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) data streams and measure the throughput of the network carrying them.

  • CentOS
yum install iperf -y

ip a #to determine the IP address of the nodes that will be used to perform the test

iperf -s #to use one of the nodes as an iperf "server". The bond on this unit is the one that will be stress tested.

iperf -c [Iperf Server IP] #to connect to the server as a client. This will initiate the bandwidth test.

  • Ubuntu
apt install iperf -y

ip a #to determine the IP address of the nodes that will be used to perform the test

iperf -s #to use one of the nodes as an iperf "server". The bond on this unit is the one that will be stress tested.

iperf -c [Iperf Server IP] #to connect to the server as a client. This will initiate the bandwidth test.

 

The output reports the measured bandwidth between the client and the server.
(NOTE: The example images were captured without a network bond and only demonstrate proper usage of the command. Results will vary.)
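Note that with most bonding modes a single TCP stream stays on one slave, so one iperf stream typically tops out at the speed of a single NIC. As a rough sketch, running several parallel client streams gives the bond more flows to balance; -P is a standard iperf option and the server IP is a placeholder:

iperf -c [Iperf Server IP] -P 4 #four parallel streams; the [SUM] line reports the combined throughput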
 

Troubleshooting:
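If the bond does not come up, or throughput is lower than expected, the following checks are a reasonable starting point (assuming the bond is named bond0 as in the example above):

cat /proc/net/bonding/bond0 #bonding mode, MII status, and per-slave link state
ip -br link show #both slaves and bond0 should report state UP
nmcli con show --active #the bond0 connection and both slave connections should be active
journalctl -u NetworkManager -e #recent NetworkManager log entries, useful for spotting connection errors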
