Open vSwitch is widely used as a virtual switch in infrastructure-as-a-service (IaaS) environments.

Installation

CentOS 8

Open vSwitch for CentOS 8 is provided by the CentOS NFV SIG repository:

# yum install centos-release-nfv-openvswitch
# yum install openvswitch2.13

Tasks

Configuring DPDK

The Data Plane Development Kit (DPDK) can be used by Open vSwitch to improve network performance by giving userspace poll mode drivers direct access to the NIC (via vfio-pci passthrough), bypassing the kernel network stack. See also my notes on enabling this on CloudStack + Open vSwitch.

On Red Hat based systems, install the dpdk and dpdk-tools packages.

# yum -y install dpdk dpdk-tools

Edit /etc/default/grub to enable hugepages and the IOMMU, and to specify isolcpus. isolcpus lists the CPUs the Linux scheduler should not use and must be adjusted to your system's available CPUs. Hugepages help performance since fewer pages are needed, so less time is spent on TLB (Translation Lookaside Buffer) lookups.

# vi /etc/default/grub
## GRUB_CMDLINE_LINUX_DEFAULT="default_hugepagesz=1G hugepagesz=1G hugepages=16 iommu=pt intel_iommu=on isolcpus=1-19,21-39,41-59,61-79 intel_pstate=disable nosoftlockup"

# grub2-mkconfig -o /boot/grub2/grub.cfg

Load the vfio-pci kernel module on boot

# echo vfio-pci > /etc/modules-load.d/vfio-pci.conf

Reboot the machine. When it comes back, verify that you have hugepages and vfio-pci loaded, and that IOMMU is working.

# grep -E "iommu=pt|intel_iommu=on" /proc/cmdline
# dmesg | grep -e DMAR -e IOMMU
# grep HugePages_ /proc/meminfo
# lsmod | grep vfio_pci

Configure Open vSwitch

There are a few settings that you need to set with Open vSwitch. Determine the following values:

  • dpdk-lcore-mask: Specifies the CPU cores on which DPDK lcore threads should be spawned. This is a hex string representing the CPU affinity mask.
  • pmd-cpu-mask: Specifies the cores where the poll mode driver (PMD) threads should run. This is also a hex CPU affinity bitmap.
  • dpdk-socket-mem: Comma-separated list of hugepage memory (in MB) to preallocate per NUMA socket.
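These masks are plain bit masks: bit N set means CPU core N may be used. As a sketch, the pmd-cpu-mask value 0x800002 used below corresponds to cores 1 and 23 (a hypothetical one-PMD-core-per-NUMA-node layout); the hex value can be computed in shell:

```shell
# Compute a hex CPU affinity mask from a list of core IDs.
# Example: cores 1 and 23 -> bits 1 and 23 set.
mask=0
for cpu in 1 23; do
    mask=$(( mask | (1 << cpu) ))
done
printf '0x%x\n' "$mask"    # prints 0x800002
```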

Configure Open vSwitch:

# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0xfffffbffffe
# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
# ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x800002
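After restarting openvswitch, you can confirm that DPDK actually initialized. A sketch, assuming a recent Open vSwitch release that exposes the dpdk_initialized and dpdk_version columns:

```shell
# Check that Open vSwitch picked up the DPDK settings.
ovs-vsctl get Open_vSwitch . dpdk_initialized   # true when DPDK is active
ovs-vsctl get Open_vSwitch . dpdk_version
ovs-vsctl get Open_vSwitch . other_config
```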

Set up the network interfaces

# modprobe vfio-pci
# dpdk-devbind --bind=vfio-pci enp3s0f0
# dpdk-devbind --bind=vfio-pci enp3s0f1

The interfaces must not be up or in use when binding. Verify with:

# dpdk-devbind --status

Set up your ports

Here's an example of setting up a DPDK bonded pair with a trunked connection, plus additional fake bridges that I use as access ports on specific VLANs:

# ovs-vsctl add-br nic0 -- set bridge nic0 datapath_type=netdev
# ovs-vsctl add-br public0 nic0 418
# ovs-vsctl add-br guest0 nic0 450
# ovs-vsctl add-br management0 nic0 417
# ovs-vsctl add-br storage0 nic0 3209
# ovs-vsctl add-bond nic0 bond0 ens2f0 ens2f1 lacp=active bond_mode=balance-tcp \
    other_config:bond-detect-mode=miimon other_config:bond-miimon-interval=100 \
    other_config:lacp-fallback-ab=true \
    -- set interface ens2f0 type=dpdk options:dpdk-devargs=0000:31:00.0 \
    -- set interface ens2f1 type=dpdk options:dpdk-devargs=0000:31:00.1
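To sanity-check that the DPDK interfaces attached cleanly, inspect the error column of the Interface table (empty on success) and see which PMD cores are polling which rx queues:

```shell
# An empty error column means the DPDK port attached successfully.
ovs-vsctl get interface ens2f0 error
ovs-vsctl get interface ens2f1 error
ovs-appctl dpif-netdev/pmd-rxq-show   # rx queue to PMD core assignments
```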

Create a bonded connection

See more on Link Aggregation.

To bond two network interfaces:

# ovs-vsctl add-bond nic0 bond0 eth0 eth1 bond_mode=balance-tcp lacp=active

## If your bonded connection uses VLANs, set them by adjusting the bond port:
# ovs-vsctl set port bond0 vlan_mode=native-untagged trunks=417,418,450-950,3209
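Bond member status and LACP negotiation state can then be inspected with ovs-appctl:

```shell
# Show bond member status and the active LACP negotiation state.
ovs-appctl bond/show bond0
ovs-appctl lacp/show bond0
```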

See also