Configuring VLAN Tagged Network Bridges with nmcli

I promise to add more exciting home lab cloud content in due time. For now, I am making some improvements and refactoring some of my Ansible automation, which has me revisiting a few of these topics that are worth documenting for future reference.
This is another short and sweet post on creating Linux bridges using both untagged and tagged interfaces on KVM/Libvirt hosts in the RHEL family of operating systems, or on any Linux system that ships with the NetworkManager CLI (nmcli).
In my lab, I have a bare-metal KVM/Libvirt RHEL host with a physical network interface plugged into a network switch port with 802.1Q (VLAN tagging) enabled. This is quite normal for virtualization or container orchestration platforms, and because I have multiple networks transiting the same physical link, the good ol' Linux network bridge is how I pass one or more of those networks to the virtual machines running as guests on the KVM/Libvirt host.
High Level Details
Host physical network interface: ens1f1

Management network
- VLAN tag: None/Native
- Gateway: 172.16.254.1
- Network: 172.16.254.0/24
- Address: 172.16.254.11 (this will be the address assigned on the host)
- MTU: 1500
- Bridge name: br-mgmt
- Bridge interface: ens1f1

Storage network
- VLAN tag: 54
- Gateway: None; this is a layer 2 network for storage access
- Network: 172.25.54.0/24
- Address: 172.25.54.11 (this will be the address assigned on the host)
- MTU: 9000
- Bridge name: br-storage_mgmt
- Bridge interface: ens1f1.54
Note: The bridge interface name ens1f1.54 identifies the sub-interface of ens1f1 carrying VLAN ID 54. It is used here primarily as a reference for this objective; an example of how to create a VLAN tagged interface follows further down.
First example: Creating an untagged network bridge with an IPv4 address and a 1500 MTU using the NetworkManager CLI
This is done in two main steps: 1) create the bridge and 2) create the Ethernet (or VLAN) interface and enslave it to the bridge. This can be split into more steps if necessary; however, these examples aim for the most concise way to accomplish the objective.
- First, create the network bridge, including the IP address, gateway, and DNS resolver details, and specify the MTU.
[root@kvm2 ~]# nmcli con add type bridge con-name br-mgmt ifname br-mgmt ipv4.method manual ipv4.addresses 172.16.254.11/24 ipv4.gateway 172.16.254.1 ipv4.dns 172.25.49.253,172.25.49.254 ipv4.dns-search voltron.xyz,lab.acanorex.io ethernet.mtu 1500
Connection 'br-mgmt' (335d8364-baf5-4619-8d0a-81263252d5c8) successfully added.
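- (An optional sanity check, not strictly part of the workflow.) Before activating anything, nmcli can print just the properties of interest from the new profile to confirm they were recorded as intended; the field names below are the standard NetworkManager setting names:
[root@kvm2 ~]# nmcli -f ipv4.addresses,ipv4.gateway,ipv4.dns,802-3-ethernet.mtu con show br-mgmt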
- Second, we create the Ethernet interface and assign it as a subordinate (port) of the bridge we created in the previous step. We also set a 9000 MTU on this interface. Because I have multiple networks that use both standard MTU (1500) and jumbo MTU (9000) frames, the physical interface (ens1f1) MUST have the highest MTU allowed, which in my case is 9000, to support MTUs up to 9000 bytes:
[root@kvm2 ~]# nmcli con add type ethernet con-name ens1f1 ifname ens1f1 slave-type bridge master br-mgmt ethernet.mtu 9000
Connection 'ens1f1' (34d1e42d-c33c-41a7-b930-69bdadf5285e) successfully added.
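- Note: nmcli con add creates profiles with autoconnect enabled by default, so both connections normally come up on their own once the port is attached. If they do not, they can be activated explicitly (a generic NetworkManager step, not specific to this lab):
[root@kvm2 ~]# nmcli con up br-mgmt
[root@kvm2 ~]# nmcli con up ens1f1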
- Once the two previous steps are completed, we can confirm that our physical interface and bridge have been created:
[root@kvm2 ~]# nmcli con show | egrep "NAME|ens1f1|br-mgmt"
NAME     UUID                                  TYPE      DEVICE
br-mgmt  335d8364-baf5-4619-8d0a-81263252d5c8  bridge    br-mgmt
ens1f1   34d1e42d-c33c-41a7-b930-69bdadf5285e  ethernet  ens1f1
- We can further verify that the IP address has been assigned to the network bridge as intended:
[root@kvm2 ~]# ip -4 addr show dev br-mgmt
9: br-mgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
inet 172.16.254.11/24 brd 172.16.254.255 scope global noprefixroute br-mgmt
valid_lft forever preferred_lft forever
[root@kvm2 ~]# ip link show dev ens1f1
7: ens1f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master br-mgmt state UP mode DEFAULT group default qlen 1000
link/ether 14:02:ec:85:fc:25 brd ff:ff:ff:ff:ff:ff
altname enp4s0f1
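- If the iproute2 bridge utility is installed, the port membership can also be confirmed from the kernel's point of view; the listing (omitted here) should show ens1f1 with br-mgmt as its master:
[root@kvm2 ~]# bridge link show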
- We can also verify network connectivity to our default gateway through the newly added bridge:
[root@kvm2 ~]# ping -c 4 172.16.254.1
PING 172.16.254.1 (172.16.254.1) 56(84) bytes of data.
64 bytes from 172.16.254.1: icmp_seq=1 ttl=64 time=0.655 ms
64 bytes from 172.16.254.1: icmp_seq=2 ttl=64 time=0.404 ms
64 bytes from 172.16.254.1: icmp_seq=3 ttl=64 time=0.374 ms
64 bytes from 172.16.254.1: icmp_seq=4 ttl=64 time=0.391 ms
--- 172.16.254.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3107ms
rtt min/avg/max/mdev = 0.374/0.456/0.655/0.115 ms
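- One last note on this example: if any of these settings need to change later, the same properties can be adjusted on the existing profile with nmcli con modify and applied by re-activating the connection. A purely illustrative MTU change on the management bridge would look like this:
[root@kvm2 ~]# nmcli con modify br-mgmt ethernet.mtu 1500
[root@kvm2 ~]# nmcli con up br-mgmt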
Second example: Creating a layer 2 (non-routable), VLAN tagged network bridge with an IPv4 address and a 9000 MTU using the NetworkManager CLI
- Similar to the first example, we create the network bridge, including the IP address and MTU. Because this is a non-routable network, I did not assign a gateway or any of the DNS resolver parameters used in the first example:
[root@kvm2 ~]# nmcli con add type bridge con-name br-storage_mgmt ifname br-storage_mgmt ipv4.method manual ipv4.addresses 172.25.54.11/24 ethernet.mtu 9000
Connection 'br-storage_mgmt' (8d619257-b1d6-41fe-90d8-cb1b84a8301a) successfully added.
- Next, we create our VLAN interface with the desired VLAN tag and assign it to the bridge we created above. We also specify a 9000 MTU for the VLAN interface:
[root@kvm2 ~]# nmcli con add type vlan con-name ens1f1.54 ifname ens1f1.54 dev ens1f1 id 54 slave-type bridge master br-storage_mgmt ethernet.mtu 9000
Connection 'ens1f1.54' (beb1e5c3-25b1-4cac-af58-187ca7a10aa9) successfully added.
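- Before looking at the kernel side, the VLAN parameters recorded in the profile can be read back with nmcli's terse output mode (output not shown here); it should return the parent device ens1f1 and VLAN ID 54:
[root@kvm2 ~]# nmcli -g vlan.parent,vlan.id con show ens1f1.54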
- This confirms that our VLAN interface and bridge have been created:
[root@kvm2 ~]# nmcli con show | egrep "NAME|br-storage_mgmt|ens1f1.54"
NAME             UUID                                  TYPE    DEVICE
br-storage_mgmt  8d619257-b1d6-41fe-90d8-cb1b84a8301a  bridge  br-storage_mgmt
ens1f1.54        beb1e5c3-25b1-4cac-af58-187ca7a10aa9  vlan    ens1f1.54
- We can further verify that the IP address has been assigned to the network bridge, and that both the bridge and the VLAN interface reflect the desired MTU setting:
[root@kvm2 ~]# ip -4 addr show dev br-storage_mgmt
10: br-storage_mgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
inet 172.25.54.11/24 brd 172.25.54.255 scope global noprefixroute br-storage_mgmt
valid_lft forever preferred_lft forever
[root@kvm2 ~]# ip link show dev ens1f1.54
11: ens1f1.54@ens1f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master br-storage_mgmt state UP mode DEFAULT group default qlen 1000
link/ether 14:02:ec:85:fc:25 brd ff:ff:ff:ff:ff:ff
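- Adding -d to the same command prints the VLAN details; the output (trimmed here) should include "vlan protocol 802.1Q id 54" for this interface:
[root@kvm2 ~]# ip -d link show dev ens1f1.54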
- Lastly, we confirm network connectivity on the network assigned to the bridge and validate that jumbo frames work end to end. The 8972-byte payload plus the 20-byte IP header and 8-byte ICMP header adds up to exactly 9000 bytes, and the -M do flag forbids fragmentation, so a successful ping proves the full 9000 MTU path:
[root@kvm2 ~]# ping -c 4 -Mdo -s 8972 172.25.54.100
PING 172.25.54.100 (172.25.54.100) 8972(9000) bytes of data.
8980 bytes from 172.25.54.100: icmp_seq=1 ttl=64 time=0.217 ms
8980 bytes from 172.25.54.100: icmp_seq=2 ttl=64 time=0.286 ms
8980 bytes from 172.25.54.100: icmp_seq=3 ttl=64 time=0.257 ms
8980 bytes from 172.25.54.100: icmp_seq=4 ttl=64 time=0.256 ms
--- 172.25.54.100 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3060ms
rtt min/avg/max/mdev = 0.217/0.254/0.286/0.024 ms
There we have it. Using the NetworkManager CLI to create both native and VLAN tagged network bridges is pretty groovy. If you are interested in seeing how I attach these Linux bridges to KVM/Libvirt guests, I have an example in an Ansible task in my publicly accessible GitHub repository, which I use for home lab automation.