March 18, 2026
Virtual machine workloads are migrating to KubeVirt and OpenShift Virtualization at a rapid pace. Understanding how to securely access datacenter networks directly from pods will be critical to a successful migration. Read on to learn how OVN-Kubernetes, NMState, and Multus work together to enable traditional architectures in a cloud-native way.
Open Virtual Network
Open Virtual Network (OVN) enables a high-level representation of a software-defined network. While virtual switches have been used in OpenShift for quite some time, they were previously managed individually; the programming and coordination of virtual switching and routing is now orchestrated by OVN.
Kubernetes Resource Management is API-First
All resources are managed by APIs in Kubernetes. This may be a new concept to some, but it is a great advantage over traditional infrastructure. These APIs can be driven by the web console, by the command line, or by applying resources with GitOps. The interfaces to these APIs are mediated by Custom Resource Definitions.
There are a number of standard resources on every cluster, like pods, services, and deployments, and additional custom resource definitions may be made available by installing operators like NMState or KubeVirt.
This means anytime you seek to understand how to accomplish a task in Kubernetes you should start by learning what resources exist.
Networking APIs in OpenShift
This list will vary depending on the version of OpenShift (below is 4.21) and what operators are installed.
Listing Networking API Groups
$ NETWORKING_API_GROUPS=$(
oc api-versions | \
grep -E '(ovn|cni|metallb|nmstate|network|ipam)' | \
sed 's#/v.*##' | sort -u)
$ echo $NETWORKING_API_GROUPS
frrk8s.metallb.io
gateway.networking.k8s.io
ipam.cluster.x-k8s.io
ipam.metal3.io
k8s.cni.cncf.io
k8s.ovn.org
metallb.io
network.operator.openshift.io
networkaddonsoperator.network.kubevirt.io
networking.k8s.io
nmstate.io
policy.networking.k8s.io
whereabouts.cni.cncf.io
There are a lot of Networking APIs, so which ones do you need to focus on?
Networking Resource Management in Kubernetes
It is important to begin configuration at the host level. If you don’t have the wires plugged into the correct ports and a configuration that matches the switch, you will not have a solid ground to build upon. The APIs that are most relevant at this first level start with NMState.
Network Configuration with Network Configuration Operator
[ ] Todo - how to cover this operator
Host Network Configuration with NMState
Typically your default interface configuration, including your external bridge (management interface), is configured by the installation process. This process may leverage the same NMStateConfig syntax as the NNCP we will discuss below, but those settings are supplied to the installer before NMState is even present.
After installation, unless you have a fully automated install including configuration, you may need to configure additional bonds or bridges on the nodes. For example, bond1 and br-vmdata in the following diagram.
[Diagram: OVN localnet mappings. The localnet physnet maps to OVS bridge br-ex, which sits on bond0 and carries the default gateway traffic; the localnet physnet-vmdata maps to OVS bridge br-vmdata, which sits on bond1 and carries an 802.1q trunk.]
The NMState API group, from the operator of the same name, is used to configure node-level networking.
nmstate.io - Kubernetes NMState
$ oc api-resources --api-group="nmstate.io"
NAME SHORTNAMES APIVERSION NAMESPACED KIND
nmstates nmstate.io/v1 false NMState
nodenetworkconfigurationenactments nnce nmstate.io/v1beta1 false NodeNetworkConfigurationEnactment
nodenetworkconfigurationpolicies nncp nmstate.io/v1 false NodeNetworkConfigurationPolicy
nodenetworkstates nns nmstate.io/v1beta1 false NodeNetworkState
Once you identify the relevant resources for a task, you may want to read a description of the resource or learn how to read and write an instance of it as a YAML manifest.
Notice above that the NodeNetworkConfigurationPolicy resource is not namespaced. An NNCP resource is cluster scoped, which means you can only have one NNCP named “xyz”. An NNCP can be targeted to a subset of nodes, though, using a nodeSelector value in the NNCP; this will be important if you have different networking connectivity in different servers. In fact, all the APIs in NMState are cluster scoped.
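For instance, a NNCP can be scoped to a single machine by matching that node's hostname label (the node name below is hypothetical, shown only to illustrate the selector):

```yaml
spec:
  nodeSelector:
    kubernetes.io/hostname: worker-1.example.com
```

Any standard node label works here, so nodes with distinct cabling can carry a custom label that their NNCPs select on.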
Node Network State Resource
Once NMState is installed and enabled, it will create and maintain a representation of each node’s network state in a NodeNetworkState resource. You can see above that the shortname is nns. We talked about this resource in my last blog post about OVN Recon.
From the NNS, with a little jq magic we can learn all kinds of facts, including what NICs are installed and what driver they are using for example. This is also represented in the OpenShift console for browsing.
$ oc get nns/$NODE_NAME -o json | \
  jq -c '.status.currentState.interfaces[]
    | select(.type=="ethernet")
    | {"name":.name, "max-mtu":."max-mtu", "driver":.driver}'
{"name":"ens192","max-mtu":9000,"driver":"vmxnet3"}
{"name":"ens224","max-mtu":9000,"driver":"vmxnet3"}
{"name":"genev_sys_6081","max-mtu":65465,"driver":null}
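The same filter pattern works for any interface type. As a self-contained sketch (the JSON below is a canned NNS fragment, not real cluster output), here is the filter selecting bond interfaces instead of ethernet:

```shell
# Canned fragment in the shape of a NodeNetworkState .status field
nns='{"currentState":{"interfaces":[
  {"name":"bond0","type":"bond","state":"up","link-aggregation":{"mode":"802.3ad"}},
  {"name":"ens192","type":"ethernet","state":"up"}]}}'

# Select only the bond interfaces and project a few fields
echo "$nns" | jq -c '.currentState.interfaces[]
  | select(.type=="bond")
  | {name, mode: ."link-aggregation".mode}'
# → {"name":"bond0","mode":"802.3ad"}
```

Swapping the select() expression is all it takes to inventory VLAN interfaces, OVS bridges, or bonds across nodes.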
Adding a Bond Interface
As an example, imagine we have 4 network interfaces called eno1, eno2, eno3, and eno4. At install time we may have chosen to bond eno1 and eno2 into bond0. This bond will pass the default traffic including any overlay networks.
Now on day 2 we want to add a second bond which will be dedicated to virtual machine traffic. There are multiple bonding modes supported by OpenShift Virtualization, but we will use LACP here.
We will create a NNCP to define the bond1 interface.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond1
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: bond1
        type: bond
        state: up
        ipv4:
          enabled: false
        link-aggregation:
          mode: 802.3ad
          options:
            miimon: "150"
          port:
            - eno3
            - eno4
Adding an OVS Bridge
Once we have a physical connection to a secondary external network, we need to create a logical switch that will use this bond1 port as its “uplink” to the physical network segment plugged into it. This will be akin to a distributed vSwitch on another platform.
Again, we use an NNCP to create this.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br-vmdata
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: br-vmdata
        type: ovs-bridge
        state: up
        bridge:
          allow-extra-patch-ports: true
          options:
            stp: false
          port:
            - name: bond1
Naming the Physical Network
We have created a bond, we have added a bridge on that bond, and now we must give a name to this physical network. You may see this name referred to as a “bridge-mapping”, a “physicalNetworkName”, or an “external network”. You can simply think of this as an alias for the bridge, and it will be used to direct network connections to the bridge and out the port.
Here is the NNCP to create the external network name.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ovs-bridge-mapping-physnet-vmdata
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    ovn:
      bridge-mappings:
        - localnet: physnet-vmdata
          bridge: br-vmdata
          state: present
Network Attachment Definitions
A NetworkAttachmentDefinition (NAD) includes a blob of JSON data that defines a CNI configuration. The configuration drives a plugin that is one of several “types”, with each type having additional arguments.
CNI Plugin Types
There are many, but these two are our focus here.
- cnv-bridge - Use when attaching to a Linux Bridge
- ovn-k8s-cni-overlay - Has a topology parameter which may be:
  - localnet - Define a network local to the node (datacenter networks)
  - layer2 - Define an overlay network (e.g. a private replication or healthcheck network)
  - layer3 - Define a routable overlay network
Use the ovn-k8s-cni-overlay plugin type and the localnet topology.
Unfortunately, when it comes to localnet, the NAD is overloaded. Not only does it define a namespaced attachment, it also references a logical network and will implicitly create it if not already defined. It may be reasonable to expect a custom resource definition for a network, but at this time the network turns out to be a side effect of the NAD. This can be a source of misconfiguration and confusion.
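As a minimal sketch of such a NAD, assuming the physnet-vmdata bridge mapping created earlier and a hypothetical VLAN ID of 1924 (check the exact field schema for your OpenShift version, as the ovn-k8s-cni-overlay config has evolved):

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan-1924
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "vlan-1924",
      "type": "ovn-k8s-cni-overlay",
      "topology": "localnet",
      "netAttachDefName": "default/vlan-1924",
      "vlanID": 1924,
      "physicalNetworkName": "physnet-vmdata"
    }
```

Note the two names: metadata.name is the namespaced attachment, while the "name" inside spec.config is the cluster-scoped logical network that gets implicitly created, which is exactly the overloading described above.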
Accessing Datacenter VLANs from OpenShift Virtual Machines
Attaching a virtual machine to a “physical” network in a datacenter, as opposed to the private cluster network already present on the OpenShift cluster, requires the coordination of a few resources.
Visualizing The Network Configuration Resources
[Diagram: A CNV worker node. The machine network (192.168.4.0/24) reaches eno1 and eno2, which are bonded into bond0 and attached to br-ex; br-ex patches to br-int, which serves the service network (172.30.0.0/16) and cluster network (10.128.0.0/14). A second NIC, ens224, carries an 802.1q trunk into br-vmdata, created by an NNCP along with its bridge mappings, which feed an OVN localnet switch. NADs named 'vlan-1924' in each namespace attach VM secondary interfaces (net1/net2) to that localnet switch, while primary interfaces (eth0) remain on the cluster network.]
Node Level View
ssh or oc debug into the node. In the node OS there is no ovn-nbctl, only an ovs-vsctl.
sh-5.1# hostname
hub-v4tbg-cnv-99zmp
sh-5.1# ovs-vsctl list-br
br-ex
br-int
br-vmdata
sh-5.1# nmcli con
NAME UUID TYPE DEVICE
ovs-if-br-ex aec716fd-096b-4ef6-a6cb-96d8fecf5fe3 ovs-interface br-ex
Wired connection 2 10391244-3dbb-3ade-a26d-f8c361c346b2 ethernet ens224
br-ex da9a4c2c-9071-445c-8426-183b5b3e05f0 ovs-bridge br-ex
br-vmdata-br 2acd0411-fa90-435e-9bec-1b3d9a5ef827 ovs-bridge br-vmdata
ens224-port 1cf8c810-a55b-4938-8f00-f2ca57803881 ovs-port ens224
ovs-if-phys0 902e86b9-6c95-4845-b9eb-64fbb3cca58b ethernet ens192
ovs-port-br-ex 561b1a06-3a6b-4313-b119-5d3f3caf1800 ovs-port br-ex
ovs-port-phys0 404a5336-503f-4144-961e-35635fd92fc6 ovs-port ens192
lo efb54016-6e10-4020-bb8b-fd2d8c6577a0 loopback lo
Wired connection 1 bcd3a32d-6de8-3ebc-87b1-8f843871b1e3 ethernet --
ovs-vsctl show will display all the Open vSwitch bridges and ports.
We can view the ports that exist on a given bridge.
sh-5.1# ovs-vsctl list-ports br-ex
ens192
patch-br-ex_hub-v4tbg-cnv-99zmp-to-br-int
sh-5.1# ovs-vsctl list-ports br-vmdata
ens224
patch-vlan.1924_ovn_localnet_port-to-br-int
We can view which networks are mapped to which OVS bridges.
sh-5.1# ovs-vsctl get Open_vSwitch . external_ids:ovn-bridge-mappings
"machine-net:br-ex,physnet:br-ex,trunk:br-trunk,vlan-1924:br-vmdata,vlan-1926:br-vmdata"
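The mapping string is a simple comma-separated list of network:bridge pairs. A quick self-contained sketch (using the sample value above as a shell variable) renders it one mapping per line:

```shell
# Sample ovn-bridge-mappings value, as returned by ovs-vsctl above
mappings='machine-net:br-ex,physnet:br-ex,trunk:br-trunk,vlan-1924:br-vmdata,vlan-1926:br-vmdata'

# Split on commas, then print one "network -> bridge" pair per line
echo "$mappings" | tr ',' '\n' | awk -F: '{printf "%-12s -> %s\n", $1, $2}'
```

This makes it easy to spot at a glance that both vlan-1924 and vlan-1926 ride the same br-vmdata uplink.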
WIP Topics
Test Cases to Explore
- Dual NIC √
  - NIC 1: Default Interface, br-ex
  - NIC 2: 802.1q trunk VM Data Interface, br-vmdata
- Single NIC without VLANs
  - NIC 1: Default Interface, br-ex
- Single NIC with 802.1q and a native VLAN for br-ex
- Single NIC with 802.1q and a tagged VLAN for br-ex
Summary
It is important to understand that the name found in the Multus config defines a logical network, and that network name is cluster scoped, meaning it should not be re-used unless the configuration is identical.
References
- Secondary networks connected to the physical underlay for KubeVirt VMs using OVN-Kubernetes - kubevirt.io
- Kubernetes API Concepts
- NMstate, OpenShift 4.21 NMState Operator Docs
- KubeVirt
- Connecting OpenShift VM to an OVN Secondary Network - OpenShift Docs
- CNI Specification
- Free Range Routing Kubernetes (frr-k8s)
- Gateway API
- OpenShift Cluster Ingress Operator
- Cluster API IPAM Contract
- Metal3 IP Address Manager
- Multus CNI
- MetalLB
- OpenShift Cluster Network Operator
- KubeVirt Cluster Network Addons Operator
- Kubernetes Services and Networking
- Kubernetes NMState
- Network Policy API
- OpenShift Route Documentation
- Whereabouts CNI IPAM
- Which bonding modes work when used with a bridge that virtual machine guests or containers connect to?