OpenShift Virtualization with Localnet Configuration
April 22, 2025
15 min read
Configure OpenShift Virtualization to use localnet for direct north/south connectivity of VMs without dedicated interfaces.
This use case demonstrates how to configure OpenShift Virtualization to leverage the existing br-ex OVS bridge for direct north/south connectivity of virtual machines. Instead of using a dedicated interface, this approach reuses the main OpenShift interface to provide IP addresses from your baremetal network to your KubeVirt virtual machines.
See also
For more information about OpenShift Virtualization networking, check out the official OpenShift Virtualization Documentation.
Note
This configuration is compatible with OpenShift 4.18 and later.
Implementation Steps
1. Understanding Localnet and br-ex Bridge
The OpenShift br-ex bridge typically provides external connectivity for the cluster. By configuring a localnet mapping to this bridge, we can:
- Allow virtual machines to obtain IP addresses directly from the baremetal network
- Enable direct north/south connectivity without network translation
- Simplify network architecture by reusing existing resources
Note
The br-ex bridge is part of OVN-Kubernetes networking in OpenShift. This approach eliminates the need for dedicated physical interfaces for VM connectivity.
2. Configure NodeNetworkConfigurationPolicy
Create a NodeNetworkConfigurationPolicy (NNCP) to add a localnet mapping to the br-ex bridge:
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br-ex-network
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ''
  desiredState:
    ovn:
      bridge-mappings:
        - localnet: br-ex-network
          bridge: br-ex
          state: present
Apply the policy:
oc apply -f br-ex-nncp.yaml
3. Verify NodeNetworkConfigurationPolicy Status
Check that the policy has been applied correctly:
oc get nncp
Expected output:
NAME            STATUS      REASON
br-ex-network   Available   SuccessfullyConfigured
Verify the node network configuration enactments:
oc get nnce
Expected output:
NAME                        STATUS      STATUS AGE   REASON
<node-name>.br-ex-network   Available   <age>        SuccessfullyConfigured
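On clusters with many workers, scanning this table by eye is error-prone. A small sketch that fails if any enactment is not Available; the awk filter assumes the column layout shown above (STATUS in the second column), and the sample table with hypothetical node names stands in for the live `oc get nnce --no-headers` output:

```shell
# Flag any enactment whose STATUS column is not "Available".
# Live usage:  oc get nnce --no-headers | awk '$2 != "Available" {print; fail=1} END {exit fail}'
sample='worker-0.br-ex-network   Available   12m   SuccessfullyConfigured
worker-1.br-ex-network   Available   12m   SuccessfullyConfigured'
echo "$sample" | awk '$2 != "Available" {print; fail=1} END {exit fail}' \
  && echo "all enactments Available"
```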
4. Create NetworkAttachmentDefinition
Create a NetworkAttachmentDefinition (NAD) that will use the localnet:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: br-ex-network
  namespace: default
spec:
  config: '{
      "name": "br-ex-network",
      "type": "ovn-k8s-cni-overlay",
      "cniVersion": "0.4.0",
      "topology": "localnet",
      "netAttachDefName": "default/br-ex-network"
    }'
Apply the NetworkAttachmentDefinition:
oc apply -f br-ex-network-nad.yaml
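One common pitfall: the netAttachDefName inside the embedded JSON must equal the NAD's own `<namespace>/<name>`, or OVN-Kubernetes refuses attachments. A minimal local sanity check, with values copied from the manifest above (python3 is used only as a JSON parser):

```shell
# Verify netAttachDefName matches "<namespace>/<name>" of the NAD itself.
ns=default
name=br-ex-network
config='{"name":"br-ex-network","type":"ovn-k8s-cni-overlay","cniVersion":"0.4.0","topology":"localnet","netAttachDefName":"default/br-ex-network"}'
ref=$(echo "$config" | python3 -c 'import json,sys; print(json.load(sys.stdin)["netAttachDefName"])')
[ "$ref" = "$ns/$name" ] && echo "netAttachDefName OK"
```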
VLAN Configuration Example
To configure a NetworkAttachmentDefinition with a specific VLAN ID, use the vlanID property in the config. This is particularly useful for environments where network segmentation is required:
Tip
Using VLANs with localnet can help maintain network isolation while still leveraging the existing physical infrastructure.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan-network
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vlan-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "localnet",
      "vlanID": 200,
      "netAttachDefName": "default/vlan-network"
    }
Apply the VLAN NetworkAttachmentDefinition:
oc apply -f vlan-network-nad.yaml
In this example, traffic from virtual machines connected to this network is tagged with VLAN ID 200 on the physical network; OVS adds and strips the tag at the localnet port, so the guest interface itself sees untagged frames.
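Note that vlanID is a JSON number; quoting it as a string would likely fail to parse on the OVN-Kubernetes side (an assumption based on the plugin expecting an integer field). A quick local check of the config above:

```shell
# Confirm vlanID parses as an integer; config copied from the manifest above.
config='{"cniVersion":"0.3.1","name":"vlan-network","type":"ovn-k8s-cni-overlay","topology":"localnet","vlanID":200,"netAttachDefName":"default/vlan-network"}'
echo "$config" | python3 -c 'import json, sys
v = json.load(sys.stdin)["vlanID"]
assert isinstance(v, int), "vlanID must be a JSON number"
print("vlanID:", v)'
```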
5. Adding Network Interface to Virtual Machines
To attach a VM to the localnet bridge, modify your VM definition to include an additional network interface:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  runStrategy: Always
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
            - name: br-ex-interface
              bridge: {}
        resources:
          requests:
            memory: 2Gi
      networks:
        - name: default
          pod: {}
        - name: br-ex-interface
          multus:
            networkName: default/br-ex-network
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
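Once the manifest is applied and the VMI is running, its status lists one entry per interface, so the pod IP and the baremetal address appear side by side. The jsonpath below uses the KubeVirt VMI status fields; the sample addresses are hypothetical stand-ins for the live output:

```shell
# Cluster commands, shown for reference:
#   oc apply -f example-vm.yaml
#   oc get vmi example-vm -o jsonpath='{.status.interfaces[*].ipAddress}'
# With two interfaces the output is two addresses, e.g.:
sample='10.128.0.45 192.168.1.87'   # hypothetical pod IP + baremetal IP
set -- $sample
echo "pod network IP: $1"
echo "baremetal network IP: $2"
```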
Testing and Validation
1. Verify VM Network Configuration
Connect to the VM console or SSH:
virtctl console example-vm # or virtctl ssh example-vm
Check network interfaces:
ip addr show
Verify you have two interfaces:
- First interface connected to the pod network (default)
- Second interface connected to the br-ex network, with an IP address from your baremetal network
2. Test External Network Connectivity
Test network connectivity:
# From inside the VM
ping 1.1.1.1
Test that external hosts can directly reach the VM on its baremetal IP:
# From an external machine
ping <vm-ip-address>
Important
VMs using localnet networking will be directly exposed to your physical network, so ensure proper security measures are in place.