BGP Implementation in Red Hat OpenStack using FRR
Sept 24, 2023
5 min read
Introduction
Border Gateway Protocol (BGP) enables efficient routing in large-scale environments and has become essential for connecting network segments without Layer 2 spanning technologies or static routing.
Red Hat OpenStack Platform integrates Free Range Routing (FRR) with the OVN BGP agent to provide dynamic routing capabilities with ML2/OVN in both control and data planes. This enables pure Layer 3 data center architectures that overcome traditional Layer 2 limitations such as large failure domains and slow convergence during network failures.
This document provides a technical overview of BGP in Red Hat OpenStack Platform, including architecture details, configuration examples, and implementation scenarios.
Understanding BGP Basics
BGP Fundamentals
BGP (Border Gateway Protocol) is a standardized exterior gateway protocol designed for routing across administrative domains. It serves as the primary routing protocol of the Internet.
Key BGP concepts:
Path Vector Protocol: BGP tracks the sequence of Autonomous Systems (ASes) that routes traverse, using this information for routing decisions.
Autonomous Systems: Networks managed by a single entity under a common routing policy. Each AS has a unique Autonomous System Number (ASN).
BGP Peers: BGP routers establish TCP-based connections with other BGP routers to exchange routing information.
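To make these concepts concrete, here is a minimal FRR sketch (the addresses and ASNs are invented placeholders, not from a real deployment) in which a router in one AS establishes an eBGP session with a peer in another AS and originates a prefix:
router bgp 64999
 bgp router-id 192.0.2.1
 neighbor 192.0.2.254 remote-as 65000   # eBGP peer in a different AS
 address-family ipv4 unicast
  network 198.51.100.0/24               # prefix originated by this AS
 exit-address-family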
BGP Use Cases in OpenStack
Red Hat OpenStack Platform specifically supports ML2/OVN dynamic routing with BGP in both control and data planes. In this environment, BGP enables:
Control Plane High Availability: Advertises control plane Virtual IPs to external routers, directing traffic to OpenStack services.
External Connectivity: Connects OpenStack workloads to external networks by advertising routes to floating IPs and provider network IPs.
Multi-cloud Connectivity: Links multiple OpenStack clouds together through route advertisement.
High Availability: Provides redundancy by rerouting traffic during network failures.
Subnet Failover: Enables failover of entire subnets for public provider IPs or floating IPs from one site to another.
Benefits of Dynamic Routing with BGP
BGP offers significant advantages for OpenStack environments:
Scalability: New networks and floating IPs can be routed without manual configuration.
Load Balancing: Supports equal-cost multi-path routing (ECMP) to distribute traffic efficiently.
Redundancy: Automatically reroutes traffic during network failures, critical for controllers deployed across multiple availability zones.
Interoperability: Works with diverse networking equipment and cloud platforms.
Simplified L3 Architecture: Enables pure Layer 3 data centers, avoiding Layer 2 issues like large failure domains, broadcast traffic, and slow convergence.
Distributed Network Architecture: Distributes L2 provider VLANs and floating IP subnets across L3 boundaries with no requirement to span VLANs across racks (for non-overlapping CIDRs).
Improved Data Plane Management: Provides better control and management of data plane traffic.
Next-Generation Fabric Support: Enables integration with next-generation data center and hyperscale fabric technologies.
BGP Architecture in Red Hat OpenStack Platform
Red Hat OpenStack Platform implements dynamic routing through FRR components and the OVN BGP agent. This architecture enables OpenStack deployments in pure Layer 3 data centers.
Core Components
The BGP implementation consists of three key components:
OVN BGP Agent
A Python daemon running in the ovn-controller container on Controller and Compute nodes that:
Monitors the OVN southbound database for VM and floating IP events
Notifies FRR when IP addresses need advertisement
Configures Linux kernel networking for external-to-OVN traffic routing
Manages the bgp-nic dummy interface for route advertisement
The agent uses a multi-driver implementation, allowing configuration for specific infrastructure running on OVN, such as Red Hat OpenStack Platform or Red Hat OpenShift.
Configuration file:
/etc/ovn_bgp_agent/ovn_bgp_agent.conf
[DEFAULT]
debug = False
reconcile_interval = 120
expose_tenant_networks = False

[bgp]
bgp_speaker_driver = ovn_bgp_driver
FRR Container Suite
Runs as a container on all OpenStack nodes with these components:
BGP Daemon (bgpd): Handles BGP peer connections and route advertisements. Uses capability negotiation to detect remote peer capabilities.
BFD Daemon (bfdd): Provides fast failure detection between adjacent forwarding engines.
Zebra Daemon: Interfaces between FRR and the Linux kernel routing table.
VTY Shell: Command-line interface for configuration and monitoring.
Configuration file:
/etc/frr/frr.conf
frr version 8.1
frr defaults traditional
hostname overcloud-controller-0
log syslog informational
service integrated-vtysh-config
!
router bgp 64999
 bgp router-id 172.30.1.1
 neighbor 172.30.1.254 remote-as 65000
 !
 address-family ipv4 unicast
  network 192.0.2.0/24
  redistribute connected
 exit-address-family
!
Linux Kernel Networking
Handles packet routing based on FRR information, with components configured by the OVN BGP agent:
IP Rules directing traffic to specific routing tables
Virtual Routing and Forwarding (VRF) for network separation
The bgp-nic dummy interface for route advertisement
Static ARP/NDP entries for OVN router gateway ports
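For illustration, the plumbing the agent maintains is roughly equivalent to the following manual commands (a sketch only; the agent performs these steps internally, and the VRF routing-table number is an assumed value):
# Create the VRF and attach the dummy interface used for advertisement
$ sudo ip link add bgp_vrf type vrf table 10
$ sudo ip link add bgp-nic type dummy
$ sudo ip link set bgp-nic master bgp_vrf
$ sudo ip link set bgp-nic up

# Steer traffic for an exposed prefix to the provider bridge routing table
$ sudo ip rule add to 172.16.0.0/16 table br-ex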
Component Interaction Flow
When a new VM is created or a floating IP is assigned:
OVN controller updates the southbound database with new port information
OVN BGP agent detects the change through database monitoring
Agent adds the IP address to the bgp-nic dummy interface
Agent configures IP rules and routes to direct traffic to the OVS provider bridge
Zebra detects the new IP and notifies the BGP daemon
BGP daemon advertises the route to all peers
External routers update their routing tables
BGP Advertisement and Traffic Redirection
The process of advertising network routes begins with the OVN BGP agent triggering FRR to advertise directly connected routes. When traffic arrives at the node, the agent adds:
IP rules
Routes
OVS flow rules
These redirect traffic to the OVS provider bridge (br-ex) using Red Hat Enterprise Linux kernel networking. The OVN BGP agent ensures IP addresses are advertised whenever they are added to the bgp-nic interface.
Network Traffic Flow
Incoming traffic to OpenStack VMs:
External router forwards packet to the OpenStack node advertising the route
OpenStack node processes the packet according to configured IP rules
Traffic is directed to the OVS provider bridge (br-ex)
OVS flows redirect traffic to the OVN overlay
OVN overlay delivers the packet to the VM
Outgoing traffic from OpenStack VMs:
VM sends packet through the OVN overlay
OVN forwards packet to the provider bridge
Linux network stack processes the packet
Packet is routed according to kernel routing table
Packet exits through the appropriate physical interface
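Both paths can be observed on a node with standard inspection commands (a sketch; the bridge and table names follow the defaults used throughout this document):
# IP rules steering ingress traffic to the provider bridge table
$ ip rule list

# Kernel routes in the provider bridge routing table
$ ip route show table br-ex

# OVS flows that hand traffic between the kernel and the OVN overlay
$ sudo ovs-ofctl dump-flows br-ex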
Key Configuration Parameters
FRR BGP ASN: Autonomous System Number used by BGP (default: 65000)
BGP Router ID: Unique identifier for the BGP router
OVN BGP Agent Driver: Controls VM IP advertisement method (default: ovn_bgp_driver)
Expose Tenant Networks: Whether to advertise tenant network IPs (default: False)
Maximum Paths: Number of equal-cost paths for ECMP
BFD Timer: Frequency of peer liveness checks
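As an illustrative sketch, these parameters map onto the two configuration files roughly as follows (the values are the defaults listed above or placeholders, not a tested deployment):
# /etc/frr/frr.conf (excerpt)
router bgp 65000                      # FRR BGP ASN
 bgp router-id 172.30.1.1             # BGP router ID
 maximum-paths 8                      # ECMP paths
 neighbor 172.30.1.254 remote-as 65000
 neighbor 172.30.1.254 bfd            # BFD liveness checks

# /etc/ovn_bgp_agent/ovn_bgp_agent.conf (excerpt)
[DEFAULT]
expose_tenant_networks = False        # tenant network advertisement
[bgp]
bgp_speaker_driver = ovn_bgp_driver   # agent driver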
These components work together to provide a robust, scalable dynamic routing solution in Red Hat OpenStack Platform environments.
FRR: The Free Range Routing Suite
Free Range Routing (FRR) powers the BGP implementation in Red Hat OpenStack Platform as a containerized service integrated with OVN.
What is FRR?
Free Range Routing (FRR) is an IP routing protocol suite that maintains routing tables on OpenStack nodes. It was forked from Quagga to overcome that project's limitations and is officially included in Red Hat Enterprise Linux.
Key FRR components in OpenStack:
BGP daemon (bgpd): Implements BGP version 4, handling peer capabilities and communicating with the kernel through Zebra. Uses capability negotiation to detect remote peer capabilities.
BFD daemon (bfdd): Provides fast failure detection between adjacent forwarding engines.
Zebra daemon: Coordinates routing information from the various FRR daemons and updates the kernel routing table.
VTY shell (vtysh): Command interface that aggregates the CLI commands from all daemons and presents them in a unified interface.
FRR Features in OpenStack
FRR provides several critical features for OpenStack:
Equal-Cost Multi-Path Routing (ECMP): Enables load balancing across multiple paths. Each protocol daemon in FRR uses different methods to manage ECMP policy.
Example configuration:
router bgp 65000
 maximum-paths 8
BGP Advertisement Mechanism: Works with the OVN BGP agent to advertise IP addresses from VMs and load balancers
Sample configuration template:
router bgp {{ bgp_as }}
 address-family ipv4 unicast
  import vrf {{ vrf_name }}
 exit-address-family
 address-family ipv6 unicast
  import vrf {{ vrf_name }}
 exit-address-family

router bgp {{ bgp_as }} vrf {{ vrf_name }}
 bgp router-id {{ bgp_router_id }}
 address-family ipv4 unicast
  redistribute connected
 exit-address-family
Integration with OpenStack: Uses a VRF (bgp_vrf) and a dummy interface (bgp-nic) to redirect traffic between external networks and OVN
Why Red Hat Chose FRR
FRR was selected for the OpenStack BGP implementation for these reasons:
Clean OVN Integration: Works seamlessly with the OVN BGP agent monitoring the OVN southbound database
Agent-FRR interaction:
# Agent communicates with FRR through VTY shell
$ vtysh --vty_socket -c <command_file>
Direct Kernel Integration: Zebra daemon efficiently communicates with the kernel routing table
Enterprise BGP Features: Supports critical functionality:
BGP graceful restart for preserving forwarding state
BFD for sub-second failure detection
IPv4/IPv6 support
VRF for network separation
Graceful restart configuration:
router bgp 65000
 bgp graceful-restart
 bgp graceful-restart notification
 bgp graceful-restart restart-time 60
 bgp graceful-restart preserve-fw-state
RHEL Integration: Included with Red Hat Enterprise Linux, providing consistent support within the Red Hat ecosystem
Case Studies and Implementation Examples
This section covers practical implementation scenarios for BGP in Red Hat OpenStack Platform.
Scenario 1: Control Plane High Availability
BGP enables highly available OpenStack API endpoints without traditional L2 spanning across sites.
Technical implementation:
Controllers deployed across multiple racks in separate L3 segments
Each rack’s Top-of-Rack (ToR) switch acts as a BGP peer
Control plane services use a VIP advertised via BGP
FRR configuration example:
# FRR configuration on OpenStack Controller
router bgp 64999
 bgp router-id 172.30.1.1
 neighbor 172.30.1.254 remote-as 65000
 address-family ipv4 unicast
  network 192.1.1.1/32
 exit-address-family
OVN BGP agent monitors control plane events
Pacemaker influences BGP route advertisements based on controller health
Fast convergence enables rapid failover
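Failover can be verified from any FRR peer by checking which node currently advertises the VIP (a hypothetical check reusing the addresses from the example above):
# Confirm the VIP's current best path and next hop
$ sudo podman exec -it frr vtysh -c 'show ip bgp 192.1.1.1/32'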

Scenario 2: Multi-Cloud Connectivity
BGP enables secure connectivity between multiple OpenStack clouds and external networks.
Technical implementation:
Each OpenStack cloud uses a unique ASN
Border nodes run FRR with eBGP peering to external routers
FRR configuration example:
# FRR configuration on border node
router bgp 64999
 bgp router-id 10.0.0.1
 neighbor 203.0.113.1 remote-as 65001   # External peer
 address-family ipv4 unicast
  network 172.16.0.0/16                 # Tenant network range
  redistribute connected
 exit-address-family
IP rules configured by OVN BGP agent:
# IP rules example
$ ip rule
0:      from all lookup local
1000:   from all lookup [l3mdev-table]
32000:  from all to 172.16.0.0/16 lookup br-ex   # tenant networks
32766:  from all lookup main
32767:  from all lookup default

Scenario 3: ECMP Load Balancing and Redundancy
FRR implements Equal-Cost Multi-Path (ECMP) routing for load balancing and redundancy.
Technical implementation:
ECMP configuration in FRR:
# ECMP configuration
router bgp 64999
 maximum-paths 8
 maximum-paths ibgp 8
BFD for fast failure detection:
# BFD configuration
router bgp 64999
 neighbor 192.0.2.1 bfd
 neighbor 192.0.2.2 bfd
Traffic is automatically rerouted to available paths during failures
Traffic redirection components set up by the OVN BGP agent:
Dummy interface (bgp-nic) added to a VRF (bgp_vrf)
Routes added to the OVS provider bridge routing table
ARP/NDP entries configured for OVN router gateway ports
OVS flows for traffic redirection
Scenario 4: Dynamic Route Advertisement
BGP simplifies scaling by dynamically advertising routes as new resources are provisioned.
Implementation workflow:
New VM with IP 172.16.5.10 created on Compute node
OVN BGP agent detects VM in southbound database
Agent adds IP to dummy interface:
$ ip addr add 172.16.5.10/32 dev bgp-nic
FRR’s Zebra daemon detects IP and advertises via BGP
Agent configures traffic redirection:
$ ovs-ofctl add-flow br-ex "priority=900,ip,in_port=patch-provnet-1,actions=mod_dl_dst:<bridge_mac>,NORMAL"
External BGP peers receive route and can reach VM
For floating IPs, similar automation occurs when they’re associated with instances, eliminating manual route configuration as the environment scales.
Scenario 5: Distributed L2 Provider VLANs
BGP allows distributing L2 provider VLANs and floating IP subnets across L3 boundaries.
Technical implementation:
Provider VLANs distributed across racks without spanning VLANs (for non-overlapping CIDRs)
Configuration allows separation of provider networks across physical boundaries
Traffic routed between segments using BGP-advertised routes instead of traditional L2 connectivity
Simplified network design with reduced broadcast domains
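A minimal sketch of this layout (the subnets and ASNs are invented for the example): each rack's nodes peer only with the local ToR switch and advertise the locally attached slice of the provider network, so no VLAN spans racks:
# Rack 1 node; Rack 2 mirrors this with its own ToR and local segment
router bgp 64999
 neighbor 172.30.1.254 remote-as 65000   # Rack 1 ToR switch
 address-family ipv4 unicast
  redistribute connected                 # advertises the local provider segment
 exit-address-family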
Scenario 6: Tenant Network Exposure
OpenStack can optionally expose tenant networks via BGP using a special configuration flag.
Technical implementation:
Set the expose_tenant_networks flag to True in OVN BGP agent configuration:
[DEFAULT]
expose_tenant_networks = True
With this setting, tenant network IPs are advertised just like provider networks
This feature requires non-overlapping CIDRs across tenants
Tenant VMs become directly reachable from external networks without floating IPs
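After enabling the flag and restarting the agent, an advertised tenant prefix can be verified from the FRR container (a sketch using a hypothetical tenant CIDR):
# Check that the tenant prefix is present in the BGP table
$ sudo podman exec -it frr vtysh -c 'show ip bgp 192.168.100.0/24'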
Load Balancing with BGP in Red Hat OpenStack Platform
Red Hat OpenStack Platform leverages BGP for network performance optimization and high availability. The implementation combines FRR’s BGP capabilities with the OVN BGP agent for efficient traffic distribution.
ECMP Implementation
Equal-Cost Multi-Path (ECMP) routing is configured through FRR’s BGP daemon:
# FRR configuration for ECMP
router bgp 64999
# Enable up to 8 equal-cost paths
maximum-paths 8
# Enable ECMP for iBGP peering
maximum-paths ibgp 8
This configuration allows FRR to maintain multiple equal-cost paths in the routing table. The kernel then distributes traffic using a hash algorithm based on packet header information.
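The hash inputs are tunable on the kernel side; for example, switching IPv4 multipath hashing from the default L3 fields to a 5-tuple policy (a common adjustment, shown here as a sketch):
# 0 = hash on L3 (source/destination IP), 1 = 5-tuple including L4 ports
$ sudo sysctl -w net.ipv4.fib_multipath_hash_policy=1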
Traffic Flow and Redirection
When network traffic arrives at a node, the OVN BGP agent adds several components to redirect traffic:
IP Rules: Direct traffic to specific routing tables
Routes: Point to the OVS provider bridge
OVS Flow Rules: Redirect traffic to the OVN overlay
These configurations work together to enable traffic to flow between external networks and the OVN overlay using RHEL kernel networking, without requiring Layer 2 connectivity between nodes.
Technical Components
The load balancing implementation includes these key components:
Route Advertisement: The OVN BGP agent identifies routes to advertise:
Virtual IP addresses for OpenStack services
Provider network endpoints
Floating IP addresses
Multiple BGP Peers: Configuration with multiple Top-of-Rack switches:
# Multiple BGP peers configuration
router bgp 64999
 neighbor 192.168.1.1 remote-as 65000   # ToR Switch 1
 neighbor 192.168.2.1 remote-as 65000   # ToR Switch 2
 address-family ipv4 unicast
  network 10.0.0.0/24                   # Advertise network to both peers
 exit-address-family
VIP Failover: When a node fails, the OVN BGP agent:
Removes the VIP advertisement from the failed node
Triggers advertisement from a healthy node
External routers then automatically update their routing tables
Advanced Traffic Engineering
Red Hat OpenStack Platform supports traffic engineering through BGP attributes:
AS Path Prepending: Influence path selection:
# Make a path less preferred
router bgp 64999
 address-family ipv4 unicast
  neighbor 192.168.1.1 route-map PREPEND out
 exit-address-family
!
route-map PREPEND permit 10
 set as-path prepend 64999 64999
BGP Communities: Tag routes for selective routing:
# Set community values
router bgp 64999
 address-family ipv4 unicast
  network 10.0.0.0/24 route-map SET-COMMUNITY
 exit-address-family
!
route-map SET-COMMUNITY permit 10
 set community 64999:100
BFD Integration: Fast failure detection:
# Enable BFD
router bgp 64999
 neighbor 192.168.1.1 bfd
 neighbor 192.168.2.1 bfd
Monitoring
Commands to monitor BGP load balancing status:
# Check BGP peers status
$ sudo podman exec -it frr vtysh -c 'show bgp summary'
# View active routes and next-hops
$ sudo podman exec -it frr vtysh -c 'show ip bgp'
# Verify ECMP routes
$ sudo podman exec -it frr vtysh -c 'show ip route'
These commands help administrators verify that load balancing is functioning correctly and troubleshoot any issues that might arise.
Troubleshooting BGP in Red Hat OpenStack Platform
Diagnosing problems in a Red Hat OpenStack Platform environment that uses BGP begins with examining logs and querying FRR components with VTY shell.
Log Locations
The key log files for troubleshooting BGP issues are:
OVN BGP Agent logs: Located on Compute and Networker nodes
/var/log/containers/stdouts/ovn_bgp_agent.log
FRR component logs: Located on all nodes where FRR is running
/var/log/containers/frr/frr.log
Using VTY Shell for Troubleshooting
VTY shell allows interaction with FRR daemons to diagnose BGP routing issues.
Accessing VTY Shell
Log in to the node where you need to troubleshoot BGP
Enter the FRR container:
$ sudo podman exec -it frr bash
You can use VTY shell in two different modes:
Interactive mode:
$ sudo vtysh
> show bgp summary
Direct mode:
$ sudo vtysh -c 'show bgp summary'
Useful Troubleshooting Commands
The following commands help diagnose common BGP issues:
Display BGP routing tables:
# For IPv4
> show ip bgp <IPv4_address> | all

# For IPv6 (omit the 'ip' argument)
> show bgp <IPv6_address> | all
Show routes advertised to a peer:
> show ip bgp neighbors <router-ID> advertised-routes
Show routes received from a peer:
> show ip bgp neighbors <router-ID> received-routes
Check BGP peer status:
> show bgp summary
Verify BGP configuration:
> show running-config
Common BGP Issues
Here are some common issues you might encounter and how to address them:
BGP Peers Not Establishing Connection
Check IP connectivity between peers
Verify ASN configuration matches on both sides
Check for firewall rules blocking BGP port (TCP 179)
Examine logs for capability negotiation issues
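These checks can be worked through from the affected node with standard tools (a sketch; the peer address is reused from the earlier examples):
# Basic IP reachability to the peer
$ ping -c 3 172.30.1.254

# Look for a session on the BGP port
$ ss -tn | grep :179

# Watch the BGP exchange, including the OPEN/capability negotiation
$ sudo tcpdump -i any -n tcp port 179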
Routes Not Being Advertised
Verify the OVN BGP agent is running
Check if IP addresses are added to the bgp-nic interface
Inspect FRR configuration for proper route redistribution
Check for route filtering that might prevent advertisement
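A quick sketch of these checks (container names can vary between deployments):
# Confirm the agent and FRR containers are running
$ sudo podman ps | grep -e ovn -e frr

# Confirm the expected addresses are present on the dummy interface
$ ip addr show dev bgp-nic

# Confirm redistribution and filtering in the running configuration
$ sudo podman exec -it frr vtysh -c 'show running-config'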
Traffic Not Reaching VMs
Verify OVS flow rules are correctly installed
Check IP rules and routing table entries
Ensure ARP/NDP proxy is enabled on the provider bridge
Confirm VRF configuration is correct
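These data plane checks map to the following commands (a sketch; br-ex is the provider bridge name used throughout this document):
# OVS flow rules on the provider bridge
$ sudo ovs-ofctl dump-flows br-ex

# IP rules and the per-bridge routing table
$ ip rule list
$ ip route show table br-ex

# ARP proxy state on the provider bridge (IPv4)
$ sysctl net.ipv4.conf.br-ex.proxy_arp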
Slow Convergence After Failures
Check if BFD is enabled and configured correctly
Verify timers are set appropriately
Inspect BGP graceful restart configuration
Check for any route dampening that might delay reconvergence
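When tuning convergence, BFD session timers can be set explicitly in FRR's bfdd (illustrative values; more aggressive timers detect failures faster at the cost of more control traffic and a higher false-positive risk):
bfd
 peer 192.0.2.1
  receive-interval 300     # minimum interval (ms) to accept packets
  transmit-interval 300    # minimum interval (ms) between sent packets
  detect-multiplier 3      # missed packets before declaring the peer down
 exit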