BGP Implementation in Red Hat OpenStack using FRR¶
Sept 24, 2023
5 min read
Introduction¶
The Border Gateway Protocol (BGP) is widely used over the internet for managing routes and enabling efficient data transfer within large-scale environments. As organizations move toward more distributed infrastructures, BGP has become essential for connecting multiple network segments without relying on Layer 2 spanning technologies or static routing.
Red Hat OpenStack Platform has incorporated the Free Range Routing (FRR) suite with the OVN BGP agent to provide dynamic routing capabilities. This integration enables pure Layer 3 data center architectures that overcome traditional Layer 2 scaling limitations such as large failure domains and convergence delays during network failures.
This document provides a technical overview of the BGP implementation in Red Hat OpenStack Platform using FRR, including architecture details, configuration examples, and real-world use cases.
Understanding BGP Basics¶
BGP Fundamentals¶
BGP (Border Gateway Protocol) is a standardized exterior gateway protocol (EGP), defined in RFC 4271, that determines how traffic is routed across the global Internet. It is considered the routing protocol of the Internet and enables routing across administrative domains.
Here are some fundamental concepts of BGP:
Path Vector Protocol: BGP is a path vector protocol, meaning it keeps track of the path (the sequence of ASes) that routes take through the Internet.
This information helps BGP make routing decisions.
Autonomous Systems (ASes): Autonomous systems are individual networks or groups of networks managed by a single entity and operating under a common routing policy.
Each AS is assigned a unique Autonomous System Number (ASN), which is used to identify it.
BGP Peers: BGP routers establish peering sessions with other BGP routers (peers).
Peering sessions run over TCP (port 179).
BGP routers exchange routing updates and reachability information with their peers; a minimal FRR peering configuration is sketched below.
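For illustration, a minimal eBGP peering in FRR looks like the following; the ASNs, addresses, and advertised prefix are placeholder values, not defaults from an OpenStack deployment:

router bgp 64999
 bgp router-id 192.0.2.10
 neighbor 192.0.2.1 remote-as 65000
 !
 address-family ipv4 unicast
  network 198.51.100.0/24
 exit-address-family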
BGP Use Cases in OpenStack¶
In the context of OpenStack, BGP can be used in multiple ways to facilitate different use cases:
Control Plane VIP: BGP advertises the control plane VIP (Virtual IP) to external routers.
This allows external routers to route traffic to the OpenStack control plane.
External Connectivity: BGP connects OpenStack workloads to external networks, such as the Internet or private data centers.
By advertising routes to floating IPs and VLAN provider network IPs to external routers, traffic can flow between OpenStack instances and the outside world.
Multi-Cloud Connectivity: BGP connects multiple OpenStack clouds together.
Advertising routes to external routers enables traffic to flow between instances in different OpenStack clouds.
High Availability: BGP is instrumental in achieving high availability and redundancy by allowing traffic to be rerouted in the event of network failures.
This ensures minimal downtime for critical applications.
Benefits of Dynamic Routing with BGP¶
Dynamic routing with BGP offers several benefits in the context of OpenStack:
Scalability: BGP scales seamlessly, making it suitable for growing OpenStack environments. New networks and floating IPs (FIPs) can be routed without manual configuration, allowing for easier expansion of your cloud infrastructure.
Load Balancing: BGP supports equal-cost multi-path routing (ECMP), which can distribute traffic across multiple paths, optimizing network utilization and ensuring efficient load balancing.
Redundancy: BGP provides high availability by automatically rerouting traffic in case of network failures, reducing the risk of service interruptions. This is especially important for controllers deployed across availability zones with separate L2 segments or physical sites.
Interoperability: BGP is a widely accepted standard, ensuring compatibility with various networking devices and cloud platforms. This simplifies integration with existing network infrastructure.
Simplified L3 Architecture: Deploying clusters in a pure Layer 3 (L3) data center with BGP overcomes the scaling issues of traditional Layer 2 (L2) infrastructures such as large failure domains, high volume broadcast traffic, or long convergence times during failure recoveries.
BGP Architecture in Red Hat OpenStack Platform¶
Red Hat OpenStack Platform implements dynamic routing through a layered architecture that combines FRR components with the OVN BGP agent. Understanding this architecture is essential for successfully deploying and troubleshooting BGP in your OpenStack environment.
Core Components¶
The BGP implementation in Red Hat OpenStack Platform consists of these key components:
OVN BGP Agent
The OVN BGP agent is a Python-based daemon that runs in the ovn-controller container on all Controller and Compute nodes. The agent performs several critical functions:
Monitors the OVN southbound database for VM and floating IP events
Notifies the FRR BGP daemon when IP addresses need to be advertised
Configures the Linux kernel networking stack to route external traffic to the OVN overlay
Manages the bgp-nic dummy interface used for route advertisement
The agent’s configuration is stored in /etc/ovn_bgp_agent/ovn_bgp_agent.conf and typically includes:

[DEFAULT]
debug = False
reconcile_interval = 120
expose_tenant_networks = False

[bgp]
bgp_speaker_driver = ovn_bgp_driver
FRR Container Suite
FRR runs as a container (frr) on all OpenStack nodes and includes several daemons that work together:
BGP Daemon (bgpd): Implements BGP version 4, handling peer connections and route advertisements
BFD Daemon (bfdd): Provides fast failure detection between forwarding engines
Zebra Daemon: Acts as an interface between FRR daemons and the Linux kernel routing table
VTY Shell: Provides a command-line interface for configuration and monitoring
FRR configuration is typically stored in /etc/frr/frr.conf with content such as:

frr version 8.1
frr defaults traditional
hostname overcloud-controller-0
log syslog informational
service integrated-vtysh-config
!
router bgp 64999
 bgp router-id 172.30.1.1
 neighbor 172.30.1.254 remote-as 65000
 !
 address-family ipv4 unicast
  network 192.0.2.0/24
  redistribute connected
 exit-address-family
!
Linux Kernel Networking
The Linux kernel handles actual packet routing based on the information provided by FRR. The OVN BGP agent configures several kernel components (a sketch of how to inspect them follows this list):
IP Rules: Direct traffic to specific routing tables
VRF (Virtual Routing and Forwarding): Provides network namespace separation
Dummy Interface: The bgp-nic interface is used to advertise routes
ARP/NDP Entries: Static entries for OVN router gateway ports
Component Interaction Flow¶
When a new VM is created or a floating IP is assigned, the following sequence occurs (verification commands are shown after the list):
The OVN controller updates the OVN southbound database with the new port information
The OVN BGP agent detects the change through monitoring the database
The agent adds the IP address to the bgp-nic dummy interface
The agent configures IP rules and routes to direct traffic to the OVS provider bridge
Zebra detects the new IP on the interface and notifies the BGP daemon
The BGP daemon advertises the route to all BGP peers
External routers update their routing tables to reach the new IP address
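To verify this sequence on a node, you can check the dummy interface and the local BGP table; both commands are shown for illustration and use the same tools referenced elsewhere in this article:

# IP addresses the agent has exposed on the dummy interface
$ ip addr show bgp-nic
# Routes currently known to the local BGP daemon
$ sudo podman exec -it frr vtysh -c 'show ip bgp'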
Network Traffic Flow¶
For incoming traffic to OpenStack VMs:
External router receives a packet destined for an advertised IP
The router forwards the packet to the OpenStack node that advertised the route
The OpenStack node receives the packet and processes it according to the configured IP rules
Traffic is directed to the OVS provider bridge (br-ex)
OVS flows redirect the traffic to the OVN overlay
The OVN overlay delivers the packet to the appropriate VM
For outgoing traffic from OpenStack VMs (commands to inspect both directions are sketched after this list):
VM sends a packet to an external destination
The packet traverses the OVN overlay
OVN forwards the packet to the appropriate provider bridge
The packet is processed by the Linux network stack
The packet is routed according to the kernel routing table
The packet exits through the appropriate physical interface to the external network
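To inspect these paths on a node, the flows on the provider bridge and the kernel routing decision can be examined; the destination address below is a placeholder:

# Flows installed on the OVS provider bridge
$ sudo ovs-ofctl dump-flows br-ex
# Routing decision the kernel would make for a given external destination
$ ip route get 203.0.113.10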
Configurable Parameters¶
Key parameters that can be configured to customize the BGP implementation (an illustrative configuration excerpt follows the list):
FRR BGP ASN: The Autonomous System Number used by the BGP daemon (default: 65000)
BGP Router ID: Unique identifier for the BGP router
OVN BGP Agent Driver: Controls how VM IPs are advertised (default: ovn_bgp_driver)
Expose Tenant Networks: Whether to advertise tenant network IPs (default: False)
Maximum Paths: Number of equal-cost paths for ECMP (default varies)
BFD Timer: How frequently to check peer liveness (default varies)
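As a rough sketch of where these parameters surface, the ASN, router ID, and ECMP settings live in the FRR configuration, while the driver and tenant-network options live in the agent configuration; the values shown are examples:

# /etc/frr/frr.conf (excerpt)
router bgp 65000
 bgp router-id 172.30.1.1
 maximum-paths 8

# /etc/ovn_bgp_agent/ovn_bgp_agent.conf (excerpt)
[DEFAULT]
expose_tenant_networks = False
[bgp]
bgp_speaker_driver = ovn_bgp_driver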
These components work together to provide a robust, scalable dynamic routing solution in Red Hat OpenStack Platform environments.
FRR: The Free Range Routing Suite¶
Free Range Routing (FRR) is the primary component powering the BGP implementation within Red Hat OpenStack Platform. It operates as a containerized service that seamlessly integrates with OVN (Open Virtual Network).
Introduction to FRR¶
Free Range Routing (FRR) is an IP routing protocol suite of daemons that run in a container on all OpenStack composable roles, working together to build and maintain the routing table.
FRR originated as a fork of Quagga, aiming to overcome limitations and enhance the capabilities of traditional routing software. It is officially included in Red Hat Enterprise Linux (RHEL).
Key components of FRR in OpenStack include (an example of querying them through the VTY shell follows the list):
BGP daemon (bgpd): Implements BGP protocol version 4, running in the frr container to handle the negotiation of capabilities with remote peers and to communicate with the kernel routing table through the Zebra daemon.
BFD daemon (bfdd): Provides Bidirectional Forwarding Detection for faster failure detection between adjacent forwarding engines.
Zebra daemon: Coordinates information from various FRR daemons and communicates routing decisions directly to the kernel routing table.
VTY shell (vtysh): A shell interface that aggregates CLI commands from all daemons and presents them in a unified interface.
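On an OpenStack node, these daemons can be queried through vtysh inside the frr container; the commands below are standard vtysh invocations shown for illustration:

# Open an interactive FRR shell
$ sudo podman exec -it frr vtysh
# Or run one-off queries
$ sudo podman exec -it frr vtysh -c 'show running-config'
$ sudo podman exec -it frr vtysh -c 'show bgp summary'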
FRR Features and Capabilities in OpenStack¶
FRR provides specific features that make it ideal for BGP implementation in Red Hat OpenStack Platform:
Equal-Cost Multi-Path Routing (ECMP): FRR supports ECMP for load balancing network traffic across multiple paths, enhancing performance and resilience. Each protocol daemon in FRR uses different methods to manage ECMP policy.
Example configuration in FRR to enable ECMP with 8 paths:
router bgp 65000
 maximum-paths 8
BGP Advertisement Mechanism: FRR works with the OVN BGP agent to advertise and withdraw routes. The agent exposes IP addresses of VMs and load balancers on provider networks, and optionally on tenant networks when specifically configured.
Example of FRR’s configuration template used by the OVN BGP agent:
router bgp {{ bgp_as }}
  address-family ipv4 unicast
    import vrf {{ vrf_name }}
  exit-address-family

  address-family ipv6 unicast
    import vrf {{ vrf_name }}
  exit-address-family

router bgp {{ bgp_as }} vrf {{ vrf_name }}
  bgp router-id {{ bgp_router_id }}
  address-family ipv4 unicast
    redistribute connected
  exit-address-family
Seamless Integration with Red Hat OpenStack: The FRR implementation in Red Hat OpenStack Platform uses a VRF (Virtual Routing and Forwarding) instance named bgp_vrf and a dummy interface (bgp-nic) to handle the redirection of traffic between external networks and the OVN overlay; a sketch of this plumbing follows.
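A minimal sketch of that plumbing, assuming an arbitrary routing table number (the agent performs the equivalent configuration programmatically):

# Create the VRF and dummy interface used for BGP advertisement (illustrative)
$ ip link add bgp_vrf type vrf table 10
$ ip link add bgp-nic type dummy
$ ip link set bgp-nic master bgp_vrf
$ ip link set bgp_vrf up
$ ip link set bgp-nic up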
Why FRR in Red Hat OpenStack?¶
Red Hat chose FRR for its OpenStack BGP implementation for several technical reasons:
OVN BGP Agent Integration: FRR works seamlessly with the OVN BGP agent (ovn-bgp-agent container), which monitors the OVN southbound database for VM and floating IP events. When these events occur, the agent notifies the FRR BGP daemon to advertise the associated IP addresses. This architecture provides a clean separation between networking functions.
Example of how the OVN BGP agent interacts with FRR:

# Agent communicates with FRR through the VTY shell
$ vtysh --vty_socket <socket_dir> -c <command_file>
Versatile Kernel Integration: FRR’s Zebra daemon communicates routing decisions directly to the kernel routing table, allowing OpenStack to leverage Linux kernel networking capabilities for traffic management. When routes need to be advertised, the agent simply adds or removes them from the bgp-nic interface, and FRR handles the rest.
Advanced BGP Features Support: FRR supports critical features needed in production OpenStack environments:
- BGP graceful restart (preserves forwarding state during restarts)
- BFD for fast failure detection (sub-second)
- IPv4 and IPv6 address families
- VRF support for network separation
Example configuration for BGP graceful restart:
router bgp 65000
 bgp graceful-restart
 bgp graceful-restart notification
 bgp graceful-restart restart-time 60
 bgp graceful-restart preserve-fw-state
Supplied with RHEL: As FRR is included with Red Hat Enterprise Linux, it provides a consistent and supported solution that integrates well with the entire Red Hat ecosystem.
Case Studies and Use Cases¶
Technical Implementation Scenarios¶
Let’s explore specific technical implementations of BGP in Red Hat OpenStack Platform:
Scenario 1: Control Plane VIP with Dynamic BGP Routing
In this implementation, the OVN BGP agent works with FRR to advertise control plane Virtual IP addresses to external BGP peers. This architecture enables highly available OpenStack API endpoints without requiring traditional L2 spanning across physical sites.
Technical details:
Controllers are deployed across multiple racks in separate L3 network segments
Each rack has its own Top-of-Rack (ToR) switch acting as a BGP peer
Control plane services utilize a VIP (e.g., 192.1.1.1) that’s advertised via BGP
FRR configuration on controllers includes:
# Sample FRR configuration on an OpenStack Controller node
router bgp 64999
 bgp router-id 172.30.1.1
 neighbor 172.30.1.254 remote-as 65000
 address-family ipv4 unicast
  network 192.1.1.1/32
 exit-address-family
The OVN BGP agent monitors for control plane events and triggers FRR to advertise the VIP
Pacemaker determines controller health and influences BGP route advertisements
BGP’s fast convergence allows rapid failover when a controller becomes unavailable; a quick check of the advertised VIP is sketched below
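To confirm that a controller is advertising the VIP, the specific prefix can be looked up through vtysh; the address matches the example configuration above:

$ sudo podman exec -it frr vtysh -c 'show ip bgp 192.1.1.1/32'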

Scenario 2: Multi-Cloud Connectivity with BGP
BGP enables secure, efficient connectivity between multiple OpenStack clouds and external networks. The implementation leverages the OVN BGP agent to advertise routes to external networks.
Technical implementation:
Each OpenStack cloud has its own ASN (Autonomous System Number)
Border nodes in each cloud run FRR with eBGP peering to external routers
BGP advertisements include prefixes for tenant networks that need to be accessible
Sample FRR configuration for external connectivity:
# FRR configuration on border node
router bgp 64999
 bgp router-id 10.0.0.1
 neighbor 203.0.113.1 remote-as 65001   # External peer
 address-family ipv4 unicast
  network 172.16.0.0/16                 # OpenStack tenant network range
  redistribute connected
 exit-address-family
The OVN BGP agent configures kernel routing to redirect traffic to the OVN overlay:
# Example of IP rules added by OVN BGP agent
$ ip rule
0:      from all lookup local
1000:   from all lookup [l3mdev-table]
32000:  from all to 172.16.0.0/16 lookup br-ex   # for tenant networks
32766:  from all lookup main
32767:  from all lookup default

Scenario 3: Redundancy and Loadbalancing with ECMP
Red Hat OpenStack Platform implements Equal-Cost Multi-Path (ECMP) routing through FRR to provide load balancing and redundancy for network traffic.
Technical details:
FRR is configured to support ECMP with multiple next-hops for the same route
Sample ECMP configuration in FRR:
# Enable ECMP with up to 8 paths
router bgp 64999
 maximum-paths 8
 maximum-paths ibgp 8
BFD (Bidirectional Forwarding Detection) is enabled to detect link failures quickly:
# BFD configuration for fast failure detection
router bgp 64999
 neighbor 192.0.2.1 bfd
 neighbor 192.0.2.2 bfd
When network or hardware failures occur, traffic is automatically rerouted to available paths
The OVN BGP agent performs the following configuration to enable proper traffic flow:
# BGP traffic redirection components
- Add dummy interface (bgp-nic) to VRF (bgp_vrf)
- Add specific routes to the OVS provider bridge routing table
- Configure ARP/NDP entries for OVN router gateway ports
- Add OVS flows for traffic redirection
Scenario 4: Scaling OpenStack Infrastructure with Dynamic Advertisement
Red Hat OpenStack Platform uses BGP to simplify scaling by dynamically advertising routes as new resources are provisioned, without manual route configuration.
Technical implementation:
When new VMs or floating IPs are created, the OVN BGP agent automatically detects these changes through the OVN southbound database
The agent configures routing rules and triggers FRR to advertise the appropriate routes
Example workflow when a new VM is provisioned:
1. VM is created on a Compute node with IP 172.16.5.10
2. OVN BGP agent detects the new VM in the OVN southbound database
3. Agent adds the IP to the bgp-nic interface:
   $ ip addr add 172.16.5.10/32 dev bgp-nic
4. FRR's Zebra daemon detects the new IP and advertises it via BGP
5. Agent configures traffic redirection through OVS flows:
   $ ovs-ofctl add-flow br-ex "priority=900,ip,in_port=patch-provnet-1,actions=mod_dl_dst:<bridge_mac>,NORMAL"
6. External BGP peers receive the route and can reach the VM
For floating IPs, similar automation occurs when they’re associated with instances:
# OpenStack CLI command
$ openstack floating ip create external
# FRR automatically advertises this floating IP via BGP
# External routers can now reach this floating IP
This dynamic nature eliminates the need to manually configure routes as the environment scales
Load Balancing with BGP in Red Hat OpenStack Platform¶
Red Hat OpenStack Platform leverages BGP’s load balancing capabilities to optimize network performance and ensure high availability. The implementation uses FRR’s BGP features with the OVN BGP agent to distribute traffic efficiently across multiple paths. Here’s a detailed look at the technical implementation:
ECMP Implementation in Red Hat OpenStack Platform¶
Equal-Cost Multi-Path (ECMP) is implemented in Red Hat OpenStack Platform through FRR’s BGP daemon to distribute traffic across multiple paths with equal routing cost:
# FRR configuration for ECMP on OpenStack controller/network nodes
router bgp 64999
# Enable up to 8 equal-cost paths for load balancing
maximum-paths 8
# Enable ECMP for iBGP peering
maximum-paths ibgp 8
This configuration allows FRR to maintain multiple equal-cost paths in the routing table and distribute traffic across them. The kernel then performs the actual packet distribution using a hash algorithm based on the packet’s source IP, destination IP, and other parameters.
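As an illustration, an ECMP route installed by FRR appears in the kernel routing table with multiple next hops; the prefix, next-hop addresses, interfaces, and metric below are placeholders:

$ ip route show 10.0.0.0/24
10.0.0.0/24 proto bgp metric 20
        nexthop via 192.168.1.1 dev eth1 weight 1
        nexthop via 192.168.2.1 dev eth2 weight 1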
Technical Components for Load Balancing¶
The load balancing implementation in Red Hat OpenStack Platform consists of several key components working together:
BGP Route Advertisement: The OVN BGP agent identifies routes that need to be advertised for load balancing, such as:
* Virtual IP addresses for OpenStack services
* Provider network endpoints
* Floating IP addresses
Multiple BGP Peers: Configuration with multiple ToR switches as BGP peers:
# Multiple BGP peers configuration in FRR
router bgp 64999
 neighbor 192.168.1.1 remote-as 65000   # ToR Switch 1
 neighbor 192.168.2.1 remote-as 65000   # ToR Switch 2
 address-family ipv4 unicast
  network 10.0.0.0/24                   # Advertise network to both peers
 exit-address-family
VIP Failover Mechanism: When a node fails, the OVN BGP agent detects the failure and:
* Removes the VIP advertisement from the failed node
* Triggers advertisement from a healthy node
* External routers automatically update their routing tables
Advanced Traffic Engineering with BGP Attributes¶
Red Hat OpenStack Platform supports traffic engineering through BGP attribute manipulation:
Using AS Path Prepending: Influence path selection by prepending the AS path:
# Make a path less preferred by prepending AS numbers
router bgp 64999
 address-family ipv4 unicast
  neighbor 192.168.1.1 route-map PREPEND out
 exit-address-family
!
route-map PREPEND permit 10
 set as-path prepend 64999 64999
Using BGP Communities: Tag routes with community attributes for selective routing:
# Set community values for specific routes
router bgp 64999
 address-family ipv4 unicast
  network 10.0.0.0/24 route-map SET-COMMUNITY
 exit-address-family
!
route-map SET-COMMUNITY permit 10
 set community 64999:100
BFD Integration: Fast failure detection for quicker load balancing convergence (a timer-tuning sketch follows the basic configuration):
# Enable BFD for faster failover detection
router bgp 64999
 neighbor 192.168.1.1 bfd
 neighbor 192.168.2.1 bfd
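The BFD timers mentioned under configurable parameters can be tuned with a BFD profile and referenced from the BGP neighbor; the profile name and interval values below are illustrative:

# Tune BFD detection timers via a profile (values are examples)
bfd
 profile fast-detect
  detect-multiplier 3
  receive-interval 300
  transmit-interval 300
!
router bgp 64999
 neighbor 192.168.1.1 bfd profile fast-detect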
Monitoring BGP Load Balancing¶
Red Hat OpenStack Platform provides tools to monitor BGP load balancing status:
# Check BGP peers status
$ sudo podman exec -it frr vtysh -c 'show bgp summary'
# View active routes and next-hops
$ sudo podman exec -it frr vtysh -c 'show ip bgp'
# Verify ECMP routes
$ sudo podman exec -it frr vtysh -c 'show ip route'
These commands help administrators verify that load balancing is functioning correctly and troubleshoot any issues that might arise.