Simple Block Device Backup for OpenShift Lab Environments¶
April 25, 2025
8 min read
Introduction¶
Just like the Borg collective from Star Trek assimilates technology and knowledge with their famous phrase “resistance is futile,” BorgBackup assimilates your data.
This article introduces a simple solution for backing up and restoring LVM Logical Volumes from local storage to FreeNAS (or any other Storage Class) using BorgBackup. While not designed for enterprise-grade production workloads, this approach provides a practical backup method for lab environments.
Overview & Prerequisites¶
This backup solution provides a step-by-step approach to backing up LVM Logical Volumes from a Single Node OpenShift (SNO) cluster's local storage to a FreeNAS storage server. It uses the following components:
BorgBackup Engine: Provides efficient, deduplicated, and encrypted backups
Container Image: A minimal image with BorgBackup and necessary tools
Kubernetes CronJobs: For scheduled backup operations of local LVM volumes
Kubernetes Pods: For restore operations to FreeNAS storage
Volume/Device Mounts: Direct access to LVM block devices on the SNO node
Note
All resources described in this article, including YAML manifests and Dockerfile, are available in the GitHub repository.
Step 1: Prepare Your Environment¶
Container Image¶
First, you’ll need a container image with BorgBackup installed. The system uses a minimal Docker image based on Fedora:
FROM fedora:latest

RUN dnf install -y borgbackup
Build and push this image to your registry:
cd borg-container
podman build -t registry.example.com/borgbackup:latest .
podman push registry.example.com/borgbackup:latest
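Before referencing the image in any manifests, you can quickly confirm that BorgBackup is present in it (using the placeholder registry path from above):

podman run --rm registry.example.com/borgbackup:latest borg --version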
Resource Planning¶
When deploying backup and restore pods, allocate appropriate resources for optimal performance:
Memory: 4Gi limit (1Gi request)
CPU: 2 cores limit (500m request)
These values provide a balance between ensuring sufficient resources and maintaining efficient cluster resource utilization.
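Expressed as the Kubernetes resources stanza used in the manifests later in this article, these values look like this:

resources:
  requests:
    memory: "1Gi"
    cpu: "500m"
  limits:
    memory: "4Gi"
    cpu: "2"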
Storage Architecture¶
The backup system uses two primary storage mechanisms that connect your SNO’s local storage to FreeNAS:
Source LVM Storage: Direct access to LVM Logical Volumes on the SNO node
FreeNAS Repository Storage: iSCSI-based persistent volumes from FreeNAS to store the BorgBackup repositories
This dual-storage approach enables efficient backup of LVM volumes from your SNO node to your FreeNAS storage while maintaining the ability to perform block-level operations on raw devices.
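As an illustration, the FreeNAS repository side is just a regular PersistentVolumeClaim provisioned from the democratic-csi StorageClass. Here is a minimal sketch, assuming the freenas-iscsi-csi StorageClass name and the windows-backup claim name used in the manifests below; the size is an example you would adjust to your repository:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: windows-backup        # repository PVC referenced by the backup CronJob below
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi          # size according to your data and retention policy
  storageClassName: freenas-iscsi-csi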
Step 2: Configure and Run Backups¶
Create a CronJob to schedule regular backups of your block devices:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: windows-backup-cronjob
spec:
  schedule: "0 0 31 2 *"  # A non-recurring time (e.g., Feb 31st)
  suspend: true  # Prevents the job from running automatically
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: backup-container
              image: registry.desku.be/epheo/borgbackup:latest
              env:
                - name: BORG_CONFIG_DIR
                  value: /tmp/borg_config
                - name: BORG_CACHE_DIR
                  value: /tmp/borg_cache
                - name: BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK
                  value: "yes"
              command: ["/bin/bash", "-c"]
              args:
                - |
                  echo "Starting Windows Backup pod with two mounted PVCs"
                  mkdir -p $BORG_CONFIG_DIR $BORG_CACHE_DIR
                  borg init --encryption=none /backup/
                  borg create --read-special /backup::"backup-$(date +%Y-%m-%d_%H-%M-%S)" /dev/windows-block
              volumeDevices:
                - devicePath: /dev/windows-block
                  name: windows-block
              volumeMounts:
                - mountPath: /backup
                  name: windows-backup
              resources:
                requests:
                  memory: "1Gi"
                  cpu: "500m"
                limits:
                  memory: "4Gi"
                  cpu: "2"
          restartPolicy: OnFailure
          volumes:
            - name: windows-block
              persistentVolumeClaim:
                claimName: windows
            - name: windows-backup
              persistentVolumeClaim:
                claimName: windows-backup
Apply this configuration:
kubectl apply -f backup-cronjob.yaml
To run an immediate backup:
kubectl create job --from=cronjob/<backup-cronjob-name> manual-backup
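For example, with the CronJob defined above:

kubectl create job --from=cronjob/windows-backup-cronjob manual-backup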
Important
Ensure that source block devices are properly configured and accessible to the backup pod.
Step 3: Monitor and Manage Your Backups¶
Checking Backup Status¶
After starting a backup job, monitor its status:
kubectl get jobs
Example output:
NAME COMPLETIONS DURATION AGE
block-backup-manual 1/1 2m15s 10m
vm-backup-1682456400 1/1 1m32s 2h
Examining Backup Logs¶
View detailed logs to confirm successful backups or troubleshoot issues:
kubectl logs <pod-name>
Tip
Use the -f flag (kubectl logs -f <pod-name>) to follow logs in real time during backup operations.
Managing Backup Archives¶
To list available backups in your repository, create a temporary pod with the same volumes:
# Create a temporary pod with the backup volume and needed environment variables
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: borg-inspector
spec:
  containers:
    - name: borg-inspector
      image: registry.desku.be/epheo/borgbackup:latest
      command: ["sleep", "3600"]
      env:
        - name: BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK
          value: "yes"
      volumeMounts:
        - mountPath: /backup
          name: backup-volume
  volumes:
    - name: backup-volume
      persistentVolumeClaim:
        claimName: windows-backup
EOF
# Wait for the pod to start
kubectl wait --for=condition=Ready pod/borg-inspector
# Access the pod and list backups
kubectl exec -it borg-inspector -- borg list /backup
# If you encounter a lock error like:
# Failed to create/acquire the lock /backup/lock.exclusive (timeout).
# You can break the lock with:
kubectl exec -it borg-inspector -- borg break-lock /backup
# Additional BorgBackup commands for repository management:
# Check repository consistency
kubectl exec -it borg-inspector -- borg check /backup
# Show detailed information about the repository
kubectl exec -it borg-inspector -- borg info /backup
# List archive contents
kubectl exec -it borg-inspector -- borg list /backup::archivename
# Extract specific files from archive
kubectl exec -it borg-inspector -- borg extract /backup::archivename path/to/file
# Prune archives to save space ( /!\ DELETES archives more than 30 days old /!\)
kubectl exec -it borg-inspector -- borg prune -v --list --keep-within=30d /backup
# Remove the temporary pod after use
kubectl delete pod borg-inspector
Step 4: Restore to a Local Volume¶
When you need to restore a backup to a local volume in the same cluster:
apiVersion: v1
kind: Pod
metadata:
  name: windows-restore
  namespace: epheo
spec:
  containers:
    - name: restore-container
      image: registry.desku.be/epheo/borgbackup:latest
      env:
        - name: BORG_CONFIG_DIR
          value: /tmp/borg_config
        - name: BORG_CACHE_DIR
          value: /tmp/borg_cache
        - name: BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK
          value: "yes"
      command: ["/bin/bash", "-c"]
      args:
        - |
          echo "Starting Windows Restore pod with two mounted PVCs"
          mkdir -p $BORG_CONFIG_DIR $BORG_CACHE_DIR
          borg extract --progress --stdout /backup::$(borg list /backup --last 1 --format "{archive}") | dd of=/dev/windows-block
      volumeDevices:
        - devicePath: /dev/windows-block
          name: windows-block
      volumeMounts:
        - mountPath: /backup
          name: windows-backup
      resources:
        requests:
          memory: "1Gi"
          cpu: "500m"
        limits:
          memory: "4Gi"
          cpu: "2"
  volumes:
    - name: windows-block
      persistentVolumeClaim:
        claimName: windows
    - name: windows-backup
      persistentVolumeClaim:
        claimName: windows-backup
  restartPolicy: OnFailure
Apply this configuration:
kubectl apply -f restore-pod.yaml
Important
Ensure that target block devices are not in use during restore operations to prevent data corruption.
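If the target volume backs an OpenShift Virtualization virtual machine, as in this example, one way to check is to confirm no running VirtualMachineInstance is still attached to the PVC before applying the restore pod (the namespace follows the example above and is an assumption):

# No VirtualMachineInstance should be running against the target PVC
oc get vmi -n epheo
oc get vm -n epheo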
Step 5: Cross-Platform Restoration Process¶
This section explains how to restore backups from your FreeNAS storage to LVM Logical Volumes on your Single Node OpenShift, which is particularly useful for migration scenarios or disaster recovery of your SNO environment.
Prerequisites¶
Before beginning the FreeNAS to SNO restoration, ensure you have:
Democratic-CSI backend installed on your Single Node OpenShift cluster
Access to your FreeNAS storage system containing the backup repositories
Appropriate permissions to create storage resources in the target namespace
The process involves three sequential steps to connect to your existing backup repository and restore data to a new block device:
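You can quickly confirm the Democratic-CSI prerequisite before starting; the StorageClass name matches the manifests below, while the namespace where the driver pods run depends on how you installed it and is an assumption here:

kubectl get storageclass freenas-iscsi-csi
kubectl get pods -n democratic-csi   # adjust to the namespace where you installed the driver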
Step 5.1: Create a Reference to the Existing Backup Volume¶
First, create a PersistentVolume that references your existing FreeNAS volume containing the BorgBackup repository:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-backup-pv
spec:
  capacity:
    storage: 500Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: freenas-iscsi-csi
  csi:
    driver: org.democratic-csi.iscsi
    volumeHandle: existing-backup-volume-id  # ID of your existing volume on FreeNAS
    volumeAttributes:
      fs_type: ext4
Apply the PersistentVolume:
kubectl apply -f existing-backup-pv.yaml
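The PersistentVolume should appear as Available until a claim binds to it:

kubectl get pv existing-backup-pv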
Step 5.2: Create a Claim for the Backup Volume¶
Next, create a PersistentVolumeClaim that binds to the volume you defined in the previous step:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-backup-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
  storageClassName: freenas-iscsi-csi
  volumeName: existing-backup-pv
Apply the PersistentVolumeClaim:
kubectl apply -f existing-backup-pvc.yaml
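Confirm the claim has bound to the PersistentVolume created in the previous step:

kubectl get pvc existing-backup-pvc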
Step 5.3: Create a Restore Pod¶
Finally, create a restoration pod that will extract data from the backup repository and write it to the target device:
apiVersion: v1
kind: Pod
metadata:
  name: restore-from-existing-backup
spec:
  containers:
    - name: restore-container
      image: registry.desku.be/epheo/borgbackup:latest
      env:
        - name: BORG_CONFIG_DIR
          value: /tmp/borg_config
        - name: BORG_CACHE_DIR
          value: /tmp/borg_cache
        - name: BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK
          value: "yes"
      command: ["/bin/bash", "-c"]
      args:
        - |
          mkdir -p $BORG_CONFIG_DIR $BORG_CACHE_DIR
          borg extract --progress --stdout /existing-backup::$(borg list /existing-backup --last 1 --format "{archive}") | dd of=/dev/target-block bs=4M status=progress
      volumeDevices:
        - devicePath: /dev/target-block
          name: target-block-device
      volumeMounts:
        - mountPath: /existing-backup
          name: existing-backup-volume
  volumes:
    - name: existing-backup-volume
      persistentVolumeClaim:
        claimName: existing-backup-pvc
    - name: target-block-device
      persistentVolumeClaim:
        claimName: target-device-pvc
Apply the restore pod:
kubectl apply -f restore-from-backup-pod.yaml
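You can follow the restore progress from the pod logs; the status=progress option of dd reports how much data has been written to the target device:

kubectl logs -f restore-from-existing-backup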
Step 6: Verify the Restoration¶
After the restore process completes, verify the integrity of the restored data:
# Create a temporary directory for verification
kubectl exec -it restore-from-existing-backup -- mkdir -p /mnt/verify
# Mount the restored device for verification
kubectl exec -it restore-from-existing-backup -- mount /dev/target-block /mnt/verify
# Check the contents
kubectl exec -it restore-from-existing-backup -- ls -la /mnt/verify
Tip
For block devices containing partition tables, use fdisk -l /dev/target-block before mounting to verify the structure is intact.
Step 7: Directly Using Existing Block Devices with Kubernetes¶
Recent OpenShift versions provide the ability to reinstall the cluster without wiping the underlying disks. This feature preserves existing LVM Logical Volumes, allowing you to recover existing PVs managed by LVM after reinstallation, particularly Virtual Machine disks managed by OpenShift Virtualization.
We can then make these volumes available again as PersistentVolumes in the new cluster.
Identifying Available Block Devices¶
First, identify the block devices on your system:
# From a debug pod with host access
oc debug node/<node-name>
chroot /host
# List block devices
lsblk
# Get more detailed information about a specific device
blkid /dev/sdX
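Since the volumes in this article are LVM Logical Volumes, it also helps to list them explicitly and note their device paths, which the PersistentVolume below will reference. A sketch, still inside the same debug session; volume group and LV names will differ on your node:

# List volume groups and logical volumes preserved from the previous install
vgs
lvs -o lv_name,vg_name,lv_size,lv_path

# Activate them if needed so the /dev/<vg>/<lv> paths exist
vgchange -ay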
Direct PersistentVolume Approach¶
You can create a PersistentVolume that directly references the block device as a local volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-block-pv
spec:
  capacity:
    storage: 50Gi
  volumeMode: Block  # Use Block for raw device or Filesystem for formatted
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /dev/sdX  # Replace with your actual device path
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - sno-node-01  # Replace with your node name
Apply the PersistentVolume:
kubectl apply -f local-pv.yaml
Creating a PersistentVolumeClaim¶
Now create a PersistentVolumeClaim that will bind to your PersistentVolume:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-block-pvc
  namespace: my-namespace
spec:
  volumeMode: Block  # Must match the PV's volumeMode
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  volumeName: existing-block-pv  # Reference the PV by name
  storageClassName: local-storage
Apply the PersistentVolumeClaim:
kubectl apply -f local-pvc.yaml
Using the Block Device in a Pod¶
You can now use the block device in a pod:
apiVersion: v1
kind: Pod
metadata:
  name: block-device-pod
  namespace: my-namespace
spec:
  containers:
    - name: block-user
      image: registry.access.redhat.com/ubi8/ubi-minimal:latest
      command: ["sleep", "infinity"]
      volumeDevices:  # Note: volumeDevices instead of volumeMounts for Block mode
        - name: block-vol
          devicePath: /dev/xvda  # How the device will appear in the container
  volumes:
    - name: block-vol
      persistentVolumeClaim:
        claimName: existing-block-pvc
Apply this pod configuration:
kubectl apply -f block-device-pod.yaml
Verification¶
Verify that you can access and use the block device:
# Check that the pod is running
kubectl get pod -n my-namespace block-device-pod
# Access the pod and verify the block device is available
kubectl exec -it -n my-namespace block-device-pod -- lsblk
# If you need to format and mount the device inside the pod
kubectl exec -it -n my-namespace block-device-pod -- mkfs.ext4 /dev/xvda
kubectl exec -it -n my-namespace block-device-pod -- mkdir -p /mnt/data
kubectl exec -it -n my-namespace block-device-pod -- mount /dev/xvda /mnt/data
Tip
For an existing formatted device, skip the formatting step and only mount it.
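To check whether the device already contains a filesystem before deciding, lsblk can report the filesystem type from inside the pod:

kubectl exec -it -n my-namespace block-device-pod -- lsblk -f /dev/xvda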
This approach bypasses the need for the LVM Operator entirely and gives you direct access to your block devices while still managing them through Kubernetes abstractions.
Resources and References¶
The following resources provide additional information on BorgBackup and Kubernetes concepts used in this article: