Simple Block Device Backup for OpenShift Lab Environments¶
April 25, 2025
8 min read
Introduction¶
Just as the Borg collective from Star Trek assimilates technology, BorgBackup assimilates your data.
This article shows how to back up and restore LVM Logical Volumes from local storage to FreeNAS using BorgBackup. While not suited to enterprise production workloads, this method works well in lab environments.
Overview & Prerequisites¶
This backup solution backs up LVM Logical Volumes from Single Node OpenShift local storage to FreeNAS servers using:
BorgBackup: For efficient, deduplicated, encrypted backups
Container Image: Minimal image with BorgBackup and required tools
Kubernetes CronJobs: For scheduled backups of local LVM volumes
Kubernetes Pods: For restores to FreeNAS storage
Volume/Device Mounts: Direct access to LVM block devices on the SNO node
Note
All resources in this article are available in the GitHub repository.
Step 1: Prepare Your Environment¶
Container Image¶
Create a container image with BorgBackup:
FROM fedora:latest

RUN dnf install -y borgbackup
Build and push the image:
cd borg-container
podman build -t registry.example.com/borgbackup:latest .
podman push registry.example.com/borgbackup:latest
Resource Planning¶
Allocate resources for backup and restore pods:
Memory: 4Gi limit (1Gi request)
CPU: 2 cores limit (500m request)
These values balance performance with efficient resource usage.
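In pod spec terms, these numbers translate into the following `resources` stanza, which the manifests later in this article reuse verbatim:

```yaml
resources:
  requests:
    memory: "1Gi"
    cpu: "500m"
  limits:
    memory: "4Gi"
    cpu: "2"
```

Borg's deduplication is memory-hungry on large repositories, so keep the memory limit well above the request.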
Storage Architecture¶
The backup system uses two storage mechanisms:
Source LVM Storage: Direct access to LVM Logical Volumes on the SNO node
FreeNAS Repository Storage: iSCSI-based persistent volumes for BorgBackup repositories
This approach enables efficient backup of LVM volumes to FreeNAS while maintaining block-level operations.
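The two mechanisms correspond to two PVCs used throughout this article. A sketch of what they might look like (the storage class for the source volume and both sizes are assumptions; adjust to your cluster):

```yaml
# Source volume: the LVM Logical Volume, exposed as a Block-mode PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: windows
spec:
  volumeMode: Block
  accessModes:
    - ReadWriteOnce
  storageClassName: lvms-vg1          # assumption: your local LVM storage class
  resources:
    requests:
      storage: 50Gi                   # assumption
---
# Repository volume: a filesystem PVC on FreeNAS over iSCSI.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: windows-backup
spec:
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: freenas-iscsi-csi
  resources:
    requests:
      storage: 500Gi                  # assumption
```

The `volumeMode: Block` claim is what lets the backup pod read the raw device with `--read-special`, while the repository claim is an ordinary mounted filesystem.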
Step 2: Configure and Run Backups¶
Create a CronJob for regular backups:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: windows-backup-cronjob
spec:
  schedule: "0 0 31 2 *"  # A non-recurring time (e.g., Feb 31st)
  suspend: true  # Prevents the job from running automatically
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: backup-container
              image: registry.desku.be/epheo/borgbackup:latest
              env:
                - name: BORG_CONFIG_DIR
                  value: /tmp/borg_config
                - name: BORG_CACHE_DIR
                  value: /tmp/borg_cache
                - name: BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK
                  value: "yes"
              command: ["/bin/bash", "-c"]
              args:
                - |
                  echo "Starting Windows backup pod with two mounted PVCs"
                  mkdir -p $BORG_CONFIG_DIR $BORG_CACHE_DIR
                  # Errors harmlessly on repeat runs once the repository exists
                  borg init --encryption=none /backup/
                  borg create --read-special /backup::"backup-$(date +%Y-%m-%d_%H-%M-%S)" /dev/windows-block
              volumeDevices:
                - devicePath: /dev/windows-block
                  name: windows-block
              volumeMounts:
                - mountPath: /backup
                  name: windows-backup
              resources:
                requests:
                  memory: "1Gi"
                  cpu: "500m"
                limits:
                  memory: "4Gi"
                  cpu: "2"
          restartPolicy: OnFailure
          volumes:
            - name: windows-block
              persistentVolumeClaim:
                claimName: windows
            - name: windows-backup
              persistentVolumeClaim:
                claimName: windows-backup
Apply the configuration:
kubectl apply -f backup-cronjob.yaml
Run an immediate backup:
kubectl create job --from=cronjob/windows-backup-cronjob manual-backup
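Each run creates an archive whose name embeds a timestamp from the `date` invocation in the CronJob's script. You can check the naming scheme locally before relying on it in `borg list` output:

```shell
# Reproduce the archive name generated by the backup script.
# Format: backup-YYYY-MM-DD_HH-MM-SS
archive_name="backup-$(date +%Y-%m-%d_%H-%M-%S)"
echo "$archive_name"
```

Because the names sort lexicographically by date, `borg list --last 1` reliably returns the most recent archive, which the restore pods below depend on.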
Important
Ensure source block devices are accessible to the backup pod.
Step 3: Monitor and Manage Your Backups¶
Checking Backup Status¶
Monitor backup job status:
kubectl get jobs
Example output:
NAME                   COMPLETIONS   DURATION   AGE
block-backup-manual    1/1           2m15s      10m
vm-backup-1682456400   1/1           1m32s      2h
Examining Backup Logs¶
View logs to confirm success or troubleshoot:
kubectl logs <pod-name>
Tip
Use kubectl logs -f <pod-name> to follow logs in real-time.
Managing Backup Archives¶
List available backups:
# Create a temporary pod with the backup volume
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: borg-inspector
spec:
containers:
- name: borg-inspector
image: registry.desku.be/epheo/borgbackup:latest
command: ["sleep", "3600"]
env:
- name: BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK
value: "yes"
volumeMounts:
- mountPath: /backup
name: backup-volume
volumes:
- name: backup-volume
persistentVolumeClaim:
claimName: windowsdata-backup
EOF
# Wait for the pod to start
kubectl wait --for=condition=Ready pod/borg-inspector
# List backups
kubectl exec -it borg-inspector -- borg list /backup
# Break a lock if needed:
kubectl exec -it borg-inspector -- borg break-lock /backup
# Additional commands:
# Check repository
kubectl exec -it borg-inspector -- borg check /backup
# Show repository info
kubectl exec -it borg-inspector -- borg info /backup
# List archive contents
kubectl exec -it borg-inspector -- borg list /backup::archivename
# Extract files
kubectl exec -it borg-inspector -- borg extract /backup::archivename path/to/file
# Prune old archives
kubectl exec -it borg-inspector -- borg prune -v --list --keep-within=30d /backup
# Remove the temporary pod
kubectl delete pod borg-inspector
Step 4: Restore to a Local Volume¶
Restore a backup to a local volume:
apiVersion: v1
kind: Pod
metadata:
  name: windows-restore
  namespace: epheo
spec:
  containers:
    - name: restore-container
      image: registry.desku.be/epheo/borgbackup:latest
      env:
        - name: BORG_CONFIG_DIR
          value: /tmp/borg_config
        - name: BORG_CACHE_DIR
          value: /tmp/borg_cache
        - name: BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK
          value: "yes"
      command: ["/bin/bash", "-c"]
      args:
        - |
          echo "Starting Windows restore pod with two mounted PVCs"
          mkdir -p $BORG_CONFIG_DIR $BORG_CACHE_DIR
          borg extract --progress --stdout /backup::$(borg list /backup --last 1 --format "{archive}") | dd of=/dev/windows-block
      volumeDevices:
        - devicePath: /dev/windows-block
          name: windows-block
      volumeMounts:
        - mountPath: /backup
          name: windows-backup
      resources:
        requests:
          memory: "1Gi"
          cpu: "500m"
        limits:
          memory: "4Gi"
          cpu: "2"
  volumes:
    - name: windows-block
      persistentVolumeClaim:
        claimName: windows
    - name: windows-backup
      persistentVolumeClaim:
        claimName: windows-backup
  restartPolicy: OnFailure
Apply the configuration:
kubectl apply -f restore-pod.yaml
Important
Ensure target block devices are not in use during restore.
Step 5: Cross-Platform Restoration Process¶
This section explains how to restore backups from FreeNAS to LVM Logical Volumes on Single Node OpenShift.
Prerequisites¶
Before starting:
Install Democratic-CSI backend on your SNO cluster
Ensure access to FreeNAS storage with the backup repositories
Verify permissions to create storage resources in the target namespace
The process involves three steps:
Step 5.1: Create a Reference to the Existing Backup Volume¶
Create a PersistentVolume referencing your FreeNAS volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-backup-pv
spec:
  capacity:
    storage: 500Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: freenas-iscsi-csi
  csi:
    driver: org.democratic-csi.iscsi
    volumeHandle: existing-backup-volume-id  # ID of your existing volume on FreeNAS
    volumeAttributes:
      fs_type: ext4
Apply it:
kubectl apply -f existing-backup-pv.yaml
Step 5.2: Create a Claim for the Backup Volume¶
Create a PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-backup-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
  storageClassName: freenas-iscsi-csi
  volumeName: existing-backup-pv
Apply it:
kubectl apply -f existing-backup-pvc.yaml
Step 5.3: Create a Restore Pod¶
Create a restoration pod:
apiVersion: v1
kind: Pod
metadata:
  name: restore-from-existing-backup
spec:
  containers:
    - name: restore-container
      image: registry.desku.be/epheo/borgbackup:latest
      env:
        - name: BORG_CONFIG_DIR
          value: /tmp/borg_config
        - name: BORG_CACHE_DIR
          value: /tmp/borg_cache
        - name: BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK
          value: "yes"
      command: ["/bin/bash", "-c"]
      args:
        - |
          mkdir -p $BORG_CONFIG_DIR $BORG_CACHE_DIR
          borg extract --progress --stdout /existing-backup::$(borg list /existing-backup --last 1 --format "{archive}") | dd of=/dev/target-block bs=4M status=progress
      volumeDevices:
        - devicePath: /dev/target-block
          name: target-block-device
      volumeMounts:
        - mountPath: /existing-backup
          name: existing-backup-volume
  volumes:
    - name: existing-backup-volume
      persistentVolumeClaim:
        claimName: existing-backup-pvc
    - name: target-block-device
      persistentVolumeClaim:
        claimName: target-device-pvc
Apply it:
kubectl apply -f restore-from-backup-pod.yaml
Step 6: Verify the Restoration¶
Verify restored data:
# Create verification directory
kubectl exec -it restore-from-existing-backup -- mkdir -p /mnt/verify
# Mount the restored device
kubectl exec -it restore-from-existing-backup -- mount /dev/target-block /mnt/verify
# Check contents
kubectl exec -it restore-from-existing-backup -- ls -la /mnt/verify
Tip
For block devices with partition tables, use fdisk -l /dev/target-block before mounting.
Step 7: Directly Using Existing Block Devices with Kubernetes¶
Recent OpenShift versions can be reinstalled without wiping disks, preserving existing LVM Logical Volumes so they can be recovered after reinstallation.
Identifying Available Block Devices¶
Identify block devices:
# From a debug pod
oc debug node/<node-name>
chroot /host
# List devices
lsblk
# Get device info
blkid /dev/sdX
Direct PersistentVolume Approach¶
Create a PersistentVolume referencing the block device:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-block-pv
spec:
  capacity:
    storage: 50Gi
  volumeMode: Block  # Use Block for a raw device or Filesystem for a formatted one
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /dev/sdX  # Replace with your device path
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - sno-node-01  # Replace with your node name
Apply it:
kubectl apply -f local-pv.yaml
Creating a PersistentVolumeClaim¶
Create a PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-block-pvc
  namespace: my-namespace
spec:
  volumeMode: Block  # Must match the PV's volumeMode
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  volumeName: existing-block-pv  # Reference the PV by name
  storageClassName: local-storage
Apply it:
kubectl apply -f local-pvc.yaml
Using the Block Device in a Pod¶
Use the block device in a pod:
apiVersion: v1
kind: Pod
metadata:
  name: block-device-pod
  namespace: my-namespace
spec:
  containers:
    - name: block-user
      image: registry.access.redhat.com/ubi8/ubi-minimal:latest
      command: ["sleep", "infinity"]
      volumeDevices:  # Use volumeDevices for Block mode
        - name: block-vol
          devicePath: /dev/xvda  # Device path in container
  volumes:
    - name: block-vol
      persistentVolumeClaim:
        claimName: existing-block-pvc
Apply it:
kubectl apply -f block-device-pod.yaml
Verification¶
Verify access to the block device:
# Check pod status
kubectl get pod -n my-namespace block-device-pod
# Verify block device
kubectl exec -it -n my-namespace block-device-pod -- lsblk
# Format and mount if needed
kubectl exec -it -n my-namespace block-device-pod -- mkfs.ext4 /dev/xvda
kubectl exec -it -n my-namespace block-device-pod -- mkdir -p /mnt/data
kubectl exec -it -n my-namespace block-device-pod -- mount /dev/xvda /mnt/data
Tip
Skip formatting for existing formatted devices.
This approach gives direct access to block devices while managing them through Kubernetes.
Resources and References¶
Additional resources: