Storage in Kubernetes (and by extension OpenShift) is one of those things that seems simple until it isn’t. You create a PVC, it binds to a PV, your pod mounts it. Sorted.
Then you hit a production cluster and nothing binds, the pod is stuck in Pending, and the error message is “no persistent volumes available for this claim.”
Let’s actually understand what’s happening.
## The Three-Way Relationship
There are three objects to keep straight:
| Object | What it is |
|---|---|
| PersistentVolume (PV) | The actual storage resource — a disk, NFS share, etc. |
| PersistentVolumeClaim (PVC) | A request for storage from a pod. “I need X GB.” |
| StorageClass | An abstraction defining how PVs are provisioned. |
A PVC binds to a PV when:
- The access modes match
- The requested storage fits within the PV’s capacity
- They share the same StorageClass (or both have none)
- Label selectors match (if specified)
Miss any of these and you’re stuck in Pending.
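For concreteness, a minimal claim that could satisfy those conditions might look like this (the name and class here are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce             # must be offered by the PV
  resources:
    requests:
      storage: 20Gi             # must fit within the PV's capacity
  storageClassName: standard-csi  # must match the PV's class
```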
## Static vs. Dynamic Provisioning
### Static provisioning
You create PVs manually. A cluster admin provisions a 50Gi disk, defines it as a PV, and developers claim against it. Old school but explicit.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: standard-csi
  csi:
    driver: disk.csi.example.com
    volumeHandle: vol-0a1b2c3d
```
### Dynamic provisioning
A StorageClass has a provisioner. When a PVC references it, the provisioner automatically creates a PV. No admin involvement needed.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: disk.csi.example.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: ssd
```
The WaitForFirstConsumer binding mode is important — it delays PV creation until a pod actually gets scheduled, allowing the provisioner to create the volume in the right availability zone.
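One practical consequence: a PVC against a `WaitForFirstConsumer` class will sit in Pending with no PV until a pod that uses it is scheduled. That's expected behavior, not an error, and the events typically say so (claim name is illustrative):

```shell
oc describe pvc my-claim
# Events typically include something like:
#   Normal  WaitForFirstConsumer  waiting for first consumer to be created before binding
```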
## Access Modes
A common source of confusion. Three options:
- ReadWriteOnce (RWO) — mounted read-write by a single node. Most block storage.
- ReadOnlyMany (ROX) — mounted read-only by many nodes.
- ReadWriteMany (RWX) — mounted read-write by many nodes simultaneously. Requires shared storage like NFS or CephFS.
If you’re trying to run multiple replicas and they all need to write to the same volume: you need RWX. Trying it with RWO will work until it doesn’t — typically when a pod reschedules to a different node.
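An RWX claim is just a different access mode on the PVC, but only a driver backed by shared storage will satisfy it. A sketch (the class name is an assumption; use whatever RWX-capable class your cluster offers):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany         # requires a shared-storage driver (NFS, CephFS, ...)
  resources:
    requests:
      storage: 10Gi
  storageClassName: shared-nfs  # illustrative name
```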
## Reclaim Policies
When a PVC is deleted, what happens to the PV?
- Delete — PV and the underlying storage are deleted. Default for dynamic provisioning. Fine for ephemeral data, dangerous for anything you care about.
- Retain — PV persists, but moves to the Released state. Needs manual cleanup or reclaiming before it can be reused. Good for production data.
- Recycle — deprecated. Don't use it.
For stateful workloads in production: Retain. Always. You want a human involved before data is destroyed.
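Dynamically provisioned PVs inherit the reclaim policy of their StorageClass, but you can flip an individual PV to Retain after the fact (the PV name here is illustrative):

```shell
oc patch pv pvc-0a1b2c3d \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```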
## The OpenShift Twist
OpenShift adds Security Context Constraints (SCCs) on top. Your storage driver needs appropriate permissions, and your pods may need to run as a specific UID to access mounted volumes.
If you’re getting permission denied errors on a mounted PV, check:
- The pod's `securityContext.runAsUser`
- The volume's filesystem permissions
- The SCC assigned to your service account
```shell
oc get scc
oc describe scc restricted-v2
oc adm policy who-can use scc anyuid
```
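Often the cleanest fix for volume permission errors is a pod-level `fsGroup`, which makes the kubelet chgrp the mounted volume so the pod's group can write to it. A sketch (image, claim name, and GID are illustrative — under the restricted SCC, OpenShift normally assigns the GID range for you):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  securityContext:
    fsGroup: 1000860000   # illustrative; the volume is made group-accessible to this GID
  containers:
    - name: app
      image: registry.example.com/app:latest
      volumeMounts:
        - name: data
          mountPath: /var/lib/app
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-claim
```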
## Sane Defaults to Start From
- Use a CSI driver — in-tree volume plugins are being deprecated.
- Set `volumeBindingMode: WaitForFirstConsumer` on storage classes used by stateful apps.
- Use the `Retain` reclaim policy for anything stateful in production.
- Always request less storage than you think you need — you can expand PVCs (if the driver supports it), but you can't shrink them.
- If you need shared storage across pods, plan for it from day one. Retrofitting RWX onto an RWO architecture is painful.
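Expansion only works if the StorageClass allows it, so it's worth enabling up front. A sketch (provisioner name assumed, as elsewhere in this piece):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-csi
provisioner: disk.csi.example.com
allowVolumeExpansion: true    # permits growing bound PVCs later
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
```

With this set, growing a volume is just editing the PVC's `spec.resources.requests.storage` upward; the driver does the rest.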
Storage is boring until it fails. Then it’s the most important thing in the room.