What are Persistent Volume and Persistent Volume Claim in Kubernetes?
As you may have heard, pods are ephemeral: any data generated inside a pod is removed once the pod itself is destroyed.
This is where the Persistent Volume (PV) comes into the picture. A PV is an abstraction over physical storage that can be consumed by pods. It lets a cluster admin manage a variety of storage configurations, such as performance, access modes, and storage implementations (NFS, iSCSI, or a cloud-provider-specific storage system), without exposing those details to users.
To consume a PV, a user then creates a Persistent Volume Claim (PVC).
To create a PV, I will create a new file named pv.yml with the following content:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: demo-log-pv
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/logs"
In this example, I’ve created a PV resource named demo-log-pv with a few properties, which are explained as follows:
Capacity: this is where you specify how much storage the volume provides (here, 10Gi).
Access Modes:
- ReadWriteOnce (RWO): the volume can be mounted as read-write by a single node at a time; multiple pods can still access it simultaneously as long as they run on that node.
- ReadOnlyMany (ROX): the volume can be mounted as read-only by many nodes.
- ReadWriteMany (RWX): the volume can be mounted as read-write by many nodes.
- ReadWriteOncePod (RWOP): the volume can be mounted as read-write by a single pod at a time. Use this when you want to guarantee that only one pod in the cluster can access it.
Note: Kubernetes uses volume access modes to match PersistentVolumeClaims and PersistentVolumes. In some cases, the volume access modes also constrain where the PersistentVolume can be mounted.
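As a small illustration (this snippet is not part of pv.yml above), a PV may advertise more than one access mode, and a claim binds only if the mode it requests is among them:
accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
# a PVC requesting ReadWriteOnce or ReadOnlyMany could bind to such a PV,
# but one requesting ReadWriteMany could not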
Reclaim Policy:
- Retain: allows manual reclamation of the storage. When the Persistent Volume Claim is deleted, the PV is considered “released”, but the data is still there; to make the volume claimable again, you need to clean up the associated data manually.
- Delete: volume plugins that support this policy automatically delete both the PV object from Kubernetes and the associated data in external storage, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume.
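Note that pv.yml above does not set a reclaim policy; a manually created PV like this one defaults to Retain. If you prefer to be explicit (or want Delete where the volume plugin supports it), the field can be added to the PV spec, for example:
spec:
  # alongside capacity, accessModes, and hostPath
  persistentVolumeReclaimPolicy: Retain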
hostPath here means the volume is backed by a directory on the node’s filesystem, in this case “/tmp/logs”. Please make sure this path exists on the destination node, otherwise mounting will fail.
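If you are not sure the directory exists, you can create it up front on the node that will run the pod (assuming you have shell access to that node):
mkdir -p /tmp/logs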
To apply it, log in to the Kubernetes master server, transfer pv.yml to it, and then execute:
kubectl apply -f pv.yml
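To confirm the PV was created, check its status; it should show as Available until a claim binds it:
kubectl get pv demo-log-pv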
Now, let’s make use of our PV by creating a PVC. I am going to create a file named pvc.yml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-logs-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Just like with pv.yml, copy it to the server and apply it:
kubectl apply -f pvc.yml
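You can then verify that the claim found a matching volume; once binding succeeds, both the PVC and the PV report a Bound status:
kubectl get pvc demo-logs-claim
kubectl get pv demo-log-pv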
Finally, we can run our app with the newly created storage by mounting the claim in a Deployment:
apiVersion: v1
kind: Service
metadata:
  name: helloworld-app-svc
  labels:
    app: helloworld
spec:
  type: NodePort
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      nodePort: 32222
  selector:
    app: helloworld
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  replicas: 2
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: helloworld
          image: hello-world:0.0.1
          ports:
            - containerPort: 3000
          volumeMounts:
            - name: logs
              mountPath: "/usr/logs"
          readinessProbe:
            tcpSocket:
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
            failureThreshold: 3
      volumes:
        - name: logs
          persistentVolumeClaim:
            claimName: demo-logs-claim
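As a final check, assuming the Service and Deployment manifest above is saved as app.yml (a file name I am choosing here), apply it and confirm that the pods come up and the claim is mounted:
kubectl apply -f app.yml
kubectl get pods -l app=helloworld
# list the mounted log directory inside one of the pods
kubectl exec deploy/helloworld -- ls /usr/logs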
Thanks for reading ;)