EFS Volume Mount with Access Point
This article describes the procedure to integrate Amazon Elastic File System (EFS) with Amazon Elastic Kubernetes Service (EKS) using EFS Access Points. The integration enables applications deployed on EKS to share storage across multiple pods, ensuring scalability and persistence.
Prerequisites
AWS account with permissions to create and manage:
Amazon EFS
EKS cluster resources
IAM roles and policies
Installed tools:
kubectl
helm
AWS CLI
EKS cluster already running in the target VPC.
Read/write access to EFS resources.
Steps Involved
Create an EFS File System
Log in to the AWS Management Console.
Navigate to Amazon EFS → Create file system.

Select Customize to configure the file system.

Provide a name for the file system and click Next.

Select the appropriate VPC and Security Groups that allow NFS traffic (port 2049).

Review the settings and click Create.
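If you prefer the AWS CLI, the console steps above can be sketched roughly as follows (the file system name, subnet ID, and security group ID are placeholders for your own values):

```shell
# Create the EFS file system (name is an example)
aws efs create-file-system \
  --creation-token eks-shared-efs \
  --tags Key=Name,Value=eks-shared-efs

# Create a mount target in each subnet of the EKS VPC,
# using a security group that allows inbound NFS (TCP 2049)
aws efs create-mount-target \
  --file-system-id <fs-filesystem_ID> \
  --subnet-id <subnet-id> \
  --security-groups <security-group-id>
```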

Create EFS Access Points
Open the created EFS in the console.
Select Access points → Create access point.

Enter a name and specify the root directory path (e.g., /any-name).
Configure the POSIX user:
User ID: 777
Group ID: 777
Secondary Group ID: 777

Configure the root directory creation permissions:
Owner User ID: 777
Owner Group ID: 777
Permissions: 777

Save the configuration.
Repeat the above steps to create access points for the following directories:
/third-party-jars (for required JARs, e.g., csp-lib.jar, lineage.jar)
/oelogs
/certs
/esdata
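An equivalent access point can also be created from the AWS CLI; a sketch for the /third-party-jars directory, assuming the same POSIX user and permissions configured above (replace the placeholder file system ID with your own):

```shell
# Create an access point rooted at /third-party-jars with POSIX user 777:777
aws efs create-access-point \
  --file-system-id <fs-filesystem_ID> \
  --posix-user Uid=777,Gid=777,SecondaryGids=777 \
  --root-directory 'Path=/third-party-jars,CreationInfo={OwnerUid=777,OwnerGid=777,Permissions=777}' \
  --tags Key=Name,Value=third-party-jars
```

The command prints the new AccessPointId, which is needed later for the volumeHandle in the PersistentVolume manifests.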
Update Helm Charts for Persistent Volumes
Update the Helm chart templates to define PersistentVolume (PV) and PersistentVolumeClaim (PVC) resources for each directory. Replace the placeholders <fs-filesystem_ID> and <AccessPointID> with the actual EFS file system ID and access point ID.
Jars
jars_pv.yaml
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-jars
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <fs-filesystem_ID>::<AccessPointID>
```
jars_pvc.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim-jars
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 2Gi
```
Certs
certs_pv.yaml
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-certs
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <fs-filesystem_ID>::<AccessPointID>
```
certs_pvc.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim-certs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 1Gi
```
Files
files_pv.yaml
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-files
spec:
  capacity:
    storage: 7Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <fs-filesystem_ID>::<AccessPointID>
```
files_pvc.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim-files
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 7Gi
```
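The manifests above all reference the efs-sc storage class. With static provisioning it only needs to exist by name so the PVs and PVCs can bind; a minimal sketch, assuming the AWS EFS CSI driver is installed in the cluster:

```yaml
# Minimal StorageClass for statically provisioned EFS volumes
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
```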
Attach Volumes to Pods
Update the Deployment/StatefulSet specifications to mount the PVCs. Note that volumes is defined at the pod spec level, while volumeMounts belongs to the container spec. Example:
```yaml
# Pod-level (spec.template.spec.volumes)
volumes:
  - name: efs-volume-jars
    persistentVolumeClaim:
      claimName: efs-claim-jars
  - name: efs-volume-certs
    persistentVolumeClaim:
      claimName: efs-claim-certs
  - name: efs-volume-files
    persistentVolumeClaim:
      claimName: efs-claim-files
# Container-level (spec.template.spec.containers[].volumeMounts)
volumeMounts:
  - name: efs-volume-jars
    mountPath: /home/ovaledge/third_party_jars
  - name: efs-volume-certs
    mountPath: /home/ovaledge/certificates
  - name: efs-volume-files
    mountPath: /home/ovaledgefiles
```
Install/Upgrade the Helm Chart
Deploy the application with the updated configuration:
```shell
helm install ovaledge ./ovaledge
```
or, if upgrading:
```shell
helm upgrade ovaledge ./ovaledge
```
Validation
Verify that the PersistentVolumes and PersistentVolumeClaims are bound:
```shell
kubectl get pv,pvc
```
Confirm that the pods are using the mounted EFS volumes:
```shell
kubectl describe pod <pod-name>
```
Log into a pod and validate that files can be created in the mounted directories:
```shell
kubectl exec -it <pod-name> -- ls /home/ovaledge/third_party_jars
```
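To confirm that the mount is writable and not just readable, a quick check, using the jars mount as an example (the marker file name is arbitrary):

```shell
# Create and remove a marker file on the EFS mount
kubectl exec -it <pod-name> -- sh -c \
  'touch /home/ovaledge/third_party_jars/.efs-write-test && \
   rm /home/ovaledge/third_party_jars/.efs-write-test && \
   echo "EFS mount is writable"'
```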
Error Handling and Rollback
If a pod fails to mount the volume:
Check EFS CSI driver logs:
```shell
kubectl logs -n kube-system -l app=efs-csi-node
```
Verify that the Security Group allows NFS traffic (port 2049).
Ensure that the Access Point ID matches the configured path.
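It can also help to confirm that mount targets exist in every availability zone used by the worker nodes, and to inspect the CSI controller in addition to the node daemonset; a sketch:

```shell
# List mount targets (and their subnets/AZs) for the file system
aws efs describe-mount-targets --file-system-id <fs-filesystem_ID>

# Check the EFS CSI controller logs as well as the node logs
kubectl logs -n kube-system -l app=efs-csi-controller
```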
Rollback:
Revert to the previous Helm release:
```shell
helm rollback ovaledge <REVISION_NUMBER>
```
Delete and recreate the PVCs if binding issues persist.
Copyright © 2025, OvalEdge LLC, Peachtree Corners, GA, USA.