Some time ago I wrote this post about different storage options in Azure Red Hat OpenShift. One of the options discussed was using Azure NetApp Files for persistent storage of your pods. As discussed in that post, Azure NetApp Files has some advantages:
- ReadWriteMany support
- Does not count against the limit of Azure Disks per VM
- Different performance tiers, the most performant one offering 128 MiB/s per TiB of volume capacity
- The NetApp tooling ecosystem
There is one situation where Azure NetApp Files will not be a great fit: if you only need a small share, since the minimum pool size in which Azure NetApp Files can be ordered is 4 TiB. You can carve many small volumes out of that 4 TiB pool, but if the only thing you need is a small share, other options might be more cost effective.
I think the three performance tiers of Azure NetApp Files are very flexible, offering between 16 and 128 MiB/s per provisioned TiB. For example, at 1 TiB a Premium SSD (P30) would give you 200 MiB/s, while an ANF volume would give you up to 128 MiB/s. Not quite the performance of a Premium SSD, but it doesn't fall too far behind either.
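If you want to put numbers on that flexibility, here is a quick back-of-the-envelope calculation (the per-TiB figures are the published ANF service levels for Standard, Premium and Ultra; the loop itself is just illustrative):

```bash
# Throughput per tier for a 1 TiB volume:
# Standard 16, Premium 64, Ultra 128 MiB/s per provisioned TiB
size_tib=1
for tier_mibps in 16 64 128; do
    echo "${tier_mibps} MiB/s per TiB -> $((tier_mibps * size_tib)) MiB/s for ${size_tib} TiB"
done
```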
But let's go back to our post title: what is Trident? In a standard setup you would have to create the ANF volume manually and assign it to the pods that need it. With the Trident project, however, NetApp gives Kubernetes clusters the ability to create and destroy those volumes automatically, tied to the Persistent Volume Claim lifecycle.
Hence, when an application is deployed to OpenShift, nobody needs to go to the Azure Portal and provision storage in advance: the volumes are created through the Kubernetes API by Trident.
As the Trident documentation says, OpenShift is a supported platform. I did not find any blog about whether it would work on Azure Red Hat OpenShift (why shouldn't it?), so I decided to give it a go. I installed Trident on my ARO cluster following this great post by Sean Luce: Azure NetApp Files + Trident, and it was a breeze. You need the client tooling tridentctl, which will do some of the required operations for you (more on this further down).
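In case you are wondering where tridentctl comes from: it ships in the Trident installer bundle on GitHub. A minimal sketch, assuming a 21.07.1 release (the version is my assumption, pick the one matching your cluster from the releases page):

```bash
# Download the Trident installer bundle and put tridentctl on the PATH
# (the version is an assumption, check the releases page for the right one)
trident_version=21.07.1
curl -sL "https://github.com/NetApp/trident/releases/download/v${trident_version}/trident-installer-${trident_version}.tar.gz" | tar -xz
sudo cp trident-installer/tridentctl /usr/local/bin/
```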
I created my ANF account and pool with the Azure CLI (Sean is using the Azure Portal). Trident needs a Service Principal to interact with Azure NetApp Files. In my case I am using the cluster SP, to which I granted Contributor access on the ANF account:
```bash
az netappfiles account create -g $rg -n $anf_name -l $anf_location
az netappfiles pool create -g $rg -a $anf_name -n $anf_name -l $anf_location \
    --size 4 --service-level Standard
anf_account_id=$(az netappfiles account show -n $anf_name -g $rg --query id -o tsv)
az role assignment create --scope $anf_account_id --assignee $sp_app_id --role 'Contributor'
```
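One detail that is easy to miss: the subnet you will reference in the Trident backend further down must be delegated to Azure NetApp Files. If you don't have one yet, a sketch along these lines should do (the address prefix is just an example, adapt it to your VNet):

```bash
# Create a subnet in the cluster's VNet delegated to ANF volumes
az network vnet subnet create -g $rg --vnet-name $vnet_name -n $anf_subnet_name \
    --address-prefixes 192.168.100.0/24 \
    --delegations "Microsoft.NetApp/volumes"
```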
Now you need to install the Trident software (unsurprisingly, Helm is your friend here), and add a “backend”, which will teach Trident how to access that Azure NetApp Files pool you created a minute ago:
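For reference, the Helm install can use the chart bundled in the same installer tarball we got tridentctl from; the chart path below reflects the bundle layout at the time of writing, so adjust it to your version:

```bash
# Install the Trident operator with the Helm chart shipped in the installer bundle
trident_ns=trident
helm install trident -n $trident_ns --create-namespace \
    "trident-installer/helm/trident-operator-${trident_version}.tgz"
```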
```bash
# Create ANF backend
# Credits to https://github.com/seanluce/ANF_Trident_AKS
subscription_id=$(az account show --query id -o tsv)
tenant_id=$(az account show --query tenantId -o tsv)
trident_backend_file=/tmp/trident_backend.json
cat <<EOF > $trident_backend_file
{
    "version": 1,
    "storageDriverName": "azure-netapp-files",
    "subscriptionID": "$subscription_id",
    "tenantID": "$tenant_id",
    "clientID": "$sp_app_id",
    "clientSecret": "$sp_app_secret",
    "location": "$anf_location",
    "serviceLevel": "Standard",
    "virtualNetwork": "$vnet_name",
    "subnet": "$anf_subnet_name",
    "nfsMountOptions": "vers=3,proto=tcp,timeo=600",
    "limitVolumeSize": "500Gi",
    "defaults": {
        "exportRule": "0.0.0.0/0",
        "size": "200Gi"
    }
}
EOF
tridentctl -n $trident_ns create backend -f $trident_backend_file
```
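Before moving on, it is worth checking that Trident actually accepted the backend:

```bash
# The backend should be listed as online
tridentctl -n $trident_ns get backend
```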
After this, we need a way for OpenShift to consume the backend through standard Kubernetes constructs, like any other storage technology: a storage class.
```bash
# Create Storage Class
# Credits to https://github.com/seanluce/ANF_Trident_AKS
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurenetappfiles
provisioner: netapp.io/trident
parameters:
  backendType: "azure-netapp-files"
EOF
```
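I kept managed-premium as the default, but if you wanted every PVC without an explicit class to land on ANF, you could flip the default annotation (optional, not something this post does):

```bash
# Optional: make the ANF class the cluster default
kubectl patch storageclass azurenetappfiles \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```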
So OpenShift will now have two storage classes: the default one, which leverages managed Azure Premium disks, plus the new one created to interact with ANF:
```
$ k get sc
NAME                        PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION
azurenetappfiles            csi.trident.netapp.io      Delete          Immediate              false
managed-premium (default)   kubernetes.io/azure-disk   Delete          WaitForFirstConsumer   true
```
And here comes the magic: when a Persistent Volume Claim is created against that storage class, an ANF volume is instantiated too, matching the parameters specified in the PVC. To create the PVC I will stick to Sean's example, with a 100 GiB volume:
```bash
# Create PVC
# Credits to https://github.com/seanluce/ANF_Trident_AKS
cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: azurenetappfiles
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: azurenetappfiles
EOF
```
The PVC is now visible in OpenShift:
```
$ k get pvc
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
azurenetappfiles   Bound    pvc-adc0348d-5752-4e44-82c2-8205f39c376d   100Gi      RWX            azurenetappfiles   8h
```
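Trident's own CLI offers a similar view of the volume it just provisioned behind that PVC:

```bash
# List the volumes Trident manages on its backends
tridentctl -n $trident_ns get volume
```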
And sure enough, you can use the Azure Portal to browse your account, pool, and the newly created volume:

The Azure CLI will give us information about the created volume as well. The default output was a bit busy and didn't fit in my screen width, so I picked my own set of columns that I was interested in:
```
$ az netappfiles volume list -g $rg -a $anf_name -p $anf_name -o table \
    --query '[].{Name:name, ProvisioningState:provisioningState, ThroughputMibps:throughputMibps, ServiceLevel:serviceLevel, Location:location}'
Name                                                      ProvisioningState    ThroughputMibps    ServiceLevel    Location
--------------------------------------------------------  -------------------  -----------------  --------------  -----------
anf5550/anf5550/pvc-adc0348d-5752-4e44-82c2-8205f39c376d  Succeeded            1.6                Standard        northeurope
```
Interestingly enough, I couldn't see the volume size in the object properties, but it can be easily inferred: the volume is Standard, and from the Azure NetApp Files performance tiers we know that Standard means 16 MiB/s per provisioned TiB. Hence, 1.6 MiB/s means 100 GiB: maths still work!
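If you want to double check the arithmetic:

```bash
# Standard tier: 16 MiB/s per provisioned TiB; the volume is 100 GiB = 100/1024 TiB
echo "scale=2; 16 * 100 / 1024" | bc   # 1.56 MiB/s, matching the 1.6 reported above
```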
I used my sqlapi image, which includes a rudimentary I/O performance benchmark tool based on this code by thodnev, to verify those expected 1.6 MiB/s:
```bash
# Deployment
name=api
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $name
  labels:
    app: $name
    deploymethod: trident
spec:
  replicas: 1
  selector:
    matchLabels:
      app: $name
  template:
    metadata:
      labels:
        app: $name
        deploymethod: trident
    spec:
      containers:
      - name: $name
        image: erjosito/sqlapi:1.0
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: disk01
          mountPath: /mnt/disk
      volumes:
      - name: disk01
        persistentVolumeClaim:
          claimName: azurenetappfiles
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: $name
  name: $name
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: $name
  type: LoadBalancer
EOF
```
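Once the Azure load balancer has allocated a public IP for the service (this can take a minute or two), you can grab it for the benchmark call:

```bash
# Retrieve the public IP of the LoadBalancer service
api_ip=$(kubectl get svc $name -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $api_ip
```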
And here are the results of the I/O benchmark (I am not sure why the read bandwidth is almost 1000x the write bandwidth; I might have a bug in the I/O benchmarking code, or there is some caching involved somewhere. I will update this post when I find out more):
```
❯ curl 'http://40.127.231.103:8080/api/ioperf?size=512&file=%2Fmnt%2Fdisk%2Fiotest'
{
  "Filepath": "/mnt/disk/iotest",
  "Read IOPS": 201567.0,
  "Read bandwidth in MB/s": 1574.75,
  "Read block size (KB)": 8,
  "Read blocks": 65536,
  "Read time (sec)": 0.33,
  "Write IOPS": 13.0,
  "Write bandwidth in MB/s": 1.62,
  "Write block size (KB)": 128,
  "Write time (sec)": 315.51,
  "Written MB": 512,
  "Written blocks": 4096
}
```
When you delete the application from OpenShift, including the PVC, the Azure NetApp Files volume disappears as well, without anybody having to log in to Azure to do anything.
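That is, cleanup is just standard Kubernetes; deleting the PVC is what triggers Trident to remove the ANF volume behind it (the resource names match what we deployed above):

```bash
# Remove the workload and the PVC; Trident deletes the backing ANF volume
kubectl delete deployment/$name service/$name
kubectl delete pvc/azurenetappfiles
# The volume list in the ANF pool should come back empty
az netappfiles volume list -g $rg -a $anf_name -p $anf_name -o table
```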
So that concludes this post, with a boring “it works as expected”. Thanks for reading!