8. Integrating Trident¶
8.1. Trident backend design¶
8.1.1. ONTAP¶
Choosing a backend driver for ONTAP
Five different backend drivers are available for ONTAP systems. These drivers are differentiated by the protocol being used and by how the volumes are provisioned on the storage system, so give careful consideration to which driver to deploy.
At a high level, if your application has components which need shared storage (multiple pods accessing the same PVC), the NAS-based drivers are the default choice, while the block-based iSCSI drivers meet the needs of non-shared storage. Choose the protocol based on the requirements of the application and the comfort level of the storage and infrastructure teams. Generally speaking, there is little difference between them for most applications, so the decision often comes down to whether or not shared storage (where more than one pod needs simultaneous access) is required.
The five drivers for ONTAP backends are listed below:
- `ontap-nas` – Each PV provisioned is a full ONTAP FlexVolume.
- `ontap-nas-economy` – Each PV provisioned is a qtree, with up to 200 qtrees per FlexVolume.
- `ontap-nas-flexgroup` – Each PV provisioned is a full ONTAP FlexGroup, and all aggregates assigned to the SVM are used.
- `ontap-san` – Each PV provisioned is a LUN within its own FlexVolume.
- `ontap-san-economy` – Each PV provisioned is a LUN within a set of automatically managed FlexVols.
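For reference, a minimal backend definition for the `ontap-nas` driver looks similar to the sketch below; the LIF addresses, SVM name, and credentials are placeholder values, and switching `storageDriverName` to any of the other drivers listed above selects that provisioning behavior instead.

```json
{
  "version": 1,
  "storageDriverName": "ontap-nas",
  "managementLIF": "10.0.0.1",
  "dataLIF": "10.0.0.2",
  "svm": "svm_nfs",
  "username": "vsadmin",
  "password": "secret"
}
```

The file is then passed to `tridentctl create backend -f backend.json`.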
Choosing between the three NFS drivers has some ramifications for the features which are made available to the application.
Note that, in the tables below, not all of the capabilities are exposed through Trident. Some must be applied by the storage administrator after provisioning if that functionality is desired. The superscript footnotes distinguish the functionality per feature and driver.
| ONTAP NFS Drivers | Snapshots | Clones | Multi-attach | QoS | Resize | Replication |
| --- | --- | --- | --- | --- | --- | --- |
| `ontap-nas` | Yes¹ | Yes | Yes | Yes¹ | Yes | Yes¹ |
| `ontap-nas-economy` | Yes¹² | Yes¹² | Yes | Yes¹² | Yes | Yes¹² |
| `ontap-nas-flexgroup` | Yes¹ | No | Yes | Yes¹ | Yes | Yes¹ |
Trident offers two SAN drivers for ONTAP, whose capabilities are shown below.
| ONTAP SAN Driver | Snapshots | Clones | Multi-attach | QoS | Resize | Replication |
| --- | --- | --- | --- | --- | --- | --- |
| `ontap-san` | Yes¹ | Yes | Yes³ | Yes¹ | Yes | Yes¹ |
| `ontap-san-economy` | Yes¹ | Yes | Yes³ | Yes¹² | Yes¹ | Yes¹² |
Features that are not PV-granular are applied to the entire FlexVolume, and all of the PVs (that is, qtrees or LUNs in shared FlexVols) will share a common schedule.
As you can see in the above tables, much of the functionality of the `ontap-nas` and `ontap-nas-economy` drivers is the same. However, because the `ontap-nas-economy` driver limits the ability to control the schedule at per-PV granularity, this may affect your disaster recovery and backup planning in particular. For development teams that want to leverage PVC clone functionality on ONTAP storage, this is only possible when using the `ontap-nas`, `ontap-san`, or `ontap-san-economy` drivers (note that the `solidfire-san` driver is also capable of cloning PVCs).
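As a sketch of that clone workflow with a CSI Trident deployment, a new PVC can simply reference an existing PVC as its data source; the storage class and PVC names below are hypothetical.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: database-clone           # the new, cloned volume
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ontap-gold   # hypothetical class backed by ontap-nas or ontap-san
  resources:
    requests:
      storage: 10Gi              # at least the size of the source volume
  dataSource:
    kind: PersistentVolumeClaim
    name: database-data          # existing PVC in the same namespace to clone from
```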
Choosing a backend driver for Cloud Volumes ONTAP
Cloud Volumes ONTAP provides data control along with enterprise-class storage features for various use cases, including file shares and block-level storage serving NAS and SAN protocols (NFS, SMB/CIFS, and iSCSI). The compatible drivers for Cloud Volumes ONTAP are `ontap-nas`, `ontap-nas-economy`, `ontap-san`, and `ontap-san-economy`. These drivers are applicable to Cloud Volumes ONTAP for AWS, Cloud Volumes ONTAP for Azure, and Cloud Volumes ONTAP for GCP.
8.1.2. Element (HCI/SolidFire)¶
The `solidfire-san` driver, used with the HCI/SolidFire platforms, helps the admin configure an Element backend for Trident on the basis of QoS limits. If you would like to design your backend to set specific QoS limits on the volumes provisioned by Trident, use the `type` parameter in the backend file. The admin can also restrict the volume size that can be created on the storage using the `limitVolumeSize` parameter. Currently, Element OS volume replication is not supported through the `solidfire-san` driver; replication relationships should be managed manually through the Element OS Web UI.
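A sketch of an Element backend definition that sets QoS tiers and a volume size limit is shown below; the endpoint, SVIP, tenant name, and the three tiers are placeholder values to adapt to your environment.

```json
{
  "version": 1,
  "storageDriverName": "solidfire-san",
  "Endpoint": "https://admin:password@10.0.0.10/json-rpc/8.0",
  "SVIP": "10.0.1.10:3260",
  "TenantName": "trident",
  "limitVolumeSize": "500Gi",
  "Types": [
    {"Type": "Bronze", "Qos": {"minIOPS": 1000, "maxIOPS": 2000, "burstIOPS": 4000}},
    {"Type": "Silver", "Qos": {"minIOPS": 4000, "maxIOPS": 6000, "burstIOPS": 8000}},
    {"Type": "Gold", "Qos": {"minIOPS": 6000, "maxIOPS": 8000, "burstIOPS": 10000}}
  ]
}
```

Volumes provisioned against a given type inherit the corresponding QoS limits; a Storage Class can steer provisioning toward a tier, for example through the `IOPS` attribute discussed later in this guide.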
| SolidFire Driver | Snapshots | Clones | Multi-attach | QoS | Resize | Replication |
| --- | --- | --- | --- | --- | --- | --- |
| `solidfire-san` | Yes¹ | Yes | Yes² | Yes | Yes | Yes¹ |
8.1.3. SANtricity (E-Series)¶
To configure an E-Series backend for Trident, set the `storageDriverName` parameter to `eseries-iscsi` in the backend configuration. Once the E-Series backend has been configured, any requests to provision volumes from the E-Series are handled by Trident based on host groups. Trident uses host groups to gain access to the LUNs that it provisions; by default, it looks for a host group named `trident` unless a different host group name is specified using the `accessGroupName` parameter in the backend configuration. The admin can also restrict the volume size that can be created on the storage using the `limitVolumeSize` parameter. Currently, E-Series storage features like volume resize and volume replication are not supported through the `eseries-iscsi` driver. These operations should be done manually through SANtricity System Manager.
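A sketch of an E-Series backend definition, assuming a SANtricity Web Services Proxy is reachable at the address shown (all hostnames, addresses, and credentials are placeholders):

```json
{
  "version": 1,
  "storageDriverName": "eseries-iscsi",
  "webProxyHostname": "localhost",
  "webProxyPort": "8443",
  "webProxyUseHTTP": false,
  "webProxyVerifyTLS": true,
  "username": "rw",
  "password": "rw",
  "controllerA": "10.0.0.5",
  "controllerB": "10.0.0.6",
  "passwordArray": "",
  "hostDataIP": "10.0.0.101",
  "accessGroupName": "trident",
  "limitVolumeSize": "500Gi"
}
```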
| E-Series Driver | Snapshots | Clones | Multi-attach | QoS | Resize | Replication |
| --- | --- | --- | --- | --- | --- | --- |
| `eseries-iscsi` | Yes¹ | Yes¹ | Yes² | No | Yes | Yes¹ |
8.1.4. Azure NetApp Files Backend Driver¶
Trident uses the `azure-netapp-files` driver to manage the Azure NetApp Files service.
More information about this driver and how to configure it can be found in Trident’s Azure NetApp Files backend documentation.
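A minimal sketch of an `azure-netapp-files` backend definition is shown below; the service principal credentials, location, and service level are placeholders, and the backend documentation covers the full set of options (such as virtual network and subnet).

```json
{
  "version": 1,
  "storageDriverName": "azure-netapp-files",
  "subscriptionID": "<subscription-id>",
  "tenantID": "<tenant-id>",
  "clientID": "<client-id>",
  "clientSecret": "<client-secret>",
  "location": "eastus",
  "serviceLevel": "Ultra"
}
```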
| Azure NetApp Files Driver | Snapshots | Clones | Multi-attach | QoS | Expand | Replication |
| --- | --- | --- | --- | --- | --- | --- |
| `azure-netapp-files` | Yes¹ | Yes | Yes | Yes | Yes | Yes¹ |
8.1.5. Cloud Volumes Service with AWS Backend Driver¶
Trident uses the `aws-cvs` driver to link with the Cloud Volumes Service on the AWS backend. To configure the AWS backend on Trident, you are required to specify `apiRegion`, `apiURL`, `apiKey`, and `secretKey` in the backend file. These values can be found in the CVS web portal under Account settings > API access. The supported service levels are aligned with CVS and include standard, premium, and extreme. More information on this driver can be found in the Cloud Volumes Service for AWS documentation. Currently, 100 GB is the minimum volume size that will be provisioned. Future releases of CVS may remove this restriction.
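A minimal sketch of an `aws-cvs` backend definition; the region, URL, and keys shown are placeholders for the values reported in the CVS portal.

```json
{
  "version": 1,
  "storageDriverName": "aws-cvs",
  "apiRegion": "us-east-1",
  "apiURL": "https://cds-aws-bundles.netapp.com:8080/v1",
  "apiKey": "<api-key-from-cvs-portal>",
  "secretKey": "<secret-key-from-cvs-portal>"
}
```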
| CVS for AWS Driver | Snapshots | Clones | Multi-attach | QoS | Expand | Replication |
| --- | --- | --- | --- | --- | --- | --- |
| `aws-cvs` | Yes¹ | Yes | Yes | Yes | Yes | Yes¹ |
The `aws-cvs` driver uses virtual storage pools. Virtual storage pools abstract the backend, letting Trident decide volume placement. The administrator defines the virtual storage pools in the backend.json file(s). Storage classes identify the virtual storage pools with the use of labels. More information on this feature can be found in the Virtual Storage Pools documentation.
8.1.6. Cloud Volumes Service with GCP Backend Driver¶
Trident uses the `gcp-cvs` driver to link with the Cloud Volumes Service on the GCP backend. To configure the GCP backend on Trident, you are required to specify `projectNumber`, `apiRegion`, and `apiKey` in the backend file. The project number may be found in the GCP web portal, while the API key must be taken from the service account private key file that you created while setting up API access for Cloud Volumes on GCP. The supported service levels are aligned with CVS and include standard, premium, and extreme. More information on this driver can be found in the Cloud Volumes Service for GCP documentation. Currently, 1 TiB is the minimum volume size that will be provisioned. Future releases of CVS may remove this restriction.
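A minimal sketch of a `gcp-cvs` backend definition; the project number and region are placeholders, and the `apiKey` object holds the (abridged here) contents of the service account private key file.

```json
{
  "version": 1,
  "storageDriverName": "gcp-cvs",
  "projectNumber": "<project-number>",
  "apiRegion": "us-west2",
  "apiKey": {
    "type": "service_account",
    "project_id": "<gcp-project-id>",
    "private_key_id": "<key-id>",
    "private_key": "-----BEGIN PRIVATE KEY-----\n<key-material>\n-----END PRIVATE KEY-----\n",
    "client_email": "<service-account>@<gcp-project-id>.iam.gserviceaccount.com"
  }
}
```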
| CVS for GCP Driver | Snapshots | Clones | Multi-attach | QoS | Expand | Replication |
| --- | --- | --- | --- | --- | --- | --- |
| `gcp-cvs` | Yes¹ | Yes | Yes | Yes | Yes | Yes¹ |
The `gcp-cvs` driver also uses virtual storage pools. Virtual storage pools abstract the backend, letting Trident decide volume placement. The administrator defines the virtual storage pools in the backend.json file(s). Storage classes identify the virtual storage pools with the use of labels. More information on this feature can be found in the Virtual Storage Pools documentation.
8.2. Storage Class design¶
Individual Storage Classes need to be configured and applied to create a Kubernetes Storage Class object. This section discusses how to design a storage class for your application.
8.2.1. Storage Class design for specific backend utilization¶
Filtering can be used within a specific storage class object to determine which storage pool or set of pools are to be used with that specific storage class. Three sets of filters can be set in the Storage Class: `storagePools`, `additionalStoragePools`, and/or `excludeStoragePools`.

The `storagePools` parameter helps restrict storage to the set of pools that match any specified attributes. The `additionalStoragePools` parameter extends the set of pools that Trident will use for provisioning beyond the pools selected by the attributes and `storagePools` parameters. You can use either parameter alone or both together to make sure that the appropriate set of storage pools is selected.

The `excludeStoragePools` parameter specifically excludes the listed set of pools that match the attributes.
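As an illustrative sketch, the three filters appear as parameters of a Trident Storage Class; the backend and pool names (`ontapnas_backend`, `aggr1`, and so on) are hypothetical, and the provisioner string assumes a CSI Trident deployment.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold
provisioner: csi.trident.netapp.io     # netapp.io/trident for legacy, non-CSI deployments
parameters:
  backendType: "ontap-nas"
  # provision only from these pools (aggregates) on the named backend
  storagePools: "ontapnas_backend:aggr1,aggr2"
  # additionally allow any pool from a second backend
  additionalStoragePools: "ontapsan_backend:.*"
  # never use this pool, even if it matches other attributes
  excludeStoragePools: "ontapnas_backend:aggr3"
```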
Please refer to Trident StorageClass Objects on how these parameters are used.
8.2.2. Storage Class design to emulate QoS policies¶
If you would like to design Storage Classes to emulate Quality of Service policies, create a Storage Class with the `media` attribute set to `hdd` or `ssd`. Based on the `media` attribute specified in the storage class, Trident selects the appropriate backend that serves `hdd` or `ssd` aggregates and directs the provisioning of the volumes onto the matching aggregates. For example, a storage class PREMIUM with the `media` attribute set to `ssd` could be classified as the PREMIUM QoS policy, and a storage class STANDARD with the `media` attribute set to `hdd` could be classified as the STANDARD QoS policy. You could also use the `IOPS` attribute in the storage class to redirect provisioning to an Element appliance, where it can be defined as a QoS policy.
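A sketch of the two classes described above; the class names, backend type, and provisioner string are assumptions for a CSI Trident deployment with ONTAP backends.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: premium                  # emulates the PREMIUM QoS policy
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"
  media: "ssd"                   # place volumes only on SSD aggregates
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                 # emulates the STANDARD QoS policy
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"
  media: "hdd"                   # place volumes only on HDD aggregates
```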
Please refer to Trident StorageClass Objects on how these parameters can be used.
8.2.3. Storage Class design to utilize a backend based on specific features¶
Storage Classes can be designed to direct volume provisioning on a specific backend where features such as thin and thick provisioning, snapshots, clones, and encryption are enabled. To specify which storage to use, create Storage Classes that specify the appropriate backend with the required feature enabled.
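For example, a class targeting backends that offer thin provisioning, snapshots, clones, and encryption might look like the following sketch; the attribute values are assumptions and should match what your backends actually expose.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: protected
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"
  provisioningType: "thin"
  snapshots: "true"
  clones: "true"
  encryption: "true"
```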
Please refer to Trident StorageClass Objects on how these parameters can be used.
8.2.4. Storage Class Design for Virtual Storage Pools¶
Virtual Storage Pools are available for Cloud Volumes Service for AWS, Cloud Volumes Service for GCP, ANF, Element, and E-Series backends.
Virtual Storage Pools allow an administrator to create a level of abstraction over backends which can be referenced through Storage Classes, for greater flexibility and efficient placement of volumes on backends. Different backends can be defined with the same class of service. Moreover, multiple Storage Pools can be created on the same backend but with different characteristics. When a Storage Class is configured with a selector that has specific labels, Trident chooses a backend which matches all the selector labels to place the volume. If the Storage Class selector labels match multiple Storage Pools, Trident will choose one of them to provision the volume from.
Please refer to Virtual Storage Pools for more information and applicable parameters.
8.3. Virtual Storage Pool Design¶
When creating a backend, you can generally specify a set of parameters, but it was previously impossible for the administrator to create another backend with the same storage credentials and a different set of parameters. With the introduction of Virtual Storage Pools, this issue has been alleviated. Virtual Storage Pools are a level of abstraction introduced between the backend and the Kubernetes Storage Class so that the administrator can define parameters along with labels which can be referenced through Kubernetes Storage Classes as a selector, in a backend-agnostic way. Currently, Virtual Storage Pools are supported for Cloud Volumes Service for AWS, Cloud Volumes Service for GCP, ANF, E-Series, and Element (HCI/SolidFire) backends.
8.3.1. Design Virtual Storage Pools for emulating different Service Levels/QoS¶
It is possible to design Virtual Storage Pools to emulate service classes. Using the virtual pool implementation for Cloud Volumes Service for AWS, let us examine how to set up different service classes. Configure the AWS-CVS backend with multiple labels, representing different performance levels. Set the `serviceLevel` aspect to the appropriate performance level and add any other required aspects under each label. Now create different Kubernetes Storage Classes that map to the different virtual Storage Pools. Using the `parameters.selector` field, each StorageClass calls out which virtual pool(s) may be used to host a volume.
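A sketch of an AWS-CVS backend carrying three virtual storage pools, one per service level; the label key `performance` and its values are arbitrary names chosen by the administrator, and the credentials are placeholders.

```json
{
  "version": 1,
  "storageDriverName": "aws-cvs",
  "apiRegion": "us-east-1",
  "apiURL": "https://cds-aws-bundles.netapp.com:8080/v1",
  "apiKey": "<api-key>",
  "secretKey": "<secret-key>",
  "storage": [
    {
      "labels": {"performance": "extreme"},
      "serviceLevel": "extreme"
    },
    {
      "labels": {"performance": "premium"},
      "serviceLevel": "premium"
    },
    {
      "labels": {"performance": "standard"},
      "serviceLevel": "standard"
    }
  ]
}
```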
8.3.2. Design Virtual Pools for Assigning a Specific Set of Aspects¶
Multiple Virtual Storage Pools with a specific set of aspects can be designed from a single storage backend. To do so, configure the backend with multiple labels and set the required aspects under each label. Now create different Kubernetes Storage Classes, using the `parameters.selector` field, that map to the different Virtual Storage Pools. The volumes that get provisioned on the backend will have the aspects defined in the chosen Virtual Storage Pool.
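A corresponding Storage Class selects a virtual pool by its labels through the `parameters.selector` field; the label and class names below are hypothetical.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cvs-extreme
provisioner: csi.trident.netapp.io
parameters:
  selector: "performance=extreme"   # match only virtual pools carrying this label
```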
8.4. PVC characteristics which affect storage provisioning¶
Some parameters beyond the requested storage class may affect Trident’s provisioning decision process when creating a PVC.
8.4.1. Access mode¶
When requesting storage via a PVC, one of the mandatory fields is the access mode. The mode desired may affect the backend selected to host the storage request.
Trident will attempt to match the storage protocol used with the access method specified according to the following matrix. This is independent of the underlying storage platform.
| | ReadWriteOnce | ReadOnlyMany | ReadWriteMany |
| --- | --- | --- | --- |
| iSCSI | Yes | Yes | No |
| NFS | Yes | Yes | Yes |
A request for a ReadWriteMany PVC submitted to a Trident deployment without an NFS backend configured will result in no volume being provisioned. For this reason, the requestor should use the access mode which is appropriate for their application.
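For instance, a claim that several pods must mount simultaneously has to request `ReadWriteMany`, which in turn requires a storage class backed by an NFS driver; the class name below is hypothetical.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-content
spec:
  accessModes:
    - ReadWriteMany              # only satisfiable by an NFS-backed storage class
  storageClassName: nas-standard # hypothetical class using an ontap-nas backend
  resources:
    requests:
      storage: 10Gi
```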
8.5. Volume Operations¶
8.5.1. Modifying persistent volumes¶
Persistent volumes are, with two exceptions, immutable objects in Kubernetes. Once created, the reclaim policy and the size can be modified. However, this doesn’t prevent some aspects of the volume from being modified outside of Kubernetes. This may be desirable in order to customize the volume for specific applications, to ensure that capacity is not accidentally consumed, or simply to move the volume to a different storage controller for any reason.
Note
Kubernetes in-tree provisioners do not support volume resize operations for NFS or iSCSI PVs at this time. Trident supports expanding both NFS and iSCSI volumes. For a list of PV types which support volume resizing refer to the Kubernetes documentation.
The connection details of the PV cannot be modified after creation.
8.5.2. Volume Move Operations¶
Storage administrators have the ability to move volumes between aggregates and controllers in the ONTAP cluster non-disruptively to the storage consumer. This operation does not affect Trident or the Kubernetes cluster, as long as the destination aggregate is one to which the SVM that Trident is using has access. Importantly, if the aggregate has been newly added to the SVM, the backend will need to be “refreshed” by re-adding it to Trident. This will trigger Trident to re-inventory the SVM so that the new aggregate is recognized.
However, moving volumes across backends is not supported automatically by Trident. This includes moves between SVMs in the same cluster, between clusters, or onto a different storage platform (even if that storage system is one which is connected to Trident).
If a volume is copied to another location, the volume import feature may be used to import current volumes into Trident.
8.5.3. Expanding volumes¶
Trident supports resizing NFS and iSCSI PVs, beginning with the 18.10 and 19.10 releases respectively. This enables users to resize their volumes directly through the Kubernetes layer. Volume expansion is possible for all major NetApp storage platforms, including ONTAP, Element/HCI, and Cloud Volumes Service backends. Take a look at Expanding an NFS volume and Expanding an iSCSI volume for examples and conditions that must be met.
To allow possible expansion later, set `allowVolumeExpansion` to `true` in the StorageClass associated with the volume. Whenever the Persistent Volume needs to be resized, edit the `spec.resources.requests.storage` field in the Persistent Volume Claim to the required volume size. Trident will automatically take care of resizing the volume on the storage cluster.
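As a sketch, the Storage Class enables expansion and the PVC is then edited (or patched) to the new size; the names used here are hypothetical.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontapnas-expandable
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"
allowVolumeExpansion: true
```

A volume provisioned from this class can later be grown with, for example, `kubectl patch pvc mydata --type merge -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'`, after which Trident resizes the volume on the backend.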
Note
- Resizing iSCSI PVs requires Kubernetes 1.16 and Trident 19.10 or later.
- Kubernetes, prior to version 1.12, does not support PV resize as the admission controller may reject PVC size updates. The Trident team has changed Kubernetes to allow such changes starting with Kubernetes 1.12. While we recommend using Kubernetes 1.12, it is still possible to resize NFS PVs for earlier versions of Kubernetes that support resize. This is done by disabling the PersistentVolumeClaimResize admission plugin when the Kubernetes API server is started.
8.5.4. Import an existing volume into Kubernetes¶
Volume Import provides the ability to import an existing storage volume into a Kubernetes environment. This is currently supported by the `ontap-nas`, `ontap-nas-flexgroup`, `solidfire-san`, `azure-netapp-files`, `aws-cvs`, and `gcp-cvs` drivers. This feature is useful when porting an existing application into Kubernetes or during disaster recovery scenarios.
When using the ONTAP and `solidfire-san` drivers, use the command `tridentctl import volume <backend-name> <volume-name> -f /path/pvc.yaml` to import an existing volume into Kubernetes to be managed by Trident. The PVC YAML or JSON file used in the import volume command points to a storage class which identifies Trident as the provisioner. When using an HCI/SolidFire backend, ensure the volume names are unique. If the volume names are duplicated, clone the volume to a unique name so that the volume import feature can distinguish between them.
If the `aws-cvs`, `azure-netapp-files`, or `gcp-cvs` driver is used, use the command `tridentctl import volume <backend-name> <volume path> -f /path/pvc.yaml` to import the volume into Kubernetes to be managed by Trident. This ensures a unique volume reference.
When the above command is executed, Trident will find the volume on the backend and read its size. It will automatically add (and overwrite if necessary) the configured PVC’s volume size. Trident then creates the new PV and Kubernetes binds the PVC to the PV.
If a container was deployed such that it required the specific imported PVC, it would remain in a pending state until the PVC/PV pair are bound via the volume import process. After the PVC/PV pair are bound, the container should come up, provided there are no other issues.
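A sketch of the workflow: write a PVC file that references a Trident storage class, then point `tridentctl` at the existing volume on the backend; the backend, volume, class, and namespace names below are placeholders.

```yaml
# pvc.yaml - the claim the imported volume will be bound to
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: imported-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ontap-gold   # a class that identifies Trident as the provisioner
  resources:
    requests:
      storage: 1Gi               # overwritten with the size Trident discovers on the backend
```

The import itself is then, for example, `tridentctl import volume ontapnas_backend app_data -f ./pvc.yaml -n trident`, where `-n` names the namespace Trident is running in.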
For more information, please see the documentation.
8.6. Deploying OpenShift services using Trident¶
The OpenShift value-add cluster services provide important functionality to cluster administrators and the applications being hosted. The storage which these services use can be provisioned using the node-local resources, however, this often limits the capacity, performance, recoverability, and sustainability of the service. Leveraging an enterprise storage array to provide the capacity to these services can enable dramatically improved service, however, as with all applications, the OpenShift and storage administrators should work closely together to determine the best options for each. The Red Hat documentation should be leveraged heavily to determine the requirements and ensure that sizing and performance needs are met.
8.6.1. Registry service¶
Deploying and managing storage for the registry has been documented on netapp.io in this blog post.
8.6.2. Logging service¶
Like other OpenShift services, the logging service is deployed using Ansible with configuration parameters supplied by the inventory file, a.k.a. hosts, provided to the playbook. There are two installation methods which will be covered: deploying logging during initial OpenShift install and deploying logging after OpenShift has been installed.
Warning
As of Red Hat OpenShift version 3.9, the official documentation recommends against NFS for the logging service due to concerns around data corruption. This is based on Red Hat testing of their products. ONTAP’s NFS server does not have these issues, and can easily back a logging deployment. Ultimately, the choice of protocol for the logging service is up to you, just know that both will work great when using NetApp platforms and there is no reason to avoid NFS if that is your preference.
If you choose to use NFS with the logging service, you will need to set the Ansible variable `openshift_enable_unsupported_configurations` to `true` to prevent the installer from failing.
Getting started
The logging service can, optionally, be deployed both for applications as well as for the core operations of the OpenShift cluster itself. If you choose to deploy operations logging by specifying the variable `openshift_logging_use_ops` as `true`, two instances of the service will be created. The variables which control the logging instance for operations contain “ops” in them, whereas the instance for applications does not.
Configuring the Ansible variables according to the deployment method is important in order to ensure that the correct storage is utilized by the underlying services. Let’s look at the options for each of the deployment methods.
Note
The tables below only contain the variables which are relevant for storage configuration as it relates to the logging service. There are many other options found in the logging documentation which should be reviewed, configured, and used according to your deployment.
The variables in the below table will result in the Ansible playbook creating a PV and PVC for the logging service using the details provided. This method is significantly less flexible than using the component installation playbook after OpenShift installation, however, if you have existing volumes available, it is an option.
| Variable | Details |
| --- | --- |
| `openshift_logging_storage_kind` | Set to `nfs` to have the installer create an NFS PV for the logging service. |
| `openshift_logging_storage_host` | The hostname or IP address of the NFS host. This should be set to the data LIF for your SVM. |
| `openshift_logging_storage_nfs_directory` | The mount path for the NFS export. For example, if the volume is junctioned as `/openshift_logging`, you would use that path for this variable. |
| `openshift_logging_storage_volume_name` | The name, e.g. `pv_ose_logs`, of the PV to create. |
| `openshift_logging_storage_volume_size` | The size of the NFS export, for example `100Gi`. |
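Put together, the relevant inventory lines might look like the following sketch; the NFS host address, export path, and size are placeholders for your environment.

```ini
[OSEv3:vars]
openshift_logging_install_logging=true
openshift_logging_storage_kind=nfs
openshift_logging_storage_host=10.0.0.10
openshift_logging_storage_nfs_directory=/openshift_logging
openshift_logging_storage_volume_name=pv_ose_logs
openshift_logging_storage_volume_size=100Gi
```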
If your OpenShift cluster is already running, and therefore Trident has been deployed and configured, the installer can use dynamic provisioning to create the volumes. The following variables will need to be configured.
| Variable | Details |
| --- | --- |
| `openshift_logging_es_pvc_dynamic` | Set to `true` to use dynamically provisioned volumes. |
| `openshift_logging_es_pvc_storage_class_name` | The name of the storage class which will be used in the PVC. |
| `openshift_logging_es_pvc_size` | The size of the volume requested in the PVC. |
| `openshift_logging_es_pvc_prefix` | A prefix for the PVCs used by the logging service. |
| `openshift_logging_es_ops_pvc_dynamic` | Set to `true` to use dynamically provisioned volumes for the ops logging instance. |
| `openshift_logging_es_ops_pvc_storage_class_name` | The name of the storage class for the ops logging instance. |
| `openshift_logging_es_ops_pvc_size` | The size of the volume requested for the ops instance. |
| `openshift_logging_es_ops_pvc_prefix` | A prefix for the ops instance PVCs. |
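A sketch of the dynamic-provisioning variant, assuming a Trident storage class named `ontap-gold` already exists and operations logging is also enabled:

```ini
[OSEv3:vars]
openshift_logging_install_logging=true
openshift_logging_use_ops=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_storage_class_name=ontap-gold
openshift_logging_es_pvc_size=100Gi
openshift_logging_es_pvc_prefix=logging
openshift_logging_es_ops_pvc_dynamic=true
openshift_logging_es_ops_pvc_storage_class_name=ontap-gold
openshift_logging_es_ops_pvc_size=100Gi
openshift_logging_es_ops_pvc_prefix=logging-ops
```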
Note
A bug exists in OpenShift 3.9 which prevents a storage class from being used when the value for `openshift_logging_es_ops_pvc_dynamic` is set to `true`. However, this can be worked around by, counterintuitively, setting the variable to `false`, which will include the storage class in the PVC.
Deploy the logging stack
If you are deploying logging as a part of the initial OpenShift install process, then you only need to follow the standard deployment process. Ansible will configure and deploy the needed services and OpenShift objects so that the service is available as soon as Ansible completes.
However, if you are deploying after the initial installation, the component playbook will need to be used by Ansible. This process may change slightly with different versions of OpenShift, so be sure to read and follow the documentation for your version.
8.6.3. Metrics service¶
The metrics service provides valuable information to the administrator regarding the status, resource utilization, and availability of the OpenShift cluster. It is also necessary for pod autoscale functionality, and many organizations use data from the metrics service for their chargeback and/or showback applications.

As with the logging service, and OpenShift as a whole, Ansible is used to deploy the metrics service. Also like the logging service, the metrics service can be deployed during an initial setup of the cluster or after it’s operational using the component installation method. The following tables contain the variables which are important when configuring persistent storage for the metrics service.
Note
The tables below only contain the variables which are relevant for storage configuration as it relates to the metrics service. There are many other options found in the documentation which should be reviewed, configured, and used according to your deployment.
| Variable | Details |
| --- | --- |
| `openshift_metrics_storage_kind` | Set to `nfs` to have the installer create an NFS PV for the metrics service. |
| `openshift_metrics_storage_host` | The hostname or IP address of the NFS host. This should be set to the data LIF for your SVM. |
| `openshift_metrics_storage_nfs_directory` | The mount path for the NFS export. For example, if the volume is junctioned as `/openshift_metrics`, you would use that path for this variable. |
| `openshift_metrics_storage_volume_name` | The name, e.g. `pv_ose_metrics`, of the PV to create. |
| `openshift_metrics_storage_volume_size` | The size of the NFS export, for example `100Gi`. |
If your OpenShift cluster is already running, and therefore Trident has been deployed and configured, the installer can use dynamic provisioning to create the volumes. The following variables will need to be configured.
| Variable | Details |
| --- | --- |
| `openshift_metrics_cassandra_pvc_prefix` | A prefix to use for the metrics PVCs. |
| `openshift_metrics_cassandra_pvc_size` | The size of the volumes to request. |
| `openshift_metrics_cassandra_storage_type` | The type of storage to use for metrics; this must be set to `dynamic` for Ansible to create PVCs with the appropriate storage class. |
| `openshift_metrics_cassandra_pvc_storage_class_name` | The name of the storage class to use. |
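A sketch of the corresponding inventory lines, again assuming a pre-existing Trident storage class named `ontap-gold`:

```ini
[OSEv3:vars]
openshift_metrics_install_metrics=true
openshift_metrics_cassandra_storage_type=dynamic
openshift_metrics_cassandra_pvc_storage_class_name=ontap-gold
openshift_metrics_cassandra_pvc_size=100Gi
openshift_metrics_cassandra_pvc_prefix=metrics
```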
Deploying the metrics service
With the appropriate Ansible variables defined in your hosts/inventory file, deploy the service using Ansible. If you are deploying at OpenShift install time, then the PV will be created and used automatically. If you’re deploying using the component playbooks, after OpenShift install, then Ansible will create any PVCs which are needed and, after Trident has provisioned storage for them, deploy the service.
The variables above, and the process for deploying, may change with each version of OpenShift. Ensure you review and follow the deployment guide for your version so that it is configured for your environment.