Deploying with the Trident Operator¶
If you are looking to deploy Trident using the Trident Operator, you are in the right place. This page contains all the steps required for getting started with the Trident Operator to install and manage Trident. You can deploy Trident Operator either manually or using Helm.
Prerequisites¶
If you have not already familiarized yourself with the basic concepts, now is a great time to do that. Go ahead, we’ll be here when you get back.
To deploy Trident using the operator you need:
- Full privileges to a supported Kubernetes cluster running Kubernetes 1.17 or later
- Helm 3 (if deploying using Helm)
- Access to a supported NetApp storage system
- Volume mount capability from all of the Kubernetes worker nodes
- A Linux host with kubectl (or oc, if you're using OpenShift) installed and configured to manage the Kubernetes cluster you want to use
- The KUBECONFIG environment variable set to point to your Kubernetes cluster configuration
- The feature gates required by Trident enabled
Got all that? Great! Let's get started. You can choose to either deploy the Trident Operator by using Helm or deploy it manually.
Deploy Trident Operator by using Helm¶
Perform the steps listed to deploy Trident Operator by using Helm. You will need the following:
- Kubernetes 1.17 or later
- Helm version 3
1: Download the installer bundle¶
Download the Trident 21.07 installer bundle from the Trident GitHub page. The installer bundle includes the Helm chart in the /helm directory.
2: Deploy the Trident operator¶
Use the helm install command and specify a name for your deployment. See the following example:
$ helm install <name> trident-operator-21.07.2.tgz
There are two ways to pass configuration data during the install:
- --values (or -f): Specify a YAML file with overrides. This flag can be specified multiple times, and the rightmost file takes precedence.
- --set: Specify overrides on the command line.
For example, to change the default value of debug, run the following --set command:
$ helm install <name> trident-operator-21.07.2.tgz --set tridentDebug=true
The values.yaml file, which is part of the Helm chart, provides the list of keys and their default values.
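For example, the tridentDebug override shown above can also be supplied through a values file (a sketch; the file name custom-values.yaml is arbitrary, and tridentDebug is the only chart key shown):

```yaml
# custom-values.yaml -- overrides for the Trident operator Helm chart
tridentDebug: true
```

Pass it at install time with helm install <name> trident-operator-21.07.2.tgz --values custom-values.yaml.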
Running helm list shows you details about the Trident installation, such as name, namespace, chart, status, app version, and revision number.
Deploy Trident Operator manually¶
Perform the steps listed to manually deploy Trident Operator.
If you are interested in upgrading an operator-based Trident install to the latest release, take a look at Upgrading Trident.
1: Qualify your Kubernetes cluster¶
You made sure that you have everything in hand from the previous section, right? Right.
The first thing you need to do is log in to the Linux host and verify that it is managing a working, supported Kubernetes cluster on which you have the necessary privileges.
Note
With OpenShift, use oc instead of kubectl in all of the examples that follow, and log in as system:admin first by running oc login -u system:admin or oc login -u kube-admin.
# Is your Kubernetes version 1.17 or later?
kubectl version
# Are you a Kubernetes cluster administrator?
kubectl auth can-i '*' '*' --all-namespaces
# Can you launch a pod that uses an image from Docker Hub and can reach your
# storage system over the pod network?
kubectl run -i --tty ping --image=busybox --restart=Never --rm -- \
ping <management IP>
2: Download and set up the operator¶
Note
Beginning with 21.01, the Trident Operator is cluster-scoped. Using the Trident Operator to install Trident requires creating the TridentOrchestrator Custom Resource Definition and defining other resources. You must perform these steps to set up the operator before you can install Trident.
Download the latest version of the Trident installer bundle from the Downloads section and extract it.
wget https://github.com/NetApp/trident/releases/download/v21.07.2/trident-installer-21.07.2.tar.gz
tar -xf trident-installer-21.07.2.tar.gz
cd trident-installer
Use the appropriate CRD manifest to create the TridentOrchestrator Custom Resource Definition. Later, you will create a TridentOrchestrator Custom Resource to instantiate a Trident installation by the operator.
# Kubernetes version must be 1.17 or later
kubectl create -f deploy/crds/trident.netapp.io_tridentorchestrators_crd_post1.16.yaml
Once the TridentOrchestrator CRD is created, you will need to create the resources required for the operator deployment, such as:
- a ServiceAccount for the operator.
- a ClusterRole and ClusterRoleBinding to the ServiceAccount.
- a dedicated PodSecurityPolicy.
- the Operator itself.
The Trident installer contains manifests for defining these resources. By default, the operator is deployed in the trident namespace. If the trident namespace does not exist, use the following manifest to create it.
$ kubectl apply -f deploy/namespace.yaml
If you would like to deploy the operator in a namespace other than the default trident namespace, update the serviceaccount.yaml, clusterrolebinding.yaml, and operator.yaml manifests and generate your bundle.yaml.
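As a sketch of that edit, the following rewrites the namespace references in the bundled manifests (the target namespace trident-system is a hypothetical example; review the result before generating bundle.yaml):

```shell
# Hypothetical example: deploy the operator in "trident-system" instead of "trident".
# This rewrites the namespace fields in the operator manifests in place.
sed -i 's/namespace: trident$/namespace: trident-system/' \
    deploy/serviceaccount.yaml deploy/clusterrolebinding.yaml deploy/operator.yaml
```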
# Have you updated the yaml manifests? Generate your bundle.yaml
# using the kustomization.yaml
kubectl kustomize deploy/ > deploy/bundle.yaml
# Create the resources and deploy the operator
kubectl create -f deploy/bundle.yaml
You can check the status of the operator after you deploy it.
$ kubectl get deployment -n <operator-namespace>
NAME READY UP-TO-DATE AVAILABLE AGE
trident-operator 1/1 1 1 3m
$ kubectl get pods -n <operator-namespace>
NAME READY STATUS RESTARTS AGE
trident-operator-54cb664d-lnjxh 1/1 Running 0 3m
The operator deployment successfully creates a pod running on one of the worker nodes in your cluster.
Important
There must only be one instance of the operator in a Kubernetes cluster. Do not create multiple deployments of the Trident operator.
3: Creating a TridentOrchestrator and installing Trident¶
You are now ready to install Trident using the operator! This requires creating a TridentOrchestrator. The Trident installer comes with an example definition for creating a TridentOrchestrator, which kicks off a Trident installation in the trident namespace.
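The bundled example defines a minimal CR along these lines (a sketch; the field values match the Spec shown in the describe output that follows: debug enabled, installed in the trident namespace):

```yaml
apiVersion: trident.netapp.io/v1
kind: TridentOrchestrator
metadata:
  name: trident
spec:
  debug: true
  namespace: trident
```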
$ kubectl create -f deploy/crds/tridentorchestrator_cr.yaml
tridentorchestrator.trident.netapp.io/trident created
$ kubectl describe torc trident
Name: trident
Namespace:
Labels: <none>
Annotations: <none>
API Version: trident.netapp.io/v1
Kind: TridentOrchestrator
...
Spec:
Debug: true
Namespace: trident
Status:
Current Installation Params:
IPv6: false
Autosupport Hostname:
Autosupport Image: netapp/trident-autosupport:21.01
Autosupport Proxy:
Autosupport Serial Number:
Debug: true
Enable Node Prep: false
Image Pull Secrets:
Image Registry:
k8sTimeout: 30
Kubelet Dir: /var/lib/kubelet
Log Format: text
Silence Autosupport: false
Trident Image: netapp/trident:21.07.2
Message: Trident installed
Namespace: trident
Status: Installed
Version: v21.07.2
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Installing 74s trident-operator.netapp.io Installing Trident
Normal Installed 67s trident-operator.netapp.io Trident installed
Customizing your deployment¶
The Trident operator enables you to customize the manner in which Trident is installed, using the following attributes in the TridentOrchestrator spec:
Parameter | Description | Default |
---|---|---|
namespace | Namespace to install Trident in | "default" |
debug | Enable debugging for Trident | false |
useIPv6 | Install Trident over IPv6 | false |
k8sTimeout | Timeout for Kubernetes operations | 30 seconds |
silenceAutosupport | Don't send autosupport bundles to NetApp automatically | false |
enableNodePrep | Manage worker node dependencies automatically (BETA) | false |
autosupportImage | The container image for Autosupport telemetry | "netapp/trident-autosupport:21.01.0" |
autosupportProxy | The address/port of a proxy for sending Autosupport telemetry | "http://proxy.example.com:8888" |
uninstall | A flag used to uninstall Trident | false |
logFormat | Trident logging format to be used [text, json] | "text" |
tridentImage | Trident image to install | "netapp/trident:21.07.2" |
imageRegistry | Path to an internal registry, of the format <registry FQDN>[:port][/subpath] | "k8s.gcr.io/sig-storage" |
kubeletDir | Path to the kubelet directory on the host | "/var/lib/kubelet" |
wipeout | A list of resources to delete to perform a complete removal of Trident | |
imagePullSecrets | Secrets to pull images from an internal registry | |
Note
spec.namespace is specified in the TridentOrchestrator to signify which namespace Trident is installed in. This parameter cannot be updated after Trident is installed; attempting to do so causes the status of the TridentOrchestrator to change to Failed. Trident is not meant to be migrated across namespaces.
Warning
Automatic worker node prep is a beta feature meant to be used in non-production environments only.
You can use the attributes mentioned above when defining a TridentOrchestrator to customize your Trident installation. Here’s an example:
$ cat deploy/crds/tridentorchestrator_cr_imagepullsecrets.yaml
apiVersion: trident.netapp.io/v1
kind: TridentOrchestrator
metadata:
name: trident
spec:
debug: true
namespace: trident
tridentImage: netapp/trident:21.07.2
imagePullSecrets:
- thisisasecret
If you are looking to customize Trident's installation beyond what the TridentOrchestrator arguments allow, consider using tridentctl to generate custom YAML manifests that you can modify as desired. Head on over to the deployment guide for tridentctl to learn how this works.
Observing the status of the operator¶
The Status of the TridentOrchestrator will indicate if the installation was successful and will display the version of Trident installed.
Status | Description |
---|---|
Installing | The operator is installing Trident using this TridentOrchestrator CR. |
Installed | Trident has successfully installed. |
Uninstalling | The operator is uninstalling Trident, because spec.uninstall=true. |
Uninstalled | Trident is uninstalled. |
Failed | The operator could not install, patch, update, or uninstall Trident; the operator will automatically try to recover from this state. If this state persists, you will require troubleshooting. |
Updating | The operator is updating an existing Trident installation. |
Error | The TridentOrchestrator is not used. Another one already exists. |
During the installation, the status of the TridentOrchestrator changes from Installing to Installed. If you observe the Failed status and the operator is unable to recover by itself, something is probably wrong and you will need to check the logs of the operator by running tridentctl logs -l trident-operator.
You can also confirm if the Trident install completed by taking a look at the pods that have been created:
$ kubectl get pod -n trident
NAME READY STATUS RESTARTS AGE
trident-csi-7d466bf5c7-v4cpw 5/5 Running 0 1m
trident-csi-mr6zc 2/2 Running 0 1m
trident-csi-xrp7w 2/2 Running 0 1m
trident-csi-zh2jt 2/2 Running 0 1m
trident-operator-766f7b8658-ldzsv 1/1 Running 0 3m
You can also use tridentctl
to check the version of Trident installed.
$ ./tridentctl -n trident version
+----------------+----------------+
| SERVER VERSION | CLIENT VERSION |
+----------------+----------------+
| 21.07.2 | 21.07.2 |
+----------------+----------------+
If that’s what you see, you’re done with this step, but Trident is not
yet fully configured. Go ahead and continue to the
next step to create
a Trident backend using tridentctl
.
However, if the installer does not complete successfully or you don't see a Running trident-csi-<generated id>, then Trident had a problem and was not installed.
To understand why the installation of Trident was unsuccessful, you should
first take a look at the TridentOrchestrator
status.
$ kubectl describe torc trident-2
Name: trident-2
Namespace:
Labels: <none>
Annotations: <none>
API Version: trident.netapp.io/v1
Kind: TridentOrchestrator
...
Status:
Current Installation Params:
IPv6:
Autosupport Hostname:
Autosupport Image:
Autosupport Proxy:
Autosupport Serial Number:
Debug:
Enable Node Prep:
Image Pull Secrets: <nil>
Image Registry:
k8sTimeout:
Kubelet Dir:
Log Format:
Silence Autosupport:
Trident Image:
Message: Trident is bound to another CR 'trident'
Namespace: trident-2
Status: Error
Version:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Error 16s (x2 over 16s) trident-operator.netapp.io Trident is bound to another CR 'trident'
This error indicates that there already exists a TridentOrchestrator that was used to install Trident. Since each Kubernetes cluster can only have one instance of Trident, the operator ensures that at any given time there only exists one active TridentOrchestrator that it can create.
Another thing to do is to check the operator logs. Trailing the logs of the
trident-operator
container can point to where the problem lies.
$ tridentctl logs -l trident-operator
For example, one such issue could be the inability to pull the required container images from upstream registries in an airgapped environment. The logs from the operator can help identify this problem and fix it.
In addition, observing the status of the Trident pods can often indicate if something is not right.
$ kubectl get pods -n trident
NAME READY STATUS RESTARTS AGE
trident-csi-4p5kq 1/2 ImagePullBackOff 0 5m18s
trident-csi-6f45bfd8b6-vfrkw 4/5 ImagePullBackOff 0 5m19s
trident-csi-9q5xc 1/2 ImagePullBackOff 0 5m18s
trident-csi-9v95z 1/2 ImagePullBackOff 0 5m18s
trident-operator-766f7b8658-ldzsv 1/1 Running 0 8m17s
You can clearly see that the pods are not able to initialize completely as one or more container images were not fetched.
To address the problem, you must edit the TridentOrchestrator CR. Alternatively, you can delete the TridentOrchestrator and create a new one with the modified, accurate definition.
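For example, if the image pull failures above stem from an airgapped environment, you could repoint the installation at an internal mirror by patching the CR (a sketch; the registry path is a hypothetical placeholder, and torc is the short name for TridentOrchestrator used earlier in this guide):

```shell
# Hypothetical example: point Trident at an internal image registry.
kubectl patch torc trident --type=merge \
    -p '{"spec":{"imageRegistry":"registry.example.internal/sig-storage"}}'
```

The operator then retries the installation with the updated spec.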
If you continue to have trouble, visit the troubleshooting guide for more advice.
Post-deployment steps¶
After you deploy Trident with the operator, you can proceed with creating a Trident backend, creating a storage class, provisioning a volume, and mounting the volume in a pod.
1: Creating a Trident backend¶
You can now go ahead and create a backend that will be used by Trident
to provision volumes. To do this, create a backend.json
file that
contains the necessary parameters. Sample configuration files for
different backend types can be found in the sample-input
directory.
Visit the backend configuration guide for more details about how to craft the configuration file for your backend type.
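As a sketch, a backend.json for the ontap-nas driver used in this guide's examples might look like the following. All connection values here are illustrative placeholders; see the sample-input templates and the backend configuration guide for the authoritative fields for your backend type.

```json
{
    "version": 1,
    "storageDriverName": "ontap-nas",
    "backendName": "nas-backend",
    "managementLIF": "10.0.0.1",
    "dataLIF": "10.0.0.2",
    "svm": "svm_nfs",
    "username": "vsadmin",
    "password": "password"
}
```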
cp sample-input/<backend template>.json backend.json
# Fill out the template for your backend
vi backend.json
./tridentctl -n trident create backend -f backend.json
+-------------+----------------+--------------------------------------+--------+---------+
| NAME | STORAGE DRIVER | UUID | STATE | VOLUMES |
+-------------+----------------+--------------------------------------+--------+---------+
| nas-backend | ontap-nas | 98e19b74-aec7-4a3d-8dcf-128e5033b214 | online | 0 |
+-------------+----------------+--------------------------------------+--------+---------+
If the creation fails, something was wrong with the backend configuration. You can view the logs to determine the cause by running:
./tridentctl -n trident logs
After addressing the problem, simply go back to the beginning of this step and try again. If you continue to have trouble, visit the troubleshooting guide for more advice on how to determine what went wrong.
2: Creating a Storage Class¶
Kubernetes users provision volumes using persistent volume claims (PVCs) that specify a storage class by name. The details are hidden from users, but a storage class identifies the provisioner that will be used for that class (in this case, Trident) and what that class means to the provisioner.
Create a storage class that Kubernetes users will specify when they want a volume. The configuration of the class needs to model the backend that you created in the previous step so that Trident will use it to provision new volumes.
The simplest storage class to start with is one based on the
sample-input/storage-class-csi.yaml.templ
file that comes with the
installer, replacing __BACKEND_TYPE__
with the storage driver name.
./tridentctl -n trident get backend
+-------------+----------------+--------------------------------------+--------+---------+
| NAME | STORAGE DRIVER | UUID | STATE | VOLUMES |
+-------------+----------------+--------------------------------------+--------+---------+
| nas-backend | ontap-nas | 98e19b74-aec7-4a3d-8dcf-128e5033b214 | online | 0 |
+-------------+----------------+--------------------------------------+--------+---------+
cp sample-input/storage-class-csi.yaml.templ sample-input/storage-class-basic-csi.yaml
# Replace __BACKEND_TYPE__ with the storage driver from the output above (e.g., ontap-nas)
vi sample-input/storage-class-basic-csi.yaml
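After substitution, the storage class manifest looks roughly like this (a sketch; the class name and backendType match this guide's ontap-nas examples):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: basic-csi
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"
```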
This is a Kubernetes object, so you will use kubectl
to create it in
Kubernetes.
kubectl create -f sample-input/storage-class-basic-csi.yaml
You should now see a basic storage class in both Kubernetes and Trident, and Trident should have discovered the pools on the backend.
kubectl get sc basic-csi
NAME PROVISIONER AGE
basic-csi csi.trident.netapp.io 15h
./tridentctl -n trident get storageclass basic-csi -o json
{
"items": [
{
"Config": {
"version": "1",
"name": "basic-csi",
"attributes": {
"backendType": "ontap-nas"
},
"storagePools": null,
"additionalStoragePools": null
},
"storage": {
"ontapnas_10.0.0.1": [
"aggr1",
"aggr2",
"aggr3",
"aggr4"
]
}
}
]
}
3: Provision your first volume¶
Now you’re ready to dynamically provision your first volume. How exciting! This is done by creating a Kubernetes persistent volume claim (PVC) object, and this is exactly how your users will do it too.
Create a persistent volume claim (PVC) for a volume that uses the storage class that you just created.
See sample-input/pvc-basic-csi.yaml for an example. Make sure the storage class name matches the one that you created in the previous step.
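The example claim is along these lines (a sketch; the name, size, and access mode match the kubectl get pvc output that follows):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: basic
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: basic-csi
```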
kubectl create -f sample-input/pvc-basic-csi.yaml
kubectl get pvc --watch
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
basic Pending basic 1s
basic Pending pvc-3acb0d1c-b1ae-11e9-8d9f-5254004dfdb7 0 basic 5s
basic Bound pvc-3acb0d1c-b1ae-11e9-8d9f-5254004dfdb7 1Gi RWO basic 7s
4: Mount the volume in a pod¶
Now that you have a volume, let’s mount it. We’ll launch an nginx pod that
mounts the PV under /usr/share/nginx/html
.
cat << EOF > task-pv-pod.yaml
kind: Pod
apiVersion: v1
metadata:
name: task-pv-pod
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: basic
containers:
- name: task-pv-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: task-pv-storage
EOF
kubectl create -f task-pv-pod.yaml
# Wait for the pod to start
kubectl get pod --watch
# Verify that the volume is mounted on /usr/share/nginx/html
kubectl exec -it task-pv-pod -- df -h /usr/share/nginx/html
Filesystem Size Used Avail Use% Mounted on
10.xx.xx.xx:/trident_pvc_3acb0d1c_b1ae_11e9_8d9f_5254004dfdb7 1.0G 256K 1.0G 1% /usr/share/nginx/html
# Delete the pod
kubectl delete pod task-pv-pod
At this point the pod (application) no longer exists but the volume is still there. You could use it from another pod if you wanted to.
To delete the volume, simply delete the claim:
kubectl delete pvc basic
Where do you go from here? You can do things like:
- Configure additional backends.
- Model additional storage classes.
- Review considerations for moving this into production.