This page contains a list of known issues that may be observed when using Trident.
The Trident operator released with v21.01.0 contains an issue that has been identified with OpenShift Container Platform (OCP 4.x). Installations that are impacted by this issue will report this error:
`no kind "ClusterRole" is registered for version "authorization.openshift.io/v1" in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"`
This issue does not affect the normal functioning of Trident, only the Trident operator; it prevents changes made to the `TridentOrchestrator` custom resource (CR) after an install or upgrade from taking effect. This has been fixed with v21.01.1. Users are advised to bypass v21.01.0 and upgrade directly to v21.01.1 or later.
A previously identified issue where iSCSI target portals with a negative group tag were ignored by Trident has been fixed with v21.01.1. This issue may be observed when `iscsiadm` reports target portals with a negative group tag, such as `-1`, as shown in the example below:

```
# List iSCSI node records
$ iscsiadm -m node
192.168.0.134:3260,-1 iqn.1992-08.com.netapp:sn.6cffffffffffffffff5056b03185:vs.3
```
Upgrading to v21.01.1 or later will fix this issue.
- When using Trident v20.07.1 for non-CSI deployments (Kubernetes 1.13), users may observe existing igroups being cleared of all existing IQNs. This can only occur when Trident initializes a backend or restarts after being scheduled on a new node. This has been fixed in the v20.10 release of Trident; users are advised to upgrade to v20.10 or later.
- An upstream Kubernetes bug that could be encountered when rapidly attaching and detaching volumes has been fixed with Trident v20.07.1. Trident v20.07.1 uses the `2.0.1` release of the CSI `external-provisioner` sidecar for all Kubernetes clusters running `1.17` and above. If you are using Kubernetes `1.16` or below, you must upgrade your Kubernetes cluster to `1.17` or above for Trident v20.07.1 to fix this issue.
- A previously identified issue with updating the storage prefix for a Trident backend has been resolved. Users can work with backends that use an empty storage prefix (`""`).
- With Trident v20.07.1 using the v2.0.1 release of the CSI external-provisioner sidecar, Trident now enforces a blank `fsType` (`fsType=""`) for volumes that do not specify an `fsType` in their StorageClass. When working with Kubernetes 1.17 or above, upgrading to Trident 20.07.1 enables users to provide a blank `fsType` for NFS volumes. For iSCSI volumes, if you are enforcing an `fsGroup` using a Security Context, you must set the `fsType` in your StorageClass.
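As an illustrative sketch (the class name and backend type below are assumptions, not taken from this page), an iSCSI StorageClass that sets `fsType` explicitly might look like:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: iscsi-ext4              # hypothetical name
provisioner: csi.trident.netapp.io
parameters:
  backendType: ontap-san        # assumed iSCSI backend type
  fsType: "ext4"                # required when enforcing fsGroup via a Security Context
```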
- Trident has continually improved resiliency for iSCSI volumes. The v20.07.0 release implements multiple additional checks that prevent accidental deletions from occurring.
- Trident may create multiple unintended secrets when it attempts to patch Trident service accounts. This bug has been fixed with v20.07.1.
- When installing Trident (using `tridentctl` or the Trident operator) and using `tridentctl` to manage Trident, you must ensure the `KUBECONFIG` environment variable is set. This is necessary to indicate the Kubernetes cluster that `tridentctl` must work against. When working with multiple Kubernetes environments, take care to ensure the correct KUBECONFIG file is sourced.
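A minimal sketch of the setup, assuming a kubeconfig at the default path (the path and namespace below are assumptions; substitute your own):

```shell
# tridentctl acts on whichever cluster KUBECONFIG points to.
# "$HOME/.kube/config" is an assumed path; use your own kubeconfig file.
export KUBECONFIG="$HOME/.kube/config"

# With KUBECONFIG set, tridentctl commands target that cluster, e.g.:
#   tridentctl version -n trident
```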
- To perform online space reclamation for iSCSI PVs, the underlying OS on the worker node may require mount options to be passed to the volume. This is true for RHEL/RedHat CoreOS instances, which require the `discard` mount option; ensure the `discard` mountOption is included in your StorageClass to support online block discard.
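For example, a StorageClass carrying the `discard` mount option could be sketched as follows (the class name and backend type are illustrative assumptions):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: iscsi-discard           # hypothetical name
provisioner: csi.trident.netapp.io
parameters:
  backendType: ontap-san        # assumed backend type
mountOptions:
  - discard                     # enables online block discard on RHEL/RHCOS workers
```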
- Although we provide a deployment for Trident, it should never be scaled beyond a single replica. Similarly, only one instance of Trident should be run per Kubernetes cluster. Trident cannot communicate with other instances and cannot discover other volumes that they have created, which will lead to unexpected and incorrect behavior if more than one instance runs within a cluster.
- If Trident-based `StorageClass` objects are deleted from Kubernetes while Trident is offline, Trident will not remove the corresponding storage classes from its database when it comes back online. Any such storage classes must be deleted manually using `tridentctl` or the REST API.
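A hedged sketch of the manual cleanup, assuming a stale storage class named `basic` and Trident installed in the `trident` namespace (both placeholder values):

```
# Remove the stale storage class from Trident's database
tridentctl delete storageclass basic -n trident
```

The same cleanup can be performed through Trident's REST API.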
- If a user deletes a PV provisioned by Trident before deleting the corresponding PVC, Trident will not automatically delete the backing volume. In this case, the user must remove the volume manually via `tridentctl` or the REST API.
- When using a backend across multiple Trident instances, it is recommended that each backend configuration file specify a different `storagePrefix` value for ONTAP backends or use a different `TenantName` for Element backends. Trident cannot detect volumes that other instances of Trident have created, and attempting to create an existing volume on either ONTAP or Element backends succeeds because Trident treats volume creation as an idempotent operation. Thus, if the `storagePrefix` or `TenantName` does not differ, there is a very slim chance of name collisions for volumes created on the same backend.
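As a sketch of how to keep prefixes distinct, an ONTAP backend definition for one Trident instance might look like the following (all LIF, SVM, and credential values are placeholders):

```json
{
  "version": 1,
  "storageDriverName": "ontap-nas",
  "managementLIF": "10.0.0.1",
  "dataLIF": "10.0.0.2",
  "svm": "svm0",
  "username": "admin",
  "password": "password",
  "storagePrefix": "clusterA_"
}
```

A second Trident instance sharing the same backend would then use a different value, for example `"storagePrefix": "clusterB_"`.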
- ONTAP cannot concurrently provision more than one FlexGroup at a time unless the set of aggregates is unique to each provisioning request.
- When using Trident over IPv6, the `dataLIF` in the backend definition must be specified within square brackets, for example an address written as `[2001:db8::10]` (an illustrative address from the IPv6 documentation range).
- If using the `solidfire-san` driver with OpenShift 4.5, make sure the underlying worker nodes use MD5 as the CHAP authentication algorithm. Refer to Worker node preparation for instructions.