Known issues
This page contains a list of known issues that may be observed when using Trident.
- When using Trident v20.07.1 for non-CSI deployments (Kubernetes 1.11–1.13), users may observe existing igroups being cleared of all existing IQNs. This can only be observed when Trident initializes a backend or restarts after being scheduled on a new node. This has been fixed in the v20.10 release of Trident; users are recommended to upgrade to v20.10.
- An upstream Kubernetes bug that could be encountered when rapidly attaching/detaching volumes has been fixed with Trident v20.07.1 and the v2.0.1 release of the CSI external-provisioner sidecar. Trident v20.07.1 uses the 2.0.1 release of the external-provisioner sidecar for all Kubernetes clusters running 1.17 and above. If using Kubernetes 1.16 or below, you must upgrade your Kubernetes cluster to 1.17 or above for Trident v20.07.1 to fix this issue.
- A previously identified issue with updating the storage prefix for a Trident backend has been resolved with Trident v20.07.1. Users can work with backends that use an empty storage prefix (`""`) or one that includes `"-"`.
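As an illustration of the empty storage prefix now supported, an ONTAP backend file might look like the following minimal sketch (all values are placeholders, not a definitive configuration):

```json
{
  "version": 1,
  "storageDriverName": "ontap-nas",
  "managementLIF": "10.0.0.1",
  "dataLIF": "10.0.0.2",
  "svm": "svm_nfs",
  "username": "admin",
  "password": "secret",
  "storagePrefix": ""
}
```

With `storagePrefix` set to `""`, provisioned volume names carry no added prefix.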
- With Trident v20.07.1 using the v2.0.1 release of the CSI external-provisioner sidecar, Trident now enforces a blank fsType (`fsType=""`) for volumes that don't specify the `fsType` in their StorageClass. When working with Kubernetes 1.17 or above, upgrading to Trident 20.07.1 enables users to provide a blank `fsType` for NFS volumes. For iSCSI volumes, if you are enforcing an `fsGroup` using a Security Context, you are required to set the `fsType` in your StorageClass.
- Trident has continually improved the resiliency of iSCSI volumes. The v20.07.0 release implements multiple additional checks that prevent accidental deletions from occurring.
- Trident may create multiple unintended secrets when it attempts to patch Trident service accounts. This bug is fixed in v20.07.1.
- When installing Trident (using `tridentctl` or the Trident Operator) and using `tridentctl` to manage Trident, you must ensure the `KUBECONFIG` environment variable is set. This is necessary to indicate the Kubernetes cluster that `tridentctl` must work against. When working with multiple Kubernetes environments, take care to ensure the correct KUBECONFIG file is sourced.
- To perform online space reclamation for iSCSI PVs, the underlying OS on the worker node may require mount options to be passed to the volume. This is true for RHEL/Red Hat CoreOS instances, which require the `discard` mount option; ensure the `discard` mountOption is included in your StorageClass to support online block discard.
- Although we provide a deployment for Trident, it should never be scaled beyond a single replica. Similarly, only one instance of Trident should be run per Kubernetes cluster. Trident cannot communicate with other instances and cannot discover volumes that they have created, which will lead to unexpected and incorrect behavior if more than one instance runs within a cluster.
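The `discard` mount option mentioned above can be supplied through the StorageClass `mountOptions` field, as in this sketch (the class name and backend type are illustrative assumptions):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: trident-iscsi-discard
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-san"
mountOptions:
  # Enables online block discard for space reclamation
  - discard
```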
- If Trident-based `StorageClass` objects are deleted from Kubernetes while Trident is offline, Trident will not remove the corresponding storage classes from its database when it comes back online. Any such storage classes must be deleted manually using `tridentctl` or the REST API.
- If a user deletes a PV provisioned by Trident before deleting the corresponding PVC, Trident will not automatically delete the backing volume. In this case, the user must remove the volume manually via `tridentctl` or the REST API.
- When using a backend across multiple Trident instances, it is recommended that each backend configuration file specify a different `storagePrefix` value for ONTAP backends or a different `TenantName` for Element backends. Trident cannot detect volumes that other instances of Trident have created, and attempting to create an existing volume on either ONTAP or Element backends succeeds because Trident treats volume creation as an idempotent operation. Thus, if the `storagePrefix` or `TenantName` does not differ, there is a slim chance of name collisions for volumes created on the same backend.
- ONTAP cannot concurrently provision more than one FlexGroup at a time unless the set of aggregates is unique to each provisioning request.
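As a sketch of the `storagePrefix` recommendation above, two Trident instances sharing the same ONTAP SVM could isolate their volumes with distinct prefixes; the values below are placeholder assumptions:

```json
{
  "version": 1,
  "storageDriverName": "ontap-nas",
  "managementLIF": "10.0.0.1",
  "svm": "svm_nfs",
  "username": "admin",
  "password": "secret",
  "storagePrefix": "clusterA_"
}
```

A second instance pointing at the same SVM would use a different value, e.g. `"storagePrefix": "clusterB_"`, so that neither instance can collide with volumes the other created.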
- When using Trident over IPv6, the `managementLIF` and `dataLIF` in the backend definition must be specified within square brackets, like `[fd20:8b1e:b258:2000:f816:3eff:feec:0]`.
- If using the `solidfire-san` driver with OpenShift 4.5, make sure the underlying worker nodes use MD5 as the CHAP authentication algorithm. Refer to Worker node Preparation for instructions.
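The IPv6 bracket notation described above might appear in a backend file as in this sketch (addresses and credentials are placeholder assumptions):

```json
{
  "version": 1,
  "storageDriverName": "ontap-nas",
  "managementLIF": "[fd20:8b1e:b258:2000:f816:3eff:feec:0]",
  "dataLIF": "[fd20:8b1e:b258:2000:f816:3eff:feec:1]",
  "svm": "svm_nfs",
  "username": "admin",
  "password": "secret"
}
```

Omitting the square brackets around an IPv6 address will cause the backend definition to be parsed incorrectly.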