Preparing a BYOK8s environment for Workbench#
Determining the resource requirements for a Kubernetes cluster depends on a number of factors, including the types of applications you plan to run, the number of concurrently active users, and the workloads you will be managing within the cluster. Data Science & AI Workbench’s performance is tightly coupled with the health of your Kubernetes stack, so it is important to allocate enough resources to manage your users’ workloads.
Anaconda’s hardware recommendations ensure a reliable and performant Kubernetes cluster. However, most requirements are likely to be superseded by the requirements imposed by your existing Kubernetes cluster, whether that is an on-premise cluster that is configured to support multiple tenants or a cloud offering.
To install Workbench successfully, your systems must meet or exceed the requirements listed below. Anaconda has created a pre-installation checklist to help prepare you for installation. The checklist helps you verify that your cluster is ready to install Workbench, and that the necessary resources are reserved. Anaconda’s Implementation team will review the checklist with you prior to your installation.
Supported Kubernetes Versions#
Workbench is compatible with Kubernetes API versions 1.15-1.28. If your version of Kubernetes serves the API at one of these versions, you can install Workbench.
Workbench has been successfully installed on the following Kubernetes variants:
Vanilla Kubernetes
VMware Tanzu
Red Hat OpenShift
Google Anthos
Amazon Elastic Kubernetes Service (EKS)
Microsoft Azure Kubernetes Service (AKS)
Google Kubernetes Engine (GKE)
Red Hat OpenShift on AWS (ROSA)
Aside from the basic requirements listed on this page, Anaconda also offers environment-specific recommendations for you to consider.
Administration Server#
Installation requires a machine with direct access to the target Kubernetes cluster and Docker registry. Anaconda refers to this machine as the Administration Server. Anaconda recommends that you identify a machine to be your Administration Server that will remain available for ongoing management of the application once installed. It is useful for this server to be able to mount the storage volumes as well.
The following software must be installed on the Administration Server:
Helm version 3.2+
The Kubernetes CLI tool, kubectl
(OpenShift only) The OpenShift oc CLI tool
(Optional) The watch command line tool
(Optional) The jq command line tool
Administration Server setup
You can obtain all of the tools you need for your Administration Server by installing the ae5-conda environment. This environment already contains helm, kubectl, oc, jq, and a number of other useful Workbench management utilities. To install the environment:
If necessary, move the installer (ae5-conda-latest-Linux-x86_64.sh) to the Administration Server.
Open a terminal shell and install the environment by running the following command:
bash ae5-conda-latest-Linux-x86_64.sh
Follow the prompts, then restart your terminal.
(Optional) Add the environment to your PATH, as shown in the example below.
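For example, if the installer created the environment in ~/ae5-conda (the default prefix is an assumption; use whatever path you chose during installation), you could add its tools to your PATH like this:
# Append to ~/.bashrc (or your shell's equivalent) to make this permanent
export PATH="$HOME/ae5-conda/bin:$PATH"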
CPU, memory, and nodes#
Minimum node size: 8 CPU cores, 32GB RAM
Recommended node size: 16 CPU cores, 64GB RAM (or more)
Recommended oversubscription (limits/requests ratio): 4:1
Minimum number of worker nodes: 3
Minimally sized nodes should be reserved for test environments with low user counts and small workloads. Any development or production environment should meet or exceed the recommended requirements.
Workbench utilizes node labels, taints, and tolerations to ensure that workloads run on the appropriate nodes. Anaconda recommends identifying any necessary affinity settings (which nodes a workload runs on, based on the labels applied to them) and tolerations prior to installation.
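For example, to dedicate a node to Workbench user workloads, you might label and taint it as follows. The label key and taint are illustrative only; use values that match your cluster's conventions and the tolerations/affinity settings you later supply in the Helm chart.
# Label a node so affinity rules can target it
kubectl label nodes <NODE_NAME> workbench/node-role=user
# Taint the node so only pods with a matching toleration are scheduled onto it
kubectl taint nodes <NODE_NAME> workbench/dedicated=user:NoSchedule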
Resource profiles#
Resource profiles available to platform users for their sessions and deployments are created by the cluster administrator. Each resource profile can be customized for the amount of CPU, memory, and (optionally) GPU resources available. Anaconda recommends determining what resource profiles you will require prior to installation. For more information, see Configuring workload resource profiles.
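Purely as an illustration of the information to gather for each profile (the key names below are hypothetical, not the real schema; see Configuring workload resource profiles for the actual format):
# Hypothetical sketch only -- consult the resource profile documentation
resourceProfiles:
  - name: default
    cpu: 1          # CPU cores per session or deployment
    memory: 4Gi
  - name: large-gpu
    cpu: 8
    memory: 32Gi
    gpu: 1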
Namespace, service account, RBAC#
Workbench should be installed in a namespace that is not occupied by any other applications, including other instances of Workbench. Create a service account in the namespace with sufficient permissions to complete the Helm installation and to enable the dynamic resource provisioning Workbench performs during normal operation, as shown in the example below.
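For example, using the placeholder names that appear throughout this page:
# Replace <NAMESPACE> and <SERVICE_ACCOUNT> with your chosen names
kubectl create namespace <NAMESPACE>
kubectl create serviceaccount <SERVICE_ACCOUNT> -n <NAMESPACE>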
Workbench requires more permissions than would normally be given to an application that only requires read-only access to the Kubernetes API. However, with the exception of the ingress controller, all necessary permission grants are limited to the application namespace. Please speak with the Anaconda Implementation team about any questions you may have regarding these permissions.
RBAC template
The following Role and RoleBinding pair can be used to grant sufficient permissions to the service account.
# Replace <SERVICE_ACCOUNT> with the name of your service account
# Replace <NAMESPACE> with the namespace reserved for Workbench
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: <SERVICE_ACCOUNT>
  namespace: <NAMESPACE>
rules:
  - verbs:
      - get
      - list
    apiGroups:
      - ''
    resources:
      - namespaces
      - pods/log
      - events
  - verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
    apiGroups:
      - ''
    resources:
      - configmaps
      - secrets
      - pods
      - persistentvolumeclaims
      - endpoints
      - services
  - verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
    apiGroups:
      - apps
    resources:
      - deployments
      - replicasets
      - statefulsets
  - verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
    apiGroups:
      - batch
    resources:
      - jobs
      - cronjobs
  - verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
    apiGroups:
      - extensions
    resources:
      - deployments
      - replicasets
  - verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
    apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
  - verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
    apiGroups:
      - route.openshift.io
    resources:
      - routes
      - routes/custom-host
  - verbs:
      - get
      - list
    apiGroups:
      - ''
    resources:
      - serviceaccounts
      - roles
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <SERVICE_ACCOUNT>
  namespace: <NAMESPACE>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: <SERVICE_ACCOUNT>
subjects:
  - kind: ServiceAccount
    name: <SERVICE_ACCOUNT>
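To create these resources, save the manifest to a file (the file name below is illustrative) and apply it. The namespace is embedded in the manifest, so no -n flag is required on apply:
kubectl apply -f workbench-rbac.yaml
# Confirm the Role and RoleBinding exist in the Workbench namespace
kubectl get role,rolebinding -n <NAMESPACE>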
Note
Recent versions of OpenShift no longer allow granting direct access to the anyuid Security Context Constraint (SCC), or any other default SCC. Instead, access grants are defined within the role.
Example anyuid SCC configuration
- verbs:
    - use
  apiGroups:
    - security.openshift.io
  resources:
    - securitycontextconstraints
  resourceNames:
    - anyuid
If you wish to use the Anaconda-supplied ingress, it is also necessary to grant a small number of additional, cluster-wide permissions. This is because the ingress controller expects to be able to monitor ingress-related resources across all namespaces.
Ingress controller permissions
The following is a minimal ClusterRole and ClusterRoleBinding pair that grants the ingress controller sufficient permissions to run without warnings:
# Replace <SERVICE_ACCOUNT> with the name of your ingress service account
# Replace <NAMESPACE> with the namespace reserved for Workbench
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <SERVICE_ACCOUNT>-ingress
rules:
  - verbs:
      - '*'
    apiGroups:
      - '*'
    resources:
      - ingressclasses
  - verbs:
      - patch
    apiGroups:
      - '*'
    resources:
      - events
  - verbs:
      - list
      - watch
    apiGroups:
      - '*'
    resources:
      - secrets
      - endpoints
      - endpointslices
      - ingresses
      - services
      - pods
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <SERVICE_ACCOUNT>-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <SERVICE_ACCOUNT>-ingress
subjects:
  - kind: ServiceAccount
    name: <SERVICE_ACCOUNT>
    namespace: <NAMESPACE>
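After applying the manifest, one way to spot-check the cluster-wide grants (a sketch; substitute your actual names) is kubectl auth can-i with service account impersonation:
# Should print "yes" if the ClusterRoleBinding is in effect
kubectl auth can-i watch ingresses --all-namespaces \
  --as=system:serviceaccount:<NAMESPACE>:<SERVICE_ACCOUNT>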
Caution
Please review these RBAC configurations with your Kubernetes administrator. While it is possible to further reduce these scopes, doing so is likely to prevent normal operation of Workbench.
Security#
Preparing your Kubernetes cluster to install Workbench involves configuring your environment in a way that both supports the functionality of Workbench and adheres to security best practices.
Workbench containers can be run using any fixed, non-zero UID, making the application compatible with an OpenShift Container Platform (OCP) restricted SCC, or an equivalent non-permissive Kubernetes security context. This reduces the risk to your systems if the container is compromised.
However, to enable the Authenticated Network File System (NFS) capability, which allows user containers to access external, authenticated fileshares (storage servers), user pods must be permitted to run as root (UID 0).
Note
This configuration runs containers in a privileged state to determine and assign the authenticated group memberships of the user running the container only. Once authentication is complete, the container drops down to a non-privileged state for all further execution.
Please speak with your Anaconda Implementation team for more information, and to see if it is possible for your application to avoid this requirement.
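As an illustration of the standard (non-root) configuration described above, a pod security context of roughly this shape is sufficient when authenticated NFS is not required; the UID shown is an arbitrary example:
# Minimal sketch of a non-permissive pod securityContext
securityContext:
  runAsUser: 1000     # any fixed, non-zero UID
  runAsNonRoot: true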
Storage#
A standard installation of Workbench requires two Persistent Volume Claims (PVCs) to be statically provisioned and bound prior to installation. Anaconda strongly recommends a premium performance tier for provisioning your PVCs if the option is available. Expand the following sections for more information about the necessary volumes:
anaconda-storage
This volume contains:
Anaconda’s internal Postgres control database
Anaconda’s internal Git storage mechanism
Anaconda’s internal conda package repository
Note
If you are hosting conda packages outside of Workbench, your anaconda-storage volume must be at least 100GiB. However, if you intend to mirror conda packages into the Workbench repository, the anaconda-storage volume will need to be much larger to accommodate those packages. Anaconda recommends at least 500GiB of storage.
The anaconda-storage volume must support either the ReadWriteOnce or ReadWriteMany access mode.
Caution
The ReadWriteOnce configuration requires that the postgres, git-storage, and object-storage pods run on the same node.
anaconda-persistence
This volume hosts Anaconda’s managed persistence storage. It contains:
Custom sample projects
Custom conda environments
User code and data
The anaconda-persistence volume requires ReadWriteMany access.
Caution
The demands on the anaconda-persistence volume grow continuously with usage. Anaconda recommends that you provision at least 1TiB of storage to start.
It is possible to combine these into a single PersistentVolumeClaim that covers both needs, as long as that single volume simultaneously meets the performance requirements of both.
The root directories of these storage volumes must be writable by Workbench containers. This can be accomplished by configuring the volumes to be group writable by a single numeric group ID (GID). Anaconda strongly recommends that this GID be 0, the default GID assigned to Kubernetes containers. If this is not possible, supply the GID within the Persistent Volume specification as a pv.beta.kubernetes.io/gid annotation, as illustrated in the sketch below.
Caution
To ensure that the data on these volumes is not lost if Workbench is uninstalled, do not change the ReclaimPolicy from its default value of Retain.
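The following is a minimal sketch of a statically provisioned volume and claim pair for anaconda-persistence, assuming an NFS backend; the server address, path, and size are placeholders, and the GID annotation is only needed if GID 0 is not possible:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: anaconda-persistence
  annotations:
    # Only needed when the volume cannot be group writable by GID 0
    pv.beta.kubernetes.io/gid: "1001"
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain  # keep data if Workbench is uninstalled
  nfs:
    server: nfs.example.com    # placeholder
    path: /data/persistence    # placeholder
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: anaconda-persistence
  namespace: <NAMESPACE>
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # bind statically to the volume above
  volumeName: anaconda-persistence
  resources:
    requests:
      storage: 1Ti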
Ingress and firewall#
Workbench is compatible with most ingress controllers that are commonly used with Kubernetes clusters. Because ingress controllers are a cluster-wide resource, Anaconda recommends that the controller be installed and configured prior to installing Workbench. For example, if your Kubernetes version falls within 1.19-1.26, any ingress controller with full support for the networking.k8s.io/v1 ingress API enables Workbench to build endpoints for user sessions and deployments.
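To see which ingress controllers are already registered on your cluster, along with the className value you will later supply in the Helm chart, list the IngressClass resources:
kubectl get ingressclasses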
Note
If your cluster is fully dedicated to Workbench, you can configure the Helm chart to install a version of the NGINX Ingress controller, which is compatible with multiple variants of Kubernetes, including OpenShift. Anaconda’s only modification to the stock NGINX container enables it to run without root privileges.
Your cluster configuration and firewall settings must allow all TCP traffic between nodes, particularly HTTP, HTTPS, and the standard Postgres ports.
Caution
Even otherwise healthy clusters can have firewall rules that block inter-node communication, which disrupts the pods that Workbench requires to provision user workloads.
External traffic to Workbench is funneled entirely through the ingress controller, over the standard HTTPS port 443.
DNS/SSL#
Workbench requires the following:
A valid, fully qualified domain name (FQDN) reserved for Workbench.
A DNS record for the FQDN, as well as a wildcard DNS record for its subdomains.
Note
Both records must point to the IP address allocated by the ingress controller. If you are using an existing ingress controller, you may be able to obtain this address prior to installation. Otherwise, you must populate the DNS records with the address after the initial installation is complete.
A valid wildcard SSL certificate covering the cluster FQDN and its subdomains. Installation requires both the public certificate and the private key (a verification sketch follows the warning below).
Note
If the certificate chain includes an intermediate certificate, the public certificate for the intermediate is required. The wildcard only needs to cover *.anaconda.company.com, ensuring that all subdomains under this specific domain are included in the SSL certificate and DNS configuration.
The public root certificate, if the above certificates were created with a private Certificate Authority (CA).
Warning
Wildcard DNS records and SSL certificates are required for correct operation of Workbench. If you have any objections to this requirement, speak with your Anaconda Implementation team.
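As a quick pre-installation check, a sketch assuming the FQDN anaconda.example.com and certificate files named fullchain.pem and privkey.pem:
# Both records should resolve to the ingress controller's IP
dig +short anaconda.example.com
dig +short test.anaconda.example.com
# The Subject Alternative Name list should include *.anaconda.example.com
openssl x509 -in fullchain.pem -noout -text | grep -A1 'Subject Alternative Name'
# For RSA keys, the certificate and key moduli should match
openssl x509 -in fullchain.pem -noout -modulus | openssl md5
openssl rsa -in privkey.pem -noout -modulus | openssl md5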
Docker images#
Anaconda strongly recommends that you copy the Workbench Docker images from our authenticated source repository on Docker Hub into your internal Docker registry. This ensures their availability even if there is an interruption in connectivity to Docker Hub. The registry must be accessible from the Kubernetes cluster where Workbench is installed; a sketch of the copy process follows the image list below.
Caution
In an airgapped setting, this is required.
Docker images from the aedev/ Docker Hub channel
Anaconda provides a more precise manifest, including version numbers and the credentials required to pull these images from our authenticated repository, prior to installation.
ae-app-proxy
ae-auth
ae-auth-api
ae-auth-escrow
ae-deploy
ae-docs
ae-editor
ae-git-proxy
ae-git-storage
ae-object-storage
ae-operation-controller
ae-operation-create-project
ae-repository
ae-storage
ae-sync
ae-ui
ae-workspace
ae-wagonwheel
ae-auth-keycloak
ae-nginx-ingress-v1
ae-nginx-ingress-v1beta1
postgres:16.2
Note
Docker images used by Workbench are larger than many Kubernetes administrators are accustomed to. For more background, see Docker image sizes.
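A minimal sketch of the copy operation for a single image, assuming a hypothetical internal registry at registry.internal.example.com and a tag taken from the Anaconda-provided manifest; repeat (or script a loop) for every image in the list above:
# Authenticate to both registries first with docker login
docker pull aedev/ae-workspace:<TAG>
docker tag aedev/ae-workspace:<TAG> registry.internal.example.com/aedev/ae-workspace:<TAG>
docker push registry.internal.example.com/aedev/ae-workspace:<TAG>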
GPU Information#
This release of Workbench supports up to Compute Unified Device Architecture (CUDA) 11.6 Data Center drivers.
Anaconda has directly tested the application with the following GPU cards:
Tesla V100 (recommended)
Tesla P100 (adequate)
Theoretically, Workbench works with any GPU card compatible with the CUDA drivers, as long as they are properly installed. Other cards supported by CUDA 11.6:
A-Series: NVIDIA A100, NVIDIA A40, NVIDIA A30, NVIDIA A10
RTX-Series: RTX 8000, RTX 6000, NVIDIA RTX A6000, NVIDIA RTX A5000, NVIDIA RTX A4000, NVIDIA T1000, NVIDIA T600, NVIDIA T400
HGX-Series: HGX A100, HGX-2
T-Series: Tesla T4
P-Series: Tesla P40, Tesla P6, Tesla P4
K-Series: Tesla K80, Tesla K520, Tesla K40c, Tesla K40m, Tesla K40s, Tesla K40st, Tesla K40t, Tesla K20Xm, Tesla K20m, Tesla K20s, Tesla K20c, Tesla K10, Tesla K8
M-Class: M60, M40 24GB, M40, M6, M4
Support for GPUs in Kubernetes is still a work in progress, and each cloud vendor provides different recommendations. For more information about GPUs, see Understanding GPUs.
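Once your cluster's NVIDIA device plugin is installed, one way to confirm that Kubernetes can see the GPUs (a sketch; the column expression assumes the standard nvidia.com/gpu resource name):
kubectl get nodes -o custom-columns='NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'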
Helm chart#
Workbench includes a values.yaml override file with the Helm chart. Make additions and modifications to this file to reflect your cluster configuration now, then complete the helm install or upgrade process to apply your overrides to the default chart (an installation sketch follows the template below).
Helm values.yaml template
# The fully qualified domain name (FQDN) of the cluster
hostname: anaconda.example.com

# The name of the service account with the necessary
# permission to operate the cluster
serviceAccountName: anaconda-enterprise

# The UID under which to run the containers
uid: 1000

# Docker registry information
image:
  # Server address. This default points to Docker Hub
  server: 'aedev/'
  # A pull secret name, or list of names
  pullSecrets:

# TLS / SSL secret management
# - generate: generate self-signed certificates
# - load: use the certificates in Anaconda-Enterprise/certs
# - skip: do nothing; assume the secrets already exist
# Existing secrets are always preserved during upgrades.
generateCerts: generate

# Keycloak LDAPS Settings
# truststore: path to your truststore file containing custom CA cert
# truststore_password: password of the truststore
# truststore_secret: name of secret used for the truststore such as anaconda-enterprise-truststore
keycloak:
  truststore: ""
  truststore_password: ""
  truststore_secret: ""

ingress:
  # If an existing ingress controller is being used, this
  # must match the ingress.className of that controller.
  className: nginx
  # - false: an existing ingress controller will be used
  # - true: install an ingress controller in this namespace
  install: false
  # - false: an existing IngressClass resource will be used
  # - true: create a new IngressClass in the global namespace
  # Ignored if ingress.install is false. Note that installing
  # the IngressClass requires cluster-level permissions.
  installClass: false
  # If your ingress controller requires custom labels to be
  # added to ingress entries, list them here as a dictionary
  # of key/value pairs.
  labels: {}
  # If your ingress requires custom annotations to be added
  # to ingress entries, they can be included here. These
  # will be added to any existing annotations in the chart.
  # For all ingress entries
  global: {}
  # For the master ingress only
  system: {}
  # For sessions and deployments only
  user: {}

# As discussed in the documentation, you may use the same
# persistent volume for both storage resources. If so, make
# sure to use the same pvc: value in both locations.
storage:
  pvc: anaconda-storage
persistence:
  pvc: anaconda-persistence

# OPENSHIFT ONLY: uncomment this value for OCP platforms.
# dnsServer: dns-default.openshift-dns.svc.cluster.local

# TOLERATIONS / AFFINITY
# Please work with the Anaconda team for assistance
# to configure these settings if you need them.
tolerations:
  # For all pods
  global: []
  # For system pods, except the ingress
  system: []
  # For the ingress daemonset alone
  ingress: []
  # For user pods
  user: []
affinity:
  # For all pods
  global: {}
  # For system pods, except the ingress
  system: {}
  # For the ingress daemonset alone
  ingress: {}
  # For user pods
  user: {}
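With values.yaml completed, installation follows the standard Helm pattern. A sketch, in which the release name and chart path are illustrative and should match what your Anaconda Implementation team provides:
helm install anaconda-enterprise ./anaconda-enterprise \
  --namespace <NAMESPACE> \
  --values values.yaml
Subsequent configuration changes can be applied with helm upgrade using the same --values flag.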
Pre-installation checklist#
Anaconda has created this pre-installation checklist to help you verify that you have properly prepared your environment prior to installation.
Within this checklist, Anaconda provides commands or command templates for you to run to verify a given requirement, along with typical output to give you an idea of the kind of information you should see. Run each of these commands (modified as appropriate for your environment) and copy the outputs into a document. Send this document to your Anaconda Implementation team so that they can verify your environment is ready before you begin the installation process.
BYOK8s pre-installation checklist
Verify that your administration server has been provisioned with appropriate versions of kubectl, helm, and other tools needed to perform installation and administration tasks by running the following command:
helm version
Here is an example response from the command:
version.BuildInfo{Version:"v3.7.1", GitCommit:"1d11fcb5d3f3bf00dbe6fe31b8412839a96b3dc4", GitTreeState:"clean", GoVersion:"go1.16.9"}
Verify that the API version of the Kubernetes cluster is between 1.15 and 1.28 by running the following command:
kubectl version
Here is an example response from the command:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.12", GitCommit:"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725", GitTreeState:"clean", BuildDate:"2020-05-06T05:17:59Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.12", GitCommit:"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725", GitTreeState:"clean", BuildDate:"2020-05-06T05:09:48Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}
Verify all nodes on which Workbench will be installed have sufficient CPU and memory allocations by running the following command:
kubectl get nodes -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.status.capacity.cpu}{'\t'}{.status.capacity.memory}{'\n'}{end}"
Here is an example response from the command:
10.234.2.18    16    65806876Ki
10.234.2.19    16    65806876Ki
10.234.2.20    16    65806876Ki
10.234.2.21    16    65806876Ki
10.234.2.6     16    65974812Ki
Verify that the namespace where Workbench will be installed has been created by running the following command:
# Replace <NAMESPACE> with the namespace you've reserved for Workbench
kubectl describe namespace <NAMESPACE>
Here is an example response from the command:
Name:         default
Labels:       <none>
Annotations:  <none>
Status:       Active
No resource quota.
No resource limits.
Verify the service account that will be used by Workbench during installation and operation has been created by running the following command:
# Replace <SERVICE_ACCOUNT> with the name of the service account you've created for Workbench
kubectl describe sa <SERVICE_ACCOUNT>
Here is an example response from the command:
Name:                anaconda-enterprise
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   anaconda-enterprise-token-cdmnf
Tokens:              anaconda-enterprise-token-cdmnf
Events:              <none>
(OpenShift only) Verify the Security Context Constraint (SCC) associated with the service account contains all of the necessary permissions by running the following command:
oc describe scc <SCC_NAME>
Here is an example response from the command:
Name:      anyuid
Priority:  10
Access:
  Users:   <none>
  Groups:  system:cluster-admins
Note
This example uses the anyuid SCC; however, the restricted SCC can also be used, as long as the uid range is known.
Verify the ClusterRole resource associated with the service account has the necessary permissions to facilitate installation and operation by running the following command:
# Replace <SERVICE_ACCOUNT> with the name of the service account you've created for Workbench
kubectl describe clusterrole <SERVICE_ACCOUNT>-ingress
Here is an example response from the command:
Name:         anaconda-enterprise-ingress
Labels:       app.kubernetes.io/managed-by=Helm
              skaffold.dev/run-id=8d38b94a-ab82-49d7-a6fd-0bc0fb549d1c
Annotations:  meta.helm.sh/release-name: anaconda-enterprise
              meta.helm.sh/release-namespace: default
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  *.*        []                 []              [*]
             [*]                []              [*]
Note
The above example is fully permissive. See the RBAC template for more realistic configurations.
A numeric UID that will be used to run Workbench containers is reserved.
Note
Include the UID in your checklist results.
Verify that GID 0 is permitted by the security context.
Verify any tolerations and/or node labels required to permit Workbench to run on its assigned nodes have been identified by running the following command:
# This command returns information for tolerations only
kubectl get nodes -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'
Verify that a Persistent Volume Claim (PVC) has been created within the application namespace, referencing a statically provisioned Persistent Volume that meets the storage requirements for the anaconda-storage volume, by running the following command:
kubectl describe pvc anaconda-storage
Here is an example response from the command:
Name:          anaconda-storage
Namespace:     default
StorageClass:  anaconda-storage
Status:        Bound
Volume:        anaconda-storage
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      500Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    anaconda-enterprise-ap-git-storage-6658575d6f-vxj4s
               anaconda-enterprise-ap-object-storage-76bcfc4d44-ctlhp
               anaconda-enterprise-postgres-c76869799-cbqzq
Events:        <none>
Verify a PVC has been created within the application namespace, referencing a statically provisioned Persistent Volume that meets the storage requirements for the anaconda-persistence volume, by running the following command:
kubectl describe pvc anaconda-persistence
Here is an example response from the command:
Name:            anaconda-persistence
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:
Status:          Bound
Claim:           default/anaconda-persistence
Reclaim Policy:  Retain
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        500Gi
Node Affinity:   <none>
Message:
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.234.2.7
    Path:      /data/persistence
    ReadOnly:  false
Events:          <none>
The cluster is sized appropriately (CPU / memory) for user workloads, including consideration for “burst” workloads. For more information, see Understanding Workbench system requirements.
Resource profiles have been determined and created in the values.yaml file.
A domain name for the Workbench application has been identified.
Note
Please include this domain name in your checklist output.
If you are supplying your own ingress controller, verify it has already been installed, and that its master IP address and ingressClassName value have been identified.
Note
Please include both the IP address and the ingress class name in your checklist output.
Verify the DNS records for both anaconda.example.com and *.anaconda.example.com have been created and point to the IP address of the ingress controller by running the following command:
ping test.anaconda.example.com
Here is an example response from the command:
PING test.anaconda.example.com (167.172.143.144): 56 data bytes
Note
If the ingress controller is to be installed with Workbench, this may not be possible. In such cases, it is sufficient to confirm that the networking team is prepared to establish these records immediately following installation.
A wildcard SSL secret for anaconda.example.com and *.anaconda.example.com has been created. The public and private keys for the main certificate, as well as the full public certificate chain, are accessible from the administration server.
Note
Please share the public certificate chain in your checklist output.
If the SSL secret was created using a private CA, verify the public root certificate has been obtained.
If you are using a private Docker registry, verify the full set of Docker images have been transferred to this registry.
If a pull secret is required to access the Docker images (whether from the standard Workbench Docker channel or the private registry), verify the secret has been created in the application namespace by running the following command:
# Replace <NAMESPACE> with the namespace you've reserved for Workbench
# Replace <PULL_SECRET_NAME> with the name for your pull secret
kubectl get secret -n <NAMESPACE> <PULL_SECRET_NAME>