Pre-install checklist

This checklist should be used to verify all requirements have been met prior to any installation.

For many of these items, we have provided commands or command templates to run in order to verify the given prerequisite, along with typical output to give you an idea of the kind of information you should see. Please run each of these commands, modified as appropriate for your environment, and copy the output into a document to send to the Anaconda implementation team so that they can verify that the requirements are ready.

Basic requirements

  • An administration server has been provisioned with appropriate versions of kubectl, helm, and other tools needed to perform installation and administration tasks.

    Command: helm version:

    version.BuildInfo{Version:"v3.7.1", GitCommit:"1d11fcb5d3f3bf00dbe6fe31b8412839a96b3dc4", GitTreeState:"clean", GoVersion:"go1.16.9"}
    
  • The API version of the Kubernetes cluster is between 1.15 and 1.24.

    Command: kubectl version:

    Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.12", GitCommit:"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725", GitTreeState:"clean", BuildDate:"2020-05-06T05:17:59Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.12", GitCommit:"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725", GitTreeState:"clean", BuildDate:"2020-05-06T05:09:48Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}
    
  • All nodes on which Anaconda Enterprise will be installed have sufficient CPU and memory allocations.

    Command: kubectl get nodes -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.status.capacity.cpu}{'\t'}{.status.capacity.memory}{'\n'}{end}":

    10.234.2.18 16  65806876Ki
    10.234.2.19 16  65806876Ki
    10.234.2.20 16  65806876Ki
    10.234.2.21 16  65806876Ki
    10.234.2.6  16  65974812Ki
    

Access control and security

  • The namespace into which Anaconda Enterprise will be installed has been created.

    Command: kubectl describe namespace <NAMESPACE>:

    Name:         default
    Labels:       <none>
    Annotations:  <none>
    Status:       Active
    No resource quota.
    No resource limits.
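
    If the namespace has not yet been created, it can be created with a single command; this is only a sketch, and <NAMESPACE> is a placeholder for the namespace you have chosen:

    kubectl create namespace <NAMESPACE>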
    
  • The service account that will be used during the installation process, as well as by Anaconda Enterprise itself, has been created.

    Command: kubectl describe sa <SERVICEACCOUNT>:

    Name:                anaconda-enterprise
    Namespace:           default
    Labels:              <none>
    Annotations:         <none>
    Image pull secrets:  <none>
    Mountable secrets:   anaconda-enterprise-token-cdmnf
    Tokens:              anaconda-enterprise-token-cdmnf
    Events:              <none>
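
    If the service account has not yet been created, a minimal sketch of the creation command is shown below; <SERVICEACCOUNT> and <NAMESPACE> are placeholders for your chosen values:

    kubectl create serviceaccount <SERVICEACCOUNT> -n <NAMESPACE>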
    
  • (OpenShift) The Security Context Constraint (SCC) associated with the service account contains all of the necessary permissions. Note that the example below uses the anyuid SCC; however, the restricted SCC can also be used, as long as the UID range is known.

    Command: oc describe scc <SCC_NAME>:

    Name:                       anyuid
    Priority:                   10
    Access:
      Users:                    <none>
      Groups:                   system:cluster-admins
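
    For reference, an SCC can be granted to the service account with the OpenShift command sketched below; the SCC name, service account, and namespace are placeholders, and anyuid is only one possible choice, as noted above:

    oc adm policy add-scc-to-user <SCC_NAME> -z <SERVICEACCOUNT> -n <NAMESPACE>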
    
  • The ClusterRole resource associated with the service account has the necessary permissions to facilitate installation and operation.

    Command: kubectl describe clusterrole <CR_NAME>:

    Name:         anaconda-enterprise
    Labels:       app.kubernetes.io/managed-by=Helm
                  skaffold.dev/run-id=8d38b94a-ab82-49d7-a6fd-0bc0fb549d1c
    Annotations:  meta.helm.sh/release-name: anaconda-enterprise
                  meta.helm.sh/release-namespace: default
    PolicyRule:
      Resources  Non-Resource URLs  Resource Names  Verbs
      ---------  -----------------  --------------  -----
      *.*        []                 []              [*]
                 [*]                []              [*]
    

Note

The above example is fully permissive. See this example for a more realistic choice.
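
A ClusterRole is associated with the service account through a ClusterRoleBinding. The manifest below is a generic Kubernetes sketch; the resource names and namespace are placeholders rather than values mandated by Anaconda Enterprise:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: anaconda-enterprise
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: anaconda-enterprise
    subjects:
      - kind: ServiceAccount
        name: <SERVICEACCOUNT>
        namespace: <NAMESPACE>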

  • The numeric UID that will be used to run Anaconda Enterprise containers has been identified, and GID 0 has been verified to be permitted by the security context. Please include the UID in your checklist results.
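
    On OpenShift, the UID range assigned to a namespace can be read from its annotations, as sketched below; this check applies only to OpenShift and is not required on other distributions:

    oc get namespace <NAMESPACE> -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.uid-range}'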

  • Any tolerations and/or node labels required to permit Anaconda Enterprise to run on its assigned nodes have been identified.

    Command (tolerations only): kubectl get nodes -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'
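
    A corresponding sketch for reviewing node labels (no specific labels are assumed here) is:

    kubectl get nodes --show-labels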

Storage

  • A Persistent Volume Claim (PVC) has been created within the application namespace, referencing a statically provisioned Persistent Volume that meets the storage requirements for the anaconda-storage volume.

    Command: kubectl describe pvc anaconda-storage:

    Name:          anaconda-storage
    Namespace:     default
    StorageClass:  anaconda-storage
    Status:        Bound
    Volume:        anaconda-storage
    Labels:        <none>
    Annotations:   pv.kubernetes.io/bind-completed: yes
                   pv.kubernetes.io/bound-by-controller: yes
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:      500Gi
    Access Modes:  RWO
    VolumeMode:    Filesystem
    Mounted By:    anaconda-enterprise-ap-git-storage-6658575d6f-vxj4s
                   anaconda-enterprise-ap-object-storage-76bcfc4d44-ctlhp
                   anaconda-enterprise-postgres-c76869799-cbqzq
    Events:        <none>
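
    For reference, a statically bound PVC of this kind can be defined with a manifest similar to the sketch below; the size, access mode, storage class, and volume name mirror the example output above and should be adjusted for your environment:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: anaconda-storage
      namespace: <NAMESPACE>
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: anaconda-storage
      volumeName: anaconda-storage
      resources:
        requests:
          storage: 500Gi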
    
  • A Persistent Volume Claim (PVC) has been created within the application namespace, referencing a statically provisioned Persistent Volume that meets the storage requirements for the anaconda-persistence volume.

    Command: kubectl describe pv anaconda-persistence:

    Name:            anaconda-persistence
    Labels:          <none>
    Annotations:     pv.kubernetes.io/bound-by-controller: yes
    Finalizers:      [kubernetes.io/pv-protection]
    StorageClass:
    Status:          Bound
    Claim:           default/anaconda-persistence
    Reclaim Policy:  Retain
    Access Modes:    RWX
    VolumeMode:      Filesystem
    Capacity:        500Gi
    Node Affinity:   <none>
    Message:
    Source:
        Type:      NFS (an NFS mount that lasts the lifetime of a pod)
        Server:    10.234.2.7
        Path:      /data/persistence
        ReadOnly:  false
    Events:        <none>
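
    A statically provisioned NFS Persistent Volume like the one shown above can be defined with a manifest along these lines; the server, path, capacity, and access mode are taken from the example output and stand in for your own values:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: anaconda-persistence
    spec:
      capacity:
        storage: 500Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      nfs:
        server: 10.234.2.7
        path: /data/persistence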
    

Cluster Sizing / Resources

  • The cluster is sized appropriately (CPU/memory) for expected user workloads, including consideration for “burst” workloads. See Cluster considerations; a quick headroom check is sketched below.
  • Resource Profiles have been determined and added to the values.yaml file prior to install. See the Resource Profile guide.
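
    To help confirm the sizing item above, the allocatable capacity and existing resource requests on each node can be reviewed with a command along these lines; this is only a rough sketch, not a substitute for proper sizing:

    kubectl describe nodes | grep -A 8 "Allocated resources"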

Networking

  • The domain name for the Anaconda Enterprise application has been identified. In the next several bullets, we will use the sample domain anaconda.example.com as a stand-in for this choice. Please include this domain name in your checklist output.

  • If a customer-selected ingress controller is to be used, this controller has already been installed, and its master IP address and ingressClassName value have been identified. Please include both the IP address and the ingress class name in your checklist output.
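
    The sketch below assumes a controller exposed through a LoadBalancer Service; the namespace and Service name are placeholders, and the IngressClass listing requires a cluster version that supports that resource:

    kubectl get ingressclass
    kubectl get svc -n <INGRESS_NAMESPACE> <INGRESS_SERVICE_NAME>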

  • The DNS records for both anaconda.example.com and *.anaconda.example.com have been created, pointing to the IP address of the ingress controller.

    Command: ping test.anaconda.example.com:

    PING test.anaconda.example.com (167.172.143.144): 56 data bytes
    

    If the ingress controller is to be installed with Anaconda Enterprise, this may not be possible; in this case, it is sufficient to confirm that the networking team is prepared to instantiate these records immediately following installation.

  • A wildcard SSL secret for anaconda.example.com and *.anaconda.example.com has been created. The public and private keys for the main certificate, as well as the full public certificate chain, are accessible from the administration server. Please share the public certificate chain in your checklist output.
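
    The commands below sketch one way to create the wildcard TLS secret and to inspect the leaf certificate; the secret name and file names are placeholders rather than values required by Anaconda Enterprise, and the openssl command shows only the first certificate in the file:

    kubectl create secret tls <TLS_SECRET_NAME> -n <NAMESPACE> --cert=<FULLCHAIN_PEM> --key=<PRIVATE_KEY_PEM>
    openssl x509 -in <FULLCHAIN_PEM> -noout -subject -issuer -enddate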

  • If the SSL secret was created using a private CA, the public root certificate has been obtained.
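
    If applicable, the certificate can be checked against the private root with openssl; the file names are placeholders, and the -untrusted flag is only needed when intermediate certificates are involved:

    openssl verify -CAfile <ROOT_CA_PEM> -untrusted <INTERMEDIATE_PEM> <SERVER_CERT_PEM>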

Docker Images

  • If a private Docker registry is to be used, the full set of Docker images have been transferred to this registry.
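
    One way to mirror a single image into the private registry is sketched below; the registry addresses, image name, and tag are placeholders, and a docker login may be required first:

    docker pull <SOURCE_REGISTRY>/<IMAGE>:<TAG>
    docker tag <SOURCE_REGISTRY>/<IMAGE>:<TAG> <PRIVATE_REGISTRY>/<IMAGE>:<TAG>
    docker push <PRIVATE_REGISTRY>/<IMAGE>:<TAG>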

  • If a pull secret is required to access the Docker images—whether from the standard Anaconda Enterprise Docker channel or the private registry—the secret has been created in the application namespace.

    Command: kubectl get secret -n <NAMESPACE> <PULL_SECRET_NAME>
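
    If the pull secret does not yet exist, it can be created as a docker-registry secret; every value below is a placeholder for your own registry details:

    kubectl create secret docker-registry <PULL_SECRET_NAME> -n <NAMESPACE> --docker-server=<REGISTRY_URL> --docker-username=<USERNAME> --docker-password=<PASSWORD> --docker-email=<EMAIL>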