Migrating from Gravity to Bring Your Own Kubernetes

You can migrate your Gravity-based Anaconda Enterprise installation to a Bring Your Own Kubernetes (BYOK8s) platform using our Backup and Restore tool, in conjunction with moving user data manually.

This topic provides guidance on the following:

  • Installing the backup script

  • Running the backup script

  • Migrating user data

  • Restoration modes

If you are using NFS or Dynamic Storage for both your “anaconda-storage” and “anaconda-persistence” volumes within your Gravity cluster, these steps are not necessary: you can unmount the volumes from your Gravity cluster and remount them on your BYOK8s cluster. Please work with our Implementation team for this process.


This process requires that your users have committed all work and stopped all running sessions, deployments, and jobs. Users must not use either cluster until the migration is complete, so that no data is lost.

Prerequisites

  • You have sudo access.

  • The jq conda package is installed in your base environment.
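If jq is not yet present in your base environment, it can be installed with conda; the channel below is an assumption, so use whichever channel your site mirrors:

```shell
# Install the jq package into the base conda environment
# (conda-forge channel is an assumption)
conda install -n base -c conda-forge jq
```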


Install the backup script


The ae5-conda environment mentioned in the prerequisites already contains the backup script. If you choose to install the environment, skip ahead to verify your installation.

Standard environment backup script

Install the ae5_backup_restore package into your base conda environment:

conda install -c ae5-admin ae5_backup_restore

Once complete, verify the installation.

Air-gapped environment backup script

Download the latest ae5-conda installer file and move it to your master node.

Set the installer file to be executable, then run it to install the ae5-conda environment:

chmod +x ae5-conda-latest-Linux-x86_64.sh
./ae5-conda-latest-Linux-x86_64.sh

Once complete, verify the installation.

Verify your installation

Verify your installation by testing a basic package command. Let’s try the help command:

ae_backup.sh -h

If your terminal returns the usage help text, then your installation of the backup/restore script was successful! You are now ready to run the backup script.

Run the backup script

When taking a backup, you will need to supply the -c, --config-db command line argument. With this option, the backup script captures only your Anaconda Enterprise configuration data; it does not capture user/project data, which you will move over manually in a later step of this process.

Run the ae_backup.sh script to create backup files of your cluster in the current directory:

bash ae_backup.sh -c

Or specify a destination for your backup files:

bash ae_backup.sh -c /your/file/path/here

The backup script will create one tarball file:

  ae5_config_db_YYYYMMDDHHMM

where YYYYMMDDHHMM is the timestamp of your backup data. The ae5_config_db file stores your Kubernetes resources and Postgres data.
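For reference, the timestamp portion of the filename follows the output of `date +%Y%m%d%H%M`; a quick sketch of the naming scheme:

```shell
# Reproduce the backup's timestamp naming scheme (YYYYMMDDHHMM)
stamp=$(date +%Y%m%d%H%M)
echo "ae5_config_db_${stamp}"
```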

Migrate user data

For this step, you will need to first scale down the ap- deployments within Gravity so that no further writes are made to disk. Within Gravity, run the following:

kubectl get deploy | grep ap- | cut -d' ' -f1 | xargs kubectl scale deploy --replicas=0

Verify that the “ap-” pods are no longer running with the following:

watch kubectl get pods
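If you prefer a non-interactive check, a small loop can wait until the pods are gone; this is a convenience sketch, assuming kubectl still targets the Gravity cluster:

```shell
# Poll until no pod whose name contains "ap-" remains
# (same grep pattern as the scale-down command above)
while kubectl get pods --no-headers 2>/dev/null | grep -q 'ap-'; do
  sleep 5
done
echo "All ap- pods have terminated."
```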

Once the “ap-” pods are no longer running, you will need to move the following directories into the new cluster:

  • /opt/anaconda/storage/git/repositories/anaconda

  • /opt/anaconda/storage/projects

This can be done by either directly mounting the /opt/anaconda/storage volume onto a workstation with access to the BYOK8s cluster, or by compressing both directories and copying them directly into the storage pods.
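If you take the compression route, the archives can be created on the Gravity master node before copying; a sketch, assuming the storage paths used throughout this guide (the tarball names are illustrative, and should match the names you use with kubectl cp later):

```shell
# On the Gravity master: archive the git repositories and the projects data
# (tarball names are illustrative)
tar -czf git-tarball.tar.gz -C /opt/anaconda/storage/git/repositories anaconda
tar -czf persistence-tarball.tar.gz -C /opt/anaconda/storage projects
```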


Before moving user data into place, you must ensure that permissions are set correctly for all files. If the UID/GID differs between the two clusters, you will need to update the ownership of the copied files to match the new cluster.
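As a hypothetical example, you can read the owning UID:GID from the new cluster's storage pod and apply it to the copied data; the pod name placeholder matches the ones used later in this topic, and the 1000:1000 value is purely illustrative:

```shell
# Discover the UID:GID that owns the storage on the new cluster
kubectl exec <anaconda-enterprise-ap-storage> -- stat -c '%u:%g' /opt/anaconda/storage/projects
# Suppose it reports 1000:1000; apply it to the copied data before the move
chown -R 1000:1000 /tmp/projects
```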

If you are able to directly mount the /opt/anaconda/storage volume onto a workstation with access to the BYOK8s cluster, after confirming file permissions match the new cluster, move both directories directly into place on top of the pre-existing directories on the BYOK8s cluster.

If you are unable to directly mount the /opt/anaconda/storage volume, you will instead need to copy the tarballs of both directories into the storage pods. This can be done by performing the following:

kubectl cp </path/git-tarball.tar.gz> <anaconda-enterprise-ap-git-storage>:/tmp
kubectl cp </path/persistence-tarball.tar.gz> <anaconda-enterprise-ap-storage>:/tmp

Once both tarballs have been copied to /tmp, exec into each pod, extract the tarball, confirm that file permissions are correct, and then move the data into place:

kubectl exec -it <anaconda-enterprise-ap-git-storage> -- /bin/bash
tar -xzf /tmp/git-tarball.tar.gz -C /tmp
mv /tmp/anaconda /opt/anaconda/storage/git/repositories/anaconda

kubectl exec -it <anaconda-enterprise-ap-storage> -- /bin/bash
tar -xzf /tmp/persistence-tarball.tar.gz -C /tmp
mv /tmp/projects /opt/anaconda/storage/projects

Restoration modes

Restoring to a different host without a hostname change

In this mode, only some resources are restored, as described below.

Restored data:

  • Kubernetes secrets (non-ssl)

  • Postgres

  • ConfigMaps

  • SSL certs

  • Secrets

Non-restored data:

  • Hostname

  • Ingress

This can be run with the following:

bash ae_restore.sh ae5_config_db_YYYYMMDDHHMM

Restoring to a different host, but with a hostname change

This mode fully restores all resources. The ingress is also updated in this case to reflect the new hostname.

This can be run with the following:

bash ae_restore.sh -u ae5_config_db_YYYYMMDDHHMM

After you have moved the user data over and run the restore script, the migration is complete. Confirm that your BYOK8s cluster contains all data from the old Gravity cluster.
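A quick way to spot-check the result, assuming kubectl now targets the BYOK8s cluster:

```shell
# Confirm the AE5 pods are up and running on the new cluster
kubectl get pods | grep ap-
```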