How to Restore a Portainer instance running on Kubernetes using the API

Adolfo Delorenzo · February 29, 2024 · 5 min read

If you want to see how easily and quickly a Portainer instance can be restored on a Kubernetes cluster using the Portainer API, this post is for you. Using a simple bash script called portainer_hb.sh and the Portainer API, we'll demonstrate that restoring a Portainer instance is a simple, fast, and efficient process.

The magic of Portainer lies in its simplicity. The bash script checks whether the main Portainer server is up and running by executing an API call that doesn't require authorization. If the response doesn't match the expected pattern, the script takes the initiative and deploys a new Portainer instance. Post-deployment, it restores a backup from an S3-compatible server, in our case MinIO.

Pre-reqs

For this blog, the main prerequisites are:

  • Portainer is backing up to an S3 bucket
  • Portainer is running on Kubernetes
  • Your Portainer instance is accessible via an FQDN

After the restore process, the IP address of the new Portainer instance has to be updated on your DNS server to ensure no disruption of services. This is especially important if you have endpoints connected to Portainer via Edge Agents.
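
If you manage your own DNS zone, this remap can be scripted as well. Below is a minimal sketch using nsupdate against a BIND-style server; the zone, key file, record name, TTL, and IP addresses are placeholders for illustration.

# Repoint the Portainer FQDN at the new instance (all values are examples)
cat <<'EOF' | nsupdate -k /etc/bind/portainer.key
server 192.168.10.1
zone example.com
update delete portainer.example.com A
update add portainer.example.com 300 A 192.168.10.176
send
EOF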

Backing up to S3

The backup process to S3 in Portainer is simple and powerful. You can follow our documentation page, Backing up to S3, which explains how to perform S3 bucket backups to any S3-compatible server such as Amazon S3, Wasabi, or MinIO.
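
If you're using MinIO and the target bucket doesn't exist yet, the MinIO client (mc) can create it. The alias name, endpoint, and credentials below are placeholders chosen to match the variables used later in this post.

# Register the MinIO server and create the backup bucket (placeholder values)
mc alias set minio http://192.168.10.1:9001 portainer changeme
mc mb minio/portainerbkp
mc ls minio/portainerbkp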

The portainer_hb.sh script

Is it alive?

Let's start by checking whether the main Portainer server is alive. This is a simple 'ping' implemented as an HTTP call with curl; depending on the response, we can determine whether Portainer is up. The ping interval is 5 seconds. If Portainer goes down, the script carries on to the next steps; otherwise, it stays in an endless loop.

# Check if the main Portainer server is running using an API call. An
# 'Unauthorized' reply means it is; no authentication is needed.
# NOTE: the URL was omitted from the original post. PORTAINER_URL is a
# placeholder for the main server's FQDN; /api/settings is one endpoint
# that returns a 401 JSON body when called without credentials.
PORTAINER_URL="https://portainer.example.com:9443"

while true
do
    portainer_up=$(curl --silent --insecure -X GET "$PORTAINER_URL/api/settings" | jq -r '.details')
    if [ "$portainer_up" = "Unauthorized" ]; then
        echo -ne '⚡ Portainer is up\r'
    else
        break
    fi
    sleep 5
done
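
For reference, the check above keys off the details field of the JSON error body that Portainer returns for an unauthenticated call to a protected endpoint. The exact message text may vary by Portainer version, but the response looks roughly like this:

# Example: unauthenticated call against a protected endpoint (shape may vary)
$ curl --silent --insecure "$PORTAINER_URL/api/settings" | jq
{
  "message": "A valid authorisation token is missing",
  "details": "Unauthorized"
}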

Create a new instance of Portainer on the secondary site

The next step of the process deploys a Portainer server on Kubernetes if the main server goes down. The publishing method adopted for this deployment is NodePort. For more details, see our documentation page, Install Portainer BE on your Kubernetes environment.

# Deploy a new Portainer instance
# NOTE: the manifest path was omitted from the original post; point -f at the
# Portainer NodePort manifest for Kubernetes (see the docs page linked above).
kubectl apply -n portainer -f portainer.yaml
echo
echo 'Deploying Portainer server'
echo
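
To confirm the NodePort publishing once the deploy is applied, you can inspect the service. The exact port numbers depend on your manifest; 30777 (HTTP) and 30779 (HTTPS) are the defaults in the Portainer manifest.

# Inspect the Portainer service to see which NodePorts were assigned
kubectl get svc -n portainer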

Is Portainer running?

This step makes sure the Portainer server is running correctly on the Kubernetes cluster before starting the restore process.

# Check if Portainer is running before applying the restore
while true
do
    portainer_running=$(kubectl get po -n portainer | tail -1 | awk '{print $3}')
    if [ "$portainer_running" != "Running" ]; then
        echo -ne '⚡ Portainer is not Running yet\r'
    else
        break
    fi
    sleep 1
done
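
As an aside, kubectl can perform this wait natively. A minimal equivalent, assuming the pod carries the app=portainer label from the standard manifest:

# Block until the Portainer pod reports Ready (the label is an assumption)
kubectl wait --namespace portainer --for=condition=Ready pod -l app=portainer --timeout=300s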

Restoring the main Portainer server from an S3 bucket

Now that Portainer is up and running, it is ready to be restored from a backup file stored in an S3 bucket. It is crucial that the variables below are set correctly in this next step. These are:

  • ACCESSKEYID
    • The access key for your S3 bucket
  • BUCKETNAME
    • The name of the bucket where your Portainer backup is stored
  • FILENAME
    • The name of the backup file to restore, for example portainer-backup_2024-02-27_00-55-00.tar.gz
  • FILEPASSWORD
    • If you defined a password for your backup file, place it here
  • REGION
    • The region where your S3 server is located, for example us-east-1
  • SERVER and PORT
    • The hostname or IP address of the S3 server and the port it runs on, for example 192.168.10.45 and 9001
  • SECRETKEY
    • Finally, the secret key that grants access to the S3 server

Change the variables above in the script to match your setup. Once the restore process finishes, remap the original FQDN to the IP address of the secondary Portainer server.

# Restore the Portainer backup from an S3 bucket
echo
echo 'Restoring Portainer backup'

ACCESSKEYID="portainer"
BUCKETNAME="portainerbkp"
FILENAME="portainer-backup_2024-02-27_00-55-00.tar.gz"
FILEPASSWORD="restore1234"
REGION="us-east-1"
SERVER="s3server.example.com"
PORT="9001"
SECRETKEY="changeme"

# NOTE: the restore URL was omitted from the original post. NEW_PORTAINER is a
# placeholder for the new instance's address (30779 is the default HTTPS
# NodePort); /api/backup/s3/restore is the Business Edition S3 restore endpoint.
NEW_PORTAINER="https://192.168.10.176:30779"

curl -X POST \
--insecure \
--header 'Content-Type: application/json' \
--url "$NEW_PORTAINER/api/backup/s3/restore" \
--data "{\"accessKeyID\": \"$ACCESSKEYID\", \"bucketName\": \"$BUCKETNAME\", \"filename\": \"$FILENAME\", \"password\": \"$FILEPASSWORD\", \"region\": \"$REGION\", \"s3CompatibleHost\": \"$SERVER:$PORT\", \"secretAccessKey\": \"$SECRETKEY\"}"

echo
echo 'Portainer restored'
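
Rather than assuming the restore succeeded, you can capture the HTTP status code of the restore call and check it explicitly. A minimal sketch, reusing the variables above:

# Capture the HTTP status of the restore call; a 2xx code means success
status=$(curl --silent --insecure -o /dev/null -w '%{http_code}' -X POST \
--header 'Content-Type: application/json' \
--url "$NEW_PORTAINER/api/backup/s3/restore" \
--data "{\"accessKeyID\": \"$ACCESSKEYID\", \"bucketName\": \"$BUCKETNAME\", \"filename\": \"$FILENAME\", \"password\": \"$FILEPASSWORD\", \"region\": \"$REGION\", \"s3CompatibleHost\": \"$SERVER:$PORT\", \"secretAccessKey\": \"$SECRETKEY\"}")
if [ "$status" -ge 200 ] && [ "$status" -lt 300 ]; then
    echo 'Portainer restored'
else
    echo "Restore failed (HTTP $status)"
fi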

The Complete Script

#!/bin/bash

# Placeholders (the URLs were omitted from the original post; adjust to your
# setup): PORTAINER_URL is the main server's FQDN, NEW_PORTAINER is the
# address of the secondary instance (30779 is the default HTTPS NodePort).
PORTAINER_URL="https://portainer.example.com:9443"
NEW_PORTAINER="https://192.168.10.176:30779"

# Check if the main Portainer server is running using an API call. An
# 'Unauthorized' reply means it is; no authentication is needed.
while true
do
    portainer_up=$(curl --silent --insecure -X GET "$PORTAINER_URL/api/settings" | jq -r '.details')
    if [ "$portainer_up" = "Unauthorized" ]; then
        echo -ne '⚡ Portainer is up\r'
    else
        break
    fi
    sleep 5
done

# Deploy a new Portainer instance (point -f at the Portainer NodePort manifest)
kubectl apply -n portainer -f portainer.yaml
echo
echo 'Deploying Portainer server'
echo

# Check if Portainer is running before applying the restore
while true
do
    portainer_running=$(kubectl get po -n portainer | tail -1 | awk '{print $3}')
    if [ "$portainer_running" != "Running" ]; then
        echo -ne '⚡ Portainer is not Running yet\r'
    else
        break
    fi
    sleep 1
done

# Restore the Portainer backup from an S3 bucket
sleep 5
echo
echo 'Restoring Portainer backup'

ACCESSKEYID="portainer"
BUCKETNAME="portainerbkp"
FILENAME="portainer-backup_2024-02-27_00-55-00.tar.gz"
FILEPASSWORD="restore1234"
REGION="us-east-1"
SERVER="s3server.example.com"
PORT="9001"
SECRETKEY="changeme"

curl -X POST \
--insecure \
--header 'Content-Type: application/json' \
--url "$NEW_PORTAINER/api/backup/s3/restore" \
--data "{\"accessKeyID\": \"$ACCESSKEYID\", \"bucketName\": \"$BUCKETNAME\", \"filename\": \"$FILENAME\", \"password\": \"$FILEPASSWORD\", \"region\": \"$REGION\", \"s3CompatibleHost\": \"$SERVER:$PORT\", \"secretAccessKey\": \"$SECRETKEY\"}"

echo
echo 'Portainer restored'

What makes this process genuinely remarkable is continuity. The restore carries across all of Portainer's pre-configured settings, ensuring no disruption when the application is redeployed on the backup Kubernetes cluster. In a real-world production environment, this includes endpoints, registry settings, authentication, and everything else configured on the Portainer server. The implication is clear: there are minimal interruptions to your operations.

To give a more precise and practical understanding, below is a video demonstrating the automated restore process using the portainer_hb.sh bash script.

In the video, the main Portainer server runs on IP address 192.168.10.171, while the backup operates on IP address 192.168.10.176. The MinIO S3 server, tasked with storing the backups, runs on IP address 192.168.10.1.

This demonstration offers a glimpse into the potential of the Portainer API in automating and simplifying the restore process. The ability to quickly deploy and restore a new Portainer instance from a backup can help maintain continuity and effectiveness in managing your Kubernetes clusters.
