Introduction: Architectural Overview
Rancher and Portainer approach Kubernetes management very differently. Rancher is a comprehensive multi-cluster management platform that organizes clusters into “projects” (application-specific groupings) and leverages custom RBAC, GitOps (via Fleet), and an array of add-ons managed through its UI. Its control plane is a set of services running on Kubernetes.
Portainer, on the other hand, is a self-contained containerized management solution. Its central server can run in various environments—on a dedicated management VM (under Docker), on a single-node Kubernetes cluster (using k3s), or as part of a small Kubernetes-as-a-Service (KaaS) deployment in the cloud. Portainer uses pre-defined RBAC roles and a pull-based GitOps model that requires you to define your desired state directly within its interface. It can manage any Kubernetes environment—on-prem, cloud, edge, or any distribution—with a simpler, container-first architecture.
1. Inventory and Prerequisites
Begin by manually documenting all key elements of your Rancher-managed environment. In Rancher, “projects” serve as application-specific groupings that help organize namespaces according to your deployment needs. Record details such as:
- Projects and Namespaces: List all Rancher projects and the namespaces they contain.
- RBAC Configurations: Capture custom roles, project-level permissions, and user/team assignments for each project.
- RKE Cluster Footprint: Note node sizes, control-plane versus worker allocations, and any specific node configurations.
- Application Deployments: Identify workloads deployed manually, via Rancher’s GitOps (Fleet), or through external CI/CD pipelines using the Rancher Kubernetes proxy.
- Networking Settings: Record load balancers, ingress rules, exposed ports, and routing details.
- Installed Components: List CRDs, Helm charts, and add-ons.
- Security and Observability: Document alerting, auditing, monitoring, and logging setups.
Consult Rancher’s documentation to verify that you’ve captured every configuration element necessary for the migration.
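If you have kubectl and Helm access to the Rancher management cluster and its downstream clusters, much of this inventory can be scripted rather than collected by hand. A minimal sketch, assuming Rancher 2.x CRD names (verify them against your version):

```bash
# Run against the Rancher management cluster: Rancher stores project
# definitions as custom resources under projects.management.cattle.io.
kubectl get projects.management.cattle.io -A -o yaml > rancher-projects.yaml

# Run against each downstream cluster: namespaces record their parent
# project in the field.cattle.io/projectId annotation.
kubectl get namespaces \
  -o custom-columns='NAME:.metadata.name,PROJECT:.metadata.annotations.field\.cattle\.io/projectId'

# Capture RBAC objects, CRDs, and Helm releases for later comparison.
kubectl get clusterroles,clusterrolebindings -o yaml > cluster-rbac.yaml
kubectl get roles,rolebindings -A -o yaml > namespaced-rbac.yaml
kubectl get crds -o name > crds.txt
helm list -A > helm-releases.txt
```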
2. Deploy the New Portainer Management Environment
Set up a new Portainer management server as your central control plane:
- Installation Options: Deploy Portainer as a self-contained container on a dedicated management VM (using Docker), on a single-node Kubernetes cluster (with k3s), or in a small Kubernetes-as-a-Service (KaaS) cluster in the cloud (a sample Docker command appears below).
- Directory Integration: Link Portainer with your enterprise directory for seamless user and team authentication.
- Base Configuration: Configure global security policies, network settings, and storage options to match your organization’s requirements.
- Sidero Omni: Because this migration moves you from RKE to Talos Linux Kubernetes, register for a Sidero Omni SaaS account and then add its credentials to Portainer.
Refer to academy.portainer.io for detailed initial setup steps.
This environment will oversee your new clusters and coordinate subsequent reconfigurations.
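For the dedicated-VM option, the Portainer server starts with a single Docker command. A sketch along the lines of the standard install (confirm the current image tag and edition in the Portainer docs):

```bash
# Create a persistent volume for Portainer's database, then start the server.
docker volume create portainer_data
docker run -d \
  -p 8000:8000 -p 9443:9443 \
  --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ee:latest

# The UI is then available at https://<vm-address>:9443
```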
3. Recreating Your Kubernetes Clusters Using Portainer
With your Portainer management environment in place, re-create your RKE Kubernetes clusters using Portainer's integrated deployment of Sidero's Talos Linux. Note that unlike Rancher, with its "machine" drivers, VM provisioning is handled externally: you must pre-create VMs in your hypervisor or cloud and boot them from the Talos bootable media so Portainer can discover them. The same applies to bare-metal servers, which must also be booted from the Talos media.
In practice:
- Provision External Hardware: Boot your VMs or bare-metal servers from the Talos bootable media (via your cloud provider or on-prem provisioning tooling) so that they become discoverable by Portainer.
- Cluster Configuration in Portainer: In the Portainer console, add a new environment, use the Talos option, and create the clusters with configurations that match your existing RKE clusters.
- Cluster Creation: Portainer will integrate these pre-booted nodes into secure, immutable clusters.
- Cluster Policies: With the clusters now defined in Portainer, you can apply security policies to them using Portainer's native OPA Gatekeeper integration; an example constraint is sketched below. Lock down the clusters as appropriate for your needs, and use the cluster settings to enable or disable functionality to suit your requirements.
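As an illustration of the kind of policy you might apply, the sketch below uses the well-known K8sRequiredLabels constraint from the gatekeeper-library to require an owner label on every namespace; it assumes the corresponding ConstraintTemplate is already installed in the cluster:

```bash
kubectl apply -f - <<'EOF'
# Require that every namespace carries an "owner" label.
# Assumes the K8sRequiredLabels ConstraintTemplate (gatekeeper-library)
# is already installed.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["owner"]
EOF
```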
4. Recreate RBAC and User/Team Assignments
Since Rancher uses custom RBAC configurations mapped to projects, and Portainer relies on pre-defined roles mapped to clusters/namespaces, you must manually remap your existing settings:
- Review Configurations: Refer to your documented RBAC details and user/team assignments from Rancher, including those defined within each project.
- Map to Portainer Roles: Assign each user or team to the appropriate Portainer role (e.g., administrator, operator, or read-only) on the respective cluster; the API sketch after this list can help cross-check assignments.
- Document the Mapping: Update your internal documentation to ensure consistent access control post-migration.
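Portainer's HTTP API can help you cross-check the remapping at scale. A minimal sketch that authenticates and lists the teams and users Portainer knows about, so you can compare them with your Rancher export (the host name is hypothetical; adjust credentials accordingly):

```bash
PORTAINER_URL="https://portainer.example.com:9443"   # hypothetical host

# Authenticate and capture a JWT for subsequent API calls.
JWT=$(curl -sk -X POST "$PORTAINER_URL/api/auth" \
  -H "Content-Type: application/json" \
  -d '{"username":"admin","password":"<your-password>"}' | jq -r '.jwt')

# List teams and users to verify directory sync before assigning roles.
curl -sk -H "Authorization: Bearer $JWT" "$PORTAINER_URL/api/teams" | jq .
curl -sk -H "Authorization: Bearer $JWT" "$PORTAINER_URL/api/users" | jq .
```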
5. Onboard Any Non-RKE Clusters That You Wish to Retain
For clusters that were not created using RKE, ensure they are still part of your operational environment by onboarding them into Portainer:
- Document Existing Clusters: Identify any non-RKE clusters you wish to continue managing.
- Install Portainer Agents: Deploy the Portainer agent on each cluster to enable management (see the sketch after this list).
- Register Clusters in Portainer: Manually add these clusters to the Portainer management server.
- Update Automation: Modify any scripts or integrations that previously interfaced with Rancher’s API so they now communicate with Portainer’s endpoints.
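Installing the agent is typically a one-line kubectl apply. The manifest URL below is illustrative and version-specific; use the exact command that your Portainer server generates when you add an environment:

```bash
# Illustrative only: the real manifest URL is generated by your Portainer
# server and encodes the Portainer version you are running.
kubectl apply -f https://downloads.portainer.io/ee2-19/portainer-agent-k8s-lb.yaml

# Confirm the agent is running before registering the cluster in Portainer.
kubectl get pods -n portainer
```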
6. Observability
With Portainer now the management console, you need to update your observability tooling.
- Kube Metrics: If your needs are simple, the Kubernetes Metrics Server is likely all that's needed; Portainer natively integrates with it, so to enable CPU and memory statistics, deploy the metrics-server Helm chart into each managed cluster (see the sketch after this list).
- Prometheus: If you need deeper insights, you can deploy Prometheus in each managed cluster, following Portainer's documented procedure for this.
- Grafana: If you are using Grafana for dashboards, repoint it to the new Prometheus instances.
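A sketch of the metrics-server deployment using the upstream kubernetes-sigs Helm chart, run once per managed cluster:

```bash
# Install the Kubernetes Metrics Server so Portainer can display
# CPU and memory statistics for the cluster.
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm repo update
helm upgrade --install metrics-server metrics-server/metrics-server \
  --namespace kube-system

# Verify that metrics are flowing.
kubectl top nodes
```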
7. Application Redeployment
Since your new clusters are built from scratch, application workloads must be redeployed:
- Blue Environment: Continue running your production applications on your existing Rancher-managed clusters.
- Green Environment: Redeploy your workloads on the new clusters managed by Portainer using your manifests, Helm charts, or Portainer-defined GitOps pipelines (a scripted example follows this list).
- Testing: Thoroughly test redeployed applications to ensure RBAC, networking, and environment configurations are correct.
- Traffic Shift: Gradually transition production traffic from Blue to Green using DNS changes, load balancer updates, or service endpoint modifications.
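With separate kubeconfig contexts for the Blue and Green environments, the redeployment and comparison can be scripted. A sketch using hypothetical context, chart, and namespace names:

```bash
# Hypothetical contexts: "rancher-blue" (old) and "portainer-green" (new).
# Redeploy an application chart into the Green cluster.
helm upgrade --install myapp ./charts/myapp \
  --kube-context portainer-green \
  --namespace myapp --create-namespace \
  -f values-production.yaml

# Compare what is running in each environment before shifting traffic.
kubectl --context rancher-blue    get deploy,svc,ingress -n myapp
kubectl --context portainer-green get deploy,svc,ingress -n myapp
```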
8. Final Validation and Cutover
After reconfiguring and redeploying everything, perform comprehensive validation:
- Comprehensive Testing: Verify node health, user access, and application performance against your documented baseline (example smoke checks are sketched after this list).
- Production Cutover: Once validated, redirect all traffic to the new Portainer-managed clusters.
- Decommissioning: Decommission the old Rancher-managed clusters only after confirming a stable transition.
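A few example smoke checks, using a hypothetical application endpoint:

```bash
# Node and workload health on the new clusters.
kubectl get nodes -o wide
kubectl get pods -A --field-selector=status.phase!=Running   # completed Jobs also appear

# Hypothetical URL: substitute your own application endpoints.
curl -fsS https://myapp.example.com/healthz && echo "OK"
```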
You now have the steps required to migrate your Kubernetes management platform and distribution from Rancher (and RKE) to Portainer (and Sidero's Talos Linux).
