Most engineers trust Kubernetes to be secure by design. After all, it’s built with RBAC, service accounts, and network policies, right? But the truth is, many Kubernetes clusters are dangerously misconfigured, exposing their API server to easy privilege escalation, lateral movement, and even full cluster compromise.
This isn’t just theoretical. Real-world Kubernetes breaches have occurred due to misconfigurations that engineers assumed were ‘secure defaults.’ In this article, we’ll dive into how attackers actually hack Kubernetes clusters via the API and, more importantly, how to defend against these threats before someone else finds them first.
The danger of over-permissive RBAC
Kubernetes RBAC differs notably from familiar authorization frameworks like Active Directory, primarily because it lacks any explicit notion of "deny" rules. Kubernetes permissions are strictly additive: you explicitly grant permissions, and anything not explicitly permitted is inherently denied. This additive-only design can unintentionally lead administrators into granting overly generous permissions too high in the authorization hierarchy; a role bound at cluster scope applies in every namespace, potentially exposing resources to unnecessary risk.
Another area of frequent confusion is Kubernetes verbs: actions such as get, list, watch, create, update, patch, and delete. Translating these verbs into practical, everyday user activities can be unintuitive, leading administrators to inadvertently grant more permissions than needed. This issue becomes particularly pronounced when assigning permissions at cluster scope; in reality, many routine tasks require only namespace-scoped permissions. By misunderstanding these subtleties, administrators inadvertently widen the security footprint.
Even more troubling, those new to Kubernetes RBAC often find the granular configuration daunting. To bypass the complexity and frustration, they might default to assigning everyone the powerful cluster-admin role, intending to simplify operations and just "get on with work." This approach, however, is akin to granting all users Domain Admin privileges in Active Directory: an extremely dangerous practice that severely undermines cluster security, creates unnecessary vulnerabilities, and negates the very benefits Kubernetes RBAC was designed to offer.
How attackers exploit it
RBAC (Role-Based Access Control) is supposed to restrict what users and workloads can do. But in many clusters, engineers unknowingly grant excessive permissions. A common scenario:
- A service account gets cluster-admin privileges.
- A pod running with this service account is compromised.
- The attacker now has full cluster control via the API.
Hands-on Example: Finding over-permissive accounts
Run the following to see which service accounts have broad permissions:
kubectl get clusterrolebindings -o json | jq '.items[] | {role: .roleRef.name, subjects: .subjects}'
If you see cluster-admin linked to service accounts that don’t need it, you have a problem.
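You can also ask the API server exactly what a given service account may do. A quick check, assuming a hypothetical service account named app-sa in the default namespace:
kubectl auth can-i --list --as=system:serviceaccount:default:app-sa
If the output contains a wildcard row granting every verb on every resource, that account is effectively cluster-admin, whether or not it was bound to that role by name.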
How to defend
- Follow the principle of least privilege—grant only the permissions needed.
- Use separate service accounts per workload rather than default ones.
- Audit RBAC policies regularly with tools like rbac-lookup or kubectl auth can-i.
- Use namespace-specific permissions instead of cluster-wide permissions when possible (see the sketch after this list).
- Regularly review and adjust RBAC policies to ensure they remain appropriate.
- Or more simply, use Portainer to manage your user authentication and RBAC assignments, which takes away almost all of the complexity and misunderstanding surrounding RBAC.
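Pulling the first bullets together, here is a minimal sketch of a dedicated, least-privilege identity for a single workload; the reporting namespace and the report-generator and configmap-reader names are hypothetical placeholders:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: report-generator
  namespace: reporting
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-reader
  namespace: reporting
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: report-generator-configmaps
  namespace: reporting
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: configmap-reader
subjects:
- kind: ServiceAccount
  name: report-generator
  namespace: reporting
The workload's pod spec then sets serviceAccountName: report-generator, and a compromise of that pod yields read access to ConfigMaps in one namespace, nothing more.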
Service Account token exposure
In Kubernetes, service accounts act like user identities, but specifically for applications or services running within the cluster. Unlike human user accounts (managed externally through authentication systems), service accounts are Kubernetes-native and designed primarily to enable workloads—such as containers, pods, or applications—to authenticate to the Kubernetes API server and interact with cluster resources.
When Kubernetes creates a pod, unless otherwise specified, it automatically assigns the default service account from the pod’s namespace. This built-in convenience, however, introduces some significant weaknesses.
Firstly, default service accounts are frequently overprivileged. Because the default service account in many namespaces might have broader permissions than strictly necessary (especially if administrators aren’t diligent about securing RBAC configurations), applications or services within pods can unintentionally gain elevated access. This creates a hidden security risk: if one application is compromised, attackers could leverage the default service account's permissions to escalate privileges within the cluster.
Secondly, Kubernetes provides service account tokens automatically and mounts them directly into pods at startup, usually at a predictable location (/var/run/secrets/kubernetes.io/serviceaccount). These tokens, unless explicitly managed, are mounted in every pod by default, creating a predictable attack surface. Attackers who gain access to the pod or container can easily extract these tokens and impersonate the service account, potentially accessing cluster resources beyond the intended scope.
Thirdly, Kubernetes service accounts often lack granular, dynamic control mechanisms. Because RBAC is purely additive without explicit "deny" rules, misconfigured service accounts can inadvertently provide overly permissive access to cluster resources. Administrators who aren't clear on exactly what permissions the workload needs often default to granting broader permissions than necessary, increasing vulnerability.
How attackers exploit it
Did you know that every pod in Kubernetes gets a service account token by default? If an attacker gains access to a pod (via a vulnerable app or misconfiguration), they can:
- Use the token to query the Kubernetes API.
- Escalate privileges if the token has too many permissions.
Hands-on example: Extracting a Service Account token
If you get shell access to a pod:
cat /var/run/secrets/kubernetes.io/serviceaccount/token
Then, use it to access the API:
curl -k -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes.default.svc/api/v1/namespaces
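The same mounted directory also holds the cluster CA certificate and the pod's namespace, so an attacker doesn't even need the -k flag. A sketch of the tidier version of the same call:
SA=/var/run/secrets/kubernetes.io/serviceaccount
curl --cacert $SA/ca.crt \
  -H "Authorization: Bearer $(cat $SA/token)" \
  https://kubernetes.default.svc/api/v1/namespaces/$(cat $SA/namespace)/pods
If this lists the pods in the namespace, the mounted token has at least list access there, and reconnaissance continues from that point.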
How to defend
- Disable service account token mounting unless explicitly needed, by setting automountServiceAccountToken: false (see the pod spec sketch after this list).
- Use OIDC or workload identity for authentication instead of static tokens.
- Restrict API access with NetworkPolicies so that only authorized workloads can communicate with the API server.
- Implement token rotation to limit the lifespan of service account tokens.
- Use Portainer to regularly audit and remove unused service accounts to reduce the attack surface.
- Use Portainer as your OIDC user authentication provider, removing the need for the default service account tokens.
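A minimal sketch of the first defence, a pod that opts out of token mounting entirely (the pod name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: tokenless-app
spec:
  automountServiceAccountToken: false
  containers:
  - name: app
    image: nginx:alpine
The same automountServiceAccountToken field can also be set on the ServiceAccount object itself, so every pod using that account inherits the behaviour unless it explicitly opts back in.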
Anonymous API access and unauthenticated endpoints
In Kubernetes, Anonymous API Access refers to requests made to the Kubernetes API server without authentication credentials, relying instead on the default anonymous user identity. If not explicitly disabled or restricted, Kubernetes may inadvertently allow anonymous requests to access certain unauthenticated API endpoints, creating a significant security risk.
This risk is further compounded by cloud-hosted Kubernetes services, which often expose node IP addresses directly on the public internet, making these endpoints potential targets for attackers. Even in self-hosted Kubernetes deployments, node IPs are typically exposed internally to all users on the corporate LAN or connected VPNs, inadvertently increasing the cluster's vulnerability to internal threats or lateral movement by attackers who already have a foothold in your network. This underscores why explicitly controlling and securing anonymous API access is an essential practice.
How attackers exploit it
By default, some Kubernetes components allow anonymous API access, especially in older clusters. This means an attacker might be able to:
- List cluster resources without authentication.
- Discover running workloads and sensitive environment variables.
Hands-on example: Check if anonymous access is enabled
Run:
kubectl auth can-i list pods --as=system:anonymous
If the output is yes, your cluster is exposed.
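You can also probe from outside the cluster without any credentials at all. A quick sketch, where <api-server> is a placeholder for your API server's address:
curl -k https://<api-server>:6443/api/v1/namespaces
A 401 Unauthorized means anonymous authentication is disabled; a 403 Forbidden means anonymous requests are accepted but RBAC is blocking them; an actual listing of namespaces means the cluster is wide open.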
How to defend
- Disable anonymous API access by starting the API server with --anonymous-auth=false.
- Ensure your API server requires authentication and authorization.
- Implement strong API authentication mechanisms, such as OIDC or LDAP.
- If health checks are necessary, use authenticated service accounts.
- Restrict access to the /healthz, /livez, and /readyz endpoints using RBAC and/or NetworkPolicies.
- Or, more simply, use Portainer as the API entry-point for your clusters, denying access to the cluster API on all nodes. Portainer enforces user authentication as part of its native Kubernetes API proxy.
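On kubeadm-provisioned control planes, the flag from the first bullet lives in the API server's static pod manifest; the path below is the kubeadm default and is an assumption for other distributions:
# /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm default path)
spec:
  containers:
  - command:
    - kube-apiserver
    - --anonymous-auth=false
    # ...leave the existing flags in place...
The kubelet watches this directory and restarts the API server automatically when the file changes; verify your health checks afterwards, since unauthenticated probes of /healthz will start failing.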
Exploiting insecure admission controllers
An admission controller, as described in the Kubernetes documentation, is "a piece of code that intercepts requests to the Kubernetes API server prior to persistence of the resource, but after the request is authenticated and authorized". Without proper admission control, a user could inadvertently provide an entry point for an attacker through code they deploy.
How attackers exploit it
Some Kubernetes clusters lack proper admission control, allowing attackers to:
- Create privileged containers.
- Bypass security policies by modifying pod specifications.
Hands-on example: Deploying a privileged pod
If no pod security admission controls are enforced (PodSecurityPolicy was removed in Kubernetes 1.25, and many clusters never configured a replacement), an attacker can run:
apiVersion: v1
kind: Pod
metadata:
  name: privilege-escalation
spec:
  containers:
  - name: root-container
    image: busybox
    securityContext:
      privileged: true
This pod runs with full node privileges, potentially leading to host compromise.
How to defend
- Enforce Pod Security Standards (PSS) or Kyverno policies (see the one-line example after this list).
- Enable policy admission controllers such as Gatekeeper or Kyverno; PodSecurityPolicy is gone as of Kubernetes 1.25, with the built-in Pod Security Admission controller as its replacement.
- Or, more simply, use Portainer to enable enhanced security in your clusters, which auto-deploys OPA Gatekeeper and can easily apply restrictive permissions.
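With Pod Security Admission, Pod Security Standards are enforced per namespace via labels. A minimal sketch, with production as a placeholder namespace name:
kubectl label namespace production pod-security.kubernetes.io/enforce=restricted
With the restricted profile enforced, the privileged pod from the example above is rejected at admission time rather than scheduled.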
Lateral network movement across namespaces
In Kubernetes, namespaces are often misunderstood as security boundaries, particularly regarding network isolation. While namespaces effectively segment and organize cluster resources logically, they do not inherently restrict network traffic between applications.
By default, workloads in different namespaces can freely communicate with each other, making lateral movement across namespaces surprisingly straightforward for attackers who've gained initial access. This common misconception, that namespaces automatically isolate workloads, can lead administrators to assume a false sense of security, inadvertently exposing applications and services to risks such as lateral compromise or privilege escalation.
Where true network isolation is needed, separate clusters are recommended; a lighter-weight alternative is to use NetworkPolicies.
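A quick way to feel how flat the default network is: any pod can reach services in other namespaces by DNS name. A sketch, assuming a hypothetical web deployment in the frontend namespace, a billing-db service listening on port 8080 in payments, and a container image that ships wget:
kubectl exec -n frontend deploy/web -- \
  wget -qO- http://billing-db.payments.svc.cluster.local:8080/healthz
If this returns a response, nothing is standing between the two namespaces.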
How attackers exploit it
Namespaces in Kubernetes do not provide built-in network isolation by default. Many clusters lack proper NetworkPolicies, meaning once an attacker gains entry into one pod, they can freely move laterally to:
- Other pods across different namespaces.
- Internal services (databases, admin panels, etc.).
Attackers exploit this lack of restriction by performing internal reconnaissance, identifying vulnerable services, and escalating their privileges horizontally across the cluster.
Hands-on example: Check if your pods are unrestricted
First, quickly verify if your cluster has any network policies configured:
kubectl get networkpolicies --all-namespaces
If this returns no entries, your pods are completely unrestricted and can freely communicate across namespaces.
To practically test this, you can perform the following simple validation:
Deploy a test pod with network scanning tools (nmap) in one namespace:
kubectl run scanner -n default --rm -it --image=instrumentisto/nmap --command -- sh
Scan common Kubernetes internal CIDRs from within that pod to see accessible services and pods in other namespaces:
nmap -p 80,443 10.0.0.0/8
Replace 10.0.0.0/8 with the CIDR block relevant to your cluster (often 10.x.x.x, 172.x.x.x, or 192.168.x.x).
If you discover other namespaces' pods or services responding to your scan, you've confirmed there's no enforced isolation between namespaces.
How to defend
- Apply a default deny-all policy, then allow only necessary traffic (an example allow rule follows this list):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
- Use NetworkPolicies to define cross-namespace network access rules.
- Use OPA Gatekeeper to assist with configuring and enforcing these rules.
- Or, more simply, use Portainer to enable enhanced security, which auto-deploys OPA Gatekeeper and can apply restrictive permissions.
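Once default-deny is applied, traffic has to be re-allowed explicitly. A minimal sketch of a follow-up allow rule that permits ingress only from pods in the same namespace (the policy name is arbitrary):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
Because NetworkPolicy objects are namespaced, both the deny-all and the allow rules must be applied in every namespace you want to protect.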
So, the TL;DR
Kubernetes API security is often overlooked but highly exploitable. The above misconfigurations are just a few ways attackers can compromise clusters.
Want to see all of this visually and simplify security audits? Tools like Portainer can help reduce the complexity of securing your Kubernetes environments by providing a clear view of API access, service account permissions, and workload security settings.
Your Kubernetes API is a prime target. Secure it before attackers do.