Kubernetes (including AWS EKS)
Folks who interact regularly with resources in Kubernetes will be familiar with kubectl, the Kubernetes CLI. One of the most useful features of Kubernetes is being able to create ephemeral "exec" resources, which provide the functionality to stream commands to (running) containers in a Kubernetes cluster. This is exposed in the Kubernetes CLI via the kubectl exec subcommand, and its usage is as follows:

```shell
kubectl exec -it nginx -- /bin/bash
```
If you are interested in a technical deep-dive of how kubectl exec works, this Medium article might be of interest to you.
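For example, when a pod runs in a non-default namespace or has multiple containers, the namespace and container can be specified explicitly (the names below are illustrative, not from this document):

```shell
# Open a shell in a specific container of a pod in a specific namespace.
# "my-namespace", "my-pod", and "my-container" are example names.
kubectl exec -it -n my-namespace my-pod -c my-container -- /bin/sh
```

This namespace / pod / container triple is exactly what Border0 prompts your clients for when an entire cluster is exposed behind a single socket.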
The Border0 connector reproduces the same functionality as kubectl exec in order to expose containers running in Kubernetes as Border0 sockets.
Further, you can expose entire clusters as a single Border0 socket, such that your clients (employees / users) will be prompted for the namespace, pod, and container they'd like a shell in each time they connect to the Border0 socket.
Permissions / Kubernetes RBAC
The Kubernetes identity of the connector must have, at a minimum, the permissions outlined in the Kubernetes ClusterRole below:
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: border0-connector
rules:
  # rule for connector to list namespaces, services, and pods
  - apiGroups: [""]
    resources: ["namespaces", "services", "pods"]
    verbs: ["list"]
  # rule for connector to get pods (incl. their containers)
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get"]
  # rule for connector to create exec streams into pods
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
```
In other words, the connector must be able to list namespaces and pods in the cluster, describe (get) pods, and create "exec" resources for pods.
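One way to sanity-check that a given Kubernetes identity carries these permissions is kubectl auth can-i with impersonation; the service account name and namespace below are assumptions, so substitute your own:

```shell
# Check each permission the connector needs, impersonating its identity.
# The service account "border0-connector" in namespace "border0" is an example.
SA="system:serviceaccount:border0:border0-connector"
kubectl auth can-i list namespaces --as="$SA"
kubectl auth can-i list pods --as="$SA"
kubectl auth can-i get pods --as="$SA"
kubectl auth can-i create pods/exec --as="$SA"
```

Each command prints yes or no, so a single no pinpoints the missing rule.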
Deployment Modes
How you define your socket configuration (for kubectl exec sockets) depends largely on where and how your connector is deployed. The two deployment modes are:
- In-Cluster: the connector is running within the Kubernetes cluster as a Kubernetes deployment with an associated service account capable of listing namespaces and pods, getting pods, and creating exec resources for pods.
- Out-of-Cluster: the connector is running in compute not associated with the Kubernetes cluster.
When the connector is deployed In-Cluster, socket configuration will be identical regardless of where the Kubernetes cluster is running. In other words, socket configuration is no different whether the cluster is on-premises, AWS EKS, GCP GKE, Azure Kubernetes Service, or any other managed Kubernetes cluster provider.
If you plan to primarily expose Kubernetes resources, we highly recommend deploying the Border0 connector In-Cluster due to its simpler configuration (relative to Out-of-Cluster). Out-of-Cluster is recommended if you want to expose Kubernetes resources in addition to other resources outside of the Kubernetes cluster (e.g. other types of compute) and you only want to run a single connector. Keep in mind, though, that you may run as many connectors as you see fit - your clients (employees / users) will not notice any difference in experience.
Both options provide the same flexibility in terms of what is ultimately exposed to clients (employees / users) behind sockets. For example, you may choose to expose the whole cluster behind a single socket, specific namespaces behind various sockets, or even specific pods (with the use of Kubernetes selectors) behind various sockets.
In-Cluster
When you have a connector running in the Kubernetes cluster, exposing Kubernetes resources becomes quite trivial.
We recommend running the connector as a Kubernetes Deployment with a Kubernetes Service Account tied to the Kubernetes ClusterRole outlined in the Permissions / Kubernetes RBAC section above.
For a full .yaml configuration file, you may refer to the example found in our examples repository here. Be sure to read the notes at the top of the file before applying the Kubernetes definitions to your cluster. For more details about this example, refer to the FAQ: "What's the fastest way to expose an entire Kubernetes cluster behind Border0?".
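As a rough illustration (not a substitute for the example file above), the shape of such a deployment is a ServiceAccount bound to the ClusterRole, referenced by the connector Deployment. The namespace, image reference, and token secret below are assumptions:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: border0-connector
  namespace: border0
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: border0-connector
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: border0-connector  # the ClusterRole from the Permissions section above
subjects:
  - kind: ServiceAccount
    name: border0-connector
    namespace: border0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: border0-connector
  namespace: border0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: border0-connector
  template:
    metadata:
      labels:
        app: border0-connector
    spec:
      serviceAccountName: border0-connector
      containers:
        - name: border0-connector
          image: border0/connector:latest  # image reference is an assumption; use the one from the examples repo
          env:
            - name: BORDER0_TOKEN  # env var name is an assumption; see the sample file for the canonical name
              valueFrom:
                secretKeyRef:
                  name: border0-token
                  key: token
```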
Creating a Border0 Socket for an In-Cluster Connector
To make containers on Kubernetes available, follow the steps below:
- In the Sockets page, click on "Add New Socket" and click the Kubernetes tile
- Set a name and, optionally, a description
- From the Upstream Connection Type dropdown select Kubectl Exec
- From the Kubectl Exec Target Type dropdown select Standard
- From the Kubernetes Authentication Strategy dropdown select Default Credentials Chain (which will result in the connector using the In-Cluster role).
- This option will result in the connector respecting Kubernetes' default precedence of configuration loading rules. e.g. environment variables first, then kubeconfig files, then in-cluster configuration. This precedence ensures that clients work correctly in various environments, from a developer's local machine to a pod running in a Kubernetes cluster, providing flexibility and ease of configuration in different deployment scenarios.
- (Optionally) Add allowlisted namespaces and selectors for each namespace. Skipping this part will expose the entire cluster over the Border0 socket. Adding namespaces but no selector for a given namespace will expose all pods in that namespace. Adding multiple selectors for a given namespace will expose any pods that match any of the selectors (i.e. selector matches are logically OR-ed, not AND-ed).
- Lastly, select your target connector
- Click Create Socket at the bottom of the page
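Since selector matches are OR-ed, you can preview what a set of selectors would expose by running one kubectl query per selector; the union of the results is what clients would see. The namespace and labels below are examples:

```shell
# Each allowlisted selector corresponds to an independent label query.
# "my-namespace", "app=web", and "app=worker" are example values.
kubectl get pods -n my-namespace -l 'app=web'
kubectl get pods -n my-namespace -l 'app=worker'
# A pod matching either label query above would be exposed over the socket.
```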
Out-of-Cluster
When you have a connector running outside the Kubernetes cluster, exposing Kubernetes resources requires slightly more configuration depending on how the connector will be authenticating against the cluster.
Heads-up!
If your connector has access to Kubernetes credentials (with the right permissions) in the file system, then configuration of sockets when the connector is Out-of-Cluster is identical to when it is In-Cluster.
Creating a Border0 Socket for an Out-of-Cluster Connector
The main difference in configuration between sockets for In-Cluster and Out-of-Cluster connectors is that when the connector is running Out-of-Cluster, it might be necessary to provide a non-default Master URL and Kubeconfig Path. This can be done by choosing the Master URL and Kubeconfig Path option from the Kubernetes Authentication Strategy dropdown. Below we go over the entire list of steps including this additional step.
- In the Sockets page, click on "Add New Socket" and click the Kubernetes tile
- Set a name and, optionally, a description
- From the Upstream Connection Type dropdown select Kubectl Exec
- From the Kubectl Exec Target Type dropdown select Standard
- From the Kubernetes Authentication Strategy dropdown select Default Credentials Chain (which will result in the connector using the In-Cluster role) OR Master URL and Kubeconfig Path if you would like to define either Master URL and/or Kubeconfig Path manually.
- This Default Credentials Chain option will result in the connector respecting Kubernetes' default precedence of configuration loading rules. e.g. environment variables first, then kubeconfig files, then in-cluster configuration. This precedence ensures that clients work correctly in various environments, from a developer's local machine to a pod running in a Kubernetes cluster, providing flexibility and ease of configuration in different deployment scenarios.
- (Optionally) Add allowlisted namespaces and selectors for each namespace. Skipping this part will expose the entire cluster over the Border0 socket. Adding namespaces but no selector for a given namespace will expose all pods in that namespace. Adding multiple selectors for a given namespace will expose any pods that match any of the selectors (i.e. selector matches are logically OR-ed, not AND-ed).
- Lastly, select your target connector
- Click Create Socket at the bottom of the page
AWS EKS
Exposing resources in AWS EKS is possible through both In-Cluster and Out-of-Cluster Border0 connectors. As noted above, if your connector is running In-Cluster, your socket configuration will be that of a standard kubectl exec Border0 socket - it will not have any configuration specific to AWS. If this is your scenario, refer to the In-Cluster section above.
On the other hand, if your connector is running Out-of-Cluster, there will be a few more AWS-IAM related configuration steps that administrators must take in order to make the Kubernetes cluster (in AWS EKS) available to the connector.
Known Limitation for Out-of-Cluster Connectors with EKS
Heads up!
AWS's chosen authorization approach for EKS clusters prevents Border0 connectors running outside of the cluster from automatically gaining access to resources within EKS clusters using AWS IAM credentials alone.
The connector's AWS IAM identity (e.g. an AWS IAM Role) must be mapped to a Kubernetes identity in a special ConfigMap called aws-auth, present in the kube-system namespace (in each AWS EKS cluster you'd like to expose behind a given connector). If there is no mapping for the connector's AWS IAM identity, any request from the connector to the Kubernetes API will result in an unauthorized response. Note that Border0 cannot edit this ConfigMap for you, since doing so would itself require access to the cluster.
(1) Configure AWS IAM Permissions for Connector
Heads-up!
If you use the Border0 CLI's built-in connector installer for AWS, this will be handled for you.
In order for the Border0 connector to automatically discover AWS EKS clusters, the connector's AWS IAM identity must have the following policy document as part of its IAM Policy:
```yaml
Version: '2012-10-17'
Statement:
  - Effect: Allow
    Action:
      - 'eks:ListClusters'
      - 'eks:DescribeCluster'
    Resource: '*'
```
In other words, the connector's AWS IAM identity will be able to list and describe any AWS EKS clusters in your AWS account.
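If you manage the connector's role with the AWS CLI rather than the Border0 installer, the policy above can be attached as an inline policy; the role and policy names here are assumptions:

```shell
# Attach the EKS discovery permissions inline to the connector's IAM role.
# "border0-connector-role" and "border0-eks-discovery" are example names.
aws iam put-role-policy \
  --role-name border0-connector-role \
  --policy-name border0-eks-discovery \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["eks:ListClusters", "eks:DescribeCluster"],
      "Resource": "*"
    }]
  }'
```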
(2) Configure Kubernetes Cluster Authorization
You may choose between the following 3 options for granting a connector outside of your EKS cluster the ability to connect to pods/containers within your cluster:
- (a) Use the system:masters Kubernetes RBAC Group
- (b) Use an existing Kubernetes RBAC Group
  - Note that this RBAC Group must have, at a minimum, the permissions outlined in the Permissions / Kubernetes RBAC section above
- (c) Create a new Kubernetes RBAC Group
(a) Use the system:masters Kubernetes RBAC Group
Heads-up!
Assigning the Border0 connector's AWS IAM role to the system:masters group grants the Border0 connector full administrative privileges over the cluster. We recommend against using this method in production environments, in favour of approaches (b) or (c) below.
To associate the Border0 connector's AWS IAM identity with the system:masters Kubernetes RBAC group, edit the aws-auth ConfigMap in the kube-system namespace and append the following to mapRoles (replacing [[ THE AWS IAM ROLE OF YOUR BORDER0 CONNECTOR ]] with the AWS IAM Role of your Border0 connector):

```yaml
- groups:
    - system:masters
  rolearn: [[ THE AWS IAM ROLE OF YOUR BORDER0 CONNECTOR ]]
  username: border0-connector
```
Note
You can edit the ConfigMap with the following commands:
- Set your kubectl context to that of the AWS EKS cluster:
```shell
aws eks update-kubeconfig --region ${AWS_EKS_CLUSTER_REGION} --name ${AWS_EKS_CLUSTER_NAME}
```
- Edit the aws-auth ConfigMap:
```shell
kubectl edit configmap aws-auth --namespace kube-system
```
The resulting aws-auth ConfigMap should look something like this:

```shell
$ kubectl describe configmap/aws-auth -n kube-system
Name:         aws-auth
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
mapRoles:
----
- groups:
    - system:bootstrappers
    - system:nodes
  rolearn: arn:aws:iam::123456789012:role/my-eks-cluster-bootstrapper-role-arn
  username: system:node:{{EC2PrivateDNSName}}
- groups:
    - system:masters
  rolearn: arn:aws:iam::123456789012:role/border0-aws-connector
  username: border0-connector

BinaryData
====

Events:  <none>
```
(b) Use an existing Kubernetes RBAC Group
Heads-Up!
The RBAC Group must have, at a minimum, the permissions outlined in the Permissions / Kubernetes RBAC section above
To associate the Border0 connector's AWS IAM identity with a Kubernetes RBAC group of your choosing, edit the aws-auth ConfigMap in the kube-system namespace and append the following to mapRoles (replacing the [[ KUBERNETES RBAC GROUP IDENTIFIER ]] and [[ THE AWS IAM ROLE OF YOUR BORDER0 CONNECTOR ]] with the appropriate values):

```yaml
- groups:
    - [[ KUBERNETES RBAC GROUP IDENTIFIER ]]
  rolearn: [[ THE AWS IAM ROLE OF YOUR BORDER0 CONNECTOR ]]
  username: border0-connector
```
Note
You can edit the ConfigMap with the following commands:
- Set your kubectl context to that of the AWS EKS cluster:
```shell
aws eks update-kubeconfig --region ${AWS_EKS_CLUSTER_REGION} --name ${AWS_EKS_CLUSTER_NAME}
```
- Edit the aws-auth ConfigMap:
```shell
kubectl edit configmap aws-auth --namespace kube-system
```
The resulting aws-auth ConfigMap should look something like this:

```shell
$ kubectl describe configmap/aws-auth -n kube-system
Name:         aws-auth
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
mapRoles:
----
- groups:
    - system:bootstrappers
    - system:nodes
  rolearn: arn:aws:iam::123456789012:role/my-eks-cluster-bootstrapper-role-arn
  username: system:node:{{EC2PrivateDNSName}}
- groups:
    - your-custom-group
  rolearn: arn:aws:iam::123456789012:role/border0-aws-connector
  username: border0-connector

BinaryData
====

Events:  <none>
```
(c) Create a new Kubernetes RBAC Group
This is the recommended approach, following the principle of least privilege.
First, we create a new ClusterRole and ClusterRoleBinding.
The ClusterRole defines a cross-namespace Kubernetes RBAC Role which allows the connector to perform its necessary capabilities.
The ClusterRoleBinding binds the role to a Kubernetes RBAC Group which we will use in the next step.
```yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: border0-connector
rules:
  # rule for connector to list namespaces, services, and pods
  - apiGroups: [""]
    resources: ["namespaces", "services", "pods"]
    verbs: ["list"]
  # rule for connector to get pods (incl. their containers)
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get"]
  # rule for connector to create exec streams into pods
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: border0-connector
roleRef:
  kind: ClusterRole
  name: border0-connector
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: Group
    name: border0-connector
    apiGroup: rbac.authorization.k8s.io
```
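Saved to a file, the two manifests above can be applied in one step, and the new group's permissions can be checked via impersonation (the file name and impersonated user are arbitrary examples):

```shell
# Apply the ClusterRole and ClusterRoleBinding defined above.
kubectl apply -f border0-connector-rbac.yaml
# Optionally verify the group's permissions via impersonation
# ("anyuser" is an arbitrary placeholder; --as-group requires --as).
kubectl auth can-i create pods/exec --as=anyuser --as-group=border0-connector
```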
Now that you have a new Group border0-connector, you may follow the instructions of (b) Use an existing Kubernetes RBAC Group, replacing [[ KUBERNETES RBAC GROUP IDENTIFIER ]] with the string border0-connector.
Creating a Border0 Socket for an Out-of-Cluster Connector (AWS EKS)
Heads-up!
Configuring Kubernetes Cluster Authorization is a prerequisite to creating sockets for AWS EKS clusters when the connector is running outside of the cluster.
To make containers on AWS EKS available, follow the steps below:
- In the Sockets page, click on "Add New Socket" and click the Kubernetes tile
- Set a name and, optionally, a description
- From the Upstream Connection Type dropdown select Kubectl Exec
- From the Kubectl Exec Target Type dropdown select AWS EKS
- From the AWS Authentication Strategy dropdown select your preferred option. If your connector is running in compute within AWS (e.g. an EC2 instance) and you would like to use that instance's AWS IAM role to connect to your AWS EKS cluster, keep the default value Default Credentials Chain.
- Alternatively, if your connector is running on compute outside of AWS, you can specify an AWS profile, or static AWS credentials (e.g. AWS IAM User credentials).
- Specify the EKS Cluster Name and EKS Cluster Region
- We require a region so that connectors outside of your EKS cluster's AWS region are able to connect to the right cluster.
- (Optionally) Add allowlisted namespaces and selectors for each namespace. Skipping this part will expose the entire cluster over the Border0 socket. Adding namespaces but no selector for a given namespace will expose all pods in that namespace. Adding multiple selectors for a given namespace will expose any pods that match any of the selectors (i.e. selector matches are logically OR-ed, not AND-ed).
- Lastly, select your target connector
- Click Create Socket at the bottom of the page
FAQ
Q: What's the fastest way to expose an entire Kubernetes cluster behind Border0?
A:
- Navigate to the "Connectors" tab in the Border0 portal
- Create a new connector and a new connector token
- Download our sample .yaml from here
- Replace the stub [[ YOUR TOKEN GOES HERE ]] in the file with your actual token string
- Run kubectl apply -f connector.yaml
- Create a new socket for the connector by choosing the "Kubernetes" tile, leave all configuration as-is, and click Create
After following these steps, your clients (employees / users) will be able to connect to any container in the cluster by navigating to https://client.border0.com/#/ssh/${SOCKET_DNS_NAME} (or choosing the socket's tile in the Border0 client portal).
This will work regardless of whether the Kubernetes cluster is on-premises, in a cloud provider, or locally on your machine.
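The token-substitution step from the FAQ can be sketched as follows; here we fabricate a minimal stand-in file, whereas in practice you would edit the downloaded connector.yaml, and the token value is a placeholder:

```shell
# Demo of step 4 of the FAQ: substitute the token stub in the manifest.
# A stand-in file is created here; normally you'd use the downloaded connector.yaml.
printf 'token: [[ YOUR TOKEN GOES HERE ]]\n' > connector.yaml
# "my-border0-token" is a placeholder for your actual connector token.
sed -i 's/\[\[ YOUR TOKEN GOES HERE \]\]/my-border0-token/' connector.yaml
cat connector.yaml
# → token: my-border0-token
# kubectl apply -f connector.yaml   # then apply (requires cluster access)
```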