Kubernetes (k8s) Plugin

Going deeper into the container realm, we continue the same methodology used for EC2 and Docker, now applied to Kubernetes.

(Optional) Set up Kubernetes

If you don't have a Kubernetes cluster available, you can use the following steps to create one with kind.

 brew install kind
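
If you don't use Homebrew, kind can also be installed with Go (replace the version tag with the latest kind release):

 go install sigs.k8s.io/kind@v0.20.0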

After installing kind, you can create a cluster with kind create cluster:

$ kind create cluster
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.24.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂

kind sets the kubectl context for you; verify you can reach the new cluster with kubectl:

$ kubectl cluster-info --context kind-kind
Kubernetes control plane is running at https://127.0.0.1:51141
CoreDNS is running at https://127.0.0.1:51141/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
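
You can also confirm the node that kind created is up:

$ kubectl get nodes

The single kind-control-plane node should report a Ready status.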

Deploy connector

Now we're ready to deploy the Border0 connector into our cluster.

We need a few resources in the cluster to spin up a connector:

  • ServiceAccount, used to bind the needed cluster role to the deployment
  • ClusterRole, gives the deployment permission to list the services in the cluster
  • ClusterRoleBinding, binds the cluster role to the service account
  • ConfigMap, stores the connector configuration file
  • Deployment, the actual connector deployment

Apply the following YAML:
NOTE: you need to specify the namespace in the ClusterRoleBinding!
NOTE: you need to specify the token you want to use for the connector!

Ideally the token would be provided via an environment variable, so it can be stored as a Kubernetes Secret instead of in the ConfigMap.
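
As a sketch of that approach (assuming the connector reads the token from an environment variable such as BORDER0_TOKEN; check the connector documentation for the exact variable name), you could store the token in a Secret and expose it to the connector container as an environment variable instead of hardcoding it in the ConfigMap:

$ kubectl -n <namespace> create secret generic border0-token --from-literal=token=<TOKEN>

And in the connector container spec:

          env:
            - name: BORDER0_TOKEN   # assumed variable name, verify against the connector docs
              valueFrom:
                secretKeyRef:
                  name: border0-token
                  key: token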

If the namespace where you want to deploy the connector does not exist yet, you can create it with the following command (here we use connector-test):

$ kubectl create ns connector-test
namespace/connector-test created

Create the following border0.yaml file:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: border0-connector
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: border0-connector
rules:
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: border0-connector
subjects:
  - kind: ServiceAccount
    name: border0-connector
    namespace: <namespace> # replace this with the right namespace !!!
roleRef:
  kind: ClusterRole
  name: border0-connector
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: connector-config
data:
  config.yaml: |
    connector:
      name: "connectortest"
    credentials:
      token: <TOKEN>  # replace this with the actual token
    k8_plugin:
      - group: testgroup
        policies: [my-k8s-policy]

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: border0-connector
  name: border0-connector
spec:
  selector:
    matchLabels:
      app: border0-connector
  template:
    metadata:
      labels:
        app: border0-connector
    spec:
      serviceAccountName: border0-connector
      containers:
        - image: ghcr.io/borderzero/border0:latest
          name: border0-connector
          args: ["connector", "start", "--config", "/etc/border0-connector/config.yaml"]
          env: # Optional, for logging
            - name: BORDER0_LOG_LEVEL
              value: "info"
          volumeMounts:
            - name: config
              mountPath: /etc/border0-connector
      volumes:
        - name: config
          configMap:
            name: connector-config
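
Optionally, you can validate the manifest client-side before applying it:

$ kubectl apply --dry-run=client -f border0.yaml -n connector-test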

We can deploy this with kubectl. In this example we deploy into the connector-test namespace; make sure this namespace is specified in the ClusterRoleBinding in the YAML file and that the token placeholder is replaced with the actual token.

$ kubectl apply -f border0.yaml -n connector-test
serviceaccount/border0-connector created
clusterrole.rbac.authorization.k8s.io/border0-connector created
clusterrolebinding.rbac.authorization.k8s.io/border0-connector created
configmap/connector-config created
deployment.apps/border0-connector created
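
You can wait for the rollout to complete with:

$ kubectl -n connector-test rollout status deployment/border0-connector
deployment "border0-connector" successfully rolled out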

Now we should have a pod running the connector:

$ kubectl -n connector-test get pods
NAME                                    READY   STATUS    RESTARTS   AGE
border0-connector-655c6cdccd-82fzs   1/1     Running   0          104s

And you can view the logs with:

$ kubectl -n connector-test logs -f border0-connector-655c6cdccd-82fzs
2022/07/22 11:40:13 reading the config /root/.border0_connector_config
2022/07/22 11:40:13 loading the ssm with:
2022/07/22 11:40:13 starting the connector service
2022/07/22 11:40:13 using token defined in config file
2022/07/22 11:40:16 receiving sockets to update
2022/07/22 11:40:16 statics sockets found 0
2022/07/22 11:40:16 api sockets found 3
2022/07/22 11:40:16 number of sockets to connect:  0
2022/07/22 11:40:23 receiving sockets to update
2022/07/22 11:40:23 receiving sockets to update
2022/07/22 11:40:23 statics sockets found 0
2022/07/22 11:40:23 api sockets found 3
2022/07/22 11:40:23 number of sockets to connect:  0
2022/07/22 11:40:23 statics sockets found 0
2022/07/22 11:40:23 api sockets found 3
2022/07/22 11:40:23 number of sockets to connect:  0

Set up a test environment: create a service and make it available through Border0

If you don't have any services running that you want to expose, you can use the following steps to deploy a sample service.

Create a new namespace for our test

$ kubectl create ns testsvc
namespace/testsvc created

Deploy a hello-world web deployment:

$ kubectl create deployment hello-server --image=us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0  -n testsvc
deployment.apps/hello-server created

Create a sample service:

$ kubectl create service clusterip -n testsvc hello-server --tcp 8080:8080
service/hello-server created
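
You can confirm the service and its ClusterIP with:

$ kubectl -n testsvc get service hello-server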

We should now have a new pod:

$ kubectl -n testsvc get pods
NAME                           READY   STATUS    RESTARTS   AGE
hello-server-bdb64769d-kdkpc   1/1     Running   0          2m33s

Test the new service:

$ kubectl -n testsvc exec -it hello-server-bdb64769d-kdkpc -- wget -O- hello-server:8080
Connecting to hello-server:8080 (10.96.38.5:8080)
writing to stdout
Hello, world!
Version: 2.0.0
Hostname: hello-server-bdb64769d-kdkpc
-                    100% |*********************************************************************************************************************************************************|    68  0:00:00 ETA
written to stdout

Expose a service as a socket

After the connector is deployed, you can annotate any service to expose it as a Border0 socket.

If any network policies are in place, you need to make sure the connector pod is allowed to access the services you want to expose.
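
As an illustration (assuming the connector runs in the connector-test namespace and your cluster sets the standard kubernetes.io/metadata.name namespace label), a NetworkPolicy along these lines would allow the connector pods to reach the hello-server pods; adjust namespaces, labels, and ports to match your environment:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-border0-connector
  namespace: testsvc
spec:
  podSelector:
    matchLabels:
      app: hello-server            # the pods we want to expose
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: connector-test
          podSelector:
            matchLabels:
              app: border0-connector    # the connector pods
      ports:
        - protocol: TCP
          port: 8080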

We use the border0.com/group annotation to specify the group value we set in the config file (ConfigMap), in this case "testgroup". We also use the border0.com/socketType annotation to tell the Border0 connector what type of socket to create, in this case HTTP.

kubectl annotate service -n testsvc hello-server border0.com/group=testgroup border0.com/socketType=http

Optionally, for HTTP sockets, you can set the border0.com/upstreamType annotation to configure the upstream type as either http or https, for example border0.com/upstreamType=https.

If your HTTP origin expects a specific hostname or SNI header, use border0.com/upstreamHttpHostname=www.mywebserver.com.
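
For example, to point the socket at an HTTPS upstream that expects the hostname above:

$ kubectl annotate service -n testsvc hello-server border0.com/upstreamType=https border0.com/upstreamHttpHostname=www.mywebserver.com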

You can validate the annotations with kubectl -n testsvc describe service hello-server:

$ kubectl -n testsvc describe service hello-server

Name:              hello-server
Namespace:         testsvc
Labels:            app=hello-server
Annotations:       border0.com/group: testgroup
Selector:          app=hello-server
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.80.39
IPs:               10.96.80.39
Port:              8080-8080  8080/TCP
TargetPort:        8080/TCP
Endpoints:         10.244.0.12:8080
Session Affinity:  None
Events:            <none>

In the logs from the connector you can see it will create a new socket:

2022/07/22 12:07:13 statics sockets found 1
2022/07/22 12:07:13 api sockets found 3
2022/07/22 12:07:13 creating a socket: tls-hello-server-connectortest
Connecting to Server: tunnel.border0.com

Welcome to border0.io
tls-hello-server-connectortest - tls://tls-hello-server-connectortest-bas.border0.io

=======================================================
Logs
=======================================================

By default it will create a TLS socket, but with the border0.com/socketType annotation you can specify the socket type, in this case border0.com/socketType: http.

This will create a new socket that will be available at https://http-hello-server-connectortest-.border0.io

(You can also use kubectl -n testsvc edit service hello-server to modify the annotations.)
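
If you manage the service declaratively, the same annotations can also be set in the service manifest; a minimal sketch for the hello-server service used above:

apiVersion: v1
kind: Service
metadata:
  name: hello-server
  namespace: testsvc
  annotations:
    border0.com/group: testgroup
    border0.com/socketType: http
spec:
  selector:
    app: hello-server
  ports:
    - name: 8080-8080
      port: 8080
      targetPort: 8080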