Kubernetes (k8s) Plugin

Going deeper into the container realm, we continue with the same methodology used for the EC2 and Docker plugins, now applied to Kubernetes.

(Optional) Set up Kubernetes

If you don’t have a Kubernetes cluster available, you can use the following steps to create one using kind.

$ brew install kind

After installing kind, you can create a cluster:

$ kind create cluster
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.24.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂

Check that you can reach the new cluster with kubectl, using the kind-kind context:

$ kubectl cluster-info --context kind-kind
Kubernetes control plane is running at https://127.0.0.1:51141
CoreDNS is running at https://127.0.0.1:51141/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
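
kind also accepts a cluster configuration file. As an optional sketch, here is a minimal multi-node config (the file name kind-config.yaml is arbitrary):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker

$ kind create cluster --config kind-config.yaml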

Create the image

We used the following Dockerfile to create and publish the image; you can use it as-is or create your own:

FROM --platform=linux/amd64 alpine
# MAINTAINER is deprecated; use a label instead
LABEL maintainer="[email protected]"

COPY border0 .

ENTRYPOINT ["/border0", "connector", "start"]
$ docker build -t btoonk/border0-connector .
[+] Building 1.7s (8/8) FINISHED
=> [internal] load build definition from Dockerfile                                          0.0s
=> => transferring dockerfile: 37B                                                           0.0s
=> [internal] load .dockerignore                                                             0.0s
=> => transferring context: 2B                                                               0.0s
=> [internal] load metadata for docker.io/library/alpine:latest                              1.6s
=> [auth] library/alpine:pull token for registry-1.docker.io                                 0.0s
=> [internal] load build context                                                             0.0s
=> => transferring context: 35B                                                              0.0s
=> [1/2] FROM docker.io/library/[email protected]:7580ece7963bfa863801466c0a488f11c86f85d998805  0.0s
=> CACHED [2/2] COPY border0 .                                                           0.0s
=> exporting to image                                                                        0.0s
=> => exporting layers                                                                       0.0s
=> => writing image sha256:d7e7b9624b7dcf271247f7395a484f0807a34ae28f311d1d9cfc5ca76cd00953  0.0s
=> => naming to docker.io/btoonk/border0-connector                                        0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

Push the image:

$ docker push btoonk/border0-connector
Using default tag: latest
The push refers to repository [docker.io/btoonk/border0-connector]
20da18db4232: Layer already exists
ec34fcc1d526: Layer already exists
latest: digest: sha256:62a50d2a9b7211c7ed2c56be9ceac36216efc7a339dc6e1e7d3d523678782ae8 size: 740
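
If you publish the image under your own registry instead, tag and push it with your own repository name (the <your-registry>/<your-repo> placeholders below are illustrative), and update the image in the Deployment further down accordingly:

$ docker tag btoonk/border0-connector <your-registry>/<your-repo>/border0-connector:latest
$ docker push <your-registry>/<your-repo>/border0-connector:latest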

Deploy connector

This is the only step an end user needs to perform.

We need a couple of resources in the cluster to spin up a connector:

  • ServiceAccount: used to bind the required cluster role to the deployment
  • ClusterRole: gives the deployment permission to list the services in the cluster
  • ClusterRoleBinding: binds the cluster role to the service account
  • ConfigMap: stores the connector configuration file
  • Deployment: the actual connector deployment

Apply the following YAML.
NOTE: you need to specify the namespace in the ClusterRoleBinding!
NOTE: you need to specify the token you want to use for the connector!

We will probably move the token to an environment variable at some point, so that it can be stored as a Secret in Kubernetes.
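
As a sketch of that approach (hypothetical: today the connector reads the token from the config file, and the BORDER0_TOKEN variable name below is an assumption, not a documented setting), you would create a Secret:

$ kubectl -n connector-test create secret generic border0-token --from-literal=token=<TOKEN>

and reference it from the deployment's container spec:

env:
  - name: BORDER0_TOKEN  # hypothetical variable name, not yet supported by the connector
    valueFrom:
      secretKeyRef:
        name: border0-token
        key: token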

If the namespace where you want to deploy the connector does not yet exist, you can create it with the following command:

$ kubectl create ns <namespace name>
namespace/connector-test created

Create the following border0.yaml file:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: border0-connector
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: border0-connector
rules:
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: border0-connector
subjects:
  - kind: ServiceAccount
    name: border0-connector
    namespace: <namespace> # replace this with the right namespace !!!
roleRef:
  kind: ClusterRole
  name: border0-connector
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: connector-config
data:
  config.yaml: |
    connector:
      name: "connectortest"
    credentials:
      token: <TOKEN>  # replace this with the actual token
    k8_plugin:
      - group: testgroup
        policies: [my-k8s-policy]

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: border0-connector
  name: border0-connector
spec:
  selector:
    matchLabels:
      app: border0-connector
  template:
    metadata:
      labels:
        app: border0-connector
    spec:
      serviceAccountName: border0-connector
      containers:
        - image: docker.io/btoonk/border0-connector:latest
          name: border0-connector
          args: ["--config", "/etc/border0-connector/config.yaml"]
          volumeMounts:
            - name: config
              mountPath: /etc/border0-connector
      volumes:
        - name: config
          configMap:
            name: connector-config

We can deploy this with kubectl; in this example we deploy into the namespace connector-test. Make sure this namespace is specified in the ClusterRoleBinding in the YAML file, and that the token has been replaced with your actual token.

$ kubectl apply -f border0.yaml -n connector-test
serviceaccount/border0-connector created
clusterrole.rbac.authorization.k8s.io/border0-connector created
clusterrolebinding.rbac.authorization.k8s.io/border0-connector created
configmap/connector-config created
deployment.apps/border0-connector created
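
You can wait for the rollout to complete with:

$ kubectl -n connector-test rollout status deployment/border0-connector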

Now we should have a pod running the connector:

$ kubectl -n connector-test get pods
NAME                                 READY   STATUS    RESTARTS   AGE
hello-server-bdb64769d-qp226         1/1     Running   0          104m
border0-connector-655c6cdccd-82fzs   1/1     Running   0          104s

And you can view the logs with:

$ kubectl -n connector-test logs -f border0-connector-655c6cdccd-82fzs
2022/07/22 11:40:13 reading the config /root/.border0_connector_config
2022/07/22 11:40:13 loading the ssm with:
2022/07/22 11:40:13 starting the connector service
2022/07/22 11:40:13 using token defined in config file
2022/07/22 11:40:16 receiving sockets to update
2022/07/22 11:40:16 statics sockets found 0
2022/07/22 11:40:16 api sockets found 3
2022/07/22 11:40:16 number of sockets to connect:  0
2022/07/22 11:40:23 receiving sockets to update
2022/07/22 11:40:23 receiving sockets to update
2022/07/22 11:40:23 statics sockets found 0
2022/07/22 11:40:23 api sockets found 3
2022/07/22 11:40:23 number of sockets to connect:  0
2022/07/22 11:40:23 statics sockets found 0
2022/07/22 11:40:23 api sockets found 3
2022/07/22 11:40:23 number of sockets to connect:  0

Set up a test environment

If you don’t have any running services that you want to expose, you can use the following steps to deploy a sample service.

Create a new namespace for the test:

$ kubectl create ns testsvc
namespace/testsvc created

Create a hello-world web deployment:

$ kubectl create deployment hello-server --image=us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0  -n testsvc
deployment.apps/hello-server created

Create a sample service:

$ kubectl create service clusterip -n testsvc hello-server --tcp 8080:8080
service/hello-server created
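
For reference, this command is roughly equivalent to applying a manifest along these lines (a sketch; kubectl create deployment labels the pods app=hello-server, which the selector below assumes):

apiVersion: v1
kind: Service
metadata:
  name: hello-server
  namespace: testsvc
spec:
  type: ClusterIP
  selector:
    app: hello-server
  ports:
    - port: 8080
      targetPort: 8080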

We should now have a new pod:

$ kubectl -n testsvc get pods
NAME                           READY   STATUS    RESTARTS   AGE
hello-server-bdb64769d-kdkpc   1/1     Running   0          2m33s

Test the new service:

$ kubectl -n testsvc exec -it hello-server-bdb64769d-kdkpc -- wget -O- hello-server:8080
Connecting to hello-server:8080 (10.96.38.5:8080)
writing to stdout
Hello, world!
Version: 2.0.0
Hostname: hello-server-bdb64769d-kdkpc
-                    100% |*********************************************************************************************************************************************************|    68  0:00:00 ETA
written to stdout
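
Alternatively, you can test the service from your workstation with a standard port-forward (run the curl in a second terminal):

$ kubectl -n testsvc port-forward service/hello-server 8080:8080
$ curl http://localhost:8080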

Expose a service as a socket

After the connector is deployed, you can modify any service to expose it as a border0.io socket.

If any network policies are in place, you need to make sure the connector pod is allowed to reach the services you want to expose.
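
As an illustration, here is a minimal sketch of such a policy for the hello-server test service from the previous section, assuming the connector runs in the connector-test namespace (names and labels are illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-border0-connector
  namespace: testsvc
spec:
  podSelector:
    matchLabels:
      app: hello-server
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: connector-test
      ports:
        - protocol: TCP
          port: 8080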

We use the annotation border0.io/group to specify the group value we set in the config file (the ConfigMap); in this case that is "testgroup".

$ kubectl annotate service -n testsvc hello-server border0.io/group=testgroup
service/hello-server annotated

In the connector logs you can see that it creates a new socket:

2022/07/22 12:07:13 statics sockets found 1
2022/07/22 12:07:13 api sockets found 3
2022/07/22 12:07:13 creating a socket: tls-hello-server-connectortest
Connecting to Server: tunnel.border0.com

Welcome to border0.io
tls-hello-server-connectortest - tls://tls-hello-server-connectortest-bas.border0.io

=======================================================
Logs
=======================================================

By default it will create a TLS socket, but with the annotation border0.io/socketType you can specify the socket type, for example border0.io/socketType: http.

This will create a new socket that will be available at https://http-hello-server-connectortest-.border0.io

(You can also use kubectl -n testsvc edit service hello-server to modify the annotations.)
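
To set the annotation with kubectl (add --overwrite if the annotation is already set):

$ kubectl annotate service -n testsvc hello-server border0.io/socketType=http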

Other supported annotations:

  • border0.io/privateSocket: enables (and can only enable) the private socket feature. You can also enable this feature for all sockets by adding private_socket: True to the configuration file in the ConfigMap.
  • border0.io/allowedEmailAddresses: overrides the default setting
  • border0.io/allowedEmailDomains: overrides the default setting (see the example below)
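
For example, to restrict access to a single email domain (example.com is a placeholder):

$ kubectl annotate service -n testsvc hello-server border0.io/allowedEmailDomains=example.com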