Do you want a simple, single-server Kubernetes cluster that is easy to install?

Here is the guide for you.

Installing Kubernetes

First, we install k3s Kubernetes using the following command:

curl -sfL https://get.k3s.io | sh -s - --cluster-init --node-external-ip <external-ip>

Note: Replace <external-ip> with your server's external IP.

This is needed in order to get the Traefik ingress controller working.
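Before continuing, it is worth checking that the k3s service came up correctly; for example:

sudo systemctl status k3s
sudo k3s kubectl get nodes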

Next, we copy the k3s Kubernetes configuration file /etc/rancher/k3s/k3s.yaml to the default location ~/.kube/config and set the KUBECONFIG environment variable:

rm -f ~/.kube/config

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown -R user:user ~/.kube

echo 'export KUBECONFIG=~/.kube/config' >> ~/.bashrc

Note: Replace user:user with your current user and group.
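To make sure the kubeconfig is picked up, we can reload the shell configuration and query the cluster, for example:

source ~/.bashrc
kubectl cluster-info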

Also, here are a few aliases that help when working with k8s:

echo 'alias k=kubectl' >> ~/.bashrc
echo 'alias k9s=/snap/bin/k9s' >> ~/.bashrc
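Optionally, we can also enable bash completion for kubectl and hook it up to the k alias; this assumes the bash-completion package is installed:

echo 'source <(kubectl completion bash)' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc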

Also, remember to install (if needed) k9s, a TUI Kubernetes manager:

sudo snap install k9s

If snaps are not your thing, you can install it in other ways.
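For example, a rough sketch that downloads a release binary from GitHub; the asset name k9s_Linux_amd64.tar.gz is an assumption, so check the releases page for the file matching your platform:

curl -sL https://github.com/derailed/k9s/releases/latest/download/k9s_Linux_amd64.tar.gz | tar xzf - k9s
sudo install k9s /usr/local/bin/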

If everything is OK, we should be able to list the cluster nodes, for example:

k get nodes

With the response:

NAME   STATUS   ROLES                       AGE   VERSION
k8s    Ready    control-plane,etcd,master   1d   v1.30.6+k3s1
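We can also check that the bundled system components (CoreDNS, Traefik, metrics-server, and the local-path provisioner) are running:

k get pods -n kube-system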

Configuring volumes and storage

Although k3s comes with a default storage provisioner (local-path), it is recommended to install Longhorn using the following command:

kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml

Note: Replace v1.6.0 with the latest Longhorn version.
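Longhorn installs its components into the longhorn-system namespace and can take a few minutes to become ready; we can watch the pods with:

kubectl get pods -n longhorn-system --watch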

If needed, you can make it the default storage class using the following command:

kubectl patch storageclass longhorn -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
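Note that a default k3s installation already marks local-path as the default storage class, so to avoid ending up with two defaults you may also want to unset it:

kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'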

If everything is OK, we should be able to list the storage classes:

k get storageclass

With the response:

NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path             rancher.io/local-path   Delete          WaitForFirstConsumer   false                  1d
longhorn (default)     driver.longhorn.io      Delete          Immediate              true                   1d

We can test it using a PVC (persistent volume claim) and a pod.

First we create the PVC, using a pvc.yaml file:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-volv-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi

Next we mount the PVC into a pod, for example an nginx pod, defined in pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx:stable-alpine
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: volv
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: longhorn-volv-pvc

Now we apply the PVC and the pod using:

k apply -f pvc.yaml
k apply -f pod.yaml

And if everything is OK, we should be able to see the PVC:

k get pvc

With the response:

NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
longhorn-volv-pvc   Bound    pvc-c56cd06a-3b19-45ba-973e-2f88da543493   2Gi        RWO            longhorn       <unset>                 105s

And the pod:

k get pods

With the response:

NAME          READY   STATUS    RESTARTS   AGE
volume-test   1/1     Running   0          93s

And of course the volume:

k get pv

With the response:

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-c56cd06a-3b19-45ba-973e-2f88da543493   2Gi        RWO            Delete           Bound    default/longhorn-volv-pvc   longhorn       <unset>                          5m20s
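Before cleaning up, we can also verify that the volume actually stores data, for example by writing a file into the mounted path and reading it back:

k exec volume-test -- sh -c 'echo hello > /data/test.txt'
k exec volume-test -- cat /data/test.txt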

Lastly, we delete the test pod and PVC (the PV is removed automatically, since the reclaim policy is Delete).

k delete -f pod.yaml
k delete -f pvc.yaml

Testing Kubernetes

Now, let's deploy nginx: a Deployment with 1 replica, exposed through a Service, and an Ingress that exposes port 80 and maps it to the Service.

cat <<EOF > nginx-deployment-ingress.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
    # host: nginx.local
EOF

Next we apply the configuration:

k apply -f nginx-deployment-ingress.yaml
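We can also confirm that the deployment rolled out and that the ingress was created:

k rollout status deployment/nginx-deployment
k get ingress nginx-ingress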

If everything is OK, we should be able to access nginx on the external IP from above, on port 80, and see the default "Welcome to nginx!" page.

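For example, a quick check from any machine that can reach the server (replace <external-ip> as before, and make sure port 80 is not blocked by a firewall):

curl -I http://<external-ip>/

This should return an HTTP 200 response from nginx.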