Kubernetes


Kubernetes, also known as K8s, is an open-source container orchestration system originally developed by Google.
It runs containers across a cluster of machines for you and handles networking and container failures.
This document contains notes on both administering a self-hosted Kubernetes cluster and deploying applications to one.

Getting Started

Background

Kubernetes runs applications across nodes which are (physical or virtual) Linux machines.
Each node contains a kubelet process, a container runtime (typically containerd), and any running pods.
Pods contain resources needed to host your application including volumes and containers.
Typically you will want one container per pod since deployments scale by creating multiple pods.
A deployment is a rule which spawns and manages pods.
A service is a networking rule which allows connecting to pods.

In addition to the standard Kubernetes objects, operators allow you to instantiate custom resources (CRs), which they watch for and manage.

Kubeadm Administration

Notes on administering kubernetes clusters.

Kubernetes has many parts and administration is very tedious, which is why K3s exists. I'd recommend against using kubeadm for a homelab.

Installation

For local development, you can install minikube.
Otherwise, install kubeadm.

kubeadm

Deploy a Kubernetes cluster using kubeadm

Install Commands
Control Plane Init
Setup Networking With Calico
Load Balancer (MetalLB)
Ingress Controller (ingress-nginx)
cert-manager
Add worker nodes

Certificates

Certificate Management with kubeadm
Kubernetes requires several TLS certificates, which are automatically generated by kubeadm. These expire after one year but are renewed automatically whenever you upgrade your cluster with kubeadm upgrade apply.

To renew the certificates manually, run kubeadm certs renew all and restart your control-plane services. Note that if you let the certificates expire, you will need to set up kubectl again.
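
A minimal sketch of the renewal flow (assuming kubeadm 1.20+, where the certs subcommand exists):

# Check expiration dates of all kubeadm-managed certificates.
kubeadm certs check-expiration
# Renew every certificate at once.
kubeadm certs renew all
# Restart the control-plane components so they pick up the new certificates,
# e.g. by briefly moving the static pod manifests out of /etc/kubernetes/manifests.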

Issues connecting with etcd

I ran into this when trying to run kubeadm upgrade:

context deadline exceeded remote error
tls: bad certificate

Kubeadm stores etcd certificates in /etc/kubernetes/pki/etcd/.

Follow this to generate new certificates: https://github.com/etcd-io/etcd/issues/9785#issuecomment-432438748 You will need to create temporary files ca-config.json and server.json to generate the new keys. In server.json, set the key algo to "rsa" and the size to 2048. In the same file, set the CN to 127.0.0.1 and the hosts to [127.0.0.1, your local IP].
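
As a sketch, server.json might look like the following cfssl CSR config, where 192.168.1.10 is a stand-in for your node's local IP:

{
  "CN": "127.0.0.1",
  "hosts": ["127.0.0.1", "192.168.1.10"],
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}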

cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs

This means the hosts field in server.json was not correct when you generated the new keys.

Pods per node

How to increase pods per node
By default, Kubernetes allows 110 pods per node.
You may increase this up to a limit of 255 with the default networking subnet.
For reference, GCP GKE uses 110 pods per node and AWS EKS uses 250 pods per node.
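
One way to raise the limit (a sketch, assuming a kubeadm-managed node where the kubelet config lives at /var/lib/kubelet/config.yaml) is to set maxPods in the kubelet configuration:

# /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 250

Then run systemctl restart kubelet on the node.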

Changing Master Address

See https://ystatit.medium.com/how-to-change-kubernetes-kube-apiserver-ip-address-402d6ddb8aa2

kubectl

In general you will want to create a .yaml manifest and use apply, create, or delete to manage your resources.
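
For example, assuming a manifest file named manifest.yaml:

# Create or update the resources defined in the manifest.
kubectl apply -f manifest.yaml
# Remove those resources.
kubectl delete -f manifest.yaml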

nodes

kubectl get nodes

# Drain evicts all pods from a node (you may need --ignore-daemonsets and --delete-emptydir-data).
kubectl drain $NODE_NAME
# Uncordon to re-enable scheduling.
kubectl uncordon $NODE_NAME

pods

# List all pods
kubectl get pods
kubectl describe pods

# List pods and node name
kubectl get pods -o custom-columns='NAME:.metadata.name,NODE:.spec.nodeName'

# Access a port on a pod
kubectl port-forward <pod> <local-port>:<pod-port>

deployment

kubectl get deployments
kubectl logs $POD_NAME
kubectl exec -it $POD_NAME -- bash

# For one-off deployments of an image.
kubectl create deployment <name> --image=<image> [--replicas=1]

proxy

# Start a local proxy to the Kubernetes API server (defaults to 127.0.0.1:8001).
kubectl proxy

service

Services handle routing to your pods.

kubectl get services

kubectl expose deployment/<name> --type=<type> --port <port>
kubectl describe services/<name>

run

https://gc-taylor.com/blog/2016/10/31/fire-up-an-interactive-bash-pod-within-a-kubernetes-cluster

# Spin up a temporary interactive Ubuntu container
kubectl run my-shell --rm -i --tty --image ubuntu -- bash
kubectl run busybox-shell --rm -i --tty --image odise/busybox-curl -- sh

Deployments

In most cases, you will use deployments to provision pods.
Deployments internally use replicasets to create multiple identical pods.
This is great for things such as webservers or standalone services which are not stateful. In most cases, you can stick a service in front which will round-robin requests to different pods in your deployment.

Example Deployment
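
A minimal sketch (nginx and all names here are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80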

StatefulSets

StatefulSets basics
Stateful sets are useful when you need a fixed number of pods with stable identities, such as databases.
Pods created by stateful sets have a unique numeric suffix which allows you to address a specific pod.
Typically, you will want to use a headless service (i.e., one with clusterIP: None) to give each pod its own DNS record.

In most cases, you will want to look for a helm chart instead of creating your own stateful sets.
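
A minimal sketch pairing a headless service with a stateful set (names and image are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None  # headless
  selector:
    app: db
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db-headless
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16
          ports:
            - containerPort: 5432

Each pod then gets a stable DNS name such as db-0.db-headless.default.svc.cluster.local.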

Services

Documentation

Services handle networking.
There are four types of services; all are available on self-hosted/bare-metal clusters, though LoadBalancer requires extra setup.

  • ClusterIP - This creates an IP address on the internal cluster which nodes and pods on the cluster can access. (Default)
  • NodePort - This exposes the port on every node. It implicitly creates a ClusterIP and every node will route to that. This allows access from outside the cluster.
  • ExternalName - uses a CNAME record. Primarily for accessing other services from within the cluster.
  • LoadBalancer - Creates a ClusterIP and a NodePort, and tells the load balancer to create an IP and route it to the NodePort.
    • On bare-metal clusters you will need to install a loadbalancer such as metallb.

By default, ClusterIP routing is provided by kube-proxy, which load-balances connections across the service's pods.
For exposing non-HTTP(S) production services, you will typically use a LoadBalancer service.
For HTTP(S) services, you will typically use an ingress.

Example ClusterIP Service
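
A minimal sketch (names and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
    - port: 80          # port exposed on the cluster IP
      targetPort: 8080  # port the pods listen on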

Ingress

Ingress | Kubernetes
An ingress defines an HTTP(S) endpoint and routing rules for it. Ingresses are implemented by an ingress controller, a load-balancer or reverse-proxy pod that integrates with Kubernetes.

A common ingress controller is ingress-nginx, which is maintained by the Kubernetes team. Alternatives include nginx-ingress, traefik, haproxy-ingress, and others.

Installing ingress-nginx

See ingress-nginx to deploy an ingress controller.
Note that ingress-nginx is managed by the Kubernetes team, while nginx-ingress is a different ingress controller by the Nginx team.

Personally, I have:

values.yaml
upgrade.sh

To set settings per-ingress, add annotations to your ingress definition:

example ingress
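
A sketch using ingress-nginx's proxy-body-size annotation as the per-ingress setting (the host and all names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80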

If your backend uses HTTPS, you will need to add the annotation nginx.ingress.kubernetes.io/backend-protocol: HTTPS. For self-signed SSL certificates, you will also need the annotation:

    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_ssl_name $host;
      proxy_ssl_server_name on;

Authentication

ingress-nginx external oauth
If you would like to authenticate using an OAuth2 provider (e.g. Google, GitHub), I suggest using oauth2-proxy.

  1. First, set up a deployment of oauth2-proxy, possibly without an upstream.
  2. Then you can simply add the following annotations to your ingresses to protect them:
    nginx.ingress.kubernetes.io/auth-url: "http://oauth2proxy.default.svc.cluster.local/oauth2/auth?allowed_emails=myemail@gmail.com"
    nginx.ingress.kubernetes.io/auth-signin: "https://oauth2proxy.davidl.me/oauth2/start?rd=$scheme://$host$request_uri"
    
Additional things to look into

If you use Cloudflare, you can also use Cloudflare access, though make sure you prevent other sources from accessing the service directly.

Autoscaling

Horizontal Autoscale Walkthrough
Horizontal Pod Autoscaler

You will need to install metrics-server.
For testing, you may need to allow insecure TLS (e.g., the --kubelet-insecure-tls flag on metrics-server).
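
Once metrics-server is running, a quick way to create an autoscaler (my-app is a placeholder deployment name):

# Scale between 1 and 10 replicas, targeting 50% average CPU utilization.
kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=10
# Check the autoscaler's status.
kubectl get hpa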

Accessing External Services

access mysql on localhost
To access services running outside of your Kubernetes cluster, including services running directly on a node, you need to add an Endpoints object and a Service without a selector.

Example
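
A sketch of the pattern: a Service without a selector plus an Endpoints object of the same name (external-db and the IP are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
    - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db  # must match the service name
subsets:
  - addresses:
      - ip: 192.168.1.50  # the external host
    ports:
      - port: 3306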

NetworkPolicy

Network policies are used to limit ingress or egress traffic to pods. Note that they are only enforced if your CNI plugin supports them (e.g., Calico).

Example network policy
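
A sketch which only allows pods labeled app: frontend to reach pods labeled app: db (all labels and ports are placeholders):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - port: 5432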

Security Context

security context
If you want to restrict pods to run as a particular UID/GID while still binding to any port, you can add the following:

    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        sysctls:
        - name: net.ipv4.ip_unprivileged_port_start
          value: "0"

Devices

Generic devices

See https://gitlab.com/arm-research/smarter/smarter-device-manager
and https://github.com/kubernetes/kubernetes/issues/7890#issuecomment-766088805

Intel GPU

See https://github.com/intel/intel-device-plugins-for-kubernetes/tree/main/cmd/gpu_plugin

After adding the GPU plugin, add the following resource limit to the container spec of your deployment.

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: my-app  # placeholder container name
          resources:
            limits:
              gpu.intel.com/i915: 1

Restarting your cluster

Scale to 0

reference
If you wish to restart all nodes of your cluster, you can scale your deployments and stateful sets down to 0 and then scale them back up afterwards.

# Annotate existing deployments and statefulsets with replica count.
kubectl get deploy -o jsonpath='{range .items[*]}{"kubectl annotate --overwrite deploy "}{@.metadata.name}{" previous-size="}{@.spec.replicas}{" \n"}{end}' | sh
kubectl get sts -o jsonpath='{range .items[*]}{"kubectl annotate --overwrite sts "}{@.metadata.name}{" previous-size="}{@.spec.replicas}{" \n"}{end}' | sh

# Scale to 0.
# shellcheck disable=SC2046
kubectl scale --replicas=0 $(kubectl get deploy -o name) 
# shellcheck disable=SC2046
kubectl scale --replicas=0 $(kubectl get sts -o name)

# Scale back up.
kubectl get deploy -o jsonpath='{range .items[*]}{"kubectl scale deploy "}{@.metadata.name}{" --replicas="}{.metadata.annotations.previous-size}{"\n"}{end}' | sh
kubectl get sts -o jsonpath='{range .items[*]}{"kubectl scale sts "}{@.metadata.name}{" --replicas="}{.metadata.annotations.previous-size}{"\n"}{end}' | sh

Helm

Helm is a tool for deploying applications using premade Kubernetes manifest templates known as helm charts.
Helm charts abstract away the manifests, allowing you to focus on only the important configuration values.
Charts can also be composed into other charts for applications which require multiple microservices.

https://artifacthub.io/ allows you to search for helm charts others have made.
bitnami/charts contains helm charts for many popular applications.

Usage

To install an application, generally you do the following:

  1. Create a yaml file, e.g. values.yaml with the options you want.
  2. If necessary, create any PVs, PVCs, and Ingress which might be required.
  3. Install the application using helm.
    helm upgrade --install $NAME $CHARTNAME -f values.yaml [--version $VERSION]
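
For example, installing PostgreSQL from the Bitnami repository (my-postgres is a placeholder release name):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm upgrade --install my-postgres bitnami/postgresql -f values.yaml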

Troubleshooting

Sometimes, Kubernetes will remove deprecated APIs, preventing Helm from managing existing releases that reference them.
The mapkubeapis helm plugin can help resolve some of these issues.

Variants

minikube

minikube is a tool to quickly set up a local Kubernetes dev environment on your PC.

kind

kind (Kubernetes in Docker) runs a local Kubernetes cluster inside Docker containers and is primarily used for testing and CI.

k3s

k3s is a lightweight Kubernetes distribution by Rancher Labs. It includes the Flannel CNI and the Traefik ingress controller.

KubeVirt

KubeVirt allows you to run virtual machines on your Kubernetes cluster.

Resources