Kubernetes
Kubernetes, also known as K8s, is an open-source container orchestration system originally developed by Google.
It has a reputation for a steeper learning curve than Docker Swarm and is heavily inspired by Google's internal Borg system.
This document contains notes on both administrating a self-hosted Kubernetes cluster and deploying applications to one.
Getting Started
Background
Kubernetes runs applications across nodes which are physical or virtual machines.
Each node contains a kubelet process, a container runtime, and possibly one or more pods.
Pods contain resources needed to host your application including volumes and containers.
Typically you will want one container per pod since deployments scale by creating multiple pods.
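As a concrete illustration, a single-container pod can be described with a manifest like the following (a minimal sketch; the pod name and image are arbitrary placeholders):
# pod.yaml - a hypothetical single-container pod
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    - name: hello
      image: nginx:1.21
      ports:
        - containerPort: 80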
Installation
For local development, you can install minikube.
Otherwise, install kubeadm.
kubeadm
kubeadm install
# Setup docker repos and install containerd.io
sudo apt update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update && sudo apt install containerd.io

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Configure containerd
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Setup required sysctl params, these persist across reboots.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd

# Systemd cgroup
sudo vim /etc/containerd/config.toml
# Under this line, add the line below.
# [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#   SystemdCgroup = true
sudo systemctl restart containerd
Run the following on the control-plane node.
# Disable swap
sudo swapoff -a
# Comment out any swap in /etc/fstab

sudo kubeadm init \
  --cri-socket=/run/containerd/containerd.sock \
  --pod-network-cidr=10.0.0.0/16

# Setup calico networking
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# (Optional) Remove the taint on the control-plane node to allow job scheduling
kubectl taint nodes --all node-role.kubernetes.io/master-
Run the following on worker nodes.
# Disable swap
sudo swapoff -a
# Comment out any swap in /etc/fstab

# Add the line to join the cluster here
# kubeadm join <ip>:6443 --token <...> --discovery-token-ca-cert-hash <...>
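If the join command printed by kubeadm init is no longer available, a new one can be generated on the control-plane node:
# Run on the control-plane node to print a fresh "kubeadm join ..." command for workers.
sudo kubeadm token create --print-join-command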
Notes
Pods per node
How to increase pods per node
By default, Kubernetes allows 110 pods per node.
You may increase this up to a limit of 255 with the default networking subnet.
For reference, GCP GKE uses 110 pods per node and AWS EKS uses 250 pods per node.
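As a sketch, assuming a kubeadm-managed node where the kubelet reads its configuration from /var/lib/kubelet/config.yaml, the limit can be raised with the maxPods field of KubeletConfiguration:
# /var/lib/kubelet/config.yaml (path assumed; adjust for your install)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 250
# Then restart the kubelet: sudo systemctl restart kubelet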
kubectl
In general you will want to create a .yaml manifest and use apply, create, or delete to manage your resources.
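For example (the manifest file name is hypothetical):
kubectl apply -f my-app.yaml    # create or update the resources in the manifest
kubectl create -f my-app.yaml   # create only; fails if the resources already exist
kubectl delete -f my-app.yaml   # remove the resources described in the manifest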
nodes
kubectl get nodes
# Drain evicts all pods from a node.
kubectl drain $NODE_NAME
# Uncordon to reenable scheduling
kubectl uncordon $NODE_NAME
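In practice, drain often needs extra flags, for example (behavior depends on what is running on the node):
# DaemonSet pods cannot be evicted and emptyDir data is lost on eviction,
# so drain usually requires acknowledging both.
kubectl drain $NODE_NAME --ignore-daemonsets --delete-emptydir-data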
pods
kubectl get pods
kubectl describe pods
# Access a port on a pod
kubectl port-forward <pod> <localport>:<podport>
deployment
kubectl get deployments
kubectl logs $POD_NAME
kubectl exec -it $POD_NAME -- bash
# For one-off deployments of an image.
kubectl create deployment <name> --image=<image> [--replicas=1]
proxy
kubectl proxy
service
Services handle routing to your pods.
kubectl get services
kubectl expose deployment/<name> --type=<type> --port <port>
kubectl describe services/<name>
run
https://gc-taylor.com/blog/2016/10/31/fire-up-an-interactive-bash-pod-within-a-kubernetes-cluster
# Throw up an Ubuntu container
kubectl run my-shell --rm -i --tty --image ubuntu -- bash
# Or a busybox container with curl
kubectl run busybox-shell --rm -i --tty --image odise/busybox-curl -- sh
Services
Services handle networking.
For self-hosted/bare metal deployments, there are three types of services.
- ClusterIP - This creates an IP address on the internal cluster which nodes and pods on the cluster can access. (Default)
- NodePort - This exposes the port on every node. It implicitly creates a ClusterIP and every node will route to that. This allows access from outside the cluster.
- ExternalName - uses a CNAME record. Primarily for accessing other services from within the cluster.
On managed deployments (e.g. AWS EKS, GKE) you also have
- LoadBalancer - fires up the provider's load balancer
By default, ClusterIP is provided by kube-proxy and performs round-robin load-balancing to pods.
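For example, a NodePort service might look like the following (a sketch; the selector label, ports, and nodePort are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80          # ClusterIP port inside the cluster
      targetPort: 8080  # container port on the pods
      nodePort: 30080   # exposed on every node (30000-32767 by default)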
Ingress
Ingress | Kubernetes
Ingress is equivalent to having a load-balancer / reverse-proxy pod with a NodePort service.
Installing an Ingress Controller
See ingress-nginx to deploy an ingress controller.
Note that global Nginx settings are set in the configmap.
Personally, I have:
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.0-beta.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0-beta.3
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  proxy-body-size: 1g
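With the controller installed, an Ingress resource routes hostnames and paths to services. A minimal sketch, assuming a service named my-app-service listening on port 80:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80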
Autoscaling
Horizontal Autoscale Walkthrough
Horizontal Pod Autoscaler
You will need to install metrics-server.
For testing, you may need to allow insecure TLS (e.g. by passing --kubelet-insecure-tls to metrics-server).
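Once metrics-server is reporting, a deployment can be autoscaled from the CLI (a sketch; the deployment name and thresholds are placeholders):
# Scale my-app between 1 and 10 replicas, targeting 50% average CPU utilization.
kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=10
# Inspect the resulting HorizontalPodAutoscaler.
kubectl get hpa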
Accessing External Services
access mysql on localhost
To access services running outside of your Kubernetes cluster, including services running directly on a node, you need to add an Endpoints object and a Service.
apiVersion: v1
kind: Service
metadata:
  name: t440s-wireguard-service
spec:
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 52395
      targetPort: 52395
---
apiVersion: v1
kind: Endpoints
metadata:
  name: t440s-wireguard-service
subsets:
  - addresses:
      - ip: 192.168.1.40
    ports:
      - port: 52395
Devices
Generic devices
See https://gitlab.com/arm-research/smarter/smarter-device-manager
and https://github.com/kubernetes/kubernetes/issues/7890#issuecomment-766088805
Intel GPU
See https://github.com/intel/intel-device-plugins-for-kubernetes/tree/main/cmd/gpu_plugin
After adding the gpu plugin, add the following to your deployment.
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - resources:
            limits:
              gpu.intel.com/i915: 1
Variants
minikube
minikube is a tool to quickly set up a local Kubernetes cluster on your PC.
kind
kind (Kubernetes in Docker) runs a local cluster using Docker containers as nodes.
k3s
k3s is a lightweight Kubernetes distribution by Rancher Labs.