Kubernetes, also known as K8s, is an open-source container orchestration system originally developed by Google.
It supposedly has a steeper learning curve than Docker Swarm and is heavily inspired by Google's internal Borg system.
This document contains notes on both administering a self-hosted Kubernetes cluster and deploying applications to one.
Getting Started
Background
Kubernetes runs applications across nodes which are physical or virtual machines.
Each node contains a kubelet process, a container runtime (typically containerd), and any running pods.
Pods contain resources needed to host your application including volumes and containers.
Typically you will want one container per pod since deployments scale by creating multiple pods.
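As a reference, a minimal Pod manifest looks roughly like the following (the name and nginx image are placeholders, not from any real deployment described here):
apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # placeholder name
spec:
  containers:
    - name: web              # one container per pod is the common pattern
      image: nginx:1.25      # placeholder image
      ports:
        - containerPort: 80  # port the container listens on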
Installation
For local development, you can install minikube.
Otherwise, install kubeadm.
# Disable swap
sudo swapoff -a && sudo sed -i '/swap/s/^/#/' /etc/fstab
# Add the line to join the cluster here
# kubeadm join <ip>:6443 --token <...> --discovery-token-ca-cert-hash <...>
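On the control-plane node itself, the rough bootstrap sequence is sketched below; the pod CIDR and the flannel manifest URL are assumptions and depend on which CNI you pick.
# Initialise the control plane (run once, on the control-plane node)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# Install a pod network add-on, e.g. flannel (URL may change between releases)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml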
How to increase pods per node
By default, Kubernetes allows 110 pods per node.
You may increase this up to a limit of 255 with the default networking subnet.
For reference, GCP GKE uses 110 pods per node and AWS EKS uses 250 pods per node.
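A sketch of raising the limit through the kubelet configuration; the file path assumes a kubeadm-managed kubelet and 250 is just an example value.
# Add or change maxPods in /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 250
# Then restart the kubelet to apply it
sudo systemctl restart kubelet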
kubectl
In general you will want to create a .yaml manifest and use apply, create, or delete to manage your resources.
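For example, with a hypothetical manifest file my-app.yaml:
# Create or update the resources described in the manifest
kubectl apply -f my-app.yaml
# Remove them again
kubectl delete -f my-app.yaml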
nodes
kubectl get nodes
# Drain evicts all pods from a node.
kubectl drain $NODE_NAME
# Uncordon to re-enable scheduling
kubectl uncordon $NODE_NAME
pods
# List all pods
kubectl get pods
kubectl describe pods
# List pods and node name
kubectl get pods -o=custom-columns='NAME:metadata.name,Node:spec.nodeName'
# Access a port on a pod
kubectl port-forward <pod> <localport>:<podport>
deployment
kubectl get deployments
kubectl logs $POD_NAME
kubectl exec -it $POD_NAME -- bash
# For one-off deployments of an image.
kubectl create deployment <name> --image=<image> [--replicas=1]
Services
Services handle networking.
For self-hosted/bare-metal deployments, there are four types of services:
ClusterIP - This creates an IP address on the internal cluster which nodes and pods on the cluster can access. (Default)
NodePort - This exposes the port on every node. It implicitly creates a ClusterIP and every node will route to that. This allows access from outside the cluster.
ExternalName - uses a CNAME record. Primarily for accessing other services from within the cluster.
LoadBalancer - Creates a ClusterIP and NodePort, then asks the load balancer to provision an external IP and route it to the NodePort.
On bare-metal deployments you will need to install a load-balancer implementation such as MetalLB.
By default, ClusterIP is provided by kube-proxy and performs round-robin load-balancing to pods.
For exposing non-http(s) production services, you typically will use a LoadBalancer service.
For https services, you will typically use an ingress.
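As a sketch, a LoadBalancer Service for a hypothetical non-http app (names and ports are placeholders) looks like this:
apiVersion: v1
kind: Service
metadata:
  name: my-db                # placeholder service name
spec:
  type: LoadBalancer         # MetalLB (or a cloud provider) assigns the external IP
  selector:
    app: my-db               # must match the pods' labels
  ports:
    - port: 5432             # port exposed on the external IP
      targetPort: 5432       # port the pods listen on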
Ingress
Ingress | Kubernetes
Ingress is equivalent to having a load-balancer / reverse-proxy pod with a NodePort service.
Note that global Nginx settings are set in the configmap.
Personally, I have:
configmap
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.0-beta.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0-beta.3
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  proxy-body-size: 1g
  use-forwarded-headers: "true"  # True because we're behind another reverse proxy
To set options per-Ingress, add annotations like the following to your Ingress definition:
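For example, a hypothetical Ingress that overrides the global body-size limit from the ConfigMap above (hostname and service name are placeholders):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 10g   # per-Ingress override
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com          # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app         # the backing Service
                port:
                  number: 80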
access mysql on localhost
To access services running outside of your Kubernetes cluster, including services running directly on a node, you need to add a Service without a selector and a matching Endpoints object.
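A sketch for MySQL running directly on a node; the IP is an assumption (127.0.0.1 is not allowed in Endpoints, so use the node's real IP):
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306            # no selector, so Kubernetes creates no Endpoints automatically
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mysql               # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.1.10    # the node's IP (assumption, replace with yours)
    ports:
      - port: 3306
Pods inside the cluster can then reach the database at mysql:3306.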
Helm
Helm is a method for deploying applications using premade Kubernetes manifest templates known as Helm charts.
Rather than writing your own manifests or copying them from elsewhere, you can use Helm charts, which generate and install the Kubernetes manifests for you.
Charts can also be composed into other charts (as subcharts) for applications which require multiple microservices.
Usage
To install an application, generally you do the following:
Create a YAML file, e.g. values.yaml, with the options you want.
If necessary, create any PVs, PVCs, and Ingresses which might be required, then install the chart (a sketch follows below).
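A sketch of the typical install flow; repo, chart, and release names are placeholders:
helm repo add <repo-name> <repo-url>
helm repo update
# Install the chart as a named release using your values file
helm install <release-name> <repo-name>/<chart-name> -f values.yaml
# Later, roll out changed values with an upgrade
helm upgrade <release-name> <repo-name>/<chart-name> -f values.yaml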