Kubernetes, also known as K8s, is an open-source container orchestration system originally developed by Google.<br>
This means it runs containers across a cluster of machines for you and handles networking and container failures.<br>
This document contains notes on both administering a self-hosted Kubernetes cluster and deploying applications to one.
==Getting Started==
===Background===
Kubernetes runs applications across '''nodes''', which are (physical or virtual) Linux machines.<br>
Each node runs a kubelet process, a container runtime (typically containerd), and any running pods.<br>
'''Pods''' contain the resources needed to host your application, including volumes and containers.<br>
Typically you will want one container per pod, since deployments scale by creating multiple pods.<br>
A '''deployment''' is a rule which spawns and manages pods.<br>
A '''service''' is a networking rule which allows connecting to pods.
In addition to standard Kubernetes objects, '''operators''' watch for and allow you to instantiate custom resources (CRs).
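To make these terms concrete, here is a minimal sketch of a deployment and a service (the names, labels, and image are hypothetical) which runs three nginx pods and exposes them inside the cluster:
<syntaxhighlight lang="yaml">
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3                  # the deployment keeps 3 identical pods running
  selector:
    matchLabels:
      app: hello-web
  template:                    # pod template used to create each pod
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  selector:
    app: hello-web             # routes to any pod with this label
  ports:
    - port: 80
      targetPort: 80
</syntaxhighlight>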
==Kubeadm Administration==
Notes on administering Kubernetes clusters.
Kubernetes has many parts and administration is very tedious, which is why K3s exists. I'd recommend against using kubeadm for a homelab.
===Installation===
For local development, you can install [https://minikube.sigs.k8s.io/docs/start/ minikube].<br>
Otherwise, install <code>kubeadm</code>.
====kubeadm====
Deploy a Kubernetes cluster using kubeadm.
{{hidden | Install Commands |
[https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/ Install Kubeadm]
<syntaxhighlight lang="bash">
KUBE_VERSION=1.23.1-00
# Setup docker repos and install containerd.io
sudo apt update
sudo apt-get install \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg \
  lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update && sudo apt install containerd.io
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet=$KUBE_VERSION kubeadm=$KUBE_VERSION kubectl=$KUBE_VERSION
sudo apt-mark hold kubelet kubeadm kubectl
</syntaxhighlight>
;Install Containerd
<syntaxhighlight lang="bash">
sudo apt-get remove docker docker-engine docker.io containerd runc
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install containerd.io
</syntaxhighlight>
;Setup containerd
;[https://kubernetes.io/docs/setup/production-environment/container-runtimes/ Container runtimes]
<syntaxhighlight lang="bash">
# Configure containerd
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Setup required sysctl params, these persist across reboots.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
# See https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd
sudo sed -i '/\[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options\]/a \ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml
sudo systemctl restart containerd
</syntaxhighlight>
}}
{{hidden | Control Plane Init |
<syntaxhighlight lang="bash">
# Disable swap
sudo swapoff -a && sudo sed -i '/swap/s/^/#/' /etc/fstab
sudo kubeadm init \
  --cri-socket=/run/containerd/containerd.sock \
  --pod-network-cidr=10.0.0.0/16
# (Optional) Remove the taint on control-plane nodes to allow scheduling regular workloads on them
kubectl taint nodes --all node-role.kubernetes.io/master-
</syntaxhighlight>
}}
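After <code>kubeadm init</code> finishes, it prints the join command for worker nodes and instructions for configuring <code>kubectl</code>. The standard kubectl setup for a non-root user looks like this:
<syntaxhighlight lang="bash">
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</syntaxhighlight>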
{{hidden | Setup Networking With Calico |
After creating your control plane, you need to deploy a network plugin.<br>
Popular choices are Calico and Flannel.<br>
See [https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart Quickstart]
<syntaxhighlight lang="bash">
# Setup calico networking
kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
kubectl create -f -<<EOF
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.0.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
    nodeAddressAutodetectionV4:
      canReach: "192.168.1.1"
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
EOF
</syntaxhighlight>
;Notes
* [https://stackoverflow.com/questions/57504063/calico-kubernetes-pods-cant-ping-each-other-use-cluster-ip https://stackoverflow.com/questions/57504063/calico-kubernetes-pods-cant-ping-each-other-use-cluster-ip]
}}
{{hidden | Load Balancer (MetalLB) |
See https://metallb.universe.tf/installation/.<br>
<syntaxhighlight lang="bash">
helm repo add metallb https://metallb.github.io/metallb
helm upgrade --install --create-namespace -n metallb metallb metallb/metallb
cat <<EOF >ipaddresspool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb
spec:
  addresses:
  - 192.168.1.2-192.168.1.11
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb
EOF
kubectl apply -f ipaddresspool.yaml
</syntaxhighlight>
}}
{{hidden | Ingress Controller (ingress-nginx) |
The ingress controller forwards HTTP(S) requests to the appropriate service according to your Ingress resources.<br>
See https://kubernetes.github.io/ingress-nginx/.
}}
{{hidden | cert-manager |
See https://cert-manager.io/docs/installation/helm/.
You may also want to set up DNS-01 challenges to support wildcard certificates.<br>
See https://cert-manager.io/docs/configuration/acme/dns01/cloudflare/ if you are using Cloudflare.
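A minimal install sketch using the upstream Helm chart (release name and namespace are up to you; depending on the chart version, the CRD flag may be <code>crds.enabled=true</code> instead of <code>installCRDs=true</code>):
<syntaxhighlight lang="bash">
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true
</syntaxhighlight>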
}}
{{hidden | Add worker nodes |
Run the following on worker nodes.
<syntaxhighlight lang="bash">
# Disable swap
sudo swapoff -a && sudo sed -i '/swap/s/^/#/' /etc/fstab
# Add the line to join the cluster here
# kubeadm join <ip>:6443 --token <...> --discovery-token-ca-cert-hash <...>
</syntaxhighlight>
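If you no longer have the join command, you can print a fresh one (including a new token) from a control-plane node:
<syntaxhighlight lang="bash">
kubeadm token create --print-join-command
</syntaxhighlight>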
}}
===Certificates===
[https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/ Certificate Management with kubeadm]
Kubernetes requires several TLS certificates which are automatically generated by kubeadm.
These expire after one year but are automatically renewed whenever you upgrade your cluster with <code>kubeadm upgrade apply</code>.
To renew the certificates manually, run <code>kubeadm certs renew all</code> and restart your control plane services.
Note that if you let the certificates expire, you will need to set up kubectl again.
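To see when each certificate expires, recent kubeadm versions have a built-in command:
<syntaxhighlight lang="bash">
# Show expiration dates for all kubeadm-managed certificates
kubeadm certs check-expiration
# Renew everything, then restart kube-apiserver, kube-controller-manager, kube-scheduler, and etcd
sudo kubeadm certs renew all
</syntaxhighlight>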
;Issues connecting with etcd
I ran into the following when trying to run <code>kubeadm upgrade</code>.
;context deadline exceeded remote error: tls: bad certificate
Kubeadm stores etcd certificates in <code>/etc/kubernetes/pki/etcd/</code>.
Follow this to generate new certificates: https://github.com/etcd-io/etcd/issues/9785#issuecomment-432438748
You will need to create temporary <code>ca-config.json</code> and <code>server.json</code> files to generate new keys.
In <code>server.json</code>, set the key algo to "rsa" and the size to 2048. In the same file, set the CN to 127.0.0.1 and the hosts to [127.0.0.1, your local IP].
;cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs
This means the hosts in <code>server.json</code> were not correct when you generated the new keys.
===Pods per node===
[http://blog.schoolofdevops.com/how-to-increase-the-number-of-pods-limit-per-kubernetes-node/ How to increase pods per node]<br>
By default, Kubernetes allows 110 pods per node.<br>
You may increase this up to a limit of 255 with the default networking subnet.<br>
For reference, GCP GKE uses 110 pods per node and AWS EKS uses 250 pods per node.
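The per-node limit is controlled by the kubelet's <code>maxPods</code> setting. A sketch of raising it via the kubelet configuration file (path and value are examples; the kubelet must be restarted afterwards and your CNI must provide enough per-node IPs):
<syntaxhighlight lang="yaml">
# /var/lib/kubelet/config.yaml (excerpt)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 250
</syntaxhighlight>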
===Changing Master Address===
See https://ystatit.medium.com/how-to-change-kubernetes-kube-apiserver-ip-address-402d6ddb8aa2
==kubectl==
In general, you will want to create a <code>.yaml</code> manifest and use <code>apply</code>, <code>create</code>, or <code>delete</code> to manage your resources.
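For example (assuming a manifest file named <code>manifest.yaml</code>):
<syntaxhighlight lang="bash">
kubectl apply -f manifest.yaml    # create or update the resources in the file
kubectl delete -f manifest.yaml   # remove them again
</syntaxhighlight>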
===nodes===
<syntaxhighlight lang="bash">
kubectl get nodes
# Drain evicts all pods from a node.
kubectl drain $NODE_NAME
# Uncordon to re-enable scheduling
kubectl uncordon $NODE_NAME
</syntaxhighlight>
===pods===
<syntaxhighlight lang="bash">
# List all pods
kubectl get pods
kubectl describe pods
# List pods and node name
kubectl get pods -o=custom-columns='NAME:metadata.name,Node:spec.nodeName'
# Access a port on a pod
kubectl port-forward <pod> <localport>:<podport>
</syntaxhighlight>
===deployment===
<syntaxhighlight lang="bash">
kubectl get deployments
kubectl logs $POD_NAME
kubectl exec -it $POD_NAME -- bash
# For one-off deployments of an image.
kubectl create deployment <name> --image=<image> [--replicas=1]
</syntaxhighlight>
===proxy===
<syntaxhighlight lang="bash">
kubectl proxy
</syntaxhighlight>
===service===
Services handle routing to your pods.
<syntaxhighlight lang="bash">
kubectl get services
kubectl expose deployment/<name> --type=<type> --port <port>
kubectl describe services/<name>
</syntaxhighlight>
===run===
[https://gc-taylor.com/blog/2016/10/31/fire-up-an-interactive-bash-pod-within-a-kubernetes-cluster https://gc-taylor.com/blog/2016/10/31/fire-up-an-interactive-bash-pod-within-a-kubernetes-cluster]<br>
<syntaxhighlight lang="bash">
# Spin up an interactive Ubuntu container
kubectl run my-shell --rm -i --tty --image ubuntu -- bash
kubectl run busybox-shell --rm -i --tty --image odise/busybox-curl -- sh
</syntaxhighlight>
==Deployments==
In most cases, you will use deployments to provision pods.<br>
Deployments internally use ReplicaSets to create multiple identical pods.<br>
This is great for things such as web servers or other standalone services which are not stateful.
In most cases, you can stick a service in front which will round-robin requests to the different pods in your deployment.
{{hidden | Example Deployment |
<syntaxhighlight lang="yaml">
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud-app
  labels:
    app: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      pod-label: nextcloud-app-pod
  template:
    metadata:
      labels:
        pod-label: nextcloud-app-pod
    spec:
      containers:
        - name: nextcloud
          image: public.ecr.aws/docker/library/nextcloud:stable
          ports:
            - containerPort: 80
          env:
            - name: MYSQL_HOST
              value: nextcloud-db-service
            - name: MYSQL_DATABASE
              value: nextcloud
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: nextcloud-db-credentials
                  key: username
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: nextcloud-db-credentials
                  key: password
          volumeMounts:
            - name: nextcloud-app-storage
              mountPath: /var/www/html
      volumes:
        - name: nextcloud-app-storage
          persistentVolumeClaim:
            claimName: nextcloud-app-pvc
</syntaxhighlight>
}}
==StatefulSets==
[https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/ StatefulSets basics]<br>
StatefulSets are useful when you need a fixed number of pods with stable identities, such as databases.<br>
Pods created by a StatefulSet get a unique ordinal suffix, which allows you to address a specific pod.<br>
Typically, you will want to use a headless service (i.e. one without a ClusterIP) to give each pod its own DNS record.
In most cases, you will want to look for a helm chart instead of creating your own StatefulSets.
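A headless service is just a normal service with <code>clusterIP: None</code>. A minimal sketch (name, labels, and port are hypothetical), which would give pods of a matching StatefulSet DNS names like <code>db-0.db-headless</code>:
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None          # headless: no virtual IP, DNS resolves to the pod IPs
  selector:
    app: db
  ports:
    - port: 5432
</syntaxhighlight>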
==Services==
[https://kubernetes.io/docs/concepts/services-networking/service Documentation]
Services handle networking.
For self-hosted/bare-metal clusters, the relevant service types are:
* ClusterIP - Creates an IP address on the internal cluster network which nodes and pods in the cluster can access. (Default)
* NodePort - Exposes the port on every node. It implicitly creates a ClusterIP which every node routes to. This allows access from outside the cluster.
* ExternalName - Uses a CNAME record. Primarily for accessing other services from within the cluster.
* LoadBalancer - Creates a ClusterIP + NodePort and tells the load balancer to create an external IP and route it to the NodePort.
** On bare-metal clusters you will need to install a load balancer such as MetalLB.
By default, ClusterIP routing is provided by <code>kube-proxy</code>, which spreads requests across the pods.<br>
For exposing non-HTTP(S) production services, you will typically use a LoadBalancer service.<br>
For HTTP(S) services, you will typically use an ingress.
{{ hidden | Example ClusterIP Service |
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Service
metadata:
  name: pwiki-app-service
spec:
  type: ClusterIP
  selector:
    pod-label: pwiki-app-pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
</syntaxhighlight>
}}
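For comparison, a LoadBalancer service only differs in its type (a sketch with the same hypothetical labels; on bare metal the external IP is assigned by MetalLB from its address pool):
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Service
metadata:
  name: pwiki-app-lb
spec:
  type: LoadBalancer
  selector:
    pod-label: pwiki-app-pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
</syntaxhighlight>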
==Ingress==
[https://kubernetes.io/docs/concepts/services-networking/ingress/ Ingress | Kubernetes]<br>
An ingress defines rules for routing HTTP(S) requests to services. The rules are carried out by an ingress controller, a load-balancer or reverse-proxy pod that integrates with Kubernetes.
A common ingress controller is [https://github.com/kubernetes/ingress-nginx ingress-nginx], which is maintained by the Kubernetes team. Alternatives include [https://docs.nginx.com/nginx-ingress-controller/installation/installing-nic/installation-with-helm/ nginx-ingress], [https://doc.traefik.io/traefik/providers/kubernetes-ingress/ traefik], [https://haproxy-ingress.github.io/ haproxy-ingress], and [https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/ others].
===Installing ingress-nginx===
See [https://kubernetes.github.io/ingress-nginx/deploy/ ingress-nginx] to deploy an ingress controller.<br>
Note that <code>ingress-nginx</code> is managed by the Kubernetes team while <code>nginx-ingress</code> is a different ingress controller by the Nginx team.
Personally, I use:
{{hidden | values.yaml |
<syntaxhighlight lang="yaml">
controller:
  watchIngressWithoutClass: true
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
    behavior: {}
  service:
    enabled: true
    appProtocol: true
    annotations: {}
    labels: {}
    externalIPs: []
    enableHttp: true
    enableHttps: true
    ports:
      http: 80
      https: 443
    targetPorts:
      http: http
      https: https
    type: LoadBalancer
    loadBalancerIP: 192.168.1.3
    externalTrafficPolicy: Local
  config:
    proxy-body-size: 1g
</syntaxhighlight>
}}
{{hidden | upgrade.sh |
<syntaxhighlight lang="bash">
#!/bin/bash
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd)"
cd "${DIR}" || exit
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  -f values.yaml
</syntaxhighlight>
}}
To change settings for a single ingress, add annotations to your ingress definition:
{{hidden | example ingress |
<syntaxhighlight lang="yaml">
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud
  annotations:
    cert-manager.io/issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-body-size: 10g
spec:
  tls:
    - secretName: cloud-davidl-me-tls
      hosts:
        - cloud.davidl.me
  rules:
    - host: cloud.davidl.me
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nextcloud-app-service
                port:
                  number: 80
</syntaxhighlight>
}}
If your backend uses HTTPS, you will need to add the annotation <code>nginx.ingress.kubernetes.io/backend-protocol: HTTPS</code>.
For self-signed SSL certificates on the backend, you will also need the annotation:
<syntaxhighlight lang="yaml">
nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_ssl_name $host;
  proxy_ssl_server_name on;
</syntaxhighlight>
===Authentication===
[https://kubernetes.github.io/ingress-nginx/examples/auth/oauth-external-auth/ ingress-nginx external oauth]<br>
If you would like to authenticate using an OAuth2 provider (e.g. Google, GitHub), I suggest using [https://github.com/oauth2-proxy/oauth2-proxy oauth2-proxy].
# First set up a deployment of oauth2-proxy, possibly without an upstream.
# Then you can simply add the following annotations to your ingresses to protect them:
#:<syntaxhighlight lang="yaml">
nginx.ingress.kubernetes.io/auth-url: "http://oauth2proxy.default.svc.cluster.local/oauth2/auth?allowed_emails=myemail@gmail.com"
nginx.ingress.kubernetes.io/auth-signin: "https://oauth2proxy.davidl.me/oauth2/start?rd=$scheme://$host$request_uri"
</syntaxhighlight>
;Additional things to look into
* Pomerium
* Keycloak
** https://www.talkingquickly.co.uk/webapp-authentication-keycloak-OAuth2-proxy-nginx-ingress-kubernetes
* Authelia - only supports username/password as the first factor
* Authentik - I tried this but found it too complicated and buggy.
If you use Cloudflare, you can also use Cloudflare Access, though make sure you prevent other sources from accessing the service directly.
==Autoscaling==
[https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/ Horizontal Autoscale Walkthrough]<br>
[https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ Horizontal Pod Autoscaler]
You will need to install [https://github.com/kubernetes-sigs/metrics-server metrics-server].<br>
For testing, you may need to [https://stackoverflow.com/questions/54106725/docker-kubernetes-mac-autoscaler-unable-to-find-metrics allow insecure tls].
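Once metrics-server is running, you can attach a horizontal pod autoscaler to an existing deployment. A minimal sketch (the deployment name and limits are hypothetical):
<syntaxhighlight lang="bash">
# Scale my-app between 1 and 5 replicas, targeting 50% average CPU utilization
kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=5
# Inspect the autoscaler
kubectl get hpa
</syntaxhighlight>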
==Accessing External Services==
[https://stackoverflow.com/questions/55164223/access-mysql-running-on-localhost-from-minikube access mysql on localhost]<br>
To access services running outside of your Kubernetes cluster, including services running directly on a node, you need to add an Endpoints object and a Service without a selector.<br>
{{hidden | Example |
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Service
metadata:
  name: t440s-wireguard
spec:
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 52395
      targetPort: 52395
---
apiVersion: v1
kind: Endpoints
metadata:
  name: t440s-wireguard
subsets:
  - addresses:
      - ip: 192.168.1.40
    ports:
      - port: 52395
</syntaxhighlight>
}}
==NetworkPolicy==
Network policies are used to limit ingress or egress to pods.<br>
{{hidden | Example network policy |
<syntaxhighlight lang="yaml">
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-rstudio
spec:
  podSelector:
    matchLabels:
      pod-label: rstudio-pod
  ingress:
    - from:
        - podSelector:
            matchLabels:
              rstudio-access: "true"
</syntaxhighlight>
}}
==Security Context==
[https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ security context]
If you want to restrict pods to run as a particular UID/GID while still binding to any port, you can add the following:
<syntaxhighlight lang="yaml">
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    sysctls:
      - name: net.ipv4.ip_unprivileged_port_start
        value: "0"
</syntaxhighlight>
==Devices==
===Generic devices===
See [https://gitlab.com/arm-research/smarter/smarter-device-manager https://gitlab.com/arm-research/smarter/smarter-device-manager]<br>
and [https://github.com/kubernetes/kubernetes/issues/7890#issuecomment-766088805 https://github.com/kubernetes/kubernetes/issues/7890#issuecomment-766088805]
===Intel GPU===
See [https://github.com/intel/intel-device-plugins-for-kubernetes/tree/main/cmd/gpu_plugin https://github.com/intel/intel-device-plugins-for-kubernetes/tree/main/cmd/gpu_plugin]
After adding the GPU plugin, request the device in your deployment's container resources:
<syntaxhighlight lang="yaml">
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: my-app          # your container
          resources:
            limits:
              gpu.intel.com/i915: 1
</syntaxhighlight>
==Restarting your cluster==
===Scale to 0===
[https://stackoverflow.com/questions/64133011/scale-down-kubernetes-deployments-to-0-and-scale-back-to-original-number-of-repl reference]<br>
If you wish to restart all nodes of your cluster, you can scale your deployments and stateful sets down to 0 and then scale them back up after.
<syntaxhighlight lang="bash">
# Annotate existing deployments and statefulsets with their replica count.
kubectl get deploy -o jsonpath='{range .items[*]}{"kubectl annotate --overwrite deploy "}{@.metadata.name}{" previous-size="}{@.spec.replicas}{" \n"}{end}' | sh
kubectl get sts -o jsonpath='{range .items[*]}{"kubectl annotate --overwrite sts "}{@.metadata.name}{" previous-size="}{@.spec.replicas}{" \n"}{end}' | sh
# Scale to 0.
# shellcheck disable=SC2046
kubectl scale --replicas=0 $(kubectl get deploy -o name)
# shellcheck disable=SC2046
kubectl scale --replicas=0 $(kubectl get sts -o name)
# Scale back up.
kubectl get deploy -o jsonpath='{range .items[*]}{"kubectl scale deploy "}{@.metadata.name}{" --replicas="}{.metadata.annotations.previous-size}{"\n"}{end}' | sh
kubectl get sts -o jsonpath='{range .items[*]}{"kubectl scale sts "}{@.metadata.name}{" --replicas="}{.metadata.annotations.previous-size}{"\n"}{end}' | sh
</syntaxhighlight>
==Helm==
Helm is a tool for deploying applications using premade Kubernetes manifest templates known as Helm charts.<br>
Helm charts abstract away the raw manifests, allowing you to focus on only the important configuration values.<br>
Charts can also be composed of other charts for applications which require multiple microservices.
[https://artifacthub.io/ https://artifacthub.io/] allows you to search for helm charts others have made.<br>
[https://github.com/bitnami/charts bitnami/charts] contains helm charts for many popular applications.
===Usage===
To install an application, generally you do the following (see the concrete sketch below the list):
# Create a yaml file, e.g. <code>values.yaml</code>, with the options you want.
# If necessary, create any PVs, PVCs, and Ingresses which might be required.
# Install the application using helm.
#:<pre>helm upgrade --install $NAME $CHARTNAME -f values.yaml [--version $VERSION]</pre>
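For example, installing PostgreSQL from the bitnami repository might look like this (the release name and values file are up to you):
<syntaxhighlight lang="bash">
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami/postgresql            # find the chart and available versions
helm upgrade --install my-db bitnami/postgresql -f values.yaml
</syntaxhighlight>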
===Troubleshooting===
Kubernetes occasionally removes deprecated APIs, which can prevent Helm from managing existing releases whose manifests still reference them.<br>
The [https://github.com/helm/helm-mapkubeapis mapkubeapis] helm plugin can help resolve some of these issues.
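A sketch of using the plugin on an affected release (release name and namespace are placeholders):
<syntaxhighlight lang="bash">
helm plugin install https://github.com/helm/helm-mapkubeapis
helm mapkubeapis my-release --namespace my-namespace
</syntaxhighlight>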
==Variants==
===minikube===
[https://minikube.sigs.k8s.io/docs/ minikube] is a tool to quickly set up a local Kubernetes dev environment on your PC.
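A quick-start sketch (assuming minikube is already installed and a container or VM driver such as Docker is available):
<syntaxhighlight lang="bash">
minikube start        # create the local cluster and add it to your kubeconfig
kubectl get nodes     # verify the node is ready
minikube delete       # tear it down again
</syntaxhighlight>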
===kind===
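[https://kind.sigs.k8s.io/ kind] (Kubernetes in Docker) runs local clusters using Docker containers as nodes; <code>kind create cluster</code> is typically all you need for a throwaway test cluster.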
===k3s===
[https://k3s.io/ k3s] is a lightweight Kubernetes distribution by Rancher.
It includes the Flannel CNI and the Traefik ingress controller by default.
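The standard single-node install is a one-liner (review the script in the k3s docs before piping it to a shell):
<syntaxhighlight lang="bash">
curl -sfL https://get.k3s.io | sh -
</syntaxhighlight>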
==KubeVirt==
{{main | KubeVirt}}
KubeVirt allows you to run virtual machines on your Kubernetes cluster.
==Resources==
* [https://kubernetes.io/docs/tutorials/kubernetes-basics/ Kubernetes Basics]
* [https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/ Certified Kubernetes Administrator (CKA) with Practice Tests (~$15)]
* [https://yolops.net/k8s-dualstack-cilium.html https://yolops.net/k8s-dualstack-cilium.html]