Kubernetes, also known as K8s, is an open-source container orchestration system originally developed by Google.<br>
It supposedly has a steeper learning curve than Docker Swarm and is heavily inspired by Google's internal [https://research.google/pubs/pub43438/#:~:text=Google's%20Borg%20system%20is%20a,tens%20of%20thousands%20of%20machines. borg system].<br>
This means it runs containers across a cluster of machines for you and handles networking and container failures.<br>
This document contains notes on both administrating a self-hosted Kubernetes cluster and deploying applications to one.


==Getting Started==
===Background===
Kubernetes runs applications across '''nodes''' which are (physical or virtual) Linux machines.<br>
Each node contains a kubelet process, a container runtime (typically containerd), and any running pods.<br>
'''Pods''' contain the resources needed to host your application, including volumes and containers.<br>
Typically you will want one container per pod since deployments scale by creating multiple pods.<br>
A '''deployment''' is a rule which spawns and manages pods.<br>
A '''service''' is a networking rule which allows connecting to pods.
 
In addition to standard Kubernetes objects, '''operators''' watch for and allow you to instantiate custom resources (CRs).
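For example, with the cert-manager operator (covered below) installed, you can instantiate its <code>Certificate</code> custom resource. A minimal sketch with illustrative names:
{{hidden | Example Certificate CR |
<syntaxhighlight lang="yaml">
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-tls  # illustrative name
spec:
  secretName: example-tls  # secret where the signed certificate will be stored
  dnsNames:
    - example.davidl.me  # hypothetical hostname
  issuerRef:
    name: letsencrypt-prod  # assumes an Issuer with this name exists
    kind: Issuer
</syntaxhighlight>
}}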
 
==Kubeadm Administration==
Notes on administering Kubernetes clusters.
 
Kubernetes has many parts and administration is very tedious, which is why K3s exists. I'd recommend against using kubeadm for a homelab.


===Installation===
Deploy a Kubernetes cluster using kubeadm
{{hidden | Install Commands |
[https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/ Install Kubeadm]
<syntaxhighlight lang="bash">
KUBE_VERSION=1.23.1-00
# Setup docker repos and install containerd.io
sudo apt update
</syntaxhighlight>
<syntaxhighlight lang="bash">
# Add the Kubernetes apt repository key
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-get install -y kubelet=$KUBE_VERSION kubeadm=$KUBE_VERSION kubectl=$KUBE_VERSION
sudo apt-mark hold kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
</syntaxhighlight>


;Install Containerd
<syntaxhighlight lang="bash">
sudo apt-get remove docker docker-engine docker.io containerd runc
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install containerd.io
</syntaxhighlight>


 
;[https://kubernetes.io/docs/setup/production-environment/container-runtimes/ Container runtimes]
<syntaxhighlight lang="bash">
# Configure containerd
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Allow bridged traffic through iptables and enable forwarding
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system

# Generate a default containerd config and use the systemd cgroup driver
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i '/\[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options\]/a \ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml
sudo systemctl restart containerd
</syntaxhighlight>
}}


{{hidden | Control Plane Init |
<syntaxhighlight lang="bash">
# Disable swap
sudo swapoff -a && sudo sed -i '/swap/s/^/#/' /etc/fstab

# Initialize the control plane (example flags; match the pod CIDR to your CNI)
sudo kubeadm init --pod-network-cidr=192.168.0.0/16

# Set up kubectl for your user
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
# (Optional) Remove taint on control-node to allow job scheduling
kubectl taint nodes --all node-role.kubernetes.io/master-
</syntaxhighlight>
}}
{{hidden | Setup Networking With Calico |
Kubernetes requires a CNI plugin to provide pod networking.<br>
Popular choices are Calico and Flannel.<br>
See [https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart Quickstart]
<syntaxhighlight lang="bash">


# Setup calico networking (see the quickstart for the current manifest URLs)
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/custom-resources.yaml

# Enable the Calico API server
cat <<EOF | kubectl apply -f -
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
EOF
</syntaxhighlight>
 
;Notes
* [https://stackoverflow.com/questions/57504063/calico-kubernetes-pods-cant-ping-each-other-use-cluster-ip https://stackoverflow.com/questions/57504063/calico-kubernetes-pods-cant-ping-each-other-use-cluster-ip]
}}
{{hidden | Load Balancer (MetalLB) |
See https://metallb.universe.tf/installation/.<br>
<syntaxhighlight lang="bash">
helm repo add metallb https://metallb.github.io/metallb
helm upgrade --install --create-namespace -n metallb metallb metallb/metallb
 
cat <<EOF >ipaddresspool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb
spec:
  addresses:
  - 192.168.1.2-192.168.1.11
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb
EOF
 
kubectl apply -f ipaddresspool.yaml
</syntaxhighlight>
}}
{{hidden | Ingress Controller (ingress-nginx) |
The ingress controller is used to forward HTTP requests to the appropriate ingress.<br>
See https://kubernetes.github.io/ingress-nginx/.
}}
{{hidden | cert-manager |
See https://cert-manager.io/docs/installation/helm/
 
You may also want to set up DNS challenges to support wildcard certificates.<br>
See https://cert-manager.io/docs/configuration/acme/dns01/cloudflare/ if you are using Cloudflare.
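A typical install via helm (commands per the linked docs; <code>installCRDs</code> is needed on first install):
<syntaxhighlight lang="bash">
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true
</syntaxhighlight>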
}}
{{hidden | Add worker nodes |
Run the following on worker nodes.
<syntaxhighlight lang="bash">
# Disable swap
sudo swapoff -a && sudo sed -i '/swap/s/^/#/' /etc/fstab
# Add the line to join the cluster here
# kubeadm join <ip>:6443 --token <...> --discovery-token-ca-cert-hash <...>
</syntaxhighlight>
}}
===Certificates===
[https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/ Certificate Management with kubeadm]<br>
Kubernetes requires several TLS certificates which are automatically generated by kubeadm.<br>
These expire after one year but are renewed automatically whenever you upgrade your cluster with <code>kubeadm upgrade apply</code>.

To renew the certificates manually, run <code>kubeadm certs renew all</code> and restart your control plane services.
Note that if you let the certificates expire, you will need to set up kubectl again.

;context deadline exceeded remote error: tls: bad certificate
I ran into this issue connecting to etcd when trying <code>kubeadm upgrade</code>.
Kubeadm stores the etcd certificates in <code>/etc/kubernetes/pki/etcd/</code>.

Follow https://github.com/etcd-io/etcd/issues/9785#issuecomment-432438748 to generate new certificates.
You will need to create temporary ca-config.json and server.json files to generate the new keys.
In server.json, set the key algo to "rsa" and the size to 2048. In the same file, set the CN to 127.0.0.1 and the hosts to [127.0.0.1, your local IP].

;cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs
This means the hosts field in server.json was not correct when you generated the new keys.
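kubeadm can also report expiration dates directly, which is worth checking before and after a renewal:
<syntaxhighlight lang="bash">
# List expiration dates for all kubeadm-managed certificates
sudo kubeadm certs check-expiration
# Renew everything, then restart the control plane components
sudo kubeadm certs renew all
</syntaxhighlight>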


===Pods per node===
By default, Kubernetes schedules at most 110 pods per node.<br>
You may increase this up to a limit of 255 with the default networking subnet.<br>
For reference, GCP GKE uses 110 pods per node and AWS EKS uses 250 pods per node.
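The limit is set per kubelet. A sketch, assuming your nodes use the standard kubelet config file at <code>/var/lib/kubelet/config.yaml</code>:
<syntaxhighlight lang="yaml">
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# ...existing settings...
maxPods: 250  # restart the kubelet after changing this
</syntaxhighlight>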
===Changing Master Address===
See https://ystatit.medium.com/how-to-change-kubernetes-kube-apiserver-ip-address-402d6ddb8aa2


==kubectl==
Spawn a temporary shell pod for debugging:
<syntaxhighlight lang="bash">
kubectl run busybox-shell --rm -i --tty --image odise/busybox-curl -- sh
</syntaxhighlight>
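Other day-to-day commands (standard kubectl; resource names are placeholders):
<syntaxhighlight lang="bash">
kubectl get pods -A              # list pods in all namespaces
kubectl describe pod <pod>       # show status and recent events
kubectl logs -f <pod>            # tail a pod's logs
kubectl exec -it <pod> -- sh     # open a shell in a running container
kubectl apply -f manifest.yaml   # create or update resources from a manifest
kubectl delete -f manifest.yaml  # delete those resources again
</syntaxhighlight>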
==Deployments==
In most cases, you will use deployments to provision pods.<br>
Deployments internally use replicasets to create multiple identical pods.<br>
This is great for things such as webservers or standalone services which are not stateful.
In most cases, you can stick a service in front which will round-robin requests to different pods in your deployment.
{{hidden | Example Deployment |
<syntaxhighlight lang="yaml">
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud-app
  labels:
    app: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      pod-label: nextcloud-app-pod
  template:
    metadata:
      labels:
        pod-label: nextcloud-app-pod
    spec:
      containers:
        - name: nextcloud
          image: public.ecr.aws/docker/library/nextcloud:stable
          ports:
            - containerPort: 80
          env:
            - name: MYSQL_HOST
              value: nextcloud-db-service
            - name: MYSQL_DATABASE
              value: nextcloud
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: nextcloud-db-credentials
                  key: username
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: nextcloud-db-credentials
                  key: password
          volumeMounts:
            - name: nextcloud-app-storage
              mountPath: /var/www/html
      volumes:
        - name: nextcloud-app-storage
          persistentVolumeClaim:
            claimName: nextcloud-app-pvc
</syntaxhighlight>
}}
==StatefulSets==
[https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/ StatefulSets basics]<br>
Stateful sets are useful when you need a fixed number of pods with stable identities such as databases.<br>
Pods created by stateful sets have a unique number suffix which allows you to query a specific pod.<br>
Typically, you will want to use a headless service (i.e. without a ClusterIP) to give a stable DNS record to each pod.<br>
In most cases, you will want to look for a helm chart instead of creating your own stateful sets.
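For reference, a minimal sketch of a headless service for a hypothetical <code>db</code> StatefulSet (all names illustrative):
{{hidden | Example headless service |
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None  # headless: DNS resolves directly to the pod IPs
  selector:
    pod-label: db-pod
  ports:
    - port: 5432
</syntaxhighlight>
}}
With this, each pod gets a stable DNS name such as <code>db-0.db-headless.default.svc.cluster.local</code> (for the default namespace).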


==Services==


Services handle networking.
For self-hosted/bare metal clusters, the main service types are:
* ClusterIP - This creates an IP address on the internal cluster network which nodes and pods on the cluster can access. (Default)
* NodePort - This exposes the port on every node. It implicitly creates a ClusterIP and every node will route to that. This allows access from outside the cluster.
* ExternalName - uses a CNAME record. Primarily for accessing other services from within the cluster.
* LoadBalancer - Creates a ClusterIP + NodePort and tells the load balancer to create an IP and route it to the NodePort.
** On bare-metal clusters you will need to install a load balancer such as MetalLB.


By default, ClusterIP is provided by <code>kube-proxy</code> and performs round-robin load-balancing to pods.<br>
For exposing non-http(s) production services, you typically will use a LoadBalancer service.<br>
For https services, you will typically use an ingress.
{{hidden | Example ClusterIP Service |
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Service
metadata:
  name: pwiki-app-service
spec:
  type: ClusterIP
  selector:
    pod-label: pwiki-app-pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
</syntaxhighlight>
}}
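With MetalLB installed, exposing the same pods externally is mostly a change of type; a sketch with illustrative names:
{{hidden | Example LoadBalancer Service |
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Service
metadata:
  name: pwiki-app-lb
spec:
  type: LoadBalancer
  # loadBalancerIP: 192.168.1.4  # optionally pin an address from the MetalLB pool
  selector:
    pod-label: pwiki-app-pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
</syntaxhighlight>
}}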


==Ingress==
[https://kubernetes.io/docs/concepts/services-networking/ingress/ Ingress | Kubernetes]<br>
An ingress is an HTTP(S) routing rule. Ingresses are implemented by an ingress controller, which is a load-balancer or reverse-proxy pod that integrates with Kubernetes.

A common ingress controller is [https://github.com/kubernetes/ingress-nginx ingress-nginx] which is maintained by the Kubernetes team. Alternatives include [https://docs.nginx.com/nginx-ingress-controller/installation/installing-nic/installation-with-helm/ nginx-ingress], [https://doc.traefik.io/traefik/providers/kubernetes-ingress/ traefik], [https://haproxy-ingress.github.io/ haproxy-ingress], and [https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/ others].


===Installing ingress-nginx===
See [https://kubernetes.github.io/ingress-nginx/deploy/ ingress-nginx] to deploy an ingress controller.<br>
Note that <code>ingress-nginx</code> is managed by the Kubernetes team while <code>nginx-ingress</code> is a different ingress controller by the Nginx team.


Note that global Nginx settings are set in the configmap.<br>
Personally, I have:
{{hidden | values.yaml |
<syntaxhighlight lang="yaml">
controller:
  watchIngressWithoutClass: true
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
    behavior: {}

  service:
    enabled: true
    appProtocol: true

    annotations: {}
    labels: {}
    externalIPs: []

    enableHttp: true
    enableHttps: true

    ports:
      http: 80
      https: 443

    targetPorts:
      http: http
      https: https

    type: LoadBalancer
    loadBalancerIP: 192.168.1.3
    externalTrafficPolicy: Local

  config:
    proxy-body-size: 1g
    use-forwarded-headers: "true"  # True because we're behind another reverse proxy
</syntaxhighlight>
}}
{{hidden | upgrade.sh |
<syntaxhighlight lang="bash">
#!/bin/bash
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd)"
cd "${DIR}" || exit

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  -f values.yaml
</syntaxhighlight>
}}
To set settings per-ingress, add the annotation to your ingress definition:
{{hidden | example ingress |
<syntaxhighlight lang="yaml">
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud
  annotations:
    cert-manager.io/issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-body-size: 10g
spec:
  tls:
    - secretName: cloud-davidl-me-tls
      hosts:
        - cloud.davidl.me
  rules:
    - host: cloud.davidl.me
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nextcloud-app-service
                port:
                  number: 80
</syntaxhighlight>
}}
If your backend uses HTTPS, you will need to add the annotation: <code>nginx.ingress.kubernetes.io/backend-protocol: HTTPS</code>
For self-signed SSL certificates, you will also need the annotation:
<syntaxhighlight lang="yaml">
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_ssl_name $host;
      proxy_ssl_server_name on;
</syntaxhighlight>
===Authentication===
[https://kubernetes.github.io/ingress-nginx/examples/auth/oauth-external-auth/ ingress-nginx external oauth]<br>
If you would like to authenticate using an oauth2 provider (e.g. Google, GitHub), I suggest using [https://github.com/oauth2-proxy/oauth2-proxy oauth2-proxy].
# First set up a deployment of oauth2-proxy, possibly without an upstream.
# Then you can simply add the following annotations to your ingresses to protect them:
#:<syntaxhighlight lang="yaml">
nginx.ingress.kubernetes.io/auth-url: "http://oauth2proxy.default.svc.cluster.local/oauth2/[email protected]"
nginx.ingress.kubernetes.io/auth-signin: "https://oauth2proxy.davidl.me/oauth2/start?rd=$scheme://$host$request_uri"
</syntaxhighlight>
;Additional things to look into
* Pomerium
* Keycloak
** https://www.talkingquickly.co.uk/webapp-authentication-keycloak-OAuth2-proxy-nginx-ingress-kubernetes
* Authelia - only supports username/password as the first factor
* Authentik - I tried this but it was too complicated and buggy for me.
If you use Cloudflare, you can also use Cloudflare Access, though make sure you prevent other sources from accessing the service directly.


==Autoscaling==
Kubernetes supports horizontal pod autoscaling via the HorizontalPodAutoscaler (HPA) resource, which scales a deployment's replica count based on metrics reported by metrics-server.
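A minimal sketch scaling the nextcloud deployment from earlier (thresholds illustrative; requires metrics-server):
{{hidden | Example HorizontalPodAutoscaler |
<syntaxhighlight lang="yaml">
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nextcloud-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nextcloud-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
</syntaxhighlight>
}}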
==External Services==
To route cluster traffic to a host outside the cluster, create a service without a selector and a matching Endpoints object with the same name.
{{hidden | Example external service |
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Service
metadata:
  name: t440s-wireguard
spec:
  type: ClusterIP
  ports:
    - protocol: UDP
      port: 51820  # WireGuard's default port; adjust to your service
---
apiVersion: v1
kind: Endpoints
metadata:
  name: t440s-wireguard
subsets:
  - addresses:
      - ip: <external-ip>
    ports:
      - port: 51820
</syntaxhighlight>
}}
==NetworkPolicy==
Network policies are used to limit ingress or egress to pods.<br>
{{hidden | Example network policy |
<syntaxhighlight lang="yaml">
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-rstudio
spec:
  podSelector:
    matchLabels:
      pod-label: rstudio-pod
  ingress:
    - from:
        - podSelector:
            matchLabels:
              rstudio-access: "true"
</syntaxhighlight>
}}
==Security Context==
[https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ security context]<br>
If you want to restrict pods to run as a particular UID/GID while still binding to any port, you can add the following:
<syntaxhighlight lang=yaml>
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        sysctls:
        - name: net.ipv4.ip_unprivileged_port_start
          value: "0"
</syntaxhighlight>


==Devices==
To expose devices such as GPUs to pods, install the matching device plugin and request the device in your container's resource limits. E.g. with the [https://github.com/intel/intel-device-plugins-for-kubernetes Intel GPU plugin] installed:
<syntaxhighlight lang="yaml">
          resources:
            limits:
              gpu.intel.com/i915: 1
</syntaxhighlight>
==Restarting your cluster==
===Scale to 0===
[https://stackoverflow.com/questions/64133011/scale-down-kubernetes-deployments-to-0-and-scale-back-to-original-number-of-repl reference]<br>
If you wish to restart all nodes of your cluster, you can scale your deployments and stateful sets down to 0 and then scale them back up after.
<syntaxhighlight lang="bash">
# Annotate existing deployments and statefulsets with replica count.
kubectl get deploy -o jsonpath='{range .items[*]}{"kubectl annotate --overwrite deploy "}{@.metadata.name}{" previous-size="}{@.spec.replicas}{" \n"}{end}' | sh
kubectl get sts -o jsonpath='{range .items[*]}{"kubectl annotate --overwrite sts "}{@.metadata.name}{" previous-size="}{@.spec.replicas}{" \n"}{end}' | sh
# Scale to 0.
# shellcheck disable=SC2046
kubectl scale --replicas=0 $(kubectl get deploy -o name)
# shellcheck disable=SC2046
kubectl scale --replicas=0 $(kubectl get sts -o name)
# Scale back up.
kubectl get deploy -o jsonpath='{range .items[*]}{"kubectl scale deploy "}{@.metadata.name}{" --replicas="}{.metadata.annotations.previous-size}{"\n"}{end}' | sh
kubectl get sts -o jsonpath='{range .items[*]}{"kubectl scale sts "}{@.metadata.name}{" --replicas="}{.metadata.annotations.previous-size}{"\n"}{end}' | sh
</syntaxhighlight>


==Helm==
Helm is a method for deploying applications using premade kubernetes manifest templates known as helm charts.<br>
Helm charts abstract away manifests, allowing you to focus on only the important configuration values.<br>
Charts can also be composed into other charts for applications which require multiple microservices.
[https://artifacthub.io/ https://artifacthub.io/] allows you to search for helm charts others have made.<br>
[https://github.com/bitnami/charts bitnami/charts] contains helm charts for many popular applications.


===Usage===
# Find a chart for your application, e.g. on [https://artifacthub.io/ Artifact Hub].
# Add the chart's repository.
#:<pre>helm repo add $REPO_NAME $REPO_URL</pre>
# Create a values.yaml containing your configuration overrides.
# Install the application using helm.
#:<pre>helm upgrade --install $NAME $CHARTNAME -f values.yaml [--version $VERSION]</pre>
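As a concrete sketch (chart and release names are illustrative), installing nginx from the bitnami repo looks like:
<syntaxhighlight lang="bash">
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
# Dump the chart's default values to use as a starting point
helm show values bitnami/nginx > values.yaml
# Install or upgrade the release named "my-nginx"
helm upgrade --install my-nginx bitnami/nginx -f values.yaml
</syntaxhighlight>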
===Troubleshooting===
Sometimes, Kubernetes will deprecate APIs, preventing Helm from managing existing releases.<br>
The [https://github.com/helm/helm-mapkubeapis mapkubeapis] helm plugin can help resolve some of these issues.
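Usage is roughly as follows (release name illustrative; check the plugin's README for current flags):
<syntaxhighlight lang="bash">
helm plugin install https://github.com/helm/helm-mapkubeapis
helm mapkubeapis my-release --namespace default
</syntaxhighlight>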


==Variants==
===minikube===
[https://minikube.sigs.k8s.io/docs/ minikube] is a tool to quickly set up a local Kubernetes dev environment on your PC.


===kind===
[https://kind.sigs.k8s.io/ kind] runs local Kubernetes clusters using Docker containers as nodes.
===k3s===
[https://k3s.io/ k3s] is a lighter-weight Kubernetes distribution by Rancher Labs.
It includes the Flannel CNI and the Traefik ingress controller by default.


==KubeVirt==
{{main | KubeVirt}}
KubeVirt allows you to run virtual machines on your Kubernetes cluster.


==Resources==
* [https://kubernetes.io/docs/tutorials/kubernetes-basics/ Kubernetes Basics]
* [https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/ Certified Kubernetes Administrator (CKA) with Practice Tests (~$15)]
* [https://yolops.net/k8s-dualstack-cilium.html https://yolops.net/k8s-dualstack-cilium.html]