Kubernetes
==Getting Started==
===Background===
Kubernetes runs applications across '''nodes''', which are (physical or virtual) Linux machines.<br>
Each node runs a kubelet process, a container runtime (typically containerd), and any scheduled pods.<br>
'''Pods''' contain the resources needed to host your application, including volumes and containers.<br>
Typically you will want one container per pod, since deployments scale by creating multiple pods.<br>
A '''deployment''' is a rule which spawns and manages pods.<br>
A '''service''' is a networking rule which allows connecting to pods.

In addition to standard Kubernetes objects, '''operators''' watch for and allow you to instantiate custom resources (CRs).
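As a minimal sketch of how these objects fit together, a deployment that spawns two pods and a service that routes to them might look like the following (the name <code>my-app</code> and the nginx image are placeholders):
<syntaxhighlight lang="yaml">
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2              # scale by running multiple pods
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app        # pods get this label; the service selects on it
    spec:
      containers:
      - name: web          # one container per pod
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app            # route to pods with this label
  ports:
  - port: 80
    targetPort: 80
</syntaxhighlight>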
==Kubeadm Administration==
Notes on administering Kubernetes clusters.
Kubernetes has many parts and administration is tedious, which is why k3s exists. I'd recommend against using kubeadm for a homelab.
===Installation===
To renew the certificates manually, run <code>kubeadm certs renew all</code> and restart your control plane services.
Note that if you let the certificates expire, you will need to set up kubectl again.
====Issues connecting with etcd====
I ran into this when trying to run <code>kubeadm upgrade</code>.
;context deadline exceeded remote error: tls: bad certificate
Kubeadm stores etcd certificates in <code>/etc/kubernetes/pki/etcd/</code>.
Follow this to generate new certificates: https://github.com/etcd-io/etcd/issues/9785#issuecomment-432438748
You will need to create temporary ca-config.json and server.json files to generate new keys.
In server.json, set the key algo to "rsa" and the size to 2048. In the same file, set the CN to 127.0.0.1 and the hosts to [127.0.0.1, your local IP].
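As a sketch, a server.json along these lines matches the settings above (the local IP <code>192.168.1.10</code> is a placeholder; substitute your node's address):
<syntaxhighlight lang="json">
{
  "CN": "127.0.0.1",
  "hosts": ["127.0.0.1", "192.168.1.10"],
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
</syntaxhighlight>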
;cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs
This means the hosts field in server.json was not correct when you generated the new keys.
===Pods per node===
You may increase this up to a limit of 255 with the default networking subnet.<br>
For reference, GCP GKE uses 110 pods per node and AWS EKS uses 250 pods per node.
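As a sketch, the per-node limit is controlled by the kubelet's <code>maxPods</code> setting; on a kubeadm node the kubelet config typically lives at <code>/var/lib/kubelet/config.yaml</code> (the value 250 here is an example):
<syntaxhighlight lang="yaml">
# /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 250
</syntaxhighlight>
Restart the kubelet afterwards for the change to take effect.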
===Changing Master Address===
See https://ystatit.medium.com/how-to-change-kubernetes-kube-apiserver-ip-address-402d6ddb8aa2
==kubectl==
==StatefulSets==
[https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/ StatefulSets basics]<br>
StatefulSets are useful when you need a fixed number of pods with stable identities, such as databases.<br>
Pods created by StatefulSets have a unique number suffix which allows you to query a specific pod.<br>
Typically, you will want to use a headless service (i.e. without a ClusterIP) to give local DNS records to each pod.
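A headless service is an ordinary service with <code>clusterIP: None</code>. As a sketch (the name <code>my-db</code> and port are placeholders), pods then get per-pod DNS records such as <code>my-db-0.my-db.default.svc.cluster.local</code>:
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Service
metadata:
  name: my-db
spec:
  clusterIP: None      # headless: no virtual IP, per-pod DNS records instead
  selector:
    app: my-db
  ports:
  - port: 5432
</syntaxhighlight>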
==Ingress==
[https://kubernetes.io/docs/concepts/services-networking/ingress/ Ingress | Kubernetes]<br>
An Ingress is an HTTP endpoint. Creating one configures an ingress controller, which is a load-balancer or reverse-proxy pod that integrates with Kubernetes.
A common ingress controller is [https://github.com/kubernetes/ingress-nginx ingress-nginx], which is maintained by the Kubernetes team. Alternatives include [https://docs.nginx.com/nginx-ingress-controller/installation/installing-nic/installation-with-helm/ nginx-ingress], [https://doc.traefik.io/traefik/providers/kubernetes-ingress/ traefik], [https://haproxy-ingress.github.io/ haproxy-ingress], and [https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/ others].
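As a minimal sketch, an Ingress that routes a hostname to a service might look like this (the host, service name, and class name <code>nginx</code> are placeholders for your own setup):
<syntaxhighlight lang="yaml">
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx     # which ingress controller handles this rule
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app      # an existing service
            port:
              number: 80
</syntaxhighlight>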
===Installing ingress-nginx===
type: LoadBalancer
loadBalancerIP: 192.168.1.3
externalTrafficPolicy: Local
config:
# Then you can simply add the following annotations to your ingresses to protect them:
#:<syntaxhighlight lang="yaml">
nginx.ingress.kubernetes.io/auth-url: "http://oauth2proxy.default.svc.cluster.local/oauth2/auth?allowed_emails=myemail@gmail.com"
nginx.ingress.kubernetes.io/auth-signin: "https://oauth2proxy.davidl.me/oauth2/start?rd=$scheme://$host$request_uri"
</syntaxhighlight>
;Additional things to look into
* Pomerium
* Keycloak
** https://www.talkingquickly.co.uk/webapp-authentication-keycloak-OAuth2-proxy-nginx-ingress-kubernetes
* Authelia - only supports username/password as the first factor
* Authentik - I tried this but it was too complicated and buggy for me.
If you use Cloudflare, you can also use Cloudflare Access, though make sure you prevent other sources from accessing the service directly.
==Autoscaling==
# Install the application using helm.
#:<pre>helm upgrade --install $NAME $CHARTNAME -f values.yaml [--version $VERSION]</pre>
===Troubleshooting===
Sometimes Kubernetes will remove deprecated APIs, which prevents Helm from managing existing releases that reference them.<br>
The [https://github.com/helm/helm-mapkubeapis mapkubeapis] helm plugin can help resolve some of these issues.
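As a sketch, usage looks like the following (the release name <code>my-release</code> and namespace are placeholders; see the plugin's README for details):
<pre>
helm plugin install https://github.com/helm/helm-mapkubeapis
helm mapkubeapis my-release --namespace default
</pre>
This rewrites the deprecated or removed API versions stored in the release metadata so that Helm can manage the release again.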
==Variants==
===k3s===
[https://k3s.io/ k3s] is a lighter-weight Kubernetes by Rancher Labs.
It includes the Flannel CNI and the Traefik ingress controller.
==KubeVirt==
{{main | KubeVirt}}
KubeVirt allows you to run virtual machines on your Kubernetes cluster.
==Resources==