Kubernetes is deprecating Docker as a container runtime after v1.20.
Don't panic 😱 Docker containers are still supported, but dockershim, the compatibility layer between the kubelet and Docker, is deprecated and will be removed in version 1.22+.
So if you are running Docker you need to switch to a runtime that supports the Container Runtime Interface (CRI). containerd is a good choice: it is already running on your Kubernetes nodes if you are running Docker, because Docker itself uses containerd under the hood.
An extra advantage is less overhead: the dockershim and Docker translation layers are gone, as you can see in this diagram.
How to migrate
First we check which container runtime is currently running. We do this with
kubectl get nodes -o wide
As we can see, we are running Docker as the runtime.
Now we check that we have the containerd CLI,
/usr/bin/ctr, and that the namespace moby exists. moby is the namespace used by Docker.
And we can list the running containers in this namespace
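Assuming ctr is in its default location, these checks could look like the following (the moby namespace name comes from Docker itself):

```shell
# Check that the containerd CLI is installed
which ctr

# List containerd namespaces; Docker creates the "moby" namespace
ctr namespaces list

# List the containers Docker is running in that namespace
ctr --namespace moby containers list
```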
If everything looks fine we can change the CRI. We change one node at a time: first the worker nodes, then the control nodes. If you have only one control node you will lose access to the cluster, but this is temporary and it should recover by itself.
Cordon and drain the node
We need to cordon and drain the node so that our workloads are rescheduled.
root@k8s-cn01:~# kubectl cordon k8s-wn01
node/k8s-wn01 cordoned
root@k8s-cn01:~# kubectl drain k8s-wn01 --ignore-daemonsets
node/k8s-wn01 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-9wnh4, kube-system/weave-net-pgptm
evicting pod default/nginx-6799fc88d8-r44x9
pod/nginx-6799fc88d8-r44x9 evicted
node/k8s-wn01 evicted
root@k8s-cn01:~# kubectl get nodes
NAME       STATUS                     ROLES                  AGE    VERSION
k8s-cn01   Ready                      control-plane,master   138m   v1.20.4
k8s-wn01   Ready,SchedulingDisabled   <none>                 124m   v1.20.4
k8s-wn02   Ready                      <none>                 64m    v1.20.4
Remove Docker (optional)
We remove Docker. This is not strictly necessary, but it makes things clearer, is less prone to mistakes later, and saves some disk space.
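On the Ubuntu nodes in this example, removing Docker could look like the sketch below. The package names are an assumption and differ per distribution; on the CentOS node you would use dnf instead of apt:

```shell
# Remove the Docker engine and CLI, but keep containerd
apt purge docker-ce docker-ce-cli
apt autoremove
```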
Disable (comment out) the disabled_plugins line in
/etc/containerd/config.toml so the CRI plugin is loaded.
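In the config file that ships alongside Docker, the relevant line disables the CRI plugin. A one-liner like this comments it out (a sketch, assuming the default file layout):

```shell
# The shipped config contains a line like:
#   disabled_plugins = ["cri"]
# Comment it out so the CRI plugin is loaded on the next restart:
sed -i 's/^disabled_plugins/#disabled_plugins/' /etc/containerd/config.toml
```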
If there is no config file for containerd, you can generate a new default file.
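containerd can generate the default file itself:

```shell
# Generate a fresh default containerd configuration
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
```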
Then restart containerd.
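On a systemd-based node:

```shell
systemctl restart containerd
```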
Edit the file
/var/lib/kubelet/kubeadm-flags.env and add the containerd runtime to the flags.
So the kubeadm-flags file would look something like this.
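A possible result is shown below. The first two flags are whatever was already in your file (the values here are assumptions); the two runtime flags are the ones we add, pointing at containerd's default socket:

```shell
# /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2 --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
```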
After changing the runtime we can restart the kubelet service.
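Again via systemd:

```shell
systemctl restart kubelet
```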
Now when we run
kubectl get nodes -o wide we see containerd as the runtime for the node we just changed.
NAME       STATUS                     ROLES                  AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION                             CONTAINER-RUNTIME
k8s-cn01   Ready                      control-plane,master   131m   v1.20.4   10.65.79.164   <none>        Ubuntu 20.04.2 LTS   5.4.0-67-generic                           docker://20.10.5
k8s-wn01   Ready,SchedulingDisabled   <none>                 117m   v1.20.4   10.65.79.131   <none>        Ubuntu 20.04.2 LTS   5.4.0-67-generic                           containerd://1.4.4
k8s-wn02   Ready                      <none>                 57m    v1.20.4   10.65.79.244   <none>        CentOS Linux 8       4.18.0-240.15.1.el8_3.centos.plus.x86_64   docker://20.10.5
The node we just changed is still cordoned. So we can uncordon it now.
root@k8s-cn01:~# kubectl uncordon k8s-wn01
node/k8s-wn01 uncordoned
root@k8s-cn01:~# kubectl get nodes
NAME       STATUS   ROLES                  AGE    VERSION
k8s-cn01   Ready    control-plane,master   143m   v1.20.4
k8s-wn01   Ready    <none>                 129m   v1.20.4
k8s-wn02   Ready    <none>                 69m    v1.20.4
If we check the namespaces on the node now, we see a new namespace, k8s.io. The moby namespace is now empty; no containers are running in it, as all the containers are now running in the k8s.io namespace.
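The same ctr commands from before can confirm this:

```shell
# k8s.io now shows up next to moby
ctr namespaces list

# The workloads now run here
ctr --namespace k8s.io containers list
```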
We have successfully changed the CRI. Now we can move on to the next node and repeat the process.