Setting Up a Multi-Node Kubernetes Cluster with kubeadm

Posted by Kubeguts on 2021-05-30

kubeadm lets us build a multi-node cluster, so containers can run distributed across different machines.

These notes walk through the setup on Ubuntu Linux.

Prerequisites

The following is taken from the official Kubernetes documentation:

Hardware requirements

  • 2 GB or more of RAM per machine (any less will leave little room for your apps).
  • 2 CPUs or more.
  • Full network connectivity between all machines in the cluster (public or private network is fine).
  • Unique hostname, MAC address, and product_uuid for every node. See here for more details.
  • Certain ports are open on your machines.
  • Swap disabled. You MUST disable swap in order for the kubelet to work properly.
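
The requirements above do not show how to turn swap off; here is a minimal sketch, assuming swap is configured through /etc/fstab:

# turn swap off for the running system
sudo swapoff -a
# comment out swap entries so it stays off after a reboot (assumes the swap entry lives in /etc/fstab)
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab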

Network configuration

Verify that every machine has a unique MAC address and product_uuid

  • Check the MAC address with ip link or ifconfig -a
  • Check the product_uuid with sudo cat /sys/class/dmi/id/product_uuid

Check the network adapters on each node

If a node has more than one network adapter, make sure the adapter it uses for cluster traffic can reach the other nodes over the local network.
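
For example, you can check which interface and source address are used to reach another node (10.140.0.3 below is just a placeholder; substitute one of your own node IPs):

ip route get 10.140.0.3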

Make sure iptables can see bridged traffic

The br_netfilter kernel module has to be loaded; it provides filtering of bridged IPv4, IPv6 and ARP packets, and enables firewalling of bridged traffic.

Run lsmod | grep br_netfilter and check for output; if there is none, the module is not loaded yet, so load it with sudo modprobe br_netfilter.

Running lsmod again shows that it is now loaded:

lsmod | grep br_netfilter
br_netfilter           24576  0
bridge                155648  1 br_netfilter

Next, make sure every node's iptables sees bridged traffic by setting net.bridge.bridge-nf-call-iptables to 1 in the sysctl config, using the following commands:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
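
To double-check that the settings took effect, you can read them back (an optional verification; both values should print 1):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables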

For more networking-related settings, see Network Plugin Requirements.

Install a Container Runtime

Docker is used here as the container runtime for the k8s cluster.

https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker

Installing Docker on Ubuntu 16.04

Install Docker Engine on Ubuntu

First remove any older versions:

sudo apt-get remove docker docker-engine docker.io containerd runc

Before installing Docker Engine, set up the Docker repository:

sudo apt-get update

sudo apt-get -y install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

Add Docker's official GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

On an x86_64/amd64 OS, point apt at the matching Docker Engine repository:

echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install Docker Engine:

sudo apt-get update

sudo apt-get -y install docker-ce docker-ce-cli containerd.io
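
You can optionally verify that Docker Engine works by running the hello-world image:

sudo docker run hello-world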

Enable and restart Docker:

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker

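During kubeadm's preflight checks later on you may see a warning that Docker uses the cgroupfs cgroup driver while systemd is recommended. A sketch of an /etc/docker/daemon.json that switches Docker to the systemd driver, based on the container-runtimes page linked above:

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
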
Once Docker is back up, you can add the current user to the docker group (optional; log out and back in afterwards for the group change to take effect):

sudo usermod -aG docker ${USER}

Install kubeadm, kubelet and kubectl

  • kubeadm: bootstraps the cluster

  • kubelet: installed on every node; it starts and runs the pods and containers

  • kubectl: the command-line tool for talking to the cluster, and the one a k8s admin uses most often

Update the apt package index:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

Download the Google Cloud public signing key:

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

Add the Kubernetes apt repository:

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Update the apt index again, then install kubelet, kubeadm and kubectl and pin their versions:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
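
To confirm what was installed and that the packages are held, you can check (optional):

kubeadm version -o short
kubectl version --client
kubelet --version
apt-mark showhold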

Commands to run only on the Master Node

Remember to switch to the root user first:

sudo -i

Initialize the k8s cluster with kubeadm:

kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address={your vm ip}

  • --pod-network-cidr: the CIDR range (10.244.0.0/16 here) allocated to the pod network
  • --apiserver-advertise-address: the IP address this master advertises for the API server

When the following message appears, the control plane has been set up successfully:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.140.0.2:6443 --token a0zaoy.ymz5anpekuromko4 --discovery-token-ca-cert-hash sha256:e6ace05b13b75c487608b7364b0bf43722f5a15ecd52251eed42dadef24defe2

Run the following as a regular user, so that this user can use kubectl with admin rights against the cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
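
A quick way to confirm that kubectl can now talk to the cluster (optional):

kubectl cluster-info
kubectl get pods -n kube-system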

The kubeadm join line from the output is what the Worker Nodes will use to join this Master's cluster:

kubeadm join 10.140.0.2:6443 --token a0zaoy.ymz5anpekuromko4 \
    --discovery-token-ca-cert-hash sha256:e6ace05b13b75c487608b7364b0bf43722f5a15ecd52251eed42dadef24defe2

Check the cluster with kubectl get nodes; the status is NotReady for now because the cluster has no pod network for the nodes to communicate over:

kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   7m12s   v1.21.1

Next, deploy a pod network; Weave Net is the recommended choice here:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

This creates the following resources in the k8s cluster:

serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
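
The Weave pods run as a DaemonSet in kube-system, and the node turns Ready once they are Running; you can watch them with:

kubectl get pods -n kube-system | grep weave-net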

Run kubectl get nodes again and the node is now Ready:

NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   11m   v1.21.1

Commands to run only on the Worker Nodes

Use the join command the Master Node printed earlier to join the k8s cluster:

kubeadm join 10.140.0.2:6443 --token a0zaoy.ymz5anpekuromko4 --discovery-token-ca-cert-hash sha256:e6ace05b13b75c487608b7364b0bf43722f5a15ecd52251eed42dadef24defe2

Output like the following means the node is joining:

[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

Note: your --token and --discovery-token-ca-cert-hash values will differ from the ones shown here.

Checking the Cluster status

Back on the Master node, check whether the other nodes joined successfully:

kubectl get nodes

NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   21m   v1.21.1
k8s-node     Ready    <none>                 36s   v1.21.1
k8s-node2    Ready    <none>                 66s   v1.21.1

Labeling node roles

With kubectl label you can mark k8s-node and the other nodes with the worker role:

kubectl label node <node name> node-role.kubernetes.io/worker=worker

node/<node name> labeled

After labeling, it looks like this:

kubectl get nodes

NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   27m     v1.21.1
k8s-node     Ready    worker                 6m5s    v1.21.1
k8s-node2    Ready    worker                 6m35s   v1.21.1
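
If a label ever needs to be removed again, the usual kubectl syntax with a trailing dash works:

kubectl label node <node name> node-role.kubernetes.io/worker-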

Regenerating the kubeadm join token and certificate hash

The token's default TTL is 24 hours, so the token obtained while installing the cluster above stops working the next day. The following steps generate a new one on the Master Node:

kubeadm token create

Then list the tokens with kubeadm token list; both the old token and the newly created one show up:

kubeadm token list

// output
TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION                                                 EXTRA GROUPS
1k3r7o.croz5ilwib3ae8c1   23h   2021-05-31T04:04:04Z   authentication,signing   <none>                                                      system:bootstrappers:kubeadm:default-node-token
a0zaoy.ymz5anpekuromko4   23h   2021-05-31T03:34:28Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

To re-derive the sha256 hash of the CA certificate, hash /etc/kubernetes/pki/ca.crt with the following command:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

// output
e6ace05b13b75c487608b7364b0bf43722f5a15ecd52251eed42dadef24defe2

Then plug the new values into kubeadm join <ip> --token <your token> --discovery-token-ca-cert-hash sha256:<your hash> and the node can join the cluster again.
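
Alternatively, kubeadm can print a complete, ready-to-use join command (new token included) in one step:

kubeadm token create --print-join-command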

References