Installing Kubernetes 1.8.4 with kubeadm on CentOS

Preparation

Perform the following steps on every host.

Configure the hosts

Set the hostnames (run the matching command on each host):

$ hostnamectl --static set-hostname k8s-master
$ hostnamectl --static set-hostname k8s-node-1
$ hostnamectl --static set-hostname k8s-node-2

Configure /etc/hosts

$ echo "172.31.21.226  k8s-master
172.31.21.147 k8s-node-1
172.31.21.148 k8s-node-2" >> /etc/hosts
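As a quick sanity check that each name resolves, `getent hosts k8s-master` does the job on a live host. The helper below is a sketch that parses any hosts(5)-format file, so it can be tried anywhere:

```shell
# Look up a hostname in a hosts(5)-format file and print its IP.
# $1 = hosts file, $2 = hostname
hosts_lookup() {
  awk -v n="$2" '!/^#/ { for (i = 2; i <= NF; i++) if ($i == n) print $1 }' "$1"
}

# Usage:
#   hosts_lookup /etc/hosts k8s-node-1
```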

Disable the firewall and SELinux

$ systemctl stop firewalld && systemctl disable firewalld
$ iptables -P FORWARD ACCEPT
$ sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Configure kernel parameters

$ echo "net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0" >> /etc/sysctl.d/k8s.conf
$ sysctl -p /etc/sysctl.d/k8s.conf

Disable swap

$ swapoff -a

To disable it permanently, comment out the swap entry in /etc/fstab:

$ vim /etc/fstab
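Instead of editing by hand, the swap entry can be commented out with sed. This is a sketch that assumes the fstab entry has "swap" as one of its whitespace-separated fields:

```shell
# Comment out any uncommented fstab line whose fields include "swap".
# Writes a .bak backup of the original file.
comment_swap() {
  sed -i.bak -E '/\sswap\s/ s/^([^#])/#\1/' "$1"
}

# Usage on a real host:
#   comment_swap /etc/fstab
```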

Download the offline installation packages

The latest Kubernetes packages are hosted on Google servers, so downloading them from mainland China requires a proxy.

$ wget https://packages.cloud.google.com/yum/pool/aeaad1e283c54876b759a089f152228d7cd4c049f271125c23623995b8e76f96-kubeadm-1.8.4-0.x86_64.rpm
$ wget https://packages.cloud.google.com/yum/pool/a9db28728641ddbf7f025b8b496804d82a396d0ccb178fffd124623fb2f999ea-kubectl-1.8.4-0.x86_64.rpm
$ wget https://packages.cloud.google.com/yum/pool/1acca81eb5cf99453f30466876ff03146112b7f12c625cb48f12508684e02665-kubelet-1.8.4-0.x86_64.rpm
$ wget https://packages.cloud.google.com/yum/pool/79f9ba89dbe7000e7dfeda9b119f711bb626fe2c2d56abeb35141142cda00342-kubernetes-cni-0.5.1-1.x86_64.rpm

Install Docker

Perform the following steps on every host.
Kubernetes 1.8.4 currently supports Docker 17.03.

Add the Aliyun repository

$ yum install -y yum-utils
$ yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install the specified Docker version

$ yum install -y --setopt=obsoletes=0 \
docker-ce-17.03.2.ce-1.el7.centos \
docker-ce-selinux-17.03.2.ce-1.el7.centos

Configure a Docker registry mirror

$ sudo mkdir -p /etc/docker
$ sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://xxx.mirror.aliyuncs.com"]
}
EOF
$ sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker && sudo systemctl status docker
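Before restarting Docker it is worth validating daemon.json, since a malformed file keeps the daemon from starting. A minimal check (this assumes a python3 interpreter is available; on CentOS 7, `python -m json.tool` works the same way):

```shell
# Validate that a file contains well-formed JSON; non-zero exit on failure.
check_daemon_json() {
  python3 -m json.tool "$1" > /dev/null
}

# Usage:
#   check_daemon_json /etc/docker/daemon.json && echo "daemon.json OK"
```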

Install Kubernetes

Perform the following steps on every host.

Start the kubelet

$ yum -y localinstall *.rpm
$ yum install -y socat
$ sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
$ systemctl daemon-reload && systemctl restart kubelet && systemctl enable kubelet && systemctl status kubelet
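The sed above switches the kubelet to the cgroupfs driver because that is what Docker 17.03 uses by default; the kubelet's `--cgroup-driver` must match whatever `docker info` reports. A small helper to extract the driver from that output:

```shell
# Print the "Cgroup Driver" value from `docker info` output on stdin.
cgroup_driver() {
  awk -F': *' '/^ *Cgroup Driver/ { print $2 }'
}

# Usage on a real host:
#   docker info 2>/dev/null | cgroup_driver
```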

At this point the kubelet will still be reporting errors; ignore them for now (this is expected until `kubeadm init` or `kubeadm join` provides it with a configuration).

$ journalctl -u kubelet --no-pager

Prepare the Docker images

gcr.io/google_containers/kube-apiserver-amd64  v1.8.4
gcr.io/google_containers/kube-controller-manager-amd64 v1.8.4
gcr.io/google_containers/kube-proxy-amd64 v1.8.4
gcr.io/google_containers/kube-scheduler-amd64 v1.8.4
quay.io/coreos/flannel v0.9.1-amd64
gcr.io/google_containers/k8s-dns-sidecar-amd64 1.14.5
gcr.io/google_containers/k8s-dns-kube-dns-amd64 1.14.5
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 1.14.5
gcr.io/google_containers/etcd-amd64 3.0.17
gcr.io/google_containers/pause-amd64 3.0

They can be pulled to the local machine with a script.
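A sketch of such a script: pull each gcr.io image through a mirror, then retag it back to the gcr.io name kubeadm expects. The `MIRROR` registry here is an assumption; substitute any registry that mirrors google_containers. The quay.io flannel image is usually reachable directly and is left out.

```shell
# Hypothetical mirror registry -- replace with one you can reach.
MIRROR="registry.cn-hangzhou.aliyuncs.com/google_containers"

# Map a gcr.io/google_containers image reference to its mirror equivalent.
mirror_name() {
  echo "$1" | sed "s|^gcr.io/google_containers|$MIRROR|"
}

IMAGES="
gcr.io/google_containers/kube-apiserver-amd64:v1.8.4
gcr.io/google_containers/kube-controller-manager-amd64:v1.8.4
gcr.io/google_containers/kube-proxy-amd64:v1.8.4
gcr.io/google_containers/kube-scheduler-amd64:v1.8.4
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
gcr.io/google_containers/etcd-amd64:3.0.17
gcr.io/google_containers/pause-amd64:3.0
"

# Pull from the mirror, retag to the gcr.io name, drop the mirror tag.
pull_all() {
  for img in $IMAGES; do
    m=$(mirror_name "$img")
    docker pull "$m" && docker tag "$m" "$img" && docker rmi "$m"
  done
}
```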

Configure the Kubernetes cluster

Initialize the master

$ kubeadm init --apiserver-advertise-address=172.31.21.226 --kubernetes-version=v1.8.4 --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.21.226]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 24.501140 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s-master as master by adding a label and a taint
[markmaster] Master k8s-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: d87240.989b8aa6b0039283
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join --token d87240.989b8aa6b0039283 172.31.21.226:6443 --discovery-token-ca-cert-hash sha256:4c2b5469ddc4f49ba15f3146bea5bf9ba8f67f68bdc9ef1ff6cb026d39b94dea

Configure a user to access the cluster with kubectl

$ mkdir -p $HOME/.kube && \
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && \
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster status

$ kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system etcd-k8s-master 1/1 Running 0 1m 172.31.21.226 k8s-master
kube-system kube-apiserver-k8s-master 1/1 Running 0 1m 172.31.21.226 k8s-master
kube-system kube-controller-manager-k8s-master 1/1 Running 0 1m 172.31.21.226 k8s-master
kube-system kube-dns-545bc4bfd4-84pjx 0/3 Pending 0 2m <none> <none>
kube-system kube-proxy-7d2tc 1/1 Running 0 2m 172.31.21.226 k8s-master
kube-system kube-scheduler-k8s-master 1/1 Running 0 1m 172.31.21.226 k8s-master

$ kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}

Install the Pod network

$ kubectl apply -f https://raw.githubusercontent.com/batizhao/dockerfile/master/k8s/flannel/kube-flannel.yml
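Rather than eyeballing the pod list, a small helper can count pods that are not yet Running (a sketch; column 4 is STATUS in the `--all-namespaces` listing):

```shell
# Count non-Running pods in `kubectl get pod --all-namespaces` output
# on stdin; skips the header line.
not_running() {
  awk 'NR > 1 && $4 != "Running" { n++ } END { print n + 0 }'
}

# Usage on the master -- poll until everything is up:
#   while [ "$(kubectl get pod --all-namespaces | not_running)" -gt 0 ]; do
#     sleep 5
#   done
```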

Running `kubectl get pod --all-namespaces -o wide` again should now show kube-dns-545bc4bfd4-84pjx as Running. If you run into problems, the following commands can help with troubleshooting:

$ kubectl -n kube-system describe pod kube-dns-545bc4bfd4-84pjx
$ journalctl -u kubelet --no-pager
$ journalctl -u docker --no-pager

Join the nodes to the cluster

Run the following on each node:

$ kubeadm join --token d87240.989b8aa6b0039283 172.31.21.226:6443 --discovery-token-ca-cert-hash sha256:4c2b5469ddc4f49ba15f3146bea5bf9ba8f67f68bdc9ef1ff6cb026d39b94dea
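As the init output above warns, the bootstrap token expires after 24 hours; on the master a fresh one can be created with `kubeadm token create`. The helper below just assembles the join command from a token, API server address, and CA cert hash (the usage values are placeholders):

```shell
# Build a `kubeadm join` command line from its three variable parts.
# $1 = token, $2 = master host:port, $3 = CA cert sha256 hash
make_join_cmd() {
  printf 'kubeadm join --token %s %s --discovery-token-ca-cert-hash sha256:%s\n' \
    "$1" "$2" "$3"
}

# Usage:
#   make_join_cmd d87240.989b8aa6b0039283 172.31.21.226:6443 4c2b5469ddc4...
```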

To control the cluster from any other machine:

$ mkdir -p $HOME/.kube
$ scp root@172.31.21.226:/etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes

On the master, confirm that all nodes are Ready

$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 7m v1.8.4
k8s-node-1 Ready <none> 22s v1.8.4
k8s-node-2 Ready <none> 15s v1.8.4

$ kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system etcd-k8s-master 1/1 Running 0 51m 172.31.21.226 k8s-master
kube-system kube-apiserver-k8s-master 1/1 Running 0 51m 172.31.21.226 k8s-master
kube-system kube-controller-manager-k8s-master 1/1 Running 0 51m 172.31.21.226 k8s-master
kube-system kube-dns-545bc4bfd4-84pjx 3/3 Running 0 52m 10.244.0.3 k8s-master
kube-system kube-flannel-ds-gf2hp 1/1 Running 0 6m 172.31.21.226 k8s-master
kube-system kube-flannel-ds-k8wc9 1/1 Running 0 24s 172.31.21.147 k8s-node-1
kube-system kube-flannel-ds-v7jpv 1/1 Running 0 10s 172.31.21.148 k8s-node-2
kube-system kube-proxy-7d2tc 1/1 Running 0 52m 172.31.21.226 k8s-master
kube-system kube-proxy-b9z97 1/1 Running 0 10s 172.31.21.148 k8s-node-2
kube-system kube-proxy-ksvwp 1/1 Running 0 24s 172.31.21.147 k8s-node-1
kube-system kube-scheduler-k8s-master 1/1 Running 0 51m 172.31.21.226 k8s-master

Install the dashboard

Prepare the Docker images

gcr.io/google_containers/kubernetes-dashboard-amd64  v1.8.0

It can be pulled to the local machine with a script.

Initialize

$ kubectl apply -f https://raw.githubusercontent.com/batizhao/dockerfile/master/k8s/kubernetes-dashboard/kubernetes-dashboard.yaml
$ kubectl apply -f https://raw.githubusercontent.com/batizhao/dockerfile/master/k8s/kubernetes-dashboard/kubernetes-dashboard-admin.rbac.yaml

Confirm the dashboard status

$ kubectl get pod --all-namespaces -o wide
kube-system kubernetes-dashboard-7486b894c6-2l4c5 1/1 Running 0 17s 10.244.1.3 k8s-node-1

Access the dashboard

https://172.31.21.226:30000

Alternatively, run the following on any machine (e.g., my Mac):

$ kubectl proxy

Then visit: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Get the login token

$ kubectl -n kube-system get secret | grep kubernetes-dashboard-admin
kubernetes-dashboard-admin-token-r95kv kubernetes.io/service-account-token 3 7m
$ kubectl describe -n kube-system secret/kubernetes-dashboard-admin-token-r95kv

eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1yOTVrdiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjM4MWI4OWQzLWQ0ZDctMTFlNy1hY2U3LTAwMGMyOWMyMTdlNSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5l
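The token can also be pulled out non-interactively. This sketch extracts the `token:` line from `kubectl describe` output; the secret-name lookup in the usage note is an assumption about the secret's naming:

```shell
# Print the value of the "token:" line from `kubectl describe secret`
# output on stdin.
extract_token() {
  awk '/^token:/ { print $2 }'
}

# Usage on the master:
#   SECRET=$(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-admin/ { print $1; exit }')
#   kubectl -n kube-system describe secret "$SECRET" | extract_token
```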

Install Heapster

Prepare the Docker images

gcr.io/google_containers/heapster-amd64:v1.4.0
gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3

They can be pulled to the local machine with a script.

Initialize

$ kubectl apply -f https://raw.githubusercontent.com/batizhao/dockerfile/master/k8s/heapster/heapster-rbac.yaml
$ kubectl apply -f https://raw.githubusercontent.com/batizhao/dockerfile/master/k8s/heapster/grafana.yaml
$ kubectl apply -f https://raw.githubusercontent.com/batizhao/dockerfile/master/k8s/heapster/heapster.yaml
$ kubectl apply -f https://raw.githubusercontent.com/batizhao/dockerfile/master/k8s/heapster/influxdb.yaml