Setting Up a Kubernetes v1.16.3 Cluster with kubeadm
1. Environment Preparation
1.1 Lab environment
Role | IP address | Hostname | Docker version | Hardware | OS | Kernel |
---|---|---|---|---|---|---|
master | 10.0.0.130 | k8s-master | 18.09.9 | 2c2g | CentOS7.6 | 3.10.0-957.el7.x86_64 |
node1 | 10.0.0.131 | k8s-node1 | 18.09.9 | 2c1g | CentOS7.6 | 3.10.0-957.el7.x86_64 |
node2 | 10.0.0.132 | k8s-node2 | 18.09.9 | 2c1g | CentOS7.6 | 3.10.0-957.el7.x86_64 |
1.2 Configure /etc/hosts entries on every node
cat >> /etc/hosts <<EOF
10.0.0.130 k8s-master
10.0.0.131 k8s-node1
10.0.0.132 k8s-node2
EOF
1.3 Disable the firewall and SELinux
// Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
// Disable SELinux
# Temporarily (effective immediately)
setenforce 0
# Permanently (effective after a reboot)
sed -i '7s/enforcing/disabled/' /etc/selinux/config
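The line-number address (`7s`) assumes `SELINUX=` sits on line 7 of the stock config file. A variant keyed on the setting name itself is more robust; the sketch below dry-runs it on a scratch copy rather than the real file:

```shell
# Scratch file standing in for /etc/selinux/config
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux.config

# Match the key rather than assuming it is on line 7
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /tmp/selinux.config
grep '^SELINUX=' /tmp/selinux.config   # prints: SELINUX=disabled
```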
1.4 Create the /etc/sysctl.d/k8s.conf file with the following content
// Write the following into the file
cat >/etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
// Load the br_netfilter module and apply the settings
modprobe br_netfilter && sysctl -p /etc/sysctl.d/k8s.conf
1.5 Install IPVS
The script below creates the /etc/sysconfig/modules/ipvs.modules file so that the required
kernel modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4
to check whether the required kernel modules loaded correctly.
// Write the following into the file
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
// Make the script executable, run it, and verify the modules are loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4 15053 0
nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145497 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 133095 2 ip_vs,nf_conntrack_ipv4
libcrc32c 12644 3 xfs,ip_vs,nf_conntrack
Install ipset and ipvsadm (convenient for inspecting IPVS proxy rules)
yum -y install ipset ipvsadm
1.6 Synchronize server time
// Install chrony
yum -y install chrony
// Change the NTP server to Aliyun's
sed -i.bak '3,6d' /etc/chrony.conf && sed -i '3cserver ntp1.aliyun.com iburst' \
/etc/chrony.conf
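This sed pair is sensitive to the stock file layout (it deletes lines 3-6 and then rewrites line 3), so it is worth dry-running on a scratch copy first. A sketch assuming the default CentOS 7 chrony.conf layout, with the four pool servers on lines 3-6:

```shell
# Scratch copy mimicking the stock CentOS 7 chrony.conf
cat > /tmp/chrony.conf <<'EOF'
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst

driftfile /var/lib/chrony/drift
EOF

# Same edit as above: delete the four pool servers, then make line 3 the Aliyun server
sed -i.bak '3,6d' /tmp/chrony.conf
sed -i '3cserver ntp1.aliyun.com iburst' /tmp/chrony.conf
grep '^server' /tmp/chrony.conf   # prints: server ntp1.aliyun.com iburst
```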
// Start chronyd and enable it at boot
systemctl start chronyd && systemctl enable chronyd
// Check the synchronization result
chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 120.25.115.20 2 6 37 13 +194us[+6131us] +/- 33ms
1.7 Disable the swap partition
Edit the /etc/fstab file to comment out the swap auto-mount entry, then use free -m
to confirm that swap is off.
// Turn off swap immediately
swapoff -a
// Comment out the swap entry in fstab
sed -i '/^\/dev\/mapper\/centos-swap/c#/dev/mapper/centos-swap swap swap defaults 0 0' /etc/fstab
// Verify that swap is off
free -m
total used free shared buff/cache available
Mem: 1994 682 612 9 699 1086
Swap: 0 0 0
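The `c` (change) command in the fstab edit replaces the matching line wholesale, and silently does nothing if the device path differs (e.g. a plain partition or UUID entry), so check the result afterwards. A dry run on a scratch fstab shows the effect:

```shell
# Scratch fstab with a typical CentOS LVM swap entry
cat > /tmp/fstab <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Same edit as above: replace the swap line with a commented-out copy
sed -i '/^\/dev\/mapper\/centos-swap/c#/dev/mapper/centos-swap swap swap defaults 0 0' /tmp/fstab
grep centos-swap /tmp/fstab   # prints the line, now starting with '#'
```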
Also tune the swappiness parameter by appending the following line to /etc/sysctl.d/k8s.conf
cat >>/etc/sysctl.d/k8s.conf <<EOF
vm.swappiness=0
EOF
// Apply the settings
sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
1.8 Install Docker 18.09.9
1. Add the Aliyun yum repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
2. List available versions
yum list docker-ce --showduplicates | sort -r
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Available Packages
docker-ce.x86_64 3:19.03.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.4-3.el7 docker-ce-stable
......
docker-ce.x86_64 3:18.09.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable
......
3. Install Docker 18.09.9
yum -y install docker-ce-18.09.9-3.el7 docker-ce-cli-18.09.9
4. Start Docker and enable it at boot
systemctl enable docker && systemctl start docker
5. Configure the Aliyun Docker registry mirror
cat > /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://gqk8w9va.mirror.aliyuncs.com"]
}
EOF
6. Restart Docker after the change
systemctl restart docker
7. Verify the mirror configuration
docker info
Look for the Registry Mirrors section
Registry Mirrors:
https://gqk8w9va.mirror.aliyuncs.com/
8. Check the Docker version
docker version
Client:
Version: 18.09.9
API version: 1.39
Go version: go1.11.13
Git commit: 039a7df9ba
Built: Wed Sep 4 16:51:21 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.9
API version: 1.39 (minimum version 1.12)
Go version: go1.11.13
Git commit: 039a7df
Built: Wed Sep 4 16:22:32 2019
OS/Arch: linux/amd64
Experimental: false
1.9 Change the Docker cgroup driver to systemd
In /usr/lib/systemd/system/docker.service, change the line
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
to
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
If you skip this change, you may hit the following warning when adding worker nodes
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
Please follow the guide at https://kubernetes.io/docs/setup/cri/
// Or make the change with the following sed command
sed -i.bak "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service
// Restart Docker
systemctl daemon-reload && systemctl restart docker
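The substitution rewrites the entire ExecStart line, so it is worth verifying on a copy before restarting the daemon. A dry run against a scratch copy of the relevant part of the unit file:

```shell
# Scratch copy of the relevant part of /usr/lib/systemd/system/docker.service
cat > /tmp/docker.service <<'EOF'
[Service]
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
EOF

# Same substitution as above, applied to the scratch copy
sed -i.bak "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /tmp/docker.service
grep '^ExecStart=' /tmp/docker.service
```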
1.10 Install kubeadm
The official Google repository requires unrestricted internet access (it is blocked in mainland China)
cat >/etc/yum.repos.d/kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Alternatively, use the Aliyun yum repository
cat >/etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubeadm, kubelet, and kubectl (the Aliyun repo tracks the latest upstream release, so pin the version)
// Install version 1.16.3
yum -y install kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3
// Check the version
kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", ...}
Enable kubelet at boot
systemctl enable kubelet
Set up kubectl command auto-completion
yum -y install bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
At this point the base environment is ready!
2. Initializing the Cluster
2.1 On the master node, create the kubeadm init configuration file
cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
# This can also be a DNS name
controlPlaneEndpoint: "10.0.0.130:6443"
networking:
  serviceSubnet: "10.96.0.0/16"
  # Pod network CIDR
  podSubnet: "10.100.0.1/16"
  dnsDomain: "cluster.local"
EOF
2.2 Initialize the master
⚠️ To redo the initialization, first run kubeadm reset -f
#kubeadm init --config=kubeadm-config.yaml --upload-certs
Full output
kubeadm init --config=kubeadm-config.yaml
[init] Using Kubernetes version: v1.16.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.130 10.0.0.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [10.0.0.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [10.0.0.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.501777 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: px979r.mphk9ee5ya8fgy44
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 10.0.0.130:6443 --token px979r.mphk9ee5ya8fgy44 \
--discovery-token-ca-cert-hash sha256:5e7c7cd1cc1f86c0761e54b9380de22968b6b221cb98939c14ab2942223f6f51 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.130:6443 --token px979r.mphk9ee5ya8fgy44 \
--discovery-token-ca-cert-hash sha256:5e7c7cd1cc1f86c0761e54b9380de22968b6b221cb98939c14ab2942223f6f51
#Images pulled by kubeadm init --config=kubeadm-config.yaml --upload-certs on the master
gcr.azk8s.cn/google_containers/kube-proxy v1.16.3 9b65a0f78b09 4 months ago 86.1MB
gcr.azk8s.cn/google_containers/kube-apiserver v1.16.3 df60c7526a3d 4 months ago 217MB
gcr.azk8s.cn/google_containers/kube-controller-manager v1.16.3 bb16442bcd94 4 months ago 163MB
gcr.azk8s.cn/google_containers/kube-scheduler v1.16.3 98fecf43a54f 4 months ago 87.3MB
gcr.azk8s.cn/google_containers/etcd 3.3.15-0 b2756210eeab 6 months ago 247MB
gcr.azk8s.cn/google_containers/coredns 1.6.2 bf261d157914 7 months ago 44.1MB
gcr.azk8s.cn/google_containers/pause 3.1 da86e6ba6ca1 2 years ago 742kB
Copy the kubeconfig file
// Here $HOME is /root
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
2.3 Add nodes to the cluster
Run the same steps on node1 and node2.
Copy the $HOME/.kube/config file from the master node to the corresponding path on each node.
1. Create the directory; here $HOME is /root
mkdir -p $HOME/.kube
2. Copy the config file from the master into $HOME/.kube on node1 and node2
scp k8s-master:~/.kube/config $HOME/.kube
3. Fix the ownership
chown $(id -u):$(id -g) $HOME/.kube/config
Join node1 and node2 to the cluster.
This uses the token and sha256 hash printed at the end of the master initialization in section 2.2
#kubeadm join 10.0.0.130:6443 --token px979r.mphk9ee5ya8fgy44 --discovery-token-ca-cert-hash sha256:5e7c7cd1cc1f86c0761e54b9380de22968b6b221cb98939c14ab2942223f6f51
Output
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
If you have forgotten the token or sha256 hash, look them up on the master with the following commands
// List tokens
#kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
px979r.mphk9ee5ya8fgy44 20h 2020-03-18T13:49:48+08:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
// Compute the sha256 hash
#openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
5e7c7cd1cc1f86c0761e54b9380de22968b6b221cb98939c14ab2942223f6f51
// Generate a ready-made join command containing both the token and the hash
#kubeadm token create --print-join-command
kubeadm join 10.0.0.130:6443 --token 9b28zg.oyt0kvvpmtrem4bg --discovery-token-ca-cert-hash sha256:5e7c7cd1cc1f86c0761e54b9380de22968b6b221cb98939c14ab2942223f6f51
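The discovery hash is just the SHA-256 digest of the cluster CA's DER-encoded public key, so the openssl pipeline above can be sanity-checked against any throwaway certificate, no cluster required (the paths below are scratch files, not real cluster material):

```shell
# Generate a throwaway self-signed CA standing in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=test-ca' \
  -keyout /tmp/test-ca.key -out /tmp/test-ca.crt 2>/dev/null

# Same pipeline as above: extract the public key, DER-encode it, hash it
hash=$(openssl x509 -pubkey -in /tmp/test-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "$hash"   # 64 hex characters, like the value kubeadm prints
```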
Checking the nodes on the master shows them all as NotReady, because no network plugin has been installed yet; we will install Calico next (see the official Calico plugin docs)
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 19m v1.16.3
k8s-node1 NotReady <none> 4m10s v1.16.3
k8s-node2 NotReady <none> 4m3s v1.16.3
2.4 Install the Calico network plugin on the master node
// Download the manifest
wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
wget https://kuboard.cn/install-script/calico/calico-3.9.2.yaml
Because kubeadm-config.yaml above specified the pod network CIDR, change line 620 of the file (the CALICO_IPV4POOL_CIDR value) to
value: "10.100.0.1/16"
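Editing the manifest by line number is brittle across Calico releases; a sed keyed on the CALICO_IPV4POOL_CIDR variable name avoids that. A sketch, dry-run here on a scratch fragment mimicking the stock manifest layout:

```shell
# Scratch fragment mimicking the CALICO_IPV4POOL_CIDR block in calico-3.9.2.yaml
cat > /tmp/calico-cidr.yaml <<'EOF'
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
EOF

# Rewrite the value on the line following the variable name,
# instead of relying on it being line 620
sed -i '/CALICO_IPV4POOL_CIDR/{n;s#value: ".*"#value: "10.100.0.1/16"#}' /tmp/calico-cidr.yaml
grep 'value:' /tmp/calico-cidr.yaml
```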
// After the change, install the Calico network plugin
#kubectl apply -f calico-3.9.2.yaml
// Wait a moment after installation, then check the pod status
#kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-dc6cb64cb-8sh59 1/1 Running 0 6m22s
calico-node-89s9k 1/1 Running 0 6m22s
calico-node-dkt7w 1/1 Running 0 6m22s
calico-node-tgg2h 1/1 Running 0 6m22s
coredns-667f964f9b-7hrj9 1/1 Running 0 33m
coredns-667f964f9b-8q7sh 1/1 Running 0 33m
etcd-k8s-master 1/1 Running 0 33m
kube-apiserver-k8s-master 1/1 Running 0 32m
kube-controller-manager-k8s-master 1/1 Running 0 33m
kube-proxy-b2r5d 1/1 Running 0 12m
kube-proxy-nd982 1/1 Running 0 11m
kube-proxy-zh6cz 1/1 Running 0 33m
kube-scheduler-k8s-master 1/1 Running 0 32m
// Check the node status
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 31m v1.16.3
k8s-node1 Ready <none> 9m46s v1.16.3
k8s-node2 Ready <none> 9m22s v1.16.3
#Images pulled by kubectl apply -f calico.yaml when installing Calico
calico/node v3.9.2 14a380c92c40 5 months ago 195MB
calico/cni v3.9.2 c0d73dd53e71 5 months ago 160MB
calico/pod2daemon-flexvol v3.9.2 523f0356e07b 5 months ago 9.78MB
#Containers started
03df44242d90 calico/node "start_runit" 8 minutes ago Up 8 minutes k8s_calico-node_calico-node-89s9k_kube-system_ff5384c4-72b2-4bfa-b8ea-77f38c2a2112_0
c2a56feedc7c calico/pod2daemon-flexvol "/usr/local/bin/flex…" 9 minutes ago Exited (0) 9 minutes ago k8s_flexvol-driver_calico-node-89s9k_kube-system_ff5384c4-72b2-4bfa-b8ea-77f38c2a2112_0
ca9febcebaa8 c0d73dd53e71 "/install-cni.sh" 10 minutes ago Exited (0) 10 minutes ago k8s_install-cni_calico-node-89s9k_kube-system_ff5384c4-72b2-4bfa-b8ea-77f38c2a2112_0
b581894b1b91 calico/cni "/opt/cni/bin/calico…" 10 minutes ago Exited (0) 10 minutes ago k8s_upgrade-ipam_calico-node-89s9k_kube-system_ff5384c4-72b2-4bfa-b8ea-77f38c2a2112_0
2.5 Install the Dashboard (optional)
Download the manifest and modify it
// Download the manifest: v2.0.0-rc3 has a Chinese localization, v2.0.0-beta8 is the English version
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc3/aio/deploy/recommended.yaml
// Change the Service to NodePort type
Add a line below line 42:
nodePort: 30001
Add a line below line 44:
type: NodePort
// Original content
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
// After the change
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001  # added: pin the NodePort
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort  # added: change the Service type to NodePort
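The two additions can also be scripted instead of edited by line number. A sketch using sed on a scratch copy of the Service block (the indentation is assumed to match the stock recommended.yaml):

```shell
# Scratch copy of the stock Service spec from recommended.yaml
cat > /tmp/dash-svc.yaml <<'EOF'
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
EOF

# Append nodePort after targetPort, and type after the selector label
sed -i 's/^      targetPort: 8443$/&\n      nodePort: 30001/' /tmp/dash-svc.yaml
sed -i 's/^    k8s-app: kubernetes-dashboard$/&\n  type: NodePort/' /tmp/dash-svc.yaml
cat /tmp/dash-svc.yaml
```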
Deploy the dashboard
#kubectl apply -f recommended.yaml
#Images pulled
kubernetesui/metrics-scraper v1.0.3 3327f0dbcb4a 6 weeks ago 40.1MB
kubernetesui/dashboard v2.0.0-beta8 eb51a3597525 3 months ago 90.8MB
#Containers started
8bc24b355d78 kubernetesui/dashboard "/dashboard --insecu…" 7 minutes ago Up 7 minutes k8s_kubernetes-dashboard_kubernetes-dashboard-5996555fd8-2ppc5_kubernetes-dashboard_7f46632c-8b7b-41e4-aeb3-a1fbadd29782_0
Check the dashboard's status and its externally reachable port
// Check the dashboard pod status
#kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-5996555fd8-2ppc5 1/1 Running 0 8m16s
// Find the external NodePort; the namespace is kubernetes-dashboard
#kubectl get svc -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.96.142.172 <none> 443:30001/TCP 8m37s
Access the dashboard through port 30001 found above; note that it is https.
⚠️ With k8s 1.16.3 the dashboard version is 2.0.0-beta8; only Firefox can open it, while other browsers refuse the page with a certificate error.
In Firefox, since the dashboard ships a self-signed https certificate that the browser does not trust, force the exception and continue.
Next, create a user with full cluster permissions to log in to the Dashboard.
// Create the admin.yaml file
cat > admin.yaml <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: admin
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
EOF
// Apply it
kubectl apply -f admin.yaml
// Find the token secret
kubectl get secret -n kube-system|grep admin-token
admin-token-j7sfh kubernetes.io/service-account-token 3 23s
// Decode the base64 token string; use the secret name found by the previous command. This prints a very long string
kubectl get secret admin-token-j7sfh -o jsonpath={.data.token} -n kube-system |base64 -d
# Or do it all with this one command
kubectl get secret `kubectl get secret -n kube-system|grep admin-token|awk '{print $1}'` -o jsonpath={.data.token} -n kube-system |base64 -d && echo
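The `.data.token` field of the secret holds the JWT base64-encoded, which is why the pipeline ends in `base64 -d`. The decode step can be sanity-checked offline with a dummy value (not a real token):

```shell
# A dummy stand-in for a service-account JWT (real ones come from the cluster)
token='eyJhbGciOiJSUzI1NiJ9.dummy-payload.dummy-sig'

# What the secret stores vs. what the pipeline recovers
encoded=$(printf '%s' "$token" | base64 -w0)
printf '%s' "$encoded" | base64 -d && echo
```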
Paste the decoded token string into the dashboard login page, selecting the Token option to log in.
[Screenshot: the dashboard home page after logging in]
2.6 Install Kuboard (optional)
Kuboard is a graphical management UI for Kubernetes.
Install Kuboard
kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml
kubectl apply -f https://addons.kuboard.cn/metrics-server/0.3.6/metrics-server.yaml
Check the running status
#kubectl get pods -l k8s.eip.work/name=kuboard -n kube-system
NAME READY STATUS RESTARTS AGE
kuboard-756d46c4d4-tvhjq 1/1 Running 0 40s
Get a token
// Get the administrator token
kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d && echo
// Get a read-only token
kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-viewer | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d && echo
Access Kuboard
The Kuboard Service is exposed as a NodePort on port 32567
and can be reached from any node in the cluster.
[Screenshot: the Kuboard login page]
Images pulled and containers started by Kuboard
#Containers started
5d800a6ee8ad eipwork/kuboard "/entrypoint.sh" 9 minutes ago Up 9 minutes k8s_kuboard_kuboard-756d46c4d4-tvhjq_kube-system_6b88b7b5-64d8-4af3-8065-999b19722c86_0
#Image pulled; version 1.0.8.2 in this article
eipwork/kuboard latest c6d652bbdf90 About an hour ago 180MB
That completes the kubeadm installation of k8s 1.16.3!
Containers started and images pulled on the 1.16.3 master
#Images pulled
kubernetesui/metrics-scraper v1.0.3 3327f0dbcb4a 6 weeks ago 40.1MB
kubernetesui/dashboard v2.0.0-beta8 eb51a3597525 3 months ago 90.8MB
gcr.azk8s.cn/google_containers/kube-apiserver v1.16.3 df60c7526a3d 4 months ago 217MB
gcr.azk8s.cn/google_containers/kube-proxy v1.16.3 9b65a0f78b09 4 months ago 86.1MB
gcr.azk8s.cn/google_containers/kube-controller-manager v1.16.3 bb16442bcd94 4 months ago 163MB
gcr.azk8s.cn/google_containers/kube-scheduler v1.16.3 98fecf43a54f 4 months ago 87.3MB
calico/node v3.9.2 14a380c92c40 5 months ago 195MB
calico/cni v3.9.2 c0d73dd53e71 5 months ago 160MB
calico/pod2daemon-flexvol v3.9.2 523f0356e07b 5 months ago 9.78MB
gcr.azk8s.cn/google_containers/etcd 3.3.15-0 b2756210eeab 6 months ago 247MB
gcr.azk8s.cn/google_containers/coredns 1.6.2 bf261d157914 7 months ago 44.1MB
kubernetesui/metrics-scraper v1.0.1 709901356c11 8 months ago 40.1MB
gcr.azk8s.cn/google_containers/pause 3.1 da86e6ba6ca1 2 years ago 742kB
#Containers started
8bc24b355d78 kubernetesui/dashboard "/dashboard --insecu…" 19 minutes ago Up 19 minutes k8s_kubernetes-dashboard_kubernetes-dashboard-5996555fd8-2ppc5_kubernetes-dashboard_7f46632c-8b7b-41e4-aeb3-a1fbadd29782_0
4ebd793dc31f gcr.azk8s.cn/google_containers/pause:3.1 "/pause" 19 minutes ago Up 19 minutes k8s_POD_kubernetes-dashboard-5996555fd8-2ppc5_kubernetes-dashboard_7f46632c-8b7b-41e4-aeb3-a1fbadd29782_0
03df44242d90 calico/node "start_runit" 6 hours ago Up 6 hours k8s_calico-node_calico-node-89s9k_kube-system_ff5384c4-72b2-4bfa-b8ea-77f38c2a2112_0
c2a56feedc7c calico/pod2daemon-flexvol "/usr/local/bin/flex…" 6 hours ago Exited (0) 6 hours ago k8s_flexvol-driver_calico-node-89s9k_kube-system_ff5384c4-72b2-4bfa-b8ea-77f38c2a2112_0
ca9febcebaa8 c0d73dd53e71 "/install-cni.sh" 6 hours ago Exited (0) 6 hours ago k8s_install-cni_calico-node-89s9k_kube-system_ff5384c4-72b2-4bfa-b8ea-77f38c2a2112_0
b581894b1b91 calico/cni "/opt/cni/bin/calico…" 6 hours ago Exited (0) 6 hours ago k8s_upgrade-ipam_calico-node-89s9k_kube-system_ff5384c4-72b2-4bfa-b8ea-77f38c2a2112_0
2b58aa8cbc01 gcr.azk8s.cn/google_containers/pause:3.1 "/pause" 6 hours ago Up 6 hours k8s_POD_calico-node-89s9k_kube-system_ff5384c4-72b2-4bfa-b8ea-77f38c2a2112_0
8ff547ee27a4 9b65a0f78b09 "/usr/local/bin/kube…" 7 hours ago Up 7 hours k8s_kube-proxy_kube-proxy-zh6cz_kube-system_7a02864d-77cd-4636-ac70-aa56ad8c1df0_0
4c5be9765eaf gcr.azk8s.cn/google_containers/pause:3.1 "/pause" 7 hours ago Up 7 hours k8s_POD_kube-proxy-zh6cz_kube-system_7a02864d-77cd-4636-ac70-aa56ad8c1df0_0
09ae85c8b8f5 df60c7526a3d "kube-apiserver --ad…" 7 hours ago Up 7 hours k8s_kube-apiserver_kube-apiserver-k8s-master_kube-system_a8b1669c873ef7b41dacef818eb65aa9_0
6f17c0d51694 98fecf43a54f "kube-scheduler --au…" 7 hours ago Up 7 hours k8s_kube-scheduler_kube-scheduler-k8s-master_kube-system_8abfe947f8f4e810a2308b65bb933780_0
da743be3ec06 bb16442bcd94 "kube-controller-man…" 7 hours ago Up 7 hours k8s_kube-controller-manager_kube-controller-manager-k8s-master_kube-system_018dbc8aa2984b8851a80fc82254cb97_0
123d4d44a34f b2756210eeab "etcd --advertise-cl…" 7 hours ago Up 7 hours k8s_etcd_etcd-k8s-master_kube-system_6ba6d49a08f1a79cf377b8d0029e9a22_0
c4e04b936c8e gcr.azk8s.cn/google_containers/pause:3.1 "/pause" 7 hours ago Up 7 hours k8s_POD_kube-apiserver-k8s-master_kube-system_a8b1669c873ef7b41dacef818eb65aa9_0
f9411212bc50 gcr.azk8s.cn/google_containers/pause:3.1 "/pause" 7 hours ago Up 7 hours k8s_POD_kube-scheduler-k8s-master_kube-system_8abfe947f8f4e810a2308b65bb933780_0
20c18e3366ae gcr.azk8s.cn/google_containers/pause:3.1 "/pause" 7 hours ago Up 7 hours k8s_POD_etcd-k8s-master_kube-system_6ba6d49a08f1a79cf377b8d0029e9a22_0
1e74a3ddf17b gcr.azk8s.cn/google_containers/pause:3.1 "/pause" 7 hours ago Up 7 hours k8s_POD_kube-controller-manager-k8s-master_kube-system_018dbc8aa2984b8851a80fc82254cb97_0
Containers started and images pulled on the 1.16.3 nodes
Images present on every node:
kube-proxy
pause
calico/node
calico/cni
calico/pod2daemon-flexvol
coredns and calico/kube-controllers are scheduled onto one random node.
//node1
#Images pulled
gcr.azk8s.cn/google_containers/kube-proxy v1.16.3 9b65a0f78b09 4 months ago 86.1MB
calico/node v3.9.2 14a380c92c40 5 months ago 195MB
calico/cni v3.9.2 c0d73dd53e71 5 months ago 160MB
calico/kube-controllers v3.9.2 7f7ed50db9fb 5 months ago 56MB
calico/pod2daemon-flexvol v3.9.2 523f0356e07b 5 months ago 9.78MB
gcr.azk8s.cn/google_containers/coredns 1.6.2 bf261d157914 7 months ago 44.1MB
gcr.azk8s.cn/google_containers/pause 3.1 da86e6ba6ca1 2 years ago 742kB
#Containers started
958f160a7e79 calico/kube-controllers "/usr/bin/kube-contr…" 4 minutes ago Up 4 minutes k8s_calico-kube-controllers_calico-kube-controllers-dc6cb64cb-c6cng_kube-system_072a1596-42e8-4375-983c-fd6f9b173424_0
f3a7fbbb8e49 gcr.azk8s.cn/google_containers/coredns "/coredns -conf /etc…" 5 minutes ago Up 5 minutes k8s_coredns_coredns-667f964f9b-qz88x_kube-system_f18743bb-37af-4e8a-8268-9e328d3e0000_0
18150cf23c39 gcr.azk8s.cn/google_containers/coredns "/coredns -conf /etc…" 5 minutes ago Up 5 minutes k8s_coredns_coredns-667f964f9b-sdkc9_kube-system_712e4a15-923a-437b-a1dc-bfefaf30f920_0
e120da514c18 gcr.azk8s.cn/google_containers/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_calico-kube-controllers-dc6cb64cb-c6cng_kube-system_072a1596-42e8-4375-983c-fd6f9b173424_24
b5e3c36224ad gcr.azk8s.cn/google_containers/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_coredns-667f964f9b-qz88x_kube-system_f18743bb-37af-4e8a-8268-9e328d3e0000_23
c4cd7571e098 gcr.azk8s.cn/google_containers/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_coredns-667f964f9b-sdkc9_kube-system_712e4a15-923a-437b-a1dc-bfefaf30f920_24
f497eb8c1b1a calico/node "start_runit" 5 minutes ago Up 5 minutes k8s_calico-node_calico-node-rcnts_kube-system_ebfce9a1-ad69-46e7-8e24-e9d3c51678bc_0
a55cab0bd81c calico/pod2daemon-flexvol "/usr/local/bin/flex…" 5 minutes ago Exited (0) 5 minutes ago k8s_flexvol-driver_calico-node-rcnts_kube-system_ebfce9a1-ad69-46e7-8e24-e9d3c51678bc_0
4b3f60e0d519 c0d73dd53e71 "/install-cni.sh" 6 minutes ago Exited (0) 6 minutes ago k8s_install-cni_calico-node-rcnts_kube-system_ebfce9a1-ad69-46e7-8e24-e9d3c51678bc_0
ee4493f6c482 calico/cni "/opt/cni/bin/calico…" 6 minutes ago Exited (0) 6 minutes ago k8s_upgrade-ipam_calico-node-rcnts_kube-system_ebfce9a1-ad69-46e7-8e24-e9d3c51678bc_0
87fe130ab10d gcr.azk8s.cn/google_containers/pause:3.1 "/pause" 6 minutes ago Up 6 minutes k8s_POD_calico-node-rcnts_kube-system_ebfce9a1-ad69-46e7-8e24-e9d3c51678bc_0
ba408469e7bc gcr.azk8s.cn/google_containers/kube-proxy "/usr/local/bin/kube…" 9 minutes ago Up 9 minutes k8s_kube-proxy_kube-proxy-q2tj7_kube-system_3c629796-9972-491a-87e7-3cc14264088e_0
272708e9251a gcr.azk8s.cn/google_containers/pause:3.1 "/pause" 9 minutes ago Up 9 minutes k8s_POD_kube-proxy-q2tj7_kube-system_3c629796-9972-491a-87e7-3cc14264088e_0
//node2
#Images pulled
gcr.azk8s.cn/google_containers/kube-proxy v1.16.3 9b65a0f78b09 4 months ago 86.1MB
calico/node v3.9.2 14a380c92c40 5 months ago 195MB
calico/cni v3.9.2 c0d73dd53e71 5 months ago 160MB
calico/pod2daemon-flexvol v3.9.2 523f0356e07b 5 months ago 9.78MB
gcr.azk8s.cn/google_containers/pause 3.1 da86e6ba6ca1 2 years ago 742kB
#Containers started
acc3f199861e calico/node "start_runit" 6 minutes ago Up 6 minutes k8s_calico-node_calico-node-kdlz8_kube-system_e879923e-1028-4108-bdcc-29b8c7044127_0
8ff828ca0423 calico/pod2daemon-flexvol "/usr/local/bin/flex…" 6 minutes ago Exited (0) 6 minutes ago k8s_flexvol-driver_calico-node-kdlz8_kube-system_e879923e-1028-4108-bdcc-29b8c7044127_0
0bd0e1544cf7 c0d73dd53e71 "/install-cni.sh" 6 minutes ago Exited (0) 6 minutes ago k8s_install-cni_calico-node-kdlz8_kube-system_e879923e-1028-4108-bdcc-29b8c7044127_0
7bc2a912fda6 calico/cni "/opt/cni/bin/calico…" 6 minutes ago Exited (0) 6 minutes ago k8s_upgrade-ipam_calico-node-kdlz8_kube-system_e879923e-1028-4108-bdcc-29b8c7044127_0
1409a1dbc779 gcr.azk8s.cn/google_containers/pause:3.1 "/pause" 7 minutes ago Up 7 minutes k8s_POD_calico-node-kdlz8_kube-system_e879923e-1028-4108-bdcc-29b8c7044127_0
dda871e9daab gcr.azk8s.cn/google_containers/kube-proxy "/usr/local/bin/kube…" 10 minutes ago Up 10 minutes k8s_kube-proxy_kube-proxy-trqdv_kube-system_c572710b-0867-4baf-a629-8262f6511d15_0
a5c5c619ec67 gcr.azk8s.cn/google_containers/pause:3.1 "/pause" 10 minutes ago Up 10 minutes k8s_POD_kube-proxy-trqdv_kube-system_c572710b-0867-4baf-a629-8262f6511d15_0
Bonus: a tool for switching k8s namespaces
git clone https://github.com.cnpmjs.org/ahmetb/kubectx
cp kubectx/kubens /usr/local/bin
#List all namespaces
kubens
#Switch to the kube-system namespace
kubens kube-system
Context "kubernetes-admin@kubernetes" modified.
Active namespace is "kube-system".