Setting up a Kubernetes v1.17.4 Cluster with kubeadm
1. Environment Preparation
1.1 Lab Environment
Role | IP Address | Hostname | Docker Version | Hardware | OS | Kernel |
---|---|---|---|---|---|---|
master | 10.0.0.130 | k8s-master | 19.03.4 | 2c2g | CentOS7.6 | 3.10.0-957.el7.x86_64 |
node1 | 10.0.0.131 | k8s-node1 | 19.03.4 | 2c1g | CentOS7.6 | 3.10.0-957.el7.x86_64 |
node2 | 10.0.0.132 | k8s-node2 | 19.03.4 | 2c1g | CentOS7.6 | 3.10.0-957.el7.x86_64 |
1.2 Configure host entries on every node
cat >> /etc/hosts <<EOF
10.0.0.130 k8s-master
10.0.0.131 k8s-node1
10.0.0.132 k8s-node2
EOF
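As a quick sanity check, the snippet below (a sketch, assuming the same three entries as above) flags any line that is not exactly "IPv4-address hostname" before it goes into /etc/hosts:

```shell
# Sanity-check the host entries (assumption: the three entries shown above).
# Any line that is not "IPv4-address hostname" is reported as malformed.
hosts='10.0.0.130 k8s-master
10.0.0.131 k8s-node1
10.0.0.132 k8s-node2'
echo "$hosts" | awk '
  NF != 2 || $1 !~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/ { bad = 1; print "malformed: " $0 }
  END { exit bad }
' && echo "hosts block OK"
```

On a real node you can run the same awk filter against /etc/hosts itself after appending the block.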
1.3 Disable the firewall and SELinux
# Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# Disable SELinux
# Temporary change (until reboot)
setenforce 0
# Permanent change; takes effect after reboot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
1.4 Create the file /etc/sysctl.d/k8s.conf with the following content
# Write the following into the file
cat >/etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Load the br_netfilter module and apply the settings
modprobe br_netfilter && sysctl -p /etc/sysctl.d/k8s.conf
1.5 Enable IPVS (optional)
The script below creates /etc/sysconfig/modules/ipvs.modules so that the required kernel modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to confirm the modules loaded correctly.
# Write the following into the file
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# Make the script executable, run it, and verify the kernel modules loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4 15053 0
nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145497 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 133095 2 ip_vs,nf_conntrack_ipv4
libcrc32c 12644 3 xfs,ip_vs,nf_conntrack
Install ipset and ipvsadm (useful for inspecting IPVS proxy rules):
yum -y install ipset ipvsadm
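Before relying on the module script at boot, it is worth syntax-checking it. A minimal sketch, recreating the script in a scratch file rather than touching /etc/sysconfig/modules:

```shell
# Sketch: recreate the module-load script in a scratch file and syntax-check it
# with bash -n before installing it under /etc/sysconfig/modules/.
mod=$(mktemp)
cat > "$mod" <<'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
bash -n "$mod" && echo "ipvs.modules syntax OK"
```

bash -n parses without executing, so this catches quoting or heredoc mistakes without needing root or the ipvs modules present.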
1.6 Synchronize server time
# Install chrony
yum -y install chrony
# Switch the sync server to Alibaba Cloud NTP
sed -i.bak '3,6d' /etc/chrony.conf && sed -i '3cserver ntp1.aliyun.com iburst' \
/etc/chrony.conf
# Start chronyd and enable it at boot
systemctl start chronyd && systemctl enable chronyd
# Check the sync result
chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 120.25.115.20 2 6 37 13 +194us[+6131us] +/- 33ms
1.7 Disable the swap partition
Edit /etc/fstab to comment out the swap auto-mount, then use free -m to confirm swap is off.
# Turn off swap now
swapoff -a
# Comment out the swap mount in fstab
sed -i '/^\/dev\/mapper\/centos-swap/c#/dev/mapper/centos-swap swap swap defaults 0 0' /etc/fstab
# Confirm swap is off
free -m
total used free shared buff/cache available
Mem: 1994 682 612 9 699 1086
Swap: 0 0 0
Adjust the swappiness parameter by appending the following line to /etc/sysctl.d/k8s.conf:
cat >>/etc/sysctl.d/k8s.conf <<EOF
vm.swappiness=0
EOF
# Apply the settings
sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
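To confirm that all four settings made it into the file, a small verification sketch (shown here against a scratch copy; on a real node set conf=/etc/sysctl.d/k8s.conf instead):

```shell
# Sketch: verify the four expected keys exist in the conf file.
# Here a scratch copy is built; on a node, point conf at /etc/sysctl.d/k8s.conf.
conf=$(mktemp)
printf '%s\n' \
  'net.bridge.bridge-nf-call-ip6tables = 1' \
  'net.bridge.bridge-nf-call-iptables = 1' \
  'net.ipv4.ip_forward = 1' \
  'vm.swappiness=0' > "$conf"
for key in net.bridge.bridge-nf-call-ip6tables net.bridge.bridge-nf-call-iptables \
           net.ipv4.ip_forward vm.swappiness; do
  grep -q "^$key" "$conf" || { echo "missing: $key"; exit 1; }
done
echo "k8s.conf OK"
```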
1.8 Install Docker 19.03.4
1. Remove old versions
yum remove -y docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
2. Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
3. Add the Alibaba Cloud yum repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
4. List available versions
yum list docker-ce --showduplicates | sort -r
Loaded plugins: fastestmirror, langpacks
Available Packages
* updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
* extras: mirrors.aliyun.com
docker-ce.x86_64 3:19.03.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.4-3.el7 docker-ce-stable
......
docker-ce.x86_64 3:18.09.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable
......
5. Install Docker 19.03.4
yum install -y docker-ce-19.03.4 docker-ce-cli-19.03.4 containerd.io
6. Start Docker and enable it at boot
systemctl enable docker && systemctl start docker
7. Configure the Alibaba Cloud registry mirror
cat > /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://gqk8w9va.mirror.aliyuncs.com"]
}
EOF
8. Restart Docker after configuring
systemctl restart docker
9. Verify the mirror
docker info
Look for the Registry Mirrors line:
Registry Mirrors:
https://gqk8w9va.mirror.aliyuncs.com/
10. Check the Docker version
docker version
Client: Docker Engine - Community
Version: 19.03.4
API version: 1.40
Go version: go1.12.10
Git commit: 9013bf583a
Built: Fri Oct 18 15:52:22 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.4
API version: 1.40 (minimum version 1.12)
Go version: go1.12.10
Git commit: 9013bf583a
Built: Fri Oct 18 15:50:54 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
1.9 Install kubeadm
Using the official Google repository requires a proxy:
cat >/etc/yum.repos.d/kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Alternatively, use the Alibaba Cloud yum mirror:
cat >/etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubeadm, kubelet, and kubectl (the Alibaba Cloud repo tracks the latest upstream release, so pin the version):
yum -y install kubelet-1.17.4 kubeadm-1.17.4 kubectl-1.17.4
# Check the version
kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:01:11Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Enable kubelet at boot
⚠️ kubelet cannot start at this point; it will only run normally after kubeadm init completes on the master.
systemctl enable kubelet
Set up kubectl command auto-completion
yum -y install bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
1.10 Change the Docker cgroup driver to systemd
In /usr/lib/systemd/system/docker.service, change the line ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
to ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
Without this change, you may hit the following warning when adding worker nodes:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
Please follow the guide at https://kubernetes.io/docs/setup/cri/
# Make the change with sed
sed -i.bak "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service
# Restart Docker
systemctl daemon-reload && systemctl restart docker
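The sed rewrite can be rehearsed safely on a scratch copy of the unit file first. A sketch, assuming the stock ExecStart line shown above (on a node, target /usr/lib/systemd/system/docker.service instead):

```shell
# Demonstration on a scratch copy of the unit file (assumption: the stock
# ExecStart line below); on a node, target /usr/lib/systemd/system/docker.service.
svc=$(mktemp)
echo 'ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock' > "$svc"
# "&" in the replacement re-inserts the whole matched line, then appends the flag.
sed -i "s#^ExecStart=/usr/bin/dockerd.*#& --exec-opt native.cgroupdriver=systemd#" "$svc"
grep -q 'native.cgroupdriver=systemd' "$svc" && echo "cgroup driver set to systemd"
# After restarting Docker for real, docker info --format '{{.CgroupDriver}}'
# should print: systemd
```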
At this point, the base environment setup is complete.
2. Initialize the Cluster
2.1 On the master: configure the kubeadm initialization
- Method 1: initialize with command-line flags
kubeadm init --apiserver-advertise-address=10.0.0.130 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.17.4 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.20.0.0/16
--apiserver-advertise-address= #master node IP
--image-repository registry.aliyuncs.com/google_containers #image registry
--kubernetes-version v1.17.4 #Kubernetes version
--service-cidr=10.96.0.0/16 #Service IP range
--pod-network-cidr=10.20.0.0/16 #Pod IP range; the network plugin will use this later
- Method 2: initialize from a config file
cat > kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: abcdef.0123456789abcdef
ttl: 24h0m0s
usages:
- signing
- authentication
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.0.0.130 # apiserver node internal IP
bindPort: 6443
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: k8s-master # master node name
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/master
---
apiServer:
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
type: CoreDNS # DNS type
etcd:
local:
dataDir: /var/lib/etcd
#Choose one of the two repositories below
#imageRepository: gcr.azk8s.cn/google_containers
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.17.4 # Kubernetes version
networking:
dnsDomain: cluster.local
podSubnet: 10.20.0.0/16
serviceSubnet: 10.96.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs # kube-proxy mode
EOF
2.2 Initialize the master
kubeadm init --config kubeadm.yaml
Full output:
W0319 22:59:56.631407 12367 validation.go:28] Cannot validate kubelet config - no validator is available
W0319 22:59:56.631444 12367 validation.go:28] Cannot validate kube-proxy config - no validator is available
[init] Using Kubernetes version: v1.17.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [10.0.0.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [10.0.0.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0319 23:01:45.692056 12367 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0319 23:01:45.692762 12367 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.502774 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.130:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:27b37276c4451b54c99106f73cb38c3b0bb6e2e178c142808e9ff00300ac7eaf
# Images pulled when kubeadm init --config kubeadm.yaml initializes the master
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.17.4 6dec7cfde1e5 6 days ago 116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver v1.17.4 2e1ba57fe95a 6 days ago 171MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager v1.17.4 7f997fcf3e94 6 days ago 161MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.17.4 5db16c1c7aff 6 days ago 94.4MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns 1.6.5 70f311871ae1 4 months ago 41.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd 3.4.3-0 303ce5db0e90 4 months ago 288MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 2 years ago 742kB
# Containers started
5220dac932a8 6dec7cfde1e5 "/usr/local/bin/kube…" 2 minutes ago Up 2 minutes k8s_kube-proxy_kube-proxy-jlwfd_kube-system_5af2149d-dc6c-4a6b-a99b-fb0af734d1fb_0
20a9003d1a55 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-proxy-jlwfd_kube-system_5af2149d-dc6c-4a6b-a99b-fb0af734d1fb_0
48c86b9adbf1 7f997fcf3e94 "kube-controller-man…" 3 minutes ago Up 3 minutes k8s_kube-controller-manager_kube-controller-manager-k8s-master_kube-system_c06ae2244175d69b82dd8327536a3f47_0
beed6b3671bd 5db16c1c7aff "kube-scheduler --au…" 3 minutes ago Up 3 minutes k8s_kube-scheduler_kube-scheduler-k8s-master_kube-system_0621ae8690c69d1d72f746bc2de0667e_0
fe47bd324d10 2e1ba57fe95a "kube-apiserver --ad…" 3 minutes ago Up 3 minutes k8s_kube-apiserver_kube-apiserver-k8s-master_kube-system_ffa9ef4a4e85500fbc6d35b5a88efb43_0
cfbf4d126eae 303ce5db0e90 "etcd --advertise-cl…" 3 minutes ago Up 3 minutes k8s_etcd_etcd-k8s-master_kube-system_ba1d164d5064efe1d13e5645859a6dec_0
aff7b239f5ef registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-scheduler-k8s-master_kube-system_0621ae8690c69d1d72f746bc2de0667e_0
1266cf8da7d9 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-controller-manager-k8s-master_kube-system_c06ae2244175d69b82dd8327536a3f47_0
7415d07612a9 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-apiserver-k8s-master_kube-system_ffa9ef4a4e85500fbc6d35b5a88efb43_0
fef33ef8bb46 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_etcd-k8s-master_kube-system_ba1d164d5064efe1d13e5645859a6dec_0
Copy the kubeconfig file
# $HOME here is /root
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
2.3 Join nodes to the cluster
Perform the same steps on node1 and node2.
Copy the $HOME/.kube/config file from the master to the matching path on each node:
1. Create the directory ($HOME here is /root)
mkdir -p $HOME/.kube
2. Copy the config file from the master into $HOME/.kube on node1 and node2
scp k8s-master:~/.kube/config $HOME/.kube
3. Fix ownership
chown $(id -u):$(id -g) $HOME/.kube/config
Join node1 and node2 to the cluster
This uses the token and SHA-256 hash printed at the end of the master initialization in 2.2:
kubeadm join 10.0.0.130:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:27b37276c4451b54c99106f73cb38c3b0bb6e2e178c142808e9ff00300ac7eaf
Output:
W0319 23:06:51.048079 4137 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
After joining the cluster, each node runs kube-proxy.
# Images pulled
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.17.4 6dec7cfde1e5 6 days ago 116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 2 years ago 742kB
# Containers started
f8dff6a9caf5 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy "/usr/local/bin/kube…" About a minute ago Up About a minute k8s_kube-proxy_kube-proxy-g8lfq_kube-system_72e60b6a-0023-4d25-b8e0-73e98bb4b1ea_0
64752d1e78b3 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-proxy-g8lfq_kube-system_72e60b6a-0023-4d25-b8e0-73e98bb4b1ea_0
If you forget the token or SHA-256 hash, look them up on the master with the following commands
# List tokens
kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
abcdef.0123456789abcdef 23h 2020-03-20T23:02:03+08:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
# Compute the SHA-256 hash of the CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
27b37276c4451b54c99106f73cb38c3b0bb6e2e178c142808e9ff00300ac7eaf
# Print both the token and the hash with one command
kubeadm token create --print-join-command
W0319 23:07:44.720182 16201 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0319 23:07:44.720215 16201 validation.go:28] Cannot validate kubelet config - no validator is available
kubeadm join 10.0.0.130:6443 --token 36t7ur.rtigrd344emr5urd --discovery-token-ca-cert-hash sha256:27b37276c4451b54c99106f73cb38c3b0bb6e2e178c142808e9ff00300ac7eaf
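The openssl pipeline above can be wrapped in a small helper. A sketch, demonstrated on a throwaway self-signed certificate (hypothetical, generated on the spot); on the master you would pass /etc/kubernetes/pki/ca.crt instead:

```shell
# Helper sketch: recompute the discovery hash for any CA certificate.
# Demonstrated on a throwaway self-signed cert; on the master, pass
# /etc/kubernetes/pki/ca.crt instead.
ca_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
}
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj /CN=demo \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
# The discovery hash is always 64 lowercase hex characters.
ca_hash "$tmp/ca.crt" | grep -Eq '^[0-9a-f]{64}$' && echo "hash format OK"
```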
Checking the nodes from the master shows them all NotReady because no network plugin is installed yet; we will install Calico next (see the official plugin documentation).
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 19m v1.17.4
k8s-node1 NotReady <none> 4m10s v1.17.4
k8s-node2 NotReady <none> 4m3s v1.17.4
2.4 Install the Calico network plugin on the master
# Download the manifest
wget https://kuboard.cn/install-script/calico/calico-3.13.1.yaml
Reference: https://docs.projectcalico.org
# After making any needed edits, install the Calico network plugin
kubectl apply -f calico-3.13.1.yaml
# Wait a moment after the install, then check pod status
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-788d6b9876-4lls7 1/1 Running 0 5m58s
calico-node-jfn5g 1/1 Running 0 5m59s
calico-node-k6f84 1/1 Running 0 5m59s
calico-node-vm4m6 1/1 Running 0 5m59s
coredns-7f9c544f75-5p2mf 1/1 Running 0 30m
coredns-7f9c544f75-prdlb 1/1 Running 0 30m
etcd-k8s-master 1/1 Running 0 30m
kube-apiserver-k8s-master 1/1 Running 0 30m
kube-controller-manager-k8s-master 1/1 Running 0 30m
kube-proxy-g8lfq 1/1 Running 0 25m
kube-proxy-jlwfd 1/1 Running 0 30m
kube-proxy-x8twc 1/1 Running 0 25m
kube-scheduler-k8s-master 1/1 Running 0 30m
# Check node status
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 30m v1.17.4
k8s-node1 Ready <none> 25m v1.17.4
k8s-node2 Ready <none> 25m v1.17.4
# Images pulled by kubectl apply -f calico-3.13.1.yaml when installing Calico
calico/node v3.13.1 2e5029b93d4a 6 days ago 260MB
calico/pod2daemon-flexvol v3.13.1 e8c600448aae 6 days ago 111MB
calico/cni v3.13.1 6912ec2cfae6 6 days ago 207MB
# Containers started
70d8a967cc69 calico/node "start_runit" 2 minutes ago Up 2 minutes k8s_calico-node_calico-node-jfn5g_kube-system_b86bd096-addf-4665-8a72-ec7c9e4a97f9_0
f9830c017734 calico/pod2daemon-flexvol "/usr/local/bin/flex…" 4 minutes ago Exited (0) 4 minutes ago k8s_flexvol-driver_calico-node-jfn5g_kube-system_b86bd096-addf-4665-8a72-ec7c9e4a97f9_0
8fe868ba10d4 6912ec2cfae6 "/install-cni.sh" 5 minutes ago Exited (0) 5 minutes ago k8s_install-cni_calico-node-jfn5g_kube-system_b86bd096-addf-4665-8a72-ec7c9e4a97f9_0
ae71689904ca calico/cni "/opt/cni/bin/calico…" 5 minutes ago Exited (0) 5 minutes ago k8s_upgrade-ipam_calico-node-jfn5g_kube-system_b86bd096-addf-4665-8a72-ec7c9e4a97f9_0
Node side
# Images present on both nodes
calico/node
calico/pod2daemon-flexvol
calico/cni
# calico/kube-controllers and registry.cn-hangzhou.aliyuncs.com/google_containers/coredns land on one randomly scheduled node
2.5 Install the Dashboard
Download the manifest and edit it
# Download the manifest; v2.0.0-rc3 includes Chinese localization, beta8 is the English version
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc3/aio/deploy/recommended.yaml
# Change the Service to the NodePort type
After line 42, add:
nodePort: 30001
After line 44, add:
type: NodePort
# Original content
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
# Modified content
spec:
ports:
- port: 443
targetPort: 8443
nodePort: 30001 # added: pins the NodePort
selector:
k8s-app: kubernetes-dashboard
type: NodePort # added: changes the type to NodePort
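Instead of hand-editing the manifest, the same change can be expressed as a patch. A sketch (assuming the Service name and namespace from the recommended.yaml manifest); the kubectl call is left commented because it needs a live cluster:

```shell
# Alternative to hand-editing the manifest: build a strategic-merge patch and
# apply it with kubectl once the Service exists (assumption: Service
# "kubernetes-dashboard" in namespace "kubernetes-dashboard", as in the manifest).
patch='{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30001}]}}'
# kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p "$patch"
echo "$patch" | grep -q '"nodePort":30001' && echo "patch ready"
```

Strategic merge patches match list entries on the `port` key, so only the one port entry is updated.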
Deploy the dashboard
#kubectl apply -f recommended.yaml
# Images pulled
kubernetesui/dashboard v2.0.0-rc3 4a0a1cf1b459 6 weeks ago 126MB
# Containers started
62a551c0a0e6 kubernetesui/dashboard "/dashboard --insecu…" 9 seconds ago Up 8 seconds k8s_kubernetes-dashboard_kubernetes-dashboard-7867cbccbb-z9vzj_kubernetes-dashboard_de95d4c3-6234-4f7a-8d80-d326c52b65c0_0
Check the dashboard's running status and its external access port
# Dashboard pod status
kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-7867cbccbb-z9vzj 1/1 Running 0 50s
# Dashboard external NodePort
kubectl get svc -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.96.222.222 <none> 443:30001/TCP 82s
Access the dashboard through the NodePort 30001 above; note it must be https.
⚠️ With the v2.0.0-beta8 dashboard (as shipped with k8s 1.16.3), only Firefox can open the page; all other browsers fail with a certificate error.
Next, create a user with full cluster-wide permissions to log in to the Dashboard.
# Create the admin.yaml file
cat > admin.yaml <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: admin
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: admin
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
EOF
# Apply it directly
kubectl apply -f admin.yaml
# Find the admin token secret
kubectl get secret -n kube-system|grep admin-token
admin-token-vfmmm kubernetes.io/service-account-token 3 7s
# Decode the base64 token; substitute the secret name reported by the previous command (admin-token-vfmmm above). This prints a very long string.
kubectl get secret admin-token-vfmmm -o jsonpath={.data.token} -n kube-system |base64 -d
# Or fetch the token with this single command
kubectl get secret `kubectl get secret -n kube-system|grep admin-token|awk '{print $1}'` -o jsonpath={.data.token} -n kube-system |base64 -d && echo
Paste the decoded string into the Dashboard login page, selecting the Token option.
The home screen appears after login.
2.6 Install Kuboard (optional)
Kuboard is a graphical management UI for Kubernetes.
Install Kuboard
kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml
kubectl apply -f https://addons.kuboard.cn/metrics-server/0.3.6/metrics-server.yaml
Check the running status
#kubectl get pods -l k8s.eip.work/name=kuboard -n kube-system
NAME READY STATUS RESTARTS AGE
kuboard-756d46c4d4-tvhjq 1/1 Running 0 40s
Get the tokens
# Admin token
kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d && echo
# Read-only token
kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-viewer | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d && echo
Access Kuboard
The Kuboard Service is exposed as a NodePort (32567) and is reachable from any node in the cluster.
Images pulled and containers started by Kuboard
# Containers started
11171ab07891 eipwork/kuboard "/entrypoint.sh" 3 minutes ago Up 3 minutes k8s_kuboard_kuboard-756d46c4d4-jfx4t_kube-system_8f509c5b-fbee-40b0-8d74-9f2b8485639a_0d
# Image pulled; version 1.0.8.2 in this article
eipwork/kuboard latest c6d652bbdf90 About an hour ago 180MB
That completes the installation of Kubernetes 1.17.4 with kubeadm.
Token retrieval command: kubectl get secret `kubectl get secret -n kube-system|grep admin-token|awk '{print $1}'` -o jsonpath={.data.token} -n kube-system |base64 -d && echo
Containers started and images pulled on the 1.17.4 master
# Containers started
11171ab07891 eipwork/kuboard "/entrypoint.sh" 4 minutes ago Up 4 minutes k8s_kuboard_kuboard-756d46c4d4-jfx4t_kube-system_8f509c5b-fbee-40b0-8d74-9f2b8485639a_0
f8e5d82cf8ee registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 6 minutes ago Up 6 minutes k8s_POD_kuboard-756d46c4d4-jfx4t_kube-system_8f509c5b-fbee-40b0-8d74-9f2b8485639a_0
62a551c0a0e6 kubernetesui/dashboard "/dashboard --insecu…" 10 hours ago Up 10 hours k8s_kubernetes-dashboard_kubernetes-dashboard-7867cbccbb-z9vzj_kubernetes-dashboard_de95d4c3-6234-4f7a-8d80-d326c52b65c0_0
40ad4724f6f1 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 10 hours ago Up 10 hours k8s_POD_kubernetes-dashboard-7867cbccbb-z9vzj_kubernetes-dashboard_de95d4c3-6234-4f7a-8d80-d326c52b65c0_0
70d8a967cc69 calico/node "start_runit" 10 hours ago Up 10 hours k8s_calico-node_calico-node-jfn5g_kube-system_b86bd096-addf-4665-8a72-ec7c9e4a97f9_0
f9830c017734 calico/pod2daemon-flexvol "/usr/local/bin/flex…" 10 hours ago Exited (0) 10 hours ago k8s_flexvol-driver_calico-node-jfn5g_kube-system_b86bd096-addf-4665-8a72-ec7c9e4a97f9_0
8fe868ba10d4 6912ec2cfae6 "/install-cni.sh" 10 hours ago Exited (0) 10 hours ago k8s_install-cni_calico-node-jfn5g_kube-system_b86bd096-addf-4665-8a72-ec7c9e4a97f9_0
ae71689904ca calico/cni "/opt/cni/bin/calico…" 10 hours ago Exited (0) 10 hours ago k8s_upgrade-ipam_calico-node-jfn5g_kube-system_b86bd096-addf-4665-8a72-ec7c9e4a97f9_0
cd04477d7393 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 10 hours ago Up 10 hours k8s_POD_calico-node-jfn5g_kube-system_b86bd096-addf-4665-8a72-ec7c9e4a97f9_0
5220dac932a8 6dec7cfde1e5 "/usr/local/bin/kube…" 10 hours ago Up 10 hours k8s_kube-proxy_kube-proxy-jlwfd_kube-system_5af2149d-dc6c-4a6b-a99b-fb0af734d1fb_0
20a9003d1a55 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 10 hours ago Up 10 hours k8s_POD_kube-proxy-jlwfd_kube-system_5af2149d-dc6c-4a6b-a99b-fb0af734d1fb_0
48c86b9adbf1 7f997fcf3e94 "kube-controller-man…" 10 hours ago Up 10 hours k8s_kube-controller-manager_kube-controller-manager-k8s-master_kube-system_c06ae2244175d69b82dd8327536a3f47_0
beed6b3671bd 5db16c1c7aff "kube-scheduler --au…" 10 hours ago Up 10 hours k8s_kube-scheduler_kube-scheduler-k8s-master_kube-system_0621ae8690c69d1d72f746bc2de0667e_0
fe47bd324d10 2e1ba57fe95a "kube-apiserver --ad…" 10 hours ago Up 10 hours k8s_kube-apiserver_kube-apiserver-k8s-master_kube-system_ffa9ef4a4e85500fbc6d35b5a88efb43_0
cfbf4d126eae 303ce5db0e90 "etcd --advertise-cl…" 10 hours ago Up 10 hours k8s_etcd_etcd-k8s-master_kube-system_ba1d164d5064efe1d13e5645859a6dec_0
aff7b239f5ef registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 10 hours ago Up 10 hours k8s_POD_kube-scheduler-k8s-master_kube-system_0621ae8690c69d1d72f746bc2de0667e_0
1266cf8da7d9 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 10 hours ago Up 10 hours k8s_POD_kube-controller-manager-k8s-master_kube-system_c06ae2244175d69b82dd8327536a3f47_0
7415d07612a9 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 10 hours ago Up 10 hours k8s_POD_kube-apiserver-k8s-master_kube-system_ffa9ef4a4e85500fbc6d35b5a88efb43_0
fef33ef8bb46 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 10 hours ago Up 10 hours k8s_POD_etcd-k8s-master_kube-system_ba1d164d5064efe1d13e5645859a6dec_0
# Images pulled
eipwork/kuboard latest c6d652bbdf90 2 days ago 180MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.17.4 6dec7cfde1e5 7 days ago 116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver v1.17.4 2e1ba57fe95a 7 days ago 171MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager v1.17.4 7f997fcf3e94 7 days ago 161MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.17.4 5db16c1c7aff 7 days ago 94.4MB
calico/node v3.13.1 2e5029b93d4a 7 days ago 260MB
calico/pod2daemon-flexvol v3.13.1 e8c600448aae 7 days ago 111MB
calico/cni v3.13.1 6912ec2cfae6 7 days ago 207MB
kubernetesui/dashboard v2.0.0-rc3 4a0a1cf1b459 7 weeks ago 126MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns 1.6.5 70f311871ae1 4 months ago 41.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd 3.4.3-0 303ce5db0e90 4 months ago 288MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 2 years ago 742kB
Containers started and images pulled on a 1.17.4 node
# Containers started
59b6a4cd2f44 registry.cn-hangzhou.aliyuncs.com/google_containers/coredns "/coredns -conf /etc…" 10 hours ago Up 10 hours k8s_coredns_coredns-7f9c544f75-prdlb_kube-system_77f42046-8348-4247-b37b-6ce3dc62db18_0
c73eac71d7e4 calico/kube-controllers "/usr/bin/kube-contr…" 10 hours ago Up 10 hours k8s_calico-kube-controllers_calico-kube-controllers-788d6b9876-4lls7_kube-system_25992c41-0bfe-4e84-8a04-70b63c969627_0
947909b16d67 registry.cn-hangzhou.aliyuncs.com/google_containers/coredns "/coredns -conf /etc…" 10 hours ago Up 10 hours k8s_coredns_coredns-7f9c544f75-5p2mf_kube-system_305e69d6-6ae6-4c48-a32d-6dddc290fa8a_0
e2b467f4fb14 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 10 hours ago Up 10 hours k8s_POD_coredns-7f9c544f75-prdlb_kube-system_77f42046-8348-4247-b37b-6ce3dc62db18_104
a5683b681459 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 10 hours ago Up 10 hours k8s_POD_coredns-7f9c544f75-5p2mf_kube-system_305e69d6-6ae6-4c48-a32d-6dddc290fa8a_105
11a2725d9030 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 10 hours ago Up 10 hours k8s_POD_calico-kube-controllers-788d6b9876-4lls7_kube-system_25992c41-0bfe-4e84-8a04-70b63c969627_104
a0064abf6347 calico/node "start_runit" 10 hours ago Up 10 hours k8s_calico-node_calico-node-vm4m6_kube-system_de38b605-5188-4db0-8123-061bb253be9d_0
ad86e37926e4 calico/pod2daemon-flexvol "/usr/local/bin/flex…" 10 hours ago Exited (0) 10 hours ago k8s_flexvol-driver_calico-node-vm4m6_kube-system_de38b605-5188-4db0-8123-061bb253be9d_0
0946ccc79ba8 6912ec2cfae6 "/install-cni.sh" 10 hours ago Exited (0) 10 hours ago k8s_install-cni_calico-node-vm4m6_kube-system_de38b605-5188-4db0-8123-061bb253be9d_0
fee74eaf355d calico/cni "/opt/cni/bin/calico…" 10 hours ago Exited (0) 10 hours ago k8s_upgrade-ipam_calico-node-vm4m6_kube-system_de38b605-5188-4db0-8123-061bb253be9d_0
7635ac40c020 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 10 hours ago Up 10 hours k8s_POD_calico-node-vm4m6_kube-system_de38b605-5188-4db0-8123-061bb253be9d_0
f8dff6a9caf5 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy "/usr/local/bin/kube…" 10 hours ago Up 10 hours k8s_kube-proxy_kube-proxy-g8lfq_kube-system_72e60b6a-0023-4d25-b8e0-73e98bb4b1ea_0
64752d1e78b3 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 "/pause" 10 hours ago Up 10 hours k8s_POD_kube-proxy-g8lfq_kube-system_72e60b6a-0023-4d25-b8e0-73e98bb4b1ea_0
# Images pulled
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.17.4 6dec7cfde1e5 7 days ago 116MB
calico/node v3.13.1 2e5029b93d4a 7 days ago 260MB
calico/pod2daemon-flexvol v3.13.1 e8c600448aae 7 days ago 111MB
calico/cni v3.13.1 6912ec2cfae6 7 days ago 207MB
calico/kube-controllers v3.13.1 3971f13f2c6c 7 days ago 56.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns 1.6.5 70f311871ae1 4 months ago 41.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 2 years ago 742kB
A tool for switching Kubernetes namespaces
git clone https://github.com.cnpmjs.org/ahmetb/kubectx
cp kubectx/kubens /usr/local/bin
# List all namespaces
kubens
# Switch to the kube-system namespace
kubens kube-system
Context "kubernetes-admin@kubernetes" modified.
Active namespace is "kube-system".