Setting Up a Kubernetes v1.15.3 Cluster with kubeadm

1. Environment Preparation

1.1 Lab environment

Role    IP Address    Hostname    Docker Version    Hardware    OS          Kernel
master 10.0.0.130 k8s-master 18.09.9 2c2g CentOS7.6 3.10.0-957.el7.x86_64
node1 10.0.0.131 k8s-node1 18.09.9 2c1g CentOS7.6 3.10.0-957.el7.x86_64
node2 10.0.0.132 k8s-node2 18.09.9 2c1g CentOS7.6 3.10.0-957.el7.x86_64

1.2 Configure host entries on every node

cat >> /etc/hosts <<EOF
10.0.0.130 k8s-master
10.0.0.131 k8s-node1
10.0.0.132 k8s-node2
EOF

1.3 Disable the firewall and SELinux

//Disable the firewall
systemctl stop firewalld && systemctl disable firewalld

//Disable SELinux
#Temporary change (takes effect immediately)
setenforce 0

#Permanent change, takes effect after a reboot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
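An optional quick check (not part of the original steps): getenforce reports the runtime SELinux state.

//Should print Permissive after setenforce 0, and Disabled after a reboot with the config change above
getenforce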

1.4 Create /etc/sysctl.d/k8s.conf with the following content

//Write the following content to the file
cat >/etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

//Run the following to apply the settings
modprobe br_netfilter && sysctl -p /etc/sysctl.d/k8s.conf
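Note that modprobe only loads br_netfilter for the current boot. As an optional extra step (not in the original walkthrough), you can have systemd load the module automatically at boot:

//Optional: load br_netfilter automatically at boot via systemd-modules-load
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf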

1.5 Set up IPVS

Create the /etc/sysconfig/modules/ipvs.modules script below so that the required kernel modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the required kernel modules are loaded correctly.

//Write the following content to the file
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

//Make the script executable, run it, and verify that the required kernel modules are loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

nf_conntrack_ipv4      15053  0 
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          133095  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack

Install ipset and ipvsadm (handy for inspecting IPVS proxy rules)

yum -y install ipset ipvsadm

1.6 Synchronize server time

//Install chrony
yum -y install chrony

//Change the NTP server to Aliyun
sed -i.bak '3,6d' /etc/chrony.conf && sed -i '3cserver ntp1.aliyun.com iburst' \
/etc/chrony.conf

//Start chronyd and enable it at boot
systemctl start chronyd && systemctl enable chronyd

//Check the sync status
chronyc sources

210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 120.25.115.20                 2   6    37    13   +194us[+6131us] +/-   33ms
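For more detail on the current offset, stratum, and reference source, you can also run this optional check:

chronyc tracking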

1.7 Disable the swap partition

Edit /etc/fstab to comment out the automatic swap mount, then use free -m to confirm that swap is off.

//Turn off swap immediately
swapoff -a

//Comment out the swap entry in /etc/fstab
sed -i '/^\/dev\/mapper\/centos-swap/c#/dev/mapper/centos-swap swap                    swap    defaults        0 0' /etc/fstab

//Verify that swap is off
free -m

total        used        free      shared  buff/cache   available
Mem:           1994         682         612           9         699        1086
Swap:             0           0           0

Adjust the swappiness parameter by appending the following line to /etc/sysctl.d/k8s.conf

cat >>/etc/sysctl.d/k8s.conf <<EOF
vm.swappiness=0
EOF

//Apply the configuration
sysctl -p /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0

1.8 Install Docker 18.09.9

1. Add the Aliyun yum repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

2. List the available versions
yum list docker-ce --showduplicates | sort -r

Loaded plugins: fastestmirror, langpacks
Available Packages
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
 * extras: mirrors.aliyun.com
docker-ce.x86_64            3:19.03.5-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.4-3.el7                     docker-ce-stable
......
docker-ce.x86_64            3:18.09.9-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.8-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.7-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.6-3.el7                     docker-ce-stable
......


3. Install Docker 18.09.9
yum -y install docker-ce-18.09.9-3.el7 docker-ce-cli-18.09.9

4. Start Docker and enable it at boot
systemctl enable docker && systemctl start docker

5. Configure the Aliyun Docker registry mirror (see the optional cgroup-driver note after step 8)
cat > /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://gqk8w9va.mirror.aliyuncs.com"]
}
EOF

6. Restart Docker after the change
systemctl restart docker

7. Verify the mirror configuration
docker info
Look for the Registry Mirrors line
Registry Mirrors:
 https://gqk8w9va.mirror.aliyuncs.com/

8. Check the Docker version
docker version

Client:
 Version:           18.09.9
 API version:       1.39
 Go version:        go1.11.13
 Git commit:        039a7df9ba
 Built:             Wed Sep  4 16:51:21 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.9
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.11.13
  Git commit:       039a7df
  Built:            Wed Sep  4 16:22:32 2019
  OS/Arch:          linux/amd64
  Experimental:     false
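The kubeadm preflight in section 2.2 will warn that Docker is using the cgroupfs cgroup driver while systemd is recommended. kubeadm detects Docker's driver and configures the kubelet to match, so the warning is harmless here; if you prefer to switch Docker to systemd anyway, a sketch of an extended daemon.json from step 5 (assuming no other daemon.json settings are present) would be:

cat > /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://gqk8w9va.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker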

1.9 Install kubeadm

Official Google yum repository (requires a proxy to reach Google services from mainland China)

cat >/etc/yum.repos.d/kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Or use the Aliyun yum repository instead

cat >/etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl (the Aliyun repo tracks the latest upstream release, so pin the version)

yum -y install kubelet-1.15.3 kubeadm-1.15.3 kubectl-1.15.3 --disableexcludes=kubernetes

//Check the version
kubeadm version

kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:11:18Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

Enable kubelet at boot

systemctl enable kubelet

Set up kubectl command completion

yum -y install bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

At this point, the basic environment setup is complete!

2. Initializing the Cluster

2.1 On the master node, create the kubeadm init configuration file

//Generate the default kubeadm config; the working directory here is /root
kubeadm config print init-defaults > kubeadm.yaml

//View the generated default file
cat kubeadm.yaml 

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Next, adjust the configuration to our needs: change the imageRepository value, set the kube-proxy mode to ipvs, and, since we are going to install the calico network plugin, set networking.podSubnet to 192.168.0.0/16.

The modified file looks like this

cat > kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.130  # apiserver node internal IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master   # master hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS  # DNS type
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.15.3  # Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs  # kube-proxy mode
EOF
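Before initializing, you can optionally pre-pull the required images with the same config file (this is the step the preflight output below also suggests):

kubeadm config images pull --config kubeadm.yaml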

2.2 Initialize the master

kubeadm init --config kubeadm.yaml

Full output
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [10.0.0.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [10.0.0.130 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.003902 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.130:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:b29cbd05af6924b95b74964f949a65e3367402d76a268295c333e792c3eb7ad2
#Images pulled when initializing the master with kubeadm init --config kubeadm.yaml
gcr.azk8s.cn/google_containers/kube-proxy                v1.15.3             232b5c793146        6 months ago        82.4MB
gcr.azk8s.cn/google_containers/kube-apiserver            v1.15.3             5eb2d3fc7a44        6 months ago        207MB
gcr.azk8s.cn/google_containers/kube-controller-manager   v1.15.3             e77c31de5547        6 months ago        159MB
gcr.azk8s.cn/google_containers/kube-scheduler            v1.15.3             703f9c69a5d5        6 months ago        81.1MB
gcr.azk8s.cn/google_containers/coredns                   1.3.1               eb516548c180        14 months ago       40.3MB
gcr.azk8s.cn/google_containers/etcd                      3.3.10              2c4adeb21b4f        15 months ago       258MB
gcr.azk8s.cn/google_containers/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB

#Containers started
ab18a4a6f78c        232b5c793146                               "/usr/local/bin/kube…"   About an hour ago   Up About an hour                        k8s_kube-proxy_kube-proxy-zns59_kube-system_7b431cd3-6572-4a5e-981e-c536bbad32ea_0
b8b3f6cf6702        gcr.azk8s.cn/google_containers/pause:3.1   "/pause"                 About an hour ago   Up About an hour                        k8s_POD_kube-proxy-zns59_kube-system_7b431cd3-6572-4a5e-981e-c536bbad32ea_0
dc2db22d2a84        e77c31de5547                               "kube-controller-man…"   About an hour ago   Up About an hour                        k8s_kube-controller-manager_kube-controller-manager-k8s-master_kube-system_5a41bf631425471a02fc0cca5725518d_0
8a25922bf185        2c4adeb21b4f                               "etcd --advertise-cl…"   About an hour ago   Up About an hour                        k8s_etcd_etcd-k8s-master_kube-system_3c2a1c3e9bec3ffe168c550fd855d63f_0
f90159ea56c4        5eb2d3fc7a44                               "kube-apiserver --ad…"   About an hour ago   Up About an hour                        k8s_kube-apiserver_kube-apiserver-k8s-master_kube-system_6eb25fe257b281826a14a7e276a75d61_0
b70fe56aab46        703f9c69a5d5                               "kube-scheduler --bi…"   About an hour ago   Up About an hour                        k8s_kube-scheduler_kube-scheduler-k8s-master_kube-system_f1217bcf1fd2d00391bfd0ec41d69a25_0
f084bc0cf4ae        gcr.azk8s.cn/google_containers/pause:3.1   "/pause"                 About an hour ago   Up About an hour                        k8s_POD_kube-controller-manager-k8s-master_kube-system_5a41bf631425471a02fc0cca5725518d_0
e93a8bebe038        gcr.azk8s.cn/google_containers/pause:3.1   "/pause"                 About an hour ago   Up About an hour                        k8s_POD_kube-scheduler-k8s-master_kube-system_f1217bcf1fd2d00391bfd0ec41d69a25_0
4d6cac595d0b        gcr.azk8s.cn/google_containers/pause:3.1   "/pause"                 About an hour ago   Up About an hour                        k8s_POD_kube-apiserver-k8s-master_kube-system_6eb25fe257b281826a14a7e276a75d61_0
f49e9fa0ba1d        gcr.azk8s.cn/google_containers/pause:3.1   "/pause"                 About an hour ago   Up About an hour                        k8s_POD_etcd-k8s-master_kube-system_3c2a1c3e9bec3ffe168c550fd855d63f_0

Copy the kubeconfig file

//The working directory here is /root
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
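As a quick optional sanity check that kubectl can now reach the API server:

kubectl cluster-info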

2.3 Add nodes to the cluster

Perform the same steps on node1 and node2

Copy the $HOME/.kube/config file from the master node to the corresponding path on each node

1. Create the directory; the working directory here is /root
mkdir -p $HOME/.kube 

2. Copy the config file from the master node to $HOME/.kube on node1 and node2
scp k8s-master:~/.kube/config $HOME/.kube

3. Fix the ownership
chown $(id -u):$(id -g) $HOME/.kube/config

Join node1 and node2 to the cluster

This uses the token and sha256 hash printed at the end of the master initialization in section 2.2

kubeadm join 10.0.0.130:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:b29cbd05af6924b95b74964f949a65e3367402d76a268295c333e792c3eb7ad2 


Output
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

If you forget the token or the sha256 hash, you can look them up on the master node with the following commands

//List tokens
kubeadm token list

TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
abcdef.0123456789abcdef   23h       2019-12-11T16:52:56+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token


//Compute the sha256 hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

b29cbd05af6924b95b74964f949a65e3367402d76a268295c333e792c3eb7ad2


//Generate a new token and print the full join command in one step
kubeadm token create --print-join-command

kubeadm join 10.0.0.130:6443 --token h8lbi6.27rrq8c6khonopqe     --discovery-token-ca-cert-hash sha256:b29cbd05af6924b95b74964f949a65e3367402d76a268295c333e792c3eb7ad2

Checking the nodes on the master shows that they are all NotReady, because no network plugin has been installed yet. Here we install calico (see the official plugin documentation).

kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   19m     v1.15.3
k8s-node1    NotReady   <none>   4m10s   v1.15.3
k8s-node2    NotReady   <none>   4m3s    v1.15.3

2.4 Install the calico network plugin on the master node

//Download the manifest
wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml

//Because some nodes have multiple NICs, the internal NIC must be specified in the manifest. Modify calico.yaml in the following places
Around lines 582-594, the original content is:
containers:
        # Runs calico-node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.8.5
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"


Modify it as follows (one environment variable added)
containers:
        # Runs calico-node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.8.5
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            - name: IP_AUTODETECTION_METHOD  # add this env var to the DaemonSet
              value: interface=eth0    # specify the internal NIC
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"


//After the edits, install the calico network plugin
kubectl apply -f calico.yaml

//Wait a moment after installation and check the pod status
kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5bbc8f45cb-rpm96   1/1     Running   0          6m17s
calico-node-cbmcz                          1/1     Running   0          6m17s
calico-node-jbjw4                          1/1     Running   0          6m17s
calico-node-qfnfm                          1/1     Running   0          6m17s
coredns-cf8fb6d7f-vkst4                    1/1     Running   0          50m
coredns-cf8fb6d7f-z622t                    1/1     Running   0          50m
etcd-k8s-master                            1/1     Running   0          49m
kube-apiserver-k8s-master                  1/1     Running   0          49m
kube-controller-manager-k8s-master         1/1     Running   0          50m
kube-proxy-68wzg                           1/1     Running   0          35m
kube-proxy-fbc95                           1/1     Running   0          35m
kube-proxy-g4xkc                           1/1     Running   0          50m
kube-scheduler-k8s-master                  1/1     Running   0          49m


//Check the node status
kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   55m   v1.15.3
k8s-node1    Ready    <none>   39m   v1.15.3
k8s-node2    Ready    <none>   39m   v1.15.3
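Since kube-proxy was configured with mode: ipvs, you can optionally confirm that IPVS is actually in use with the ipvsadm tool installed in section 1.5 (the exact log wording may vary between versions):

//List the IPVS virtual servers created by kube-proxy
ipvsadm -Ln

//Look for the ipvs proxier in the kube-proxy logs
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs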


#Images pulled by kubectl apply -f calico.yaml (the calico network plugin)
calico/node                                              v3.8.6              1b9ca446b4da        2 months ago        192MB
calico/pod2daemon-flexvol                                v3.8.6              97bfbee02d48        2 months ago        9.38MB
calico/cni                                               v3.8.6              33af7d7d46b6        2 months ago        161MB

#Containers started
4cfb387bd9a0        calico/node                                "start_runit"            5 minutes ago       Up 5 minutes                                   k8s_calico-node_calico-node-xc2tg_kube-system_0c89a070-b9e4-419e-916e-058b32ac64d1_0
b75783c7934c        calico/pod2daemon-flexvol                  "/usr/local/bin/flex…"   5 minutes ago       Exited (0) 5 minutes ago                       k8s_flexvol-driver_calico-node-xc2tg_kube-system_0c89a070-b9e4-419e-916e-058b32ac64d1_0
9e4766d2fa80        33af7d7d46b6                               "/install-cni.sh"        6 minutes ago       Exited (0) 6 minutes ago                       k8s_install-cni_calico-node-xc2tg_kube-system_0c89a070-b9e4-419e-916e-058b32ac64d1_0
f7f752fd62dc        calico/cni                                 "/opt/cni/bin/calico…"   6 minutes ago       Exited (0) 6 minutes ago                       k8s_upgrade-ipam_calico-node-xc2tg_kube-system_0c89a070-b9e4-419e-916e-058b32ac64d1_0

2.5 Install the Dashboard

Download the manifest and modify it

//Download the manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

//Change the image: comment out the original image line (line 112), then add the following two lines under the line "- name: kubernetes-dashboard"
  image: gcr.azk8s.cn/google_containers/kubernetes-dashboard-amd64:v1.10.1
  imagePullPolicy: IfNotPresent

//Change the Service to NodePort type by adding the following at the end of the file
selector:
  k8s-app: kubernetes-dashboard
type: NodePort    #Add this line; note it must be at the same indentation level as selector
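For reference, after this change the Service section at the end of kubernetes-dashboard.yaml should look roughly like the sketch below (based on the v1.10.1 recommended manifest; your copy may differ slightly):

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard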

Deploy the dashboard

kubectl apply -f kubernetes-dashboard.yaml

#Image pulled
gcr.azk8s.cn/google_containers/kubernetes-dashboard-amd64   v1.10.1             f9aed6605b81        15 months ago       122MB

#Container started
7fddf75bb284        gcr.azk8s.cn/google_containers/kubernetes-dashboard-amd64   "/dashboard --insecu…"   9 seconds ago       Up 9 seconds                            k8s_kubernetes-dashboard_kubernetes-dashboard-fcfb4cbc-v7lfh_kube-system_268913c8-f9df-4e9a-9f89-c9dda9462d38_0

Check the dashboard's status and the externally accessible port

//Check the dashboard status
kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard

NAME                                  READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-fcfb4cbc-fcczh   1/1     Running   0          3m40s



//Check the dashboard's NodePort
kubectl get svc -n kube-system -l k8s-app=kubernetes-dashboard

NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.96.100.138   <none>        443:31758/TCP   4m42s

Access the dashboard via port 31758 shown above; note that it must be https.

The dashboard uses a self-signed HTTPS certificate by default, which browsers do not trust, so you simply have to force the browser to proceed past the warning.

(Screenshot: dashboard login page)

Next, create a user with full cluster-wide permissions to log in to the Dashboard

//Create the admin.yaml file
cat > admin.yaml <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
EOF

//Apply it
kubectl apply -f admin.yaml

//Find the admin token secret
kubectl get secret -n kube-system|grep admin-token

admin-token-j7sfh                                kubernetes.io/service-account-token   3      23s


//Get the base64-decoded token string, using the secret name found above; this prints a very long string
kubectl get secret admin-token-j7sfh -o jsonpath={.data.token} -n kube-system |base64 -d

#Or do it all in one command
kubectl get secret `kubectl get secret -n kube-system|grep admin-token|awk '{print $1}'` -o jsonpath={.data.token} -n kube-system |base64 -d && echo

Then paste the decoded token string on the dashboard login page, choosing the Token option.

The home screen after logging in

(Screenshot: dashboard home page)

At this point, installing Kubernetes 1.15.3 with kubeadm is complete!

Command to retrieve the login token

kubectl get secret `kubectl get secret -n kube-system|grep admin-token|awk '{print $1}'` -o jsonpath={.data.token} -n kube-system |base64 -d

Containers started and images pulled for 1.15.3

#Containers started
7fddf75bb284        gcr.azk8s.cn/google_containers/kubernetes-dashboard-amd64   "/dashboard --insecu…"   8 minutes ago       Up 8 minutes                                    k8s_kubernetes-dashboard_kubernetes-dashboard-fcfb4cbc-v7lfh_kube-system_268913c8-f9df-4e9a-9f89-c9dda9462d38_0
fa592650d469        gcr.azk8s.cn/google_containers/pause:3.1                    "/pause"                 8 minutes ago       Up 8 minutes                                    k8s_POD_kubernetes-dashboard-fcfb4cbc-v7lfh_kube-system_268913c8-f9df-4e9a-9f89-c9dda9462d38_0
4cfb387bd9a0        calico/node                                                 "start_runit"            18 minutes ago      Up 18 minutes                                   k8s_calico-node_calico-node-xc2tg_kube-system_0c89a070-b9e4-419e-916e-058b32ac64d1_0
b75783c7934c        calico/pod2daemon-flexvol                                   "/usr/local/bin/flex…"   18 minutes ago      Exited (0) 18 minutes ago                       k8s_flexvol-driver_calico-node-xc2tg_kube-system_0c89a070-b9e4-419e-916e-058b32ac64d1_0
9e4766d2fa80        33af7d7d46b6                                                "/install-cni.sh"        18 minutes ago      Exited (0) 18 minutes ago                       k8s_install-cni_calico-node-xc2tg_kube-system_0c89a070-b9e4-419e-916e-058b32ac64d1_0
f7f752fd62dc        calico/cni                                                  "/opt/cni/bin/calico…"   18 minutes ago      Exited (0) 18 minutes ago                       k8s_upgrade-ipam_calico-node-xc2tg_kube-system_0c89a070-b9e4-419e-916e-058b32ac64d1_0
2cd3749c5b37        gcr.azk8s.cn/google_containers/pause:3.1                    "/pause"                 19 minutes ago      Up 19 minutes                                   k8s_POD_calico-node-xc2tg_kube-system_0c89a070-b9e4-419e-916e-058b32ac64d1_0
ab18a4a6f78c        232b5c793146                                                "/usr/local/bin/kube…"   2 hours ago         Up 2 hours                                      k8s_kube-proxy_kube-proxy-zns59_kube-system_7b431cd3-6572-4a5e-981e-c536bbad32ea_0
b8b3f6cf6702        gcr.azk8s.cn/google_containers/pause:3.1                    "/pause"                 2 hours ago         Up 2 hours                                      k8s_POD_kube-proxy-zns59_kube-system_7b431cd3-6572-4a5e-981e-c536bbad32ea_0
dc2db22d2a84        e77c31de5547                                                "kube-controller-man…"   2 hours ago         Up 2 hours                                      k8s_kube-controller-manager_kube-controller-manager-k8s-master_kube-system_5a41bf631425471a02fc0cca5725518d_0
8a25922bf185        2c4adeb21b4f                                                "etcd --advertise-cl…"   2 hours ago         Up 2 hours                                      k8s_etcd_etcd-k8s-master_kube-system_3c2a1c3e9bec3ffe168c550fd855d63f_0
f90159ea56c4        5eb2d3fc7a44                                                "kube-apiserver --ad…"   2 hours ago         Up 2 hours                                      k8s_kube-apiserver_kube-apiserver-k8s-master_kube-system_6eb25fe257b281826a14a7e276a75d61_0
b70fe56aab46        703f9c69a5d5                                                "kube-scheduler --bi…"   2 hours ago         Up 2 hours                                      k8s_kube-scheduler_kube-scheduler-k8s-master_kube-system_f1217bcf1fd2d00391bfd0ec41d69a25_0
f084bc0cf4ae        gcr.azk8s.cn/google_containers/pause:3.1                    "/pause"                 2 hours ago         Up 2 hours                                      k8s_POD_kube-controller-manager-k8s-master_kube-system_5a41bf631425471a02fc0cca5725518d_0
e93a8bebe038        gcr.azk8s.cn/google_containers/pause:3.1                    "/pause"                 2 hours ago         Up 2 hours                                      k8s_POD_kube-scheduler-k8s-master_kube-system_f1217bcf1fd2d00391bfd0ec41d69a25_0
4d6cac595d0b        gcr.azk8s.cn/google_containers/pause:3.1                    "/pause"                 2 hours ago         Up 2 hours                                      k8s_POD_kube-apiserver-k8s-master_kube-system_6eb25fe257b281826a14a7e276a75d61_0
f49e9fa0ba1d        gcr.azk8s.cn/google_containers/pause:3.1                    "/pause"                 2 hours ago         Up 2 hours                                      k8s_POD_etcd-k8s-master_kube-system_3c2a1c3e9bec3ffe168c550fd855d63f_0

#Images pulled
calico/node                                                 v3.8.6              1b9ca446b4da        2 months ago        192MB
calico/pod2daemon-flexvol                                   v3.8.6              97bfbee02d48        2 months ago        9.38MB
calico/cni                                                  v3.8.6              33af7d7d46b6        2 months ago        161MB
gcr.azk8s.cn/google_containers/kube-proxy                   v1.15.3             232b5c793146        6 months ago        82.4MB
gcr.azk8s.cn/google_containers/kube-apiserver               v1.15.3             5eb2d3fc7a44        6 months ago        207MB
gcr.azk8s.cn/google_containers/kube-scheduler               v1.15.3             703f9c69a5d5        6 months ago        81.1MB
gcr.azk8s.cn/google_containers/kube-controller-manager      v1.15.3             e77c31de5547        6 months ago        159MB
gcr.azk8s.cn/google_containers/coredns                      1.3.1               eb516548c180        14 months ago       40.3MB
gcr.azk8s.cn/google_containers/kubernetes-dashboard-amd64   v1.10.1             f9aed6605b81        15 months ago       122MB
gcr.azk8s.cn/google_containers/etcd                         3.3.10              2c4adeb21b4f        15 months ago       258MB
gcr.azk8s.cn/google_containers/pause                        3.1                 da86e6ba6ca1        2 years ago         742kB

Namespace-switching tool for Kubernetes

git clone https://github.com.cnpmjs.org/ahmetb/kubectx
cp kubectx/kubens /usr/local/bin

#List all namespaces
kubens

#Switch to the kube-system namespace
kubens kube-system
Context "kubernetes-admin@kubernetes" modified.
Active namespace is "kube-system".
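The same repository also ships kubectx for switching between kubeconfig contexts; installing it is optional:

cp kubectx/kubectx /usr/local/bin

#List all contexts
kubectx

#Switch to a context, e.g. the one created by kubeadm
kubectx kubernetes-admin@kubernetes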