Installing a Kubernetes Cluster (Offline) with kubeadm


 

Cluster overview

Either a single master or multiple masters will work; since resources are limited, this walkthrough demonstrates a single-master setup.

Role         IP
k8s-master   192.168.1.80
k8s-node1    192.168.1.81
k8s-node2    192.168.1.82

Initialize the base environment

Disable the firewall:
$ systemctl stop firewalld
$ systemctl disable firewalld

Disable SELinux:
$ sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
$ setenforce 0  # temporary

Disable swap:
$ swapoff -a  # temporary
$ vim /etc/fstab  # permanent: comment out the swap line
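To avoid hand-editing, a one-liner such as the following comments out any swap entries in /etc/fstab (a sketch; double-check the file afterwards):
$ sed -ri 's/.*swap.*/#&/' /etc/fstab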

Set the hostname:
$ hostnamectl set-hostname <hostname>

Add hosts entries on the master (the IPs must match the role table above):
$ cat >> /etc/hosts << EOF
192.168.1.80 k8s-master
192.168.1.81 k8s-node1
192.168.1.82 k8s-node2
EOF

Pass bridged IPv4 traffic to the iptables chains:
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system  # apply
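On a freshly booted host the two bridge sysctls above may fail to apply until the br_netfilter kernel module is loaded; a minimal sketch to load it now and on every boot:
$ modprobe br_netfilter
$ echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
$ sysctl --system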

Time synchronization:
$ yum install ntpdate -y
$ ntpdate time.windows.com
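Note that ntpdate against time.windows.com assumes internet access, which a truly offline cluster will not have. A hedged alternative, assuming some internal NTP server is reachable, is chrony (ntp.internal.example below is a placeholder):
$ yum -y install chrony
$ echo 'server ntp.internal.example iburst' >> /etc/chrony.conf  # placeholder server
$ systemctl enable --now chronyd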

Install iproute-tc (kubeadm's preflight checks warn if the tc binary is missing):
$ yum -y install iproute-tc


Prepare the packages

1. Install Docker offline

See the companion article "Docker概述及安装" (Docker overview and installation).
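The same --downloadonly trick used below for kubeadm also works for Docker. A minimal sketch (run on a machine with access to the docker-ce yum repo, then copy the RPMs to the offline nodes; configuring that repo is covered in the referenced article):
yum install --downloadonly --downloaddir=./docker-rpms docker-ce docker-ce-cli containerd.io
cd docker-rpms && yum -y localinstall *.rpm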

2. Install the kubeadm, kubectl, and kubelet packages

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@master01 ~]# yum install --downloadonly --downloaddir=./download kubeadm kubectl kubelet
[root@master01 ~]# cd download/

[root@master01 download]# yum -y localinstall *.rpm

Fetching a specific version

yum list kubelet kubeadm kubectl  --showduplicates|sort -r
[root@master01 download]# yum install --downloadonly --downloaddir=./download kubeadm-1.19.5 kubectl-1.19.5 kubelet-1.19.5

Package the RPMs

[root@master01 ~]# tar -zcvf kubeadm_rpm.tgz download/*

Send the kubeadm_rpm.tgz package to each node

[root@master01 ~]# ansible others -m unarchive -a "src=./kubeadm_rpm.tgz dest=~/ remote_src=no"
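These ansible commands assume an inventory group named others containing the worker nodes; a minimal sketch of /etc/ansible/hosts for this lab:
[others]
node01
node02
node03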

 

Install the kubeadm RPMs on each node

[root@master01 ~]# ansible others -m shell -a "cd ~/download && yum -y localinstall *.rpm && rm -rf ~/download"

 

Get the image list

[root@master01 ~]# kubeadm config images list
I0227 20:04:27.490751   81209 version.go:252] remote version is much newer: v1.20.4; falling back to: stable-1.19
W0227 20:04:32.228524   81209 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.19.8
k8s.gcr.io/kube-controller-manager:v1.19.8
k8s.gcr.io/kube-scheduler:v1.19.8
k8s.gcr.io/kube-proxy:v1.19.8
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

 

Fetching the images from mirrors in China

Note that kubeadm config images list above fell back to the newest 1.19 patch release (v1.19.8); to match the installed kubeadm exactly, pass --kubernetes-version v1.19.5 to it. The images used in the rest of this walkthrough are pinned to v1.19.5.

Method 1:

// pull the images
[root@master01 pki]# kubeadm config images list |sed -e 's/^/docker pull /g' -e   's#k8s.gcr.io#registry.aliyuncs.com/google_containers#g'|sh -x

 

// re-tag the images
[root@master01 pki]# docker images |grep google_containers|awk '{print "docker tag ",$1":"$2,$1":"$2}' |sed -e 's#registry.aliyuncs.com/google_containers#k8s.gcr.io#2' |sh -x

 

// remove the redundant mirror-tagged images
[root@master01 pki]# docker images | grep google_containers| awk '{print "docker rmi "  $1":"$2}' | sh -x

Method 2:

// pull the images
[root@master01 pki]# kubeadm config images list |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g'|sh -x

 

// re-tag the images
[root@master01 pki]# docker images |grep google_containers|awk '{print "docker tag ",$1":"$2,$1":"$2}' |sed -e 's#registry.cn-hangzhou.aliyuncs.com/google_containers#k8s.gcr.io#2' |sh -x

 

// remove the redundant mirror-tagged images
[root@master01 pki]# docker images | grep google_containers| awk '{print "docker rmi " $1":"$2}' | sh -x

Run the command and check the images:

[root@master01 ~]# docker images
REPOSITORY                            TAG        IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                 v1.19.5    6e5666d85a31   2 months ago    118MB
k8s.gcr.io/kube-controller-manager    v1.19.5    f196e958af67   2 months ago    111MB
k8s.gcr.io/kube-apiserver             v1.19.5    72efb76839e7   2 months ago    119MB
k8s.gcr.io/kube-scheduler             v1.19.5    350a602e5310   2 months ago    45.6MB
k8s.gcr.io/etcd                       3.4.13-0   0369cf4303ff   6 months ago    253MB
k8s.gcr.io/coredns                    1.7.0      bfe3a36ebd25   8 months ago    45.2MB
k8s.gcr.io/pause                      3.2        80d28bedfe5d   12 months ago   683kB

Batch-save the images into kubeadm.tgz

[root@master01 ~]# vim bale.sh 
#!/bin/bash
# Save each k8s.gcr.io image into its own tar file, named after the image.
images=(
kube-proxy:v1.19.5
kube-controller-manager:v1.19.5
kube-apiserver:v1.19.5
kube-scheduler:v1.19.5
etcd:3.4.13-0
coredns:1.7.0
pause:3.2
)
for imageName in ${images[@]}; do
        docker save -o `echo ${imageName}|awk -F ':' '{print $1}'`.tar k8s.gcr.io/${imageName}
done
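The script writes its tars into the current directory, so run it from a staging directory (a sketch, matching the bale/ listing below):
[root@master01 ~]# mkdir -p ~/bale && cd ~/bale && bash ~/bale.sh && cd ~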

 

[root@master01 ~]# ll bale
total 682760
-rw------- 1 root root  45365760 Feb 27 20:34 coredns.tar
-rw------- 1 root root 254679040 Feb 27 20:33 etcd.tar
-rw------- 1 root root 119930368 Feb 27 20:33 kube-apiserver.tar
-rw------- 1 root root 111959552 Feb 27 20:33 kube-controller-manager.tar
-rw------- 1 root root 119646208 Feb 27 20:33 kube-proxy.tar
-rw------- 1 root root  46861824 Feb 27 20:33 kube-scheduler.tar
-rw------- 1 root root    692736 Feb 27 20:34 pause.tar

[root@master01 ~]# tar zcvf ./kubeadm.tgz bale/*

Send the kubeadm.tgz image package to each node

[root@master01 ~]# ansible others -m  unarchive -a "src=./kubeadm.tgz dest=~/ remote_src=no"
node02 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": true, 
    "dest": "/root/", 
    "extract_results": {
        "cmd": [
            "/usr/bin/gtar", 
            "--extract", 
            "-C", 
            "/root/", 
            "-z", 
            "-f", 
            "/root/.ansible/tmp/ansible-tmp-1614432565.65-102233-83459356460586/source"
        ], 
        "err": "", 
        "out": "", 
        "rc": 0
    }, 
    "gid": 0, 
    "group": "root", 
    "handler": "TgzArchive", 
    "mode": "0550", 
    "owner": "root", 
    "size": 220, 
    "src": "/root/.ansible/tmp/ansible-tmp-1614432565.65-102233-83459356460586/source", 
    "state": "directory", 
    "uid": 0
}
node01 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": true, 
    "dest": "/root/", 
    "extract_results": {
        "cmd": [
            "/usr/bin/gtar", 
            "--extract", 
            "-C", 
            "/root/", 
            "-z", 
            "-f", 
            "/root/.ansible/tmp/ansible-tmp-1614432565.66-102231-79107939803899/source"
        ], 
        "err": "", 
        "out": "", 
        "rc": 0
    }, 
    "gid": 0, 
    "group": "root", 
    "handler": "TgzArchive", 
    "mode": "0550", 
    "owner": "root", 
    "size": 232, 
    "src": "/root/.ansible/tmp/ansible-tmp-1614432565.66-102231-79107939803899/source", 
    "state": "directory", 
    "uid": 0
}
node03 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": true, 
    "dest": "/root/", 
    "extract_results": {
        "cmd": [
            "/usr/bin/gtar", 
            "--extract", 
            "-C", 
            "/root/", 
            "-z", 
            "-f", 
            "/root/.ansible/tmp/ansible-tmp-1614432565.65-102234-234756638205670/source"
        ], 
        "err": "", 
        "out": "", 
        "rc": 0
    }, 
    "gid": 0, 
    "group": "root", 
    "handler": "TgzArchive", 
    "mode": "0550", 
    "owner": "root", 
    "size": 204, 
    "src": "/root/.ansible/tmp/ansible-tmp-1614432565.65-102234-234756638205670/source", 
    "state": "directory", 
    "uid": 0
}

 

Load the kubeadm images on each node

[root@master01 ~]# vim load.sh 

#!/bin/bash
# Load every image tar under ~/bale, then remove the staging directory.
File=~/bale
ls $File/*.tar > $File/images-list.txt
cd $File
for i in $(cat $File/images-list.txt)
do
        docker load -i $i
done
rm -rf $File

Run it with ansible (the output is long, so it is not pasted here):

[root@master01 ~]# ansible others -m script -a "load.sh"

Spot-check one of the nodes:

[root@node02 ~]# docker images
REPOSITORY                           TAG        IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.19.5    6e5666d85a31   2 months ago    118MB
k8s.gcr.io/kube-apiserver            v1.19.5    72efb76839e7   2 months ago    119MB
k8s.gcr.io/kube-controller-manager   v1.19.5    f196e958af67   2 months ago    111MB
k8s.gcr.io/kube-scheduler            v1.19.5    350a602e5310   2 months ago    45.6MB
k8s.gcr.io/etcd                      3.4.13-0   0369cf4303ff   6 months ago    253MB
k8s.gcr.io/coredns                   1.7.0      bfe3a36ebd25   8 months ago    45.2MB
k8s.gcr.io/pause                     3.2        80d28bedfe5d   12 months ago   683kB

 

Change the Docker cgroup driver to systemd

[root@master01 ~]# docker info | grep "Cgroup Driver"  
 Cgroup Driver: cgroupfs
According to the CRI installation documentation, on Linux distributions that use systemd as the init system, using systemd as Docker's cgroup driver keeps nodes more stable under resource pressure, so we switch the Docker cgroup driver to systemd on every node.
# mkdir /etc/docker  # the directory does not exist before Docker has started
# vim /etc/docker/daemon.json  # create the file if it does not exist
{
 "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@master01 ~]# vim /etc/docker/daemon.json 
[root@master01 ~]# systemctl restart docker
[root@master01 ~]# docker info | grep "Cgroup Driver"  
 Cgroup Driver: systemd

The above is just for reference; use ansible to roll the Docker configuration out to every node.

[root@master01 ~]# vim node_docker.sh
#!/bin/bash
# Write daemon.json (systemd cgroup driver, data root, registry settings), then restart Docker.
mkdir -p /etc/docker /data/docker
cat >/etc/docker/daemon.json<<x
 {
   "graph": "/data/docker",
   "storage-driver": "overlay2",
   "insecure-registries": ["garbor.qytang.com"],
   "registry-mirrors": ["https://fooyh53n.mirror.aliyuncs.com"],
   "exec-opts": ["native.cgroupdriver=systemd"],
   "live-restore": true
 }
x
systemctl restart docker
systemctl enable docker
echo `docker info | grep "Cgroup Driver"`

Run it on every node with the ansible script module:

[root@master01 ~]# ansible others -m script -a "node_docker.sh"
node01 | CHANGED => {
    "changed": true, 
    "rc": 0, 
    "stderr": "Shared connection to node01 closed.\r\n", 
    "stderr_lines": [
        "Shared connection to node01 closed."
    ], 
    "stdout": "Cgroup Driver: systemd\r\n", 
    "stdout_lines": [
        "Cgroup Driver: systemd"
    ]
}
node03 | CHANGED => {
    "changed": true, 
    "rc": 0, 
    "stderr": "Shared connection to node03 closed.\r\n", 
    "stderr_lines": [
        "Shared connection to node03 closed."
    ], 
    "stdout": "Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.\r\nCgroup Driver: systemd\r\n", 
    "stdout_lines": [
        "Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.", 
        "Cgroup Driver: systemd"
    ]
}
node02 | CHANGED => {
    "changed": true, 
    "rc": 0, 
    "stderr": "Shared connection to node02 closed.\r\n", 
    "stderr_lines": [
        "Shared connection to node02 closed."
    ], 
    "stdout": "Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.\r\nCgroup Driver: systemd\r\n", 
    "stdout_lines": [
        "Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.", 
        "Cgroup Driver: systemd"
    ]
}

 

Generate a self-signed root certificate

Download the self-signing toolchain (cfssl):

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

Make the binaries executable:

chmod +x cfssl*

Rename them (strip the _linux-amd64 suffix):

for x in cfssl*; do mv $x ${x%*_linux-amd64};  done

Move them into /usr/bin:

mv cfssl* /usr/bin

 

Generate the certificate

mkdir -p /opt/certs
# self-signed root CA (20-year validity)
cat > /opt/certs/ca-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ],
    "ca": {
          "expiry": "175200h"
    }
}
EOF


Initialize the CA:

[root@master01 certs]# cfssl gencert -initca ca-csr.json|cfssljson -bare ca
2021/02/27 14:32:05 [INFO] generating a new CA key and certificate from CSR
2021/02/27 14:32:05 [INFO] generate received request
2021/02/27 14:32:05 [INFO] received CSR
2021/02/27 14:32:05 [INFO] generating key: rsa-2048
2021/02/27 14:32:05 [INFO] encoded CSR
2021/02/27 14:32:05 [INFO] signed certificate with serial number 573064007157510450719317684764236397056657440283

Copy the certificate into the master's control-plane directory (kubeadm reuses an existing CA found in /etc/kubernetes/pki instead of generating a new one):

mkdir -p /etc/kubernetes/pki
cp -a /opt/certs/ca.pem /etc/kubernetes/pki/ca.crt
cp -a /opt/certs/ca-key.pem /etc/kubernetes/pki/ca.key

 

Initialize the master

Commonly used kubeadm init flags:

--kubernetes-version  # pin the Kubernetes version
--image-repository    # kubeadm pulls from k8s.gcr.io by default, which is unreachable from China, so point it at the Aliyun mirror instead
--pod-network-cidr    # pod network CIDR
--service-cidr        # service network CIDR
--ignore-preflight-errors=Swap  # ignore swap preflight errors
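Put together, a flag-based invocation equivalent to the config file below might look like this (a sketch for reference; this walkthrough uses the config file instead):
kubeadm init \
  --kubernetes-version=v1.19.5 \
  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
  --pod-network-cidr=172.16.0.0/16 \
  --service-cidr=10.1.0.0/16 \
  --ignore-preflight-errors=Swap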

The kubeadm config file

// work inside the certificate directory
cd /etc/kubernetes/pki

// write the config file
cat > ./kubeadm-config.yaml<<x
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.5
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "192.168.1.80:6443"  ## control-plane endpoint; with multiple masters this is the load balancer address
networking:
  serviceSubnet:  "10.1.0.0/16"  ## service network
  podSubnet: "172.16.0.0/16"  ## pod network
  dnsDomain: "cluster.local"
x

Initialize the master:

kubeadm init --config=kubeadm-config.yaml --upload-certs
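To validate the configuration first, kubeadm init also supports a dry run that prints what would happen without changing the host (a sketch):
kubeadm init --config=kubeadm-config.yaml --dry-run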

 

Following the hints printed on successful init, set up the kubectl config:

[root@master01 ~]#   mkdir -p $HOME/.kube
[root@master01 ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master01 ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check node status:

[root@master01 ~]# kubectl get nodes
NAME       STATUS     ROLES    AGE     VERSION
master01   NotReady   master   4m27s   v1.19.5

 

Deploy the Calico network add-on

Calico documentation: https://docs.projectcalico.org/releases

Calico release downloads: https://github.com/projectcalico/calico/releases

[root@master01 ~]# tar -xf release-v3.18.0.tgz
[root@master01 ~]# cd /root/release-v3.18.0/images
// load the images
[root@master01 images]# find . -name "*.tar" -exec docker image load -i {} \;

 

Set the pod CIDR

[root@master01 k8s-manifests]# vim calico.yaml
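In the stock manifest the pod CIDR is set via an environment variable on the calico-node container; a sketch of the change, matching the podSubnet from kubeadm-config.yaml:
            - name: CALICO_IPV4POOL_CIDR
              value: "172.16.0.0/16"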

When a host has multiple NICs

When Calico handles routing, every node must have an IPv4 and/or IPv6 address to use for inter-node routing. To avoid per-node IP configuration, the calico/node container can auto-detect its address. On many systems, however, a host has several physical NICs, or several IPs on one NIC; auto-detection then has multiple candidates and may pick the wrong one, so pin the interface explicitly:

            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens.*"
[root@master01 k8s-manifests]# kubectl create -f calico.yaml
[root@master01 k8s-manifests]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
default       busybox                                   1/1     Running   0          5m10s
kube-system   calico-kube-controllers-5c6446dd9-pntqz   1/1     Running   0          17m
kube-system   calico-node-7n985                         1/1     Running   0          20m
kube-system   calico-node-lm9rd                         1/1     Running   0          20m
kube-system   calico-node-vk6q2                         1/1     Running   0          27m
kube-system   calico-node-ztbj7                         1/1     Running   0          19m
kube-system   coredns-6c76c8bb89-djc9f                  1/1     Running   0          14m
kube-system   coredns-6c76c8bb89-fb2cb                  1/1     Running   0          14m
kube-system   etcd-master01                             1/1     Running   0          25h
kube-system   kube-apiserver-master01                   1/1     Running   0          25h
kube-system   kube-controller-manager-master01          1/1     Running   1          25h
kube-system   kube-proxy-52x7d                          1/1     Running   0          14h
kube-system   kube-proxy-99s9v                          1/1     Running   0          14h
kube-system   kube-proxy-h4p7s                          1/1     Running   0          14h
kube-system   kube-proxy-rvrkr                          1/1     Running   0          25h
kube-system   kube-scheduler-master01                   1/1     Running   1          25h

 

Scale out the cluster (join the worker nodes)

Log in to each worker node (node01, node02, and node03 here) and run the join command:

[root@node01 ~]# kubeadm join 192.168.1.80:6443 --token bdc97c.rhrzq1t5klbkv5pk     --discovery-token-ca-cert-hash sha256:2bcd3b460a25360ac29b7931c2a4f119eca073fea978020479cc398e41b37a2b 
[root@node02 ~]# kubeadm join 192.168.1.80:6443 --token bdc97c.rhrzq1t5klbkv5pk     --discovery-token-ca-cert-hash sha256:2bcd3b460a25360ac29b7931c2a4f119eca073fea978020479cc398e41b37a2b 
[root@node03 ~]# kubeadm join 192.168.1.80:6443 --token bdc97c.rhrzq1t5klbkv5pk     --discovery-token-ca-cert-hash sha256:2bcd3b460a25360ac29b7931c2a4f119eca073fea978020479cc398e41b37a2b 

 

The default token is valid for 24 hours; once it expires it can no longer be used and a new token must be created:

# kubeadm token create
# kubeadm token list
# derive the discovery hash from the CA certificate:
# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
63bca849e0e01691ae14eab449570284f0c3ddeea590f8da988c07fe2729e924

# then join with the new token and hash (illustrative values):
# kubeadm join 192.168.31.61:6443 --token nuja6n.o3jrhsffiqs9swnu --discovery-token-ca-cert-hash sha256:63bca849e0e01691ae14eab449570284f0c3ddeea590f8da988c07fe2729e924

 

If you forget the token, you can regenerate everything with a single command (run on the master); the output is the complete join command to run on the new node:

[root@master01 k8s-manifests]# kubeadm token create --print-join-command

kubeadm join 192.168.1.2:6443 --token le4z2t.w3kal4h22ol23ncx     --discovery-token-ca-cert-hash sha256:493a87d8257467cbb3d75eba9695d1d4662a55ee5202113e0a7dca024fefe6db 

 

Add a new master node

# generate the certificate key; this value is needed when joining the cluster as a master
[root@master01 k8s-manifests]# kubeadm init phase upload-certs --upload-certs

[upload-certs] Using certificate key:
ab694a41eb8dfb4f5188c24c51a92a3b5a5e5a698c433e7d9e1b2e1abd0b2660

Join as an additional master:

kubeadm join 192.168.1.80:6443 --token gks5g5.s2xcgz47ygll4nz3     --discovery-token-ca-cert-hash sha256:2bcd3b460a25360ac29b7931c2a4f119eca073fea978020479cc398e41b37a2b --control-plane --certificate-key ab694a41eb8dfb4f5188c24c51a92a3b5a5e5a698c433e7d9e1b2e1abd0b2660

 

Other node-management commands

# drain a node before maintenance or removal
kubectl drain <node> --delete-local-data --ignore-daemonsets

# delete a node from the cluster
kubectl delete node <node>

# bring a cordoned node back into scheduling
kubectl uncordon <node>

 

Label the worker nodes

[root@master01 ~]# kubectl label node node01 node-role.kubernetes.io/node=
[root@master01 ~]# kubectl label node node02 node-role.kubernetes.io/node=
[root@master01 ~]# kubectl label node node03 node-role.kubernetes.io/node=

[root@master01 k8s-manifests]# kubectl get node
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   24h   v1.19.5
node01     Ready    node     14h   v1.19.5
node02     Ready    node     14h   v1.19.5
node03     Ready    node     14h   v1.19.5

 

Test CoreDNS

Create a busybox pod for nslookup tests:

cat >busybox.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

Check DNS resolution:

[root@master01 k8s-manifests]# kubectl apply -f busybox.yaml 
pod/busybox created

[root@master01 k8s-manifests]# kubectl exec busybox -- cat /etc/resolv.conf
nameserver 10.1.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

Exec into the pod and test:
[root@master01 k8s-manifests]# kubectl exec -ti busybox -- sh
/ # nslookup kubernetes
Server:    10.1.0.10
Address 1: 10.1.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.1.0.1 kubernetes.default.svc.cluster.local

 

 

 

Deploy the Dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

By default the Dashboard is only reachable from inside the cluster; change its Service to NodePort to expose it externally:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort

Access it at: https://NodeIP:30001 (the Dashboard serves HTTPS)
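Instead of editing the YAML by hand, an equivalent one-liner (a sketch using kubectl patch) is:
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort","ports":[{"port":443,"nodePort":30001}]}}'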

Create a service account and bind it to the built-in cluster-admin role:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Fixing Chrome refusing to open the Kubernetes Dashboard page

Many browsers reject the certificate that kubeadm generates automatically, so we create our own.

Create a directory to hold the certificate files:

[root@master01 ~]# mkdir key && cd key

Generate the certificate:

openssl genrsa -out dashboard.key 2048
Generating RSA private key, 2048 bit long modulus
.................+++
......+++
e is 65537 (0x10001)

# use your master node's IP as the CN (192.168.1.5 in this example)
[root@master01 key]# openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.1.5'
[root@master01 key]# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
Signature ok
subject=/CN=192.168.1.5
Getting Private key

Delete the original certificate secret

Note that recent Dashboard releases live in the kubernetes-dashboard namespace.

[root@master01 key]# kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
secret "kubernetes-dashboard-certs" delet

Create a secret from the new certificate:

[root@master01 key]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
secret/kubernetes-dashboard-certs created

Check the running pods:

[root@master01 key]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-694557449d-k77vn   1/1     Running   0          87s
kubernetes-dashboard-9774cc786-sc9pc         1/1     Running   0          87s

Delete the pods so they restart with the new certificate:

kubectl delete po kubernetes-dashboard-b65488c4-rcdjh -n kubernetes-dashboard
pod "kubernetes-dashboard-b65488c4-rcdjh" deleted
kubectl delete po dashboard-metrics-scraper-76585494d8-dzgt9 -n kubernetes-dashboard
pod "dashboard-metrics-scraper-76585494d8-dzgt9" deleted

When there are many pods, this one-liner deletes them in bulk:

[root@master01 key]# kubectl get pod -n kubernetes-dashboard | grep -v NAME | awk '{print "kubectl delete po " $1 " -n kubernetes-dashboard"}' | sh
pod "dashboard-metrics-scraper-694557449d-k77vn" deleted
pod "kubernetes-dashboard-9774cc786-sc9pc" deleted

After deletion the new pods start automatically, and the Kubernetes web page becomes reachable.

Create a service account and bind it to the built-in cluster-admin role (the account and the binding must reference the same namespace; if the dashboard-admin binding from the earlier section still exists, delete it first):

[root@master01 key]# kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
serviceaccount/dashboard-admin created
[root@master01 key]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
[root@master01 key]# kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')
To delete a service account:
[root@master01 key]# kubectl delete sa -n kube-system dashboard-admin
serviceaccount "dashboard-admin" deleted

Fixing the token expiry problem

The Dashboard token lifetime is set by the token-ttl argument; add it to the Dashboard container args in the deployment YAML and re-apply:

ports:
- containerPort: 8443
  protocol: TCP
args:
  - --auto-generate-certificates
  - --token-ttl=43200

Re-apply the Dashboard:
[root@master01 ~]# kubectl apply -f dashboard.yaml 
Fetch the token again:
[root@master01 ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-rl9s5
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 2c52bfbf-ba25-4a25-b3e3-9fb21aaeaab5

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkNDalMxaE1zbVJleEdoTE9qb1N6QzhjaHhNVnZrcE9ycXJTUGd5bF9wMVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcmw5czUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMmM1MmJmYmYtYmEyNS00YTI1LWIzZTMtOWZiMjFhYWVhYWI1Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.NvNmy7_Uh6Se4-tlFPd_MOqcc-HgJezf8gxEMTzkuCOvyGheGZV6j_tBTlkEBWC6pNCvEQL0WeaxWUVnksXZhziIzcbgNAlVE11Ui-eQSxWNMqvII6uBlwxR7wBA8JUyRi2h3wy7Is64Hc7xj8Fya6iq966ctyUGRSHc1XMVgur9p0nNGvSmiwufDmp1NIDd-8NM760xrruizgCskJYS2K_BR2AzCjrtsHzM9CpdUyeB2nC0h_B9ZZHKPM6lVlqtU9Ta5AN9poG-Wl3KPzu6_9NAZU1PnoFm9C7dAe2oqUe9YKC1Vp9Vi_wuoRdETuGydSBXeu0cgocIwDVZR4NBLw
Paste this token into the web UI again, and the default 15-minute re-authentication prompt will no longer appear.

To clean up the environment, run the following on whichever node you want to reset (it only affects the node it runs on):
kubeadm reset

 

 
