Kubernetes Binary Installation and Deployment


August 21, 2021

Kubernetes is an open-source container orchestration tool from Google, born out of Borg, the cluster system Google had run internally for many years. Thanks to that pedigree it drew broad industry attention from the start, quickly became the mainstream orchestrator, and has a very active community with a remarkable release cadence. Its main job is to orchestrate Docker containers; Docker is only one container engine and K8s supports several, but Docker currently holds an absolute lead.

Why containers need orchestration is easy to see. The core business stack we run is usually some variation of LNMT/P, and the NMT/P components must start in order: MySQL first, then Tomcat or PHP, and finally Nginx. Enforcing that ordering is exactly what an orchestration tool does. We also want the business online 24x7, which people cannot guarantee in real time, but an orchestrator can: K8s ships many controllers that watch container state and automatically rebuild containers (in the container era, a restart is a rebuild). When capacity runs short, it automatically creates new Pods according to the scaling rules we define (the Pod is the smallest unit in K8s; each Pod can hold one or more containers), and deletes Pods again once the load drops.
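
A quick illustration of the self-healing behavior described above, to try once the cluster built in this article is up (the deployment name my-nginx is just an example):

# Create a Deployment with one replica
kubectl create deployment my-nginx --image=nginx --replicas=1
# Kill its Pod; the Deployment controller immediately creates a replacement
kubectl delete pod -l app=my-nginx
kubectl get pods -l app=my-nginx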

I. Environment Preparation

1.0 How Kubernetes works

1.1 Server environment

    – Recommended minimum hardware: 2 CPU cores, 2 GB RAM, 30 GB disk
    – CSR files and configuration files: see the accompanying folder

1.2 Component deployment overview

1.3 Keys and CSR files for each component

1.4 Components and their configuration files

1.5 Network planning

Reserved IP range reference: https://blog.csdn.net/weixin_41282397/article/details/80705162

II. Initialize the Servers

    – Download the cfssl tools
    – Set hostnames
    – Disable SELinux and the firewall
    – Download the Kubernetes binaries
    – Synchronize time
    – Configure passwordless SSH between hosts
    – Install dependency packages and flush iptables rules
    – Install docker-ce
    – Pass bridged IPv4 traffic to iptables chains and load the ipvs modules

2.0 Download the cfssl tools


# Download cfssl in preparation for generating certificates

wget -O /usr/local/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget -O /usr/local/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget -O /usr/local/bin/cfssl-certinfo  https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x /usr/local/bin/cfssl*
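
To confirm the binaries are usable:

cfssl version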

2.1 Set hostnames


# Set hostnames according to the plan (run the matching line on each node)
hostnamectl set-hostname k8s-master1 && bash
hostnamectl set-hostname k8s-node1 && bash
hostnamectl set-hostname k8s-node2 && bash

# Add hosts entries on all nodes

vi /etc/hosts
10.0.8.14  k8s-master1
10.0.8.16  k8s-node1 
10.0.8.17  k8s-node2

2.2 Disable the firewall, SELinux, and swap


# Disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent
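
A quick check that the changes took effect (getenforce reports Permissive until the next reboot):

getenforce
free -h   # the Swap line should show 0B after swapoff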

2.3 Download the Kubernetes binaries


# Official download link: https://dl.k8s.io/v1.20.9/kubernetes-server-linux-amd64.tar.gz
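
For example (v1.20.9, the version used throughout this article):

wget https://dl.k8s.io/v1.20.9/kubernetes-server-linux-amd64.tar.gz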

2.4 Time synchronization


# Install chrony
yum install chrony -y

# Enable at boot and start
systemctl enable --now chronyd

# Check the configured time sources
chronyc sources -v

# Add a cron job to re-check synchronization every 10 minutes
crontab -e
*/10 * * * * chronyc sources  >/dev/null 2>&1 &

# List the cron jobs
crontab -l
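
To confirm the clock is actually being disciplined:

chronyc tracking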

2.5 Configure passwordless SSH between hosts


# Generate an SSH key pair; press Enter through the prompts, no passphrase
ssh-keygen -t rsa

# Install the local public key into the matching account on each remote host
ssh-copy-id -i .ssh/id_rsa.pub k8s-master1
ssh-copy-id -i .ssh/id_rsa.pub k8s-node1
ssh-copy-id -i .ssh/id_rsa.pub k8s-node2

2.6 Install dependencies and flush iptables rules


# All nodes

yum install wget yum-utils net-tools tar curl jq ipvsadm ipset conntrack iptables sysstat libseccomp iptables-services telnet -y

# Flush iptables rules
systemctl disable --now iptables && iptables -F

2.7 Install docker-ce


# All nodes

# Add the Docker package repository
curl https://download.docker.com/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo

# Switch to a mirror (adjust as needed)
sed -i 's+download.docker.com+mirrors.cloud.tencent.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo

# Install docker-ce

yum install docker-ce -y

# Create the Docker image storage directory and tune the Docker configuration
mkdir -p /etc/docker /data/docker

vi /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "registry-mirrors": [
    "https://mirror.ccs.tencentyun.com",
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn",
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "max-concurrent-downloads": 5,
  "max-concurrent-uploads": 3,
  "log-opts": {
     "max-size": "300m",
     "max-file": "2"  
   },
  "live-restore": true 
}

# Start Docker and enable it at boot
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
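
A quick sanity check that the daemon picked up the configuration (should print Cgroup Driver: systemd and Docker Root Dir: /data/docker):

docker info | grep -E 'Cgroup Driver|Docker Root Dir'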


2.8 Pass bridged IPv4 traffic to iptables chains


# Pass bridged IPv4 traffic to iptables chains

vi /etc/sysctl.d/k8s.conf 
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384


sysctl --system  # apply
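
Note: the net.bridge.* keys only exist while the br_netfilter module is loaded, so on a fresh boot sysctl --system may report them as unknown. Loading the module first (now and at boot) avoids that:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system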



# Load the ipvs modules

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
lsmod | grep ip_vs
lsmod | grep nf_conntrack

# Load the ipvs modules automatically at boot
vi /etc/modules-load.d/ipvs.conf 
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip


systemctl enable --now systemd-modules-load.service

III. Deploy Kubernetes

    – Generate the CA certificate

    – Deploy the etcd cluster

    – Deploy kube-apiserver

    – Deploy kube-controller-manager

    – Deploy kube-scheduler

    – Set up the cluster admin role

    – Enable TLS Bootstrapping and authorize the kubelet-bootstrap user to request certificates

    – Deploy kubelet

    – Deploy kube-proxy

3.1 Generate the CA certificate and private key


# Create the JSON files for the certificate CSRs

mkdir /root/tls
cd /root/tls

# JSON files needed for the CA certificate: ca-config.json (CA configuration) and ca-csr.json (CA certificate signing request)
vi ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ],
    "ca": {
       "expiry": "87600h"
    }
}


vi ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}


# Generate the CA certificate; this produces two files: ca.pem and ca-key.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
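
Optionally inspect the result (openssl is present on CentOS by default):

openssl x509 -in ca.pem -noout -subject -issuer -dates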


3.2 Deploy the etcd cluster


# Create the etcd CSR file
vi  etcd-csr.json
{
    "CN": "etcd",
    "hosts": [
     "127.0.0.1",
     "10.0.8.14",
     "10.0.8.16",
     "10.0.8.17"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
         "C": "CN",
         "L": "BeiJing",
         "ST": "BeiJing"
        }
    ]
}

# Generate the etcd certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

# Download the etcd binary package, create the etcd working directory, and unpack the archive

# Official download link
https://github.com/etcd-io/etcd/releases/download/v3.5.0/etcd-v3.5.0-linux-amd64.tar.gz

# Create the etcd working directory

mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.5.0-linux-amd64.tar.gz
cp -r etcd-v3.5.0-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

# Create the etcd configuration file
vi  /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.8.14:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.8.14:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.8.14:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.8.14:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://10.0.8.14:2380,etcd-2=https://10.0.8.16:2380,etcd-3=https://10.0.8.17:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"


# Configuration parameters explained
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer advertise address
ETCD_ADVERTISE_CLIENT_URLS: client advertise address
ETCD_INITIAL_CLUSTER: addresses of the cluster nodes
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; new for a new cluster, existing to join an existing one

# Manage etcd with systemd
vi  /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/etcd.pem \
--key-file=/opt/etcd/ssl/etcd-key.pem \
--peer-cert-file=/opt/etcd/ssl/etcd.pem \
--peer-key-file=/opt/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target


# Copy the certificates to /opt/etcd/ssl/
cp ca*pem etcd*pem /opt/etcd/ssl/

# Start etcd and enable it at boot
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

# Copy everything generated on k8s-master1 to k8s-node1 and k8s-node2
scp -r /opt/etcd/ root@k8s-node1:/opt/
scp /usr/lib/systemd/system/etcd.service root@k8s-node1:/usr/lib/systemd/system/
scp -r /opt/etcd/ root@k8s-node2:/opt/
scp /usr/lib/systemd/system/etcd.service root@k8s-node2:/usr/lib/systemd/system/

# Then, on k8s-node1 and k8s-node2, edit etcd.conf and change the node name and the server's own IP:
vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-2"   # 修改此处,节点2改为etcd-2,节点3改为etcd-3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.8.16:2380"   # 修改此处为当前服务器IP
ETCD_LISTEN_CLIENT_URLS="https://10.0.8.16:2379" # 修改此处为当前服务器IP

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.8.16:2380" # 修改此处为当前服务器IP
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.8.16:2379" # 修改此处为当前服务器IP
ETCD_INITIAL_CLUSTER="etcd-1=https://10.0.8.14:2380,etcd-2=https://10.0.8.16:2380,etcd-3=https://10.0.8.17:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# Start etcd on k8s-node1 and k8s-node2
systemctl daemon-reload
systemctl restart etcd
systemctl enable etcd

3.2.1 etcd command usage


# Check etcd cluster health

ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
--cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem \
--endpoints="https://10.0.8.14:2379,https://10.0.8.16:2379,https://10.0.8.17:2379" \
endpoint health --write-out=table

# Check etcd node status

ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
--cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem \
--endpoints="https://10.0.8.14:2379,https://10.0.8.16:2379,https://10.0.8.17:2379" \
endpoint status --write-out=table

# List etcd members

ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
--cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem \
--endpoints="https://10.0.8.14:2379,https://10.0.8.16:2379,https://10.0.8.17:2379" \
member list  --write-out=table

# Delete all data
ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
--cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem \
--endpoints="https://10.0.8.14:2379,https://10.0.8.16:2379,https://10.0.8.17:2379" \
del / --prefix

# Dump all Kubernetes data stored in etcd

ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
--cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem \
--endpoints="https://10.0.8.14:2379,https://10.0.8.16:2379,https://10.0.8.17:2379" \
get / --prefix --keys-only
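
Since every call repeats the same TLS flags, a shell alias keeps interactive use short (a convenience sketch, valid for the current shell session only):

alias etcdctl='ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://10.0.8.14:2379,https://10.0.8.16:2379,https://10.0.8.17:2379"'
etcdctl endpoint health --write-out=table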

3.3 Deploy kube-apiserver



# Create the kube-apiserver CSR file

vi kube-apiserver-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "172.16.0.1",
      "10.0.8.14",
      "10.0.8.16",
      "10.0.8.17",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}


# Generate the kube-apiserver certificate files: kube-apiserver.pem and kube-apiserver-key.pem

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

# Unpack the binary package

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager kube-proxy kubelet /opt/kubernetes/bin
cp kubectl /usr/local/bin/

# Create the kube-apiserver configuration file
vi /opt/kubernetes/cfg/kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://10.0.8.14:2379,https://10.0.8.16:2379,https://10.0.8.17:2379 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=10.0.8.14 \
--allow-privileged=true \
--service-cluster-ip-range=172.16.0.0/12 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/opt/kubernetes/ssl/kube-apiserver-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/kube-apiserver.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--service-account-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/etcd.pem \
--etcd-keyfile=/opt/etcd/ssl/etcd-key.pem \
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
--proxy-client-cert-file=/opt/kubernetes/ssl/kube-apiserver.pem \
--proxy-client-key-file=/opt/kubernetes/ssl/kube-apiserver-key.pem \
--requestheader-allowed-names=kubernetes \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--enable-aggregator-routing=true \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"


# Note: the trailing backslash at the end of each line above is a line continuation. If you create the file with a heredoc (cat <<EOF) instead of vi, write it as \\ so the backslash itself is preserved in the file.

# kube-apiserver configuration explained
--logtostderr: log to files rather than stderr
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: HTTPS port
--advertise-address: cluster advertise address
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC authorization and node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to reach kubelets
--tls-xxx-file: apiserver HTTPS certificates
Parameters required since 1.20: --service-account-issuer, --service-account-signing-key-file
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings
Aggregation layer settings: --requestheader-client-ca-file, --proxy-client-cert-file, --proxy-client-key-file, --requestheader-allowed-names, --requestheader-extra-headers-prefix, --requestheader-group-headers, --requestheader-username-headers, --enable-aggregator-routing

# Copy the kube-apiserver certificates into place
cp ca*pem kube-apiserver*pem /opt/kubernetes/ssl/

# Create the token file referenced in kube-apiserver.conf; format: token,user,uid,group
# (the token comes from a shell substitution, which vi cannot expand, so use a heredoc)

cat > /opt/kubernetes/cfg/token.csv <<EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF


# Manage kube-apiserver with systemd

vi  /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target


# Start kube-apiserver and enable it at boot
systemctl daemon-reload
systemctl start kube-apiserver 
systemctl enable kube-apiserver

# Verify kube-apiserver
systemctl status kube-apiserver
curl --insecure https://10.0.8.14:6443/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
    
  },
  "code": 403
}

Any response at all means the apiserver is up; the 403 above is expected because the request is anonymous.

3.4 Deploy kube-controller-manager



# Create the kube-controller-manager CSR file

vi  kube-controller-manager-csr.json
{
  "CN": "system:kube-controller-manager",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing", 
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}


# Generate the kube-controller-manager certificate: kube-controller-manager-key.pem and kube-controller-manager.pem

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

# Copy the kube-controller-manager certificates into /opt/kubernetes/ssl

cp kube-controller-manager*.pem /opt/kubernetes/ssl/

# Create the kube-controller-manager configuration file
vi  /opt/kubernetes/cfg/kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=172.16.0.0/12 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--cluster-signing-duration=87600h0m0s"

# Configuration explained
--kubeconfig: kubeconfig used to connect to the apiserver
--leader-elect: automatic leader election when multiple instances run (HA)
--cluster-signing-cert-file / --cluster-signing-key-file: CA used to automatically sign kubelet certificates; must match the apiserver's CA

# Set the cluster context and generate kube-controller-manager.kubeconfig

KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://10.0.8.14:6443"

kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials kube-controller-manager \
--client-certificate=./kube-controller-manager.pem \
--client-key=./kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-controller-manager \
--kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

# Manage kube-controller-manager with systemd
vi /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target


# Start and enable at boot
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

3.5 Deploy kube-scheduler



# Create the kube-scheduler CSR file

vi kube-scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}


# Generate the kube-scheduler certificate: kube-scheduler-key.pem and kube-scheduler.pem

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

# Copy the kube-scheduler certificates to /opt/kubernetes/ssl/

cp kube-scheduler*.pem /opt/kubernetes/ssl/

# Create the kube-scheduler configuration file

vi /opt/kubernetes/cfg/kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/opt/kubernetes/logs \
--v=2"


# Set the cluster context and generate kube-scheduler.kubeconfig

KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://10.0.8.14:6443"

kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
  
  
kubectl config set-credentials kube-scheduler \
--client-certificate=./kube-scheduler.pem \
--client-key=./kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-scheduler \
--kubeconfig=${KUBE_CONFIG}
  
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

# Manage kube-scheduler with systemd

vi /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

# Start and enable at boot

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

3.6 Set up the cluster admin role



# Create the CSR file for the cluster admin certificate
vi admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

# Generate the cluster administrator certificate: admin.pem and admin-key.pem

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

# Set the cluster context and generate /root/.kube/config

mkdir /root/.kube
KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://10.0.8.14:6443"

kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials cluster-admin \
--client-certificate=./admin.pem \
--client-key=./admin-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
--cluster=kubernetes \
--user=cluster-admin \
--kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

# Check the status of the cluster components with kubectl:
kubectl get cs

Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-2               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
scheduler            Healthy   ok

kubectl cluster-info
kubectl get all --all-namespaces

3.7 Enable the TLS Bootstrapping mechanism

TLS Bootstrapping: once the master's apiserver enables TLS authentication, the kubelet and kube-proxy on every node must use valid CA-signed certificates to communicate with kube-apiserver. When there are many nodes, issuing these client certificates by hand is a lot of work and complicates cluster scale-out. To simplify this, Kubernetes introduced the TLS bootstrapping mechanism to issue client certificates automatically: the kubelet connects to the apiserver as a low-privilege user and requests a certificate, which the apiserver signs dynamically. This approach is strongly recommended on nodes; it is currently used mainly for the kubelet, while kube-proxy still uses a certificate we issue centrally.



# Authorize the kubelet-bootstrap user to request certificates

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
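
Verify the binding exists:

kubectl get clusterrolebinding kubelet-bootstrap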

3.8 Deploy kubelet



# Create the configuration file
vi /opt/kubernetes/cfg/kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-master1 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 \
--image-pull-progress-deadline=30m \
--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause-amd64:3.3"


# Configuration explained

--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: empty path; generated automatically and used later to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameters file
--cert-dir: directory for generated kubelet certificates
--pod-infra-container-image: image of the infrastructure container that manages the Pod network

# Parameters file; cgroupDriver: systemd here must match the driver set in /etc/docker/daemon.json.

vi /opt/kubernetes/cfg/kubelet-config.yml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: systemd
clusterDNS:
- 172.16.0.10
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110

# Generate the bootstrap kubeconfig the kubelet uses to join the cluster for the first time

KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://10.0.8.14:6443"
TOKEN=$(awk -F "," '{print $1}' /opt/kubernetes/cfg/token.csv)

kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials kubelet-bootstrap \
--token=${TOKEN} \
--kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

# Manage kubelet with systemd

vi /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Start and enable at boot
systemctl daemon-reload
systemctl restart kubelet
systemctl enable kubelet

3.9 Approve the kubelet certificate request and join the cluster



# Enable kubectl bash auto-completion

yum install -y bash-completion 
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

# List kubelet certificate requests

kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A   6m3s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the request
kubectl certificate approve node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A

# List nodes
kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master1   NotReady   <none>   7s    v1.20.9

# Note: the node shows NotReady because the network plugin has not been deployed yet

3.10 Deploy kube-proxy



# Create the configuration file
vi /opt/kubernetes/cfg/kube-proxy.conf 
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"


# Parameters file

vi /opt/kubernetes/cfg/kube-proxy-config.yml 
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master1
clusterCIDR: 10.244.0.0/16
mode: "ipvs"


# Create the kube-proxy CSR file

vi kube-proxy-csr.json 
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
    }
  ]
}


# Generate the kube-proxy certificate: kube-proxy.pem and kube-proxy-key.pem

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# Generate the kubeconfig file:
KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://10.0.8.14:6443"

kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=${KUBE_CONFIG}
 
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

# Manage kube-proxy with systemd

vi  /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Start and enable at boot
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
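
Once kube-proxy is running in ipvs mode, the virtual servers it programs are visible with ipvsadm (installed in step 2.6):

ipvsadm -Ln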

IV. Deploy Calico Networking, Dashboard, CoreDNS, and metrics-server

4.1 Deploy the Calico network component



# Official YAML: https://docs.projectcalico.org/manifests/calico-etcd.yaml

# Fill in the etcd TLS material in calico.yaml: for each key below, paste the string printed by cat <pem file> | base64 -w 0
etcd-ca: output of cat ca.pem | base64 -w 0
etcd-cert: output of cat etcd.pem | base64 -w 0
etcd-key: output of cat etcd-key.pem | base64 -w 0

# Example:
etcd-ca: LS0tLS1CRUdJT....
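
A small helper to print all three values at once (assuming the certificates live in /opt/etcd/ssl as set up earlier):

for f in ca etcd etcd-key; do echo "${f}.pem:"; base64 -w 0 /opt/etcd/ssl/${f}.pem; echo; done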

# Also change the Pod network (CALICO_IPV4POOL_CIDR) to match the cluster-cidr set in the kube-controller-manager configuration earlier.

- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
  
# Apply the YAML:
kubectl apply -f calico.yaml
kubectl get pods -n kube-system

# Once all the Calico Pods are Running, the node becomes Ready:

kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    <none>   37m   v1.20.9

4.2 Authorize the apiserver to access kubelets



vi apiserver-to-kubelet-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes


kubectl apply -f apiserver-to-kubelet-rbac.yaml

4.3 Node deployment and scale-out



# On the master, copy the files the nodes need to node1 and node2

scp -r /opt/kubernetes root@k8s-node1:/opt/
scp -r /opt/kubernetes root@k8s-node2:/opt/

scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@k8s-node1:/usr/lib/systemd/system
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@k8s-node2:/usr/lib/systemd/system


# Per the TLS Bootstrapping mechanism from step 3.7, a node uses ca.pem and /opt/kubernetes/cfg/bootstrap.kubeconfig to request its kubelet certificate files and kubelet.kubeconfig from kube-apiserver, since bootstrap.kubeconfig already contains the cluster information:

cat /opt/kubernetes/cfg/bootstrap.kubeconfig

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CR...   # CA certificate: cat ca.pem | base64 -w 0
    server: https://10.0.8.14:6443   # kube-apiserver address
  name: kubernetes   # cluster name, set by kubectl config set-cluster kubernetes (the name is arbitrary)
contexts:
- context:
    cluster: kubernetes  
    user: kubelet-bootstrap  # set in step 3.7, where the kubelet-bootstrap user was authorized to request certificates
  name: default  
current-context: default   # set by kubectl config set-context default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: bb9cade9294a79dc5aa3014715b63ed4   # the token from token.csv, created in step 3.3


# Delete the node's kubelet certificate and kubeconfig files (this is also the procedure for rejoining a node to the cluster)
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig 
rm -f /opt/kubernetes/ssl/kubelet*

Note: these files are generated automatically after the certificate request is approved and differ per node, so they must be deleted

# Change the hostname
vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node1

# or do it with sed:
sed -i 's/k8s-master1/k8s-node1/g' /opt/kubernetes/cfg/kubelet.conf

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1

# Start and enable at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet


systemctl start kube-proxy
systemctl enable kube-proxy

# On the master, approve the new node's kubelet certificate request
# List certificate requests
kubectl get csr
NAME           AGE   SIGNERNAME                    REQUESTOR           CONDITION
node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro   89s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the request
kubectl certificate approve node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro

# Check node status
[root@k8s-master1 tls]# kubectl get node
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    <none>   2d17h   v1.20.9
k8s-node1     Ready    <none>   2d17h   v1.20.9
k8s-node2     Ready    <none>   2d17h   v1.20.9
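
The empty ROLES column is cosmetic; if you want it populated, label the nodes (optional):

kubectl label node k8s-node1 node-role.kubernetes.io/worker=
kubectl label node k8s-node2 node-role.kubernetes.io/worker=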

4.4 Deploy the Dashboard



# By default the Dashboard is reachable only from inside the cluster. Change the Service to type NodePort to expose it externally, and change imagePullPolicy: Always to imagePullPolicy: IfNotPresent

Official manifest: https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

[root@k8s-master1 yaml]# grep -n image recommended.yaml 
192:          image: kubernetesui/dashboard:v2.3.1
193:          imagePullPolicy: IfNotPresent
276:          image: kubernetesui/metrics-scraper:v1.0.6

---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001  # added line: pin the port to 30001
  type: NodePort       # added line: expose externally
  selector:
    k8s-app: kubernetes-dashboard

kubectl apply -f recommended.yaml
# Check the deployment
kubectl get pods,svc -n kubernetes-dashboard


# Create a service account and bind it to the built-in cluster-admin role:

kubectl create serviceaccount dashboard-admin -n kube-system

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

4.5 Deploy CoreDNS


# CoreDNS official source
https://github.com/coredns/deployment/blob/master/kubernetes/coredns.yaml.sed

wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed -O coredns.yaml

# Modify three places in coredns.yaml:
kubernetes cluster.local in-addr.arpa ip6.arpa
forward . /etc/resolv.conf
clusterIP: 172.16.0.10

# In the file below
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local  in-addr.arpa ip6.arpa {   # changed line; original: CLUSTER_DOMAIN REVERSE_CIDRS
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {    # changed line; original: UPSTREAMNAMESERVER
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }                                # changed line; original: STUBDOMAINS
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 172.16.0.10   # changed line; original: CLUSTER_DNS_IP
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
    

# Apply CoreDNS

kubectl apply -f coredns.yaml

# Verify CoreDNS

kubectl run busybox --image=busybox:1.28 --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx-svc.default.svc.cluster.local
Server:    172.16.0.10
Address 1: 172.16.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx-svc.default.svc.cluster.local
Address 1: 172.20.41.16 nginx-svc.default.svc.cluster.local
/ # 
/ # nslookup kubernetes.default.svc.cluster.local
Server:    172.16.0.10
Address 1: 172.16.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 172.16.0.1 kubernetes.default.svc.cluster.local
/ # 
/ # nslookup kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
Server:    172.16.0.10
Address 1: 172.16.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
Address 1: 172.24.32.53 kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
/ # 
/ # nslookup metrics-server.kube-system.svc.cluster.local
Server:    172.16.0.10
Address 1: 172.16.0.10 kube-dns.kube-system.svc.cluster.local

Name:      metrics-server.kube-system.svc.cluster.local
Address 1: 172.27.245.32 metrics-server.kube-system.svc.cluster.local

4.6 Install metrics-server



# metrics-server official release

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metrics-server.yaml

# Modify metrics-server.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp  
        - --secure-port=443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem   # added line
        - --requestheader-username-headers=X-Remote-User  # added line
        - --requestheader-group-headers=X-Remote-Group   # added line
        - --requestheader-extra-headers-prefix=X-Remote-Extra-  # added line
        image: k8s.gcr.io/metrics-server/metrics-server:v0.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl   # added line
          mountPath: /opt/kubernetes/ssl   # added line
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl    # added line
        hostPath:   # added line
          path: /opt/kubernetes/ssl  # added line


# Apply metrics-server
kubectl apply -f metrics-server.yaml

# Check that metrics-server works

[root@k8s-master1 yaml]# kubectl top node
NAME          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master1   168m         8%     2214Mi          60%       
k8s-node1     85m          8%     1010Mi          58%       
k8s-node2     104m         5%     1501Mi          41%

[root@k8s-master1 ~]# kubectl top pod -A
NAMESPACE              NAME                                         CPU(cores)   MEMORY(bytes)   
default                demo-pod                                     3m           83Mi            
default                my-nginx-84445b465c-gghxd                    0m           6Mi             
default                my-nginx-84445b465c-rf28q                    0m           6Mi             
kube-system            cilium-6v2z2                                 4m           251Mi           
kube-system            cilium-operator-57499ff9b5-l2cvz             1m           30Mi            
kube-system            cilium-operator-57499ff9b5-rfcwr             1m           33Mi            
kube-system            cilium-qrr9w                                 4m           249Mi           
kube-system            cilium-rpz9w                                 2m           245Mi           
kube-system            coredns-746fcb4bc5-d2j97                     1m           13Mi            
kube-system            metrics-server-65b5b7c699-qvgkr              3m           17Mi            
kubernetes-dashboard   dashboard-metrics-scraper-79c5968bdc-mc6b4   1m           6Mi             
kubernetes-dashboard   kubernetes-dashboard-7d5446c79-sjqfd         1m           16Mi

V. Install Helm and Ingress



# Install helm; official site: https://helm.sh/docs/intro/install/

wget https://get.helm.sh/helm-v3.6.3-linux-amd64.tar.gz

tar xf helm-v3.6.3-linux-amd64.tar.gz

mv linux-amd64/helm /usr/local/bin/helm
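
Confirm the binary works:

helm version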

# Install ingress with helm; official docs: https://kubernetes.github.io/ingress-nginx/deploy/#using-helm

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# List helm repositories
helm repo list
[root@k8s-master1 yaml]# helm repo list
NAME         	URL                                                   
ingress-nginx	https://kubernetes.github.io/ingress-nginx            
stable       	https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

# Install ingress-nginx directly (not the method used in this article)
helm install ingress-nginx ingress-nginx/ingress-nginx 

[root@k8s-master1 yaml]# helm search repo ingress-nginx
NAME                       	CHART VERSION	APP VERSION	DESCRIPTION                                       
ingress-nginx/ingress-nginx	3.35.0       	0.48.1     	Ingress controller for Kubernetes using NGINX a..

# Install ingress-nginx manually
helm pull ingress-nginx/ingress-nginx

tar xf ingress-nginx-3.35.0.tgz

cd ingress-nginx

 # Edit the configuration file: vim values.yaml

 # Changes:
 # Remove the image sha256 digest check
 19     #digest: sha256:e9fb216ace49dfa4a5983b183067e97496e7a8b307d2093f4278cd550c303899
 20     pullPolicy: IfNotPresent
 
 # Use the host network
 55   dnsPolicy: ClusterFirstWithHostNet
 64   hostNetwork: true
 
 # Run one instance per node; Deployment is the alternative
 165   kind: DaemonSet

 # Node selection; the default can stay, but the ingress: "true" label below is custom (see the note after this list)
 263   nodeSelector:
 264     kubernetes.io/os: linux
         ingress: "true"
 
 # Resource tuning; the defaults are fine
 320   resources:
 321   #  limits:
 322   #    cpu: 100m
 323   #    memory: 90Mi
 324     requests:
 325       cpu: 100m
 326       memory: 90Mi
 
 # Service exposure; the default is LoadBalancer
 436     type: ClusterIP
 
 # The default is fine; on older ingress versions, enabled: true may report certificate errors
 506   admissionWebhooks:
 507     annotations: {}
 508     enabled: true
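
Note: because the values above add the custom nodeSelector label ingress: "true", the controller Pods stay Pending until the target nodes carry that label:

kubectl label node k8s-node1 ingress=true
kubectl label node k8s-node2 ingress=true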

# Create the ingress-nginx namespace
kubectl create ns ingress-nginx

# Install ingress with helm; note: run this inside the ingress-nginx directory
helm  install ingress-nginx -n ingress-nginx .

# Remove ingress-nginx with helm
helm  uninstall ingress-nginx -n ingress-nginx

VI. Cluster Availability Verification

    – Pods must be able to resolve Services
    – Pods must be able to resolve Services in other namespaces
    – Every node must be able to reach the kubernetes Service on 443 and the kube-dns Service on 53
    – Pod-to-Pod communication must work:
    – within the same namespace
    – across namespaces
    – across machines

6.1 Cluster pod and service information

6.2 Verify cluster availability


# Verify:
# Pods must be able to resolve Services
# Pods must be able to resolve Services in other namespaces

kubectl run busybox --image=busybox:1.28 --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx-svc.default.svc.cluster.local
Server:    172.16.0.10
Address 1: 172.16.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx-svc.default.svc.cluster.local
Address 1: 172.20.41.16 nginx-svc.default.svc.cluster.local
/ # 
/ # nslookup kubernetes.default.svc.cluster.local
Server:    172.16.0.10
Address 1: 172.16.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 172.16.0.1 kubernetes.default.svc.cluster.local
/ # 
/ # nslookup kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
Server:    172.16.0.10
Address 1: 172.16.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
Address 1: 172.24.32.53 kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
/ # 
/ # nslookup metrics-server.kube-system.svc.cluster.local
Server:    172.16.0.10
Address 1: 172.16.0.10 kube-dns.kube-system.svc.cluster.local

Name:      metrics-server.kube-system.svc.cluster.local
Address 1: 172.27.245.32 metrics-server.kube-system.svc.cluster.local

# Verify that every node can reach the kubernetes Service on 443 and the kube-dns Service on 53

[root@k8s-node1 ~]# telnet 172.16.0.10 53
Trying 172.16.0.10...
Connected to 172.16.0.10.
Escape character is '^]'.
^CConnection closed by foreign host.
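
The same check for the kubernetes Service itself (its ClusterIP is 172.16.0.1, as the nslookup output above shows):

telnet 172.16.0.1 443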

# Verify:
# Pod-to-Pod communication must work:
# within the same namespace
# across namespaces
# across machines

[root@k8s-master1 ~]# kubectl run busybox --image=busybox:1.28 --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # 
/ # ping 10.244.0.196
PING 10.244.0.196 (10.244.0.196): 56 data bytes
64 bytes from 10.244.0.196: seq=0 ttl=63 time=0.441 ms
64 bytes from 10.244.0.196: seq=1 ttl=63 time=0.391 ms
64 bytes from 10.244.0.196: seq=2 ttl=63 time=0.338 ms
^C
--- 10.244.0.196 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.338/0.390/0.441 ms