K8s Installation - From Binary Files (High Availability)
Based on Kubernetes 1.29
kubeadm can set up a K8s cluster quickly, but tuning cluster parameters, hardening security, and running in high-availability mode call for a binary installation.
Master high-availability deployment architecture
Key points for Master HA:
- The Master's kube-apiserver, kube-controller-manager, and kube-scheduler services run as multiple instances across at least 3 nodes, and the node count should be odd
- The Master enables CA-based HTTPS security
- etcd is deployed as a cluster of at least 3 nodes
- The etcd cluster enables CA-based HTTPS security
- The Master enables RBAC authorization
Create the CA root certificate
To enable CA-based security for etcd and the K8s services, a CA certificate must be configured first.
- The certificates for both etcd and K8s are issued from this CA root certificate.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=192.168.18.3" -days 36500 -out ca.crt
- -subj: sets "/CN" to the name of the CA, in domain-name or IP-address form (192.168.18.3 here is only an example)
- -days: sets the certificate validity period, in days
Deploy a secure, highly available etcd cluster
Download the etcd binaries
wget https://github.com/etcd-io/etcd/releases/download/v3.4.35/etcd-v3.4.35-linux-amd64.tar.gz
Copy the etcd and etcdctl binaries from the archive into /usr/bin/.
Create a systemd unit file for etcd:
[Unit]
Description=etcd key-value store
Documentation=https://etcd.io/docs/
After=network.target
[Service]
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
Restart=always
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Create the CA certificates for etcd
- Create an x509 v3 configuration file etcd_ssl.conf, in which the subjectAltName field (alt_names) contains the IP addresses of all etcd hosts
[ req ]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
[ req_distinguished_name ]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[ alt_names ]
IP.1 = 192.168.18.3
IP.2 = 192.168.18.4
IP.3 = 192.168.18.5
- Create the etcd server certificate
openssl genrsa -out etcd_server.key 2048
openssl req -new -key etcd_server.key -config etcd_ssl.conf -subj "/CN=etcd-server" -out etcd_server.csr
openssl x509 -req -in etcd_server.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile etcd_ssl.conf -out etcd_server.crt
Create the etcd client certificate (etcd_client.key and etcd_client.crt) the same way, and save it to the /etc/etcd/pki directory as well:
openssl genrsa -out etcd_client.key 2048
openssl req -new -key etcd_client.key -config etcd_ssl.conf -subj "/CN=etcd-client" -out etcd_client.csr
openssl x509 -req -in etcd_client.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile etcd_ssl.conf -out etcd_client.crt
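To sanity-check that both certificates chain back to the root CA, openssl verify can be run against the paths used above:
openssl verify -CAfile /etc/kubernetes/pki/ca.crt etcd_server.crt etcd_client.crt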
Notes on etcd parameters
etcd can be configured through startup flags, environment variables, or a configuration file. Below, environment variables are written to /etc/etcd/etcd.conf for systemd to consume. The 3 etcd nodes are 192.168.18.3, 192.168.18.4, and 192.168.18.5.
# /etc/etcd/etcd.conf
# Node 1
# node name
ETCD_NAME=etcd1
# etcd data directory
ETCD_DATA_DIR=/etc/etcd/data
# path of the etcd server certificate
ETCD_CERT_FILE=/etc/etcd/pki/etcd_server.crt
ETCD_KEY_FILE=/etc/etcd/pki/etcd_server.key
# path of the CA root certificate
ETCD_TRUSTED_CA_FILE=/etc/etcd/pki/ca.crt
# whether to require client certificate authentication
ETCD_CLIENT_CERT_AUTH=true
# listen address for clients
ETCD_LISTEN_CLIENT_URLS=https://192.168.18.3:2379
ETCD_ADVERTISE_CLIENT_URLS=https://192.168.18.3:2379
# full path of the certificate used for mutual authentication between cluster members
ETCD_PEER_CERT_FILE=/etc/etcd/pki/etcd_server.crt
ETCD_PEER_KEY_FILE=/etc/etcd/pki/etcd_server.key
# path of the CA root certificate used between cluster members
ETCD_PEER_TRUSTED_CA_FILE=/etc/etcd/pki/ca.crt
# listen address for the other cluster members (peer traffic uses port 2380)
ETCD_LISTEN_PEER_URLS=https://192.168.18.3:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.18.3:2380
# cluster token
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
# all members of the cluster
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.18.3:2380,etcd2=https://192.168.18.4:2380,etcd3=https://192.168.18.5:2380"
# initial cluster state: "new" for a new cluster, "existing" for an existing one
ETCD_INITIAL_CLUSTER_STATE=new
# Node 2
ETCD_NAME=etcd2
ETCD_DATA_DIR=/etc/etcd/data
ETCD_CERT_FILE=/etc/etcd/pki/etcd_server.crt
ETCD_KEY_FILE=/etc/etcd/pki/etcd_server.key
ETCD_TRUSTED_CA_FILE=/etc/etcd/pki/ca.crt
ETCD_CLIENT_CERT_AUTH=true
ETCD_LISTEN_CLIENT_URLS=https://192.168.18.4:2379
ETCD_ADVERTISE_CLIENT_URLS=https://192.168.18.4:2379
ETCD_PEER_CERT_FILE=/etc/etcd/pki/etcd_server.crt
ETCD_PEER_KEY_FILE=/etc/etcd/pki/etcd_server.key
ETCD_PEER_TRUSTED_CA_FILE=/etc/etcd/pki/ca.crt
ETCD_LISTEN_PEER_URLS=https://192.168.18.4:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.18.4:2380
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.18.3:2380,etcd2=https://192.168.18.4:2380,etcd3=https://192.168.18.5:2380"
ETCD_INITIAL_CLUSTER_STATE=new
# Node 3
ETCD_NAME=etcd3
ETCD_DATA_DIR=/etc/etcd/data
ETCD_CERT_FILE=/etc/etcd/pki/etcd_server.crt
ETCD_KEY_FILE=/etc/etcd/pki/etcd_server.key
ETCD_TRUSTED_CA_FILE=/etc/etcd/pki/ca.crt
ETCD_CLIENT_CERT_AUTH=true
ETCD_LISTEN_CLIENT_URLS=https://192.168.18.5:2379
ETCD_ADVERTISE_CLIENT_URLS=https://192.168.18.5:2379
ETCD_PEER_CERT_FILE=/etc/etcd/pki/etcd_server.crt
ETCD_PEER_KEY_FILE=/etc/etcd/pki/etcd_server.key
ETCD_PEER_TRUSTED_CA_FILE=/etc/etcd/pki/ca.crt
ETCD_LISTEN_PEER_URLS=https://192.168.18.5:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.18.5:2380
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.18.3:2380,etcd2=https://192.168.18.4:2380,etcd3=https://192.168.18.5:2380"
ETCD_INITIAL_CLUSTER_STATE=new
Start the etcd cluster
systemctl restart etcd && systemctl enable etcd
Verify etcd cluster health
etcd ships with the etcdctl tool, which can verify that every member of the cluster is healthy:
etcdctl --cacert=/etc/etcd/pki/ca.crt --cert=/etc/etcd/pki/etcd_client.crt --key=/etc/etcd/pki/etcd_client.key --endpoints=https://192.168.18.3:2379,https://192.168.18.4:2379,https://192.168.18.5:2379 endpoint health
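For a richer view (leader, raft term, DB size), the same TLS flags work with the endpoint status subcommand; -w table selects table output:
etcdctl --cacert=/etc/etcd/pki/ca.crt --cert=/etc/etcd/pki/etcd_client.crt --key=/etc/etcd/pki/etcd_client.key --endpoints=https://192.168.18.3:2379,https://192.168.18.4:2379,https://192.168.18.5:2379 endpoint status -w table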
Deploy a secure, highly available K8s Master cluster
Download the K8s server binaries
wget https://dl.k8s.io/v1.29.3/kubernetes-server-linux-amd64.tar.gz
The main server-side binaries:
Filename | Description |
---|---|
kube-apiserver | kube-apiserver main binary |
kube-apiserver.docker_tag | tag of the kube-apiserver docker image |
kube-apiserver.tar | kube-apiserver docker image |
kube-controller-manager | kube-controller-manager main binary |
kube-controller-manager.docker_tag | tag of the kube-controller-manager docker image |
kube-controller-manager.tar | kube-controller-manager docker image |
kube-scheduler | kube-scheduler main binary |
kube-scheduler.docker_tag | tag of the kube-scheduler docker image |
kube-scheduler.tar | kube-scheduler docker image |
kubelet | kubelet main binary |
kube-proxy | kube-proxy main binary |
kube-proxy.docker_tag | tag of the kube-proxy docker image |
kube-proxy.tar | kube-proxy docker image |
kubectl | command-line client |
kubeadm | command-line tool for installing K8s clusters |
apiextensions-apiserver | extension API server implementing custom resource objects |
kube-aggregator | API aggregation server |
Copy the K8s executables into /usr/bin, then create a systemd unit file for each service under /usr/lib/systemd/system, as shown below.
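For example, the release tarball unpacks to the standard kubernetes/server/bin layout:
tar -xzf kubernetes-server-linux-amd64.tar.gz
cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,kubectl} /usr/bin/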
Deploy the kube-apiserver service
Prepare the CA-related certificates needed by the kube-apiserver service
Prepare an x509 v3 configuration file master_ssl.conf:
[ req ]
distinguished_name = req_distinguished_name
req_extensions = v3_req
[ req_distinguished_name ]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s-1
DNS.6 = k8s-2
DNS.7 = k8s-3
IP.1 = 192.168.1.1
IP.2 = 192.168.1.2
IP.3 = 192.168.1.3
IP.4 = 192.168.1.4
IP.5 = 192.168.1.5
- Use the openssl commands below to create the kube-apiserver server certificate and key (apiserver.key and apiserver.crt) and save them to /etc/kubernetes/pki
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -config master_ssl.conf -subj "/CN=192.168.1.1" -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile master_ssl.conf -out apiserver.crt
- Create the systemd unit file /usr/lib/systemd/system/kube-apiserver.service for the kube-apiserver service. Its EnvironmentFile parameter points to /etc/kubernetes/apiserver as the environment file, in which the environment variable KUBE_API_ARGS holds the kube-apiserver startup parameters
[Unit]
Description=Kubernetes API Server
Documentation=https://kubernetes.io/zh-cn/docs/home/
[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
[Install]
WantedBy=multi-user.target
- In the environment file /etc/kubernetes/apiserver, set the variable KUBE_API_ARGS to the full list of kube-apiserver startup parameters:
- --secure-port: HTTPS port, default 6443
- --tls-cert-file: full path of the server certificate
- --tls-private-key-file: path of the server private key
- --client-ca-file: path of the CA root certificate
- --apiserver-count: number of API Server instances; also requires --endpoint-reconciler-type=master-count
- --etcd-servers: list of etcd endpoint URLs
- --etcd-cafile: path of the CA root certificate used for etcd
- --etcd-certfile: path of the etcd client certificate
- --etcd-keyfile: path of the etcd client private key
- --service-cluster-ip-range: Service virtual IP range, in CIDR notation
- --service-node-port-range: NodePort range available to Services
- --allow-privileged: whether containers may run in privileged mode, default true
KUBE_API_ARGS="--secure-port=6443 \
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key \
--client-ca-file=/etc/kubernetes/pki/ca.crt \
--apiserver-count=3 --endpoint-reconciler-type=master-count \
--etcd-servers=https://192.168.1.3:2379,https://192.168.1.4:2379,https://192.168.1.5:2379 \
--etcd-cafile=/etc/kubernetes/pki/ca.crt \
--etcd-certfile=/etc/etcd/pki/etcd_client.crt \
--etcd-keyfile=/etc/etcd/pki/etcd_client.key \
--service-cluster-ip-range=10.96.0.0/12 \
--service-node-port-range=30000-32767 \
--allow-privileged=true"
Once the configuration files are ready, start the kube-apiserver service on all 3 hosts and enable it at boot:
systemctl start kube-apiserver && systemctl enable kube-apiserver
Create the client certificate
The kube-controller-manager, kube-scheduler, kubelet, and kube-proxy services connect to kube-apiserver as clients, so a client certificate must be created for them to access kube-apiserver correctly.
Create the client certificate and private key with openssl:
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=admin" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 36500 -out client.crt
- Save the generated client.key and client.crt files to /etc/kubernetes/pki
Create the kubeconfig file clients need to connect to kube-apiserver
Create one shared kubeconfig for kube-controller-manager, kube-scheduler, kubelet, and kube-proxy, and save it to /etc/kubernetes; a sketch follows.
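A minimal kubeconfig sketch. The server address is an assumption: it points at the HAProxy/keepalived VIP and frontend port (192.168.1.100:9443) configured later in this document; substitute your own load-balancer endpoint.
apiVersion: v1
kind: Config
clusters:
- name: default
  cluster:
    # CA root certificate used to verify the API server
    certificate-authority: /etc/kubernetes/pki/ca.crt
    # VIP and HAProxy frontend port (assumption)
    server: https://192.168.1.100:9443
users:
- name: admin
  user:
    client-certificate: /etc/kubernetes/pki/client.crt
    client-key: /etc/kubernetes/pki/client.key
contexts:
- name: default
  context:
    cluster: default
    user: admin
current-context: default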
Deploy the kube-controller-manager service
Create the systemd unit file /usr/lib/systemd/system/kube-controller-manager.service for the kube-controller-manager service. Its EnvironmentFile parameter points to /etc/kubernetes/controller-manager as the environment file, in which the variable KUBE_CONTROLLER_MANAGER_ARGS holds the kube-controller-manager startup parameters.
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://kubernetes.io/zh-cn/docs/home/
[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=always
[Install]
WantedBy=multi-user.target
In the environment file /etc/kubernetes/controller-manager, set KUBE_CONTROLLER_MANAGER_ARGS to the full list of startup parameters:
- --kubeconfig: kubeconfig used to connect to the API Server
- --leader-elect: enable leader election; set to true when running 3 instances
- --service-cluster-ip-range: Service virtual IP range, in CIDR notation
- --service-account-private-key-file: path of the private key used to sign ServiceAccount tokens
- --root-ca-file: path of the CA root certificate
KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--leader-elect=true \
--service-cluster-ip-range=10.96.0.0/12 \
--service-account-private-key-file=/etc/kubernetes/pki/apiserver.key \
--root-ca-file=/etc/kubernetes/pki/ca.crt"
Once the configuration files are ready, start kube-controller-manager on all 3 hosts and enable it at boot:
systemctl start kube-controller-manager && systemctl enable kube-controller-manager
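With leader election enabled, only one instance is active at a time; the current holder of the election lock can be inspected through the Lease object that kube-controller-manager maintains in the kube-system namespace:
kubectl --kubeconfig=/etc/kubernetes/kubeconfig -n kube-system get lease kube-controller-manager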
Deploy the kube-scheduler service
Create the systemd unit file /usr/lib/systemd/system/kube-scheduler.service for kube-scheduler. Its EnvironmentFile points to /etc/kubernetes/scheduler, and the startup parameters are set via KUBE_SCHEDULER_ARGS.
# kube-scheduler.service
[Unit]
Description=Kubernetes Kube Scheduler
Documentation=https://kubernetes.io/zh-cn/docs/home/
[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=always
[Install]
WantedBy=multi-user.target
In the environment file /etc/kubernetes/scheduler, set KUBE_SCHEDULER_ARGS:
- --kubeconfig: kubeconfig used to connect to the API Server
- --leader-elect=true: enable leader election
KUBE_SCHEDULER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--leader-elect=true"
Start the kube-scheduler service on all 3 nodes and enable it at boot:
systemctl start kube-scheduler && systemctl enable kube-scheduler
Verify the service status
systemctl status kube-scheduler
Deploy a highly available load balancer with HAProxy and keepalived
Deploy HAProxy and keepalived in front of the 3 kube-apiserver instances, exposing a virtual IP address 192.168.x.x as the single Master entry point for clients.
- Deploy at least 2 instances each of HAProxy and keepalived to avoid a single point of failure
Deploy HAProxy
Prepare the HAProxy configuration file haproxy.cfg, as sketched below.
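A sketch of haproxy.cfg under the following assumptions: the three kube-apiserver instances listen on 192.168.1.1, 192.168.1.2, and 192.168.1.3 at port 6443, the frontend listens on port 9443 (matching the keepalived health-check script below), and the stats page is served on port 8888:
# global configuration
global
    log 127.0.0.1 local2
    maxconn 4096
# defaults for the TCP proxy sections
defaults
    mode tcp
    timeout connect 10s
    timeout client 30s
    timeout server 30s
# management/stats page on port 8888
listen stats
    mode http
    bind *:8888
    stats enable
    stats uri /status
# frontend receiving kube-apiserver traffic on port 9443
frontend kube-apiserver
    mode tcp
    bind *:9443
    default_backend kube-apiserver-backend
# round-robin across the three kube-apiserver instances
backend kube-apiserver-backend
    mode tcp
    balance roundrobin
    server k8s-master1 192.168.1.1:6443 check
    server k8s-master2 192.168.1.2:6443 check
    server k8s-master3 192.168.1.3:6443 check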
Run HAProxy as a Docker container; the management page is then reachable at http://<host-ip>:8888/status. A sketch of the docker run command follows.
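This sketch assumes the haproxytech/haproxy-debian image, which reads its configuration from /usr/local/etc/haproxy/haproxy.cfg:
docker run -d --name k8s-haproxy \
  --net=host \
  --restart=always \
  -v ${PWD}/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
  haproxytech/haproxy-debian:2.3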
Deploy keepalived
keepalived maintains high availability of the virtual IP address and must be deployed alongside each HAProxy instance.
When an HAProxy instance becomes unavailable, the virtual IP is switched over to another host.
keepalived.conf for the first (MASTER) host:
! Configuration File for Keepalived
global_defs {
router_id LVS_1
}
# HAProxy health-check script definition (same as on the second host)
vrrp_script check_haproxy {
script "/usr/bin/check_haproxy.sh"
interval 2
weight -30
}
# VRRP instance (virtual router group) name
vrrp_instance VI_1 {
# the other hosts are set to BACKUP
state MASTER
# name of the network interface carrying the VIP
interface ens33
virtual_router_id 51
# priority
priority 100
advert_int 1
# authentication for VRRP communication
authentication {
auth_type PASS
auth_pass password
}
# virtual IP address
virtual_ipaddress {
192.168.1.100/24 dev ens33
}
# HAProxy health-check script
track_script {
check_haproxy
}
}
check_haproxy.sh must be saved under the /usr/bin directory
- The script returns 0 when the check succeeds and non-zero otherwise
#!/bin/bash
count=`netstat -apn | grep 9443 | wc -l`
if [ $count -gt 0 ]; then
exit 0
else
exit 1
fi
keepalived.conf on the second host:
! Configuration File for Keepalived
global_defs {
router_id LVS_2
}
vrrp_script check_haproxy {
script "/usr/bin/check_haproxy.sh"
interval 2
weight -30
}
# VRRP instance name; must match the MASTER's settings
vrrp_instance VI_1 {
# this host acts as the BACKUP
state BACKUP
# name of the network interface carrying the VIP
interface ens33
virtual_router_id 51
# priority
priority 100
advert_int 1
# authentication for VRRP communication
authentication {
auth_type PASS
auth_pass password
}
# virtual IP address
virtual_ipaddress {
192.168.1.100/24 dev ens33
}
# HAProxy health-check script
track_script {
check_haproxy
}
}
Start keepalived with Docker
docker run -d --name k8s-keepalived \
--restart=always \
--net=host \
--cap-add=NET_ADMIN --cap-add=NET_BROADCAST --cap-add=NET_RAW \
-v ${PWD}/keepalived.conf:/container/service/keepalived/assets/keepalived.conf \
-v ${PWD}/check_haproxy.sh:/usr/bin/check_haproxy.sh \
osixia/keepalived:2.0.20 --copy-service
The new virtual IP address on the ens33 interface can then be seen with ip addr.
curl can verify that kube-apiserver is reachable through HAProxy, as sketched below.
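A quick check against the VIP, assuming the 192.168.1.100:9443 frontend configured above; -k skips server-certificate verification:
curl -v -k https://192.168.1.100:9443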
Deploy the services on each Node
Each Node needs a container runtime, kubelet, kube-proxy, and other system components.
Container runtime
Options include containerd and cri-o.
…
Deploy kubelet
- Create the systemd unit file /usr/lib/systemd/system/kubelet.service for the kubelet service; a sketch follows.
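A sketch of kubelet.service, following the same pattern as the other unit files in this document; the After= dependency assumes containerd is the container runtime:
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://kubernetes.io/zh-cn/docs/home/
After=containerd.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=always
[Install]
WantedBy=multi-user.target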
The configuration file /etc/kubernetes/kubelet sets the kubelet startup parameters via the environment variable KUBELET_ARGS:
- --kubeconfig: access to kube-apiserver; the relevant Master certificates, such as ca.crt, client.crt, and client.key, must be placed under /etc/kubernetes/pki
- --config: kubelet configuration file
- --hostname-override: this Node's name in the cluster, defaulting to the hostname
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--config=/etc/kubernetes/kubelet.config \
--hostname-override=192.168.18.3"
An example kubelet.config:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# basic settings
address: 0.0.0.0 # IP address the kubelet listens on
port: 10250 # port of the kubelet API service
# cgroup settings
cgroupDriver: "systemd" # cgroup driver (systemd or cgroupfs)
# cluster DNS service IP (must fall within the --service-cluster-ip-range set above)
clusterDNS: ["10.96.0.10"]
# DNS domain suffix for cluster services
clusterDomain: cluster.local
# whether to allow anonymous access
authentication:
  anonymous:
    enabled: true
Once the configuration files are ready, start kubelet on the node:
systemctl start kubelet && systemctl enable kubelet
Deploy the kube-proxy service
Create the systemd unit file /usr/lib/systemd/system/kube-proxy.service for the kube-proxy service:
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://kubernetes.io/zh-cn/docs/home/
After=network.target
[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=always
[Install]
WantedBy=multi-user.target
The configuration file /etc/kubernetes/proxy sets the kube-proxy startup parameters via the environment variable KUBE_PROXY_ARGS:
- --kubeconfig: client identity used to connect to the API Server
- --hostname-override: this Node's name in the cluster
- --proxy-mode: proxy mode; supports iptables, ipvs, and kernelspace (Windows Nodes)
KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--hostname-override=192.168.18.3 \
--proxy-mode=iptables"
With the environment file ready, start the kube-proxy service on the Node:
systemctl start kube-proxy && systemctl enable kube-proxy
On the Master, verify each Node with kubectl:
kubectl --kubeconfig=/etc/kubernetes/kubeconfig get nodes
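Until a CNI plugin is installed, the output will look roughly like this (illustrative values):
NAME           STATUS     ROLES    AGE   VERSION
192.168.18.3   NotReady   <none>   2m    v1.29.3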
Nodes show NotReady
- The CNI network plugin still needs to be deployed