Use Them While They Last: China's Docker Registry Mirrors Go Offline

Pulling images from Docker Hub inside mainland China can be slow or unreliable.

Recently, several organizations that ran public registry mirrors announced their services were taken down due to regulatory requirements.

Mirrors that recently became unavailable

Shanghai Jiao Tong University
https://docker.mirrors.sjtug.sjtu.edu.cn
Nanjing University
https://docker.nju.edu.cn/

For up-to-date status, see the GitHub Actions results in the docker-practice/docker-registry-cn-mirror-test repository.

Other Docker mirror domains that have already stopped working

University of Science and Technology of China (now IP-restricted, unusable)
https://docker.mirrors.ustc.edu.cn
Baidu Cloud Mirror: mirror.baidubce.com
NetEase: http://hub-mirror.c.163.com
Tencent Cloud: mirror.ccs.tencentyun.com
Azure China: dockerhub.azk8s.cn
Qiniu Cloud: reg-mirror.qiniu.com
Northwest A&F University (reachable only on campus)
https://dockerhub.mirrors.nwafu.edu.cn/

The following workarounds are still worth trying

Alibaba Cloud personal image accelerator (ACR)

https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors

Third-party: public-image-mirror

https://github.com/DaoCloud/public-image-mirror

Source registry             Replace with
cr.l5d.io                   l5d.m.daocloud.io
docker.elastic.co           elastic.m.daocloud.io
docker.io                   docker.m.daocloud.io
gcr.io                      gcr.m.daocloud.io
ghcr.io                     ghcr.m.daocloud.io
k8s.gcr.io                  k8s-gcr.m.daocloud.io
registry.k8s.io             k8s.m.daocloud.io
mcr.microsoft.com           mcr.m.daocloud.io
nvcr.io                     nvcr.m.daocloud.io
quay.io                     quay.m.daocloud.io
registry.jujucharms.com     jujucharms.m.daocloud.io
rocks.canonical.com         rocks-canonical.m.daocloud.io
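These mappings are a pure prefix substitution, so a small helper can do the rewriting. The function below is our own illustration (the mirror hostnames come from the list above; whether they currently work needs testing):

```shell
#!/bin/sh
# Rewrite an image reference to its DaoCloud mirror equivalent.
# Only a few registries from the list above are handled; the function
# name and structure are our own, not part of any official tooling.
mirror_image() {
  case "$1" in
    docker.io/*)        echo "docker.m.daocloud.io/${1#docker.io/}" ;;
    gcr.io/*)           echo "gcr.m.daocloud.io/${1#gcr.io/}" ;;
    ghcr.io/*)          echo "ghcr.m.daocloud.io/${1#ghcr.io/}" ;;
    registry.k8s.io/*)  echo "k8s.m.daocloud.io/${1#registry.k8s.io/}" ;;
    quay.io/*)          echo "quay.m.daocloud.io/${1#quay.io/}" ;;
    *)                  echo "$1" ;;  # unknown registry: leave unchanged
  esac
}

mirror_image registry.k8s.io/pause:3.9   # -> k8s.m.daocloud.io/pause:3.9
```

The rewritten reference can then be fed straight to `docker pull` or `crictl pull`.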

Third-party: docker-registry-mirrors

https://github.com/kubesre/docker-registry-mirrors
https://dockerproxy.xyz/
Source registry             Replace with
cr.l5d.io                   l5d.kubesre.xyz
docker.elastic.co           elastic.kubesre.xyz
docker.io                   docker.kubesre.xyz
gcr.io                      gcr.kubesre.xyz
ghcr.io                     ghcr.kubesre.xyz
k8s.gcr.io                  k8s-gcr.kubesre.xyz
registry.k8s.io             k8s.kubesre.xyz
mcr.microsoft.com           mcr.kubesre.xyz
nvcr.io                     nvcr.kubesre.xyz
quay.io                     quay.kubesre.xyz
registry.jujucharms.com     jujucharms.kubesre.xyz

Third-party: cloudflare-docker-proxy

https://github.com/ciiiii/cloudflare-docker-proxy
Deployment reference:
https://developer.aliyun.com/article/1436840

Deploy to Cloudflare Workers

1. Fork the project.
2. Point the deploy button's link at your fork, i.e. visit https://deploy.workers.cloudflare.com/?url=https://github.com/ciiiii/cloudflare-docker-proxy with your fork's URL substituted.
3. Click the button and you will be redirected to the deploy page.



Installing Kubernetes 1.29.2 on Rocky Linux 9 from binaries

Check the environment

OS version

cat /etc/redhat-release
Rocky Linux release 9.3 (Blue Onyx)

Kernel version

uname -a
Linux localhost.localdomain 5.14.0-362.8.1.el9_3.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Nov 8 17:36:32 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

SSH and OpenSSL versions

ssh -V
OpenSSH_8.7p1, OpenSSL 3.0.7 1 Nov 2022
No upgrade needed.

Python version

python -V
Python 3.9.18

glibc version

ldd --version
ldd (GNU libc) 2.34

Set the hostname, on each respective host

Using hostnamectl
hostnamectl set-hostname dev-k8s-master01.local
hostnamectl set-hostname dev-k8s-node01.local
hostnamectl set-hostname dev-k8s-node02.local

Using nmcli
nmcli general hostname dev-k8s-master01.local

View host information

hostnamectl status
Static hostname: dev-k8s-master01.local
       Icon name: computer-vm
         Chassis: vm 
       Machine ID: 45d4ec6ccf3646248a8b9cc382baf29d
         Boot ID: e3167b9bd8864d3a9de968f7459f73d2
  Virtualization: oracle
Operating System: Rocky Linux 9.3 (Blue Onyx)      
     CPE OS Name: cpe:/o:rocky:rocky:9::baseos
          Kernel: Linux 5.14.0-362.8.1.el9_3.x86_64
    Architecture: x86-64
 Hardware Vendor: innotek GmbH
  Hardware Model: VirtualBox
Firmware Version: VirtualBox

Enable legacy crypto compatibility on Rocky 9

So that older SSH clients can still connect:
update-crypto-policies --show
update-crypto-policies --set LEGACY

Disable swap

By default the kubelet fails to start when it detects swap on the node. kubelet has supported swap since v1.22; since v1.28, swap is supported only with cgroup v2. The kubelet's NodeSwap feature gate is Beta but disabled by default.

# Disable for the current boot
swapoff -a
# Disable permanently (also comments out the swap entries in fstab)
swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab

Change the IP address

Check the current IP

ip a
nmcli device show
nmcli con show

Edit the connection profile

vi /etc/NetworkManager/system-connections/enp0s3.nmconnection

[ipv4]
method=manual
address1=192.168.244.14/24,192.168.244.1
dns=223.5.5.5;1.1.1.1

Reload to apply

nmcli connection reload
nmcli connection down enp0s3 && nmcli connection up enp0s3

Or change it with nmcli commands

nmcli con mod enp0s3 ipv4.addresses 192.168.244.14/24; nmcli con mod enp0s3 ipv4.gateway  192.168.244.1; nmcli con mod enp0s3 ipv4.method manual; nmcli con mod enp0s3 ipv4.dns "8.8.8.8"; nmcli con up enp0s3

Regenerate machine-id

Run on every master and node; cloned machines otherwise share the same machine-id.
rm -f /etc/machine-id && systemd-machine-id-setup

Machine product_uuid

Make sure the MAC address and product_uuid are unique on every node.
Get the network interfaces' MAC addresses with ip link or ifconfig -a.
Check the product_uuid with sudo cat /sys/class/dmi/id/product_uuid.
To generate a UUID, run uuidgen, or cat /proc/sys/kernel/random/uuid.

uuidgen
9729e211-c76b-42fb-8635-75128ec44be6

cat /sys/class/dmi/id/product_uuid
5a8da902-c112-4c97-881b-63c917cce8b7

Regenerate the NIC UUID

# Cloned VMs end up with duplicate connection UUIDs
# A duplicate UUID must be regenerated
# (a duplicate UUID also prevents the interface from getting an IPv6 address)
#
# List the current connections and their UUIDs:
# nmcli con show
# Delete the connection whose UUID must change:
# nmcli con delete uuid <old UUID>
# Re-create it (a fresh UUID is generated automatically):
# nmcli con add type ethernet ifname <interface> con-name <new name>
# Bring the connection back up:
# nmcli con up <new name>

Install the Chinese language pack

localectl list-locales |grep zh
dnf list |grep glibc-langpack
dnf install glibc-langpack-zh

Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

Use a 24-hour clock

localectl set-locale LC_TIME=en_GB.UTF-8

Disable SELinux

setenforce 0 && sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

Enable paste mode in vim

echo 'set paste' >> ~/.vimrc

Other initialization steps are omitted.

Set each host's hostname

hostnamectl set-hostname dev-k8s-master01.local
hostnamectl set-hostname dev-k8s-node01.local
hostnamectl set-hostname dev-k8s-node02.local

Map hostnames to IPs

cat >> /etc/hosts << EOF
192.168.244.14 dev-k8s-master01 dev-k8s-master01.local
192.168.244.15 dev-k8s-node01 dev-k8s-node01.local
192.168.244.16 dev-k8s-node02 dev-k8s-node02.local
EOF

Passwordless SSH, from master01

Generate a host authentication key

ssh-keygen -t ed25519 -C 'supported since OpenSSH 6.5'

Distribute it to the target hosts

ssh-copy-id -i ~/.ssh/id_ed25519 [email protected]
ssh-copy-id -i ~/.ssh/id_ed25519 [email protected]

Cluster network plan

Host network: 192.168.244.0/24; service CIDR: 10.96.0.0/12; pod CIDR: 172.16.0.0/12

Upgrade the kernel

uname -a   # check the current kernel
Linux dev-k8s-master01.local 5.14.0-362.8.1.el9_3.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Nov 8 17:36:32 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

Add the ELRepo RPM repository

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# the EL9 repository
wget https://www.elrepo.org/elrepo-release-9.el9.elrepo.noarch.rpm
rpm -Uvh elrepo-release-9.el9.elrepo.noarch.rpm

List the kernels currently available from elrepo-kernel:
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

kernel-lt.x86_64  6.1.82-1.el9.elrepo  elrepo-kernel  
kernel-ml.x86_64  6.8.1-1.el9.elrepo  elrepo-kernel  

Install the long-term-support kernel

yum --enablerepo=elrepo-kernel install kernel-lt-devel kernel-lt -y

Don't install kernel-lt-headers yet: the two header versions conflict.
Remove kernel-headers-5.14 first, then install the new headers:
yum -y remove kernel-headers-5.14.0-284.11.1.el9_2.x86_64
yum --enablerepo="elrepo-kernel" install -y kernel-lt-headers

Confirm the installed kernel version

rpm -qa | grep kernel
kernel-lt-6.1.82-1.el9.elrepo.x86_64

Boot from the new kernel by default

grub2-editenv list
grub2-set-default 0
grub2-editenv list

cp /boot/grub2/grub.cfg /boot/grub2/grub.bak.cfg

grub2-mkconfig -o /boot/grub2/grub.cfg

Reboot and verify the kernel version

reboot
uname -a
Linux dev-k8s-master01.local 6.1.82-1.el9.elrepo.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Mar 15 18:18:05 EDT 2024 x86_64 x86_64 x86_64 GNU/Linux

Inspect kernel modules

lsmod
modinfo nf_conntrack

Install ipset and ipvsadm

Run on all nodes:
yum install ipset ipvsadm sysstat conntrack conntrack-tools libseccomp -y

Load the required kernel modules

cat >> /etc/modules-load.d/ipvs.conf <<EOF 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

Reload

systemctl restart systemd-modules-load.service

Verify the modules are loaded

lsmod |grep ipip
lsmod |grep -e ip_vs -e nf_conntrack

ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 192512  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          188416  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs

Install common tools

yum -y install wget vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 tar curl lrzsz rsync psmisc sysstat lsof

Download the required software

cd /root/
mkdir -p k8s/shell && cd k8s/shell
vi down.sh

#!/bin/bash

# Release pages to check version numbers:
# 
# https://github.com/containernetworking/plugins/releases/
# https://github.com/containerd/containerd/releases/
# https://github.com/kubernetes-sigs/cri-tools/releases/
# https://github.com/Mirantis/cri-dockerd/releases/
# https://github.com/etcd-io/etcd/releases/
# https://github.com/cloudflare/cfssl/releases/
# https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG
# https://download.docker.com/linux/static/stable/x86_64/
# https://github.com/opencontainers/runc/releases/
# https://mirrors.tuna.tsinghua.edu.cn/elrepo/kernel/el7/x86_64/RPMS/
# https://github.com/helm/helm/tags
# http://nginx.org/download/

# Version numbers
cni_plugins_version='v1.4.0'
cri_containerd_cni_version='1.7.13'
crictl_version='v1.29.0'
cri_dockerd_version='0.3.10'
etcd_version='v3.5.12'
cfssl_version='1.6.4'
kubernetes_server_version='1.29.2'
docker_version='25.0.3'
runc_version='1.1.12'
kernel_version='5.4.268'
helm_version='3.14.1'
nginx_version='1.25.4'

# URLs 
base_url='https://github.com'
kernel_url="http://mirrors.tuna.tsinghua.edu.cn/elrepo/kernel/el7/x86_64/RPMS/kernel-lt-${kernel_version}-1.el7.elrepo.x86_64.rpm"
runc_url="${base_url}/opencontainers/runc/releases/download/v${runc_version}/runc.amd64"
docker_url="https://mirrors.ustc.edu.cn/docker-ce/linux/static/stable/x86_64/docker-${docker_version}.tgz"
cni_plugins_url="${base_url}/containernetworking/plugins/releases/download/${cni_plugins_version}/cni-plugins-linux-amd64-${cni_plugins_version}.tgz"
cri_containerd_cni_url="${base_url}/containerd/containerd/releases/download/v${cri_containerd_cni_version}/cri-containerd-cni-${cri_containerd_cni_version}-linux-amd64.tar.gz"
crictl_url="${base_url}/kubernetes-sigs/cri-tools/releases/download/${crictl_version}/crictl-${crictl_version}-linux-amd64.tar.gz"
cri_dockerd_url="${base_url}/Mirantis/cri-dockerd/releases/download/v${cri_dockerd_version}/cri-dockerd-${cri_dockerd_version}.amd64.tgz"
etcd_url="${base_url}/etcd-io/etcd/releases/download/${etcd_version}/etcd-${etcd_version}-linux-amd64.tar.gz"
cfssl_url="${base_url}/cloudflare/cfssl/releases/download/v${cfssl_version}/cfssl_${cfssl_version}_linux_amd64"
cfssljson_url="${base_url}/cloudflare/cfssl/releases/download/v${cfssl_version}/cfssljson_${cfssl_version}_linux_amd64"
helm_url="https://mirrors.huaweicloud.com/helm/v${helm_version}/helm-v${helm_version}-linux-amd64.tar.gz"
kubernetes_server_url="https://storage.googleapis.com/kubernetes-release/release/v${kubernetes_server_version}/kubernetes-server-linux-amd64.tar.gz"
nginx_url="http://nginx.org/download/nginx-${nginx_version}.tar.gz"

# Download packages
packages=(
  $kernel_url
  $runc_url
  $docker_url
  $cni_plugins_url
  $cri_containerd_cni_url
  $crictl_url
  $cri_dockerd_url
  $etcd_url
  $cfssl_url
  $cfssljson_url
  $helm_url
  $kubernetes_server_url
  $nginx_url
)

for package_url in "${packages[@]}"; do
  filename=$(basename "$package_url")
  # -f makes curl fail on HTTP errors so the success check below is meaningful
  if curl -fkL -C - -o "$filename" "$package_url"; then
    echo "Downloaded $filename"
  else
    echo "Failed to download $filename"
    exit 1
  fi
done
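down.sh checks only that curl exited successfully, not that the downloaded bytes are intact. A hedged sketch of a follow-up integrity check (the checksums.txt name and the stand-in file are our own; real digests would come from each project's release page):

```shell
#!/bin/sh
# Verify downloaded artifacts against a checksum list before installing.
# Format assumption: each line is "<sha256>  <filename>", exactly what
# `sha256sum` itself emits.
set -e
printf 'hello\n' > runc.amd64            # stand-in artifact for the demo
sha256sum runc.amd64 > checksums.txt     # record its digest
sha256sum -c checksums.txt               # exits non-zero on any mismatch
echo "all downloads verified"
```

In practice you would populate checksums.txt from the upstream release pages and run `sha256sum -c` once after the download loop finishes.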

Run the download on master01

chmod 755 down.sh
./down.sh

Copy the files to the other nodes

scp * [email protected]:k8s/  
scp * [email protected]:k8s/  

NetworkManager configuration for Calico

cat > /etc/NetworkManager/conf.d/calico.conf <<EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico;interface-name:wireguard.cali
EOF

systemctl restart NetworkManager

# Parameter notes
#
# unmanaged-devices lists the devices NetworkManager must leave alone:
#
# interface-name:cali*
# interfaces whose names start with "cali" (cali0, cali1, ...) are excluded from NetworkManager's management.
#
# interface-name:tunl*
# interfaces whose names start with "tunl" (tunl0, tunl1, ...) are likewise excluded.
#
# Excluding these interfaces lets other tools or processes (here, Calico) manage and configure them independently.

Install containerd as the runtime

Create the directories the CNI plugins need
mkdir -p /etc/cni/net.d /opt/cni/bin

Unpack the CNI plugin binaries
tar xf cni-plugins-linux-amd64-v*.tgz -C /opt/cni/bin/

Unpack cri-containerd-cni
tar -xzf cri-containerd-cni-*-linux-amd64.tar.gz -C /

Create the systemd service file

cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

Configure the kernel modules containerd needs

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

systemctl restart systemd-modules-load.service

Confirm the kernel parameters containerd needs are all set to 1:
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.ipv4.ip_forward
sysctl net.bridge.bridge-nf-call-ip6tables

Create containerd's configuration directory
mkdir -p /etc/containerd

Generate and adjust containerd's configuration file

containerd config default | tee  /etc/containerd/config.toml

sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup
sed -i "s#registry.k8s.io#m.daocloud.io/registry.k8s.io#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep sandbox_image
sed -i "s#config_path\ \=\ \"\"#config_path\ \=\ \"/etc/containerd/certs.d\"#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep certs.d

Recent containerd releases recommend keeping registry mirror configuration in its own directory: enable the config_path setting in /etc/containerd/config.toml and point it at that directory.
Note that hosts.toml may list several mirrors; containerd tries them in the order configured and only moves to the next mirror when a download from the previous one fails. So order mirrors by download speed, fastest first.
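Following that advice, a docker.io hosts.toml with an ordered fallback chain might look like this (the two mirror hosts are ones mentioned in this post; test whether they currently work for you):

```toml
server = "https://docker.io"

# Tried first: put the fastest mirror on top.
[host."https://docker.m.daocloud.io"]
  capabilities = ["pull", "resolve"]

# Fallback: used only when the mirror above fails.
[host."https://docker.nju.edu.cn"]
  capabilities = ["pull", "resolve"]
```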

Configure a registry mirror

https://docker.mirrors.ustc.edu.cn is no longer available
https://registry.cn-hangzhou.aliyuncs.com needs to be verified

mkdir /etc/containerd/certs.d/docker.io -pv
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://docker.nju.edu.cn/"]
  capabilities = ["pull", "resolve"]
EOF

If you hit the error below, switch mirror sources; you can request a free personal accelerator endpoint from Alibaba Cloud (https://xxx.mirror.aliyuncs.com):
"PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack ima

Example: a local Harbor private registry

Set up /etc/hosts

Assuming the Harbor registry lives at 192.168.244.6:
echo '192.168.244.6 repo.k8s.local' >> /etc/hosts

Edit the configuration file

vi /etc/containerd/config.toml

[plugins."io.containerd.grpc.v1.cri".registry]
   config_path = "/etc/containerd/certs.d"
      [plugins."io.containerd.grpc.v1.cri".registry.configs."repo.k8s.local".auth]
        username = "k8s_pull"
        password = "k8s_Pul1"

mkdir -p /etc/containerd/certs.d/repo.k8s.local

# Harbor over HTTPS (skip certificate verification for a self-signed cert)
cat > /etc/containerd/certs.d/repo.k8s.local/hosts.toml <<EOF
server = "https://repo.k8s.local"
[host."https://repo.k8s.local"]
  capabilities = ["pull", "resolve","push"]
  skip_verify = true
EOF

Start containerd and enable it at boot
systemctl daemon-reload
systemctl enable --now containerd.service
systemctl stop containerd.service
systemctl start containerd.service
systemctl restart containerd.service
systemctl status containerd.service

Point the crictl client at the runtime socket

tar xf crictl-v*-linux-amd64.tar.gz -C /usr/bin/
# Write the config file
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

systemctl restart containerd

crictl info

crictl --version
crictl version 1.27.0

Test pulling images

crictl pull docker.io/library/nginx 
crictl pull repo.k8s.local/google_containers/busybox:9.9
crictl images
IMAGE                                                              TAG                 IMAGE ID            SIZE
docker.io/library/nginx                                            latest              92b11f67642b6       70.5MB
docker.io/library/busybox                                          1.28                
repo.k8s.local/google_containers/busybox                           9.9                 a416a98b71e22       2.22MB

Download and install k8s and etcd (on master01 only)

Unpack the Kubernetes server tarball

tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

# - --strip-components=3: drop the first three directory levels when extracting, so files land directly in the target directory.
# - -C /usr/local/bin: extract into /usr/local/bin.
# - kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}: brace expansion selecting the files to extract.
#
# In short: extract kubelet, kubectl, kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy into /usr/local/bin, stripping the leading kubernetes/server/bin/ path.
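The effect of --strip-components=3 is easy to demonstrate with a throwaway archive that mimics the kubernetes/server/bin/ layout:

```shell
#!/bin/sh
# Demonstrate --strip-components=3 using a small archive with the same
# kubernetes/server/bin/ layout as the real release tarball.
set -e
mkdir -p demo/kubernetes/server/bin out
echo fake > demo/kubernetes/server/bin/kubectl
tar -czf demo.tar.gz -C demo kubernetes/server/bin/kubectl
# Without stripping, extraction would recreate kubernetes/server/bin/;
# with --strip-components=3 the file lands directly in ./out.
tar -xzf demo.tar.gz --strip-components=3 -C out kubernetes/server/bin/kubectl
ls out/kubectl
```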

Unpack etcd

tar -xf etcd-*-linux-amd64.tar.gz && mv etcd-*/etcd /usr/local/bin/ && mv etcd-*/etcdctl /usr/local/bin/
ls /usr/local/bin/

containerd               containerd-shim-runc-v2  critest      etcd            kube-controller-manager  kube-proxy
containerd-shim          containerd-stress        ctd-decoder  etcdctl         kubectl                  kube-scheduler
containerd-shim-runc-v1  crictl                   ctr          kube-apiserver  kubelet

Check versions

kubelet --version

Kubernetes v1.29.2
etcdctl version
etcdctl version: 3.5.12
API version: 3.5

Distribute the binaries

Distribute the master components
Master='dev-k8s-master02 dev-k8s-master03'
Work='dev-k8s-node01 dev-k8s-node02'

for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done

# For every node in $Master this loop:
#
# 1. Prints the node name.
# 2. Copies /usr/local/bin/kubelet, kubectl, kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy to the node's /usr/local/bin/ with scp.
# 3. Copies /usr/local/bin/etcd* to the node's /usr/local/bin/ with scp.

Distribute the worker components

for NODE in $Work; do echo $NODE; scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done

# For every node in $Work this loop:
#
# 1. Prints the node name.
# 2. Copies /usr/local/bin/kubelet and kube-proxy to the node's /usr/local/bin/ with scp.

Run on all nodes:
mkdir -p /opt/cni/bin

Generate the certificates

cp cfssl_*_linux_amd64 /usr/local/bin/cfssl
cp cfssljson_*_linux_amd64 /usr/local/bin/cfssljson

Make them executable

chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

Generate the etcd certificates

Perform the following on all master nodes
Create the certificate directory on every master node
mkdir /etc/etcd/ssl -p
Then generate the etcd certificates on the master01 node

Write the config files needed to generate the certificates

cat > ca-config.json << EOF 
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
# This file configures signing and authentication parameters.
#
# It has two parts: `signing` and `profiles`.
#
# `signing.default` sets the default certificate lifetime to `876000h`,
# i.e. the certificate is valid for 100 years.
#
# `profiles` defines named certificate profiles. The only one here,
# `kubernetes`, allows the following `usages` and also expires in `876000h`:
#
# 1. `signing`: the certificate may sign other certificates
# 2. `key encipherment`: encrypting and decrypting transported data
# 3. `server auth`: server authentication
# 4. `client auth`: client authentication
#
# The `kubernetes` profile's lifetime is likewise `876000h`, 100 years.
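The 876000h figure used throughout these files is simply 100 years expressed in hours, which shell arithmetic confirms:

```shell
#!/bin/sh
# 876000 hours / 24 hours-per-day / 365 days-per-year = 100 years
years=$(( 876000 / 24 / 365 ))
echo "${years} years"   # -> 100 years
```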
cat > etcd-ca-csr.json  << EOF 
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF
# This JSON config generates a certificate signing request (CSR); it supplies the data the CSR needs.
#
# - "CN": "etcd" sets the Common Name, i.e. the subject of the certificate, usually the name of the entity it identifies.
# - "key": {} sets the key parameters: "algo": "rsa" selects RSA and "size": 2048 a 2048-bit key.
# - "names": [] carries the subject attributes; the single entry here means:
#   - "C": "CN" — country code (China).
#   - "ST": "Beijing" — state/province.
#   - "L": "Beijing" — city.
#   - "O": "etcd" — organization.
#   - "OU": "Etcd Security" — organizational unit.
# - "ca": {} sets the CA parameters; "expiry": "876000h" is the validity period in hours.
#
# cfssl uses this file to build a CSR, which is then signed by the CA to produce a valid certificate.

# Generate the etcd CA and the etcd cert/key (if you might scale out later, list some spare IPs in -hostname)
# The ::1 entry may be kept or removed if you have no IPv6

cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

# Explanation:
#
# cfssl is a tool for generating TLS/SSL certificates, driven by JSON config files and easy to integrate with other tooling.
#
# `gencert` generates certificates; `-initca` initializes a CA (the root certificate used to sign other certificates). etcd-ca-csr.json supplies the CA's details: key parameters, validity, and so on.
#
# The | pipe feeds the previous command's output into the next one.
#
# cfssljson is a cfssl companion that processes cfssl's JSON output; `-bare` writes plain certificate files only, and /etc/etcd/ssl/etcd-ca is the output path prefix.
#
# In short: build a CA from etcd-ca-csr.json and save its files under /etc/etcd/ssl/etcd-ca.
cat > etcd-csr.json << EOF 
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
EOF

# Another JSON CSR config.
#
# "CN" sets the Common Name to "etcd"; "key" selects RSA with a 2048-bit key;
# "names" carries the same subject attributes as above (C=CN, ST=Beijing,
# L=Beijing, O=etcd, OU="Etcd Security"), which identify the certificate's
# subject and issuer scope.

cfssl gencert \
   -ca=/etc/etcd/ssl/etcd-ca.pem \
   -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
   -config=ca-config.json \
   -hostname=127.0.0.1,dev-k8s-master01,dev-k8s-master02,dev-k8s-master03,192.168.244.14,192.168.244.15,192.168.244.16,::1 \
   -profile=kubernetes \
   etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

This cfssl command generates the etcd certificate; the parameters are:

-ca=/etc/etcd/ssl/etcd-ca.pem: the CA certificate used to sign the etcd certificate.

-ca-key=/etc/etcd/ssl/etcd-ca-key.pem: the CA's private key.

-config=ca-config.json: the CA config file defining validity, algorithms, and so on.

-hostname=xxxx: the hostnames and IP addresses the certificate will be valid for.

-profile=kubernetes: which profile from the config file to apply; it defines the certificate's usages and extensions.

etcd-csr.json: the etcd CSR config file with the request details.

| cfssljson -bare /etc/etcd/ssl/etcd: pipes cfssl's output into cfssljson, whose -bare prefix produces the etcd .pem and -key.pem files.

In short: use the given CA certificate and key, plus the CSR and signing config, to produce etcd's certificate files.


Copy the certificates to the other nodes

Master='dev-k8s-master02 dev-k8s-master03'
for NODE in $Master; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done

This is a simple for loop over the hosts in $Master. For each host it logs in with ssh and creates /etc/etcd/ssl on the remote host if it doesn't exist; it then uses scp to copy four files (etcd-ca-key.pem, etcd-ca.pem, etcd-key.pem, etcd.pem) from the local /etc/etcd/ssl into the remote /etc/etcd/ssl, so each remote host ends up with the same four files.


Generate the Kubernetes certificates

mkdir -p /etc/kubernetes/pki
Generate them on the master01 node.
Write the config files needed to generate the certificates:

cat > ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "Kubernetes",
"OU": "Kubernetes-manual"
}
],
"ca": {
"expiry": "876000h"
}
}
EOF

This config file generates the Kubernetes CA certificate. It contains:

- CN: the Common Name identifying the certificate; "kubernetes" here marks it as the Kubernetes CA.

- key: the algorithm and size used to generate the key; RSA at 2048 bits.

- names: the subject fields:

  - C: country, "CN".

  - ST: state/province, "Beijing".

  - L: locality, "Beijing".

  - O: organization, "Kubernetes".

  - OU: organizational unit, "Kubernetes-manual".

- ca: the CA settings; validity is set to 876000 hours.

These certificates secure communication within the cluster.


cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

As before: cfssl gencert -initca builds a CA from ca-csr.json, cfssljson -bare writes the plain certificate files, and the result is saved under the /etc/kubernetes/pki/ca prefix.

cat > apiserver-csr.json << EOF
{
"CN": "kube-apiserver",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "Kubernetes",
"OU": "Kubernetes-manual"
}
]
}
EOF

The API server CSR config contains:

CN sets the Common Name to "kube-apiserver", marking this certificate as the Kubernetes API server's.

key selects RSA with a 2048-bit key.

names carries the subject attributes: C=CN (China), ST=Beijing, L=Beijing, O=Kubernetes, OU=Kubernetes-manual (a manually managed cluster).

Generate the apiserver certificate; extra IPs are listed in -hostname as spares for nodes added later.

10.96.0.1 is the first address of the service CIDR (it must be computed from the CIDR); 192.168.1.36 would be the high-availability VIP.

The ::1 entry may be kept or removed if you have no IPv6.

cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-hostname=10.96.0.1,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.244.14,192.168.244.15,192.168.244.16,::1 \
-profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

This command generates the Kubernetes API server certificate with cfssl.

The parameters:

-ca=/etc/kubernetes/pki/ca.pem: the CA certificate.

-ca-key=/etc/kubernetes/pki/ca-key.pem: the CA private key.

-config=ca-config.json: the signing config (validity, algorithms, and so on).

-hostname=...: the hostnames and IP addresses the certificate is valid for.

-profile=kubernetes: the profile to apply from the signing config.

apiserver-csr.json: the API server's CSR config file.

| cfssljson -bare /etc/kubernetes/pki/apiserver: pipes the output through cfssljson, writing PEM-encoded /etc/kubernetes/pki/apiserver.pem and /etc/kubernetes/pki/apiserver-key.pem.

The result is the API server's certificate and private key in the files above.


Generate the apiserver aggregation (front-proxy) certificates

cat > front-proxy-ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"ca": {
"expiry": "876000h"
}
}
EOF

This JSON config generates a certificate named "kubernetes" used for authentication and secure communication at the aggregation layer:

1. "CN": "kubernetes": the certificate's Common Name, i.e. the entity it represents.

2. "key": the key parameters; RSA with a 2048-bit key.

3. "ca": the CA settings; validity is 876000 hours, i.e. 100 years, after which the certificate must be regenerated.


cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca

As before: initialize a CA from front-proxy-ca-csr.json and save the certificate files under /etc/kubernetes/pki/front-proxy-ca.

cat > front-proxy-client-csr.json << EOF
{
"CN": "front-proxy-client",
"key": {
"algo": "rsa",
"size": 2048
}
}
EOF

This JSON config describes the "front-proxy-client" certificate. It has two fields:

- CN (Common Name): "front-proxy-client".

- key: RSA with a 2048-bit key.

The resulting certificate authenticates the front-proxy client and encrypts its traffic.

cfssl gencert \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
-config=ca-config.json \
-profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client


# This command generates the front-proxy-client certificate for Kubernetes.
#
# Parameters:
# - `-ca=/etc/kubernetes/pki/front-proxy-ca.pem`: the root certificate used for signing.
# - `-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem`: the root certificate's private key.
# - `-config=ca-config.json`: the signing config (algorithms, validity, and so on).
# - `-profile=kubernetes`: the profile to apply from that config.
# - `front-proxy-client-csr.json`: the CSR config file for this certificate.
# - `| cfssljson -bare /etc/kubernetes/pki/front-proxy-client`: parses cfssl's output and writes the certificate and key separately.
#
# The result: /etc/kubernetes/pki/front-proxy-client.pem and /etc/kubernetes/pki/front-proxy-client-key.pem.

Generate the controller-manager certificate
Pick your HA scheme: with haproxy + keepalived use --server=https://192.168.1.36:9443; with the nginx scheme use --server=https://127.0.0.1:8443

cat > manager-csr.json << EOF 
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
# This JSON config generates the controller-manager key pair and CSR:
#
# - "CN": "system:kube-controller-manager", the subject of the key pair.
# - "key": "algo": "rsa" with "size": 2048 selects a 2048-bit RSA key.
# - "names": the subject attributes:
#   - "C": "CN" (China)
#   - "ST": "Beijing"
#   - "L": "Beijing"
#   - "O": "system:kube-controller-manager"
#   - "OU": "Kubernetes-manual"
#
# In short, it tells the tool to generate a key pair with these specific names and attributes.
cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
# This cfssl invocation signs the controller-manager certificate:
#
# 1. `cfssl gencert` generates a certificate.
# 2. `-ca` is the root certificate, /etc/kubernetes/pki/ca.pem.
# 3. `-ca-key` is its private key, /etc/kubernetes/pki/ca-key.pem.
# 4. `-config` is the signing config, ca-config.json.
# 5. `-profile` picks the `kubernetes` template from that config.
# 6. `manager-csr.json` is the CSR config file.
# 7. `|` pipes the output into the next command.
# 8. `cfssljson -bare` converts the output into PEM files.
# 9. `/etc/kubernetes/pki/controller-manager` is the output prefix.
#
# Net effect: sign the controller-manager certificate and key with the root CA and save the PEM files under the given prefix.
# Set a cluster entry
# Pick your HA scheme:
# with haproxy + keepalived use `--server=https://192.168.244.14:9443`
# with the nginx scheme use `--server=https://127.0.0.1:8443`
kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://127.0.0.1:8443 \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# kubectl config set-cluster stores cluster information in a kubeconfig.
# --certificate-authority: the CA used to verify the certificate kube-apiserver presents.
# --embed-certs: embed the certificate data in the kubeconfig instead of referencing a file path.
# --server: the kube-apiserver address; 127.0.0.1:8443 here, i.e. the local proxy.
# --kubeconfig: where to write the file, /etc/kubernetes/controller-manager.kubeconfig.
# In short, it records the cluster's CA, certificates, and kube-apiserver address in the kubeconfig.

Set a context entry

kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# This command configures the controller-manager's context:
# 1. `kubectl config set-context system:kube-controller-manager@kubernetes`: the context name, a unique identifier.
# 2. `--cluster=kubernetes`: the cluster this context points at.
# 3. `--user=system:kube-controller-manager`: the user identity to use, which has controller-manager permissions.
# 4. `--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig`: the kubeconfig file that stores the cluster, user, and context entries.

设置一个用户项

 kubectl config set-credentials system:kube-controller-manager \
       --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
       --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
       --embed-certs=true \
       --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# 上述命令是用于设置 Kubernetes 的 controller-manager 组件的客户端凭据。下面是每个参数的详细解释:
# 
# - kubectl config: 是使用 kubectl 命令行工具的配置子命令。
# - set-credentials: 是定义一个新的用户凭据配置的子命令。
# - system:kube-controller-manager: 是设置用户凭据的名称,system: 是 Kubernetes API Server 内置的身份验证器使用的用户标识符前缀,它表示是一个系统用户,在本例中是 kube-controller-manager 组件使用的身份。
# - --client-certificate=/etc/kubernetes/pki/controller-manager.pem: 指定 controller-manager.pem 客户端证书的路径。
# - --client-key=/etc/kubernetes/pki/controller-manager-key.pem: 指定 controller-manager-key.pem 客户端私钥的路径。
# - --embed-certs=true: 表示将证书和私钥直接嵌入到生成的 kubeconfig 文件中,而不是通过引用外部文件。
# - --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig: 指定生成的 kubeconfig 文件的路径和文件名,即 controller-manager.kubeconfig。
# 
# 通过运行上述命令,将根据提供的证书和私钥信息,为 kube-controller-manager 创建一个 kubeconfig 文件,以便后续使用该文件进行身份验证和访问 Kubernetes API。

设置默认环境


kubectl config use-context system:kube-controller-manager@kubernetes \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# 这个命令是用来指定kubectl使用指定的上下文环境来执行操作。上下文环境是kubectl用来确定要连接到哪个Kubernetes集群以及使用哪个身份验证信息的配置。
# 
# 在这个命令中,kubectl config use-context是用来设置当前上下文环境的命令。 system:kube-controller-manager@kubernetes是指定的上下文名称,它告诉kubectl要使用的Kubernetes集群和身份验证信息。 
# --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig是用来指定使用的kubeconfig文件的路径。kubeconfig文件是存储集群连接和身份验证信息的配置文件。
# 通过执行这个命令,kubectl将使用指定的上下文来执行后续的操作,包括部署和管理Kubernetes资源。
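作为补充,`--embed-certs=true` 生成的 kubeconfig 本质上是一个包含 clusters/contexts/users 三段结构的 YAML 文件。下面用一个最小化的示例文件直观展示这一结构(仅为示意,`certificate-authority-data` 用占位内容代替真实的 base64 证书):

```shell
# 示例:构造一个最小化的 kubeconfig 样例,观察 clusters/contexts/current-context 结构
TMP=$(mktemp -d)
cat > "$TMP/demo.kubeconfig" << 'EOF'
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://127.0.0.1:8443
    certificate-authority-data: LS0tLS1CRUdJTi...
contexts:
- name: system:kube-controller-manager@kubernetes
  context:
    cluster: kubernetes
    user: system:kube-controller-manager
current-context: system:kube-controller-manager@kubernetes
EOF
grep "^current-context" "$TMP/demo.kubeconfig"
```

真实环境中可以直接对 /etc/kubernetes/controller-manager.kubeconfig 执行同样的 grep 来确认默认上下文。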

kubectl config view

生成kube-scheduler的证书

cat > scheduler-csr.json << EOF 
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
# 这个命令是用来创建一个叫做scheduler-csr.json的文件,并将其中的内容赋值给该文件。
# 
# 文件内容是一个JSON格式的文本,包含了一个描述证书请求的结构。
# 
# 具体内容如下:
# 
# - "CN": "system:kube-scheduler":Common Name字段,表示该证书的名称为system:kube-scheduler。
# - "key": {"algo": "rsa", "size": 2048}:key字段指定生成证书时使用的加密算法是RSA,并且密钥的长度为2048位。
# - "names": [...]:names字段定义了证书中的另外一些标识信息。
# - "C": "CN":Country字段,表示国家/地区为中国。
# - "ST": "Beijing":State字段,表示省/市为北京。
# - "L": "Beijing":Locality字段,表示所在城市为北京。
# - "O": "system:kube-scheduler":Organization字段,表示组织为system:kube-scheduler。
# - "OU": "Kubernetes-manual":Organizational Unit字段,表示组织单元为Kubernetes-manual。
# 
# 而EOF是heredoc的定界符,用于标记输入内容的开始和结束位置。在开始的EOF之后到结束的EOF之间的内容将会被写入到scheduler-csr.json文件中。
# 
# 总体来说,这个命令用于生成一个描述kube-scheduler证书请求的JSON文件。
cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
# 上述命令是使用cfssl工具生成Kubernetes Scheduler的证书。
# 
# 具体解释如下:
# 
# 1. cfssl gencert:使用cfssl工具生成证书。
# 2. -ca=/etc/kubernetes/pki/ca.pem:指定根证书文件的路径。在这里,是指定根证书的路径为/etc/kubernetes/pki/ca.pem。
# 3. -ca-key=/etc/kubernetes/pki/ca-key.pem:指定根证书私钥文件的路径。在这里,是指定根证书私钥的路径为/etc/kubernetes/pki/ca-key.pem。
# 4. -config=ca-config.json:指定证书配置文件的路径。在这里,是指定证书配置文件的路径为ca-config.json。
# 5. -profile=kubernetes:指定证书的配置文件中的一个配置文件模板。在这里,是指定配置文件中的kubernetes配置模板。
# 6. scheduler-csr.json:指定Scheduler的证书签名请求文件(CSR)的路径。在这里,是指定请求文件的路径为scheduler-csr.json。
# 7. |(管道符号):将前一个命令的输出作为下一个命令的输入。
# 8. cfssljson:解析cfssl输出的JSON(其中包含证书、私钥和CSR),并写入对应的文件。
# 9. -bare /etc/kubernetes/pki/scheduler:指定输出路径和前缀。在这里,是将解析的证书签名请求生成以下文件:/etc/kubernetes/pki/scheduler.pem(包含了证书)、/etc/kubernetes/pki/scheduler-key.pem(包含了私钥)。
# 
# 总结来说,这个命令的目的是根据根证书、根证书私钥、证书配置文件、CSR文件等生成Kubernetes Scheduler的证书和私钥文件。
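签发完成后,可以用 `openssl verify -CAfile /etc/kubernetes/pki/ca.pem /etc/kubernetes/pki/scheduler.pem` 校验证书确实由该 CA 签发。下面在临时目录自建一个一次性的演示 CA,用 openssl 完整模拟"根证书签发组件证书后再校验"的流程(仅为思路演示,不触碰真实集群文件):

```shell
# 演示:临时目录中自建 CA -> 生成 CSR -> 签发 -> 校验,与 cfssl 流程等价的 openssl 版本
TMP=$(mktemp -d)
# 1. 生成演示用的自签名 CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$TMP/ca-key.pem" \
  -out "$TMP/ca.pem" -subj "/CN=demo-ca" -days 1 2>/dev/null
# 2. 生成组件私钥和证书签名请求(CN 与 scheduler-csr.json 中一致)
openssl req -newkey rsa:2048 -nodes -keyout "$TMP/scheduler-key.pem" \
  -out "$TMP/scheduler.csr" -subj "/CN=system:kube-scheduler" 2>/dev/null
# 3. 用 CA 签发证书
openssl x509 -req -in "$TMP/scheduler.csr" -CA "$TMP/ca.pem" \
  -CAkey "$TMP/ca-key.pem" -CAcreateserial -out "$TMP/scheduler.pem" -days 1 2>/dev/null
# 4. 校验签发出的证书确实由该 CA 签发,输出 "... : OK" 即通过
openssl verify -CAfile "$TMP/ca.pem" "$TMP/scheduler.pem"
```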
# 选择使用哪种高可用方案
# 若使用 haproxy、keepalived 那么为 --server=https://192.168.244.14:9443
# 若使用 nginx方案,那么为 --server=https://127.0.0.1:8443
kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://127.0.0.1:8443 \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
# 该命令用于配置一个名为"kubernetes"的集群,并将其应用到/etc/kubernetes/scheduler.kubeconfig文件中。
# 
# 该命令的解释如下:
# - kubectl config set-cluster kubernetes: 设置一个集群并命名为"kubernetes"。
# - --certificate-authority=/etc/kubernetes/pki/ca.pem: 指定集群使用的证书授权机构的路径。
# - --embed-certs=true: 该标志指示将证书嵌入到生成的kubeconfig文件中。
# - --server=https://127.0.0.1:8443: 指定集群的 API server 位置。
# - --kubeconfig=/etc/kubernetes/scheduler.kubeconfig: 指定要保存 kubeconfig 文件的路径和名称。
kubectl config set-credentials system:kube-scheduler \
     --client-certificate=/etc/kubernetes/pki/scheduler.pem \
     --client-key=/etc/kubernetes/pki/scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
# 这段命令是用于设置 kube-scheduler 组件的身份验证凭据,并生成相应的 kubeconfig 文件。
# 
# 解释每个选项的含义如下:
# - kubectl config set-credentials system:kube-scheduler:设置 system:kube-scheduler 用户的身份验证凭据。
# - --client-certificate=/etc/kubernetes/pki/scheduler.pem:指定一个客户端证书文件,用于基于证书的身份验证。在这种情况下,指定了 kube-scheduler 组件的证书文件路径。
# - --client-key=/etc/kubernetes/pki/scheduler-key.pem:指定与客户端证书相对应的客户端私钥文件。
# - --embed-certs=true:将客户端证书和私钥嵌入到生成的 kubeconfig 文件中。
# - --kubeconfig=/etc/kubernetes/scheduler.kubeconfig:指定生成的 kubeconfig 文件的路径和名称。
# 
# 该命令的目的是为 kube-scheduler 组件生成一个 kubeconfig 文件,以便进行身份验证和访问集群资源。kubeconfig 文件是一个包含了连接到 Kubernetes 集群所需的所有配置信息的文件,包括服务器地址、证书和秘钥等。
kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
# 该命令用于设置一个名为"system:kube-scheduler@kubernetes"的上下文,具体配置如下:
# 
# 1. --cluster=kubernetes: 指定集群的名称为"kubernetes",这个集群是在当前的kubeconfig文件中已经定义好的。
# 2. --user=system:kube-scheduler: 指定用户的名称为"system:kube-scheduler",这个用户也是在当前的kubeconfig文件中已经定义好的。这个用户用于认证和授权kube-scheduler组件访问Kubernetes集群的权限。
# 3. --kubeconfig=/etc/kubernetes/scheduler.kubeconfig: 指定kubeconfig文件的路径为"/etc/kubernetes/scheduler.kubeconfig",这个文件将被用来保存上下文的配置信息。
# 
# 这个命令的作用是将上述的配置信息保存到指定的kubeconfig文件中,以便后续使用该文件进行认证和授权访问Kubernetes集群。
kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
# 上述命令是使用kubectl命令来配置Kubernetes集群中的调度器组件。
# 
# kubectl config use-context命令用于切换kubectl当前使用的上下文。上下文是Kubernetes集群、用户和命名空间的组合,用于确定kubectl的连接目标。下面解释这个命令的不同部分:
# 
# - system:kube-scheduler@kubernetes是一个上下文名称。它表示使用system:kube-scheduler用户访问名为kubernetes的集群(@后面是集群名,而不是命名空间)。这类系统级上下文用于操作Kubernetes核心组件。
# 
# - --kubeconfig=/etc/kubernetes/scheduler.kubeconfig用于指定Kubernetes配置文件的路径。Kubernetes配置文件包含连接到Kubernetes集群所需的身份验证和连接信息。
# 
# 通过运行以上命令,kubectl将使用指定的上下文和配置文件,以便在以后的命令中能正确地与Kubernetes集群中的调度器组件进行交互。

生成admin的证书配置

cat > admin-csr.json << EOF 
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
# 这段代码是一个JSON格式的配置文件,用于创建和配置一个名为"admin"的Kubernetes凭证。
# 
# 这个凭证包含以下字段:
# 
# - "CN": "admin": 这是凭证的通用名称,表示这是一个管理员凭证。
# - "key": 这是一个包含证书密钥相关信息的对象。
#   - "algo": "rsa":这是使用的加密算法类型,这里是RSA加密算法。
#   - "size": 2048:这是密钥的大小,这里是2048位。
# - "names": 这是一个包含证书名称信息的数组。
#   - "C": "CN":这是证书的国家/地区字段,这里是中国。
#   - "ST": "Beijing":这是证书的省/州字段,这里是北京。
#   - "L": "Beijing":这是证书的城市字段,这里是北京。
#   - "O": "system:masters":这是证书的组织字段,这里是system:masters,表示系统的管理员组。
#   - "OU": "Kubernetes-manual":这是证书的部门字段,这里是Kubernetes-manual。
# 
# 通过这个配置文件创建的凭证将具有管理员权限,并且可以用于管理Kubernetes集群。
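生成 CSR 的 JSON 后,可以先做一次语法校验再交给 cfssl,避免引号或逗号写错导致报错。下面在临时目录演示这一校验思路(假设系统装有 python3;也可换用 jq 等工具):

```shell
# 演示:用 python3 -m json.tool 校验 CSR 文件是合法 JSON(示例文件写在临时目录)
TMP=$(mktemp -d)
cat > "$TMP/admin-csr.json" << EOF
{
  "CN": "admin",
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "O": "system:masters"}]
}
EOF
# 解析成功(退出码为0)说明 JSON 语法正确
python3 -m json.tool "$TMP/admin-csr.json" > /dev/null && echo "JSON 校验通过"
```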
cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
# 上述命令是使用cfssl工具生成Kubernetes admin的证书。
# 
# 具体解释如下:
# 
# 1. cfssl gencert:使用cfssl工具生成证书。
# 2. -ca=/etc/kubernetes/pki/ca.pem:指定根证书文件的路径。在这里,是指定根证书的路径为/etc/kubernetes/pki/ca.pem。
# 3. -ca-key=/etc/kubernetes/pki/ca-key.pem:指定根证书私钥文件的路径。在这里,是指定根证书私钥的路径为/etc/kubernetes/pki/ca-key.pem。
# 4. -config=ca-config.json:指定证书配置文件的路径。在这里,是指定证书配置文件的路径为ca-config.json。
# 5. -profile=kubernetes:指定证书的配置文件中的一个配置文件模板。在这里,是指定配置文件中的kubernetes配置模板。
# 6. admin-csr.json:指定admin的证书签名请求文件(CSR)的路径。在这里,是指定请求文件的路径为admin-csr.json。
# 7. |(管道符号):将前一个命令的输出作为下一个命令的输入。
# 8. cfssljson:解析cfssl输出的JSON(其中包含证书、私钥和CSR),并写入对应的文件。
# 9. -bare /etc/kubernetes/pki/admin:指定输出路径和前缀。在这里,是将解析的证书签名请求生成以下文件:/etc/kubernetes/pki/admin.pem(包含了证书)、/etc/kubernetes/pki/admin-key.pem(包含了私钥)。
# 
# 总结来说,这个命令的目的是根据根证书、根证书私钥、证书配置文件、CSR文件等生成Kubernetes admin的证书和私钥文件。
# 选择使用哪种高可用方案
# 若使用 haproxy、keepalived 那么为 --server=https://192.168.244.14:9443
# 若使用 nginx方案,那么为 --server=https://127.0.0.1:8443
kubectl config set-cluster kubernetes     \
  --certificate-authority=/etc/kubernetes/pki/ca.pem     \
  --embed-certs=true     \
  --server=https://127.0.0.1:8443     \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
# 该命令用于配置一个名为"kubernetes"的集群,并将其应用到/etc/kubernetes/admin.kubeconfig文件中。
# 
# 该命令的解释如下:
# - kubectl config set-cluster kubernetes: 设置一个集群并命名为"kubernetes"。
# - --certificate-authority=/etc/kubernetes/pki/ca.pem: 指定集群使用的证书授权机构的路径。
# - --embed-certs=true: 该标志指示将证书嵌入到生成的kubeconfig文件中。
# - --server=https://127.0.0.1:8443: 指定集群的 API server 位置。
# - --kubeconfig=/etc/kubernetes/admin.kubeconfig: 指定要保存 kubeconfig 文件的路径和名称。
kubectl config set-credentials kubernetes-admin  \
  --client-certificate=/etc/kubernetes/pki/admin.pem     \
  --client-key=/etc/kubernetes/pki/admin-key.pem     \
  --embed-certs=true     \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
# 这段命令是用于设置 kubernetes-admin 组件的身份验证凭据,并生成相应的 kubeconfig 文件。
# 
# 解释每个选项的含义如下:
# - kubectl config set-credentials kubernetes-admin:设置 kubernetes-admin 用户的身份验证凭据。
# - --client-certificate=/etc/kubernetes/pki/admin.pem:指定一个客户端证书文件,用于基于证书的身份验证。在这种情况下,指定了 admin 组件的证书文件路径。
# - --client-key=/etc/kubernetes/pki/admin-key.pem:指定与客户端证书相对应的客户端私钥文件。
# - --embed-certs=true:将客户端证书和私钥嵌入到生成的 kubeconfig 文件中。
# - --kubeconfig=/etc/kubernetes/admin.kubeconfig:指定生成的 kubeconfig 文件的路径和名称。
# 
# 该命令的目的是为 admin 组件生成一个 kubeconfig 文件,以便进行身份验证和访问集群资源。kubeconfig 文件是一个包含了连接到 Kubernetes 集群所需的所有配置信息的文件,包括服务器地址、证书和秘钥等。
kubectl config set-context kubernetes-admin@kubernetes    \
  --cluster=kubernetes     \
  --user=kubernetes-admin     \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig


该命令用于设置一个名为"kubernetes-admin@kubernetes"的上下文,具体配置如下:

1. --cluster=kubernetes: 指定集群的名称为"kubernetes",这个集群是在当前的kubeconfig文件中已经定义好的。

2. --user=kubernetes-admin: 指定用户的名称为"kubernetes-admin",这个用户也是在当前的kubeconfig文件中已经定义好的。这个用户用于认证和授权admin组件访问Kubernetes集群的权限。

3. --kubeconfig=/etc/kubernetes/admin.kubeconfig: 指定kubeconfig文件的路径为"/etc/kubernetes/admin.kubeconfig",这个文件将被用来保存上下文的配置信息。

这个命令的作用是将上述的配置信息保存到指定的kubeconfig文件中,以便后续使用该文件进行认证和授权访问Kubernetes集群。

kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig

上述命令是使用`kubectl`命令切换到管理员(kubernetes-admin)上下文。

`kubectl config use-context`命令用于切换`kubectl`当前使用的上下文。上下文是Kubernetes集群、用户和命名空间的组合,用于确定`kubectl`的连接目标。下面解释这个命令的不同部分:

- `kubernetes-admin@kubernetes`是一个上下文名称。它表示使用`kubernetes-admin`用户访问名为`kubernetes`的集群(@后面是集群名,而不是命名空间)。

- `--kubeconfig=/etc/kubernetes/admin.kubeconfig`用于指定Kubernetes配置文件的路径。Kubernetes配置文件包含连接到Kubernetes集群所需的身份验证和连接信息。

通过运行以上命令,`kubectl`将使用指定的上下文和配置文件,在后续命令中以管理员身份与Kubernetes集群进行交互。


创建kube-proxy证书
选择使用哪种高可用方案:若使用 haproxy、keepalived 那么为 `--server=https://192.168.244.14:9443`;若使用 nginx 方案,那么为 `--server=https://127.0.0.1:8443`

cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-proxy",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

这段代码是一个JSON格式的配置文件,用于创建名为"kube-proxy-csr.json"的证书签名请求(CSR)配置。

这个请求包含以下字段:

- "CN": "system:kube-proxy": 证书的通用名称,kube-proxy 组件将以该身份与 API Server 通信。

- "key": 这是一个包含证书密钥相关信息的对象。

  - "algo": "rsa":这是使用的加密算法类型,这里是RSA加密算法。

  - "size": 2048:这是密钥的大小,这里是2048位。

- "names": 这是一个包含证书名称信息的数组。

  - "C": "CN":这是证书的国家/地区字段,这里是中国。

  - "ST": "Beijing":这是证书的省/州字段,这里是北京。

  - "L": "Beijing":这是证书的城市字段,这里是北京。

  - "O": "system:kube-proxy":这是证书的组织字段,Kubernetes 依据该字段对 kube-proxy 进行授权。

  - "OU": "Kubernetes-manual":这是证书的部门字段,这里是Kubernetes-manual。

通过这个配置文件生成的证书仅供 kube-proxy 组件使用,不具有管理员权限。

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy

上述命令是使用cfssl工具生成Kubernetes kube-proxy的证书。

具体解释如下:

1. `cfssl gencert`:使用cfssl工具生成证书。

2. `-ca=/etc/kubernetes/pki/ca.pem`:指定根证书文件的路径。在这里,是指定根证书的路径为`/etc/kubernetes/pki/ca.pem`。

3. `-ca-key=/etc/kubernetes/pki/ca-key.pem`:指定根证书私钥文件的路径。在这里,是指定根证书私钥的路径为`/etc/kubernetes/pki/ca-key.pem`。

4. `-config=ca-config.json`:指定证书配置文件的路径。在这里,是指定证书配置文件的路径为`ca-config.json`。

5. `-profile=kubernetes`:指定证书的配置文件中的一个配置文件模板。在这里,是指定配置文件中的`kubernetes`配置模板。

6. `kube-proxy-csr.json`:指定kube-proxy的证书签名请求文件(CSR)的路径。在这里,是指定请求文件的路径为`kube-proxy-csr.json`。

7. `|`(管道符号):将前一个命令的输出作为下一个命令的输入。

8. `cfssljson`:解析cfssl输出的JSON(其中包含证书、私钥和CSR),并写入对应的文件。

9. `-bare /etc/kubernetes/pki/kube-proxy`:指定输出路径和前缀。在这里,是将解析的证书签名请求生成以下文件:`/etc/kubernetes/pki/kube-proxy.pem`(包含了证书)、`/etc/kubernetes/pki/kube-proxy-key.pem`(包含了私钥)。

总结来说,这个命令的目的是根据根证书、根证书私钥、证书配置文件、CSR文件等生成Kubernetes kube-proxy的证书和私钥文件。


生成kube-proxy的kubeconfig

选择使用哪种高可用方案

若使用 haproxy、keepalived 那么为 `--server=https://192.168.244.14:9443`;

若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443`;

kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://127.0.0.1:8443 \
     --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

该命令用于配置一个名为"kubernetes"的集群,并将其应用到/etc/kubernetes/kube-proxy.kubeconfig文件中。

该命令的解释如下:

- `kubectl config set-cluster kubernetes`: 设置一个集群并命名为"kubernetes"。

- `--certificate-authority=/etc/kubernetes/pki/ca.pem`: 指定集群使用的证书授权机构的路径。

- `--embed-certs=true`: 该标志指示将证书嵌入到生成的kubeconfig文件中。

- `--server=https://127.0.0.1:8443`: 指定集群的 API server 位置。

- `--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig`: 指定要保存 kubeconfig 文件的路径和名称。

kubectl config set-credentials kube-proxy \
     --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
     --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

这段命令是用于设置 kube-proxy 组件的身份验证凭据,并生成相应的 kubeconfig 文件。

解释每个选项的含义如下:

- `kubectl config set-credentials kube-proxy`:设置 `kube-proxy` 用户的身份验证凭据。

- `--client-certificate=/etc/kubernetes/pki/kube-proxy.pem`:指定一个客户端证书文件,用于基于证书的身份验证。在这种情况下,指定了 kube-proxy 组件的证书文件路径。

- `--client-key=/etc/kubernetes/pki/kube-proxy-key.pem`:指定与客户端证书相对应的客户端私钥文件。

- `--embed-certs=true`:将客户端证书和私钥嵌入到生成的 kubeconfig 文件中。

- `--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig`:指定生成的 kubeconfig 文件的路径和名称。

该命令的目的是为 kube-proxy 组件生成一个 kubeconfig 文件,以便进行身份验证和访问集群资源。kubeconfig 文件是一个包含了连接到 Kubernetes 集群所需的所有配置信息的文件,包括服务器地址、证书和秘钥等。

kubectl config set-context kube-proxy@kubernetes \
     --cluster=kubernetes \
     --user=kube-proxy \
     --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

该命令用于设置一个名为"kube-proxy@kubernetes"的上下文,具体配置如下:

1. --cluster=kubernetes: 指定集群的名称为"kubernetes",这个集群是在当前的kubeconfig文件中已经定义好的。

2. --user=kube-proxy: 指定用户的名称为"kube-proxy",这个用户也是在当前的kubeconfig文件中已经定义好的。这个用户用于认证和授权kube-proxy组件访问Kubernetes集群的权限。

3. --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig: 指定kubeconfig文件的路径为"/etc/kubernetes/kube-proxy.kubeconfig",这个文件将被用来保存上下文的配置信息。

这个命令的作用是将上述的配置信息保存到指定的kubeconfig文件中,以便后续使用该文件进行认证和授权访问Kubernetes集群。

kubectl config use-context kube-proxy@kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

上述命令是使用`kubectl`命令切换到kube-proxy组件使用的上下文。

`kubectl config use-context`命令用于切换`kubectl`当前使用的上下文。上下文是Kubernetes集群、用户和命名空间的组合,用于确定`kubectl`的连接目标。下面解释这个命令的不同部分:

- `kube-proxy@kubernetes`是一个上下文名称。它表示使用`kube-proxy`用户访问名为`kubernetes`的集群(@后面是集群名,而不是命名空间)。

- `--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig`用于指定Kubernetes配置文件的路径。Kubernetes配置文件包含连接到Kubernetes集群所需的身份验证和连接信息。

通过运行以上命令,`kubectl`将使用指定的上下文和配置文件,使kube-proxy组件在后续能以正确的身份与Kubernetes集群进行交互。


### 创建ServiceAccount Key ——secret

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

这两个命令是使用OpenSSL工具生成RSA密钥对。

命令1:openssl genrsa -out /etc/kubernetes/pki/sa.key 2048

该命令用于生成私钥文件。具体解释如下:

- openssl:openssl命令行工具。

- genrsa:生成RSA密钥对。

- -out /etc/kubernetes/pki/sa.key:指定输出私钥文件的路径和文件名。

- 2048:指定密钥长度为2048位。

命令2:openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

该命令用于从私钥中导出公钥。具体解释如下:

- openssl:openssl命令行工具。

- rsa:与私钥相关的RSA操作。

- -in /etc/kubernetes/pki/sa.key:指定输入私钥文件的路径和文件名。

- -pubout:指定输出公钥。

- -out /etc/kubernetes/pki/sa.pub:指定输出公钥文件的路径和文件名。

总结:通过以上两个命令,我们可以使用OpenSSL工具生成一个RSA密钥对,并将私钥保存在/etc/kubernetes/pki/sa.key文件中,将公钥保存在/etc/kubernetes/pki/sa.pub文件中。
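上面两条命令生成的公私钥是否匹配,可以再从私钥导出一次公钥做比对。下面在临时目录演示这一校验思路(不触碰真实的 /etc/kubernetes/pki 文件):

```shell
# 演示:生成密钥对后,从私钥再次导出公钥并与 sa.pub 比对,一致即匹配
TMP=$(mktemp -d)
openssl genrsa -out "$TMP/sa.key" 2048 2>/dev/null
openssl rsa -in "$TMP/sa.key" -pubout -out "$TMP/sa.pub" 2>/dev/null
# 从私钥重新导出公钥,与已保存的 sa.pub 内容比较
PUB_AGAIN=$(openssl rsa -in "$TMP/sa.key" -pubout 2>/dev/null)
[ "$PUB_AGAIN" = "$(cat "$TMP/sa.pub")" ] && echo "密钥对匹配"
```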


### 将证书发送到其他master节点

其他节点创建目录

mkdir /etc/kubernetes/pki/ -p

for NODE in dev-k8s-master02 dev-k8s-master03; do
  for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
    scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}
  done
  for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
    scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}
  done
done
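批量 scp 之前,可以先用 echo 做一次 dry-run,确认 `grep -v etcd` 的过滤结果符合预期。下面在临时目录模拟这一展开逻辑(节点名沿用上文,仅打印命令而不真正复制):

```shell
# 演示:用 echo 代替 scp 预览将要执行的命令(临时目录模拟 /etc/kubernetes/pki)
TMP=$(mktemp -d)
touch "$TMP/ca.pem" "$TMP/admin.pem" "$TMP/etcd.pem" "$TMP/etcd-key.pem"
COUNT=0
for NODE in dev-k8s-master02 dev-k8s-master03; do
  for FILE in $(ls "$TMP" | grep -v etcd); do   # etcd 证书不在分发之列
    echo "scp $TMP/${FILE} $NODE:/etc/kubernetes/pki/${FILE}"
    COUNT=$((COUNT+1))
  done
done
echo "共 $COUNT 条复制命令(2 个节点 x 2 个非 etcd 文件)"
```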


### 查看证书
ls /etc/kubernetes/pki/

admin.csr apiserver.pem controller-manager-key.pem front-proxy-client.csr scheduler.csr
admin-key.pem ca.csr controller-manager.pem front-proxy-client-key.pem scheduler-key.pem
admin.pem ca-key.pem front-proxy-ca.csr front-proxy-client.pem scheduler.pem
apiserver.csr ca.pem front-proxy-ca-key.pem sa.key
apiserver-key.pem controller-manager.csr front-proxy-ca.pem sa.pub


### k8s系统组件配置
etcd配置
这个配置文件是用于 etcd 集群的配置,其中包含了一些重要的参数和选项:
  • `name`:指定了当前节点的名称,用于集群中区分不同的节点。
  • `data-dir`:指定了 etcd 数据的存储目录。
  • `wal-dir`:指定了 etcd 数据写入磁盘的目录。
  • `snapshot-count`:指定了触发快照的事务数量。
  • `heartbeat-interval`:指定了 etcd 集群中节点之间的心跳间隔。
  • `election-timeout`:指定了选举超时时间。
  • `quota-backend-bytes`:指定了存储的限额,0 表示无限制。
  • `listen-peer-urls`:指定了节点之间通信的 URL,使用 HTTPS 协议。
  • `listen-client-urls`:指定了客户端访问 etcd 集群的 URL,同时提供了本地访问的 URL。
  • `max-snapshots`:指定了快照保留的数量。
  • `max-wals`:指定了日志保留的数量。
  • `initial-advertise-peer-urls`:指定了节点之间通信的初始 URL。
  • `advertise-client-urls`:指定了客户端访问 etcd 集群的初始 URL。
  • `discovery`:定义了 etcd 集群发现相关的选项。
  • `initial-cluster`:指定了 etcd 集群的初始成员。
  • `initial-cluster-token`:指定了集群的 token。
  • `initial-cluster-state`:指定了集群的初始状态。
  • `strict-reconfig-check`:指定了严格的重新配置检查选项。
  • `enable-v2`:启用了 v2 API。
  • `enable-pprof`:启用了性能分析。
  • `proxy`:设置了代理模式。
  • `client-transport-security`:客户端的传输安全配置。
  • `peer-transport-security`:节点之间的传输安全配置。
  • `debug`:是否启用调试模式。
  • `log-package-levels`:日志的输出级别。
  • `log-outputs`:指定了日志的输出类型。
  • `force-new-cluster`:是否强制创建一个新的集群。

这些参数和选项可以根据实际需求进行调整和配置。

master01配置

如果要用IPv6那么把IPv4地址修改为IPv6即可


cat > /etc/etcd/etcd.config.yml << EOF 
name: 'dev-k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.244.14:2380'
listen-client-urls: 'https://192.168.244.14:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.244.14:2380'
advertise-client-urls: 'https://192.168.244.14:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'dev-k8s-master01=https://192.168.244.14:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

创建service(所有master节点操作)

创建etcd.service并启动

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service

EOF
# 这是一个系统服务配置文件,用于启动和管理Etcd服务。
# 
# [Unit] 部分包含了服务的一些基本信息,它定义了服务的描述和文档链接,并指定了服务应在网络连接之后启动。
# 
# [Service] 部分定义了服务的具体配置。在这里,服务的类型被设置为notify,意味着当服务成功启动时,它将通知系统。ExecStart 指定了启动服务时要执行的命令,这里是运行 /usr/local/bin/etcd 命令并传递一个配置文件 /etc/etcd/etcd.config.yml。Restart 设置为 on-failure,意味着当服务失败时将自动重启,并且在10秒后进行重启。LimitNOFILE 指定了服务的最大文件打开数。
# 
# [Install] 部分定义了服务的安装配置。WantedBy 指定了服务应该被启动的目标,这里是 multi-user.target,表示在系统进入多用户模式时启动。Alias 定义了一个别名,可以通过etcd3.service来引用这个服务。
# 
# 这个配置文件描述了如何启动和管理Etcd服务,并将其安装到系统中。通过这个配置文件,可以确保Etcd服务在系统启动后自动启动,并在出现问题时进行重启。

创建etcd证书目录

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/

systemctl daemon-reload  
# 用于重新加载systemd管理的单位文件。当你新增或修改了某个单位文件(如.service文件、.socket文件等),需要运行该命令来刷新systemd对该文件的配置。

systemctl enable --now etcd.service
# 启用并立即启动etcd.service单元。etcd.service是etcd守护进程的systemd服务单元。

systemctl restart etcd.service
# 重启etcd.service单元,即重新启动etcd守护进程。

systemctl status etcd.service
# 查看etcd.service单元的当前状态,包括运行状态、是否启用等信息。

查看etcd状态

如果要用IPv6那么把IPv4地址修改为IPv6即可

export ETCDCTL_API=3
etcdctl --endpoints="192.168.244.14:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem  endpoint status --write-out=table
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|      ENDPOINT       |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.244.14:2379 | fa8e297f3f09517f |  3.5.12 |   20 kB |      true |      false |         3 |          6 |                  6 |        |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
# 这个命令是使用etcdctl工具,用于查看指定etcd集群的健康状态。下面是每个参数的详细解释:
# 
# - `--endpoints`:指定要连接的etcd集群节点的地址和端口。在这个例子中,指定了单节点的地址和端口`192.168.244.14:2379`;多节点集群可以用逗号分隔列出全部节点。
# - `--cacert`:指定用于验证etcd服务器证书的CA证书的路径。在这个例子中,指定了CA证书的路径为`/etc/kubernetes/pki/etcd/etcd-ca.pem`。CA证书用于验证etcd服务器证书的有效性。
# - `--cert`:指定用于与etcd服务器进行通信的客户端证书的路径。在这个例子中,指定了客户端证书的路径为`/etc/kubernetes/pki/etcd/etcd.pem`。客户端证书用于在与etcd服务器建立安全通信时进行身份验证。
# - `--key`:指定与客户端证书配对的私钥的路径。在这个例子中,指定了私钥的路径为`/etc/kubernetes/pki/etcd/etcd-key.pem`。私钥用于对通信进行加密解密和签名验证。
# - `endpoint status`:子命令,用于检查etcd集群节点的健康状态。
# - `--write-out`:指定输出的格式。在这个例子中,指定以表格形式输出。
# 
# 通过执行这个命令,可以获取到etcd集群节点的健康状态,并以表格形式展示。
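在脚本里经常需要从上面的表格输出中取出个别字段(如版本号、是否为 leader)。下面用 awk 演示按 `|` 切列的取法(STATUS_LINE 取自上文示例输出的一行,这里作为样例字符串处理):

```shell
# 演示:按 | 分列解析 endpoint status 表格,第4列为 VERSION,第6列为 IS LEADER
STATUS_LINE='| 192.168.244.14:2379 | fa8e297f3f09517f |  3.5.12 |   20 kB |      true |      false |         3 |          6 |                  6 |        |'
# gsub 去掉列内空格后拼接输出
PARSED=$(echo "$STATUS_LINE" | awk -F'|' '{gsub(/ /,"",$4); gsub(/ /,"",$6); print "version="$4" leader="$6}')
echo "$PARSED"
```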

查看calico数据

etcdctl --endpoints="192.168.244.14:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem get /registry/crd.projectcalico.org/ --prefix --keys-only

/registry/crd.projectcalico.org/blockaffinities/dev-k8s-master01.local-172-17-50-192-26

/registry/crd.projectcalico.org/blockaffinities/dev-k8s-node01.local-172-21-244-64-26

/registry/crd.projectcalico.org/blockaffinities/dev-k8s-node02.local-172-22-227-64-26

/registry/crd.projectcalico.org/clusterinformations/default

/registry/crd.projectcalico.org/felixconfigurations/default

/registry/crd.projectcalico.org/ipamblocks/172-17-50-192-26

/registry/crd.projectcalico.org/ipamblocks/172-21-244-64-26

/registry/crd.projectcalico.org/ipamblocks/172-22-227-64-26

/registry/crd.projectcalico.org/ipamconfigs/default

/registry/crd.projectcalico.org/ipamhandles/ipip-tunnel-addr-dev-k8s-master01.local

/registry/crd.projectcalico.org/ipamhandles/ipip-tunnel-addr-dev-k8s-node01.local

/registry/crd.projectcalico.org/ipamhandles/ipip-tunnel-addr-dev-k8s-node02.local

/registry/crd.projectcalico.org/ipamhandles/k8s-pod-network.96126796171ab0ac6b9081542112c9be79e02a006351a7fec66f389870b82983

/registry/crd.projectcalico.org/ipamhandles/k8s-pod-network.d278382a1b5a5b16faeebfc7e75ce83b2021962eeea3a754c107d51470063008

/registry/crd.projectcalico.org/ipamhandles/k8s-pod-network.d6236f8b68ef7f04d099ceef31b9861869c8af6b09e6c19be60ba7c124e49000

/registry/crd.projectcalico.org/ipamhandles/k8s-pod-network.e5d490330b24e0f953c525e9f81c77ea8fdcb71f2df06b0040d9d57b9678d984

/registry/crd.projectcalico.org/ippools/default-ipv4-ippool

/registry/crd.projectcalico.org/kubecontrollersconfigurations/default

NGINX高可用方案

安装编译环境

yum install gcc -y

下载解压nginx二进制文件

wget http://nginx.org/download/nginx-1.25.3.tar.gz
tar xvf nginx-1.25.3.tar.gz
cd nginx-1.25.3

进行编译

./configure --with-stream --without-http --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
make && make install

分发编译好的nginx

Master='dev-k8s-master02 dev-k8s-master03'
Work='dev-k8s-node01 dev-k8s-node02'
for NODE in $Master; do scp -r /usr/local/nginx/ $NODE:/usr/local/nginx/; done
for NODE in $Work; do scp -r /usr/local/nginx/ $NODE:/usr/local/nginx/; done

# 这是一系列命令行指令,用于编译和安装软件。
# 
# 1. `./configure` 是用于配置软件的命令。在这个例子中,配置的软件是一个Web服务器,指定了一些选项来启用流模块,并禁用了HTTP、uwsgi、scgi和fastcgi模块。
# 2. `--with-stream` 指定启用流模块。流模块通常用于代理TCP和UDP流量。
# 3. `--without-http` 指定禁用HTTP模块。这意味着编译的软件将没有HTTP服务器功能。
# 4. `--without-http_uwsgi_module` 指定禁用uwsgi模块。uwsgi是一种Web服务器和应用服务器之间的通信协议。
# 5. `--without-http_scgi_module` 指定禁用scgi模块。scgi是一种用于将Web服务器请求传递到应用服务器的协议。
# 6. `--without-http_fastcgi_module` 指定禁用fastcgi模块。fastcgi是一种用于在Web服务器和应用服务器之间交换数据的协议。
# 7. `make` 是用于编译软件的命令。该命令将根据之前的配置生成可执行文件。
# 8. `make install` 用于安装软件。该命令将生成的可执行文件和其他必要文件复制到系统的适当位置,以便可以使用该软件。
# 
# 总之,这个命令序列用于编译一个配置了特定选项的Web服务器,并将其安装到系统中。

写入nginx配置文件

cat > /usr/local/nginx/conf/kube-nginx.conf <<EOF
worker_processes 1;
events {
    worker_connections  1024;
}
stream {
    upstream backend {
        hash \$remote_addr consistent;
        server 192.168.244.14:6443        max_fails=3 fail_timeout=30s;
    }
    server {
        listen 127.0.0.1:8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF
# 这段配置是一个nginx的stream模块的配置,用于代理TCP和UDP流量。
# 
# 首先,`worker_processes 1;`表示启动一个worker进程用于处理流量。
# 接下来,`events { worker_connections 1024; }`表示每个worker进程可以同时处理最多1024个连接。
# 在stream块里面,定义了一个名为`backend`的upstream,用于负载均衡和故障转移。
# `least_conn`(最少连接)与 `hash`(哈希)都是负载均衡算法指令,同一个 upstream 中只能使用其中一种,否则 nginx 会报 "load balancing method redefined" 错误;这里采用一致性哈希。
# `hash \$remote_addr consistent`表示用客户端的IP地址进行哈希分配请求,保持相同IP的请求始终访问同一台服务器。注意 heredoc 的结束符未加引号时 `$remote_addr` 会被 shell 展开,所以配置中需写成 `\$remote_addr`。
# `server`指令用于定义后端的服务器,每个服务器都有一个IP地址和端口号,以及一些可选的参数。
# `max_fails=3`表示当一个服务器连续失败3次时将其标记为不可用。
# `fail_timeout=30s`表示如果一个服务器被标记为不可用,nginx将在30秒后重新尝试。
# 在server块内部,定义了一个监听地址为127.0.0.1:8443的服务器。
# `proxy_connect_timeout 1s`表示与后端服务器建立连接的超时时间为1秒。
# `proxy_pass backend`表示将流量代理到名为backend的上游服务器组。
# 
# 总结起来,这段配置将流量代理到名为backend的上游服务器组(此处只配置了一台 kube-apiserver,多 master 时在 upstream 中为每台再加一行 server 即可),并根据客户端的IP地址进行一致性哈希分配请求。如果一个服务器连续失败3次,则将其标记为不可用,并在30秒后重新尝试。
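一个需要注意的 shell 细节:`cat > file << EOF` 在结束符未加引号时,heredoc 中的 `$变量` 会被当前 shell 展开,因此 nginx 配置里的 `$remote_addr` 必须写成 `\$remote_addr`(或者改用 `<< 'EOF'` 整体关闭展开)。下面的小例子演示两者的差别:

```shell
# 演示:未加引号的 heredoc 中 $var 会被展开,\$var 才保留字面量
remote_addr="1.2.3.4"   # 假设当前 shell 恰好存在同名变量
OUT=$(cat << EOF
hash $remote_addr consistent;
hash \$remote_addr consistent;
EOF
)
echo "$OUT"
```

第一行被展开为具体 IP,第二行保留了 nginx 需要的变量名字面量。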

写入启动配置文件

cat > /etc/systemd/system/kube-nginx.service <<EOF
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStartPre=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx
ExecReload=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
# 
# [Unit]部分包含了单位的描述和依赖关系。它指定了在network.target和network-online.target之后启动,并且需要network-online.target。
# 
# [Service]部分定义了如何运行该服务。Type指定了服务进程的类型(forking表示主进程会派生一个子进程)。ExecStartPre指定了在服务启动之前需要运行的命令,用于检查NGINX配置文件的语法是否正确。ExecStart指定了启动服务所需的命令。ExecReload指定了在重新加载配置文件时运行的命令。PrivateTmp设置为true表示将为服务创建一个私有的临时文件系统。Restart和RestartSec用于设置服务的自动重启机制。StartLimitInterval设置为0表示无需等待,可以立即重启服务。LimitNOFILE指定了服务的文件描述符的限制。
# 
# [Install]部分指定了在哪些target下该单位应该被启用。
# 
# 综上所述,此单位文件用于启动和管理kube-apiserver的NGINX代理服务。它通过NGINX来反向代理和负载均衡kube-apiserver的请求。该服务会在系统启动时自动启动,并具有自动重启的机制。

设置开机自启

systemctl daemon-reload

# 用于重新加载systemd管理的单位文件。当你新增或修改了某个单位文件(如.service文件、.socket文件等),需要运行该命令来刷新systemd对该文件的配置。
systemctl enable --now kube-nginx.service
# 启用并立即启动kube-nginx.service单元。kube-nginx.service是kube-nginx守护进程的systemd服务单元。
systemctl restart kube-nginx.service
# 重启kube-nginx.service单元,即重新启动kube-nginx守护进程。
systemctl status kube-nginx.service
# 查看kube-nginx.service单元的当前状态,包括运行状态、是否启用等信息。

测试

每个node上
telnet 127.0.0.1 8443

Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
^CConnection closed by foreign host.

curl https://127.0.0.1:8443/healthz -k
ok
curl https://192.168.244.14:6443/healthz -k
ok

k8s组件配置

所有k8s节点创建以下目录

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

创建apiserver(所有master节点)

master01节点配置

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
      --v=2  \\
      --allow-privileged=true  \\
      --bind-address=0.0.0.0  \\
      --secure-port=6443  \\
      --advertise-address=192.168.244.14 \\
      --service-cluster-ip-range=10.96.0.0/12 \\
      --service-node-port-range=30000-32767  \\
      --etcd-servers=https://192.168.244.14:2379 \\
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \\
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \\
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \\
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \\
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \\
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \\
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \\
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \\
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \\
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
      --authorization-mode=Node,RBAC  \\
      --enable-bootstrap-token-auth=true  \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \\
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \\
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \\
      --requestheader-allowed-names=aggregator  \\
      --requestheader-group-headers=X-Remote-Group  \\
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

EOF

该配置文件是用于定义Kubernetes API Server的systemd服务的配置。systemd是一个用于启动和管理Linux系统服务的守护进程。

[Unit]

  • Description: 服务的描述信息,用于显示在日志和系统管理工具中。
  • Documentation: 提供关于服务的文档链接。
  • After: 规定服务依赖于哪些其他服务或单元。在这个例子中,API Server服务在网络目标启动之后启动。

[Service]

  • ExecStart: 定义服务的命令行参数和命令。这里指定了API Server的启动命令,包括各种参数选项。
  • Restart: 指定当服务退出时应该如何重新启动。在这个例子中,服务在失败时将被重新启动。
  • RestartSec: 指定两次重新启动之间的等待时间。
  • LimitNOFILE: 指定进程可以打开的文件描述符的最大数量。

[Install]

  • WantedBy: 指定服务应该安装到哪个系统目标。在这个例子中,服务将被安装到multi-user.target目标,以便在多用户模式下启动。

上述配置文件中定义的kube-apiserver服务将以指定的参数运行,这些参数包括:

- `--v=2` 指定日志级别为2,打印详细的API Server日志。
- `--allow-privileged=true` 允许特权容器运行。
- `--bind-address=0.0.0.0` 绑定API Server监听的IP地址。
- `--secure-port=6443` 指定API Server监听的安全端口。
- `--advertise-address=192.168.1.31` 对外通告(advertise)的API Server地址。
- `--service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112` 指定服务CIDR范围。
- `--service-node-port-range=30000-32767` 指定NodePort的范围。
- `--etcd-servers=https://192.168.1.31:2379,https://192.168.1.32:2379,https://192.168.1.33:2379` 指定etcd服务器的地址。
- `--etcd-cafile` 指定etcd服务器的CA证书。
- `--etcd-certfile` 指定etcd服务器的证书。
- `--etcd-keyfile` 指定etcd服务器的私钥。
- `--client-ca-file` 指定客户端CA证书。
- `--tls-cert-file` 指定服务的证书。
- `--tls-private-key-file` 指定服务的私钥。
- `--kubelet-client-certificate` 和 `--kubelet-client-key` 指定与kubelet通信的客户端证书和私钥。
- `--service-account-key-file` 指定服务账户公钥文件。
- `--service-account-signing-key-file` 指定服务账户签名密钥文件。
- `--service-account-issuer` 指定服务账户的发布者。
- `--kubelet-preferred-address-types` 指定kubelet通信时的首选地址类型。
- `--enable-admission-plugins` 启用一系列准入插件。
- `--authorization-mode` 指定授权模式。
- `--enable-bootstrap-token-auth` 启用引导令牌认证。
- `--requestheader-client-ca-file` 指定请求头中的客户端CA证书。
- `--proxy-client-cert-file` 和 `--proxy-client-key-file` 指定代理客户端的证书和私钥。
- `--requestheader-allowed-names` 指定请求头中允许的名字。
- `--requestheader-group-headers` 指定请求头中的组头。
- `--requestheader-extra-headers-prefix` 指定请求头中的额外头前缀。
- `--requestheader-username-headers` 指定请求头中的用户名头。
- `--enable-aggregator-routing` 启用聚合路由。

整个配置文件为Kubernetes API Server提供了必要的参数,以便正确地启动和运行。

systemctl daemon-reload
# 用于重新加载systemd管理的单位文件。当你新增或修改了某个单位文件(如.service文件、.socket文件等),需要运行该命令来刷新systemd对该文件的配置。

systemctl enable --now kube-apiserver.service
# 启用并立即启动kube-apiserver.service单元。kube-apiserver.service是kube-apiserver守护进程的systemd服务单元。

systemctl restart kube-apiserver.service
# 重启kube-apiserver.service单元,即重新启动kube-apiserver守护进程。

systemctl status kube-apiserver.service
# 查看kube-apiserver.service单元的当前状态,包括运行状态、是否启用等信息。

配置kube-controller-manager service

所有master节点配置,且配置相同
172.16.0.0/12为pod网段,按需求设置你自己的网段

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
      --v=2 \\
      --bind-address=0.0.0.0 \\
      --root-ca-file=/etc/kubernetes/pki/ca.pem \\
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
      --leader-elect=true \\
      --use-service-account-credentials=true \\
      --node-monitor-grace-period=40s \\
      --node-monitor-period=5s \\
      --controllers=*,bootstrapsigner,tokencleaner \\
      --allocate-node-cidrs=true \\
      --service-cluster-ip-range=10.96.0.0/12 \\
      --cluster-cidr=172.16.0.0/12 \\
      --node-cidr-mask-size-ipv4=24 \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF

这是一个用于启动 Kubernetes 控制器管理器的 systemd 服务单元文件。下面是对每个部分的详细解释:

[Unit]:单元的基本信息部分,用于描述和标识这个服务单元。

  • Description:服务单元的描述信息,说明了该服务单元的作用,这里是 Kubernetes 控制器管理器。
  • Documentation:可选项,提供了关于该服务单元的文档链接。
  • After:定义了该服务单元在哪些其他单元之后启动,这里是 network.target,即在网络服务启动之后启动。

[Service]:定义了服务的运行参数和行为。

  • ExecStart:指定服务启动时执行的命令,这里是 /usr/local/bin/kube-controller-manager,并通过后续的行继续传递了一系列的参数设置。
  • Restart:定义了服务在退出后的重新启动策略,这里设置为 always,表示总是重新启动服务。
  • RestartSec:定义了重新启动服务的时间间隔,这里设置为 10 秒。

[Install]:定义了如何安装和启用服务单元。

  • WantedBy:指定了服务单元所属的 target,这里是 multi-user.target,表示在多用户模式下启动该服务。

在 ExecStart 中传递的参数说明如下:
--v=2:设置日志的详细级别为 2。
--bind-address=0.0.0.0:绑定的 IP 地址,用于监听 Kubernetes 控制平面的请求,这里设置为 0.0.0.0,表示监听所有网络接口上的请求。
--root-ca-file:根证书文件的路径,用于验证其他组件的证书。
--cluster-signing-cert-file:用于签名集群证书的证书文件路径。
--cluster-signing-key-file:用于签名集群证书的私钥文件路径。
--service-account-private-key-file:用于签名服务账户令牌的私钥文件路径。
--kubeconfig:kubeconfig 文件的路径,包含了与 Kubernetes API 服务器通信所需的配置信息。
--leader-elect=true:启用 Leader 选举机制,确保只有一个控制器管理器作为 leader 在运行。
--use-service-account-credentials=true:使用服务账户的凭据进行认证和授权。
--node-monitor-grace-period=40s:节点状态监控的宽限期,节点超过该时间未上报状态会被标记为不健康,进而可能触发节点驱逐。
--node-monitor-period=5s:节点监控的检测周期,用于检测节点是否正常运行。
--controllers:指定要运行的控制器类型,在这里使用了通配符 *,表示运行所有的控制器,同时还包括了 bootstrapsigner 和 tokencleaner 控制器。
--allocate-node-cidrs=true:为节点分配 CIDR 子网,用于分配 Pod 网络地址。
--service-cluster-ip-range:定义 Service 的 IP 范围,这里设置为 10.96.0.0/12 。
--cluster-cidr:定义集群的 CIDR 范围,这里设置为 172.16.0.0/12 。
--node-cidr-mask-size-ipv4:分配给每个节点的 IPv4 子网掩码大小,默认是 24。
--node-cidr-mask-size-ipv6:分配给每个节点的 IPv6 子网掩码大小,默认是 120(本配置未启用 IPv6,因此不要设置该参数,见后文排障记录)。
--requestheader-client-ca-file:设置请求头中客户端 CA 的证书文件路径,用于认证请求头中的 CA 证书。

这个服务单元文件描述了 Kubernetes 控制器管理器的启动参数和行为,并且定义了服务的依赖关系和重新启动策略。通过 systemd 启动该服务单元,即可启动 Kubernetes 控制器管理器组件。
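上文的 --cluster-cidr=172.16.0.0/12 与 --node-cidr-mask-size-ipv4=24 共同决定了集群最多可划分多少个节点 Pod 子网,可以用一行 shell 粗略验证(仅为示意):

```shell
# 每个节点从 /12 的 Pod 网段中分得一个 /24 子网
# 可分配的节点子网数 = 2^(24-12) = 4096
CLUSTER_PREFIX=12
NODE_PREFIX=24
echo $(( 1 << (NODE_PREFIX - CLUSTER_PREFIX) ))   # 输出 4096
```

也就是说该网段最多可切出约 4096 个节点 /24 子网,远大于多数集群的节点规模。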

启动kube-controller-manager,并查看状态

systemctl daemon-reload
# 用于重新加载systemd管理的单位文件。当你新增或修改了某个单位文件(如.service文件、.socket文件等),需要运行该命令来刷新systemd对该文件的配置。

systemctl enable --now kube-controller-manager.service
# 启用并立即启动kube-controller-manager.service单元。kube-controller-manager.service是kube-controller-manager守护进程的systemd服务单元。

systemctl restart kube-controller-manager.service
# 重启kube-controller-manager.service单元,即重新启动kube-controller-manager守护进程。

systemctl status kube-controller-manager.service
# 查看kube-controller-manager.service单元的当前状态,包括运行状态、是否启用等信息。

配置kube-scheduler service

所有master节点配置,且配置相同

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
      --v=2 \\
      --bind-address=0.0.0.0 \\
      --leader-elect=true \\
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF

这是一个用于启动 Kubernetes 调度器的 systemd 服务单元文件。下面是对每个部分的详细解释:

[Unit]:单元的基本信息部分,用于描述和标识这个服务单元。

  • Description:服务单元的描述信息,说明了该服务单元的作用,这里是 Kubernetes 调度器。
  • Documentation:可选项,提供了关于该服务单元的文档链接。
  • After:定义了该服务单元在哪些其他单元之后启动,这里是 network.target,即在网络服务启动之后启动。

[Service]:定义了服务的运行参数和行为。

  • ExecStart:指定服务启动时执行的命令,这里是 /usr/local/bin/kube-scheduler,并通过后续的行继续传递了一系列的参数设置。
  • Restart:定义了服务在退出后的重新启动策略,这里设置为 always,表示总是重新启动服务。
  • RestartSec:定义了重新启动服务的时间间隔,这里设置为 10 秒。

[Install]:定义了如何安装和启用服务单元。

  • WantedBy:指定了服务单元所属的 target,这里是 multi-user.target,表示启动多用户模式下的服务。

在 ExecStart 中传递的参数说明如下:

--v=2:设置日志的详细级别为 2。
--bind-address=0.0.0.0:绑定的 IP 地址,用于监听 Kubernetes 控制平面的请求,这里设置为 0.0.0.0,表示监听所有网络接口上的请求。
--leader-elect=true:启用 Leader 选举机制,确保只有一个调度器作为 leader 在运行。
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig:kubeconfig 文件的路径,包含了与 Kubernetes API 服务器通信所需的配置信息。

这个服务单元文件描述了 Kubernetes 调度器的启动参数和行为,并且定义了服务的依赖关系和重新启动策略。通过 systemd 启动该服务单元,即可启动 Kubernetes 调度器组件。

启动并查看服务状态

systemctl daemon-reload
# 用于重新加载systemd管理的单位文件。当你新增或修改了某个单位文件(如.service文件、.socket文件等),需要运行该命令来刷新systemd对该文件的配置。

systemctl enable --now kube-scheduler.service
# 启用并立即启动kube-scheduler.service单元。kube-scheduler.service是kube-scheduler守护进程的systemd服务单元。

systemctl restart kube-scheduler.service
# 重启kube-scheduler.service单元,即重新启动kube-scheduler守护进程。

systemctl status kube-scheduler.service
# 查看kube-scheduler.service单元的当前状态,包括运行状态、是否启用等信息。

TLS Bootstrapping配置

在master01上配置

# 选择使用那种高可用方案
# 若使用 haproxy、keepalived 那么为 `--server=https://192.168.1.36:8443`
# 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443`

参考:

https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/
https://www.kancloud.cn/pshizhsysu/kubernetes/1827157

令牌格式
启动引导令牌使用 abcdef.0123456789abcdef 的形式。更加规范地说,它们必须符合正则表达式 [a-z0-9]{6}\.[a-z0-9]{16}。

令牌的第一部分是 "Token ID",它是一种公开信息,用于引用令牌并确保不会泄露认证所使用的秘密信息。第二部分是"令牌秘密(Token Secret)",它只应当被共享给受信的第三方。

证书轮换
RotateKubeletClientCertificate 会导致 kubelet 在其现有凭据即将过期时通过 创建新的 CSR 来轮换其客户端证书。要启用此功能特性,可将下面的标志传递给 kubelet:

--rotate-certificates

生成随机密码

len=16; tr -dc a-z0-9 < /dev/urandom | head -c ${len} | xargs
w4d0fuek41tdwk80
len=6; tr -dc a-z0-9 < /dev/urandom | head -c ${len} | xargs
92c1wc
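也可以把上面两步合成一段命令,直接生成符合 [a-z0-9]{6}\.[a-z0-9]{16} 格式的完整引导令牌并做校验(仅为示意写法):

```shell
# 分别生成 6 位 token-id 与 16 位 token-secret,用 "." 拼接
TOKEN_ID=$(tr -dc a-z0-9 < /dev/urandom | head -c 6)
TOKEN_SECRET=$(tr -dc a-z0-9 < /dev/urandom | head -c 16)
TOKEN="${TOKEN_ID}.${TOKEN_SECRET}"
echo "${TOKEN}"
# 校验是否符合启动引导令牌的格式要求
echo "${TOKEN}" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' && echo "格式正确"
```

生成的 token-id 和 token-secret 需同步填入下文 bootstrap.secret.yaml 的对应字段中。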

bootstrap

mkdir /root/k8s/bootstrap
cd /root/k8s/bootstrap

修改其中的密钥
新建secret,修改其中的token
cat > bootstrap.secret.yaml<< EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-92c1wc
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet'."
  token-id: 92c1wc
  token-secret: w4d0fuek41tdwk80
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups:  system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
EOF

kubectl config set-cluster kubernetes     \
--certificate-authority=/etc/kubernetes/pki/ca.pem     \
--embed-certs=true     --server=https://127.0.0.1:8443     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
# 这是一个使用 kubectl 命令设置 Kubernetes 集群配置的命令示例。下面是对每个选项的详细解释:
# 
# config set-cluster kubernetes:指定要设置的集群名称为 "kubernetes",表示要修改名为 "kubernetes" 的集群配置。
# --certificate-authority=/etc/kubernetes/pki/ca.pem:指定证书颁发机构(CA)的证书文件路径,用于验证服务器证书的有效性。
# --embed-certs=true:将证书文件嵌入到生成的 kubeconfig 文件中。这样可以避免在 kubeconfig 文件中引用外部证书文件。
# --server=https://127.0.0.1:8443:指定 Kubernetes API 服务器的地址和端口,这里使用的是 https 协议和本地地址(127.0.0.1),端口号为 8443。你可以根据实际环境修改该参数。
# --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig:指定 kubeconfig 文件的路径和名称,这里是 /etc/kubernetes/bootstrap-kubelet.kubeconfig。
# 通过执行此命令,你可以设置名为 "kubernetes" 的集群配置,并提供 CA 证书、API 服务器地址和端口,并将这些配置信息嵌入到 bootstrap-kubelet.kubeconfig 文件中。这个 kubeconfig 文件可以用于认证和授权 kubelet 组件与 Kubernetes API 服务器之间的通信。请确保路径和文件名与实际环境中的配置相匹配。

kubectl config set-credentials tls-bootstrap-token-user     \
--token=92c1wc.w4d0fuek41tdwk80 \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
# 这是一个使用 kubectl 命令设置凭证信息的命令示例。下面是对每个选项的详细解释:
# 
# config set-credentials tls-bootstrap-token-user:指定要设置的凭证名称为 "tls-bootstrap-token-user",表示要修改名为 "tls-bootstrap-token-user" 的用户凭证配置。
# --token=92c1wc.w4d0fuek41tdwk80:指定用户的身份验证令牌(token)。在这个示例中,令牌是 92c1wc.w4d0fuek41tdwk80。你可以根据实际情况修改该令牌,须是小写字母及数字。
# --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig:指定 kubeconfig 文件的路径和名称,这里是 /etc/kubernetes/bootstrap-kubelet.kubeconfig。
# 通过执行此命令,你可以设置名为 "tls-bootstrap-token-user" 的用户凭证,并将令牌信息加入到 bootstrap-kubelet.kubeconfig 文件中。这个 kubeconfig 文件可以用于认证和授权 kubelet 组件与 Kubernetes API 服务器之间的通信。请确保路径和文件名与实际环境中的配置相匹配。
kubectl config set-context tls-bootstrap-token-user@kubernetes     \
--cluster=kubernetes     \
--user=tls-bootstrap-token-user     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
# 这是一个使用 kubectl 命令设置上下文信息的命令示例。下面是对每个选项的详细解释:
# 
# config set-context tls-bootstrap-token-user@kubernetes:指定要设置的上下文名称为 "tls-bootstrap-token-user@kubernetes",表示要修改名为 "tls-bootstrap-token-user@kubernetes" 的上下文配置。
# --cluster=kubernetes:指定上下文关联的集群名称为 "kubernetes",表示使用名为 "kubernetes" 的集群配置。
# --user=tls-bootstrap-token-user:指定上下文关联的用户凭证名称为 "tls-bootstrap-token-user",表示使用名为 "tls-bootstrap-token-user" 的用户凭证配置。
# --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig:指定 kubeconfig 文件的路径和名称,这里是 /etc/kubernetes/bootstrap-kubelet.kubeconfig。
# 通过执行此命令,你可以设置名为 "tls-bootstrap-token-user@kubernetes" 的上下文,并将其关联到名为 "kubernetes" 的集群配置和名为 "tls-bootstrap-token-user" 的用户凭证配置。这样,bootstrap-kubelet.kubeconfig 文件就包含了完整的上下文信息,可以用于指定与 Kubernetes 集群建立连接时要使用的集群和凭证。请确保路径和文件名与实际环境中的配置相匹配。
kubectl config use-context tls-bootstrap-token-user@kubernetes     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
# 这是一个使用 kubectl 命令设置当前上下文的命令示例。下面是对每个选项的详细解释:
# 
# config use-context tls-bootstrap-token-user@kubernetes:指定要使用的上下文名称为 "tls-bootstrap-token-user@kubernetes",表示要将当前上下文切换为名为 "tls-bootstrap-token-user@kubernetes" 的上下文。
# --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig:指定 kubeconfig 文件的路径和名称,这里是 /etc/kubernetes/bootstrap-kubelet.kubeconfig。
# 通过执行此命令,你可以将当前上下文设置为名为 "tls-bootstrap-token-user@kubernetes" 的上下文。这样,当你执行其他 kubectl 命令时,它们将使用该上下文与 Kubernetes 集群进行交互。请确保路径和文件名与实际环境中的配置相匹配。

token的位置在bootstrap.secret.yaml,如果修改的话到这个文件修改

mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

kube-controller-manager -h | grep RotateKubeletServerCertificate
# RotateKubeletServerCertificate 参数默认是开启的

查看集群状态,没问题的话继续后续操作
重启一下controller-manager


export ETCDCTL_API=3
etcdctl --endpoints="192.168.244.14:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem  endpoint status --write-out=table
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|      ENDPOINT       |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.244.14:2379 | fa8e297f3f09517f |  3.5.12 |  860 kB |      true |      false |         3 |      63042 |              63042 |        |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

kubectl get cs


Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
scheduler            Healthy   ok        
controller-manager   Healthy   ok        
etcd-0               Healthy   ok        

切记执行,别忘记!!!

kubectl create -f bootstrap.secret.yaml

secret/bootstrap-token-92c1wc created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created

kubelet配置

node节点配置
在master01上将证书复制到node节点
cd /etc/kubernetes/

Master='dev-k8s-master02 dev-k8s-master03'
Work='dev-k8s-node01 dev-k8s-node02'

for NODE in $Master; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done

for NODE in $Work; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done

kubelet配置
用Containerd作为Runtime (推荐)

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

所有k8s节点配置kubelet service

cat > /usr/lib/systemd/system/kubelet.service << EOF

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig  \\
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
    --config=/etc/kubernetes/kubelet-conf.yml \\
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock  \\
    --node-labels=node.kubernetes.io/node=

[Install]
WantedBy=multi-user.target
EOF
# 这是一个表示 Kubernetes Kubelet 服务的 systemd 单位文件示例。与之前相比,添加了 After 和 Requires 字段来指定依赖关系。
# 
# [Unit]
# 
# Description=Kubernetes Kubelet:指定了此单位文件对应的服务描述信息为 "Kubernetes Kubelet"。
# Documentation=...:指定了对该服务的文档链接。
# - After: 说明该服务在哪些其他服务之后启动,这里是在网络在线(network-online.target)、firewalld服务和containerd服务之后启动。
# - Wants: 说明该服务期望的其他服务,这里是network-online.target。
# - Requires: 说明该服务强依赖的其他服务,这里是containerd.service。
# [Service]
# 
# ExecStart=/usr/local/bin/kubelet ...:指定了启动 Kubelet 服务的命令和参数,与之前的示例相同。
# --container-runtime-endpoint=unix:///run/containerd/containerd.sock:修改了容器运行时接口的端点地址,将其更改为使用 containerd 运行时(通过 UNIX 套接字)。
# [Install]
# 
# WantedBy=multi-user.target:指定了在 multi-user.target 被启动时,该服务应该被启用。
# 通过这个单位文件,你可以配置 Kubelet 服务的启动参数,并指定了它依赖的 containerd 服务。确保路径和文件名与你实际环境中的配置相匹配。

所有k8s节点创建kubelet的配置文件

cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF
# 这是一个Kubelet的配置文件,用于配置Kubelet的各项参数。
# 
# - apiVersion: kubelet.config.k8s.io/v1beta1:指定了配置文件的API版本为kubelet.config.k8s.io/v1beta1。
# - kind: KubeletConfiguration:指定了配置类别为KubeletConfiguration。
# - address: 0.0.0.0:指定了Kubelet监听的地址为0.0.0.0。
# - port: 10250:指定了Kubelet监听的端口为10250。
# - readOnlyPort: 10255:指定了只读端口为10255,用于提供只读的状态信息。
# - authentication:指定了认证相关的配置信息。
#   - anonymous.enabled: false:禁用了匿名认证。
#   - webhook.enabled: true:启用了Webhook认证。
#   - x509.clientCAFile: /etc/kubernetes/pki/ca.pem:指定了X509证书的客户端CA文件路径。
# - authorization:指定了授权相关的配置信息。
#   - mode: Webhook:指定了授权模式为Webhook。
#   - webhook.cacheAuthorizedTTL: 5m0s:指定了授权缓存时间段为5分钟。
#   - webhook.cacheUnauthorizedTTL: 30s:指定了未授权缓存时间段为30秒。
# - cgroupDriver: systemd:指定了Cgroup驱动为systemd。
# - cgroupsPerQOS: true:启用了每个QoS类别一个Cgroup的设置。
# - clusterDNS: 指定了集群的DNS服务器地址列表。
#   - 10.96.0.10:指定了DNS服务器地址为10.96.0.10。
# - clusterDomain: cluster.local:指定了集群的域名后缀为cluster.local。
# - containerLogMaxFiles: 5:指定了容器日志文件保留的最大数量为5个。
# - containerLogMaxSize: 10Mi:指定了容器日志文件的最大大小为10Mi。
# - contentType: application/vnd.kubernetes.protobuf:指定了内容类型为protobuf。
# - cpuCFSQuota: true:启用了CPU CFS Quota。
# - cpuManagerPolicy: none:禁用了CPU Manager。
# - cpuManagerReconcilePeriod: 10s:指定了CPU管理器的调整周期为10秒。
# - enableControllerAttachDetach: true:启用了控制器的挂载和拆卸。
# - enableDebuggingHandlers: true:启用了调试处理程序。
# - enforceNodeAllocatable: 指定了强制节点可分配资源的列表。
#   - pods:强制节点可分配pods资源。
# - eventBurst: 10:指定了事件突发的最大数量为10。
# - eventRecordQPS: 5:指定了事件记录的最大请求量为5。
# - evictionHard: 指定了驱逐硬性限制参数的配置信息。
#   - imagefs.available: 15%:指定了镜像文件系统可用空间的限制为15%。
#   - memory.available: 100Mi:指定了可用内存的限制为100Mi。
#   - nodefs.available: 10%:指定了节点文件系统可用空间的限制为10%。
#   - nodefs.inodesFree: 5%:指定了节点文件系统可用inode的限制为5%。
# - evictionPressureTransitionPeriod: 5m0s:指定了驱逐压力转换的时间段为5分钟。
# - failSwapOn: true:若节点启用了交换分区(swap),kubelet将拒绝启动。
# - fileCheckFrequency: 20s:指定了文件检查频率为20秒。
# - hairpinMode: promiscuous-bridge:设置了Hairpin Mode为"promiscuous-bridge"。
# - healthzBindAddress: 127.0.0.1:指定了健康检查的绑定地址为127.0.0.1。
# - healthzPort: 10248:指定了健康检查的端口为10248。
# - httpCheckFrequency: 20s:指定了HTTP检查的频率为20秒。
# - imageGCHighThresholdPercent: 85:指定了镜像垃圾回收的上阈值为85%。
# - imageGCLowThresholdPercent: 80:指定了镜像垃圾回收的下阈值为80%。
# - imageMinimumGCAge: 2m0s:指定了镜像垃圾回收的最小时间为2分钟。
# - iptablesDropBit: 15:指定了iptables的Drop Bit为15。
# - iptablesMasqueradeBit: 14:指定了iptables的Masquerade Bit为14。
# - kubeAPIBurst: 10:指定了KubeAPI的突发请求数量为10个。
# - kubeAPIQPS: 5:指定了KubeAPI的每秒请求频率为5个。
# - makeIPTablesUtilChains: true:指定了是否使用iptables工具链。
# - maxOpenFiles: 1000000:指定了最大打开文件数为1000000。
# - maxPods: 110:指定了最大的Pod数量为110。
# - nodeStatusUpdateFrequency: 10s:指定了节点状态更新的频率为10秒。
# - oomScoreAdj: -999:指定了OOM Score Adjustment为-999。
# - podPidsLimit: -1:指定了Pod的PID限制为-1,表示无限制。
# - registryBurst: 10:指定了Registry的突发请求数量为10个。
# - registryPullQPS: 5:指定了Registry的每秒拉取请求数量为5个。
# - resolvConf: /etc/resolv.conf:指定了resolv.conf的文件路径。
# - rotateCertificates: true:指定了是否轮转证书。
# - runtimeRequestTimeout: 2m0s:指定了运行时请求的超时时间为2分钟。
# - serializeImagePulls: true:指定了是否序列化镜像拉取。
# - staticPodPath: /etc/kubernetes/manifests:指定了静态Pod的路径。
# - streamingConnectionIdleTimeout: 4h0m0s:指定了流式连接的空闲超时时间为4小时。
# - syncFrequency: 1m0s:指定了同步频率为1分钟。
# - volumeStatsAggPeriod: 1m0s:指定了卷统计聚合周期为1分钟。

启动kubelet

systemctl daemon-reload
# 用于重新加载systemd管理的单位文件。当你新增或修改了某个单位文件(如.service文件、.socket文件等),需要运行该命令来刷新systemd对该文件的配置。

systemctl enable --now kubelet.service
# 启用并立即启动kubelet.service单元。kubelet.service是kubelet守护进程的systemd服务单元。

systemctl restart kubelet.service
# 重启kubelet.service单元,即重新启动kubelet守护进程。

systemctl status kubelet.service
# kubelet.service单元的当前状态,包括运行状态、是否启用等信息。

查看集群

kubectl get node

NAME                     STATUS   ROLES    AGE   VERSION
dev-k8s-master01.local   Ready    <none>   21m   v1.29.2
dev-k8s-node01.local     Ready    <none>   21m   v1.29.2
dev-k8s-node02.local     Ready    <none>   21m   v1.29.2

在执行kubectl get node时发现节点直接为空
SSL certificate problem: unable to get local issuer certificate
"Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group

kubelet客户端证书,也没缺
ls /var/lib/kubelet/pki/kubelet*

/var/lib/kubelet/pki/kubelet-client-2024-03-26-17-00-50.pem  /var/lib/kubelet/pki/kubelet-client-current.pem  /var/lib/kubelet/pki/kubelet.crt  /var/lib/kubelet/pki/kubelet.key

排查发现 kube-controller-manager.service 中没有配置 IPv6,却多配了下面这个参数,去掉后重启 kube-controller-manager,即可看到 node:
--node-cidr-mask-size-ipv6

测试环境临时解决

kubectl create clusterrolebinding test:anonymous --clusterrole=cluster-admin --user=system:anonymous

test:anonymous 是角色绑定名称,可随意取名;创建后重新启动 kube-proxy

查看容器运行时
kubectl describe node | grep Runtime

  Container Runtime Version:  containerd://1.7.13
  Container Runtime Version:  containerd://1.7.13
  Container Runtime Version:  containerd://1.7.13

kube-proxy配置

将kubeconfig发送至其他节点

master-1执行

Master='dev-k8s-master02 dev-k8s-master03'
Work='dev-k8s-node01 dev-k8s-node02'
for NODE in $Master; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done
for NODE in $Work; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done

所有k8s节点添加kube-proxy的service文件

cat >  /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy.yaml \\
  --cluster-cidr=172.16.0.0/12 \\
  --v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF
# 这是一个 systemd 服务单元文件的示例,用于配置 Kubernetes Kube Proxy 服务。下面是对其中一些字段的详细解释:
# 
# [Unit]
# 
# Description: 描述了该服务单元的用途,这里是 Kubernetes Kube Proxy。
# Documentation: 指定了该服务单元的文档地址,即 https://github.com/kubernetes/kubernetes。
# After: 指定该服务单元应在 network.target(网络目标)之后启动。
# [Service]
# 
# ExecStart: 指定了启动 Kube Proxy 服务的命令。通过 /usr/local/bin/kube-proxy 命令启动,并指定了配置文件的路径为 /etc/kubernetes/kube-proxy.yaml,同时指定了日志级别为 2。
# Restart: 配置了服务在失败或退出后自动重启。
# RestartSec: 配置了重启间隔,这里是每次重启之间的等待时间为 10 秒。
# [Install]
# 
# WantedBy: 指定了该服务单元的安装目标为 multi-user.target(多用户目标),表示该服务将在多用户模式下启动。
# 通过配置这些字段,你可以启动和管理 Kubernetes Kube Proxy 服务。请注意,你需要根据实际情况修改 ExecStart 中的路径和文件名,确保与你的环境一致。另外,可以根据需求修改其他字段的值,以满足你的特定要求。

所有k8s节点添加kube-proxy的配置

cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF
# 这是一个Kubernetes的kube-proxy组件配置文件示例。以下是每个配置项的详细解释:
# 
# 1. apiVersion: kubeproxy.config.k8s.io/v1alpha1
#    - 指定该配置文件的API版本。
# 
# 2. bindAddress: 0.0.0.0
#    - 指定kube-proxy使用的监听地址。0.0.0.0表示监听所有网络接口。
# 
# 3. clientConnection:
#    - 客户端连接配置项。
# 
#    a. acceptContentTypes: ""
#       - 指定接受的内容类型。
# 
#    b. burst: 10
#       - 客户端请求超出qps设置时的最大突发请求数。
# 
#    c. contentType: application/vnd.kubernetes.protobuf
#       - 指定客户端请求的内容类型。
# 
#    d. kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
#       - kube-proxy使用的kubeconfig文件路径。
# 
#    e. qps: 5
#       - 每秒向API服务器发送的请求数量。
# 
# 4. clusterCIDR: 172.16.0.0/12
#    - 指定集群使用的CIDR范围,用于自动分配Pod IP。
# 
# 5. configSyncPeriod: 15m0s
#    - 指定kube-proxy配置同步到节点的频率。
# 
# 6. conntrack:
#    - 连接跟踪设置。
# 
#    a. max: null
#       - 指定连接跟踪的最大值。
# 
#    b. maxPerCore: 32768
#       - 指定每个核心的最大连接跟踪数。
# 
#    c. min: 131072
#       - 指定最小的连接跟踪数。
# 
#    d. tcpCloseWaitTimeout: 1h0m0s
#       - 指定处于CLOSE_WAIT状态的TCP连接的超时时间。
# 
#    e. tcpEstablishedTimeout: 24h0m0s
#       - 指定已建立的TCP连接的超时时间。
# 
# 7. enableProfiling: false
#    - 是否启用性能分析。
# 
# 8. healthzBindAddress: 0.0.0.0:10256
#    - 指定健康检查监听地址和端口。
# 
# 9. hostnameOverride: ""
#    - 指定覆盖默认主机名的值。
# 
# 10. iptables:
#     - iptables设置。
# 
#     a. masqueradeAll: false
#        - 是否对所有流量使用IP伪装。
# 
#     b. masqueradeBit: 14
#        - 指定伪装的Bit标记。
# 
#     c. minSyncPeriod: 0s
#        - 指定同步iptables规则的最小间隔。
# 
#     d. syncPeriod: 30s
#        - 指定同步iptables规则的时间间隔。
# 
# 11. ipvs:
#     - ipvs设置。
# 
#     a. masqueradeAll: true
#        - 是否对所有流量使用IP伪装。
# 
#     b. minSyncPeriod: 5s
#        - 指定同步ipvs规则的最小间隔。
# 
#     c. scheduler: "rr"
#        - 指定ipvs默认使用的调度算法。
# 
#     d. syncPeriod: 30s
#        - 指定同步ipvs规则的时间间隔。
# 
# 12. kind: KubeProxyConfiguration
#     - 指定该配置文件的类型。
# 
# 13. metricsBindAddress: 127.0.0.1:10249
#     - 指定指标绑定的地址和端口。
# 
# 14. mode: "ipvs"
#     - 指定kube-proxy的模式。这里指定为ipvs,使用IPVS代理模式。
# 
# 15. nodePortAddresses: null
#     - 指定可用于NodePort的网络地址。
# 
# 16. oomScoreAdj: -999
#     - 指定kube-proxy的OOM优先级。
# 
# 17. portRange: ""
#     - 指定可用于服务端口范围。
# 
# 18. udpIdleTimeout: 250ms
#     - 指定UDP连接的空闲超时时间。
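上面 conntrack 一节中,max 为 null 时,kube-proxy 实际期望的连接跟踪上限取 maxPerCore × CPU 核数 与 min 二者中的较大值,可以这样估算(示意,CORES 实际使用时替换为 $(nproc) 的输出即可):

```shell
MAX_PER_CORE=32768   # 对应 conntrack.maxPerCore
MIN=131072           # 对应 conntrack.min
CORES=4              # 示例核数,实际可用 $(nproc)
CALC=$(( MAX_PER_CORE * CORES ))
# 取两者中的较大值作为 nf_conntrack_max 的期望值
if [ "$CALC" -gt "$MIN" ]; then echo "$CALC"; else echo "$MIN"; fi
```

例如 4 核节点上 32768×4=131072,与 min 相同,因此输出 131072;核数更多时以乘积为准。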

启动kube-proxy

systemctl daemon-reload
# 用于重新加载systemd管理的单位文件。当你新增或修改了某个单位文件(如.service文件、.socket文件等),需要运行该命令来刷新systemd对该文件的配置。

systemctl enable --now kube-proxy.service
# 启用并立即启动kube-proxy.service单元。kube-proxy.service是kube-proxy守护进程的systemd服务单元。

systemctl restart kube-proxy.service
# 重启kube-proxy.service单元,即重新启动kube-proxy守护进程。

systemctl status kube-proxy.service
# kube-proxy.service单元的当前状态,包括运行状态、是否启用等信息。

验证

netstat -ntlp | grep kube-proxy

kubectl -n kube-system get ds kube-proxy
kubectl -n kube-system get cm kube-proxy

iptables -nL | grep -v KUBE 
# 查看 ipvs 路由规则
ipvsadm -ln

批量查看证书过期时间

for NODE in `ls /etc/kubernetes/pki/*.pem`; do ls $NODE; done  
for NODE in `ls /etc/kubernetes/pki/*.pem`; do openssl x509 -in $NODE  -noout -text|grep 'Not'; done  
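上面的循环逐个打印证书内容,也可以把文件名和到期时间打印在同一行,便于批量巡检(示意,假设证书都在 /etc/kubernetes/pki 下;私钥 *-key.pem 不含 x509 结构,会被标注跳过):

```shell
for f in /etc/kubernetes/pki/*.pem; do
  # -enddate 只输出 notAfter=... 一行;解析失败说明是私钥等非证书文件
  end=$(openssl x509 -in "$f" -noout -enddate 2>/dev/null) \
    && echo "$f  $end" \
    || echo "$f  (非证书文件,跳过)"
done
```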

检查Kube-Proxy模式
默认情况下,kube-proxy 在端口 10249 上运行,并暴露一组端点,你可以使用这些端点查询 kube-proxy 的信息
curl -v localhost:10249/proxyMode

*   Trying 127.0.0.1:10249...
* Connected to localhost (127.0.0.1) port 10249 (#0)
> GET /proxyMode HTTP/1.1
> Host: localhost:10249
> User-Agent: curl/7.76.1
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: text/plain; charset=utf-8
< X-Content-Type-Options: nosniff
< Date: Mon, 22 Apr 2024 05:43:59 GMT
< Content-Length: 4
< 
ipvs
* Connection #0 to host localhost left intact

安装Calico

Calico是一个纯3层的数据中心网络方案,而且无缝集成像OpenStack这种IaaS云架构,能够提供可控的VM、容器、裸机之间的IP通信。
Calico的原理是通过修改每个主机节点上的iptables和路由表规则,实现容器间数据路由和访问控制,并通过Etcd协调节点配置信息的。因此Calico服务本身和许多分布式服务一样,需要运行在集群的每一个节点上。

Calico Typha(可选的扩展组件)
Typha是Calico的一个扩展组件,位于数据存储与各节点的Felix之间:Felix经由Typha与数据存储通信,而不是每个节点都直连kube-apiserver。通常当K8S的规模超过50个节点的时候推荐启用它,以降低kube-apiserver的负载。每个calico-typha Pod可承载100~200个Calico节点的连接请求,最多不要超过200个。
当集群规模大于50节点时,应该使用calico-typha.yaml。calico-typha的作用主要是:减轻Apiserver的压力;因为各节点的Felix都会监听Apiserver,当节点数众多时,Apiserver的Watch压力会很大;当安装了calico-typha后,Felix不监听Apiserver,而是由calico-typha监听Apiserver,然后calico-typha再和Felix进行通信。

calico的优点

  • endpoints组成的网络是单纯的三层网络,报文的流向完全通过路由规则控制,没有overlay等额外开销;
  • calico的endpoint可以漂移,并且实现了acl。

calico的缺点

  • 路由的数目与容器数目相同,非常容易超过路由器、三层交换、甚至node的处理能力,从而限制了整个网络的扩张;
  • calico的每个node上会设置大量(海量)的iptables规则、路由,运维、排障难度大;
  • calico的原理决定了它不可能支持VPC,容器只能从calico设置的网段中获取ip;
  • calico目前的实现没有流量控制的功能,会出现少数容器抢占node多数带宽的情况;
  • calico的网络规模受到BGP网络规模的限制。

Calico有多种安装方式:

  • 使用calico.yaml清单文件安装
  • 使用Tigera Calico Operator安装Calico(官方最新指导)
    Tigera Calico Operator,Calico操作员是一款用于管理Calico安装、升级的管理工具,它用于管理Calico的安装生命周期。从Calico-v3.15版本官方开始使用此工具。

配置NetworkManager

如果主机系统使用NetworkManager来管理网络的话,则需要配置NetworkManager,以允许Calico管理接口。
NetworkManager会操作默认网络命名空间中接口的路由表,这可能会干扰Calico代理正确路由的能力。
在所有主机上操作:

cat > /etc/NetworkManager/conf.d/calico.conf <<EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico;interface-name:wireguard.cali
EOF

更改calico网段

mkdir /root/k8s/yaml && cd /root/k8s/yaml
wget https://raw.githubusercontent.com/projectcalico/calico/master/manifests/calico-typha.yaml

curl https://docs.projectcalico.org/v3.25/manifests/calico.yaml -O

wget https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/tigera-operator.yaml

修改网段

cp calico-typha.yaml calico.yaml
默认为192.168.0.0/16
grep -C1 'CALICO_IPV4POOL_CIDR' calico.yaml

            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"

vim calico.yaml
If your Pod subnet is not 192.168.0.0/16, uncomment and edit CALICO_IPV4POOL_CIDR.
It must match the pod-network-cidr passed to kubeadm init, or the clusterCIDR of a standalone installation.
Mind the indentation, or you will hit: error: error parsing calico.yaml: error converting YAML to JSON: yaml: line 210: did not find expected '-' indicator

             - name: CALICO_IPV4POOL_CIDR
               value: "172.16.0.0/12"
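The uncomment-and-edit step can also be scripted so the indentation stays aligned; a minimal sed sketch, assuming the manifest still carries the commented-out default (the snippet file here is illustrative):

```shell
# Reproduce the commented-out block from the upstream manifest (illustrative file).
cat > /tmp/cidr-snippet.yaml <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
# Strip the "# " prefix (preserving indentation) and swap in the cluster's pod CIDR.
sed -i -e 's|# \(- name: CALICO_IPV4POOL_CIDR\)|\1|' \
       -e 's|#   value: "192.168.0.0/16"|  value: "172.16.0.0/12"|' /tmp/cidr-snippet.yaml
cat /tmp/cidr-snippet.yaml
```

Because only the `# ` prefix is removed, the `value:` line stays aligned two columns under the `- name:` entry, avoiding the YAML indentation error mentioned above.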

If the Docker images cannot be pulled, use a domestic mirror:

grep 'image:' calico.yaml 
          image: docker.io/calico/cni:v3.25.0
          image: docker.io/calico/cni:v3.25.0
          image: docker.io/calico/node:v3.25.0
          image: docker.io/calico/node:v3.25.0
          image: docker.io/calico/kube-controllers:v3.25.0

sed -i "s#docker.io/calico/#m.daocloud.io/docker.io/calico/#g" calico.yaml 

kubectl apply -f calico.yaml

poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created

Deleting Calico

kubectl delete -f calico.yaml

Check pod status

kubectl get pod -n kube-system

kube-system   calico-kube-controllers-58b845f4cb-xk7nc   0/1     ContainerCreating   0          84s
kube-system   calico-node-4hptf                          0/1     Init:0/3            0          84s
kube-system   calico-node-862cx                          0/1     Init:2/3            0          84s
kube-system   calico-node-8dnvr                          0/1     Init:2/3            0          84s

NAME READY STATUS RESTARTS AGE
calico-kube-controllers-58b845f4cb-sjjlj 0/1 Unknown 0 59m
calico-node-2pz2z 0/1 Completed 0 59m
calico-node-n7t97 0/1 Unknown 0 59m
calico-node-q4r5w 0/1 Completed 0 59m
coredns-84748f969f-6gtzh 0/1 Completed 11 8d
metrics-server-57d65996cf-w48gs 0/1 Completed 21 8d

kubectl -n kube-system logs ds/calico-node
kubectl -n kube-system describe pod calico-node-8dnvr
kubectl -n kube-system describe pod calico-node-sw8dv
kubectl -n kube-system describe pod calico-kube-controllers-58b845f4cb-wnc2k
kubectl -n kube-system logs calico-node-n7t97

Check images on the master:
crictl images

IMAGE                                 TAG                 IMAGE ID            SIZE
m.daocloud.io/docker.io/calico/cni    master              9f486eed9534a       92.4MB
m.daocloud.io/registry.k8s.io/pause   3.8                 4873874c08efc       311kB

cat calico.yaml |grep image:

          image: m.daocloud.io/docker.io/calico/cni:master
          image: m.daocloud.io/docker.io/calico/cni:master
          image: m.daocloud.io/docker.io/calico/node:master
          image: m.daocloud.io/docker.io/calico/node:master
          image: m.daocloud.io/docker.io/calico/kube-controllers:master
      - image: m.daocloud.io/docker.io/calico/typha:master

Pull images manually

crictl pull m.daocloud.io/docker.io/calico/node:master
crictl pull m.daocloud.io/docker.io/calico/kube-controllers:master
crictl pull m.daocloud.io/docker.io/calico/typha:master

Check images on the nodes:
crictl images

IMAGE                                  TAG                 IMAGE ID            SIZE
m.daocloud.io/docker.io/calico/cni     master              9f486eed9534a       92.4MB
m.daocloud.io/docker.io/calico/typha   master              4f4b8e34f5143       30.3MB
m.daocloud.io/registry.k8s.io/pause    3.8                 4873874c08efc       311kB

kubectl get pod -A

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-58b845f4cb-xk7nc   1/1     Running   0          15m
kube-system   calico-node-4hptf                          1/1     Running   0          15m
kube-system   calico-node-862cx                          1/1     Running   0          15m
kube-system   calico-node-8dnvr                          1/1     Running   0          15m

https://blog.csdn.net/weixin_45015255/article/details/117207177

Readiness probe failed: calico/node is not ready: BIRD is not ready: Failed to stat() nodename file: stat /var/lib/calico/nodename: no such file or directory

Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/bird/bird.ctl: connect: no such file or directory

Error
kubectl -n kube-system logs ds/calico-node

Found 3 pods, using pod/calico-node-9fhv9
Defaulted container "calico-node" out of: calico-node, upgrade-ipam (init), install-cni (init), mount-bpffs (init)
Error from server (BadRequest): container "calico-node" in pod "calico-node-9fhv9" is waiting to start: PodInitializing  

Fix
crictl ps -a

CONTAINER           IMAGE               CREATED             STATE               NAME                               ATTEMPT             POD ID              POD
d878cd05d0992       d70a5947d57e5       3 minutes ago       Exited              install-cni                        678                 6eee2eeb107e8       calico-node-tt7gg
6f351d95632f3       ffcc66479b5ba       5 minutes ago       Exited              controller                         1108                9a2ba30bdbd93       ingress-nginx-controller-hnc7t
304e25cfbf755       d70a5947d57e5       2 days ago          Exited              upgrade-ipam                       0                   6eee2eeb107e8       calico-node-tt7gg

crictl logs d878cd05d0992

[ERROR][1] cni-installer/<nil> <nil>: Unable to create token for CNI kubeconfig error=Post "https://10.96.0.1:443/api/v1/namespaces/kube-system/serviceaccounts/calico-node/token": dial tcp 10.96.0.1:443: i/o timeout

kube-proxy was not running; start it:
systemctl start kube-proxy.service

Installing Calico with the Tigera Calico Operator

wget https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/tigera-operator.yaml -O calico-tigera-operator.yaml --no-check-certificate

wget https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/custom-resources.yaml -O calico-custom-resources.yaml --no-check-certificate

cat calico-tigera-operator.yaml |grep 'image:'

                    image:
          image: quay.io/tigera/operator:v1.32.5

sed -i "s#image: quay.io#image: m.daocloud.io/quay.io#g" calico-tigera-operator.yaml

Note: adjust the Pod subnet range (CIDR) here. It must match the "podSubnet" field or the "--pod-network-cidr" flag used when initializing the cluster with kubeadm (the "CALICO_IPV4POOL_CIDR" part of the manifest).
vim calico-custom-resources.yaml

      cidr: 172.16.0.0/12

kubectl apply -f calico-tigera-operator.yaml
kubectl apply -f calico-custom-resources.yaml

Downloading the calicoctl tool

Download a version matching the installed Calico network plugin:

wget https://github.com/projectcalico/calico/releases/download/v3.25.0/calicoctl-linux-amd64
wget https://mirrors.chenby.cn/https://github.com/projectcalico/calico/releases/download/v3.25.0/calicoctl-linux-amd64
wget https://mirrors.chenby.cn/https://github.com/projectcalico/calico/releases/download/v3.27.3/calicoctl-linux-amd64

Move and rename the binary:
mv calicoctl-linux-amd64 /usr/local/bin/calicoctl

Make it executable:
chmod +x /usr/local/bin/calicoctl

calicoctl version

Client Version:    v3.25.0
Git commit:        3f7fe4d29
Cluster Version:   v3.25.0
Cluster Type:      k8s,bgp,kdd

Check node status with calicoctl; peering defaults to node-to-node mesh mode:

calicoctl node status

Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+----------------+-------------------+-------+----------+-------------+
| 192.168.244.16 | node-to-node mesh | up    | 09:38:16 | Established |
| 192.168.244.15 | node-to-node mesh | up    | 09:39:52 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

calicoctl get wep -o wide

NAME                                                                     WORKLOAD                            NODE                     NETWORKS           INTERFACE         PROFILES                          NATS   
dev--k8s--master01.local-k8s-nginx--deployment--588c996f64--f8bt2-eth0   nginx-deployment-588c996f64-f8bt2   dev-k8s-master01.local   172.17.50.192/32   calib539a8e3e41   kns.default,ksa.default.default          
dev--k8s--node02.local-k8s-nginx--deployment--588c996f64--nrddh-eth0     nginx-deployment-588c996f64-nrddh   dev-k8s-node02.local     172.22.227.73/32   calif7e8750fa84   kns.default,ksa.default.default          
dev--k8s--node01.local-k8s-nginx--deployment--588c996f64--r68r5-eth0     nginx-deployment-588c996f64-r68r5   dev-k8s-node01.local     172.21.244.66/32   cali45575135b87   kns.default,ksa.default.default   

View the IP pool

calicoctl get ippool -o wide

NAME                  CIDR            NAT    IPIPMODE   VXLANMODE   DISABLED   DISABLEBGPEXPORT   SELECTOR   
default-ipv4-ippool   172.16.0.0/12   true   Always     Never       false      false              all()      

Inspect the configuration with calicoctl

calicoctl get ipPool -o yaml

apiVersion: projectcalico.org/v3
items:
- apiVersion: projectcalico.org/v3
  kind: IPPool
  metadata:
    creationTimestamp: "2024-03-27T09:38:11Z"
    name: default-ipv4-ippool
    resourceVersion: "230578"
    uid: 2f7a2e97-8574-49bc-a816-186223f7bcb8
  spec:
    allowedUses:
    - Workload
    - Tunnel
    blockSize: 26
    cidr: 172.16.0.0/12
    ipipMode: Always
    natOutgoing: true
    nodeSelector: all()
    vxlanMode: Never
kind: IPPoolList
metadata:
  resourceVersion: "340939"

Some workloads depend on specific IP addresses; for Pods that need a fixed IP, kube-ipam can solve this kind of problem easily.

Changing ipipMode to CrossSubnet

DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl get ippool default-ipv4-ippool -o yaml > ippool.yaml

# Change the ipipMode value to CrossSubnet
apiVersion: v1
items:
- apiVersion: crd.projectcalico.org/v1
  kind: IPPool
  metadata:
    annotations:
      projectcalico.org/metadata: '{"uid":"50222d78-c525-4362-a783-ef09c6aa77c3","creationTimestamp":"2024-03-27T09:38:11Z"}'
    creationTimestamp: "2024-03-27T09:38:11Z"
    generation: 1
    name: default-ipv4-ippool
    resourceVersion: "230578"
    uid: 2f7a2e97-8574-49bc-a816-186223f7bcb8
  spec:
    allowedUses:
    - Workload
    - Tunnel
    blockSize: 26
    cidr: 172.16.0.0/12
    ipipMode: CrossSubnet
    natOutgoing: true
    nodeSelector: all()
    vxlanMode: Never
kind: List
metadata:
  resourceVersion: ""

Re-apply it with DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl apply -f ippool.yaml and the change takes effect.
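The hand-edit can be replaced by a one-line sed before the apply; a minimal sketch on a snippet file (the `calicoctl apply` itself still needs the cluster):

```shell
# Illustrative: flip ipipMode in the exported pool manifest, then re-apply.
printf '    ipipMode: Always\n' > /tmp/ippool-snippet.yaml
sed -i 's/ipipMode: Always/ipipMode: CrossSubnet/' /tmp/ippool-snippet.yaml
cat /tmp/ippool-snippet.yaml
# Then, against the real exported file:
# DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl apply -f ippool.yaml
```

CrossSubnet means IPIP encapsulation is only used between nodes in different subnets; same-subnet traffic is routed natively.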

Deleting Calico

If Calico was deployed from YAML manifests and you still have the files, delete it with the same manifests:

kubectl delete -f calico-tigera-operator.yaml --grace-period=0 --force
kubectl delete -f calico-custom-resources.yaml --grace-period=0 --force
 # Check all resources whose names contain calico|tigera:
 kubectl get all --all-namespaces | egrep "calico|tigera"

 # Check all namespaced api-resources whose names contain calico|tigera:
 kubectl api-resources --verbs=list --namespaced -o name | egrep "calico|tigera"

 # Check all cluster-scoped (non-namespaced) api-resources whose names contain calico|tigera:
 kubectl api-resources --verbs=list -o name  | egrep "calico|tigera"
Manually delete a pod stuck in Terminating:
kube-system            pod/calico-kube-controllers-58b845f4cb-xk7nc              0/1     Terminating   2               13d

kubectl delete pod calico-kube-controllers-58b845f4cb-xk7nc  -n kube-system

kubectl delete pod calico-kube-controllers-58b845f4cb-xk7nc -n kube-system --grace-period=0 --force

When a resource cannot be deleted, inspect its finalizers field for clues.
Check the calico-node serviceaccount's manifest, and use its finalizers and the conditions under status to locate the cause:

 kubectl get serviceaccounts calico-node -n calico-system -o yaml

If a tigera.io/cni-protector entry in finalizers is blocking the deletion, try changing it to finalizers: []. This appears to be an upstream Kubernetes bug; related issues can be found on GitHub, mostly affecting Calico deployed via tigera-operator.
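Emptying the list by hand amounts to the edit sketched below on a sample manifest; the in-cluster `kubectl patch` variant in the comment is an assumption to verify before use:

```shell
# Sample of the stuck object's metadata (illustrative).
cat > /tmp/sa-snippet.yaml <<'EOF'
metadata:
  finalizers:
  - tigera.io/cni-protector
EOF
# Empty the finalizers list and drop the blocking entry.
sed -i -e 's/finalizers:$/finalizers: []/' -e '/tigera.io\/cni-protector/d' /tmp/sa-snippet.yaml
cat /tmp/sa-snippet.yaml
# Equivalent in-cluster patch (assumed syntax, verify against your kubectl version):
# kubectl patch serviceaccount calico-node -n calico-system \
#   --type=merge -p '{"metadata":{"finalizers":[]}}'
```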

Finally, remove the leftover CNI configuration files on every node, then reboot all machines in the cluster.

 # Remove the CNI configuration files
cp -ar  /etc/cni/net.d/  /etc/cni/net.d.calico
rm -rf /etc/cni/net.d/

Rebooting removes the routes, iptables rules, and CNI interfaces Calico created; if you prefer not to reboot, they can also be cleaned up manually:

 # Flush routing entries
 $ ip route flush proto bird

 # Remove Calico-related interfaces
 $ ip link list | grep cali | awk '{print $2}' | cut -c 1-15 | xargs -I {} ip link delete {}

 # Unload the ipip kernel module
 $ modprobe -r ipip

 # Clean up iptables rules (restore everything that does not reference cali)
 $ iptables-save | grep -v -i cali | iptables-restore

 # Clean up ipvsadm rules
 $ ipvsadm -C
 $ ip link del kube-ipvs0

Installing Cilium

Reference:
https://developer.aliyun.com/article/1436812

Using the full Cilium feature set requires a very recent Linux kernel; the current official recommendation is Kernel >= 5.10.

Switching the Kubernetes CNI from another component to Cilium already improves network performance, but switching Cilium's modes and enabling additional features can push it further. Tuning options include, but are not limited to:

  • Enable native routing
  • Fully replace kube-proxy
  • Switch IP masquerading to the eBPF-based mode, using the Pod's real IP
  • Run the Kubernetes NodePort implementation in DSR (Direct Server Return) mode
  • Bypass iptables connection tracking
  • Switch host routing to the BPF-based mode (needs Linux Kernel >= 5.10)
  • Enable IPv6 BIG TCP (needs Linux Kernel >= 5.19)
  • Disable Hubble (not recommended; observability is worth more than a small performance gain)
  • Raise the MTU to jumbo frames (where the network permits)
  • Enable the Bandwidth Manager (needs Kernel >= 5.1)
  • Enable BBR congestion control for Pods (needs Kernel >= 5.18)
  • Enable XDP acceleration (needs a native XDP driver)
  • (For advanced users) tune the eBPF map sizes
  • Linux kernel tuning and upgrades
    • CONFIG_PREEMPT_NONE=y
  • Others:
    • tuned network-* profiles, e.g. tuned-adm profile network-latency or network-throughput
    • set CPUs to performance mode
    • stop irqbalance and pin NIC interrupts to specific CPUs
    • enable as many of these tuning options as the network, NICs, and OS allow

Replacing the kube-proxy component

Another big selling point of Cilium is its claim to fully implement kube-proxy's functionality, including ClusterIP, NodePort, ExternalIPs, and LoadBalancer, so it can take over completely while offering better performance, reliability, and debuggability, all thanks to eBPF. The official docs note that if Cilium is deployed after kube-proxy, the two coexist: Cilium decides, based on each node's kernel version, whether it still needs kube-proxy for certain features. You can verify whether kube-proxy can be stopped by checking the KubeProxyReplacement line in the cilium status output.
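A minimal sketch of that check, here grepping a captured status line rather than a live cluster (on a real cluster the line would come from `kubectl -n kube-system exec ds/cilium -- cilium status | grep KubeProxyReplacement`):

```shell
# Captured example line from `cilium status` (see the outputs later in this post).
status_line='KubeProxyReplacement:    Strict   [enp0s3   192.168.244.16 (Direct Routing)]'
if echo "$status_line" | grep -qE 'Strict|True'; then
  echo "kube-proxy fully replaced"
else
  echo "kube-proxy still required (Probe/Partial/Disabled)"
fi
```

Strict (or True on newer versions) means kube-proxy can be stopped; Probe or Disabled means Cilium is still relying on it for some features.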

Cilium supports two installation methods:

  • Cilium CLI
  • Helm chart

The CLI tool makes it easy to get started with Cilium, especially while you are first learning it. It uses the Kubernetes API directly to inspect the cluster behind the current kubectl context and picks suitable installation options for the Kubernetes implementation it detects.

The Helm chart method suits advanced installations and production environments that need fine-grained control over the Cilium installation. It requires you to manually choose the best datapath and IPAM mode for your specific Kubernetes environment.

Removing kube-proxy

We are using Cilium's kubeProxyReplacement mode here, so remove kube-proxy first.

# Back up the kube-proxy configuration on a master node
 $ kubectl get ds -n kube-system kube-proxy -o yaml > kube-proxy-ds.yaml
 $ kubectl get cm -n kube-system kube-proxy -o yaml > kube-proxy-cm.yaml

 # Delete the kube-proxy daemonset
 $ kubectl -n kube-system delete ds kube-proxy
 daemonset.apps "kube-proxy" deleted
 # Delete the kube-proxy configmap so future kubeadm upgrades do not reinstall kube-proxy (K8s 1.19+)
 $ kubectl -n kube-system delete cm kube-proxy
 configmap "kube-proxy" deleted

 # On every machine, as root, clear the iptables/ipvs rules and the kube-ipvs0 interface
 $ iptables-save | grep -v KUBE | iptables-restore
 $ ipvsadm -C
 $ ip link del kube-ipvs0

Installing cilium-cli

Manual download:

wget https://github.com/cilium/cilium-cli/releases/download/v0.16.4/cilium-linux-amd64.tar.gz  
tar xvf cilium-linux-amd64.tar.gz -C /usr/bin/

Scripted download:

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

Verify:

cilium version

cilium-cli: v0.16.4 compiled with go1.22.1 on linux/amd64
cilium image (default): v1.15.3
cilium image (stable): v1.15.3
cilium image (running): 1.15.3

Install

cilium install
With this command, cilium automatically detects the environment and selects and validates the installation parameters.

Verify:

(Here the installation was done via Helm.)
cilium status --wait

    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

Deployment             hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
Deployment             cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet              cilium             Desired: 3, Ready: 3/3, Available: 3/3
Deployment             hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
Containers:            hubble-ui          Running: 1
                       cilium-operator    Running: 2
                       hubble-relay       Running: 1
                       cilium             Running: 3
Cluster Pods:          39/39 managed by Cilium
Helm chart version:    
Image versions         cilium             quay.io/cilium/cilium:v1.15.3@sha256:da74ab61d1bc665c1c088dff41d5be388d252ca5800f30c7d88844e6b5e440b0: 3
                       hubble-ui          quay.io/cilium/hubble-ui:v0.13.0@sha256:7d663dc16538dd6e29061abd1047013a645e6e69c115e008bee9ea9fef9a6666: 1
                       hubble-ui          quay.io/cilium/hubble-ui-backend:v0.13.0@sha256:1e7657d997c5a48253bb8dc91ecee75b63018d16ff5e5797e5af367336bc8803: 1
                       cilium-operator    quay.io/cilium/operator-generic:v1.15.3@sha256:c97f23161906b82f5c81a2d825b0646a5aa1dfb4adf1d49cbb87815079e69d61: 2
                       hubble-relay       quay.io/cilium/hubble-relay:v1.15.3@sha256:b9c6431aa4f22242a5d0d750c621d9d04bdc25549e4fb1116bfec98dd87958a2: 1

Run the following command to verify that the cluster has working network connectivity:

When installing from within China, network restrictions may cause some tests to fail (such as reaching 1.1.1.1:443), as the example below shows. That is expected.
The connectivity test needs at least two worker nodes to deploy successfully; its pods are not scheduled on nodes with the control-plane role. With fewer than two worker nodes, the command may hang waiting for the test environment to finish deploying.

cilium connectivity test --request-timeout 30s --connect-timeout 10s

  Monitor aggregation detected, will skip some flow validation steps
✨ [kubernetes] Creating namespace cilium-test for connectivity check...
✨ [kubernetes] Deploying echo-same-node service...
✨ [kubernetes] Deploying DNS test server configmap...
✨ [kubernetes] Deploying same-node deployment...
✨ [kubernetes] Deploying client deployment...
✨ [kubernetes] Deploying client2 deployment...
✨ [kubernetes] Deploying client3 deployment...
✨ [kubernetes] Deploying echo-other-node service...
✨ [kubernetes] Deploying other-node deployment...
✨ [host-netns] Deploying kubernetes daemonset...
✨ [host-netns-non-cilium] Deploying kubernetes daemonset...
ℹ️  Skipping tests that require a node Without Cilium
⌛ [kubernetes] Waiting for deployment cilium-test/client to become ready...
⌛ [kubernetes] Waiting for deployment cilium-test/client2 to become ready...
⌛ [kubernetes] Waiting for deployment cilium-test/echo-same-node to become ready...
⌛ [kubernetes] Waiting for deployment cilium-test/client3 to become ready...
⌛ [kubernetes] Waiting for deployment cilium-test/echo-other-node to become ready...
⌛ [kubernetes] Waiting for pod cilium-test/client-69748f45d8-xkhrb to reach DNS server on cilium-test/echo-same-node-7f896b84-hnsnh pod...
⌛ [kubernetes] Waiting for pod cilium-test/client2-ccd7b8bdf-g2fzb to reach DNS server on cilium-test/echo-same-node-7f896b84-hnsnh pod...
⌛ [kubernetes] Waiting for pod cilium-test/client3-868f7b8f6b-256tx to reach DNS server on cilium-test/echo-same-node-7f896b84-hnsnh pod...
⌛ [kubernetes] Waiting for pod cilium-test/client-69748f45d8-xkhrb to reach DNS server on cilium-test/echo-other-node-58999bbffd-jk45m pod...
⌛ [kubernetes] Waiting for pod cilium-test/client2-ccd7b8bdf-g2fzb to reach DNS server on cilium-test/echo-other-node-58999bbffd-jk45m pod...
⌛ [kubernetes] Waiting for pod cilium-test/client3-868f7b8f6b-256tx to reach DNS server on cilium-test/echo-other-node-58999bbffd-jk45m pod...
⌛ [kubernetes] Waiting for pod cilium-test/client-69748f45d8-xkhrb to reach default/kubernetes service...
⌛ [kubernetes] Waiting for pod cilium-test/client2-ccd7b8bdf-g2fzb to reach default/kubernetes service...
⌛ [kubernetes] Waiting for pod cilium-test/client3-868f7b8f6b-256tx to reach default/kubernetes service...
⌛ [kubernetes] Waiting for Service cilium-test/echo-other-node to become ready...
⌛ [kubernetes] Waiting for Service cilium-test/echo-other-node to be synchronized by Cilium pod kube-system/cilium-xjhdj
⌛ [kubernetes] Waiting for Service cilium-test/echo-other-node to be synchronized by Cilium pod kube-system/cilium-zzrjm
⌛ [kubernetes] Waiting for Service cilium-test/echo-same-node to become ready...
⌛ [kubernetes] Waiting for Service cilium-test/echo-same-node to be synchronized by Cilium pod kube-system/cilium-xjhdj
⌛ [kubernetes] Waiting for Service cilium-test/echo-same-node to be synchronized by Cilium pod kube-system/cilium-zzrjm
⌛ [kubernetes] Waiting for NodePort 192.168.244.15:30235 (cilium-test/echo-other-node) to become ready...
⌛ [kubernetes] Waiting for NodePort 192.168.244.15:30621 (cilium-test/echo-same-node) to become ready...
⌛ [kubernetes] Waiting for NodePort 192.168.244.16:30621 (cilium-test/echo-same-node) to become ready...
⌛ [kubernetes] Waiting for NodePort 192.168.244.16:30235 (cilium-test/echo-other-node) to become ready...
⌛ [kubernetes] Waiting for NodePort 192.168.244.14:30621 (cilium-test/echo-same-node) to become ready...
⌛ [kubernetes] Waiting for NodePort 192.168.244.14:30235 (cilium-test/echo-other-node) to become ready...
⌛ [kubernetes] Waiting for DaemonSet cilium-test/host-netns-non-cilium to become ready...
⌛ [kubernetes] Waiting for DaemonSet cilium-test/host-netns to become ready...
ℹ️  Skipping IPCache check
 Enabling Hubble telescope...
⚠️  Unable to contact Hubble Relay, disabling Hubble telescope and flow validation: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:4245: connect: connection refused"
ℹ️  Expose Relay locally with:
   cilium hubble enable
   cilium hubble port-forward&
ℹ️  Cilium version: 1.15.3
 Running 75 tests ...
[=] Test [no-unexpected-packet-drops] [1/75]
...
[=] Test [no-policies] [2/75]
.
  [-] Scenario [no-policies/pod-to-external-workload]
  [-] Scenario [no-policies/pod-to-cidr]
  [.] Action [no-policies/pod-to-cidr/external-1111-0: cilium-test/client-69748f45d8-xkhrb (172.16.2.79) -> external-1111 (1.1.1.1:443)]
  ❌ command "curl -w %{local_ip}:%{local_port} -> %{remote_ip}:%{remote_port} = %{response_code} --silent --fail --show-error --output /dev/null --connect-timeout 10 --max-time 30 --retry 3 --retry-all-errors --retry-delay 3 https://1.1.1.1:443" failed: error with exec request (pod=cilium-test/client-69748f45d8-xkhrb, container=client): command terminated with exit code 7
  ℹ️  curl output:
  :0 -> :0 = 000

  connectivity test failed: 7 tests failed

Installing Cilium Hubble

cilium hubble enable --ui
cilium status
kubectl get nodes
kubectl get daemonsets --all-namespaces

Uninstalling Cilium

First uninstall a Cilium instance that was installed via cilium install:

cilium uninstall

cp -ar /etc/cni/net.d/ /etc/cni/net.d.cilium/    
rm -f /etc/cni/net.d/*

Installing Cilium with Helm

# Add the repo
helm repo add cilium https://helm.cilium.io

# Switch images to a domestic mirror
helm pull cilium/cilium
tar xvf cilium-*.tgz
cd cilium/
sed -i "s#quay.io/#m.daocloud.io/quay.io/#g" values.yaml

cd ..

# Install with default values
helm install  cilium ./cilium/ -n kube-system
NAME: cilium
LAST DEPLOYED: Wed Apr 10 15:51:53 2024
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble.

Your release version is 1.15.3.

For any further help, visit https://docs.cilium.io/en/v1.15/gettinghelp

# Uninstall
helm delete cilium -n kube-system

# Enable IPv6 (optional)
# helm install cilium cilium/cilium --namespace kube-system --set ipv6.enabled=true

# Set k8sServiceHost and k8sServicePort to the master node IP and the apiServer port (6443 by default)
# Defaults below: vxlan networking, no kube-proxy, Kubernetes IPAM, IP masquerading, hubble-ui, hubble.metrics, and prometheus enabled
helm install cilium cilium/cilium \
--version 1.15.3 \
--namespace kube-system \
--set operator.replicas=1 \
--set k8sServiceHost=192.168.244.14 \
--set k8sServicePort=8443 \
--set tunnelProtocol=vxlan \
--set routingMode=tunnel \
--set bpf.masquerade=true \
--set ipv4.enabled=true \
--set kubeProxyReplacement=true \
--set ipv4NativeRoutingCIDR=172.16.0.0/12 \
--set ipam.mode=kubernetes \
--set ipam.operator.clusterPoolIPv4PodCIDRList=172.16.0.0/12 \
--set ipam.operator.clusterPoolIPv4MaskSize=24 \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true \
--set prometheus.enabled=true \
--set operator.prometheus.enabled=true \
--set hubble.enabled=true \
--set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" 

Parameter notes:

  • --namespace kube-system matches the default of cilium install; Cilium is installed in kube-system
  • --set operator.replicas=1 runs one Operator replica instead of the default 2
  • --set k8sServiceHost / k8sServicePort explicitly point at the K8s APIServer's IP and port
  • --set kubeProxyReplacement=true replaces the default kube-proxy with Cilium (strict also works)
  • --set tunnelProtocol=vxlan keeps the default VXLAN tunnel; enabling native routing mode instead would require BGP support
  • --set ipv4NativeRoutingCIDR=172.16.0.0/12 destinations within this CIDR are not masqueraded; Cilium automatically masquerades the source IP of all traffic leaving the cluster to the node's IPv4 address
  • --set ipam.mode=kubernetes enables Kubernetes IPAM mode. This automatically enables k8s-require-ipv4-pod-cidr and cannot be switched while the cluster is running. cluster-scope is Cilium's default IPAM mode; cluster-pool and cluster-pool-v2beta also exist
  • --set ipam.operator.clusterPoolIPv4PodCIDRList= must match the K8s cluster-cidr
    https://docs.cilium.io/en/stable/network/concepts/ipam/
  • --set ipam.operator.clusterPoolIPv4MaskSize=24 must match the K8s setting
  • --set hubble.relay.enabled=true --set hubble.ui.enabled=true enable Hubble observability
  • --set prometheus.enabled=true --set operator.prometheus.enabled=true enable Prometheus metrics

Check the pods:
kubectl get pod -A | grep cil

kube-system            cilium-5wj7m                                          0/1     Init:0/6   0               3m17s
kube-system            cilium-gll7t                                          1/1     Running    0               3m17s
kube-system            cilium-operator-684f97848d-8pjxp                      1/1     Running    0               3m17s
kube-system            cilium-operator-684f97848d-fzvmx                      1/1     Running    0               3m17s
kube-system            cilium-st6zr                                          1/1     Running    0               3m17s
# Check Cilium's status
kubectl -n kube-system exec ds/cilium -- cilium status

Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
KVStore:                 Ok   Disabled
Kubernetes:              Ok   1.29 (v1.29.2) [linux/amd64]
Kubernetes APIs:         ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:    False   [enp0s3   192.168.244.16 fe80::a00:27ff:fe2f:f060]
Host firewall:           Disabled
SRv6:                    Disabled
CNI Chaining:            none
Cilium:                  Ok   1.15.3 (v1.15.3-22dfbc58)
NodeMonitor:             Listening for events on 2 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok   
IPAM:                    IPv4: 2/254 allocated from 10.0.0.0/24, 
IPv4 BIG TCP:            Disabled
IPv6 BIG TCP:            Disabled
BandwidthManager:        Disabled
Host Routing:            Legacy
Masquerading:            IPTables [IPv4: Enabled, IPv6: Disabled]
Controller Status:       20/20 healthy
Proxy Status:            OK, ip 10.0.0.247, 0 redirects active on ports 10000-20000, Envoy: embedded
Global Identity Range:   min 256, max 65535
Hubble:                  Ok              Current/Max Flows: 1198/4095 (29.26%), Flows/s: 0.74   Metrics: Ok
Encryption:              Disabled        
Cluster health:          3/3 reachable   (2024-04-10T09:23:08Z)
Modules Health:          Stopped(0) Degraded(0) OK(11) Unknown(3)

A few things worth noting here:
datapath mode: tunnel: for compatibility, Cilium defaults to the tunnel (VXLAN-based) datapath mode, i.e. an overlay network.
KubeProxyReplacement: Disabled (or False) means Cilium has not fully replaced kube-proxy; Probe means the two coexist; Strict (or True) means kube-proxy is fully replaced.
Host Routing: Legacy host routing still goes through iptables and performs worse; BPF-based host routing requires Linux Kernel >= 5.10.
Masquerading: IPTables: IP masquerading can be either eBPF-based or iptables-based; iptables is the default, but the eBPF-based mode is recommended.
Hubble Relay: Ok means Hubble is enabled by default.
Cilium's headline feature is performance, so the options that improve it are covered one by one below.
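Several of these notes can be spotted mechanically in the status output; a minimal sketch that scans a captured `cilium status` excerpt for lines with tuning headroom (the Helm value names in the comments are the commonly documented ones; verify against your chart version):

```shell
# Captured excerpt from the `cilium status` output above.
status='Host Routing:            Legacy
Masquerading:            IPTables [IPv4: Enabled, IPv6: Disabled]
BandwidthManager:        Disabled'
# Each matching line suggests a tuning change:
#   Host Routing: Legacy       -> bpf.hostLegacyRouting=false (Kernel >= 5.10)
#   Masquerading: IPTables     -> bpf.masquerade=true
#   BandwidthManager: Disabled -> bandwidthManager.enabled=true (Kernel >= 5.1)
echo "$status" | grep -cE 'Legacy|IPTables|: *Disabled'
```

Here all three lines match, i.e. this cluster still runs with legacy host routing, iptables masquerading, and no bandwidth manager.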

# View the cluster's node status
kubectl -n kube-system exec ds/cilium -- cilium node list
# View the cluster's service list
kubectl -n kube-system exec ds/cilium -- cilium service list
# View the endpoint info on the node hosting that cilium pod
kubectl -n kube-system exec ds/cilium -- cilium endpoint list

After further tuning:

kubectl -n kube-system exec ds/cilium -- cilium status

Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
KVStore:                 Ok   Disabled
Kubernetes:              Ok   1.29 (v1.29.2) [linux/amd64]
Kubernetes APIs:         ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:    Strict   [enp0s3   192.168.244.16 fe80::a00:27ff:fe2f:f060 (Direct Routing)]
Host firewall:           Disabled
SRv6:                    Disabled
CNI Chaining:            none
Cilium:                  Ok   1.15.3 (v1.15.3-22dfbc58)
NodeMonitor:             Listening for events on 2 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok   
IPAM:                    IPv4: 27/254 allocated from 172.16.1.0/24, 
IPv4 BIG TCP:            Disabled
IPv6 BIG TCP:            Disabled
BandwidthManager:        Disabled
Host Routing:            Legacy
Masquerading:            IPTables [IPv4: Enabled, IPv6: Disabled]
Controller Status:       143/143 healthy
Proxy Status:            OK, ip 172.16.1.153, 4 redirects active on ports 10000-20000, Envoy: embedded
Global Identity Range:   min 256, max 65535
Hubble:                  Ok              Current/Max Flows: 4095/4095 (100.00%), Flows/s: 68.72   Metrics: Ok
Encryption:              Disabled        
Cluster health:          3/3 reachable   (2024-04-11T05:29:32Z)
Modules Health:          Stopped(0) Degraded(0) OK(11) Unknown(3)

Errors

journalctl -u kubelet -f
NetworkPluginNotReady message: Network plugin returns error: cni plugin not initialized
`cilium status` looks healthy, yet `kubectl get node` reports NotReady and services cannot communicate.

Fix:

  1. Rebooting all nodes restored the cluster
    reboot
  2. Alternatively, try deleting all pods so they are recreated
    kubectl get pods --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true | grep '' | awk '{print "-n "$1" "$2}' | xargs -L 1 -r kubectl delete pod
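The awk stage of the pipeline above turns each `NAMESPACE NAME HOSTNETWORK` line into `-n <namespace> <name>` arguments, which xargs then passes to `kubectl delete pod` one line at a time. A local sketch with hypothetical pod names (no cluster needed):

```shell
# Feed two hypothetical pod lines through the same awk stage used above;
# xargs would then run "kubectl delete pod -n <ns> <name>" per line.
printf 'kube-system coredns-abc false\ndefault busybox false\n' \
  | awk '{print "-n "$1" "$2}'
# prints:
# -n kube-system coredns-abc
# -n default busybox
```

Note that the `grep ''` in the original pipeline is a pass-through; you could narrow it (e.g. `grep 'false'`) to restart only non-hostNetwork pods.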

Verify the installation

The Cilium project provides a connectivity check tool to verify that a deployed Cilium installation works correctly. If your network environment is restricted, I made some small adjustments (the sed substitutions below). Deployment is simple; just make sure at least two nodes are available, otherwise several of the deployments will fail to run.
Download the test manifest:
wget https://raw.githubusercontent.com/cilium/cilium/master/examples/kubernetes/connectivity-check/connectivity-check.yaml
sed -i "s#google.com#baidu.cn#g" connectivity-check.yaml
sed -i "s#quay.io/#m.daocloud.io/quay.io/#g" connectivity-check.yaml

kubectl apply -f connectivity-check.yaml

deployment.apps/echo-a created
deployment.apps/echo-b created
deployment.apps/echo-b-host created
deployment.apps/pod-to-a created
deployment.apps/pod-to-external-1111 created
deployment.apps/pod-to-a-denied-cnp created
deployment.apps/pod-to-a-allowed-cnp created
deployment.apps/pod-to-external-fqdn-allow-google-cnp created
deployment.apps/pod-to-b-multi-node-clusterip created
deployment.apps/pod-to-b-multi-node-headless created
deployment.apps/host-to-b-multi-node-clusterip created
deployment.apps/host-to-b-multi-node-headless created
deployment.apps/pod-to-b-multi-node-nodeport created
deployment.apps/pod-to-b-intra-node-nodeport created
service/echo-a created
service/echo-b created
service/echo-b-headless created
service/echo-b-host-headless created
ciliumnetworkpolicy.cilium.io/pod-to-a-denied-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-a-allowed-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-external-fqdn-allow-google-cnp created

If all of the deployments come up successfully, Cilium is deployed and working correctly.

Clean up the test resources

kubectl delete -f connectivity-check.yaml

Install the monitoring dashboards

Hubble: network observability for Cilium

As mentioned above, one of Cilium's strengths is its simple, efficient network observability, provided by the Hubble component. Cilium introduced and open-sourced Hubble in release 1.7. It is purpose-built for network visibility and leverages Cilium's eBPF datapath to gain deep visibility into the network traffic of Kubernetes applications and services. This flow data feeds the Hubble CLI and UI, which let you interactively diagnose problems such as DNS-related issues. Beyond Hubble's own tooling, it also integrates with the mainstream cloud-native monitoring stack, Prometheus and Grafana, for an extensible monitoring strategy.

You can wire it into an existing Grafana + Prometheus deployment, or deploy a simple one:
wget https://raw.githubusercontent.com/cilium/cilium/1.12.1/examples/kubernetes/addons/prometheus/monitoring-example.yaml
sed -i "s#docker.io/#m.daocloud.io/docker.io/#g" monitoring-example.yaml
kubectl apply -f monitoring-example.yaml

namespace/cilium-monitoring created
serviceaccount/prometheus-k8s created
configmap/grafana-config created
configmap/grafana-cilium-dashboard created
configmap/grafana-cilium-operator-dashboard created
configmap/grafana-hubble-dashboard created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/grafana created
service/prometheus created
deployment.apps/grafana created
deployment.apps/prometheus created

Change the services to NodePort

kubectl  edit svc  -n kube-system hubble-ui
kubectl  edit svc  -n cilium-monitoring grafana
kubectl  edit svc  -n cilium-monitoring prometheus
 type: NodePort

Check the assigned ports

kubectl get svc -A | grep monit

cilium-monitoring      grafana                                   NodePort    10.104.59.179    <none>        3000:31924/TCP                  22h
cilium-monitoring      prometheus                                NodePort    10.103.30.55     <none>        9090:30401/TCP                  22h

kubectl get svc -A | grep hubble

kube-system            hubble-metrics                            ClusterIP   None             <none>        9965/TCP                        19h
kube-system            hubble-peer                               ClusterIP   10.110.145.92    <none>        443/TCP                         19h
kube-system            hubble-relay                              ClusterIP   10.104.140.67    <none>        80/TCP                          19h
kube-system            hubble-ui                                 NodePort    10.110.58.214    <none>        80:31235/TCP                    19h

Alternatively, use kubectl patch to set hubble-ui to NodePort with nodePort 31235 in one step:

kubectl patch svc hubble-ui -n kube-system -p '{"spec":{"type":"NodePort","ports":[{"port":80,"nodePort":31235}]}}'

Access the UIs
grafana
http://192.168.244.14:31924
prometheus
http://192.168.244.14:30401
hubble-ui
http://192.168.244.14:31235

Once deployment completes, open the Grafana web UI and import the official dashboards to quickly build Hubble-based metrics monitoring. After a while, data will start showing up in Grafana:

Install CoreDNS

Run the following steps on master01 only

Install helm

tar xvf helm-v3.14.1-linux-amd64.tar.gz 
linux-amd64/
linux-amd64/README.md
linux-amd64/helm
linux-amd64/LICENSE

cp linux-amd64/helm /usr/local/bin/

cd /root/k8s/helm
helm repo add coredns https://coredns.github.io/helm
helm pull coredns/coredns
tar xvf coredns-*.tgz
cd coredns/

Adjust the chart values

vim values.yaml

service:
# clusterIP: ""
# clusterIPs: []
# loadBalancerIP: ""
# externalIPs: []
# externalTrafficPolicy: ""
# ipFamilyPolicy: ""
  # The name of the Service
  # If not set, a name is generated using the fullname template
  clusterIP: "10.96.0.10"
  name: ""
  annotations: {}

cat values.yaml | grep clusterIP:

# clusterIP: ""
  clusterIP: "10.96.0.10"

Switch to a domestic image mirror (optional)

sed -i "s#coredns/#m.daocloud.io/docker.io/coredns/#g" values.yaml
sed -i "s#registry.k8s.io/#m.daocloud.io/registry.k8s.io/#g" values.yaml
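The sed rewrites above simply prefix the upstream registry with the mirror host, leaving the image path and tag untouched. A local sketch on a hypothetical image line (the coredns tag here is made up for illustration):

```shell
# Prefix registry.k8s.io with the m.daocloud.io mirror, as done to values.yaml above
line='image: registry.k8s.io/coredns/coredns:v1.11.1'   # hypothetical tag
echo "$line" | sed 's#registry.k8s.io/#m.daocloud.io/registry.k8s.io/#g'
# prints: image: m.daocloud.io/registry.k8s.io/coredns/coredns:v1.11.1
```

Using `#` as the sed delimiter avoids having to escape the slashes in the image paths.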

Install with default values

cd ..

helm install coredns ./coredns/ -n kube-system

NAME: coredns
LAST DEPLOYED: Wed Mar 27 18:24:49 2024
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CoreDNS is now running in the cluster as a cluster-service.

It can be tested with the following:

1. Launch a Pod with DNS tools:

kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools

2. Query the DNS server:

/ # host kubernetes

Check the coredns pod:

kubectl -n kube-system get pod|grep core
coredns-84748f969f-k88wh                   1/1     Running   0          2m4s

Uninstall (if needed)

helm uninstall coredns -n kube-system

Install Metrics Server

Run the following steps on master01 only.
Recent Kubernetes versions use Metrics Server to collect system resource metrics; it samples memory, disk, CPU and network usage of nodes and Pods.
cd ../yaml
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
mv components.yaml metrics.yaml

Edit the manifest

vim metrics.yaml
Enable paste mode first:
:set paste

      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        image: registry.k8s.io/metrics-server/metrics-server:v0.7.0

Add the certificate mounts

        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki

Switch to a domestic image mirror (optional)

sed -i "s#registry.k8s.io/#m.daocloud.io/registry.k8s.io/#g" metrics.yaml

cat metrics.yaml | grep image

        image: m.daocloud.io/registry.k8s.io/metrics-server/metrics-server:v0.7.0
        imagePullPolicy: IfNotPresent

Deploy

kubectl apply -f metrics.yaml

serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Verify

kubectl get svc -A

NAMESPACE     NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP         40h
kube-system   calico-typha     ClusterIP   10.111.195.94   <none>        5473/TCP        16h
kube-system   coredns          ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   16h
kube-system   metrics-server   ClusterIP   10.107.112.28   <none>        443/TCP         22m

kubectl get po -n kube-system

NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-58b845f4cb-xk7nc   1/1     Running   0          54m
calico-node-4hptf                          1/1     Running   0          54m
calico-node-862cx                          1/1     Running   0          54m
calico-node-8dnvr                          1/1     Running   0          54m
coredns-84748f969f-k88wh                   1/1     Running   0          6m23s
metrics-server-57d65996cf-gr2sc            1/1     Running   0          75s

kubectl top node

NAME                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
dev-k8s-master01.local   82m          4%     1432Mi          19%       
dev-k8s-node01.local     36m          1%     802Mi           23%       
dev-k8s-node02.local     41m          2%     866Mi           24%       

Cluster validation

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

A busybox image that includes curl

cat > busybox-curl.yaml <<EOF 
apiVersion: v1
kind: Pod
metadata:
  name: busybox-curl
  namespace: default
spec:
  containers:
  - name: busybox-curl
    image: repo.k8s.local/library/yauritux/busybox-curl:latest
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

kubectl apply -f busybox-curl.yaml
kubectl delete -f busybox-curl.yaml

Check

kubectl get pod

NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          11s

Test in-cluster DNS resolution

kubectl exec  busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 coredns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

Test outbound access with curl

kubectl exec  busybox-curl -n default -- curl http://www.baidu.com

Test cross-namespace resolution

List the available service names

kubectl  get svc -A
NAMESPACE     NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP         2d
kube-system   coredns          ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   14m
kube-system   metrics-server   ClusterIP   10.101.55.140   <none>        443/TCP         8m55s

Resolve the service

kubectl exec  busybox -n default -- nslookup coredns.kube-system        
Server:    10.96.0.10
Address 1: 10.96.0.10 coredns.kube-system.svc.cluster.local

Name:      coredns.kube-system
Address 1: 10.96.0.10 coredns.kube-system.svc.cluster.local

Every node must be able to reach the kubernetes Service on 443 and the kube-dns Service on 53

telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.
^CConnection closed by foreign host.
[root@dev-k8s-node02 ~]# telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.
^CConnection closed by foreign host.

Pods must be able to reach each other

kubectl get po -owide

NAME      READY   STATUS    RESTARTS   AGE   IP              NODE                     NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          10m   172.17.50.194   dev-k8s-master01.local   <none>           <none>
kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS   AGE   IP               NODE                     NOMINATED NODE   READINESS GATES
calico-kube-controllers-58b845f4cb-xk7nc   1/1     Running   0          71m   172.17.50.193    dev-k8s-master01.local   <none>           <none>
calico-node-4hptf                          1/1     Running   0          71m   192.168.244.15   dev-k8s-node01.local     <none>           <none>
calico-node-862cx                          1/1     Running   0          71m   192.168.244.14   dev-k8s-master01.local   <none>           <none>
calico-node-8dnvr                          1/1     Running   0          71m   192.168.244.16   dev-k8s-node02.local     <none>           <none>
coredns-84748f969f-k88wh                   1/1     Running   0          23m   172.22.227.65    dev-k8s-node02.local     <none>           <none>
metrics-server-57d65996cf-gr2sc            1/1     Running   0          18m   172.21.244.66    dev-k8s-node01.local     <none>           <none>

Exec into busybox and ping pods on other nodes

If these pings succeed, the pod can communicate across namespaces and across hosts

kubectl exec -ti busybox -- sh
/ # ping 172.17.50.193  
PING 172.17.50.193 (172.17.50.193): 56 data bytes
64 bytes from 172.17.50.193: seq=0 ttl=63 time=0.101 ms
64 bytes from 172.17.50.193: seq=1 ttl=63 time=0.129 ms
^C
--- 172.17.50.193 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.101/0.115/0.129 ms
/ # ping 192.168.244.15
PING 192.168.244.15 (192.168.244.15): 56 data bytes
64 bytes from 192.168.244.15: seq=0 ttl=63 time=0.348 ms
64 bytes from 192.168.244.15: seq=1 ttl=63 time=0.418 ms
^C
--- 192.168.244.15 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.348/0.383/0.418 ms
/ # ping 192.168.244.16
PING 192.168.244.16 (192.168.244.16): 56 data bytes
64 bytes from 192.168.244.16: seq=0 ttl=63 time=0.558 ms
64 bytes from 192.168.244.16: seq=1 ttl=63 time=0.531 ms
^C
--- 192.168.244.16 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.531/0.544/0.558 ms
/ # ping 172.22.227.65
PING 172.22.227.65 (172.22.227.65): 56 data bytes
64 bytes from 172.22.227.65: seq=0 ttl=62 time=0.501 ms
64 bytes from 172.22.227.65: seq=1 ttl=62 time=0.537 ms
^C
--- 172.22.227.65 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.501/0.519/0.537 ms

Create an nginx-deployment

Create three replicas; you should see them spread across different nodes (delete the deployment when done)

cat<<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:latest
        ports:
        - containerPort: 80
EOF

kubectl get pod -o wide

NAME                                READY   STATUS    RESTARTS       AGE   IP              NODE                     NOMINATED NODE   READINESS GATES
busybox                             1/1     Running   20 (57m ago)   20h   172.17.50.194   dev-k8s-master01.local   <none>           <none>
nginx-deployment-588c996f64-54dvk   1/1     Running   0              7s    172.21.244.69   dev-k8s-node01.local     <none>           <none>
nginx-deployment-588c996f64-5fhmf   1/1     Running   0              7s    172.17.50.197   dev-k8s-master01.local   <none>           <none>
nginx-deployment-588c996f64-fn2dj   1/1     Running   0              7s    172.22.227.68   dev-k8s-node02.local     <none>           <none>

Delete the nginx deployment

kubectl delete deployments nginx-deployment

Install the dashboard with helm

cd /root/k8s/helm
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm pull kubernetes-dashboard/kubernetes-dashboard
tar xvf kubernetes-dashboard-7.1.2.tgz

To uninstall later

helm uninstall kubernetes-dashboard -n kubernetes-dashboard

Install

helm install kubernetes-dashboard ./kubernetes-dashboard -n kubernetes-dashboard --create-namespace

NAME: kubernetes-dashboard
LAST DEPLOYED: Fri Mar 29 18:15:55 2024
NAMESPACE: kubernetes-dashboard
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
*************************************************************************************************
*** PLEASE BE PATIENT: Kubernetes Dashboard may need a few minutes to get up and become ready ***
*************************************************************************************************

Congratulations! You have just installed Kubernetes Dashboard in your cluster.

To access Dashboard run:
  kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443

NOTE: In case port-forward command does not work, make sure that kong service name is correct.
      Check the services in Kubernetes Dashboard namespace using:
        kubectl -n kubernetes-dashboard get svc

Dashboard will be available at:
  https://localhost:8443

Change the dashboard svc to NodePort (skip if it already is)

kubectl get svc -n kubernetes-dashboard

NAME                                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
kubernetes-dashboard-api               ClusterIP   10.97.220.204    <none>        8000/TCP                        42s
kubernetes-dashboard-auth              ClusterIP   10.110.81.74     <none>        8000/TCP                        42s
kubernetes-dashboard-kong-manager      NodePort    10.102.239.226   <none>        8002:31799/TCP,8445:32012/TCP   42s
kubernetes-dashboard-kong-proxy        ClusterIP   10.97.174.179    <none>        443/TCP                         42s
kubernetes-dashboard-metrics-scraper   ClusterIP   10.106.164.54    <none>        8000/TCP                        42s
kubernetes-dashboard-web               ClusterIP   10.101.247.41    <none>        8000/TCP                        42s

Note: change kubernetes-dashboard-kong-proxy to NodePort
kubectl edit svc kubernetes-dashboard-kong-proxy -n kubernetes-dashboard

  sessionAffinity: None
  type: ClusterIP
  change to
  type: NodePort

Add nodePort: 32220

  ports:
  - name: kong-proxy-tls
    nodePort: 32220
    port: 443
    protocol: TCP
    targetPort: 8443

kubectl get svc -n kubernetes-dashboard

kubernetes-dashboard-kong-proxy        NodePort    10.97.174.179    <none>        443:32220/TCP                   2d19h

Test the NodePort from the master
curl -k https://192.168.244.14:32220/

Test in a browser

Add a NAT port-forwarding rule in the VM manager: 127.0.0.1:32220 -> 192.168.244.14:32220
https://127.0.0.1:32220/#/login
The login page hints:
You can generate token for service account with: kubectl -n NAMESPACE create token SERVICE_ACCOUNT

cd /root/k8s/yaml

cat > dashboard-account.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kubernetes-dashboard
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF

kubectl apply -f dashboard-account.yaml

Create a token

kubectl -n kubernetes-dashboard create token dashboard-admin

You can add the --duration flag to set the token lifetime; run kubectl create token -h for the full options

kubectl create token dashboard-admin --namespace kubernetes-dashboard --duration 10h

eyJhbGciOiJSUzI1NiIsImtpZCI6IjRST3pGS3NKUmNSbFloc0x3Z09ERGYwdllscWpwMjBiWFY5aGFFbHI0V3MifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzExNzExMzIyLCJpYXQiOjE3MTE3MDc3MjIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkYXNoYm9hcmQtYWRtaW4iLCJ1aWQiOiI0MDNkYmM2ZS1hOTVhLTRkZTMtYTEzYi1mNTk0NjNmODNkMjgifX0sIm5iZiI6MTcxMTcwNzcyMiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.V47YNZW4qK5P_eI0UkldCdGTdvSGWoT1g6Mjv2XeHAEqjf4F5TISpCY5CgHQNhEn-qxA3yC7ziiNpQ1PITgJjO_tajbNvy4x6YL9FtSgNcPAsdyK_16yI8R7CI_FrGWoWIV5CYcOoBYBAWCQistCZD27We6ZzleekEXml-6nRubmsXhPD67iGwIdNYovwytEhmWR7t57xCDGlbVvoEGRSREx8sJvAReQ9C9fkh0-JVEHwBwQFtumUGA9MgDhz2y3PE98WUtNYCy-yfVMP1qWYSCmg0prXS7VkZmCy7vE3oMz7TIFQy8F0kTXrr9q3a1Y0p9OWqhPyr0SpqinV9jVlg

Unknown error (200): Http failure during parsing for http://127.0.0.1:32220/api/v1/csrftoken/login

This error appears when the page is opened over http; access it via https in the browser instead:
https://127.0.0.1:32220/

Install ingress-nginx

References:
https://github.com/kubernetes/ingress-nginx
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/

Deployment options

Option 1: yaml manifest
Option 2: helm chart

Deploy with the yaml manifest

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml -O ingress-nginx.yaml

Or use a mirror for a faster download

wget https://mirrors.chenby.cn/https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml -O ingress-nginx.yaml

Switch to a domestic image mirror (optional)

grep image: ingress-nginx.yaml

image: registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c
image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334
image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334

Replace with the mirror

sed -i "s#registry.k8s.io/#m.daocloud.io/registry.k8s.io/#g" ingress-nginx.yaml

List the nodes

kubectl get nodes

NAME                     STATUS   ROLES    AGE   VERSION
dev-k8s-master01.local   Ready    <none>   2d    v1.29.2
dev-k8s-node01.local     Ready    <none>   25h   v1.29.2
dev-k8s-node02.local     Ready    <none>   2d    v1.29.2

List the node labels

kubectl get node --show-labels

NAME                     STATUS   ROLES    AGE   VERSION   LABELS
dev-k8s-master01.local   Ready    <none>   2d    v1.29.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=dev-k8s-master01.local,kubernetes.io/os=linux,node.kubernetes.io/node=
dev-k8s-node01.local     Ready    <none>   25h   v1.29.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=dev-k8s-node01.local,kubernetes.io/os=linux,node.kubernetes.io/node=
dev-k8s-node02.local     Ready    <none>   2d    v1.29.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=dev-k8s-node02.local,kubernetes.io/os=linux,node.kubernetes.io/node=

Label the node roles (optional)

kubectl label node dev-k8s-master01.local node-role.kubernetes.io/control-plane=
kubectl label node dev-k8s-node01.local node-role.kubernetes.io/work=
kubectl label node dev-k8s-node02.local node-role.kubernetes.io/work=

Label the node that should run ingress; the controller is pinned to it later

To scale ingress out, label additional nodes; to scale in, simply remove the label

kubectl label node dev-k8s-master01.local ingressroute=ingress-nginx

Switch ingress-nginx to a DaemonSet

vi ingress-nginx.yaml
Comment out the update strategy:

    #strategy:
    #rollingUpdate:
    # maxUnavailable: 1
    #type: RollingUpdate
apiVersion: apps/v1
#kind: Deployment
kind: DaemonSet

    #strategy:
    #rollingUpdate:
    # maxUnavailable: 1
    #type: RollingUpdate
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
      nodeSelector:
        kubernetes.io/os: linux
        ingressroute: ingress-nginx

Create the ingress

kubectl apply -f ingress-nginx.yaml

Check

kubectl get pods -owide -n ingress-nginx

NAME                                   READY   STATUS      RESTARTS   AGE     IP               NODE                     NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-dphsw   0/1     Completed   0          5m53s   172.17.50.202    dev-k8s-master01.local   <none>           <none>
ingress-nginx-admission-patch-djpcx    0/1     Completed   0          5m53s   172.22.227.72    dev-k8s-node02.local     <none>           <none>
ingress-nginx-controller-mgb5p         1/1     Running     0          3m26s   192.168.244.14   dev-k8s-master01.local   <none>           <none>

The ingress controller has been deployed to the designated node dev-k8s-master01.local

Verify

curl -Lv http://192.168.244.14:80

*   Trying 192.168.244.14:80...
* Connected to 192.168.244.14 (192.168.244.14) port 80 (#0)
> GET / HTTP/1.1
> Host: 192.168.244.14
> User-Agent: curl/7.76.1
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Date: Thu, 28 Mar 2024 10:00:56 GMT
< Content-Type: text/html
< Content-Length: 146
< Connection: keep-alive
< 
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host 192.168.244.14 left intact

Deploy internal and external ingress with helm

Optional installation

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx  
helm repo update

helm search repo ingress
helm pull ingress-nginx/ingress-nginx

If the network is unreliable, download the tarball directly:
wget https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-4.10.0/ingress-nginx-4.10.0.tgz

tar xvf ingress-nginx-4.10.0.tgz
cd ingress-nginx/

Check the images
cat values.yaml |grep image:

ingress-nginx/controller
ingress-nginx/opentelemetry
ingress-nginx/kube-webhook-certgen
defaultbackend-amd64

Check the registry and tags
cat values.yaml |grep -C 4 image:

registry: registry.k8s.io
Replace registry.k8s.io with k8s.mirror.nju.edu.cn

Point the images at the mirror

cat values.yaml | grep 'registry.k8s.io'

sed -n "/registry: registry.k8s.io/{s/registry.k8s.io/k8s.mirror.nju.edu.cn/p}" values.yaml
sed -i "/registry: registry.k8s.io/{s/registry.k8s.io/k8s.mirror.nju.edu.cn/}" values.yaml

Comment out the digests of the affected images

cat values.yaml | grep 'digest:'

    digest: sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c
  #     digest: ""
      digest: sha256:13bee3f5223883d3ca62fee7309ad02d22ec00ff0d7033e3e9aca7a9f60fd472
        digest: sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334
sed -n "/digest:/{s/digest:/#digest:/p}" values.yaml
sed -i "/digest:/{s/digest:/#digest:/}" values.yaml
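The digests must be disabled because the mirrored images do not carry the same digests as the upstream registry; the sed above comments them out in place. A local sketch of what that substitution does to a digest line (the sha256 value is shortened for illustration):

```shell
# Comment out a digest line the same way the sed above rewrites values.yaml
echo '    digest: sha256:42b3f0e5d084' | sed '/digest:/{s/digest:/#digest:/}'
# prints:     #digest: sha256:42b3f0e5d084
```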

cp values.yaml values_int.yaml

Create the external ingress

Note: each Ingress Controller in the same cluster must have a unique name
ingressClassResource:
name: nginx
controllerValue: "k8s.io/ingress-nginx"
ingressClass: nginx

Set up an external ingress whose namespace and class are ingress-nginx

Deploy it as a DaemonSet. Note that it occupies ports 80 and 443 on the host; changing containerPort and hostPort has no effect, and the internal and external ingress must not be co-located.
vi values.yaml

  # uncomment and enable
  allowSnippetAnnotations: true

  # use both the host's and the cluster's DNS
  dnsPolicy: ClusterFirstWithHostNet
  # container ports must match hostPort
  containerPort:
    http: 80
    https: 443

  # use the host's ports
  hostNetwork: true
  hostPort:
    enabled: true
    ports:
      http: 80
      https: 443
  # with hostNetwork, set to false so ingress status is reported via the node IP
  publishService:  
    enabled: false

  kind: DaemonSet
  # allow scheduling onto the master
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"

  ingressClassResource:
    name: nginx
    controllerValue: "k8s.io/ingress-nginx"
  ingressClass: nginx

  admissionWebhooks:
    enabled: true
    nodeSelector:
      kubernetes.io/os: linux
      ingressroute: ingress-nginx
    objectSelector:
      matchLabels:
        ingressname: nginx

With hostNetwork, the service can also be disabled entirely (optional)

  service:
    enabled: false

Or change the service type from LoadBalancer to ClusterIP

  service:
    type: LoadBalancer

Anti-affinity: do not co-locate ingress pods

    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app.kubernetes.io/name
            operator: In
            values:
            - ingress-nginx
          - key: app.kubernetes.io/instance
            operator: In
            values:
            - ingress-nginx
          - key: app.kubernetes.io/component
            operator: In
            values:
            - controller
        topologyKey: kubernetes.io/hostname

Pin to nodes carrying the label

  nodeSelector:
    kubernetes.io/os: linux
    ingressroute: ingress-nginx

Set the timezone and log to a host directory

  extraVolumeMounts:
  - name: timezone
    mountPath: /etc/localtime  
  - name: vol-ingress-logdir
    mountPath: /var/log/nginx
  extraVolumes:
  - name: timezone       
    hostPath:
      path: /usr/share/zoneinfo/Asia/Shanghai  
  - name: vol-ingress-logdir
    hostPath:
      path: /var/log/nginx
      type: DirectoryOrCreate

Create the log directory on every node
mkdir /var/log/nginx
chmod 777 /var/log/nginx

# Install
helm -n ingress-nginx install ingress-nginx ./ --create-namespace 

# Roll back a chart release
helm rollback ingress-nginx 1

# Uninstall
helm delete ingress-nginx -n ingress-nginx

# Upgrade
helm upgrade ingress-nginx ./ -f values.yaml -n ingress-nginx 
Release "ingress-nginx" has been upgraded. Happy Helming!
NAME: ingress-nginx
LAST DEPLOYED: Mon Apr  1 16:19:28 2024
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the load balancer IP to be available.
You can watch the status by running 'kubectl get service --namespace ingress-nginx ingress-nginx-controller --output wide --watch'

An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

kubectl get all -n ingress-nginx

NAME                                 READY   STATUS    RESTARTS      AGE
pod/ingress-nginx-controller-bcjsx   1/1     Running   8 (10m ago)   23m

NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.109.156.255   <pending>     80:31500/TCP,443:31471/TCP   23m
service/ingress-nginx-controller-admission   ClusterIP      10.108.123.13    <none>        443/TCP                      23m

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                      AGE
daemonset.apps/ingress-nginx-controller   1         1         1       1            1           ingressroute=ingress-nginx,kubernetes.io/os=linux   23m

kubectl get pod -n ingress-nginx -o wide

NAME                             READY   STATUS    RESTARTS        AGE   IP               NODE                     NOMINATED NODE   READINESS GATES
ingress-nginx-controller-bcjsx   1/1     Running   8 (9m31s ago)   22m   192.168.244.14   dev-k8s-master01.local   <none>           <none>

Handy debugging commands:

kubectl describe pod ingress-nginx-controller-bcjsx -n ingress-nginx
kubectl delete pod ingress-nginx-controller-bcjsx -n ingress-nginx
kubectl logs ingress-nginx-controller-bcjsx -n ingress-nginx
kubectl exec -it pod/ingress-nginx-controller-bcjsx -n ingress-nginx -- bash

Deploy an nginx app to test the external ingress

cd /root/k8s/app

Use a Deployment with hostPath volumes, pinned to node01 (via nodeSelector; spec.nodeName is an alternative)

cat > test-nginx-hostpath.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: test
  labels: {app: nginx}
spec:
  replicas: 1
  selector:
    matchLabels: {app: nginx}
  template:
    metadata:
      name: nginx
      labels: {app: nginx}
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: nginx
        image: docker.io/library/nginx:latest
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP        
        volumeMounts:
        - name: timezone
          mountPath: /etc/localtime  
        - name: vol-nginx-html
          mountPath: "/usr/share/nginx/html/"
        - name: vol-nginx-log
          mountPath: "/var/log/nginx/"
        - name: vol-nginx-conf
          mountPath: "/etc/nginx/conf.d/"
      volumes:
      - name: timezone       
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai  
      - name: vol-nginx-html
        hostPath:
          path: /nginx/html/
          type: DirectoryOrCreate
      - name: vol-nginx-log
        hostPath:
          path: /nginx/logs/
          type: DirectoryOrCreate
      - name: vol-nginx-conf
        hostPath:
          path: /nginx/conf.d/
          type: DirectoryOrCreate
      #nodeName: dev-k8s-node01.local 
      nodeSelector:
        kubernetes.io/hostname: dev-k8s-node01.local
EOF

When the ingress controller runs with hostNetwork, the backend Service can stay ClusterIP; no NodePort is needed

cat > svc-test-nginx.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: svc-test-nginx
  namespace: test
spec:
  ports:
  - port: 31080
    targetPort: http
    protocol: TCP
    name: http
  selector:
    app: nginx
  type: ClusterIP
EOF

Create an Ingress rule binding the ingress to the Service

Pod IPs and ClusterIPs are not stable, but the Service name is.
The namespace must match that of the Service.

cat > ingress-svc-test-nginx.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  namespace: test
  labels:
    app.kubernetes.io/name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx
  rules:
#  - host: wwwtest.k8s.local
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: svc-test-nginx
            port:
              name: http
EOF

Create the local directories on node01; the Pod will land on this machine because of the node pinning in the spec.

mkdir -p /nginx/{html,logs,conf.d}
# generate a home page
echo `hostname` > /nginx/html/index.html
echo `date` >> /nginx/html/index.html

Generate the ingress test page on node01

mkdir -pv /nginx/html/testpath/
echo "test "`hostname` > /nginx/html/testpath/index.html

Configure nginx on node01

cat > /nginx/conf.d/default.conf << EOF
server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;

    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

}
EOF

Create the Ingress resources

kubectl create namespace test
kubectl apply -f test-nginx-hostpath.yaml
kubectl delete -f test-nginx-hostpath.yaml

kubectl apply -f svc-test-nginx.yaml
kubectl delete -f svc-test-nginx.yaml

kubectl apply -f ingress-svc-test-nginx.yaml
kubectl delete -f ingress-svc-test-nginx.yaml

kubectl -n test describe ingress nginx
kubectl -n test  get svc 
kubectl -n test  get pod -o wide 

Pitfall: applying the Ingress may fail with the following error

Error from server (InternalError): error when creating "ingress-svc-test-nginx.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": tls: failed to verify certificate: x509: certificate is valid for kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster, kubernetes.default.svc.cluster.local, not ingress-nginx-controller-admission.ingress-nginx.svc

Workaround 1

The admission webhook validates Ingress resources as they are created. Deleting it is a quick temporary fix, but a misconfigured Ingress can then wedge the controller.

kubectl get ValidatingWebhookConfiguration
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

Workaround 2: restrict validation to a specific ingress name

The ingress name here should match the ingress-class=nginx used when the ingress-nginx controller was created.

Option A: match on an ingressname label

  objectSelector: 
    matchLabels:
      ingressname: nginx

Option B: match on namespace

    namespaceSelector: {}

The example below uses option A (note the objectSelector):
vi ingress-nginx.yaml

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.10.0
  name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
  - v1
  clientConfig:
    service:
      name: ingress-nginx-controller-admission
      namespace: ingress-nginx
      path: /networking/v1/ingresses
  failurePolicy: Fail
  objectSelector: 
    matchLabels:
      ingressname: nginx
  matchPolicy: Equivalent
  name: validate.nginx.ingress.kubernetes.io      

Exec into the pod to inspect

kubectl -n test  get pod -o wide 
NAME                     READY   STATUS    RESTARTS       AGE   IP              NODE                   NOMINATED NODE   READINESS GATES
nginx-848b895f57-hg2b7   1/1     Running   1 (147m ago)   19h   172.21.244.79   dev-k8s-node01.local   <none>           <none>

kubectl -n test get pod |grep nginx
kubectl -n test exec -it $(kubectl -n test get pod |grep nginx|awk '{print $1}') -- bash
cat /etc/nginx/conf.d/default.conf 
ls /usr/share/nginx/html/

curl http://localhost
dev-k8s-node01.local
Thu Mar 28 06:34:07 PM CST 2024

tail /var/log/nginx/access.log 
127.0.0.1 - - [29/Mar/2024:14:10:48 +0800] "GET / HTTP/1.1" 200 53 "-" "curl/7.88.1" "-"

From inside a test pod, the Service is reachable by its DNS name

# public internet
kubectl exec busybox-curl -n default -- curl -sLv http://www.baidu.com
# the bare service name does not resolve from another namespace
kubectl exec busybox-curl -n default -- curl -sLv http://svc-test-nginx:31080
# adding the namespace works
kubectl exec busybox-curl -n default -- curl -sLv http://svc-test-nginx.test:31080
# the fully qualified domain name works
kubectl exec busybox-curl -n default -- curl -sLv http://svc-test-nginx.test.svc.cluster.local:31080
# the Service ClusterIP works
kubectl exec busybox-curl -n default -- curl -sLv http://10.109.224.60:31080
# via the ingress on the node
kubectl exec busybox-curl -n default -- curl -sLv http://192.168.244.14:80/testpath/
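The resolution behavior above follows the cluster-DNS naming pattern `<service>.<namespace>.svc.<cluster-domain>`. A quick sketch of how the test URLs are composed (assuming the default cluster domain cluster.local):

```shell
# Compose in-cluster URLs for a Service (names taken from this guide)
svc=svc-test-nginx; ns=test; domain=cluster.local; port=31080
echo "http://${svc}.${ns}:${port}"                 # reachable across namespaces
echo "http://${svc}.${ns}.svc.${domain}:${port}"   # fully qualified form
```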

Deploy an internal-facing ingress

cp values.yaml values_int.yaml

Label the node so the internal ingress is scheduled onto it.

To scale the ingress out, just label additional nodes; to scale in, remove the label.

kubectl label node dev-k8s-node02.local ingressroute=int-ingress-nginx

Create the internal ingress

Set up an internal ingress with its own namespace and ingress class, int-ingress-nginx.

It is deployed as a DaemonSet. Note that it occupies ports 80 and 443 on the host (changing containerPort/hostPort has no effect), so do not co-locate the internal and external ingress on the same node.
vi values_int.yaml
Declare the label:

commonLabels: 
  ingressroute: int-ingress-nginx

Note: each ingress controller in a cluster must have a unique name

  ingressClassResource:
    name: int-ingress-nginx
    controllerValue: "k8s.io/int-ingress-nginx"
  ingressClass: int-ingress-nginx

With hostNetwork, the Service can also be disabled (optional)

  service:
    enabled: false

Enable the admission webhook to validate the int-ingress-nginx nginx configuration

  admissionWebhooks:
    enabled: true

    objectSelector: 
      matchLabels:
        ingressname: int-ingress-nginx

Schedule onto nodes carrying the label

  nodeSelector:
    kubernetes.io/os: linux
    ingressroute: int-ingress-nginx
# install
helm -n int-ingress-nginx install int-ingress-nginx -f values_int.yaml ./ --create-namespace 

# roll back a chart release
helm rollback int-ingress-nginx 1

# uninstall
helm delete int-ingress-nginx -n int-ingress-nginx

# upgrade
helm upgrade int-ingress-nginx ./ -f values_int.yaml -n int-ingress-nginx 

kubectl -n int-ingress-nginx get pod -o wide

kubectl -n int-ingress-nginx get all

NAME                                             TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/int-ingress-nginx-controller             LoadBalancer   10.101.15.142    <pending>     80:32404/TCP,443:30919/TCP   19s
service/int-ingress-nginx-controller-admission   ClusterIP      10.100.223.105   <none>        443/TCP                      19s

NAME                                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                                                     AGE
daemonset.apps/int-ingress-nginx-controller   0         0         0       0            0           ingressroute=int-ingress-nginx,kubernetes.io/os=linux   19s

Switch the dashboard to ingress access

Once the ingress is installed, create an Ingress for the dashboard and access it through the ingress.

cat > ingress-dashboard.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  namespace: kubernetes-dashboard
  labels:
    app.kubernetes.io/name: int-nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: int-ingress-nginx
  rules:
  - host: dashboard.k8s.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard-kong-proxy
            port:
              name: kong-proxy-tls
              #number: 443
EOF

Apply it

kubectl apply -f ingress-dashboard.yaml
kubectl get svc -n kubernetes-dashboard

kubectl -n int-ingress-nginx get pod -o wide

NAME                                 READY   STATUS    RESTARTS   AGE    IP               NODE                   NOMINATED NODE   READINESS GATES
int-ingress-nginx-controller-c99bg   1/1     Running   0          6m5s   192.168.244.16   dev-k8s-node02.local   <none>           <none>

Test with the domain name

Add the domain to the local hosts file:
dashboard.k8s.local

curl -H "Host:dashboard.k8s.local" http://192.168.244.16:80/
curl -k -H "Host:dashboard.k8s.local" https://192.168.244.16:443/

Add NAT port forwarding in the VirtualBox network settings:
127.0.0.1:1680 -> 192.168.244.16:80
127.0.0.1:16443 -> 192.168.244.16:443
After adding the domain to the hosts file, open it in a browser; note that HTTPS is required.

https://dashboard.k8s.local:16443/#/login

Note: this is a test environment open to everyone; in production add IP restrictions and the like.

About the default ingressClassName

When writing an Ingress, specify ingressClassName to state which entry point it uses.
If ingressClassName is omitted, you should define a default Ingress class.

spec:
  ingressClassName: nginx

About the default Ingress class

Some ingress controllers work without a default IngressClass being defined; for example, the Ingress-NGINX controller can be configured with the --watch-ingress-without-class flag. Defining a default IngressClass is still recommended.
If more than one IngressClass in a cluster is marked as default, the admission controller blocks creation of new Ingress objects that omit ingressClassName. To resolve this, make sure at most one IngressClass in the cluster is marked as default.

Check the arguments:
kubectl describe ds ingress-nginx-controller -n ingress-nginx
Note the flag --watch-ingress-without-class=true

Configure the default ingress class
wget https://raw.githubusercontent.com/kubernetes/website/main/content/zh-cn/examples/service/networking/default-ingressclass.yaml

Here the external nginx is made the default ingress class

cat > default-ingressclass.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
EOF

kubectl apply -f default-ingressclass.yaml

Inspect the external ingress class

kubectl describe IngressClass nginx

Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.10.0
              helm.sh/chart=ingress-nginx-4.10.0
              ingressroute=ingress-nginx
Annotations:  ingressclass.kubernetes.io/is-default-class: true
              meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: ingress-nginx
Controller:   k8s.io/ingress-nginx
Events:       <none>

The annotation ingressclass.kubernetes.io/is-default-class: true has been added

Inspect the internal ingress class

kubectl describe IngressClass int-ingress-nginx

Configure a global allowlist

Also rename the log files.
Allow access only from the node network, the Service CIDR, and the test IP range; all other IPs are denied.
If the ingress sits behind another layer, the realip module must also be enabled.

  whitelist-source-range: "127.0.0.1,192.168.244.0/24,10.96.0.0/12,223.2.3.0/24"
cat > int-ingress-nginx-ConfigMap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: int-ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.10.0
  name: int-ingress-nginx-controller
  namespace: int-ingress-nginx
data:
  allow-snippet-annotations: "true"
  whitelist-source-range: "127.0.0.1,192.168.244.0/24,10.96.0.0/12,223.2.3.0/24"
  access-log-path: "/var/log/nginx/access_int_ingress.log"
  error-log-path: "/var/log/nginx/error_int_ingress.log"

EOF

kubectl apply -f ./int-ingress-nginx-ConfigMap.yaml -n int-ingress-nginx

Check the rendered nginx config

kubectl exec -it pod/$(kubectl get pod -n int-ingress-nginx|grep ingress-nginx|awk '{print $1}') -n int-ingress-nginx -- cat /etc/nginx/nginx.conf

Install kubectl command-line completion

yum install bash-completion -y

source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Posted in 安装k8s/kubernetes.



k8s_安装14_vpa_hpa

Vertical Pod Autoscaler ( VPA )

The Vertical Pod Autoscaler (VPA) automatically adjusts a Pod's CPU and memory settings, known as vertical scaling.
VPA recommends CPU and memory values suited to the workload, saving the time spent estimating resource needs and making resource use more efficient.
https://kubernetes.io/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale/
https://kubernetes.io/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/

VPA vs. HPA
Fundamentally, VPA and HPA differ in how they scale. HPA scales capacity horizontally by adding or removing Pods. VPA scales capacity vertically by increasing or decreasing the CPU and memory resources of the containers in existing Pods.

VPA components
A VPA deployment has three main components: the VPA Recommender, the VPA Updater, and the VPA Admission Controller. Let's look at each.

VPA Recommender:
Monitors resource utilization and computes target values.
Looks at metric history, OOM events, and the VPA deployment spec, and suggests fair requests. Raises or lowers limits in line with the defined limit/request ratio.

VPA Updater:
Evicts Pods that need new resource limits.
If "updateMode: Auto" is defined, applies whatever the Recommender suggests.

VPA Admission Controller:
Whenever the Updater evicts and restarts a Pod, changes the CPU and memory settings (via a webhook) before the new Pod starts.
When the Vertical Pod Autoscaler runs with updateMode "Auto" and a Pod's resource requests need to change, it evicts the Pod. By Kubernetes' design, the only way to modify the resource requests of a running Pod is to recreate it.

Flow

VPA checks the resources Pods use while running, every 10s by default.
When a Pod's usage reaches a threshold, VPA tries to change the allocated memory or CPU.
VPA tries to update the Pod resource definition in the owning workload (e.g. the Deployment).
The Pod restarts, and the new resources apply to the recreated instance.

Usage notes

The same Deployment cannot use HPA and VPA at the same time; you can, however, combine VPA and HPA on custom and external metrics.
VPA updates cause Pods to be rebuilt, restarted, and possibly rescheduled.
In Auto mode, VPA can only be used for Pods running under a controller (e.g. a Deployment) that recreates evicted Pods. Using Auto mode on a Pod with no controller deletes the Pod without recreating it.
VPA uses an admission webhook; make sure it does not conflict with other webhooks.
VPA's performance has not been tested in large clusters.
VPA recommendations may exceed actual resource caps (e.g. node size, available capacity, quota), leaving Pods Pending and unschedulable.
Multiple VPAs configured against the same Pod cause undefined behavior.
VPA does not scale controllers themselves.

Known limitations

Whenever VPA updates Pod resources, the Pod is recreated, which recreates all its running containers. The Pod may be recreated on a different node.
VPA cannot guarantee that Pods it evicts or deletes to apply recommendations (in Auto and Recreate modes) are successfully recreated. Combining VPA with the Cluster Autoscaler partially addresses this.
VPA does not update the resources of Pods not running under a controller.
Currently the Vertical Pod Autoscaler should not be used alongside a Horizontal Pod Autoscaler (HPA) on CPU or memory. You can, however, combine VPA with HPA on custom and external metrics.
The VPA admission controller is an admission webhook. If you add other admission webhooks to the cluster, it is important to analyze how they interact and whether they may conflict. The ordering of admission controllers is defined by flags on the API server.
VPA reacts to most out-of-memory events, but not in all situations.
VPA performance has not been tested in large clusters.
VPA recommendations may exceed available resources (e.g. node size, available capacity, quota), leaving Pods Pending. Combining VPA with the Cluster Autoscaler partially addresses this.
Multiple VPA resources matching the same Pod have undefined behavior.
Since version 1.27, container CPU and memory can be resized in place without restarting the application.

Installation

git clone https://github.com/kubernetes/autoscaler.git
cd autoscaler/vertical-pod-autoscaler/deploy
sed -i 's/Always/IfNotPresent/g'  recommender-deployment.yaml
sed -i 's/Always/IfNotPresent/g'  admission-controller-deployment.yaml
sed -i 's/Always/IfNotPresent/g'  updater-deployment.yaml

Prepare the images

cat *.yaml|grep image:|sed -e 's/.*image: //'|sort|uniq   
registry.k8s.io/autoscaling/vpa-admission-controller:1.0.0
registry.k8s.io/autoscaling/vpa-recommender:1.0.0
registry.k8s.io/autoscaling/vpa-updater:1.0.0

# pull via the k8s.mirror.nju.edu.cn mirror in place of registry.k8s.io
docker pull k8s.mirror.nju.edu.cn/autoscaling/vpa-admission-controller:1.0.0
docker pull k8s.mirror.nju.edu.cn/autoscaling/vpa-recommender:1.0.0
docker pull k8s.mirror.nju.edu.cn/autoscaling/vpa-updater:1.0.0

docker tag k8s.mirror.nju.edu.cn/autoscaling/vpa-admission-controller:1.0.0 repo.k8s.local/registry.k8s.io/autoscaling/vpa-admission-controller:1.0.0
docker tag k8s.mirror.nju.edu.cn/autoscaling/vpa-recommender:1.0.0 repo.k8s.local/registry.k8s.io/autoscaling/vpa-recommender:1.0.0
docker tag k8s.mirror.nju.edu.cn/autoscaling/vpa-updater:1.0.0 repo.k8s.local/registry.k8s.io/autoscaling/vpa-updater:1.0.0

docker push repo.k8s.local/registry.k8s.io/autoscaling/vpa-admission-controller:1.0.0
docker push repo.k8s.local/registry.k8s.io/autoscaling/vpa-recommender:1.0.0
docker push repo.k8s.local/registry.k8s.io/autoscaling/vpa-updater:1.0.0

Point the image references in the YAML at the private registry

# dry run: print what would change
sed -n "/image:/{s/image: registry.k8s.io/image: repo.k8s.local\/registry.k8s.io/p}" *.yaml

# replace in place
sed -i "/image:/{s/image: registry.k8s.io/image: repo.k8s.local\/registry.k8s.io/}" *.yaml

# verify again
cat *.yaml|grep image:|sed -e 's/.*image: //'|sort|uniq
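The same sed substitution can be verified offline against a scratch file before touching the real manifests (registry names as used in this guide):

```shell
# try the rewrite on a throwaway copy first
tmp=$(mktemp)
echo '        image: registry.k8s.io/autoscaling/vpa-updater:1.0.0' > "$tmp"
sed -i "/image:/{s/image: registry.k8s.io/image: repo.k8s.local\/registry.k8s.io/}" "$tmp"
cat "$tmp"   # the image now points at repo.k8s.local/registry.k8s.io/...
rm -f "$tmp"
```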

Run the install script

cd autoscaler/vertical-pod-autoscaler/hack
Before running the install script, make sure the cluster's metrics-server is installed and openssl has been upgraded to 1.1.1 or newer.
./vpa-up.sh

deployment.apps/vpa-recommender created
Generating certs for the VPA Admission Controller in /tmp/vpa-certs.
Generating RSA private key, 2048 bit long modulus
..................................+++
.....................................................+++
e is 65537 (0x10001)
unknown option -addext
ERROR: Failed to create CA certificate for self-signing. If the error is "unknown option -addext", update your openssl version or deploy VPA from the vpa-release-0.8 branch.
deployment.apps/vpa-admission-controller created

This failure means openssl has not been upgraded. After upgrading, a successful run produces:

customresourcedefinition.apiextensions.k8s.io/verticalpodautoscalercheckpoints.autoscaling.k8s.io created
customresourcedefinition.apiextensions.k8s.io/verticalpodautoscalers.autoscaling.k8s.io created
clusterrole.rbac.authorization.k8s.io/system:metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:vpa-actor created
clusterrole.rbac.authorization.k8s.io/system:vpa-status-actor created
clusterrole.rbac.authorization.k8s.io/system:vpa-checkpoint-actor created
clusterrole.rbac.authorization.k8s.io/system:evictioner created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-actor created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-status-actor created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-checkpoint-actor created
clusterrole.rbac.authorization.k8s.io/system:vpa-target-reader created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-target-reader-binding created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-evictioner-binding created
serviceaccount/vpa-admission-controller created
serviceaccount/vpa-recommender created
serviceaccount/vpa-updater created
clusterrole.rbac.authorization.k8s.io/system:vpa-admission-controller created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-admission-controller created
clusterrole.rbac.authorization.k8s.io/system:vpa-status-reader created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-status-reader-binding created
deployment.apps/vpa-updater created
deployment.apps/vpa-recommender created
Generating certs for the VPA Admission Controller in /tmp/vpa-certs.
Generating RSA private key, 2048 bit long modulus (2 primes)
.........+++++
...........................................................................................................................................+++++
e is 65537 (0x010001)
Generating RSA private key, 2048 bit long modulus (2 primes)
................................................+++++
...................+++++
e is 65537 (0x010001)
Signature ok
subject=CN = vpa-webhook.kube-system.svc
Getting CA Private Key
Uploading certs to the cluster.
secret/vpa-tls-certs created
Deleting /tmp/vpa-certs.
deployment.apps/vpa-admission-controller created
service/vpa-webhook created

kubectl -n kube-system get pods|grep vpa

kubectl get po -n kube-system

vpa-admission-controller-55f6c45765-k7w5c    1/1     Running   0               112s
vpa-recommender-7d974b6444-dhwrk             1/1     Running   0               114s
vpa-updater-5ff957f8cf-q2hnw                 1/1     Running   0               114s

kubectl get customresourcedefinition|grep verticalpodautoscalers

verticalpodautoscalers.autoscaling.k8s.io             2024-01-02T09:48:59Z

Removal

Note: if you stop running VPA in the cluster, the resource requests of Pods already modified by VPA do not change, but any new Pods get the resources defined in their controller (e.g. Deployment or ReplicaSet), not the earlier VPA recommendations.
To stop using Vertical Pod Autoscaling in the cluster:
If running on GKE, clean up the role binding created in the prerequisites:

kubectl delete clusterrolebinding myname-cluster-admin-binding

Delete the VPA components:

./hack/vpa-down.sh

VPA example

Once installed, the system is ready to recommend and set resource requests for your Pods. To use it, insert a VerticalPodAutoscaler resource for each controller whose resource needs you want computed automatically; most commonly this is a Deployment. VPA has four run modes:

"Auto": VPA assigns resource requests at Pod creation and updates them on existing Pods using the preferred update mechanism. Currently this is equivalent to "Recreate" (see below). Once restart-free ("in-place") updates of Pod requests are available, "Auto" may adopt them as its preferred mechanism. Note: this VPA feature is experimental and can cause application downtime; when a running Pod's resources fall short of the VPA recommendation, the Pod is evicted and redeployed with sufficient resources.
"Recreate": VPA assigns resource requests at Pod creation and updates existing Pods by evicting them when the requested resources differ significantly from the new recommendation (respecting the Pod Disruption Budget, if defined). Use this mode rarely, only when you must ensure Pods restart whenever resource requests change; otherwise prefer "Auto", which can take advantage of restart-free updates once they are available. Note: this VPA feature is experimental and can cause application downtime.
"Initial": VPA assigns resource requests only at Pod creation and never changes them later.
"Off": VPA never changes a Pod's resource requirements automatically. Recommendations are still computed and can be inspected on the VPA object. This mode only produces recommendations and does not update Pods.

Notes

VPA needs at least two healthy Pods to work.
The default minimum memory allocation is 250MiB: regardless of configuration, VPA allocates at least 250MiB of memory. This default can be changed globally, but for applications that use little memory it wastes resources.
Running VPA in "Auto" mode in production is not recommended; set updateMode to "Off" and consume only the recommendations.

Note that the VPA API version is autoscaling.k8s.io/v1

kubectl api-resources |grep autoscaling

horizontalpodautoscalers           hpa                autoscaling/v2                         true         HorizontalPodAutoscaler
verticalpodautoscalercheckpoints   vpacheckpoint      autoscaling.k8s.io/v1                  true         VerticalPodAutoscalerCheckpoint
verticalpodautoscalers             vpa                autoscaling.k8s.io/v1                  true         VerticalPodAutoscaler

Watch the resources

kubectl get vpa -n test -w
kubectl top pod -n test

Generate sustained load with an endless loop

Test against the existing openresty service

kubectl exec -it pod/test-pod-0 -n test -- /bin/sh  -c 'while true; do curl -k http://svc-openresty.test.svc:31080; done '

Load testing with Apache ab

The loop does not generate enough load, so build an Apache ab image

mkdir alpine_apachebench
vi Dockerfile

FROM alpine
RUN apk update                \
&& apk add apache2-utils  \
&& rm -rf /var/cache/apk/*

Build it and push to the private registry

cd ..
docker build -f alpine_apachebench/Dockerfile -t apachebench:v1 .
docker tag apachebench:v1 repo.k8s.local/library/apachebench:v1
docker push repo.k8s.local/library/apachebench:v1

Load test from a container run with docker

docker run -it --rm apachebench:v1
wget http://192.168.244.5:80/showvar
ab -n 10000 -c 100 http://192.168.244.5:80/showvar
ab -t 120 -c 100 http://192.168.244.5:80/showvar

An ephemeral test container inside k8s

kubectl run busybox --image repo.k8s.local/library/apachebench:v1 --restart=Never --rm -it busybox -- sh
wget http://svc-openresty.test.svc:31080/showvar
ab -n 10000 -c 100 http://svc-openresty.test.svc:31080/showvar

Test with the official example image

wget https://k8s.io/examples/application/php-apache.yaml

cat php-apache.yaml|grep image:
# replace registry.k8s.io with the k8s.mirror.nju.edu.cn mirror

docker pull k8s.mirror.nju.edu.cn/hpa-example
docker tag k8s.mirror.nju.edu.cn/hpa-example:latest repo.k8s.local/registry.k8s.io/hpa-example:latest
docker push repo.k8s.local/registry.k8s.io/hpa-example:latest
sed -i "/image:/{s/image: registry.k8s.io/image: repo.k8s.local\/registry.k8s.io/}" php-apache.yaml

kubectl apply -f php-apache.yaml -n test
kubectl get pod -n test
kubectl get svc -n test

Load test with apachebench

kubectl run busybox --image repo.k8s.local/library/apachebench:v1 --restart=Never --rm -it busybox -- sh
wget -O index.html http://php-apache.test.svc 
ab -n 10000 -c 100 http://php-apache.test.svc/

kubectl top pod -n test

NAME                                     CPU(cores)   MEMORY(bytes)           
php-apache-86b74c667b-pb6cg              199m         110Mi      

kubectl get vpa -n test vpa-test-php-apache

NAME                  MODE   CPU    MEM     PROVIDED   AGE
vpa-test-php-apache   Auto   247m   100Mi   True       14m

kubectl describe vpa -n test vpa-test-php-apache

Name:         vpa-test-php-apache
Namespace:    test
Labels:       app.kubernetes.io/instance=vpa
Annotations:  <none>
API Version:  autoscaling.k8s.io/v1
Kind:         VerticalPodAutoscaler
Metadata:
  Creation Timestamp:  2024-01-03T06:33:54Z
  Generation:          1
  Resource Version:    15495088
  UID:                 8b2df56f-88be-4201-b9fb-e42fa4eac284
Spec:
  Resource Policy:
    Container Policies:
      Container Name:  php-apache
      Max Allowed:
        Cpu:     700m
        Memory:  100Mi
      Min Allowed:
        Cpu:     100m
        Memory:  20Mi
  Target Ref:
    API Version:  apps/v1
    Kind:         Deployment
    Name:         php-apache
  Update Policy:
    Update Mode:  Auto
Status:
  Conditions:
    Last Transition Time:  2024-01-03T06:34:04Z
    Status:                True
    Type:                  RecommendationProvided
  Recommendation:
    Container Recommendations:
      Container Name:  php-apache
      Lower Bound:
        Cpu:     100m
        Memory:  100Mi
      Target:
        Cpu:     247m
        Memory:  100Mi
      Uncapped Target:
        Cpu:     247m
        Memory:  262144k
      Upper Bound:
        Cpu:     700m
        Memory:  100Mi
Events:          <none>

In the Recommendation section you can see:
Target: the actual values VPA will use when it evicts the current Pod and creates a new one.
Lower Bound: the floor that triggers a resize; if your Pod's utilization stays below these values, VPA evicts it and scales it down.
Upper Bound: the ceiling that triggers the next resize; if your Pod's utilization goes above these values, VPA evicts it and scales it up.
Uncapped Target: the target utilization that would apply if you had set no min/max bounds on the VPA.

kubectl top pod -n test

NAME                                     CPU(cores)   MEMORY(bytes)              
php-apache-86b74c667b-mnpdw              494m         76Mi            
php-apache-86b74c667b-q2qwj              496m         64Mi  

After changing replicas: to 2, the Pod spec shows the resources were adjusted automatically (the CPU limit raised from 200m to 494m), completing the vertical scale-up:

      resources:
        limits:
          cpu: 494m
        requests:
          cpu: 247m
          memory: 100Mi

==============

HPA: horizontal autoscaling

HPA adjusts the number of Pod replicas; it is the most commonly used autoscaling component.
It addresses workloads whose load fluctuates widely.
It relies on the metrics-server component to collect Pod metrics, then scales Pods up or down according to a preconfigured policy.
metrics-server by default supports only CPU- and memory-based scaling policies.
To scale on custom metrics (e.g. QPS), additionally install prometheus-adapter, which converts custom metrics into metrics the k8s apiserver can recognize.
The HPA controller ships in the default k8s controller-manager.
Kubernetes implements horizontal Pod autoscaling as an intermittently running control loop, not a continuous process. The interval is set by the kube-controller-manager flag --horizontal-pod-autoscaler-sync-period (default 15 seconds).
Scale-down must wait out a cooldown window (default 5 min), set via --horizontal-pod-autoscaler-downscale-stabilization.

Since kubernetes 1.9, HPA can also scale StatefulSets. Horizontal Pod autoscaling does not apply to objects that cannot be scaled (e.g. DaemonSets).

Scaling ratio

desiredReplicas = ceil[currentReplicas * (currentMetricValue / desiredMetricValue)]

Example 1: a Deployment currently has replicas set to 2, a CPU request of 100m (desiredMetricValue), and the current CPU metric from the metrics server is 200m (currentMetricValue).
100m means 0.1 CPU.
Then desiredReplicas = ceil[2 * (200 / 100)] = 4: the number of running Pods doubles, ratio = 2.
If the current CPU metric were 50m,
then desiredReplicas = ceil[2 * (50 / 100)] = 1: the number of running Pods halves, ratio = 0.5.
When the computed ratio is close to 1, no change is made, to avoid over-scaling in either direction; by default a deviation under 0.1 (the tolerance) is ignored, i.e. a ratio between 0.9 and 1.1 causes no change.
This default can be changed with --horizontal-pod-autoscaler-tolerance.
When targetAverageValue or targetAverageUtilization is specified, currentMetricValue is computed by averaging over all Pods of the target (e.g. the Deployment).
Pods that are terminating or failed are not counted. The average is computed twice:
the first pass excludes Pods that are not ready or are missing metrics from the Pod total;
the second pass includes them.
A ratio is derived from each pass; if the second ratio reverses the scale direction, or falls within the tolerance, scaling is abandoned, otherwise the adjustment uses the second ratio.
If the HPA specifies multiple target metrics, the calculation above is repeated per metric, and the largest desiredReplicas wins.
Finally, scale-up happens as fast as possible, while scale-down is gradual, with a default interval of 5 minutes, configurable via --horizontal-pod-autoscaler-downscale-stabilization.
Within those 5 minutes, a scale-down value is computed every 15 seconds, and the largest of those results becomes the final scale-down value.
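The arithmetic above can be sketched in shell (a toy calculation with the numbers from example 1; `desired` and `tolerance_check` are hypothetical helper names, not kubectl features):

```shell
# desiredReplicas = ceil(currentReplicas * currentMetric / desiredMetric)
desired() {
  awk -v r="$1" -v cur="$2" -v tgt="$3" \
    'BEGIN { d = r * cur / tgt; v = (d == int(d)) ? d : int(d) + 1; print v }'
}
# a ratio inside (0.9, 1.1) is within the default 0.1 tolerance: no scaling
tolerance_check() {
  awk -v r="$1" 'BEGIN { s = (r > 0.9 && r < 1.1) ? "skip" : "scale"; print s }'
}
desired 2 200 100      # -> 4 (ratio 2.0, scale up)
desired 2 50 100       # -> 1 (ratio 0.5, scale down)
tolerance_check 1.05   # -> skip
```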

Metric categories

HPA (since autoscaling/v2beta1) supports four metric types:

Resource: any k8s system resource, i.e. cpu and memory. Usually only cpu is used; memory is unreliable.
Pods: custom metrics provided by the Pods themselves.
Object: metrics provided by another object rather than the Pod, such as an Ingress.
External: custom metrics from an external system.

autoscaling API versions

Since k8s 1.23, autoscaling/v2 (formerly autoscaling/v2beta2) is GA; use autoscaling/v2 instead of the beta APIs.

hpa

vi hpa.yaml

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-test-openresty
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: openresty
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50

kubectl apply -f hpa.yaml

Test

kubectl exec -it pod/test-pod-0 -n test -- curl -k http://svc-openresty.test.svc:31080
kubectl exec -it pod/test-pod-0 -n test -- /bin/sh  -c 'while true; do curl -k http://svc-openresty.test.svc:31080; done '

kubectl get hpa -n test -w
kubectl top pod -n test 

Creating the HPA from the command line

kubectl autoscale deployment php-apache -n test --cpu-percent=50 --min=1 --max=4
kubectl get hpa php-apache -n test
The command creates an HPA named "php-apache", the same name as the Deployment.
It targets the Deployment "php-apache", with replicas ranging from a minimum of 1 to a maximum of 4 and a target CPU utilization of 50%. Since the CPU request was set to 200m above, this translates to a target average CPU value of 100m.
The command can only be run once; running it again fails with "AlreadyExists":
Error from server (AlreadyExists): horizontalpodautoscalers.autoscaling "php-apache" already exists
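As a sanity check on the 50%-of-200m figure above (a throwaway calculation, not a kubectl feature):

```shell
# target average CPU value = request * target utilization: 200m * 50% = 100m
awk 'BEGIN { printf "%dm\n", 200 * 50 / 100 }'
```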

kubectl delete hpa/php-apache -n test

View the HPA "php-apache"
kubectl get hpa/php-apache -n test

NAME                  REFERENCE                    TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
php-apache            Deployment/php-apache        <unknown>/50%   1         4         0          7s

Load test with apachebench

kubectl run busybox --image repo.k8s.local/library/apachebench:v1 --restart=Never --rm -it busybox -- sh
wget -O index.html http://php-apache.test.svc 
ab -n 1000 -c 100 http://php-apache.test.svc/

kubectl get hpa/hpa-test-php-apache -n test
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
hpa-test-php-apache Deployment/php-apache 0%/50% 1 3 1 10m

kubectl exec -it pod/test-pod-0 -n test -- /bin/sh -c 'while true; do curl -k http://php-apache.test.svc/; done'

kubectl get hpa/hpa-test-php-apache -n test

kubectl get hpa/hpa-test-php-apache -n test
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
hpa-test-php-apache Deployment/php-apache 75%/50% 1 8 3 12m

kubectl get hpa/hpa-test-php-apache -n test
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
hpa-test-php-apache Deployment/php-apache 56%/50% 1 8 5 13m

kubectl get hpa/hpa-test-php-apache -n test
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
hpa-test-php-apache Deployment/php-apache 42%/50% 1 8 6 13m

Based on the official image, run 1 to 8 replicas; scale-up adds up to 2 Pods every 30 seconds, scale-down removes one per minute, newest Pods first.
vi hpa.yaml

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-test-php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 8
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  behavior:
    scaleDown:
      #selectPolicy: Disabled
      stabilizationWindowSeconds: 60
      policies:
      - type: Percent
        value: 20
        periodSeconds: 60
      - type: Pods
        value: 1
        periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
      - type: Percent
        value: 20
        periodSeconds: 30
      - type: Pods
        value: 2
        periodSeconds: 30
      selectPolicy: Max

Posted in 安装k8s/kubernetes.



k8s_安装14_helm_redis

What is Helm

Helm is a package manager for K8s. It manages a set of YAML manifests as a unit and makes them efficiently reusable; like yum or apt-get on Linux, it lets us install, manage, and uninstall K8s applications quickly and conveniently.
Helm is built on Go's template language: users only provide the prescribed directory structure and template files. At deploy time the Helm template engine renders them into real K8s resource manifests and deploys them to the cluster in the correct order.

Helm has three key concepts: Chart, Repository, and Release.
A Chart is a Helm package. It contains all the resource definitions needed to run an application, tool, or service inside a K8s cluster; think of it as the RPM in yum.

A Repository is where Charts are stored and shared, comparable to a Maven repository.

A Release is an instance of a Chart running in a K8s cluster; a single Chart can be installed into the same cluster many times. If a Chart is the template set up on an assembly line, each Release is a product built from that template.

As the package manager for K8s, Helm creates a new Release each time a Chart is installed into the cluster, and you can find the Charts you need in Helm Repositories. Helm streamlines deployment by replacing the string of kubectl commands that used to follow manifest writing, centralizing the deployment's configurable options, and simplifying later upgrades and maintenance.

Helm architecture

The Helm client talks to the K8s apiserver over REST+JSON to manage Deployments, Services, and other resources. The client itself needs no database; it stores its state in Secrets inside the K8s cluster.

Helm chart directory layout

★ The templates/ directory holds the template files. Helm runs every file in this directory through its template rendering engine and sends the collected results to K8s.

★ values.yaml is just as important to the templates: it holds a Chart's default values, which users can override with new values when running helm install or helm upgrade.

★ Chart.yaml contains the metadata describing the Chart; this information can be referenced from the templates.

★ _helpers.tpl holds template definitions that can be reused across the Chart.

★ The remaining files, such as deployment.yaml, service.yaml, and ingress.yaml, are the templates used to generate the K8s manifests. By default Helm sends the generated resources to K8s in the following order:

Namespace -> NetworkPolicy -> ResourceQuota -> LimitRange -> PodSecurityPolicy -> PodDisruptionBudget -> ServiceAccount -> Secret -> SecretList -> ConfigMap -> StorageClass -> PersistentVolume -> PersistentVolumeClaim -> CustomResourceDefinition -> ClusterRole -> ClusterRoleList -> ClusterRoleBinding -> ClusterRoleBindingList -> Role -> RoleList -> RoleBinding -> RoleBindingList -> Service -> DaemonSet -> Pod -> ReplicationController -> ReplicaSet -> Deployment -> HorizontalPodAutoscaler -> StatefulSet -> Job -> CronJob -> Ingress -> APIService
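Internally this ordering is just a fixed weight table keyed by resource kind. A toy shell lookup over a small subset of the kinds above (the list and positions here are illustrative, not Helm's actual table):

```shell
# Helm-style install ordering: sort kinds by their position in a fixed list
order="Namespace ServiceAccount ConfigMap Service Deployment Ingress"

pos() {  # print the 1-based position of kind $1 in $order
  echo "$order" | tr ' ' '\n' | grep -n "^$1$" | cut -d: -f1
}

for kind in Deployment ConfigMap Namespace Service; do
  printf '%s %s\n' "$(pos "$kind")" "$kind"
done | sort -n | awk '{print $2}'
# Namespace
# ConfigMap
# Service
# Deployment
```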

Installing Helm 3

Many foreign registries, such as gcr.io, are unreachable from within China, so using the Aliyun source is recommended: https://developer.aliyun.com/hub
Helm 3 no longer depends on Tiller, and a Release name can be reused across namespaces.

Install Helm

Helm 3 needs no Tiller: download the Helm binary, extract it onto your $PATH, and it is ready to use.
https://github.com/helm/

cd /opt && wget https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
wget https://mirrors.huaweicloud.com/helm/v3.13.2/helm-v3.13.2-linux-amd64.tar.gz

mkdir tmp
tar xf helm-v3.13.2-linux-amd64.tar.gz -C ./tmp
cp tmp/linux-amd64/helm /usr/local/bin/
helm version --short
rm -rf tmp

helm version --short
v3.13.2+g2a2fb3b

Helm shell completion

Load the completions in the current shell session:

source <(helm completion bash)

To load the completions in every new session, do the following once:
vi /etc/profile

source <(kubectl completion bash)
source <(helm completion bash) # add this line

source /etc/profile

Verify Helm

helm version 
version.BuildInfo{Version:"v3.13.2", GitCommit:"2a2fb3b98829f1e0be6fb18af2f6599e0f4e8243", GitTreeState:"clean", GoVersion:"go1.20.10"}

Configure China-local Helm repositories

List the current repositories

helm repo list

Add the Aliyun repository

helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

Other repositories

helm repo add bitnami https://charts.bitnami.com/bitnami

Remove the incubator repository:

helm repo remove incubator

Update the chart repositories:

helm repo update

Finding charts

https://artifacthub.io/packages/helm/bitnami/mysql

Searches must specify either repo or hub

helm search repo memcached

Search charts in a specific chart repository:

helm search repo aliyun | grep <name>

Find charts above a given version (this matches the chart version, not the app version)

helm search repo mysql --version ^1.0

Show chart details

helm show chart aliyun/mysql

Example: installing an application with Helm 3

helm search hub guestbook
helm install guestbook apphub/guestbook

Download and extract a chart into the current directory

helm pull bitnami/redis --untar

Helm command reference

completion  # generate shell autocompletion scripts (bash or zsh)
create      # create a new chart with the given name
delete      # delete the named release from Kubernetes
dependency  # manage a chart's dependencies
fetch       # download a chart from a repository and (optionally) unpack it locally
get         # download a named release
help        # list all help topics
history     # fetch release history
home        # print the location of HELM_HOME
init        # initialize Helm on both the client and the server
inspect     # inspect chart details
install     # install a chart archive
lint        # run a syntax check against a chart
list        # list releases
package     # package a chart directory into a chart archive
plugin      # add, list, or remove helm plugins
repo        # add, list, remove, update, and index chart repositories
reset       # uninstall Tiller from the cluster
rollback    # roll back a release to a previous revision
search      # search chart repositories for a keyword
serve       # start a local http web server
status      # show the status of the named release
template    # render chart templates locally
test        # test a release
upgrade     # upgrade a release
verify      # verify that the chart at the given path has been signed and is valid
version     # print the client/server version information
dep         # analyze a chart and download its dependencies

Deploy a chart with a specified values.yaml

helm install els1 -f values.yaml stable/elasticsearch

Upgrade a chart

helm upgrade --set mysqlRootPassword=passwd db-mysql stable/mysql

helm upgrade go2cloud-api-doc go2cloud-api-doc/

Roll back a release

helm rollback db-mysql 1

Delete a release

helm delete db-mysql

Render the templates and print the output without installing

helm install/upgrade xxx --dry-run --debug

List charts in a repository

helm search repo bitnami
helm search repo aliyun

Hands-on: installing Redis with a Helm chart

helm search repo redis

Pull the redis chart locally

helm pull aliyun/redis

helm pull bitnami/redis
tar -xvf redis-18.4.0.tgz
cat redis/values.yaml

Inspect the image addresses

cat redis/values.yaml |grep -C 3 image:

  registry: repo.k8s.local/docker.io
  repository: bitnami/redis
  tag: 7.2.3-debian-11-r1
--
  ##     image: your-image
  ##     imagePullPolicy: Always
  ##     ports:
  ##       - name: portname
--
  ##    image: your-image
  ##    imagePullPolicy: Always
  ##    command: ['sh', '-c', 'echo "hello world"']
  ##
--
  ##     image: your-image
  ##     imagePullPolicy: Always
  ##     ports:
  ##       - name: portname
--
  ##    image: your-image
  ##    imagePullPolicy: Always
  ##    command: ['sh', '-c', 'echo "hello world"']
  ##
--
  image:
    registry: repo.k8s.local/docker.io
    repository: bitnami/redis-sentinel
    tag: 7.2.3-debian-11-r1
--
  image:
    registry: repo.k8s.local/docker.io
    repository: bitnami/redis-exporter
    tag: 1.55.0-debian-11-r2
--
  image:
    registry: repo.k8s.local/docker.io
    repository: bitnami/os-shell
    tag: 11-debian-11-r91
--
  image:
    registry: repo.k8s.local/docker.io
    repository: bitnami/os-shell
    tag: 11-debian-11-r91

Prepare the images in the private registry

docker pull bitnami/redis:7.2.3-debian-11-r1
docker pull bitnami/redis-sentinel:7.2.3-debian-11-r1
docker pull bitnami/redis-exporter:1.55.0-debian-11-r2
docker pull bitnami/os-shell:11-debian-11-r91

docker tag docker.io/bitnami/redis:7.2.3-debian-11-r1 repo.k8s.local/docker.io/bitnami/redis:7.2.3-debian-11-r1
docker tag docker.io/bitnami/redis-sentinel:7.2.3-debian-11-r1 repo.k8s.local/docker.io/bitnami/redis-sentinel:7.2.3-debian-11-r1
docker tag docker.io/bitnami/redis-exporter:1.55.0-debian-11-r2 repo.k8s.local/docker.io/bitnami/redis-exporter:1.55.0-debian-11-r2
docker tag docker.io/bitnami/os-shell:11-debian-11-r91 repo.k8s.local/docker.io/bitnami/os-shell:11-debian-11-r91

docker push repo.k8s.local/docker.io/bitnami/redis:7.2.3-debian-11-r1
docker push repo.k8s.local/docker.io/bitnami/redis-sentinel:7.2.3-debian-11-r1
docker push repo.k8s.local/docker.io/bitnami/redis-exporter:1.55.0-debian-11-r2
docker push repo.k8s.local/docker.io/bitnami/os-shell:11-debian-11-r91

Rewrite the image registries to the private registry

cat redis/values.yaml | grep 'registry: docker.io'
cat redis/values.yaml | grep 'registry: '
sed -n "/registry: docker.io/{s/docker.io/repo.k8s.local\/docker.io/p}" redis/values.yaml
sed -i "/registry: docker.io/{s/docker.io/repo.k8s.local\/docker.io/}" redis/values.yaml
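The pattern above uses `sed -n '…/p'` to preview the substitution and `sed -i` to apply it in place; a self-contained sketch on a throwaway snippet (the file path and contents here are illustrative):

```shell
# A minimal values snippet to rewrite (illustrative content)
cat > /tmp/values-demo.yaml <<'EOF'
image:
  registry: docker.io
  repository: bitnami/redis
EOF

# Preview the rewrite without touching the file
sed -n "/registry: docker.io/{s/docker.io/repo.k8s.local\/docker.io/p}" /tmp/values-demo.yaml
#   registry: repo.k8s.local/docker.io

# Apply in place, then verify
sed -i "/registry: docker.io/{s/docker.io/repo.k8s.local\/docker.io/}" /tmp/values-demo.yaml
grep 'registry:' /tmp/values-demo.yaml
#   registry: repo.k8s.local/docker.io
```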

Install

Choose a cluster mode

Supported modes:

  • standalone
  • master-replica (replication)
  • sentinel
  • cluster

Standalone mode

cat redis/values.yaml |grep architecture:
sed -rn '/architecture: /{s/architecture: (.*)/architecture: standalone/p}' redis/values.yaml
sed -ri '/architecture: /{s/architecture: (.*)/architecture: standalone/}' redis/values.yaml

Replication (master-replica) mode

cat redis/values.yaml |grep architecture:
sed -rn '/architecture: /{s/architecture: (.*)/architecture: replication/p}' redis/values.yaml
sed -ri '/architecture: /{s/architecture: (.*)/architecture: replication/}' redis/values.yaml
Set the replica count

The default is 3; here we change it to 2.

cat redis/values.yaml |grep replicaCount:
sed -rn '/replicaCount: /{s/replicaCount: (.*)/replicaCount: 2/p}' redis/values.yaml
sed -ri '/replicaCount: /{s/replicaCount: (.*)/replicaCount: 2/}' redis/values.yaml
Choose the Service type

bitnami/redis defaults to ClusterIP; enable NodePort if external access is needed.
cat redis/values.yaml |grep "service:"

master:
  service:
    type: NodePort
replica:
  service:
    type: NodePort

Configure the StorageClass and password

For dynamic PV provisioning, see the earlier storage articles.

cat redis/values.yaml |grep "storageClass:"
sed -rn '/storageClass: /{s/storageClass: (.*)/storageClass: "managed-nfs-storage"/p}' redis/values.yaml
sed -ri '/storageClass: /{s/storageClass: (.*)/storageClass: "managed-nfs-storage"/}' redis/values.yaml

vi redis/values.yaml
storageClass: "managed-nfs-storage"

Optional: configure a password

cat redis/values.yaml |grep "password:"
sed -rn '/password: /{s/password: (.*)/password: "123456"/p}' redis/values.yaml
sed -ri '/password: /{s/password: (.*)/password: "123456"/}' redis/values.yaml

vi redis/values.yaml
password: "123456"

Inspect values

helm show values ./redis|grep replicaCount:

Check the syntax first

helm lint ./redis
==> Linting ./redis

1 chart(s) linted, 0 chart(s) failed
Install into the test namespace

Here we install in replication (master-replica) mode:
helm install luo-redis ./redis -n test --create-namespace

NAME: luo-redis
LAST DEPLOYED: Mon Dec  4 09:54:42 2023
NAMESPACE: test
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: redis
CHART VERSION: 18.4.0
APP VERSION: 7.2.3

** Please be patient while the chart is being deployed **

Redis® can be accessed on the following DNS names from within your cluster:

    luo-redis-master.test.svc.cluster.local for read/write operations (port 6379)
    luo-redis-replicas.test.svc.cluster.local for read-only operations (port 6379)

To get your password run:

    export REDIS_PASSWORD=$(kubectl get secret --namespace test luo-redis -o jsonpath="{.data.redis-password}" | base64 -d)

To connect to your Redis® server:

1. Run a Redis® pod that you can use as a client:

   kubectl run --namespace test redis-client --restart='Never'  --env REDIS_PASSWORD=$REDIS_PASSWORD  --image repo.k8s.local/docker.io/bitnami/redis:7.2.3-debian-11-r1 --command -- sleep infinity

   Use the following command to attach to the pod:

   kubectl exec --tty -i redis-client \
   --namespace test -- bash

2. Connect using the Redis® CLI:
   REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h luo-redis-master
   REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h luo-redis-replicas

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace test svc/luo-redis-master 6379:6379 &
    REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h 127.0.0.1 -p 6379
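The password lookup in the NOTES above is a jsonpath read of a Secret field followed by a base64 decode; Kubernetes stores Secret data base64-encoded. The decode step can be reproduced offline (the password value here is illustrative):

```shell
# Secrets hold base64-encoded values; round-trip an illustrative password
encoded=$(printf '%s' '123456' | base64)
echo "$encoded"                      # MTIzNDU2
printf '%s' "$encoded" | base64 -d   # 123456
```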

Inspect

kubectl get all -n test
kubectl get pod -n test
kubectl get pods,svc -n test -owide

kubectl get pv,pvc -n test
kubectl describe pvc redis-data-luo-redis-master-0 -n test

Delete the PVCs

kubectl patch pvc redis-data-luo-redis-master-0  -p '{"metadata":{"finalizers":null}}' -n test
kubectl patch pvc redis-data-luo-redis-replicas-0  -p '{"metadata":{"finalizers":null}}' -n test

kubectl delete pvc redis-data-luo-redis-master-0 --grace-period=0 --force -n test
kubectl delete pvc redis-data-luo-redis-replicas-0 --grace-period=0 --force -n test

kubectl describe pod luo-redis-master-0 -n test
kubectl logs -f luo-redis-master-0 -n test

kubectl describe pod luo-redis-replicas-0 -n test

Upgrade

helm upgrade luo-redis ./redis -n test

View history

helm history luo-redis -n test

Roll back to a specific revision

helm rollback luo-redis 3 -n test

Uninstall

helm delete luo-redis -n test

Inspect the data files

On the NFS server:

cd /nfs/k8s/dpv/test-redis-data-luo-redis-master-0-pvc-9d72c5b3-4cc4-444c-89f5-20487ed694d6
ll appendonlydir/
total 8
-rw-r--r--. 1 1001 root 88 Dec  4 09:54 appendonly.aof.1.base.rdb
-rw-r--r--. 1 1001 root  0 Dec  4 09:54 appendonly.aof.1.incr.aof
-rw-r--r--. 1 1001 root 88 Dec  4 09:54 appendonly.aof.manifest

Accessing Redis from a client

Option 1: no redis-cli available
Follow the post-install NOTES to create a client pod, connect to the Redis master, and view the replication info:

role:master
connected_slaves:3
slave0:ip=luo-redis-replicas-0.luo-redis-headless.test.svc.cluster.local,port=6379,state=online,offset=560,lag=1
slave1:ip=luo-redis-replicas-1.luo-redis-headless.test.svc.cluster.local,port=6379,state=online,offset=560,lag=1
slave2:ip=luo-redis-replicas-2.luo-redis-headless.test.svc.cluster.local,port=6379,state=online,offset=560,lag=1
kubectl -n test describe pod redis-client

kubectl scale deployment/redis-client -n test --replicas=0 
kubectl delete pod redis-client  -n test

Option 2: redis-cli available

kubectl exec -it redis-master-0 -n redis -- redis-cli -h redis-master -a $(kubectl get secret --namespace redis redis -o jsonpath="{.data.redis-password}" | base64 -d)

Scheduling the Redis replicas

We have 2 nodes but 3 replica pods, and some of them land on the same node:

kubectl get pods,svc -n test -owide
NAME                               READY   STATUS    RESTARTS        AGE     IP             NODE               NOMINATED NODE   READINESS GATES
pod/luo-redis-master-0             1/1     Running   0               42m     10.244.2.99    node02.k8s.local   <none>           <none>
pod/luo-redis-replicas-0           1/1     Running   0               42m     10.244.2.100   node02.k8s.local   <none>           <none>
pod/luo-redis-replicas-1           1/1     Running   0               41m     10.244.1.187   node01.k8s.local   <none>           <none>
pod/luo-redis-replicas-2           1/1     Running   0               41m     10.244.2.101   node02.k8s.local   <none>           <none>

Reduce the Redis replicas to 2

Edit the configuration file:
vi redis/values.yaml

replica:
  ## @param replica.kind Use either DaemonSet or StatefulSet (default)
  ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
  ##
  kind: StatefulSet
  ## @param replica.replicaCount Number of Redis® replicas to deploy
  ##
  replicaCount: 2

Scale down to 2 online

kubectl scale sts/luo-redis-replicas -n test --replicas=2 
kubectl get pods,svc -n test -owide
NAME                               READY   STATUS    RESTARTS        AGE     IP             NODE               NOMINATED NODE   READINESS GATES
pod/luo-redis-master-0             1/1     Running   0               93m     10.244.2.99    node02.k8s.local   <none>           <none>
pod/luo-redis-replicas-0           1/1     Running   0               93m     10.244.2.100   node02.k8s.local   <none>           <none>
pod/luo-redis-replicas-1           1/1     Running   0               92m     10.244.1.187   node01.k8s.local   <none>           <none>

The Redis replicas are down to 2, but the PVCs and PVs remain; each pod has its own PersistentVolume. Deleting or scaling down pods does not delete the volumes associated with them, so the data is retained.

kubectl get pv,pvc -n test
NAME                                                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/redis-data-luo-redis-master-0     Bound    pvc-9d72c5b3-4cc4-444c-89f5-20487ed694d6   8Gi        RWO            managed-nfs-storage   93m
persistentvolumeclaim/redis-data-luo-redis-replicas-0   Bound    pvc-521abe8f-918e-4940-99df-9647a2de33c8   8Gi        RWO            managed-nfs-storage   93m
persistentvolumeclaim/redis-data-luo-redis-replicas-1   Bound    pvc-5f7ef942-1cef-45bd-8adf-15126cdcf8de   8Gi        RWO            managed-nfs-storage   92m
persistentvolumeclaim/redis-data-luo-redis-replicas-2   Bound    pvc-56ccb5a3-43bb-4713-a927-d74332e8b673   8Gi        RWO            managed-nfs-storage   91m

Scale back up to 3 online

Each upgrade of a release increments its revision number. Internally, Helm stores all revisions of a release, letting you return to a previous revision when needed.

helm upgrade luo-redis ./redis  -n test --set replica.replicaCount=3
helm upgrade luo-redis ./redis  -n test --set replica.replicaCount=3 --reuse-values

Passing --reuse-values tells Helm to base your changes on the deployed release, preserving its previous configuration.

To roll back a release, use helm rollback

helm list -n test
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
luo-redis       test            2               2023-12-04 15:41:19.336430771 +0800 CST deployed        redis-18.4.0    7.2.3      
helm history -n test luo-redis
REVISION        UPDATED                         STATUS          CHART           APP VERSION     DESCRIPTION     
1               Mon Dec  4 15:34:40 2023        superseded      redis-18.4.0    7.2.3           Install complete
2               Mon Dec  4 15:41:19 2023        superseded      redis-18.4.0    7.2.3           Upgrade complete
3               Mon Dec  4 16:01:56 2023        deployed        redis-18.4.0    7.2.3           Upgrade complete

Roll back to the first revision

helm rollback luo-redis -n test 1
Rollback was a success! Happy Helming!

View the revision history

helm history -n test luo-redis
REVISION        UPDATED                         STATUS          CHART           APP VERSION     DESCRIPTION     
1               Mon Dec  4 15:34:40 2023        superseded      redis-18.4.0    7.2.3           Install complete
2               Mon Dec  4 15:41:19 2023        superseded      redis-18.4.0    7.2.3           Upgrade complete
3               Mon Dec  4 16:01:56 2023        superseded      redis-18.4.0    7.2.3           Upgrade complete
4               Mon Dec  4 16:16:04 2023        deployed        redis-18.4.0    7.2.3           Rollback to 1  

Install-time overrides and later upgrades

values.yaml holds the default install parameters, but for values such as a database IP, username, or password that we would rather not edit into the package, how do we override them at install time? Just pass --set at install time with the values to override:

helm install myChart-test myChart --set config.mysql.server=100.71.32.11

Users can also supply their own values.yaml at install time. For example, passing a new values file to the upgrade command changes the configuration of an already-deployed application. The commands are:

helm install myChart-test02 myChart -f my-values.yaml
helm upgrade myChart-test02 myChart -f my-new-values.yaml

Posted in Memcached/redis, 安装k8s/kubernetes.



k8s_安装13_operator_argocd

What is Argo CD


Argo CD is an open-source GitOps operator for Kubernetes and a member of the Argo family, focused on the application delivery use case.
Argo CD provides a user-friendly web UI that gives you a high-level view of all applications deployed across multiple clusters, along with very detailed information about each application's resources.
Argo CD uses no database directly (Redis serves as a cache), so it is effectively stateless.
Argo CD can be understood as a Kubernetes controller: it continuously monitors running applications and compares their live state against the desired state declared in a Git repository; when the live state diverges from the desired state, it updates the live state to match. Argo CD is a continuous delivery (CD) tool; the continuous integration (CI) part can be handled by Jenkins or GitLab Runner.

Official Argo CD documentation

https://argo-cd.readthedocs.io/en/stable/

Argo CD architecture


Separating the config repository from the source repository

Using a separate Git repository for the Kubernetes manifests, keeping configuration apart from application source code, is strongly recommended for these reasons:

  • A clean separation of application code from application configuration. Sometimes you want to change only the manifests without triggering a full CI build:
    for example, if you only want to bump the replica count in a deployment spec, you probably don't want to trigger a build (build cycles can be long).
  • A cleaner audit log. For auditing purposes, the config repository keeps only the history of configuration changes, not a log mixed with day-to-day development commits.
  • In a microservices scenario, an application may consist of services built from multiple Git repositories but deployed as a single unit (e.g. within one pod).
    Such applications are often composed of services with different versions and release cadences (e.g. ELK, Kafka + Zookeeper),
    so storing the manifests in the source repository of a single component may not make sense.
  • Separation of access. The developers working on the application are not necessarily the same people who can or should push to production, whether intentionally or not.
    Separate repositories let you grant commit access to the source repository without granting it to the application configuration repository.
  • In an automated CI pipeline, pushing manifest changes to the same Git repository can trigger an infinite loop of build jobs and Git commit triggers.
    Using a separate repo for configuration changes prevents this.

Suggested Git branch naming

  • master (main): the live mainline, always a stable, usable version
  • develop: the default development mainline
  • prerelease (staging): pre-release branch
  • release/* branches (release branches, short-lived, cut from develop, mainly for testing and version releases)

Temporary branches

  • feature-xxx (feature branches cut from develop, named after the module under development, merged back into develop once the feature tests pass)
  • feature-xxx-fix (fix branches for bugs found after a feature branch was merged; cut from develop and merged back into develop after the fix)
  • hotfix-xxx (urgent bug-fix branches cut from master and merged back into master when done)
  • bugfix/* branches (short-lived, cut from develop)
  • backup (backup branch)

Configuring a GitLab pull token for Argo CD

Connect repo
Note that the password here must be an Access Token, which we can create on the GitLab page http://gitlab.k8s.local/-/profile/personal_access_tokens.
git -> Settings -> Access Tokens, with scopes:
api
read_api
read_repository

The generated token:
-4m8nyfa4SvLtEsxVFzU

Triggering CD in Argo CD

Argo CD polls the Git repository every three minutes to detect manifest changes; if the Application is set to Auto Sync, it will redeploy.
Argo CD also accepts webhook events, which removes the delay introduced by polling.

Argo CD Image Updater
updates a service's image when the image tag changes in the image registry.
Currently it only works for applications built with Kustomize or Helm; applications built from plain YAML or custom tools are not yet supported.

Starting/stopping an application in Argo CD

First disable auto sync.
Then edit the Deployment definition and set its replicas to 0, as shown below:

apiVersion: ...
kind: Deployment
spec:
  replicas: 0
  ...

Note that if you delete the Application, a stateful service such as MySQL using a StorageClass will re-bind a new PVC; this differs from what kubectl delete -f . does.

Installing Argo CD

  • Option 1: Helm
  • Option 2: deploy Argo CD via the Red Hat operator
  • Option 3: the argo-cd manifests from GitHub

Option 1: Helm

https://artifacthub.io/packages/helm/argo/argo-cd
cd /root/k8s/helm
helm repo add argo https://argoproj.github.io/argo-helm
helm pull argo/argo-cd
helm install my-argo-cd argo/argo-cd --version 6.7.7

Option 2: deploy Argo CD via the Red Hat operator

Official page: https://operatorhub.io/operator/argocd-operator

Install the OLM components
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.26.0/install.sh | bash -s v0.26.0
kubectl create -f https://operatorhub.io/install/argocd-operator.yaml
kubectl get csv -n operators

https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.26.0/crds.yaml
https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.26.0/olm.yaml
operatorhub.io is too slow to reach, so this installation path was abandoned in favor of GitHub: https://github.com/argoproj/argo-cd

Option 3: argo-cd manifests from GitHub

Reference: https://blog.csdn.net/engchina/article/details/129611785

Deploying Argo CD in K8s

Non-HA (single instance):
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.9.2/manifests/install.yaml

High availability (HA):
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.9.2/manifests/ha/install.yaml

Non-HA installation

We install the non-HA version here; for production, the HA version is recommended.

wget --no-check-certificate https://raw.githubusercontent.com/argoproj/argo-cd/v2.9.2/manifests/install.yaml -O argo-cd-v2.9.2.yaml
wget --no-check-certificate https://raw.githubusercontent.com/argoproj/argo-cd/v2.10.0/manifests/install.yaml -O argo-cd-v2.10.0.yaml

Prepare the images

Pull the images into the private Harbor registry for later reuse; skip this step if you have no private registry.

cat argo-cd-v2.9.2.yaml|grep image:|sed -e 's/.*image: //'|sort|uniq

ghcr.io/dexidp/dex:v2.37.0
quay.io/argoproj/argocd:v2.9.2
redis:7.0.11-alpine

docker pull quay.io/argoproj/argocd:v2.9.2
docker pull redis:7.0.11-alpine
docker pull ghcr.io/dexidp/dex:v2.37.0
#docker pull quay.nju.edu.cn/argoproj/argocd:v2.9.2 # mirror address, 426MB

docker tag quay.io/argoproj/argocd:v2.9.2 repo.k8s.local/quay.io/argoproj/argocd:v2.9.2
docker tag redis:7.0.11-alpine repo.k8s.local/docker.io/redis:7.0.11-alpine
docker tag ghcr.io/dexidp/dex:v2.37.0 repo.k8s.local/ghcr.io/dexidp/dex:v2.37.0

docker push repo.k8s.local/quay.io/argoproj/argocd:v2.9.2
docker push repo.k8s.local/docker.io/redis:7.0.11-alpine
docker push repo.k8s.local/ghcr.io/dexidp/dex:v2.37.0

docker rmi quay.io/argoproj/argocd:v2.9.2
docker rmi redis:7.0.11-alpine
docker rmi ghcr.io/dexidp/dex:v2.37.0

cat argo-cd-v2.10.0.yaml|grep image:|sed -e 's/.*image: //'|sort|uniq

ghcr.io/dexidp/dex:v2.37.0
quay.io/argoproj/argocd:v2.10.0
redis:7.0.14-alpine

#docker pull quay.nju.edu.cn/argoproj/argocd:v2.10.0 # mirror address, 426MB
docker pull quay.io/argoproj/argocd:v2.10.0
docker pull redis:7.0.14-alpine
docker pull ghcr.io/dexidp/dex:v2.37.0

docker tag quay.io/argoproj/argocd:v2.10.0 repo.k8s.local/quay.io/argoproj/argocd:v2.10.0
docker tag docker.io/library/redis:7.0.14-alpine repo.k8s.local/docker.io/library/redis:7.0.14-alpine
docker tag ghcr.io/dexidp/dex:v2.37.0 repo.k8s.local/ghcr.io/dexidp/dex:v2.37.0

docker push repo.k8s.local/quay.io/argoproj/argocd:v2.10.0
docker push repo.k8s.local/docker.io/library/redis:7.0.14-alpine
docker push repo.k8s.local/ghcr.io/dexidp/dex:v2.37.0

docker rmi quay.io/argoproj/argocd:v2.10.0
docker rmi redis:7.0.14-alpine
docker rmi ghcr.io/dexidp/dex:v2.37.0

Rewrite the image addresses in the YAML to the private registry

Skip this step if you have no private registry; you can also substitute another mirror instead.

#preview
sed -n "/image:/{s/image: redis/image: repo.k8s.local\/docker.io\/redis/p}" argo-cd-v2.9.2.yaml
sed -n "/image:/{s/image: ghcr.io/image: repo.k8s.local\/ghcr.io/p}" argo-cd-v2.9.2.yaml
sed -n "/image:/{s/image: quay.io/image: repo.k8s.local\/quay.io/p}" argo-cd-v2.9.2.yaml

#replace in place
sed -i "/image:/{s/image: redis/image: repo.k8s.local\/docker.io\/redis/}" argo-cd-v2.9.2.yaml
sed -i "/image:/{s/image: ghcr.io/image: repo.k8s.local\/ghcr.io/}" argo-cd-v2.9.2.yaml
sed -i "/image:/{s/image: quay.io/image: repo.k8s.local\/quay.io/}" argo-cd-v2.9.2.yaml

#verify again
cat argo-cd-v2.9.2.yaml|grep image:|sed -e 's/.*image: //'
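The same grep/sed pipeline used here to list images can be checked against a small canned manifest fragment (the contents are illustrative):

```shell
# Canned manifest fragment (illustrative) to exercise the image-listing pipeline
cat > /tmp/manifest-demo.yaml <<'EOF'
      containers:
      - image: quay.io/argoproj/argocd:v2.9.2
      - image: redis:7.0.11-alpine
      - image: quay.io/argoproj/argocd:v2.9.2
EOF

# grep the image: lines, strip the prefix, de-duplicate
cat /tmp/manifest-demo.yaml | grep image: | sed -e 's/.*image: //' | sort | uniq
# quay.io/argoproj/argocd:v2.9.2
# redis:7.0.11-alpine
```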

Create the argocd namespace first, then deploy the Argo CD services

kubectl create namespace argocd
kubectl apply -n argocd -f argo-cd-v2.9.2.yaml

argocd-application-controller: the core processor of Argo CD. It manages your k8s resources for you and integrates essentially everything you used to do with kubectl; it is the operator's controller.
argocd-dex-server: the authentication token service, used later for GitLab login and similar integrations. In the HA setup it cannot run multiple pods; only a single pod is supported.
argocd-redis: used as a cache.
argocd-repo-server: this service clones your public/private GitLab repositories into the argocd-repo-server pod so that Argo CD can perform the corresponding kubectl operations. HA advice: run multiple pods to handle many applications sharing one repo. Repo advice: keep mostly configuration files in the repo to avoid excessive local disk use, since argocd-repo-server pulls the repo locally; if the repo is really large, mount a disk at the service's /tmp directory.
argocd-server: the Argo CD front and back end, the whole web service. It also bundles tools such as helm and kubectl, which you can inspect inside the pod.

kubectl get pods -n argocd
NAME                                                READY   STATUS    RESTARTS   AGE
argocd-application-controller-0                     1/1     Running   0          20m
argocd-applicationset-controller-5b5f95888b-lwfjd   1/1     Running   0          20m
argocd-dex-server-cb9f4d4b-4vgmc                    1/1     Running   0          20m
argocd-notifications-controller-5c6d9d776f-4hfdw    1/1     Running   0          20m
argocd-redis-6b68b8b86d-62jv8                       1/1     Running   0          20m
argocd-repo-server-67855f9d8c-995c8                 1/1     Running   0          20m
argocd-server-7bcff8887b-qfnb2                      1/1     Running   0          20m

kubectl -n argocd describe pod/argocd-dex-server-f7648d898-fgklf 
kubectl -n argocd logs pod/argocd-dex-server-f7648d898-fgklf 

Download and install the Argo CD CLI

Option 1

VERSION=$(curl --silent "https://api.github.com/repos/argoproj/argo-cd/releases/latest" | grep '"tag_name"' | sed -E 's/.*"([^"]+)".*/\1/')
curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/$VERSION/argocd-linux-amd64
chmod +x /usr/local/bin/argocd
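The VERSION line above greps `tag_name` out of the GitHub releases API JSON and strips everything but the quoted value; it can be checked against a canned fragment (the version string is illustrative):

```shell
# Canned fragment of the GitHub releases API JSON (illustrative)
json='  "tag_name": "v2.9.2",'
echo "$json" | grep '"tag_name"' | sed -E 's/.*"([^"]+)".*/\1/'
# v2.9.2
```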

Option 2

curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
sudo install -m 555 argocd-linux-amd64 /usr/local/bin/argocd
rm argocd-linux-amd64
argocd version
argocd: v2.9.2+c5ea5c4
  BuildDate: 2023-11-20T17:37:53Z
  GitCommit: c5ea5c4df52943a6fff6c0be181fde5358970304
  GitTreeState: clean
  GoVersion: go1.21.4
  Compiler: gc
  Platform: linux/amd64
FATA[0000] Argo CD server address unspecified   

Configure an Ingress for the Argo CD web UI

Ingress configuration docs: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/

Argo CD serves multiple protocols (gRPC/HTTPS) on the same port (443), which makes a single nginx ingress object and rule for the argocd service a bit awkward, because the nginx.ingress.kubernetes.io/backend-protocol annotation accepts only one backend protocol (e.g. HTTP, HTTPS, GRPC, GRPCS).
To expose the Argo CD APIServer behind a single ingress rule and hostname, you must use the nginx.ingress.kubernetes.io/ssl-passthrough annotation to pass the TLS connection through and terminate TLS at the Argo CD APIServer itself.

Alternatively, since each ingress-nginx Ingress object supports only one protocol, you can define two Ingress objects: one for HTTP/HTTPS and another for gRPC.

cat > argocd-ingress.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  labels:
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/part-of: argocd
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    # If you encounter a redirect loop or are getting a 307 response code
    # then you need to force the nginx ingress to connect to the backend using HTTPS.
    #
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: int-ingress-nginx
  rules:
  - host: argocd.k8s.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
  tls:
  - hosts:
    - argocd.k8s.local
    secretName: argocd-server-tls # as expected by argocd-server
EOF

The YAML above can be found in the Ingress configuration docs. Argo CD is accessed over HTTPS, and the TLS secret it expects (argocd-server-tls) is already provided by Argo CD, so we do not need to change it; we only need to set the host and configure DNS for it. The host above is my own; change it to whatever domain you prefer.

kubectl delete -f argocd-ingress.yaml
kubectl apply -f argocd-ingress.yaml
kubectl get svc -n argocd
NAME                                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
argocd-applicationset-controller          ClusterIP   10.96.248.76    <none>        7000/TCP,8080/TCP            49m
argocd-dex-server                         ClusterIP   10.96.159.94    <none>        5556/TCP,5557/TCP,5558/TCP   49m
argocd-metrics                            ClusterIP   10.96.31.187    <none>        8082/TCP                     49m
argocd-notifications-controller-metrics   ClusterIP   10.96.199.238   <none>        9001/TCP                     49m
argocd-redis                              ClusterIP   10.96.94.14     <none>        6379/TCP                     49m
argocd-repo-server                        ClusterIP   10.96.255.97    <none>        8081/TCP,8084/TCP            49m
argocd-server                             ClusterIP   10.96.10.228    <none>        80/TCP,443/TCP               49m
argocd-server-metrics                     ClusterIP   10.96.117.128   <none>        8083/TCP                     49m

kubectl get ingress -n argocd
NAME                    CLASS   HOSTS              ADDRESS     PORTS     AGE
argocd-server-ingress   nginx   argocd.k8s.local   localhost   80, 443   57s
curl -k  -H "Host:argocd.k8s.local"  http://10.96.10.228:80/
curl -k  -H "Host:argocd.k8s.local"  https://10.96.10.228:443/
curl -k  -H "Host:argocd.k8s.local"  http://192.168.244.7:80/
<html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx</center>
</body>
</html>

curl -k  -H "Host:argocd.k8s.local"  https://192.168.244.7:443/
<!doctype html><html lang="en"><head><meta charset="UTF-8"><title>Argo CD</title><base href="/"><meta name="viewport" content="width=device-width,initial-scale=1"><link rel="icon" type="image/png" href="assets/favicon/favicon-32x32.png" sizes="32x32"/><link rel="icon" type="image/png" href="assets/favicon/favicon-16x16.png" sizes="16x16"/><link href="assets/fonts.css" rel="stylesheet"><script defer="defer" src="main.9a9248cc50f345c063e3.js"></script></head><body><noscript><p>Your browser does not support JavaScript. Please enable JavaScript to view the site. Alternatively, Argo CD can be used with the <a href="https://argoproj.github.io/argo-cd/cli_installation/">Argo CD CLI</a>.</p></noscript><div id="app"></div></body><script defer="defer" src="extensions.js"></script></html>

Log in to the Argo CD web UI

The username is admin; the initial password is stored in the password field of the Secret named argocd-initial-admin-secret and can be retrieved with the following command

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d 
peWrTVUl6d0ivUg9
#the login password can also be changed with the following command:
$ argocd account update-password --account admin --current-password xxxx --new-password xxxx
argocd account update-password --account admin --current-password peWrTVUl6d0ivUg9 --new-password c1gstudio

The IP address to enter is the ClusterIP of argocd-server, which can be looked up with:
kubectl get svc -n argocd |grep argocd-server
You can also log in with the Argo CD CLI:

argocd login 10.96.10.228
WARNING: server certificate had error: tls: failed to verify certificate: x509: cannot validate certificate for 10.96.10.228 because it doesn't contain any IP SANs. Proceed insecurely (y/n)? y
Username: admin
Password: 
'admin:login' logged in successfully
Context '10.96.10.228' updated

argocd version
argocd: v2.9.2+c5ea5c4
  BuildDate: 2023-11-20T17:37:53Z
  GitCommit: c5ea5c4df52943a6fff6c0be181fde5358970304
  GitTreeState: clean
  GoVersion: go1.21.4
  Compiler: gc
  Platform: linux/amd64
argocd-server: v2.9.2+c5ea5c4
  BuildDate: 2023-11-20T17:18:26Z
  GitCommit: c5ea5c4df52943a6fff6c0be181fde5358970304
  GitTreeState: clean
  GoVersion: go1.21.3
  Compiler: gc
  Platform: linux/amd64
  Kustomize Version: v5.2.1 2023-10-19T20:13:51Z
  Helm Version: v3.13.2+g2a2fb3b
  Kubectl Version: v0.24.2
  Jsonnet Version: v0.20.0

argocd app list
NAME                   CLUSTER                         NAMESPACE  PROJECT  STATUS   HEALTH   SYNCPOLICY  CONDITIONS  REPO                                             PATH                 TARGET
argocd/guestbook       https://kubernetes.default.svc  guestbook  default  Unknown  Healthy  <none>      <none>      https://github.com/argoproj/argocd-example-apps  kustomize-guestbook  HEAD
argocd/test-openresty  https://kubernetes.default.svc  test       default  Synced   Healthy  Auto        <none>      http://git/argocdtest/test-openresty.git      .                    develop
argocd/test-pod-sts    https://kubernetes.default.svc  test       default  Synced   Healthy  Auto-Prune  <none>      http://git/argocdtest/test-pod-sts.git        .                    develop

argocd repo list
TYPE  NAME  REPO                                         INSECURE  OCI    LFS    CREDS  STATUS      MESSAGE  PROJECT
git         http://git/argocdtest/test-openresty.git  false     false  false  true   Successful           default
git         http://git/argocdtest/test-pod-sts.git    false     false  false  true   Successful           default

Prometheus监控

Argo CD 本身暴露了几组 Prometheus 指标(application controller、API server、repo server 各一组)
如果开启了 endpoints 这种类型的服务自动发现,那么我们可以在这几个指标 Service 上添加 prometheus.io/scrape: "true" 这样的 annotation:

kubectl edit svc argocd-metrics -n argocd
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: "true"

kubectl edit svc argocd-server-metrics -n argocd
    prometheus.io/scrape: "true"
    prometheus.io/port: "8083"  # 指定8083端口为指标端口

kubectl edit svc argocd-repo-server -n argocd
    prometheus.io/scrape: "true"
    prometheus.io/port: "8084"  # 指定8084端口为指标端口
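这些 annotation 生效的前提,是 Prometheus 侧配置了基于 endpoints 的服务自动发现,例如下面这样的抓取配置片段(示意,job 名可自定):

```yaml
scrape_configs:
  - job_name: kubernetes-service-endpoints
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      # 只保留带 prometheus.io/scrape: "true" 注解的 Service 对应的 endpoints
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```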

Argo CD 部署样例

Argo CD 提供了一个官网样例,我们就创建一下这个项目吧
样例github地址: https://github.com/argoproj/argocd-example-apps
样例gitee地址: https://gitee.com/cnych/argocd-example-apps
同步说明: https://argo-cd.readthedocs.io/en/latest/user-guide/sync-options/

修改CoreDNS Hosts配置,添加git地址解析

kubectl get configmaps -n kube-system coredns -oyaml
kubectl edit configmaps -n kube-system coredns -oyaml
修改CoreDNS配置文件,将自定义域名添加到hosts中。
例如将www.example.com指向192.168.1.1,通过CoreDNS解析www.example.com时,会返回192.168.1.1。
例如将c1ggit指向10.100.5.1 ,通过CoreDNS解析c1ggit时,会返回10.100.5.1 。

        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        hosts {
           10.100.5.1   c1ggit
           fallthrough
        }
        prometheus 0.0.0.0:9153

在各 node 上添加 git 域名解析
echo '10.100.5.1 c1ggit' >> /etc/hosts

测试外部git地址

kubectl exec busybox-curl -n default -- ping c1ggit

PING c1ggit (10.100.5.1): 56 data bytes
64 bytes from 10.100.5.1: seq=0 ttl=60 time=1.755 ms
64 bytes from 10.100.5.1: seq=1 ttl=60 time=1.057 ms
  • Application Name: app的名称,填写的是样例项目的名称 guestbook
  • Project: 是一种资源,用于组织和管理不同的 Kubernetes 应用(Application,目前先写default)
  • SYNC POLICY: 同步策略,有手动和自动,样例项目,我们先选择手动Manual
  • AUTO-CREATE NAMESPACE: 勾选后会自动创建应用部署所用的 k8s 命名空间
  • SOURCE: Git 仓库,https://github.com/argoproj/argocd-example-apps 就是样例项目的github仓库地址
  • Revision: HEAD 分支名
  • Path: kustomize-guestbook 资源清单所在的相对路径。Argo CD 支持多种资源配置模式(Kustomize、Helm、纯 YAML 等),选用哪种模式就填对应路径下的资源清单 manifests;根目录可用 . 表示
  • Cluster URL: Kubernetes API Server 的访问地址,由于 Argo CD 和下发应用的 Kubernetes 集群是同 一个,因此可以直接使用 https://kubernetes.default.svc 来访问
  • Namespace: 应用部署在k8s中的命名空间
  • PRUNE RESOURCES可以选择是否删除在git中移除的资源,如某个git版本中创建了一个svc,随后又删除了,如果不勾选该选项,则argocd不会自动删除该svc
  • SELF HEAL可以理解为自愈,即检测到定义的资源状态和git中不一致的时候,是否自动同步为一致;如git中定义副本数为10,随后手动扩容至20但并未修改git中的配置文件,勾选这一项则会触发argocd自动将副本数缩减回10
  • SKIP SCHEMA VALIDATION跳过语法检查,个人不建议开启
  • AUTO-CREATE NAMESPACE可以设置如果对应的namespace不存在的话是否自动创建
  • APPLY OUT OF SYNC ONLY类似于增量部署而不是全量部署,仅部署和git版本中不一样的资源,而不是全部重新部署一次,可以降低K8S集群的压力
  • REPLACE会将部署的命令从默认的kubectl apply变更为kubectl replace/create,可以解决部分资源修改之后无法直接apply部署的问题,同时也有可能会导致资源的重复创建
  • RETRY可以设定部署失败后重试的次数和频率
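上面这些页面选项也可以写成声明式的 Application 清单来管理(示意,字段名以 Argo CD 官方 Application CRD 文档为准):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps
    targetRevision: HEAD
    path: kustomize-guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true                # 对应 PRUNE RESOURCES
      selfHeal: true             # 对应 SELF HEAL
    syncOptions:
      - CreateNamespace=true     # 对应 AUTO-CREATE NAMESPACE
      - ApplyOutOfSyncOnly=true  # 对应 APPLY OUT OF SYNC ONLY
    retry:
      limit: 3                   # 对应 RETRY
```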

点击页面上面的create按钮
然后手动同步

测试kubernetes.default.svc网络

kubectl exec -it pod/test-pod-0 -n test -- curl -k https://kubernetes.default.svc
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}

sync同步后可能拉取镜像失败 Back-off pulling image "gcr.io/heptio-images/ks-guestbook-demo:0.1"
https://github.com/argoproj/argocd-example-apps/tree/master/kustomize-guestbook

推送镜像到私仓
gcr.io/heptio-images/ks-guestbook-demo:0.1

docker pull m.daocloud.io/gcr.io/heptio-images/ks-guestbook-demo:0.1
docker tag m.daocloud.io/gcr.io/heptio-images/ks-guestbook-demo:0.1 repo.k8s.local/gcr.io/heptio-images/ks-guestbook-demo:0.1
docker push repo.k8s.local/gcr.io/heptio-images/ks-guestbook-demo:0.1
## 删除原标记
docker rmi m.daocloud.io/gcr.io/heptio-images/ks-guestbook-demo:0.1

在线编辑yaml更新image为repo.k8s.local/gcr.io/heptio-images/ks-guestbook-demo:0.1
注意:当 argocd 重新同步 git 中的 yaml 后,在线修改的镜像地址会被改回,又会拉取镜像失败。

kubectl get pods -n guestbook
NAME                                      READY   STATUS    RESTARTS   AGE
kustomize-guestbook-ui-6c5b4568dc-s2tbh   1/1     Running   0          16m

kubectl get svc -n guestbook
NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kustomize-guestbook-ui   ClusterIP   10.96.111.171   <none>        80/TCP    16m

改成nodeport

kubectl edit svc kustomize-guestbook-ui  -n guestbook
nodePort: 30041
type: NodePort

curl http://192.168.244.7:30041/

curl http://10.96.111.171
<html ng-app="redis">
  <head>
    <title>Guestbook</title>
    <link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css">
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.12/angular.min.js"></script>
    <script src="controllers.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/angular-ui-bootstrap/0.13.0/ui-bootstrap-tpls.js"></script>
  </head>
  <body ng-controller="RedisCtrl">
    <div style="margin-left:20px;">
      <div class="row" style="width: 50%;">
        <div class="col-sm-6">
          <h2>Guestbook</h2>
        </div>
        <fieldset class="col-sm-6" style="margin-top:15px">
          <div class="col-sm-8">
            <input ng-model="query" placeholder="Query here" class="form-control" type="text" name="input"><br>
          </div>
          <div class="col-sm-4">
            <button type="button" class="btn btn-primary" ng-click="controller.onSearch()">Search</button>
          </div>
        </fieldset>
      </div>
      <div ng-show="showMain" class="main-ui col-sm-6">
        <form>
        <fieldset>
        <input ng-model="msg" placeholder="Messages" class="form-control" type="text" name="input"><br>
        <button type="button" class="btn btn-primary" ng-click="controller.onRedis()">Submit</button>
        </fieldset>
        </form>
        <div>
          <div ng-repeat="msg in messages track by $index">
            {{msg}}
          </div>
        </div>
      </div>
      <div ng-hide="showMain" class="search-results row">
      </div>
    </div>
  </body>
</html>

#使用测试pod用域名访问
kubectl exec -it pod/test-pod-1 -n test -- curl http://kustomize-guestbook-ui.guestbook.svc

CI/CD

新建公开群组argocdtest
新建项目->导入项目->从url导入仓库
https://github.com/argoproj/argocd-example-apps

开发人员每天把代码提交到 Gitlab 代码仓库
Jenkins 从 Gitlab 代码仓库中拉取项目源码,进行maven编译并打成 jar 包;然后Dockerfile构建成 Docker 镜像,将镜像推送到 Harbor 私有镜像仓库

  • 使用shell,docker build,docker login,docker tag,docker push
  • Jenkins 插件式发布镜像,安装 CloudBees Docker Build and Publish

argocd先在git中为每个项目创建yaml部署文件,之后监控yaml变化或镜像更新来自动部署.
Argo CD 默认情况下每 3 分钟会检测 Git 仓库一次,用于判断应用实际状态是否和 Git 中声明的期望状态一致,如果不一致,状态就转换为 OutOfSync。默认情况下并不会触发更新,除非通过 syncPolicy 配置了自动同步。
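这个轮询间隔可以通过 argocd-cm 中的 timeout.reconciliation 调整(示意片段;修改后一般需要重启 argocd-application-controller 才会生效):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  timeout.reconciliation: 180s   # 默认 3 分钟,可按需调大或调小
```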

gitlab

添加拉取用户argocd
将argocd邀请到argocd组,并赋予Developer权限,确认对该组下项目有拉取权限

argocd拉取git项目

1.创建库
Settings->Repositories-> + connect repo

Choose your connection method:VIA HTTPS
Type:git
Project:default #Argo CD 中的项目(非 k8s 命名空间)
Repository URL: http://c1ggit/argocdtest/simplenginx.git
username:
Password:
Force HTTP basic auth:true
如通信成功该项目CONNECTION STATUS为Successful.

2.创建应用
Settings->Repositories-> 点击仓库三个点->Create application
Application Name:simplenginx
Project Name:default #Argo CD 中的项目(非 k8s 命名空间)
SYNC POLICY:Manual #Automatic
选中AUTO-CREATE NAMESPACE

Repository URL:http://c1ggit/argocdtest/simplenginx.git
Revision:master #git中对应的Branches分支
Path:. #.当前根目录

Cluster URL:https://kubernetes.default.svc
Namespace:test

3.同步应用
点击SYNC,SYNC STATUS为Synced表示成功

对接LDAP(可选)

ldap-patch-dex.yaml

vi ldap-patch-dex.yaml
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  dex.config: |-
    connectors:
    - type: ldap
      name: ..................
      id: ldap
      config:
        # Ldap server address
        host: 192.168.5.16:389
        insecureNoSSL: true
        insecureSkipVerify: true
        # Variable name stores ldap bindDN in argocd-secret
        bindDN: "$dex.ldap.bindDN"
        # Variable name stores ldap bind password in argocd-secret
        bindPW: "$dex.ldap.bindPW"
        usernamePrompt: .........
        # Ldap user serch attributes
        userSearch:
          baseDN: "ou=people,dc=xxx,dc=com"
          filter: "(objectClass=person)"
          username: uid
          idAttr: uid
          emailAttr: mail
          nameAttr: cn
        # Ldap group serch attributes
        groupSearch:
          baseDN: "ou=argocd,ou=group,dc=xxx,dc=com"
          filter: "(objectClass=groupOfUniqueNames)"
          userAttr: DN
          groupAttr: uniqueMember
          nameAttr: cn
  # 注意:这个是argocd的访问地址,必须配置,否则会导致不会跳转.
  url: https://192.168.80.180:30984

configMap

kubectl -n argocd patch configmaps argocd-cm --patch "$(cat ldap-patch-dex.yaml)"
kubectl edit cm argocd-cm -n argocd

secret

# bindDN是cn=admin,dc=xxx,dc=com
kubectl -n argocd patch secrets argocd-secret --patch "{\"data\":{\"dex.ldap.bindDN\":\"$(echo cn=admin,dc=xxx,dc=com | base64 -w 0)\"}}"

# 密码bindPW是123456
kubectl -n argocd patch secrets argocd-secret --patch "{\"data\":{\"dex.ldap.bindPW\":\"$(echo 123456 | base64 -w 0)\"}}"

删除POD,以重启,让上面的ldap配置生效。

Argo CD Image Updater

wget https://raw.githubusercontent.com/argoproj-labs/argocd-image-updater/stable/manifests/install.yaml -O argocd-image-updater.yaml
cat argocd-image-updater.yaml|grep image:

image: quay.io/argoprojlabs/argocd-image-updater:v0.12.0

本地私仓镜像

docker pull quay.io/argoprojlabs/argocd-image-updater:v0.12.0
docker tag quay.io/argoprojlabs/argocd-image-updater:v0.12.0 repo.k8s.local/quay.io/argoprojlabs/argocd-image-updater:v0.12.0
docker push repo.k8s.local/quay.io/argoprojlabs/argocd-image-updater:v0.12.0
docker rmi quay.io/argoprojlabs/argocd-image-updater:v0.12.0
#测试
sed -n "/image:/{s/image: /image: repo.k8s.local\//p}" argocd-image-updater.yaml
#替换
sed -i "/image:/{s/image: /image: repo.k8s.local\//}" argocd-image-updater.yaml

kubectl apply -n argocd -f argocd-image-updater.yaml

serviceaccount/argocd-image-updater created
role.rbac.authorization.k8s.io/argocd-image-updater created
rolebinding.rbac.authorization.k8s.io/argocd-image-updater created
configmap/argocd-image-updater-config created
configmap/argocd-image-updater-ssh-config created
secret/argocd-image-updater-secret created
deployment.apps/argocd-image-updater created

ArgoCD 中的用户和角色

管理员CLI 登录

输入的Ip地址就是argocd-server的ClusterIP 可以通过命令查询
kubectl get svc -n argocd

NAME                                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
argocd-applicationset-controller          ClusterIP   10.96.248.76    <none>        7000/TCP,8080/TCP            25d
argocd-dex-server                         ClusterIP   10.96.159.94    <none>        5556/TCP,5557/TCP,5558/TCP   25d
argocd-metrics                            ClusterIP   10.96.31.187    <none>        8082/TCP                     25d
argocd-notifications-controller-metrics   ClusterIP   10.96.199.238   <none>        9001/TCP                     25d
argocd-redis                              ClusterIP   10.96.94.14     <none>        6379/TCP                     25d
argocd-repo-server                        ClusterIP   10.96.255.97    <none>        8081/TCP,8084/TCP            25d
argocd-server                             ClusterIP   10.96.10.228    <none>        80/TCP,443/TCP               25d
argocd-server-metrics                     ClusterIP   10.96.117.128   <none>        8083/TCP                     25d

argocd login argocd-server.argocd.svc.cluster.local

kubectl get svc -n argocd |grep argocd-server|head -n1 |awk '{print $3}'

argocd login 10.96.10.228
WARNING: server certificate had error: tls: failed to verify certificate: x509: cannot validate certificate for 10.96.10.228 because it doesn't contain any IP SANs. Proceed insecurely (y/n)? y
Username: admin
Password: 
'admin:login' logged in successfully
Context '10.96.10.228' updated

一句话登录

echo y | argocd login $(kubectl get svc -n argocd |grep argocd-server|head -n1 |awk '{print $3}') --password 'c1gstudio' --username admin

登出:

#argocd logout argocd-server.argocd.svc.cluster.local  
argocd logout 10.96.10.228

添加 用户

在 argocd/argocd-cm
添加gitops 用户,有生成 apiKey 和 login 权限。
添加system用户代替admin,后继关闭admin用户
添加测试用户dev_user
添加发布用户pre_user
kubectl edit cm argocd-cm -n argocd

apiVersion: v1
data:
  accounts.gitops: apiKey, login
  accounts.gitops.enabled: "true"
  accounts.dev_user: login
  accounts.dev_user.enabled: "true"
  accounts.pre_user: login
  accounts.pre_user.enabled: "true"
  accounts.system: login
  accounts.system.enabled: "true"
  admin.enabled: "true"
kind: ConfigMap

修改后,会热加载,无需重启任何服务。
用 admin 用户登录后,修改 gitops 等账号的密码(注意 current-password 是当前登录用户的密码,如果用 admin 登录的,就是 admin 的密码)

查账号

argocd account get --account gitops

列出账号

argocd account list

NAME      ENABLED  CAPABILITIES
admin     true     login
dev_user  true     login
gitops    true     login
pre_user  true     login
system    true     login

更新用户密码

argocd account update-password \
  --account gitops \
  --current-password 'c1gstudio' \
  --new-password 'gitopsPass123'

argocd account update-password \
  --account system \
  --current-password 'c1gstudio' \
  --new-password 'Pass123456'

argocd account update-password \
  --account dev_user \
  --current-password 'c1gstudio' \
  --new-password 'Pass123456'

argocd account update-password \
  --account pre_user \
  --current-password 'c1gstudio' \
  --new-password 'Pass123456'

测试登录

echo y | argocd login $(kubectl get svc -n argocd |grep argocd-server|head -n1 |awk '{print $3}') --password 'gitopsPass123' --username gitops
目前还没有权限查看资源
argocd account list
argocd cluster list
argocd app list

生成Token (这样生成的 token 从不过期,可以加 --expires-in 参数设置过期时长)

argocd account generate-token --account gitops

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJhcmdvY2QiLCJzdWIiOiJnaXRvcHM6YXBpS2V5IiwibmJmIjoxNzAyODgzMDEwLCJpYXQiOjE3MDI4ODMwMTAsImp0aSI6IjM3M2U0NTRhLTlkMjktNGU4My04ZTgyLWIwNWE1MWMyZjVhNiJ9.esoLwNwxBGp1MXt6-eFBSL-4lbI9_a-CRgk6NZrQyG4

#使用Token查看

argocd app list --auth-token eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJhcmdvY2QiLCJzdWIiOiJnaXRvcHM6YXBpS2V5IiwibmJmIjoxNzAyODgzMDEwLCJpYXQiOjE3MDI4ODMwMTAsImp0aSI6IjM3M2U0NTRhLTlkMjktNGU4My04ZTgyLWIwNWE1MWMyZjVhNiJ9.esoLwNwxBGp1MXt6-eFBSL-4lbI9_a-CRgk6NZrQyG4 --server $(kubectl get svc -n argocd |grep argocd-server|head -n1 |awk '{print $3}') --insecure
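这个 token 是标准 JWT,可以在本地解码 payload 查看签发主体和时间(仅做 base64 解码,不验证签名):

```shell
TOKEN='eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJhcmdvY2QiLCJzdWIiOiJnaXRvcHM6YXBpS2V5IiwibmJmIjoxNzAyODgzMDEwLCJpYXQiOjE3MDI4ODMwMTAsImp0aSI6IjM3M2U0NTRhLTlkMjktNGU4My04ZTgyLWIwNWE1MWMyZjVhNiJ9.esoLwNwxBGp1MXt6-eFBSL-4lbI9_a-CRgk6NZrQyG4'
# 取出中间的 payload 段
PAYLOAD=$(echo "$TOKEN" | cut -d. -f2)
# base64url 解码前补齐 padding
case $(( ${#PAYLOAD} % 4 )) in
  2) PAYLOAD="${PAYLOAD}==" ;;
  3) PAYLOAD="${PAYLOAD}=" ;;
esac
# base64url 字符转回标准 base64 后解码
echo "$PAYLOAD" | tr '_-' '/+' | base64 -d
```

可以看到 iss 为 argocd、sub 为 gitops:apiKey,即该 token 属于 gitops 账号的 apiKey 能力。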

权限

Kubernetes Argo CD权限提升漏洞

2.3.3之前版本
kubectl get -n argocd cm argocd-cm -o jsonpath='{.data.users\.anonymous\.enabled}'
如果此命令的结果为空或 "false",则表示未启用对该实例的匿名访问。如果结果是 "true",则意味着实例很容易受到攻击。

默认用户组

内置两个角色:只读角色和管理员角色

role:readonly – read-only access to all resources
role:admin – unrestricted access to all resources

禁用 匿名访问

kubectl patch -n argocd cm argocd-cm --type=json -p='[{"op":"add","path":"/data/users.anonymous.enabled","value":"false"}]'

权限格式

p, <role/user/group>, <resource>, <action>, <object>
p, <role/user/group>, <resource>, <action>, <appproject>/<object>

资源和动作有下面这些:

Resources: clusters, projects, applications, applicationsets, repositories, certificates, accounts, gpgkeys, logs, exec ,extensions
Actions: get, create, update, delete, sync, override, action/<group/kind/action-name>

sync, override, and action/<group/kind/action-name> 仅对 applications 有效

给权限

给system管理员权限
gitops给gitops组权限
pre_user给qagroup组权限,不能创建projects
test_user都可以看,但只能操作localtest项目权限,相当于命名空间隔离。
test_user可以对default项目中的simplenginx应用,执行同步操作。
dev_user只能操作dev-开头的项目

默认只读
policy.default: role:readonly

在 argocd-rbac-cm Configmap 中增加以下 policy.csv 就可以看到 admin 创建的 app、仓库等信息了:
kubectl edit cm argocd-rbac-cm -n argocd

data:
  policy.default: role:readonly
  policy.csv: |
    p, role:gitops, applications, get, *, allow
    p, role:gitops, applications, create, *, allow
    p, role:gitops, applications, update, *, allow
    p, role:gitops, applications, sync, *, allow
    p, role:gitops, applications, override, *, allow
    p, role:gitops, applications, delete, *, allow
    p, role:gitops, applications, action/argoproj.io/Rollout/*, *, allow
    p, role:gitops, repositories, get, *, allow
    p, role:gitops, repositories, create, *, allow
    p, role:gitops, repositories, update, *, allow
    p, role:gitops, projects, create, *, allow
    p, role:gitops, projects, get, *, allow
    p, role:gitops, clusters, get, *, allow
    p, role:gitops, clusters, list, *, allow
    p, role:gitops, exec, create, */*, allow
    p, role:qagroup, applications, get, *, allow
    p, role:qagroup, applications, create, *, allow
    p, role:qagroup, applications, update, *, allow
    p, role:qagroup, applications, sync, *, allow
    p, role:qagroup, applications, override, *, allow
    p, role:qagroup, applications, delete, *, allow
    p, role:qagroup, applications, action/argoproj.io/Rollout/*, *, allow
    p, role:qagroup, repositories, get, *, allow
    p, role:qagroup, repositories, create, *, allow
    p, role:qagroup, repositories, update, *, allow
    p, role:qagroup, projects, get, *, allow
    p, role:qagroup, clusters, get, *, allow
    p, role:qagroup, clusters, list, *, allow
    p, role:qagroup, exec, create, *, allow
    p, role:devgroup, applications, get, dev-*/*, allow
    p, role:devgroup, applications, create, dev-*/*, allow
    p, role:devgroup, applications, update, dev-*/*, allow
    p, role:devgroup, applications, sync, dev-*/*, allow
    p, role:devgroup, applications, override, dev-*/*, allow
    p, role:devgroup, applications, delete, dev-*/*, allow
    p, role:devgroup, applications, action/argoproj.io/Rollout/*, *, allow
    p, role:devgroup, repositories, get, dev-*/*, allow
    p, role:devgroup, repositories, create, dev-*/*, allow
    p, role:devgroup, repositories, update, dev-*/*, allow
    p, role:devgroup, projects, get, dev-*/*, allow
    p, role:devgroup, clusters, get, dev-*/*, allow
    p, role:devgroup, clusters, list, dev-*/*, allow
    p, role:devgroup, exec, create, dev-*/*, allow   
    p, test_user, *, *, localtest/*, allow
    p, test_user, applications, create, default/simplenginx, deny
    p, test_user, applications, update, default/simplenginx, allow    
    p, test_user, applications, sync, default/simplenginx, allow
    p, test_user, applications, delete, default/simplenginx, allow
    p, test_user, applications, override, default/simplenginx, allow
    g, pre_user, role:qagroup
    g, dev_user, role:devgroup
    g, gitops, role:gitops
    g, system, role:admin
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: argocd-rbac-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-rbac-cm
  namespace: argocd

参考文档:

用户管理:https://argoproj.github.io/argo-cd/operator-manual/user-management/

RBAC控制:https://argoproj.github.io/argo-cd/operator-manual/rbac/
https://argo-cd.readthedocs.io/en/stable/operator-manual/rbac/

禁用 admin

方式一

kubectl patch -n argocd cm argocd-cm --type=json -p='[{"op":"replace","path":"/data/admin.enabled","value":"false"}]'

方式二

kubectl edit cm argocd-cm -n argocd
  admin.enabled: "false"

删除用户

kubectl patch -n argocd cm argocd-cm --type='json' -p='[{"op": "remove", "path": "/data/accounts.dev_user"}]'
kubectl patch -n argocd secrets argocd-secret --type='json' -p='[{"op": "remove", "path": "/data/accounts.dev_user.password"}]'

Web-based Terminal 终端功能

参考:https://argo-cd.readthedocs.io/en/stable/operator-manual/web_based_terminal/
从 Argo CD v2.4起,默认情况下会禁用此功能,它允许用户在他们拥有exec/create权限的应用程序管理的任何Pod上运行任意代码。

1.打开exec功能

kubectl edit cm argocd-cm -n argocd

data:
  exec.enabled: "true"

2.添加权限

添加到尾部
kubectl get clusterrole argocd-server
kubectl edit clusterrole argocd-server

- apiGroups:
  - ""
  resources:
  - pods/exec
  verbs:
  - create

3.添加RBAC权限

在相应组下添加权限
kubectl edit cm argocd-rbac-cm -n argocd

p, role:myrole, exec, create, */*, allow

Posted in 安装k8s/kubernetes.



k8s_安装12_operator_Prometheus+grafana

十二 operator安装Prometheus+grafana

Prometheus

Prometheus 本身只支持单机部署,没有自带支持集群部署,也不支持高可用以及水平扩容,它的存储空间受限于本地磁盘的容量。同时随着数据采集量的增加,单台 Prometheus 实例能够处理的时间序列数会达到瓶颈,这时 CPU 和内存都会升高,一般内存先达到瓶颈,主要原因有:

  • Prometheus 的内存消耗主要是因为每隔 2 小时做一个 Block 数据落盘,落盘之前所有数据都在内存里面,因此和采集量有关。
  • 加载历史数据时,是从磁盘到内存的,查询范围越大,内存越大。这里面有一定的优化空间。
  • 一些不合理的查询条件也会加大内存,如 Group 或大范围 Rate。
    这个时候要么加内存,要么通过集群分片来减少每个实例需要采集的指标。
    Prometheus 主张根据功能或服务维度进行拆分,即如果要采集的服务比较多,一个 Prometheus 实例就配置成仅采集和存储某一个或某一部分服务的指标,这样根据要采集的服务将 Prometheus 拆分成多个实例分别去采集,也能一定程度上达到水平扩容的目的。
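除按服务拆分外,常见的分片做法之一是用 hashmod relabel 让每个实例只采集一部分 target(示意配置,modulus 和分片号按实际实例数填写):

```yaml
scrape_configs:
  - job_name: node
    # ...kubernetes_sd_configs 等服务发现配置省略...
    relabel_configs:
      - source_labels: [__address__]
        modulus: 2              # 总分片数
        target_label: __tmp_hash
        action: hashmod
      - source_labels: [__tmp_hash]
        regex: "0"              # 当前实例只保留 hash 为 0 的目标
        action: keep
```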

安装选型

  • 原生 prometheus
    自行创造一切
    如果您已准备好了Prometheus组件、及其先决条件,则可以通过参考其相互之间的依赖关系,以正确的顺序为Prometheus、Alertmanager、Grafana的所有密钥、以及ConfigMaps等每个组件,手动部署YAML规范文件。这种方法通常非常耗时,并且需要花费大量的精力,去部署和管理Prometheus生态系统。同时,它还需要构建强大的文档,以便将其复制到其他环境中。

  • prometheus-operator
    Prometheus operator并非Prometheus官方组件,是由CoreOS公司研发
    使用Kubernetes Custom Resource简化部署与配置Prometheus、Alertmanager等相关的监控组件 ​
    官方安装文档: https://prometheus-operator.dev/docs/user-guides/getting-started/
    Prometheus Operator requires use of Kubernetes v1.16.x and up(需要Kubernetes版本至少在v1.16.x以上) ​
    官方Github地址:https://github.com/prometheus-operator/prometheus-operator

  • kube-prometheus
    kube-prometheus提供基于Prometheus & Prometheus Operator完整的集群监控配置示例,包括多实例Prometheus & Alertmanager部署与配置及node exporter的metrics采集,以及scrape Prometheus target各种不同的metrics endpoints,Grafana,并提供Alerting rules一些示例,触发告警集群潜在的问题 ​
    官方安装文档:https://prometheus-operator.dev/docs/prologue/quick-start/
    安装要求:https://github.com/prometheus-operator/kube-prometheus#compatibility
    ​官方Github地址:https://github.com/prometheus-operator/kube-prometheus

  • helm chart prometheus-community/kube-prometheus-stack
    提供类似kube-prometheus的功能,但是该项目是由Prometheus-community来维护,
    具体信息参考https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#kube-prometheus-stack

k8s operator 方式安装 Prometheus+grafana

用operator的方式部署Prometheus+Grafana,这是一种非常简单易用的方法
打开Prometheus operator的GitHub主页https://github.com/prometheus-operator/kube-prometheus,首先确认自己的kubernetes版本应该使用哪个版本的Prometheus operator.
我这里的kubernetes是1.28版本,因此使用的operator应该是release-0.13
https://github.com/prometheus-operator/kube-prometheus/tree/release-0.13

资源准备

安装资源准备

wget --no-check-certificate https://github.com/prometheus-operator/kube-prometheus/archive/refs/tags/v0.13.0.zip -O prometheus-0.13.0.zip
unzip prometheus-0.13.0.zip
cd kube-prometheus-0.13.0

提取image

cat manifests/*.yaml|grep image:|sed -e 's/.*image: //'|sort|uniq
提取出image地址

grafana/grafana:9.5.3
jimmidyson/configmap-reload:v0.5.0
quay.io/brancz/kube-rbac-proxy:v0.14.2
quay.io/prometheus/alertmanager:v0.26.0
quay.io/prometheus/blackbox-exporter:v0.24.0
quay.io/prometheus/node-exporter:v1.6.1
quay.io/prometheus-operator/prometheus-operator:v0.67.1
quay.io/prometheus/prometheus:v2.46.0
registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.9.2
registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.11.1

推送到私仓

网络不好时可手动下载镜像,并推送至私仓repo.k8s.local
注意:需事先配置好私仓repo.k8s.local,并建立相应项目及权限.

#registry.k8s.io地址下的
registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.9.2
registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.11.1

docker pull k8s.dockerproxy.com/kube-state-metrics/kube-state-metrics:v2.9.2
docker pull k8s.dockerproxy.com/prometheus-adapter/prometheus-adapter:v0.11.1

docker tag k8s.dockerproxy.com/kube-state-metrics/kube-state-metrics:v2.9.2 repo.k8s.local/registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.9.2
docker tag k8s.dockerproxy.com/prometheus-adapter/prometheus-adapter:v0.11.1 repo.k8s.local/registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.11.1

docker push repo.k8s.local/registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.9.2
docker push repo.k8s.local/registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.11.1
#重命名docker.io下的
docker pull jimmidyson/configmap-reload:v0.5.0
docker pull grafana/grafana:9.5.3

docker tag jimmidyson/configmap-reload:v0.5.0 repo.k8s.local/docker.io/jimmidyson/configmap-reload:v0.5.0
docker tag grafana/grafana:9.5.3 repo.k8s.local/docker.io/grafana/grafana:9.5.3
docker push repo.k8s.local/docker.io/jimmidyson/configmap-reload:v0.5.0
docker push repo.k8s.local/docker.io/grafana/grafana:9.5.3
#quay.io 下还有一个 prometheus-config-reloader,出现在 kube-prometheus-0.13.0/manifests/prometheusOperator-deployment.yaml 的启动参数中:
#     - --prometheus-config-reloader=repo.k8s.local/quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1
docker pull  quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1
docker tag quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1 repo.k8s.local/quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1
docker push repo.k8s.local/quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1
#使用脚本批量下载quay.io
vi images.txt
quay.io/prometheus/alertmanager:v0.26.0
quay.io/prometheus/blackbox-exporter:v0.24.0
quay.io/brancz/kube-rbac-proxy:v0.14.2
quay.io/prometheus/node-exporter:v1.6.1
quay.io/prometheus-operator/prometheus-operator:v0.67.1
quay.io/prometheus/prometheus:v2.46.0

vim auto-pull-and-push-images.sh

#!/bin/bash
#新镜像标签:默认取当前时间作为标签名
imageNewTag=`date +%Y%m%d-%H%M%S`
#镜像仓库地址
registryAddr="repo.k8s.local/"

#循环读取images.txt,并存入list中
n=0

for line in $(cat images.txt | grep ^[^#])
do
    list[$n]=$line
    ((n+=1))
done

echo "需推送的镜像地址如下:"
for variable in ${list[@]}
do
    echo ${variable}
done

for variable in ${list[@]}
do
    #下载镜像
    echo "准备拉取镜像: $variable"
    docker pull $variable

    # #获取拉取的镜像ID
    imageId=`docker images -q $variable`
    echo "[$variable]拉取完成后的镜像ID: $imageId"

    #获取完整的镜像名
    imageFormatName=`docker images --format "{{.Repository}}:{{.Tag}}:{{.ID}}" |grep $variable`
    echo "imageFormatName:$imageFormatName"

    #最开头地址
  #如:quay.io/prometheus-operator/prometheus-operator:v0.67.1  -> quay.io
  repository=${imageFormatName}
    repositoryurl=${imageFormatName%%/*}
    echo "repositoryurl :$repositoryurl"

    #删掉第一个:及其右边的字符串
  #如:quay.io/prometheus-operator/prometheus-operator:v0.67.1:b6ec194a1a0 -> quay.io/prometheus-operator/prometheus-operator:v0.67.1
    repository=${repository%:*}

    echo "新镜像地址: $registryAddr$repository"

    #重新打镜像标签
    docker tag $imageId $registryAddr$repository

    # #推送镜像
    docker push $registryAddr$repository
  echo -e "\n"
done

chmod 755 auto-pull-and-push-images.sh
./auto-pull-and-push-images.sh

替换yaml中image地址为私仓

#测试
sed -n "/image:/{s/image: jimmidyson/image: repo.k8s.local\/docker.io\/jimmidyson/p}" `grep 'image: jimmidyson' ./manifests/ -rl`
sed -n "/image:/{s/image: grafana/image: repo.k8s.local\/docker.io\/grafana/p}" `grep 'image: grafana' ./manifests/ -rl`
sed -n "/image:/{s/image: registry.k8s.io/image: repo.k8s.local\/registry.k8s.io/p}" `grep 'image: registry.k8s.io' ./manifests/ -rl`
sed -n "/image:/{s/image: quay.io/image: repo.k8s.local\/quay.io/p}" `grep 'image: quay.io' ./manifests/ -rl`

#替换
sed -i "/image:/{s/image: jimmidyson/image: repo.k8s.local\/docker.io\/jimmidyson/}" `grep 'image: jimmidyson' ./manifests/ -rl`
sed -i "/image:/{s/image: grafana/image: repo.k8s.local\/docker.io\/grafana/}" `grep 'image: grafana' ./manifests/ -rl`
sed -i "/image:/{s/image: registry.k8s.io/image: repo.k8s.local\/registry.k8s.io/}" `grep 'image: registry.k8s.io' ./manifests/ -rl`
sed -i "/image:/{s/image: quay.io/image: repo.k8s.local\/quay.io/}" `grep 'image: quay.io' ./manifests/ -rl`
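可以先拿一条样例行在本地验证 sed 表达式的效果(示意):

```shell
# 模拟 manifests 中的一行 image 配置
line='        image: quay.io/prometheus/prometheus:v2.46.0'
# 与上面相同的替换表达式,前缀替换为私仓地址
echo "$line" | sed "/image:/{s/image: quay.io/image: repo.k8s.local\/quay.io/}"
# 输出:        image: repo.k8s.local/quay.io/prometheus/prometheus:v2.46.0
```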

#重新验证
cat manifests/*.yaml|grep image:|sed -e 's/.*image: //'
manifests/prometheusOperator-deployment.yaml
      containers:
      - args:
        - --kubelet-service=kube-system/kubelet
        - --prometheus-config-reloader=repo.k8s.local/quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1
        image: repo.k8s.local/quay.io/prometheus-operator/prometheus-operator:v0.67.1
        name: prometheus-operator
修改prometheus-config-reloader
       - --prometheus-config-reloader=repo.k8s.local/quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1

安装Prometheus+Grafana(安装和启动)

首先,回到kube-prometheus-0.13.0 目录,执行以下命令开始安装

kubectl apply --server-side -f manifests/setup

customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheusagents.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/scrapeconfigs.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com serverside-applied
namespace/monitoring serverside-applied
kubectl apply -f manifests/
alertmanager.monitoring.coreos.com/main created
networkpolicy.networking.k8s.io/alertmanager-main created
poddisruptionbudget.policy/alertmanager-main created
prometheusrule.monitoring.coreos.com/alertmanager-main-rules created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager-main created
clusterrole.rbac.authorization.k8s.io/blackbox-exporter created
clusterrolebinding.rbac.authorization.k8s.io/blackbox-exporter created
configmap/blackbox-exporter-configuration created
deployment.apps/blackbox-exporter created
networkpolicy.networking.k8s.io/blackbox-exporter created
service/blackbox-exporter created
serviceaccount/blackbox-exporter created
servicemonitor.monitoring.coreos.com/blackbox-exporter created
secret/grafana-config created
secret/grafana-datasources created
configmap/grafana-dashboard-alertmanager-overview created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-grafana-overview created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-multicluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes-darwin created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
networkpolicy.networking.k8s.io/grafana created
prometheusrule.monitoring.coreos.com/grafana-rules created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
prometheusrule.monitoring.coreos.com/kube-prometheus-rules created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
networkpolicy.networking.k8s.io/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kube-state-metrics-rules created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kubernetes-monitoring-rules created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
networkpolicy.networking.k8s.io/node-exporter created
prometheusrule.monitoring.coreos.com/node-exporter-rules created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
networkpolicy.networking.k8s.io/prometheus-k8s created
poddisruptionbudget.policy/prometheus-k8s created
prometheus.monitoring.coreos.com/k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-k8s created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io configured
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader configured
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
networkpolicy.networking.k8s.io/prometheus-adapter created
poddisruptionbudget.policy/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
servicemonitor.monitoring.coreos.com/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
networkpolicy.networking.k8s.io/prometheus-operator created
prometheusrule.monitoring.coreos.com/prometheus-operator-rules created
service/prometheus-operator created
serviceaccount/prometheus-operator created
servicemonitor.monitoring.coreos.com/prometheus-operator created
kubectl get pods -o wide -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE    IP              NODE                 NOMINATED NODE   READINESS GATES
alertmanager-main-0                    2/2     Running   0          82s    10.244.1.6      node01.k8s.local     <none>           <none>
alertmanager-main-1                    2/2     Running   0          82s    10.244.1.7      node01.k8s.local     <none>           <none>
alertmanager-main-2                    2/2     Running   0          82s    10.244.2.3      node02.k8s.local     <none>           <none>
blackbox-exporter-76847bbff-wt77c      3/3     Running   0          104s   10.244.2.252    node02.k8s.local     <none>           <none>
grafana-5955685bfd-shf4s               1/1     Running   0          103s   10.244.2.253    node02.k8s.local     <none>           <none>
kube-state-metrics-7dddfffd96-2ktrs    3/3     Running   0          103s   10.244.1.4      node01.k8s.local     <none>           <none>
node-exporter-g8d5k                    2/2     Running   0          102s   192.168.244.4   master01.k8s.local   <none>           <none>
node-exporter-mqqkc                    2/2     Running   0          102s   192.168.244.7   node02.k8s.local     <none>           <none>
node-exporter-zpfl2                    2/2     Running   0          102s   192.168.244.5   node01.k8s.local     <none>           <none>
prometheus-adapter-6db6c659d4-25lgm    1/1     Running   0          100s   10.244.1.5      node01.k8s.local     <none>           <none>
prometheus-adapter-6db6c659d4-ps5mz    1/1     Running   0          100s   10.244.2.254    node02.k8s.local     <none>           <none>
prometheus-k8s-0                       2/2     Running   0          81s    10.244.1.8      node01.k8s.local     <none>           <none>
prometheus-k8s-1                       2/2     Running   0          81s    10.244.2.4      node02.k8s.local     <none>           <none>
prometheus-operator-797d795d64-4wnw2   2/2     Running   0          99s    10.244.2.2      node02.k8s.local     <none>           <none>
kubectl get svc -n monitoring -o wide
NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE     SELECTOR
alertmanager-main       ClusterIP   10.96.71.121   <none>        9093/TCP,8080/TCP            2m10s   app.kubernetes.io/component=alert-router,app.kubernetes.io/instance=main,app.kubernetes.io/name=alertmanager,app.kubernetes.io/part-of=kube-prometheus
alertmanager-operated   ClusterIP   None           <none>        9093/TCP,9094/TCP,9094/UDP   108s    app.kubernetes.io/name=alertmanager
blackbox-exporter       ClusterIP   10.96.33.150   <none>        9115/TCP,19115/TCP           2m10s   app.kubernetes.io/component=exporter,app.kubernetes.io/name=blackbox-exporter,app.kubernetes.io/part-of=kube-prometheus
grafana                 ClusterIP   10.96.12.88    <none>        3000/TCP                     2m9s    app.kubernetes.io/component=grafana,app.kubernetes.io/name=grafana,app.kubernetes.io/part-of=kube-prometheus
kube-state-metrics      ClusterIP   None           <none>        8443/TCP,9443/TCP            2m9s    app.kubernetes.io/component=exporter,app.kubernetes.io/name=kube-state-metrics,app.kubernetes.io/part-of=kube-prometheus
node-exporter           ClusterIP   None           <none>        9100/TCP                     2m8s    app.kubernetes.io/component=exporter,app.kubernetes.io/name=node-exporter,app.kubernetes.io/part-of=kube-prometheus
prometheus-adapter      ClusterIP   10.96.24.212   <none>        443/TCP                      2m7s    app.kubernetes.io/component=metrics-adapter,app.kubernetes.io/name=prometheus-adapter,app.kubernetes.io/part-of=kube-prometheus
prometheus-k8s          ClusterIP   10.96.57.42    <none>        9090/TCP,8080/TCP            2m8s    app.kubernetes.io/component=prometheus,app.kubernetes.io/instance=k8s,app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=kube-prometheus
prometheus-operated     ClusterIP   None           <none>        9090/TCP                     107s    app.kubernetes.io/name=prometheus
prometheus-operator     ClusterIP   None           <none>        8443/TCP                     2m6s    app.kubernetes.io/component=controller,app.kubernetes.io/name=prometheus-operator,app.kubernetes.io/part-of=kube-prometheus
kubectl get svc -n monitoring
NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
alertmanager-main       ClusterIP   10.96.71.121   <none>        9093/TCP,8080/TCP            93m
alertmanager-operated   ClusterIP   None           <none>        9093/TCP,9094/TCP,9094/UDP   93m
blackbox-exporter       ClusterIP   10.96.33.150   <none>        9115/TCP,19115/TCP           93m
grafana                 ClusterIP   10.96.12.88    <none>        3000/TCP                     93m
kube-state-metrics      ClusterIP   None           <none>        8443/TCP,9443/TCP            93m
node-exporter           ClusterIP   None           <none>        9100/TCP                     93m
prometheus-adapter      ClusterIP   10.96.24.212   <none>        443/TCP                      93m
prometheus-k8s          ClusterIP   10.96.57.42    <none>        9090/TCP,8080/TCP            93m
prometheus-operated     ClusterIP   None           <none>        9090/TCP                     93m
prometheus-operator     ClusterIP   None           <none>        8443/TCP                     93m

blackbox_exporter: an official Prometheus project for network probing (DNS, ping, HTTP checks).
node-exporter: a Prometheus exporter that collects node-level metrics such as CPU, memory, and disk usage.
prometheus: the monitoring server; it scrapes exporters such as node-exporter and stores the samples as time series.
kube-state-metrics: exposes the state of Kubernetes objects (Pods, Deployments, etc.) as metrics queryable with PromQL.
prometheus-adapter: aggregates into the apiserver; a custom-metrics-apiserver implementation.

Create Ingress resources

These make the services reachable by domain name; an ingress controller must already be installed.

cat > prometheus-ingress.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-prometheus
  namespace: monitoring
  labels:
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/part-of: monitoring
  annotations:
    #kubernetes.io/ingress.class: "nginx"
    #nginx.ingress.kubernetes.io/rewrite-target: /  #rewrite
spec:
  ingressClassName: nginx
  rules:
  - host: prometheus.k8s.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-k8s
            port:
              name: web
              #number: 9090
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-grafana
  namespace: monitoring
  labels:
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/part-of: monitoring
  annotations:
    #kubernetes.io/ingress.class: "nginx"
    #nginx.ingress.kubernetes.io/rewrite-target: /  #rewrite
spec:
  ingressClassName: nginx
  rules:
  - host: grafana.k8s.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              name: http
              #number: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-alertmanager
  namespace: monitoring
  labels:
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/part-of: monitoring
  annotations:
    #kubernetes.io/ingress.class: "nginx"
    #nginx.ingress.kubernetes.io/rewrite-target: /  #rewrite
spec:
  ingressClassName: nginx
  rules:
  - host: alertmanager.k8s.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: alertmanager-main
            port:
              name: web
              #number: 9093

EOF
kubectl delete -f  prometheus-ingress.yaml  
kubectl apply -f  prometheus-ingress.yaml  
kubectl get ingress -A

Add the domain names to the hosts file

127.0.0.1 prometheus.k8s.local
127.0.0.1 grafana.k8s.local
127.0.0.1 alertmanager.k8s.local
#test via ClusterIP
curl -k  -H "Host:prometheus.k8s.local"  http://10.96.57.42:9090/graph
curl -k  -H "Host:grafana.k8s.local"  http://10.96.12.88:3000/login
curl -k  -H "Host:alertmanager.k8s.local"  http://10.96.71.121:9093/
#test DNS resolution
curl -k  http://prometheus-k8s.monitoring.svc:9090
#test from inside a test pod
kubectl exec -it pod/test-pod-1 -n test -- ping prometheus-k8s.monitoring

Access in a browser:
http://prometheus.k8s.local:30180/
http://grafana.k8s.local:30180/
default login: admin/admin
http://alertmanager.k8s.local:30180/#/alerts

#restart pods
kubectl get pods -n monitoring

kubectl rollout restart deployment/grafana -n monitoring
kubectl rollout restart sts/prometheus-k8s -n monitoring

Uninstall

kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup

Changing the displayed time zone in Prometheus

To avoid time-zone confusion, Prometheus deliberately uses Unix time and UTC throughout all of its components. It supports no time-zone setting in its configuration file and does not read the host's /etc/timezone.

In practice this limitation rarely hurts:

For visualization, Grafana can convert time zones.

If you call the HTTP API, you receive raw timestamps and can convert them however you like.

If the UTC display in Prometheus's built-in UI bothers you, the new web UI introduced in version 2.16 has a Local Timezone option.
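For example, converting a raw timestamp from the API is a one-liner with GNU date (the timestamp below is an arbitrary sample):

```shell
# Prometheus hands out Unix time (UTC); convert at display time in any zone.
ts=1700000000   # arbitrary sample timestamp
TZ=UTC date -d "@$ts" '+%F %T %Z'            # prints 2023-11-14 22:13:20 UTC
TZ=Asia/Shanghai date -d "@$ts" '+%F %T %Z'  # prints 2023-11-15 06:13:20 CST
```

The eight-hour gap between the two lines is exactly the offset discussed below.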

Changing the displayed time zone in Grafana

By default the dashboards display UTC, eight hours behind Shanghai time.
For dashboards that were already imported, changing the time zone in the dashboard's general settings or in the profile preferences has no effect.

If installed with Helm, change values.yaml:

   ##defaultDashboardsTimezone: utc
   defaultDashboardsTimezone: "Asia/Shanghai"

Option 1
Change the query time zone by hand every time.

Option 2
Export a separate copy of the dashboard with the time zone changed.

Option 3
Change the time zone in the dashboard definitions before importing:
cat grafana-dashboardDefinitions.yaml|grep -C 2 timezone

              ]
          },
          "timezone": "utc",
          "title": "Alertmanager / Overview",
          "uid": "alertmanager-overview",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / API server",
          "uid": "09ec8aa1e996d6ffcd6817bbaff4db1b",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Networking / Cluster",
          "uid": "ff635a025bcfea7bc3dd4f508990a3e9",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Controller Manager",
          "uid": "72e0e05bef5099e5f049b05fdc429ed4",
--
              ]
          },
          "timezone": "",
          "title": "Grafana Overview",
          "uid": "6be0s85Mk",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Compute Resources / Cluster",
          "uid": "efa86fd1d0c121a26444b636a3f509a8",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Compute Resources /  Multi-Cluster",
          "uid": "b59e6c9f2fcbe2e16d77fc492374cc4f",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Compute Resources / Namespace (Pods)",
          "uid": "85a562078cdf77779eaa1add43ccec1e",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Compute Resources / Node (Pods)",
          "uid": "200ac8fdbfbb74b39aff88118e4d1c2c",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Compute Resources / Pod",
          "uid": "6581e46e4e5c7ba40a07646395ef7b23",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Compute Resources / Workload",
          "uid": "a164a7f0339f99e89cea5cb47e9be617",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Compute Resources / Namespace (Workloads)",
          "uid": "a87fb0d919ec0ea5f6543124e16c42a5",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Kubelet",
          "uid": "3138fa155d5915769fbded898ac09fd9",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Networking / Namespace (Pods)",
          "uid": "8b7a8b326d7a6f1f04244066368c67af",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Networking / Namespace (Workload)",
          "uid": "bbb2a765a623ae38130206c7d94a160f",
--
              ]
          },
          "timezone": "utc",
          "title": "Node Exporter / USE Method / Cluster",
          "version": 0
--
              ]
          },
          "timezone": "utc",
          "title": "Node Exporter / USE Method / Node",
          "version": 0
--
              ]
          },
          "timezone": "utc",
          "title": "Node Exporter / MacOS",
          "version": 0
--
              ]
          },
          "timezone": "utc",
          "title": "Node Exporter / Nodes",
          "version": 0
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Persistent Volumes",
          "uid": "919b92a8e8041bd567af9edab12c840c",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Networking / Pod",
          "uid": "7a18067ce943a40ae25454675c19ff5c",
--
              ]
          },
          "timezone": "browser",
          "title": "Prometheus / Remote Write",
          "version": 0
--
              ]
          },
          "timezone": "utc",
          "title": "Prometheus / Overview",
          "uid": "",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Proxy",
          "uid": "632e265de029684c40b21cb76bca4f94",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Scheduler",
          "uid": "2e6b6a3b4bddf1427b3a55aa1311c656",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Networking / Workload",
          "uid": "728bf77cc1166d2f3133bf25846876cc",

Strip the UTC time zone:
sed -rn '/"timezone":/{s/"timezone": ".*"/"timezone": ""/p}' grafana-dashboardDefinitions.yaml
sed -i '/"timezone":/{s/"timezone": ".*"/"timezone": ""/}' grafana-dashboardDefinitions.yaml
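Before running the substitution against the real manifest, it can be sanity-checked on a throwaway sample (the /tmp file name is just for illustration):

```shell
# Try the timezone-clearing sed on a small sample file first.
cat > /tmp/tz-sample.yaml <<'EOF'
          "timezone": "UTC",
          "title": "Kubernetes / Kubelet",
          "timezone": "utc",
          "title": "Node Exporter / Nodes",
EOF
sed -i 's/"timezone": ".*"/"timezone": ""/' /tmp/tz-sample.yaml
grep -c '"timezone": ""' /tmp/tz-sample.yaml   # prints 2
```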

Data persistence

Nothing is persisted by default; configuration is lost when a pod restarts.

Prepare the PVC

Have a StorageClass ready in advance.
Note that the namespace must match the service's namespace.

cat > grafana-pvc.yaml  << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
  namespace: monitoring
spec:
  storageClassName: managed-nfs-storage  
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF

Modify the YAML files

Grafana storage:
grafana-deployment.yaml

      serviceAccountName: grafana
      volumes:
      - emptyDir: {}
        name: grafana-storage

Change it to:

      serviceAccountName: grafana
      volumes:
      - persistentVolumeClaim:
          claimName: grafana-pvc
        name: grafana-storage

Add storage under spec:
prometheus-prometheus.yaml

  namespace: monitoring
spec:
  storage:
      volumeClaimTemplate:
        spec:
          storageClassName: managed-nfs-storage
          resources:
            requests:
              storage: 10Gi

Add some permissions; the complete rules section after the change is shown below, with the additions under resources and verbs.
prometheus-clusterRole.yaml

# original
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get

# modified
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - /metrics
  verbs:
  - get

Then, optionally, run the following to grant admin privileges (use with discretion):

kubectl create clusterrolebinding kube-state-metrics-admin-binding \
--clusterrole=cluster-admin  \
--user=system:serviceaccount:monitoring:kube-state-metrics
kubectl apply -f grafana-pvc.yaml
kubectl apply -f prometheus-clusterRole.yaml

kubectl apply -f grafana-deployment.yaml
kubectl apply -f prometheus-prometheus.yaml

kubectl get pv,pvc -o wide
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                           STORAGECLASS          REASON   AGE   VOLUMEMODE
persistentvolume/pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19   10Gi       RWX            Delete           Bound      monitoring/grafana-pvc                          managed-nfs-storage            25h   Filesystem
persistentvolume/pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e   10Gi       RWO            Delete           Bound      monitoring/prometheus-k8s-db-prometheus-k8s-1   managed-nfs-storage            25h   Filesystem
persistentvolume/pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca   10Gi       RWO            Delete           Bound      monitoring/prometheus-k8s-db-prometheus-k8s-0   managed-nfs-storage            25h   Filesystem

Change the reclaim policy of the dynamically provisioned PVs to Retain; otherwise the data is deleted when the PVC goes away (for example, on reinstall).

#PVs are cluster-scoped, so no -n flag is needed
kubectl edit pv pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19
persistentVolumeReclaimPolicy: Retain

kubectl edit pv pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e
kubectl edit pv pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca

kubectl get pods -n monitoring

Check on the NFS server that data was created:

ll /nfs/k8s/dpv/
total 0
drwxrwxrwx. 2 root root  6 Oct 24 18:19 default-test-pvc2-pvc-f9153444-5653-4684-a845-83bb313194d1
drwxrwxrwx. 2 root root  6 Nov 22 15:45 monitoring-grafana-pvc-pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19
drwxrwxrwx. 3 root root 27 Nov 22 15:52 monitoring-prometheus-k8s-db-prometheus-k8s-0-pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca
drwxrwxrwx. 3 root root 27 Nov 22 15:52 monitoring-prometheus-k8s-db-prometheus-k8s-1-pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e

kubectl logs -f prometheus-k8s-0 prometheus -n monitoring

Custom pod/service auto-discovery configuration

Goal:
services or pods started by users should be discovered by Prometheus automatically once the right annotations are added:

annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "9121"
  1. Store the auto-discovery configuration in a secret
    For these annotations to be discovered, add the following configuration for Prometheus:
    prometheus-additional.yaml

cat > prometheus-additional.yaml << EOF
- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
    action: replace
    target_label: __scheme__
    regex: (https?)
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
    action: replace
    target_label: __address__
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: \$1:\$2
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_service_name]
    action: replace
    target_label: kubernetes_name
EOF
The \$ escapes keep the replacement variables literal, so check the file after creation:
cat prometheus-additional.yaml

The configuration above keeps only endpoints annotated with prometheus.io/scrape=true
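What the __address__ rewrite rule does can be mimicked outside the cluster with sed -E; the ERE below is an approximation of Prometheus's RE2 pattern, and the sample addresses are made up:

```shell
# Simulate the __address__ relabel: join "host[:port];annotation-port" into "host:annotation-port".
echo '10.244.1.4:8080;9121' | sed -E 's/([^:]+)(:[0-9]+)?;([0-9]+)/\1:\3/'   # prints 10.244.1.4:9121
echo '10.244.1.4;9121'      | sed -E 's/([^:]+)(:[0-9]+)?;([0-9]+)/\1:\3/'   # prints 10.244.1.4:9121
```

Either way the scrape port from the prometheus.io/port annotation replaces whatever port the endpoint advertised.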

Add this to any service that should be monitored (lowercase "true"; the keep regex above is case-sensitive, so "True" would not match):

  annotations:
     prometheus.io/scrape: "true"

Save the configuration above as a secret:

kubectl delete secret additional-configs -n monitoring
kubectl create secret generic additional-configs --from-file=prometheus-additional.yaml -n monitoring
secret "additional-configs" created
kubectl get secret additional-configs -n monitoring  -o yaml 
  2. Attach the configuration to the Prometheus instance
    Modify the Prometheus CRD to reference the secret created above:

vi prometheus-prometheus.yaml

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: k8s
  name: k8s
  namespace: monitoring
spec:
  ......
  additionalScrapeConfigs:
    name: additional-configs
    key: prometheus-additional.yaml
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: 2.46.0

kubectl apply -f prometheus-prometheus.yaml

Once the Prometheus CRD change is in, check on the Prometheus dashboard whether the config was updated.
http://prometheus.k8s.local:30180/targets?search=#pool-kubernetes-service-endpoints

kubectl get pods -n monitoring -o wide
kubectl rollout restart sts/prometheus-k8s -n monitoring
kubectl logs -f prometheus-k8s-0 prometheus -n monitoring

NFS restart leaves services returning 503 and pods that cannot be stopped

#df -h hangs; NFS is stuck and the client machine must be rebooted
kubectl get pods -n monitoring

kubectl delete -f prometheus-prometheus.yaml
kubectl delete pod prometheus-k8s-1  -n monitoring
kubectl delete pod prometheus-k8s-1 --grace-period=0 --force --namespace monitoring

kubectl delete -f grafana-deployment.yaml
kubectl apply -f grafana-deployment.yaml

kubectl apply -f prometheus-prometheus.yaml
kubectl logs -n monitoring pod prometheus-k8s-0 
kubectl describe -n monitoring pod prometheus-k8s-0 
kubectl describe -n monitoring pod prometheus-k8s-1 
kubectl describe -n monitoring pod grafana-65fdddb9c7-xml6m  

kubectl get pv,pvc -o wide

persistentvolume/pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19   10Gi       RWX            Delete           Bound      monitoring/grafana-pvc                          managed-nfs-storage            25h   Filesystem
persistentvolume/pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e   10Gi       RWO            Delete           Bound      monitoring/prometheus-k8s-db-prometheus-k8s-1   managed-nfs-storage            25h   Filesystem
persistentvolume/pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca   10Gi       RWO            Delete           Bound      monitoring/prometheus-k8s-db-prometheus-k8s-0   managed-nfs-storage            25h   Filesystem
persistentvolume/pvc-f9153444-5653-4684-a845-83bb313194d1   300Mi      RWX            Retain           Released   default/test-pvc2                               managed-nfs-storage            29d   Filesystem

#completely remove and reinstall
kubectl delete -f manifests/
kubectl apply -f manifests/

When NFS fails, processes reading the NFS-mounted directory block until timeout, eventually exhausting the thread pool, so the pod stops answering Kubernetes health checks. Kubernetes then restarts the pod, but because NFS is still down the umount during termination also hangs, leaving the pod stuck in Terminating.

Unmount the NFS mount points on the node where the pod was originally scheduled:
mount -l | grep nfs

sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
192.168.244.6:/nfs/k8s/dpv/monitoring-prometheus-k8s-db-prometheus-k8s-1-pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e on /var/lib/kubelet/pods/67309a97-b69c-4423-9353-74863d55b3be/volumes/kubernetes.io~nfs/pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e type nfs4 (rw,relatime,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.244.7,local_lock=none,addr=192.168.244.6)
192.168.244.6:/nfs/k8s/dpv/monitoring-prometheus-k8s-db-prometheus-k8s-1-pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e/prometheus-db on /var/lib/kubelet/pods/67309a97-b69c-4423-9353-74863d55b3be/volume-subpaths/pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e/prometheus/2 type nfs4 (rw,relatime,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.244.7,local_lock=none,addr=192.168.244.6)
umount -l -f /var/lib/kubelet/pods/67309a97-b69c-4423-9353-74863d55b3be/volumes/kubernetes.io~nfs/pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e

Change the default mount mode to soft

vi /etc/nfsmount.conf
Soft=True

soft: with a soft mount, if the network or server fails and data can no longer be transferred, the client retries until the timeout, then reports an error and stops. Data may be lost when the timeout hits, so soft mounts are generally discouraged.
hard: the default, and the opposite of soft. The client keeps retrying until the server responds, then resumes the interrupted operation. While it retries, the mount can be neither umounted nor the process killed, so hard is usually combined with intr.
intr: when a hard-mounted resource times out, intr allows the operation to be interrupted, preventing the whole system from being locked up by NFS. Recommended.
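To see which mode existing mounts actually use, pull the option list out of the mount output; a sketch against a captured sample line (on a real node, pipe `mount -l | grep nfs` in instead):

```shell
# Extract hard/soft from the parenthesised option list of a mount line.
sample='192.168.244.6:/nfs/k8s/dpv on /var/lib/kubelet/pv type nfs4 (rw,relatime,vers=4.1,hard,proto=tcp,timeo=600)'
echo "$sample" | awk -F'[()]' '{n=split($2,o,","); for(i=1;i<=n;i++) if(o[i]=="hard"||o[i]=="soft") print o[i]}'   # prints hard
```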

After a StatefulSet's PV is deleted, the restarted pod still looks for the original PV, so deleting PVs is not recommended.

kubectl get pv -o wide
pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e   10Gi       RWO            Retain           Bound      monitoring/prometheus-k8s-db-prometheus-k8s-1   managed-nfs-storage            26h   Filesystem
pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca   10Gi       RWO            Retain           Bound      monitoring/prometheus-k8s-db-prometheus-k8s-0   managed-nfs-storage            26h   Filesystem

kubectl patch pv pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e -p '{"metadata":{"finalizers":null}}'
kubectl patch pv pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca -p '{"metadata":{"finalizers":null}}'
kubectl delete pv pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e
kubectl delete pv pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca 

kubectl describe pvc pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19 | grep Mounted
kubectl patch pv pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19 -p '{"metadata":{"finalizers":null}}'
kubectl delete pv pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19

Recovering PVs

To recover grafana-pvc, locate the original mount point under the NFS dynamic-PV directory: monitoring-grafana-pvc-pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19

kubectl describe -n monitoring pod grafana-65fdddb9c7-xml6m
default-scheduler 0/3 nodes are available: persistentvolumeclaim "grafana-pvc" bound to non-existent persistentvolume "pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19". preemption: 0/3 nodes are available:
3 Preemption is not helpful for scheduling..

cat > rebuid-grafana-pvc.yaml  << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19
  labels:
    pv: pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName:  managed-nfs-storage 
  nfs:
    path: /nfs/k8s/dpv/monitoring-grafana-pvc-pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19
    server: 192.168.244.6
EOF
kubectl apply -f ../k8s/rebuid-grafana-pvc.yaml 

Recover prometheus-k8s-0

kubectl describe -n monitoring pod prometheus-k8s-0
Warning FailedScheduling 14m (x3 over 24m) default-scheduler 0/3 nodes are available: persistentvolumeclaim "prometheus-k8s-db-prometheus-k8s-0" bound to non-existent persistentvolume "pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca". preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..

cat > rebuid-prometheus-k8s-0-pv.yaml  << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca
  labels:
    pv: pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName:  managed-nfs-storage 
  nfs:
    path: /nfs/k8s/dpv/monitoring-prometheus-k8s-db-prometheus-k8s-0-pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca
    server: 192.168.244.6
EOF

kubectl describe -n monitoring pod prometheus-k8s-1
Warning FailedScheduling 19m (x3 over 29m) default-scheduler 0/3 nodes are available: persistentvolumeclaim "prometheus-k8s-db-prometheus-k8s-1" bound to non-existent persistentvolume "pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e". preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..

cat > rebuid-prometheus-k8s-1-pv.yaml  << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e
  labels:
    pv: pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName:  managed-nfs-storage 
  nfs:
    path: /nfs/k8s/dpv/monitoring-prometheus-k8s-db-prometheus-k8s-1-pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e
    server: 192.168.244.6
EOF
kubectl apply -f rebuid-prometheus-k8s-0-pv.yaml 
kubectl apply -f rebuid-prometheus-k8s-1-pv.yaml 

kubectl get pv -o wide
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                           STORAGECLASS          REASON   AGE     VOLUMEMODE
pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19   10Gi       RWX            Retain           Bound    monitoring/grafana-pvc                          managed-nfs-storage            9m17s   Filesystem
pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e   10Gi       RWX            Retain           Bound    monitoring/prometheus-k8s-db-prometheus-k8s-1   managed-nfs-storage            17s     Filesystem
pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca   10Gi       RWX            Retain           Bound    monitoring/prometheus-k8s-db-prometheus-k8s-0   managed-nfs-storage            2m37s   Filesystem
kubectl get pods -n monitoring
kubectl -n monitoring logs -f prometheus-k8s-1

Error from server (BadRequest): container "prometheus" in pod "prometheus-k8s-1" is waiting to start: PodInitializing
iowait is very high:
iostat -kx 1
and many mount processes are running:
ps aux|grep mount

mount -t nfs 192.168.244.6:/nfs/k8s/dpv/monitoring-prometheus-k8s-db-prometheus-k8s-1-pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e ./tmp
showmount -e 192.168.244.6
Export list for 192.168.244.6:
/nfs/k8s/dpv     *
/nfs/k8s/spv_003 *
/nfs/k8s/spv_002 *
/nfs/k8s/spv_001 *
/nfs/k8s/web     *

mount -v -t nfs 192.168.244.6:/nfs/k8s/web ./tmp
mount.nfs: timeout set for Fri Nov 24 14:33:04 2023
mount.nfs: trying text-based options 'soft,vers=4.1,addr=192.168.244.6,clientaddr=192.168.244.5'

mount -v -t nfs -o vers=3  192.168.244.6:/nfs/k8s/web ./tmp
#NFSv3 mounts successfully

If the NFS service on the server is stopped while a client still has it mounted, df -h hangs on the client.
Kill the processes holding the mount point, restart the NFS services on both client and server and remount, or reboot the machine.
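Processes blocked on a dead NFS server sit in uninterruptible sleep (state D), which is why kill has no effect on them; they can be spotted from ps output. Sketched here against canned output; on a real machine feed it `ps -eo pid,stat,comm` directly:

```shell
# Find PIDs in uninterruptible sleep (state starting with D), typical of a hung NFS mount.
ps_sample='  PID STAT COMMAND
  123 D    mount.nfs
  456 Ssl  dockerd'
echo "$ps_sample" | awk 'NR>1 && $2 ~ /^D/ {print $1}'   # prints 123
```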

Posted in 安装k8s/kubernetes.



k8s_安装11_部署openresty

11. Deploying OpenResty

Prepare the image

docker search openresty
#pick whichever image to use

#official OpenResty image
docker pull openresty/openresty 
docker images |grep openresty
openresty/openresty                                                            latest                eaeb31afac25   4 weeks ago     93.2MB
docker inspect openresty/openresty:latest
docker tag docker.io/openresty/openresty:latest repo.k8s.local/docker.io/openresty/openresty:latest
docker tag docker.io/openresty/openresty:latest repo.k8s.local/docker.io/openresty/openresty:1.19.9.1
docker push repo.k8s.local/docker.io/openresty/openresty:latest
docker push repo.k8s.local/docker.io/openresty/openresty:1.19.9.1

Inspect the image

docker inspect openresty/openresty

                "resty_deb_version": "=1.19.9.1-1~bullseye1",
docker run -it openresty/openresty sh

apt-get update
apt-get install tree procps inetutils-ping net-tools

nginx -v
nginx version: openresty/1.19.9.1

ls /usr/local/openresty/nginx/conf

Copy the config file out of the image

docker run --name openresty -d openresty/openresty
docker cp openresty:/usr/local/openresty/nginx/conf/nginx.conf ./
docker stop openresty

Create a ConfigMap from the file

kubectl get cm -ntest
kubectl create -ntest configmap test-openresty-nginx-conf --from-file=nginx.conf=./nginx.conf
configmap/test-openresty-nginx-conf created

Modify the ConfigMap directly with the edit command

kubectl edit -ntest cm test-openresty-nginx-conf

Replace it with the replace command

Since we usually create ConfigMaps from files rather than writing YAML manifests, updates are also made by editing the source file. But kubectl replace has no --from-file option, so it cannot replace directly from a source file; the command below works around this.
The key is the --dry-run=client flag, which renders the object without sending it to the apiserver; combined with -oyaml it prints a fully configured manifest to stdout. Piping that into kubectl replace -f - (where - means read from stdin) performs the replacement.

kubectl create -ntest cm  test-openresty-nginx-conf --from-file=nginx.conf --dry-run=client -oyaml | kubectl -ntest replace -f-
configmap/test-openresty-nginx-conf replaced

Delete

kubectl delete -ntest cm test-openresty-nginx-conf

Making the ConfigMap immutable

Add immutable: true to the ConfigMap

kubectl edit -ntest cm test-openresty-nginx-conf
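For reference, an immutable ConfigMap manifest looks like this (a sketch based on the ConfigMap above; once applied, the data can no longer be edited and the object must be deleted and recreated to change it):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-openresty-nginx-conf
  namespace: test
immutable: true        # the apiserver rejects further updates to data/binaryData
data:
  nginx.conf: |
    # ...file contents...
```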

Prepare the YAML file

#test-openresty-deploy.yaml
#The bitnami image uses host directories and runs as user 1001 by default.
#To run as user 500 (www), add under spec:
      securityContext:
        runAsUser: 500
#allow binding to ports below 1024
      containers:
        capabilities:
          add:
          - NET_BIND_SERVICE
#mount the host's timezone file and nginx directories
cat > test-openresty-deploy.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openresty
  namespace: test
spec:
  selector :
    matchLabels:
      app: openresty
  replicas: 1
  template:
    metadata:
      labels:
        app: openresty
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      securityContext:
        runAsUser: 500
      containers:
      - name: openresty
        resources:
          limits:
            cpu: "20m"
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 10Mi
        env:
        - name: TZ
          value: Asia/Shanghai
        image: repo.k8s.local/docker.io/openresty/openresty:1.19.9.1
        #command: ["/bin/bash", "-ce", "tail -f /dev/null"]
        #command: ["/opt/bitnami/scripts/openresty/run.sh"]
        ports:
        - containerPort: 80
        volumeMounts:
        #- name: vol-opresty-conf
        #  mountPath: /opt/bitnami/openresty/nginx/conf/
        - name: nginx-conf   # volume name
          mountPath: /usr/local/openresty/nginx/conf/nginx.conf      # mount path
          subPath: etc/nginx/nginx.conf         # must match volumes[0].items.path
        - name: vol-opresty-html
          mountPath: "/usr/share/nginx/html/"
        - name: vol-opresty-log
          mountPath: "/var/log/nginx/"
      volumes:
      - name: nginx-conf  # volume name
        configMap:        # volume type: configMap
          name: test-openresty-nginx-conf    # ConfigMap name
          items:       # which ConfigMap keys to mount
          - key: nginx.conf    # file name inside the ConfigMap
            path: etc/nginx/nginx.conf           # subPath path
      #- name: vol-opresty-conf
      #  hostPath:
      #    path: /nginx/openresty/conf/
      #    type: DirectoryOrCreate
      - name: vol-opresty-html
        hostPath:
          path: /nginx/html/
          type: DirectoryOrCreate
      - name: vol-opresty-log
        hostPath:
          path: /nginx/logs/
          type: DirectoryOrCreate
      affinity: # option 4: prefer spreading pods across different nodes
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - openresty
                topologyKey: kubernetes.io/hostname
EOF

#openresty NodePort service
test-openresty-svc-nodeport.yaml
cat > test-openresty-svc-nodeport.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: svc-openresty
  namespace: test
spec:
  ports:
  - {name: http, nodePort: 32080, port: 31080, protocol: TCP, targetPort: 8089}
  selector: {app: openresty}
  type: NodePort
EOF
#ingress binding
test-openresty-ingress.yaml
cat > test-openresty-ingress.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc-openresty
  annotations:
    kubernetes.io/ingress.class: "nginx"
  namespace: test
spec:
  rules:
  - http:
      paths:
      - path: /showvar
        pathType: Prefix
        backend:
          service:
            name: svc-openresty
            port:
              number: 31080
EOF

Error:
server-snippet annotation cannot be used. Snippet directives are disabled by the Ingress administrator

Enable it in the controller's kind: ConfigMap
allow-snippet-annotations: "true"

cat > ingress-nginx-ConfigMap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: "true"
  worker-processes: "auto" #worker_processes
  server-name-hash-bucket-size: "128" #server_names_hash_bucket_size
  variables-hash-bucket-size: "256" #variables_hash_bucket_size
  variables-hash-max-size: "2048" #variables_hash_max_size
  client-header-buffer-size: "32k" #client_header_buffer_size
  proxy-body-size: "8m" #client_max_body_size
  large-client-header-buffers: "4 512k" #large_client_header_buffers
  client-body-buffer-size: "512k" #client_body_buffer_size
  proxy-connect-timeout : "5" #proxy_connect_timeout
  proxy-read-timeout: "60" #proxy_read_timeout
  proxy-send-timeout: "5" #proxy_send_timeout
  proxy-buffer-size: "32k" #proxy_buffer_size
  proxy-buffers-number: "8 32k" #proxy_buffers
  keep-alive: "60" #keepalive_timeout
  enable-real-ip: "true" 
  use-forwarded-headers: "true"
  forwarded-for-header: "ns_clientip" #real_ip_header
  compute-full-forwarded-for: "true"
  enable-underscores-in-headers: "true" #underscores_in_headers on
  proxy-real-ip-cidr: 192.168.0.0/16,10.244.0.0/16  #set_real_ip_from
  access-log-path: "/var/log/nginx/access_$hostname.log"
  error-log-path: "/var/log/nginx/error.log"
  #log-format-escape-json: "true"
  log-format-upstream: '{"timestamp": "$time_iso8601", "requestID": "$req_id", "proxyUpstreamName":
    "$proxy_upstream_name","hostname": "$hostname","host": "$host","body_bytes_sent": "$body_bytes_sent","proxyAlternativeUpstreamName": "$proxy_alternative_upstream_name","upstreamStatus":
    "$upstream_status", "geoip_country_code": "$geoip_country_code","upstreamAddr": "$upstream_addr","request_time":
    "$request_time","httpRequest":{ "remoteIp": "$remote_addr","realIp": "$realip_remote_addr","requestMethod": "$request_method", "requestUrl":
    "$request_uri", "status": $status,"requestSize": "$request_length", "responseSize":
    "$upstream_response_length", "userAgent": "$http_user_agent",
    "referer": "$http_referer","x-forward-for":"$proxy_add_x_forwarded_for","latency": "$upstream_response_time", "protocol":"$server_protocol"}}'
EOF

kubectl delete -f ingress-nginx-ConfigMap.yaml
kubectl apply -f ingress-nginx-ConfigMap.yaml
kubectl edit configmap -n ingress-nginx ingress-nginx-controller
#bind a server-snippet via the ingress
#realip takes effect in the server block, for the whole domain
#the whitelist-source-range IP allowlist applies at location = /showvar and is checked against remote_addr; use it only when you want a domain-wide allowlist. Equivalent to allow 223.2.2.0/24;deny all;

test-openresty-ingress-snippet.yaml
cat > test-openresty-ingress-snippet.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc-openresty
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/server-snippet: |
      underscores_in_headers on;
      set_real_ip_from 10.244.0.0/16;
      set_real_ip_from 192.168.0.0/16;
      real_ip_header ns_clientip;
      #real_ip_recursive on;
    nginx.ingress.kubernetes.io/whitelist-source-range: 127.0.0.1/32,192.168.0.0/16,10.244.0.0/16,223.2.2.0/24
  namespace: test
spec:
  rules:
  - http:
      paths:
      - path: /showvar
        pathType: Prefix
        backend:
          service:
            name: svc-openresty
            port:
              number: 31080
EOF

kubectl apply -f test-openresty-ingress-snippet.yaml

Create the mount directories on the worker node

mkdir -p /nginx/openresty/conf
mkdir -p /nginx/{html,logs}
chown -R www:website /nginx/

Start the pod once first

kubectl apply -f test-openresty-deploy.yaml

#list the pods and note the pod name
kubectl get pods -o wide -n test
NAME                           READY   STATUS    RESTARTS        AGE     IP             NODE               NOMINATED NODE   READINESS GATES
nginx-deploy-7c9674d99-v92pd   1/1     Running   0               4d17h   10.244.1.97    node01.k8s.local   <none>           <none>
openresty-fdc45bdbc-jh67k      1/1     Running   0               8m41s   10.244.2.59    node02.k8s.local   <none>           <none>

Copy the config files from inside the pod to the master, then push them to the worker node

kubectl -n test logs -f openresty-76cf797cfc-gccsl

cd /nginx/openresty/conf
#any of the following three approaches works
kubectl cp test/openresty-76cf797cfc-gccsl:/opt/bitnami/openresty/nginx/conf /nginx/openresty/conf

kubectl cp test/openresty-76cf797cfc-gccsl:/opt/bitnami/openresty/nginx/conf ./

kubectl exec "openresty-76cf797cfc-gccsl" -n "test" -- tar cf - "/opt/bitnami/openresty/nginx/conf" | tar xf - 

chown -R www:website .

#copy from the master to node2
scp -r * [email protected]:/nginx/openresty/conf/

#nginx.conf already contains:
include "/opt/bitnami/openresty/nginx/conf/server_blocks/*.conf";

Create the virtual-host file on node2

It corresponds to /opt/bitnami/openresty/nginx/conf/server_blocks/ inside the pod

vi /nginx/openresty/nginx/conf/server_blocks/default.conf 
server {
    listen       8089;
    server_name  localhost;

    access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    location /showvar {
        default_type text/plain;           
        echo time_local: $time_local;
        echo hostname: $hostname;
        echo server_addr: $server_addr;
        echo server_port: $server_port;
        echo host: $host;
        echo scheme: $scheme;
        echo http_host: $http_host;
        echo uri: $uri;
        echo remote_addr: $remote_addr;
        echo remote_port: $remote_port;
        echo remote_user: $remote_user;
        echo realip_remote_addr: $realip_remote_addr;
        echo realip_remote_port: $realip_remote_port;
        echo http_ns_clientip: $http_ns_clientip;
        echo http_user_agent: $http_user_agent;
        echo http_x_forwarded_for: $http_x_forwarded_for;
        echo proxy_add_x_forwarded_for: $proxy_add_x_forwarded_for;
        echo X-Request-ID: $http_x_request_id;
        echo X-Real-IP: $http_x_real_ip;
        echo X-Forwarded-Host: $http_x_forwarded_host;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

cat > /nginx/openresty/nginx/conf/server_blocks/ngxrealip.conf <<EOF
    underscores_in_headers on;
    #ignore_invalid_headers off;
    set_real_ip_from   10.244.0.0/16;
    real_ip_header    ns_clientip;

EOF

After pointing the openresty config directory in test-openresty-deploy.yaml at the host path, restart the pod

cat > test-openresty-deploy.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openresty
  namespace: test
spec:
  selector :
    matchLabels:
      app: openresty
  replicas: 1
  template:
    metadata:
      labels:
        app: openresty
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      securityContext:
        runAsNonRoot: true
        runAsUser: 500
        #runAsGroup: 500
      nodeName: 
        node02.k8s.local
      containers:
      - name: openresty
        image: repo.k8s.local/docker.io/bitnami/openresty:latest
        #command: ["/bin/bash", "-ce", "tail -f /dev/null"]
        command: ["/opt/bitnami/scripts/openresty/run.sh"]
        ports:
        - containerPort: 80
        volumeMounts:
        - name: timezone
          mountPath: /etc/localtime  
        - name: vol-opresty-conf
          mountPath: /opt/bitnami/openresty/nginx/conf/
          readOnly: true
        - name: vol-opresty-html
          mountPath: "/usr/share/nginx/html/"
        - name: vol-opresty-log
          mountPath: "/var/log/nginx/"
      volumes:
      - name: timezone       
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai  
      - name: vol-opresty-conf
        hostPath:
          path: /nginx/openresty/nginx/conf/
          type: DirectoryOrCreate
      - name: vol-opresty-html
        hostPath:
          path: /nginx/html/
          type: DirectoryOrCreate
      - name: vol-opresty-log
        hostPath:
          path: /nginx/logs/
          type: DirectoryOrCreate
      #nodeSelector:
        #ingresstype: ingress-nginx
EOF
#create/delete test-openresty-deploy.yaml
kubectl apply -f test-openresty-deploy.yaml
kubectl delete -f test-openresty-deploy.yaml

#create/delete the openresty NodePort service
kubectl apply -f test-openresty-svc-nodeport.yaml
kubectl delete -f test-openresty-svc-nodeport.yaml

#create/delete the openresty ingress binding
kubectl apply -f test-openresty-ingress.yaml
kubectl delete -f test-openresty-ingress.yaml

#list pods
kubectl get pods -o wide -n test
NAME                            READY   STATUS    RESTARTS     AGE   IP            NODE               NOMINATED NODE   READINESS GATES
openresty-b6d7798f8-h47xj       1/1     Running   0            64m   10.244.2.24   node02.k8s.local   <none>           <none>

#list services
kubectl get service -n test
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
svc-openresty    NodePort    10.96.30.145    <none>        31080:32080/TCP   3d6h

#list the ingress binding
kubectl get  Ingress -n test
NAME                     CLASS    HOSTS   ADDRESS     PORTS   AGE
ingress-svc-openresty    <none>   *       localhost   80      2m22s
ingress-svc-test-nginx   <none>   *       localhost   80      3d22h

#list the ingress-nginx services
kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.96.111.201   <none>        80:30080/TCP,443:30443/TCP   4d2h
ingress-nginx-controller-admission   ClusterIP   10.96.144.105   <none>        443/TCP                      4d2h

#view details
kubectl -n test describe pod openresty-b6d7798f8-h47xj
kubectl -n test logs -f openresty-b6d7798f8-h47xj

kubectl -n test describe pod openresty-59449454db-6knwv 
kubectl -n test logs -f  openresty-59449454db-mj74d
#enter the container
kubectl exec -it pod/openresty-b6d7798f8-h47xj -n test -- /bin/sh
kubectl exec -it pod/openresty-fdc45bdbc-jh67k -n test -- /bin/sh

#test-openresty-deploy.yaml has mapped the pod's openresty config, web root, and logs onto host node2.
#You can edit those files on the host; create an index page that prints the node name
echo `hostname` >  /nginx/html/index.html

#test and reload the config without entering the container
kubectl exec -it pod/openresty-b6d7798f8-h47xj -n test -- /bin/sh -c '/opt/bitnami/openresty/bin/openresty -t'
kubectl exec -it pod/openresty-b6d7798f8-h47xj -n test -- /bin/sh -c '/opt/bitnami/openresty/bin/openresty -s reload'
kubectl exec -it pod/openresty-6b59dd984d-bj2xt -n test -- /bin/sh -c '/opt/bitnami/openresty/bin/openresty -t'
kubectl exec -it pod/openresty-6b59dd984d-bj2xt -n test -- /bin/sh -c '/opt/bitnami/openresty/bin/openresty -s reload'
kubectl exec -it pod/openresty-6b59dd984d-bj2xt -n test -- /bin/sh -c 'ls /opt/bitnami/openresty/nginx/conf/server_blocks/'

Inspect the traffic flow

List each node's NICs and IPs

for i in `ifconfig | grep -o ^[a-z0-9\.@]*`; do echo -n "$i : ";ifconfig $i|sed -n 2p|awk '{ print $2 }'; done

master01
cni0 : 10.244.0.1
enp0s3 : 192.168.244.4
flannel.1 : 10.244.0.0

node01
cni0 : 10.244.1.1
enp0s3 : 192.168.244.5
flannel.1 : 10.244.1.0

node02
cni0 : 10.244.2.1
enp0s3 : 192.168.244.7
flannel.1 : 10.244.2.0
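On nodes without net-tools installed, ip(8) gives the same view; a sketch, with the awk reshaping demonstrated on a captured sample line:

```shell
# List IPv4 addresses per interface using ip(8) instead of ifconfig; awk
# reshapes "index: iface inet addr/len ..." lines into "iface : addr/len".
list_ips() { ip -4 -o addr show | awk '{print $2" : "$4}'; }

# The reshaping alone, shown on one captured line:
echo '2: enp0s3 inet 192.168.244.5/24 brd 192.168.244.255 scope global enp0s3' \
  | awk '{print $2" : "$4}'
# → enp0s3 : 192.168.244.5/24
```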

The ingress currently runs with HostNetwork plus NodePort 30080 on node01 and node02, but not on the master.
The openresty Service exposes NodePort 32080, and its pod runs only on node02.

pod IP + targetPort is reachable from any node inside the cluster.
Accessing the openresty ClusterIP + port from inside the cluster: the request first hits the VIP, then kube-proxy distributes it to the pods; use ipvsadm to inspect these load-balancing rules.
Accessing the openresty Service via any nodeIP + nodePort, inside or outside the cluster: nodes not running the pod forward once. A NodePort request goes from the node IP straight to the Pod without passing through the ClusterIP, but the forwarding is still implemented by kube-proxy.
Accessing the ingress NodePort, inside or outside the cluster: the NodePort resolves to a CLUSTER-IP, and inside the cluster the load-balancing logic is the same as for a ClusterIP Service.
Accessing the ingress via hostnetwork, inside or outside the cluster: if the backend pod is not on that node, ingress-nginx proxies across nodes once, otherwise it proxies locally (0-1 forwards). Nodes without an ingress pod cannot be reached.
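When comparing the paths below, a tiny helper can pull the hop-revealing field out of a /showvar response (the helper name is mine, not part of the article's setup):

```shell
# Extract the address the pod saw (remote_addr) from a /showvar response;
# it tells you which hop (cni0 gateway, flannel.1, node IP) delivered the
# request.
last_hop() { awk -F': ' '$1 == "remote_addr" { print $2 }'; }

# Sample response lines captured from: curl http://10.244.2.24:8089/showvar
printf 'uri: /showvar\nremote_addr: 10.244.1.0\nremote_port: 39108\n' | last_hop
# → 10.244.1.0
```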

openresty pod IP + pod targetPort

The openresty pod IP + port is reachable from any node inside the cluster, but not from outside

From node02, access the openresty pod IP + targetPort on node02

curl http://10.244.2.24:8089/showvar
time_local: 30/Oct/2023:16:51:23 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 10.244.2.24
scheme: http
http_host: 10.244.2.24:8089
uri: /showvar
remote_addr: 10.244.2.1
remote_port: 42802
remote_user: 
http_x_forwarded_for: 

Traffic stays on node02
10.244.2.1->10.244.2.24

From node01, access the openresty pod IP + targetPort on node02

curl http://10.244.2.24:8089/showvar
time_local: 30/Oct/2023:16:51:25 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 10.244.2.24
scheme: http
http_host: 10.244.2.24:8089
uri: /showvar
remote_addr: 10.244.1.0
remote_port: 39108
remote_user: 
http_x_forwarded_for:

Traffic goes from node01 to node02
10.244.1.0->10.244.2.24

openresty Service ClusterIP + port

From the master, access the openresty Service ClusterIP + port

curl http://10.96.30.145:31080/showvar
time_local: 31/Oct/2023:10:15:49 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 10.96.30.145
scheme: http
http_host: 10.96.30.145:31080
uri: /showvar
remote_addr: 10.244.0.0
remote_port: 1266
remote_user: 
http_x_forwarded_for: 

From node2, access the openresty Service ClusterIP + port

curl http://10.96.30.145:31080/showvar
time_local: 31/Oct/2023:10:18:01 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 10.96.30.145
scheme: http
http_host: 10.96.30.145:31080
uri: /showvar
remote_addr: 10.244.2.1
remote_port: 55374
remote_user: 
http_x_forwarded_for: 
http_user_agent: curl/7.29.0

From a pod on node2, access via the Service DNS name

kubectl exec -it pod/test-pod-86df6cd59b-x8ndr -n test -- curl http://svc-openresty.test.svc.cluster.local:31080/showvar
time_local: 01/Nov/2023:11:11:46 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: svc-openresty.test.svc.cluster.local
scheme: http
http_host: svc-openresty.test.svc.cluster.local:31080
uri: /showvar
remote_addr: 10.244.2.41
remote_port: 43594
remote_user: 
http_x_forwarded_for: 
http_user_agent: curl/7.81.0

The host is auto-completed with the cluster search domain

kubectl exec -it pod/test-pod-86df6cd59b-x8ndr -n test -- curl http://svc-openresty:31080/showvar     
time_local: 01/Nov/2023:11:15:26 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: svc-openresty
scheme: http
http_host: svc-openresty:31080
uri: /showvar
remote_addr: 10.244.2.41
remote_port: 57768
remote_user: 
http_x_forwarded_for: 
http_user_agent: curl/7.81.0

Access the openresty Service via any nodeIP + nodePort, inside or outside the cluster

The node running the pod serves directly; nodes without the pod forward once

curl http://192.168.244.4:32080
node02.k8s.local

curl http://192.168.244.4:32080/showvar
time_local: 30/Oct/2023:16:27:09 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 127.0.0.1
scheme: http
http_host: 127.0.0.1:32080
uri: /showvar
remote_addr: 10.244.0.0
remote_port: 22338
remote_user: 
http_x_forwarded_for: 

Traffic goes from the master's node IP to node02's cluster IP, one cross-node forward
192.168.244.4->10.244.0.0->10.244.2.24

Hitting 32080 on node2 directly
curl http://192.168.244.7:32080/showvar
Traffic goes from node2's node IP to the cluster IP
192.168.244.7->10.244.2.1->10.244.2.24

Access the ingress service via any nodeIP + nodePort, inside or outside the cluster

Hitting the master's nodeIP + nodePort (no ingress deployed there) forwards once or twice, because of load balancing

curl http://192.168.244.4:30080/showvar/
time_local: 30/Oct/2023:16:36:16 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 127.0.0.1
scheme: http
http_host: 127.0.0.1:30080
uri: /showvar/
remote_addr: 10.244.1.0
remote_port: 58680
remote_user: 
http_x_forwarded_for: 192.168.244.4

From the master's node IP through flannel to the ingress on node01 or node02, then on to node02; http_x_forwarded_for gains 192.168.244.4
192.168.244.4->10.244.0.0->10.244.1.0->10.244.2.24
http_x_forwarded_for is always 192.168.244.4
remote_addr will be 10.244.1.0 or 10.244.2.0

From the master, hitting node02's nodeIP + nodePort (ingress deployed there) forwards once or twice

#two possible paths

curl http://192.168.244.7:30080/showvar/
time_local: 30/Oct/2023:17:07:10 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 127.0.0.1
scheme: http
http_host: 127.0.0.1:30080
uri: /showvar/
remote_addr: 10.244.1.0
remote_port: 44200
remote_user: 
http_x_forwarded_for: 192.168.244.7

From the master to node02's node IP, through node01, and back to node02
192.168.244.7->10.244.2.0->10.244.1.0->10.244.2.24

curl http://192.168.244.7:30080/showvar/
time_local: 30/Oct/2023:17:18:06 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 192.168.244.7
scheme: http
http_host: 192.168.244.7:30080
uri: /showvar/
remote_addr: 10.244.2.1
remote_port: 45772
remote_user: 
http_x_forwarded_for: 192.168.244.4
From node02's node IP, kube-proxy schedules via the master to the ingress on node02, then back to node02
192.168.244.4->192.168.244.7->10.244.2.1->10.244.2.24

ingress hostnetwork

From the master, hitting node02's hostnetwork: direct access, no cross-node forward

curl http://192.168.244.7:80/showvar/
time_local: 30/Oct/2023:17:26:37 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 192.168.244.7
scheme: http
http_host: 192.168.244.7
uri: /showvar/
remote_addr: 10.244.2.1
remote_port: 45630
remote_user: 
http_x_forwarded_for: 192.168.244.4

192.168.244.7->10.244.2.1->10.244.2.24

From the master, hitting node01's hostnetwork: the ingress forwards once

curl http://192.168.244.5:80/showvar/
time_local: 30/Oct/2023:17:28:10 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 192.168.244.5
scheme: http
http_host: 192.168.244.5
uri: /showvar/
remote_addr: 10.244.1.0
remote_port: 48512
remote_user: 
http_x_forwarded_for: 192.168.244.4

192.168.244.5->10.244.1.0->10.244.2.24

From the master, hitting the master itself fails because no ingress is deployed there

curl http://192.168.244.4:80/showvar/
curl: (7) Failed connect to 192.168.244.4:80; Connection refused

Error:

2023/10/30 14:43:23 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /opt/bitnami/openresty/nginx/conf/nginx.conf:2
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /opt/bitnami/openresty/nginx/conf/nginx.conf:2
2023/10/30 14:43:23 [emerg] 1#1: mkdir() "/opt/bitnami/openresty/nginx/tmp/client_body" failed (13: Permission denied)

nginx: [emerg] mkdir() "/opt/bitnami/openresty/nginx/tmp/client_body" failed (13: Permission denied)
The permissions are wrong; remove runAsGroup from securityContext

      securityContext:
        runAsUser: 1000
        #runAsGroup: 1000

Deployment with multiple pods

Store openresty's config and web files on NFS.
PVs and StorageClasses are cluster-scoped; PVCs are namespaced, so the PVC must live in the same namespace as the pod that uses it.

#prepare the PVs
cat > test-openresty-spv.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-openresty-cfg-spv
  namespace: test
  labels:
    pv: test-openresty-cfg-spv
spec:
  capacity:
    storage: 300Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /nfs/k8s/cfg/openresty
    server: 192.168.244.6
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-openresty-web-spv
  namespace: test
  labels:
    pv: test-openresty-web-spv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /nfs/k8s/web/openresty
    server: 192.168.244.6
EOF
#prepare the PVCs
cat > test-openresty-pvc.yaml  << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-openresty-cfg-pvc
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 300Mi
  selector:
    matchLabels:
      pv: test-openresty-cfg-spv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-openresty-web-pvc
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      pv: test-openresty-web-spv
EOF
#prepare the openresty Deployment
cat > test-openresty-deploy.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openresty
  namespace: test
spec:
  selector :
    matchLabels:
      app: openresty
  replicas: 2
  template:
    metadata:
      labels:
        app: openresty
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      securityContext:
        runAsNonRoot: true
        runAsUser: 500
        #runAsGroup: 500
      #nodeName: 
      #  node02.k8s.local
      containers:
      - name: openresty
        image: repo.k8s.local/docker.io/bitnami/openresty:latest
        #command: ["/bin/bash", "-ce", "tail -f /dev/null"]
        command: ["/opt/bitnami/scripts/openresty/run.sh"]
        ports:
        - containerPort: 80
        volumeMounts:
        - name: timezone
          mountPath: /etc/localtime  
        - name: vol-opresty-conf
          mountPath: /opt/bitnami/openresty/nginx/conf/
          readOnly: true
        - name: vol-opresty-html
          mountPath: "/usr/share/nginx/html/"
        - name: vol-opresty-log
          mountPath: "/var/log/nginx/"
      volumes:
      - name: timezone       
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai  
      - name: vol-opresty-conf
        #hostPath:
        #  path: /nginx/openresty/nginx/conf/
        #  type: DirectoryOrCreate
        persistentVolumeClaim:
          claimName: test-openresty-cfg-pvc
      - name: vol-opresty-html
        #hostPath:
        #  path: /nginx/html/
        #  type: DirectoryOrCreate
        persistentVolumeClaim:
          claimName: test-openresty-web-pvc
      - name: vol-opresty-log
        hostPath:
          path: /nginx/logs/
          type: DirectoryOrCreate
      nodeSelector:
        ingresstype: ingress-nginx
EOF
kubectl apply -f test-openresty-spv.yaml
kubectl delete -f test-openresty-spv.yaml
kubectl apply -f test-openresty-pvc.yaml
kubectl delete -f test-openresty-pvc.yaml

#create/delete test-openresty-deploy.yaml
kubectl apply -f test-openresty-deploy.yaml
kubectl delete -f test-openresty-deploy.yaml

#create/delete the openresty NodePort service
kubectl apply -f test-openresty-svc-nodeport.yaml
kubectl delete -f test-openresty-svc-nodeport.yaml

#create/delete the openresty ingress binding
kubectl apply -f test-openresty-ingress.yaml
kubectl delete -f test-openresty-ingress.yaml

#list pods
kubectl get pods -o wide -n test
NAME                            READY   STATUS    RESTARTS     AGE   IP            NODE               NOMINATED NODE   READINESS GATES
openresty-6b5c6c6966-h6z6d     1/1     Running   7 (47m ago)     53m     10.244.1.107   node01.k8s.local   <none>           <none>
openresty-6b5c6c6966-l667p     1/1     Running   6 (50m ago)     53m     10.244.2.69    node02.k8s.local   <none>           <none>

kubectl get pv,pvc -n test

kubectl get sc

kubectl describe pvc -n test
storageclass.storage.k8s.io "nfs" not found
Remove storageClassName: nfs from the PV and PVC

#view details
kubectl -n test describe pod openresty-6b5c6c6966-l667p
kubectl -n test logs -f openresty-6b5c6c6966-h6z6d

kubectl exec -it pod/openresty-b4475b994-m72qg -n test -- /bin/sh -c '/opt/bitnami/openresty/bin/openresty -t'
kubectl exec -it pod/openresty-b4475b994-m72qg -n test -- /bin/sh -c '/opt/bitnami/openresty/bin/openresty -s reload'
kubectl exec -it pod/openresty-6b5c6c6966-h6z6d -n test -- /bin/sh -c 'ls /opt/bitnami/openresty/nginx/conf/server_blocks/'

Modify the config
sed -E -n 's/remote_user:.*/remote_user:test2;/p' /nfs/k8s/cfg/openresty/server_blocks/default.conf 
sed -E -i 's/remote_user:.*/remote_user:test5;/' /nfs/k8s/cfg/openresty/server_blocks/default.conf 
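You can dry-run the substitution on a sample line before touching the shared config file (the sample line comes from the default.conf above):

```shell
# Preview the regex on one line instead of editing the real file in place.
echo 'echo remote_user: $remote_user;' \
  | sed -E 's/remote_user:.*/remote_user:test5;/'
# → echo remote_user:test5;
```

Once the output looks right, run the same expression with sed -E -i against the file on the NFS share.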

1. Gracefully restart the pods via rollout
kubectl rollout restart deployment/openresty -n test

2.kubectl set env
kubectl set env deployment openresty -n test DEPLOY_DATE="$(date)"

3. Scale the replica count
kubectl scale deployment/openresty -n test --replicas=3

4. Delete a single pod
kubectl delete pod openresty-7ccbdd4f6c-9l566  -n test

kubectl annotate pods openresty-7ccbdd4f6c-wrbl9 restartversion="2" -n test --overwrite

-------------
https://gitee.com/mirrors_openresty/docker-openresty?skip_mobile=true

kubectl create configmap test2-openresty-reload --from-literal=reloadnginx.log=1 -n test2
kubectl get configmaps test2-openresty-reload -o yaml -n test2
kubectl delete configmap test2-openresty-reload -n test2

#hide version information
#response headers
sed -i 's/"Server: nginx" CRLF;/"Server:" CRLF;/g' /opt/nginx-1.20.2/src/http/ngx_http_header_filter_module.c
sed -i 's/"Server: " NGINX_VER CRLF;/"Server:" CRLF;/g' /opt/nginx-1.20.2/src/http/ngx_http_header_filter_module.c
#error pages
sed -i 's/>" NGINX_VER "</></g' /opt/nginx-1.20.2/src/http/ngx_http_special_response.c
sed -i 's/>" NGINX_VER_BUILD "</></g' /opt/nginx-1.20.2/src/http/ngx_http_special_response.c
sed -i 's/>nginx</></g' /opt/nginx-1.20.2/src/http/ngx_http_special_response.c
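These patterns patch the nginx source before compiling; you can verify a pattern against a copy of the source line first (a sketch; the sample line mirrors the string table in ngx_http_header_filter_module.c):

```shell
# Check the substitution on a sample of the source line before patching
# the real file in the source tree.
echo 'static u_char ngx_http_server_full_string[] = "Server: " NGINX_VER CRLF;' \
  | sed 's/"Server: " NGINX_VER CRLF;/"Server:" CRLF;/g'
# → static u_char ngx_http_server_full_string[] = "Server:" CRLF;
```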

Posted in 安装k8s/kubernetes.



k8s_安装10_日志_elk

10. Logging with ELK

What to collect

In day-to-day operations, the logs to collect generally fall into these categories:

Server system logs:

/var/log/messages
/var/log/kube-xxx.log

Kubernetes component logs:

kube-apiserver logs
kube-controller-manager logs
kube-scheduler logs
kubelet logs
kube-proxy logs

Application logs

Cloud-native apps: console (stdout) logs
Non-cloud-native apps: log files inside the container
Gateway logs (e.g. ingress-nginx)

Inter-service call-chain (tracing) logs

Log collection tools

Log collection stacks generally fall into ELK, EFK, and Grafana+Loki:
ELK consists of Elasticsearch, Logstash, and Kibana
EFK consists of Elasticsearch, Fluentd, and Kibana
Filebeat+Kafka+Logstash+ES
Grafana+Loki: Loki stores and queries the logs, Promtail collects them and ships them to Loki, and Grafana displays and queries them

An ELK pipeline can be assembled in several ways (components combine freely; configure per your workload). Common layouts:

Filebeat/Logstash/Fluentd (collect, process) -> Elasticsearch (store) -> Kibana (display)

Filebeat/Logstash/Fluentd (collect) -> Logstash (aggregate, process) -> Elasticsearch (store) -> Kibana (display)

Filebeat/Logstash/Fluentd (collect) -> Kafka/Redis (spike buffering) -> Logstash (aggregate, process) -> Elasticsearch (store) -> Kibana (display)

Logstash

Logstash is an open-source tool for collecting, processing, and shipping data. It gathers data from many sources (log files, message queues, etc.), filters, parses, and transforms it, then sends it to a target store such as Elasticsearch.
Pros: a rich plugin ecosystem
Cons: performance and resource consumption (the default heap size is 1GB)

Fluentd/FluentBit

Language: Ruby + C
GitHub: https://github.com/fluent/fluentd-kubernetes-daemonset
Docs: https://docs.fluentd.org/
Because Logstash is heavyweight and somewhat complex to configure, the EFK stack emerged. Compared with Logstash in ELK, Fluentd is more of an all-in-one tool: it can ship the contents of certain log files straight to Elasticsearch, with Kibana for display. However, Fluentd only collects console logs (what the logs command shows), not log files inside containers, which often falls short in production. Programs not written in a cloud-native style typically emit many log files, and those in-container files cannot be collected unless every Pod carries a Sidecar that turns the files into console output via tail -f, which is cumbersome.
Also, running the Elasticsearch cluster that stores the logs inside the Kubernetes cluster is not recommended, as it wastes a lot of cluster resources; in most cases Fluentd ships logs to an external Elasticsearch cluster.
Pros: Fluentd has a small footprint and simple syntax
Cons: no buffering before parsing, which can cause backpressure in the log pipeline; limited support for transforming data, unlike Logstash's mutate filter or rsyslog's variables and templates.
Fluentd only collects console logs, depends on Elasticsearch, and its maintenance effort and resource usage are on the high side.
Like syslog-ng, its buffering exists only on the output side; a single-threaded core and Ruby-GIL-bound plugins mean per-node performance is limited under heavy load.
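The tail -f sidecar workaround mentioned above can be sketched like this (image, volume names, and log path are my assumptions, not part of the article's setup); the sidecar turns an in-container log file into stdout that node-level collectors can see:

```yaml
# Pod spec fragment: app writes to a file, sidecar streams it to stdout.
spec:
  containers:
  - name: app
    image: my-app:latest               # assumed application image
    volumeMounts:
    - {name: applog, mountPath: /var/log/app}
  - name: log-tailer
    image: busybox:1.36
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app/app.log']
    volumeMounts:
    - {name: applog, mountPath: /var/log/app}
  volumes:
  - {name: applog, emptyDir: {}}
```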

Fluent-bit

Language: C
A slimmed-down fluentd
Docs: https://docs.fluentbit.io/manual/about/fluentd-and-fluent-bit

Filebeat

语言:Golang
GitHub 地址:https://github.com/elastic/beats/tree/master/filebeat
在线文档:https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html
优势:只是一个二进制文件,没有任何依赖,占用系统的CPU和内存小;支持发送至 Logstash、Elasticsearch、Kafka 和 Redis
缺点:解析和加工(enrich)功能有限,可以在后端再加一层Logstash弥补。
在早期的ELK架构中,日志收集均以Logstash为主,Logstash负责收集和解析日志,它对内存、CPU、IO资源的消耗比较高,但是Filebeat所占系统的CPU和内存几乎可以忽略不计。

由于Filebeat本身是比较轻量级的日志采集工具,因此Filebeat经常被用于以Sidecar的形式配置在Pod中,用来采集容器内程序输出的自定义日志文件。当然,Filebeat同样可以采用DaemonSet的形式部署在Kubernetes集群中,用于采集系统日志和程序控制台输出的日志。至于Filebeat为什么采用DaemonSet的形式部署而不是采用Deployment和StatefulSet部署,原因有以下几点:

收集节点级别的日志:Filebeat需要能够访问并收集每个节点上的日志文件,包括系统级别的日志和容器日志。Deployment和STS的主要目标是部署和管理应用程序的Pod,而不是关注节点级别的日志收集。因此,使用DaemonSet更适合收集节点级别的日志
自动扩展:Deployment和STS旨在管理应用程序的副本数,并确保所需的Pod数目在故障恢复和水平扩展时保持一致。但对于Filebeat来说,并不需要根据负载或应用程序的副本数来调整Pod数量。Filebeat只需在每个节点上运行一个实例即可,因此使用DaemonSet可以更好地满足这个需求
高可用性:Deployment和STS提供了副本管理和故障恢复的机制,确保应用程序的高可用性。然而,对于Filebeat而言,它是作为一个日志收集代理来收集日志,不同于应用程序,其故障恢复的机制和需求通常不同。使用DaemonSet可以确保在每个节点上都有一个运行中的Filebeat实例,即使某些节点上的Filebeat Pod不可用,也能保持日志收集的连续性
Fluentd和Logstash可以将采集的日志输出到Elasticsearch集群,Filebeat同样可以将日志直接存储到Elasticsearch中,Filebeat 也会和 Logstash 一样记住上次读取的偏移,但是为了更好地分析日志或者减轻Elasticsearch的压力,一般都是将日志先输出到Kafka,再由Logstash进行简单的处理,最后输出到Elasticsearch中。
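上文提到先把日志输出到 Kafka 再由 Logstash 处理,filebeat.yml 中对应的输出段大致如下(Kafka 地址与 topic 名称均为假设):

```yaml
# filebeat.yml 片段示意:输出到 Kafka 而非直接写 Elasticsearch
output.kafka:
  hosts: ["kafka-0.kafka.svc:9092"]
  topic: "k8s-logs"
  required_acks: 1
```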

LogAgent:

语言:JS
GitHub 地址:https://github.com/sematext/logagent-js
在线文档:https://sematext.com/docs/logagent/
优势:可以获取 /var/log 下的所有信息,解析各种格式(Elasticsearch、Solr、MongoDB、Apache HTTPD 等等),可以掩盖敏感的数据信息。Logagent 有本地缓冲,所以不像 Logstash 那样在数据传输目的地不可用时丢失日志
劣势:没有 Logstash 灵活

logtail:

阿里云日志服务的生产者,目前在阿里集团内部机器上运行,经过 3 年多时间的考验,目前为阿里公有云用户提供日志收集服务
采用 C++ 语言实现,对稳定性、资源控制、管理等下过很大的功夫,性能良好。相比于 logstash、fluentd 的社区支持,logtail 功能较为单一,专注日志收集功能。
优势:
  logtail 占用机器 cpu、内存资源最少,结合阿里云日志服务的 E2E 体验良好
劣势:
  logtail 目前对特定日志类型解析的支持较弱,后续需要把这一块补起来。

rsyslog

绝大多数 Linux 发布版本默认的 syslog 守护进程
优势:是经测试过的最快的传输工具
rsyslog 适合那些非常轻的应用(应用,小 VM,Docker 容器)。如果需要在另一个传输工具(例 如,Logstash)中进行处理,可以直接通过 TCP 转发 JSON ,或者连接 Kafka/Redis 缓冲
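rsyslog 通过模板即可直接以 JSON 经 TCP 转发,下面是一个配置示意(目标主机、端口与模板名均为假设):

```
# /etc/rsyslog.d/forward-json.conf 示意
template(name="json-msg" type="list") {
  constant(value="{\"host\":\"")
  property(name="hostname")
  constant(value="\",\"msg\":\"")
  property(name="msg" format="json")
  constant(value="\"}\n")
}
action(type="omfwd" target="logstash.local" port="5514" protocol="tcp" template="json-msg")
```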

syslog-ng

优势:和 rsyslog 一样,作为一个轻量级的传输工具,它的性能也非常好

Grafana Loki

Loki 及其生态系统是 ELK 堆栈的替代方案,与 ELK 相比,摄取速度更快:索引更少,无需合并
优势:存储占用小:索引较小,数据只写入一次到长期存储
缺点:与 ELK 相比,较长时间范围内的查询和分析速度较慢;log shipper 的选择更少(例如 Promtail 或 Fluentd)

ElasticSearch

一个正常es集群中只有一个主节点(Master),主节点负责管理整个集群。如创建或删除索引,跟踪哪些节点是群集的一部分,并决定哪些分片分配给相关的节点。集群的所有节点都会选择同一个节点作为主节点

脑裂现象:

脑裂问题的出现就是因为从节点在选择主节点上出现分歧导致一个集群出现多个主节点从而使集群分裂,使得集群处于异常状态。主节点的角色既为master又为data。数据访问量较大时,可能会导致Master节点停止响应(假死状态)

避免脑裂:

1.网络原因:discovery.zen.ping.timeout 超时时间配置大一点。默认是 3s
2.节点负载:角色分离策略
3.JVM内存回收:修改 config/jvm.options 文件的 -Xms 和 -Xmx 为服务器的内存一半。

例如:5 个管理节点,其中一个是工作主节点,其余 4 个是备选节点,集群脑裂因子(最少候选主节点数)设置为 3。

节点类型/角色

Elasticsearch 7.9 之前的版本中的节点类型主要有 4 种:数据节点、协调节点、候选主节点、ingest 节点。
7.9 以及之后节点类型升级为节点角色(Node roles)。

ES集群由多节点组成,每个节点通过node.name指定节点的名称
一个节点可以支持多个角色,也可以支持一种角色。

1、master节点

配置文件中node.master属性为true,就有资格被选为master节点
master节点用于控制整个集群的操作,比如创建和删除索引,以及管理非master节点,管理集群元数据信息,集群节点信息,集群索引元数据信息;
node.master: true
node.data: false

2、data数据节点

配置文件中node.data属性为true,就有资格被选为data节点,存储实际数据,提供初步联合查询,初步聚合查询,也可以作为协调节点
主要用于执行数据相关的操作
node.master: false
node.data: true

3、客户端节点

配置文件中node.master和node.data均为false(既不能为master也不能为data)
用于响应客户的请求,把请求转发到其他节点
node.master: false
node.data: false

4、部落节点

当一个节点配置tribe.*的时候,它是一个特殊的客户端,可以连接多个集群,在所有集群上执行索引和操作

其它角色汇总
7.9 以后 角色缩写 英文释义 中文释义
c cold node 冷数据节点
d data node 数据节点
f frozen node 冷冻数据节点
h hot node 热数据节点
i ingest node 数据预处理节点
l machine learning node 机器学习节点
m master-eligible node 候选主节点
r remote cluster client node 远程节点
s content node 内容数据节点
t transform node 转换节点
v voting-only node 仅投票节点
w warm node 温数据节点
coordinating node only 仅协调节点

新版使用node.roles 定义
node.roles: [data,master]

关于节点角色和硬件配置的关系,也是经常被提问的问题,推荐配置参考:
角色 描述 存储 内存 计算 网络
数据节点 存储和检索数据 极高
主节点 管理集群状态
Ingest 节点 转换输入数据
机器学习节点 机器学习 极高 极高
协调节点 请求转发和合并检索结果

集群选举

主从架构模式,一个集群只能有一个工作状态的管理节点,其余管理节点是备选,备选数量原则上不限制。很多大数据产品管理节点仅支持一主一从,如Greenplum、Hadoop、Prestodb;
工作管理节点自动选举,工作管理节点关闭之后自动触发集群重新选举,无需外部三方应用,无需人工干预。很多大数据产品需要人工切换或者借助第三方软件应用,如Greenplum、Hadoop、Prestodb。
discovery.zen.minimum_master_nodes = (master_eligible_nodes / 2) + 1
以1个主节点+4个候选节点为例设为3

conf/elasticsearch.yml:
    discovery.zen.minimum_master_nodes: 3
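minimum_master_nodes 的取值可按上面的公式用整数除法算出,例如(shell 示意):

```shell
# 计算 (master_eligible_nodes / 2) + 1,shell 的 $(( )) 即整数除法
quorum() { echo $(( $1 / 2 + 1 )); }
quorum 5   # 1 主 + 4 候选,共 5 个候选主节点 => 3
quorum 3   # 3 个候选主节点 => 2
```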

协调路由

Elasticsearch集群中有多个节点,其中任一节点都可以查询数据或者写入数据,集群内部节点会有路由机制协调,转发请求到索引分片所在的节点。我们在迁移集群时采用应用代理切换,外部访问从旧集群数据节点切换到新集群数据节点,就是基于此特点。

查询主节点

http://192.168.111.200:9200/_cat/nodes?v
含有 * 的代表当前主节点
http://192.168.111.200:9200/_cat/master

排查
集群数据平衡

Elastic自身设计了集群分片的负载平衡机制,当有新数据节点加入集群或者离开集群,集群会自动平衡分片的负载分布。
索引分片会在数据节点之间平衡漂移,达到平均分布之后停止。频繁的集群节点加入或者下线会严重影响集群的IO,影响集群响应速度,所以要尽量避免此情况发生;如果频繁关闭重启,很容易造成集群问题。

#集群迁移时先关,迁移后再开
#禁止集群分片分配(合法取值为 all/primaries/new_primaries/none,false 不是合法值)
cluster.routing.allocation.enable: none
#禁用集群自动平衡
cluster.routing.rebalance.enable: none

ES 慢查询日志 打开

切换集群访问

Hadoop

Hadoop平台离线数据写入ES,从ES抽取数据。Elastic提供了Hadoop直连访问驱动。如Hive是通过创建映射表与Elasticsearch索引关联的,新的数据节点启动之后,原有所有Hive-Es映射表需要全部重新创建,更换其中的IP+PORT指向;由于Hive有很多与Elastic关联的表,所以短时间内没有那么快替换完成,新旧数据节点需要共存一段时间,不能在数据迁移完成之后马上关闭

#Hive指定连接
es.nodes=多个数据节点IP+PORT

业务系统应用实时查询

Elastic集群对外提供了代理访问

数据写入

kafka队列

安装

ELK数据处理流程
数据由Beats采集后,可以选择直接推送给Elasticsearch检索,或者先发送给Logstash处理,再推送给Elasticsearch,最后都通过Kibana进行数据可视化的展示

镜像文件准备

docker pull docker.io/fluent/fluentd-kubernetes-daemonset:v1.16.2-debian-elasticsearch8-amd64-1.1
wget https://raw.githubusercontent.com/fluent/fluentd-kubernetes-daemonset/master/fluentd-daemonset-elasticsearch.yaml  #blob 页面返回的是 HTML,需用 raw 地址下载

在hub.docker.com 查询elasticsearch 可用版本
elasticsearch7.17.14
elasticsearch8.11.0

docker pull docker.elastic.co/elasticsearch/elasticsearch:8.11.0
docker pull docker.elastic.co/kibana/kibana:8.11.0
docker pull docker.elastic.co/logstash/logstash:8.11.0
docker pull docker.elastic.co/beats/filebeat:8.11.0

docker tag docker.elastic.co/elasticsearch/elasticsearch:8.11.0 repo.k8s.local/docker.elastic.co/elasticsearch/elasticsearch:8.11.0
docker tag docker.elastic.co/kibana/kibana:8.11.0 repo.k8s.local/docker.elastic.co/kibana/kibana:8.11.0
docker tag docker.elastic.co/beats/filebeat:8.11.0 repo.k8s.local/docker.elastic.co/beats/filebeat:8.11.0
docker tag docker.elastic.co/logstash/logstash:8.11.0 repo.k8s.local/docker.elastic.co/logstash/logstash:8.11.0

docker push repo.k8s.local/docker.elastic.co/elasticsearch/elasticsearch:8.11.0
docker push repo.k8s.local/docker.elastic.co/kibana/kibana:8.11.0
docker push repo.k8s.local/docker.elastic.co/beats/filebeat:8.11.0
docker push repo.k8s.local/docker.elastic.co/logstash/logstash:8.11.0

docker rmi docker.elastic.co/elasticsearch/elasticsearch:8.11.0
docker rmi docker.elastic.co/kibana/kibana:8.11.0
docker rmi docker.elastic.co/beats/filebeat:8.11.0
docker rmi docker.elastic.co/logstash/logstash:8.11.0

搭建elasticsearch+kibana

node.name定义节点名,使用metadata.name名称,需要能dns解析
cluster.initial_master_nodes 对应metadata.name名称加编号,编号从0开始
elasticsearch配置文件:

cat > log-es-elasticsearch.yml <<EOF
cluster.name: log-es
node.name: "log-es-elastic-sts-0"
path.data: /usr/share/elasticsearch/data
#path.logs: /var/log/elasticsearch
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
#transport.tcp.port: 9300
#discovery.seed_hosts: ["127.0.0.1", "[::1]"]
cluster.initial_master_nodes: ["log-es-elastic-sts-0"]
xpack.security.enabled: "false"
xpack.security.transport.ssl.enabled: "false"
#增加参数,使head插件可以访问es
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
EOF

kibana配置文件:
statefulset管理的Pod名称是有序的,删除指定Pod后自动创建的Pod名称不会改变。
statefulset创建时必须指定server名称,如果server没有IP地址,则会对server进行DNS解析,找到对应的Pod域名。
statefulset具有volumeclaimtemplate卷管理模板,创建出来的Pod都具有独立卷,相互没有影响。
statefulset创建出来的Pod,拥有独立域名,我们在指定访问Pod资源时,可以使用域名指定,IP会发生改变,但是域名不会(域名组成:Pod名称.svc名称.命名空间.svc.cluster.local)

cat > log-es-kibana.yml <<EOF
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: "http://localhost:9200"
i18n.locale: "zh-CN"
EOF

创建k8s命名空间

kubectl create namespace log-es
kubectl get namespace log-es
kubectl describe namespace  log-es

创建elasticsearch和kibana的配置文件configmap:

1、configmap是以明文的形式将配置信息提供给pod使用的办法。它的大小有限,不能超过1Mi

2、可以将文件、目录等多种形式做成configmap,并且通过env或者volume的形式供pod内使用。

3、它可以在不重新构建镜像或重启容器的情况下在线更新,但是需要一定时间间隔
以subPath方式挂载时,configmap更新,容器不会更新。

kubectl create configmap log-es-elastic-config -n log-es --from-file=log-es-elasticsearch.yml
kubectl create configmap log-es-kibana-config -n log-es --from-file=log-es-kibana.yml

更新方式1
#kubectl create configmap log-es-kibana-config -n log-es --from-file=log-es-kibana.yml --dry-run=client -o yaml | kubectl apply -f -
#kubectl get cm log-es-kibana-config -n log-es -o yaml > log-es-kibana.yaml  && kubectl replace -f log-es-kibana.yaml -n log-es
#测试下来不行(注意原命令缺少 -n log-es,dry-run 生成的对象会落在 default 命名空间)

更新方式2
kubectl edit configmap log-es-elastic-config -n log-es
kubectl edit configmap log-es-kibana-config -n log-es

查看列表
kubectl get configmap  -n log-es 

删除
kubectl delete cm log-es-elastic-config -n log-es
kubectl delete cm log-es-kibana-config -n log-es

kibana

kibana为有状态的固定节点,不需负载均衡,可以建无头服务
这个 Service 被创建后并不会被分配一个 VIP,而是会以 DNS 记录的方式暴露出它所代理的 Pod

<pod-name>.<svc-name>.<namespace>.svc.cluster.local  
$(podname)-$(ordinal).$(servicename).$(namespace).svc.cluster.local  
log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local  
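按上面的域名组成规则,可以用一个小函数拼出 Pod 的 FQDN 做核对(shell 示意):

```shell
# 域名组成:Pod名称.svc名称.命名空间.svc.cluster.local
pod_fqdn() { echo "$1.$2.$3.svc.cluster.local"; }
pod_fqdn log-es-elastic-sts-0 es-kibana-svc log-es
# 输出: log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local
```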
cat > log-es-kibana-svc.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    app: log-es-svc
  name: es-kibana-svc
  namespace: log-es
spec:
  ports:
  - name: 9200-9200
    port: 9200
    protocol: TCP
    targetPort: 9200
    #nodePort: 9200  #默认 NodePort 端口范围为 30000-32767,9200 不在范围内;留空由集群自动分配
  - name: 5601-5601
    port: 5601
    protocol: TCP
    targetPort: 5601
    #nodePort: 5601
  #clusterIP: None 
  selector:
    app: log-es-elastic-sts
  type: NodePort
  #type: ClusterIP
EOF

创建es-kibana的有状态资源 yaml配置文件:

cat > log-es-kibana-sts.yaml <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: log-es-elastic-sts
  name: log-es-elastic-sts
  namespace: log-es
spec:
  replicas: 1
  selector:
    matchLabels:
      app: log-es-elastic-sts
  serviceName: "es-kibana-svc"  #关联svc名称
  template:
    metadata:
      labels:
        app: log-es-elastic-sts
    spec:
      #imagePullSecrets:
      #- name: registry-pull-secret
      containers:
      - name: log-es-elasticsearch
        image: repo.k8s.local/docker.elastic.co/elasticsearch/elasticsearch:8.11.0
        imagePullPolicy: IfNotPresent
#        lifecycle:
#          postStart:
#            exec:
#              command: [ "/bin/bash", "-c",  touch /tmp/start" ] #sysctl -w vm.max_map_count=262144;ulimit -HSn 65535; 请在宿主机设定
        #command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        #command: [ "/bin/bash", "-c", "--" ]
        #args: [ "while true; do sleep 30; done;" ]
        #command: [ "/bin/bash", "-c","ulimit -HSn 65535;" ]
        resources:
          requests:
            memory: "800Mi"
            cpu: "800m"
          limits:
            memory: "1.2Gi"
            cpu: "2000m"
        ports:
        - containerPort: 9200
        - containerPort: 9300
        volumeMounts:
        - name: log-es-elastic-config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          subPath: log-es-elasticsearch.yml  #对应configmap log-es-elastic-config 中文件名称
        - name: log-es-persistent-storage
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: ES_JAVA_OPTS
          value: -Xms512m -Xmx512m
      - image: repo.k8s.local/docker.elastic.co/kibana/kibana:8.11.0
        imagePullPolicy: IfNotPresent
        #command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
        name: log-es-kibana
        ports:
        - containerPort: 5601
        env:
        - name: TZ
          value: Asia/Shanghai
        volumeMounts:
        - name: log-es-kibana-config
          mountPath: /usr/share/kibana/config/kibana.yml
          subPath: log-es-kibana.yml   #对应configmap log-es-kibana-config 中文件名称
      volumes:
      - name: log-es-elastic-config
        configMap:
          name: log-es-elastic-config
      - name: log-es-kibana-config
        configMap:
          name: log-es-kibana-config
      - name: log-es-persistent-storage
        hostPath:
          path: /localdata/es/data
          type: DirectoryOrCreate
      #hostNetwork: true
      #dnsPolicy: ClusterFirstWithHostNet
      nodeSelector:
         kubernetes.io/hostname: node02.k8s.local
EOF

单独调试文件,测试ulimit失败问题

cat > log-es-kibana-sts.yaml <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: log-es-elastic-sts
  name: log-es-elastic-sts
  namespace: log-es
spec:
  replicas: 1
  selector:
    matchLabels:
      app: log-es-elastic-sts
  serviceName: "es-kibana-svc"  #关联svc名称
  template:
    metadata:
      labels:
        app: log-es-elastic-sts
    spec:
      #imagePullSecrets:
      #- name: registry-pull-secret
#      initContainers:        # 初始化容器
#      - name: init-vm-max-map
#        image: repo.k8s.local/google_containers/busybox:9.9 
#        imagePullPolicy: IfNotPresent
#        command: ["sysctl","-w","vm.max_map_count=262144"]
#        securityContext:
#          privileged: true
#      - name: init-fd-ulimit
#        image: repo.k8s.local/google_containers/busybox:9.9 
#        imagePullPolicy: IfNotPresent
#        command: ["sh","-c","ulimit -HSn 65535;ulimit -n >/tmp/index/init.log"]
#        securityContext:
#          privileged: true
#        volumeMounts:
#        - name: init-test
#          mountPath: /tmp/index
#        terminationMessagePath: /dev/termination-log
#        terminationMessagePolicy: File
      containers:
      - name: log-es-elasticsearch
        image: repo.k8s.local/docker.elastic.co/elasticsearch/elasticsearch:8.11.0
        imagePullPolicy: IfNotPresent
#        securityContext:
#          privileged: true
#          capabilities:
#          add: ["SYS_RESOURCE"]
#        lifecycle:
#          postStart:
#            exec:
#              command: [ "/bin/bash", "-c", "sysctl -w vm.max_map_count=262144; ulimit -l unlimited;echo 'Container started';" ]
#        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        resources:
          requests:
            memory: "800Mi"
            cpu: "800m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
        ports:
        - containerPort: 9200
        - containerPort: 9300
        volumeMounts:
#        - name: ulimit-config
#          mountPath: /etc/security/limits.conf
#          #readOnly: true
#          #subPath: limits.conf
        - name: log-es-elastic-config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          subPath: log-es-elasticsearch.yml  #对应configmap log-es-elastic-config 中文件名称
        - name: log-es-persistent-storage
          mountPath: /usr/share/elasticsearch/data
#        - name: init-test
#          mountPath: /tmp/index
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: ES_JAVA_OPTS
          value: -Xms512m -Xmx512m
      volumes:
#      - name: init-test
#        emptyDir: {}
      - name: log-es-elastic-config
        configMap:
          name: log-es-elastic-config
      - name: log-es-persistent-storage
        hostPath:
          path: /localdata/es/data
          type: DirectoryOrCreate
#      - name: ulimit-config
#        hostPath:
#          path: /etc/security/limits.conf 
      #hostNetwork: true
      #dnsPolicy: ClusterFirstWithHostNet
      nodeSelector:
         kubernetes.io/hostname: node02.k8s.local
EOF

elastic

elastic索引目录
在node02上,pod默认运行用户id=1000

mkdir -p /localdata/es/data
chmod 777 /localdata/es/data
chown 1000:1000 /localdata/es/data

在各节点上建filebeat registry 目录,包括master
使用daemonSet运行filebeat需要挂载/usr/share/filebeat/data,该目录下有一个registry文件,里面记录了filebeat采集日志位置的相关内容,比如文件offset、source、timestamp等,如果Pod发生异常后K8S自动将Pod进行重启,不挂载的情况下registry会被重置,将导致日志文件又从offset=0开始采集,结果就是es中日志重复一份,这点非常重要.
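registry 的内容是 JSON 行,每行记录一个文件的 source 与 offset。下面用一条假设的样例数据演示其结构(实际路径为 /usr/share/filebeat/data/registry/,字段以实际 filebeat 版本为准):

```shell
# 写入一条假设的 registry 样例行,并解析出采集偏移量(需要 python3)
cat > /tmp/registry-sample.json <<'EOF'
{"k":"filestream::100::native::123-45","v":{"source":"/var/log/nginx/access.log","offset":4096}}
EOF
python3 -c 'import json; print(json.load(open("/tmp/registry-sample.json"))["v"]["offset"])'
# 输出: 4096
```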

mkdir -p /localdata/filebeat/data
chown 1000:1000 /localdata/filebeat/data
chmod 777 /localdata/filebeat/data
kubectl apply -f log-es-kibana-sts.yaml
kubectl delete -f log-es-kibana-sts.yaml
kubectl apply -f log-es-kibana-svc.yaml
kubectl delete -f log-es-kibana-svc.yaml

kubectl get pods -o wide -n log-es
NAME               READY   STATUS              RESTARTS   AGE   IP       NODE               NOMINATED NODE   READINESS GATES
log-es-elastic-sts-0   0/2     ContainerCreating   0          66s   <none>   node02.k8s.local   <none>           <none>

#查看详情
kubectl -n log-es describe pod log-es-elastic-sts-0 
kubectl -n log-es logs -f log-es-elastic-sts-0 -c log-es-elasticsearch
kubectl -n log-es logs -f log-es-elastic-sts-0 -c log-es-kibana

kubectl -n log-es logs log-es-elastic-sts-0 -c log-es-elasticsearch
kubectl -n log-es logs log-es-elastic-sts-0 -c log-es-kibana

kubectl -n log-es logs -f --tail=20 log-es-elastic-sts-0 -c log-es-elasticsearch

进入指定 pod 中指定容器(pod 内有多个容器时需用 -c 指定):
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh 
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-kibana  -- /bin/sh 

kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'cat /etc/security/limits.conf'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'ulimit -HSn 65535'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'ulimit -n'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'cat /etc/security/limits.d/20-nproc.conf'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'cat /etc/pam.d/login|grep  pam_limits'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'ls /etc/pam.d/sshd'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'cat /etc/profile'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c '/usr/share/elasticsearch/bin/elasticsearch'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'ls /tmp/'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'cat /dev/termination-log'

kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'ps -aux'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-kibana -- /bin/sh  -c 'ps -aux'

启动失败主要原因

文件句柄不够
ulimit -HSn 65535
最大虚拟内存太小
sysctl -w vm.max_map_count=262144
数据目录权限不对
/usr/share/elasticsearch/data
xpack权限不对
xpack.security.enabled: "false"

容器添加调试,进入容器中查看不再退出

        #command: [ "/bin/bash", "-c", "--" ]
        #args: [ "while true; do sleep 30; done;" ]

查看运行用户
id
uid=1000(elasticsearch) gid=1000(elasticsearch) groups=1000(elasticsearch),0(root)

查看数据目录权限
touch /usr/share/elasticsearch/data/test
ls /usr/share/elasticsearch/data/

测试启动
/usr/share/elasticsearch/bin/elasticsearch

{"@timestamp":"2023-11-09T05:20:55.298Z", "log.level":"ERROR", "message":"node validation exception\n[2] bootstrap checks failed. You must address the points described in the following [2] lines before starting Elasticsearch. For more information see [https://www.elastic.co/guide/en/elasticsearch/reference/8.11/bootstrap-checks.html]\nbootstrap check failure [1] of [2]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]; for more information see [https://www.elastic.co/guide/en/elasticsearch/reference/8.11/_file_descriptor_check.html]\nbootstrap check failure [2] of [2]: Transport SSL must be enabled if security is enabled. Please set [xpack.security.transport.ssl.enabled] to [true] or disable security by setting [xpack.security.enabled] to [false]; for more information see [https://www.elastic.co/guide/en/elasticsearch/reference/8.11/bootstrap-checks-xpack.html#bootstrap-checks-tls]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.Elasticsearch","elasticsearch.node.name":"node-1","elasticsearch.cluster.name":"log-es"}
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/log-es.log

启动elasticsearch时,提示max virtual memory areas vm.max_map_count [65530] is too low, increase to at least

sysctl: setting key "vm.max_map_count", ignoring: Read-only file system

ulimit: max locked memory: cannot modify limit: Operation not permitted

错误
master not discovered yet, this node has not previously joined a bootstrapped cluster, and this node must discover master-eligible nodes
没有发现主节点
在/etc/elasticsearch/elasticsearch.yml文件中加入主节点名:cluster.initial_master_nodes: ["master","node"]

  1. 找到pod的CONTAINER 名称
    在pod对应node下运行

    crictl ps
    CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
    d3f8f373be7cb       7f03b6ec0c6f6       About an hour ago   Running             log-es-elasticsearch                   0                   cd3cebe50807f       log-es-elastic-sts-0
  2. 找到pod的pid

    crictl inspect d3f8f373be7cb |grep -i pid
    "pid": 8420,
            "pid": 1
            "type": "pid"
  3. 容器外执行容器内命令

    
    nsenter -t 8420 -n hostname
    node02.k8s.local

cat /proc/8420/limits |grep "open files"
Max open files 4096 4096 files
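不借助 nsenter,也可以直接读任意进程的 /proc/<pid>/limits 来确认句柄限制,例如查看当前 shell 自身:

```shell
# 查看当前进程的 open files 软/硬限制
grep "Max open files" /proc/$$/limits
```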


参考
https://imroc.cc/kubernetes/trick/deploy/set-sysctl/
https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/
解决:  
postStart中执行无效,改成宿主机上执行,映射到容器  
在node02上执行

sysctl -w vm.max_map_count=262144

sysctl -a|grep vm.max_map_count

cat /etc/security/limits.conf

cat > /etc/security/limits.d/20-nofile.conf <<EOF
root soft nofile 65535
root hard nofile 65535
* soft nofile 65535
* hard nofile 65535
EOF

cat > /etc/security/limits.d/20-nproc.conf <<EOF
* soft nproc 65535
root soft nproc unlimited
root hard nproc unlimited
EOF

在CentOS 7版本中为/etc/security/limits.d/20-nproc.conf,在CentOS 6版本中为/etc/security/limits.d/90-nproc.conf

echo "* soft nofile 65535" >> /etc/security/limits.conf
echo "* hard nofile 65535" >> /etc/security/limits.conf
echo "andychu soft nofile 65535" >> /etc/security/limits.conf
echo "andychu hard nofile 65535" >> /etc/security/limits.conf
echo "ulimit -HSn 65535" >> /etc/rc.local

ulimit -a
sysctl -p

systemctl show sshd |grep LimitNOFILE

cat /etc/systemd/system.conf|grep DefaultLimitNOFILE
sed -n 's/#DefaultLimitNOFILE=/DefaultLimitNOFILE=65535/p' /etc/systemd/system.conf
sed -i 's/^#DefaultLimitNOFILE=/DefaultLimitNOFILE=65535/' /etc/systemd/system.conf

systemctl daemon-reexec

systemctl restart containerd
systemctl restart kubelet

crictl inspect c30a814bcf048 |grep -i pid

cat /proc/53657/limits |grep "open files"
Max open files 65535 65535 files

kubectl get pods -o wide -n log-es
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
log-es-elastic-sts-0 2/2 Running 0 2m19s 10.244.2.131 node02.k8s.local <none> <none>

[root@node01 nginx]# curl http://10.244.2.131:9200
{
  "name" : "node-1",
  "cluster_name" : "log-es",
  "cluster_uuid" : "Agfoz8qmS3qob_R6bp2cAw",
  "version" : {
    "number" : "8.11.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "d9ec3fa628c7b0ba3d25692e277ba26814820b20",
    "build_date" : "2023-11-04T10:04:57.184859352Z",
    "build_snapshot" : false,
    "lucene_version" : "9.8.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

kubectl get pod -n log-es
kubectl get pod -n test

查看service

kubectl get service -n log-es
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
es-kibana-svc ClusterIP None <none> 9200/TCP,5601/TCP 53s

kubectl apply -f log-es-kibana-svc.yaml
kubectl delete -f log-es-kibana-svc.yaml

kubectl exec -it pod/test-pod-1 -n test -- ping www.c1gstudio.com
kubectl exec -it pod/test-pod-1 -n test -- ping svc-openresty.test
kubectl exec -it pod/test-pod-1 -n test -- nslookup log-es-elastic-sts-0.es-kibana-svc.log-es
kubectl exec -it pod/test-pod-1 -n test -- ping log-es-elastic-sts-0.es-kibana-svc.log-es

kubectl exec -it pod/test-pod-1 -n test -- curl http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200
kubectl exec -it pod/test-pod-1 -n test -- curl -L http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:5601

cat > log-es-kibana-svc.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    app: log-es-svc
  name: es-kibana-svc
  namespace: log-es
spec:
  ports:
  - name: 9200-9200
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: 5601-5601
    port: 5601
    protocol: TCP
    targetPort: 5601
  #clusterIP: None
  selector:
    app: log-es-elastic-sts
  type: NodePort
  #type: ClusterIP
EOF

kubectl get service -n log-es
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
es-kibana-svc   NodePort   10.96.128.50   <none>        9200:30118/TCP,5601:31838/TCP   16m

使用nodeip+port访问,本次端口为31838
curl -L http://192.168.244.7:31838
curl -L http://10.96.128.50:5601

外部nat转发后访问
http://127.0.0.1:5601/

ingress

cat > log-es-kibana-ingress.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-kibana
  namespace: log-es
  labels:
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/part-of: kibana
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  ingressClassName: nginx
  rules:
  - host: kibana.k8s.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: es-kibana-svc
            port:
              number: 5601
EOF             

kubectl apply -f log-es-kibana-ingress.yaml

kubectl get ingress -n log-es
curl -L -H "Host:kibana.k8s.local" http://10.96.128.50:5601

filebeat

#https://www.elastic.co/guide/en/beats/filebeat/8.11/drop-fields.html
#https://raw.githubusercontent.com/elastic/beats/7.9/deploy/kubernetes/filebeat-kubernetes.yaml
cat > log-es-filebeat-configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: log-es-filebeat-config
  namespace: log-es
data:
  filebeat.yml: |-
    filebeat.inputs:
      - type: container
        containers.ids:
          - "*"
        id: 99
        enabled: true
        tail_files: true
        paths:
          - /var/log/containers/*.log
          #- /var/lib/docker/containers/*/*.log
          #- /var/log/pods/*/*/*.log   
        processors:
        - add_kubernetes_metadata:
            in_cluster: true
            matchers:
              - logs_path:
                  logs_path: "/var/log/containers/"     
        fields_under_root: true
        exclude_files: ['\.gz$']
        tags: ["k8s"]  
        fields:
          source: "container"     
      - type: filestream
        id: 100
        enabled: true
        tail_files: true
        paths:
          - /var/log/nginx/access*.log
        processors:
          - decode_json_fields:
              fields: [ 'message' ]
              target: "" # 指定日志字段message,头部以json标注,如果不要json标注则设置为空如:target: ""
              overwrite_keys: false # 默认情况下,解码后的 JSON 位于输出文档中的“json”键下。如果启用此设置,则键将在输出文档中的顶层复制。默认值为 false
              process_array: false
              max_depth: 1
          - drop_fields: 
              fields: ["agent","ecs.version"]
              ignore_missing: true
        fields_under_root: true
        tags: ["ingress-nginx-access"]
        fields:
          source: "ingress-nginx-access"          
      - type: filestream
        id: 101
        enabled: true
        tail_files: true
        paths:
          - /var/log/nginx/error.log
        close_inactive: 5m
        ignore_older: 24h
        clean_inactive: 96h
        clean_removed: true
        fields_under_root: true
        tags: ["ingress-nginx-error"]   
        fields:
          source: "ingress-nginx-error"             
      - type: filestream
        id: 102
        enabled: true
        tail_files: true
        paths:
          - /nginx/logs/*.log
        exclude_files: ['\.gz$','error.log']
        close_inactive: 5m
        ignore_older: 24h
        clean_inactive: 96h
        clean_removed: true
        fields_under_root: true
        tags: ["web-log"]  
        fields:
          source: "nginx-access"            
    output.logstash:
      hosts: ["logstash.log-es.svc.cluster.local:5044"]                           
    #output.elasticsearch:
      #hosts: ["http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200"]
      #index: "log-%{[fields.tags]}-%{+yyyy.MM.dd}"
      #indices:
        #- index: "log-ingress-nginx-access-%{+yyyy.MM.dd}"
          #when.contains:
            #tags: "ingress-nginx-access"
        #- index: "log-ingress-nginx-error-%{+yyyy.MM.dd}"
          #when.contains:
            #tags: "ingress-nginx-error" 
        #- index: "log-web-log-%{+yyyy.MM.dd}"
          #when.contains:
            #tags: "web-log" 
        #- index: "log-k8s-%{+yyyy.MM.dd}"
          #when.contains:
            #tags: "k8s"                                
    json.keys_under_root: true # 默认情况下,解码后的 JSON 位于输出文档中的“json”键下。如果启用此设置,则键将在输出文档中的顶层复制。默认值为 false
    json.overwrite_keys: true # 如果启用了此设置,则解码的 JSON 对象中的值将覆盖 Filebeat 在发生冲突时通常添加的字段(类型、源、偏移量等)
    setup.template.enabled: false  #false不使用默认的filebeat-%{[agent.version]}-%{+yyyy.MM.dd}索引
    setup.template.overwrite: true #开启新设置的模板
    setup.template.name: "log" #设置一个新的模板,模板的名称
    setup.template.pattern: "log-*" #模板匹配那些索引
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
    #setup.template.settings:
    #  index.number_of_shards: 1
    #  index.number_of_replicas: 1
    setup.ilm.enabled: false # 修改索引名称,要关闭索引生命周期管理ilm功能
    logging.level: warning #debug、info、warning、error
    logging.to_syslog: false
    logging.metrics.period: 300s
    logging.to_files: true
    logging.files:
      path: /tmp/
      name: "filebeat.log"
      rotateeverybytes: 10485760
      keepfiles: 7
EOF
cat > log-es-filebeat-daemonset.yaml <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: log-es
spec:
  #replicas: 1
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccount: filebeat
      containers:
      - name: filebeat
        image: repo.k8s.local/docker.elastic.co/beats/filebeat:8.11.0
        imagePullPolicy: IfNotPresent
        env:
        - name: TZ
          value: "Asia/Shanghai"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi              
        volumeMounts:
        - name: filebeat-config
          readOnly: true
          mountPath: /config/filebeat.yml    # Filebeat 配置
          subPath: filebeat.yml
        - name: fb-data
          mountPath: /usr/share/filebeat/data         
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          readOnly: true          
          mountPath: /var/lib/docker/containers
        - name: varlogingress
          readOnly: true          
          mountPath: /var/log/nginx      
        - name: varlogweb
          readOnly: true          
          mountPath: /nginx/logs              
        args:
        - -c
        - /config/filebeat.yml
      volumes:
      - name: fb-data
        hostPath:
          path: /localdata/filebeat/data 
          type: DirectoryOrCreate    
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlogingress
        hostPath:
          path: /var/log/nginx
      - name: varlogweb
        hostPath:
          path: /nginx/logs      
      - name: filebeat-config
        configMap:
          name: log-es-filebeat-config
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
EOF
kubectl apply -f log-es-filebeat-configmap.yaml
kubectl delete -f log-es-filebeat-configmap.yaml
kubectl apply -f log-es-filebeat-daemonset.yaml
kubectl delete -f log-es-filebeat-daemonset.yaml 
kubectl get pod -n log-es  -o wide 
kubectl get cm -n log-es
kubectl get ds -n log-es
kubectl edit configmap log-es-filebeat-config -n log-es
kubectl get service -n log-es

#after updating the configmap, the pods must be restarted manually
kubectl rollout restart  ds/filebeat -n log-es
kubectl patch  ds filebeat  -n log-es   --patch '{"spec": {"template": {"metadata": {"annotations": {"version/config": "202311141" }}}}}'

#restart es
kubectl rollout restart  sts/log-es-elastic-sts -n log-es
#delete the configmap
kubectl delete cm filebeat -n log-es
kubectl delete cm log-es-filebeat-config -n log-es

#view details
kubectl -n log-es describe pod filebeat-mdldl
kubectl -n log-es logs -f filebeat-4kgpl

#inspect the pods
kubectl exec -n log-es -it filebeat-hgpnl  -- /bin/sh 
kubectl exec -n log-es -it filebeat-q69f5   -- /bin/sh  -c 'ps aux'
kubectl exec -n log-es -it filebeat-wx4x2  -- /bin/sh  -c 'cat  /config/filebeat.yml'
kubectl exec -n log-es -it filebeat-4j2qd -- /bin/sh  -c 'cat  /tmp/filebeat*'
kubectl exec -n log-es -it filebeat-9qx6f -- /bin/sh  -c 'cat  /tmp/filebeat*'
kubectl exec -n log-es -it filebeat-9qx6f -- /bin/sh  -c 'ls /usr/share/filebeat/data'
kubectl exec -n log-es -it filebeat-9qx6f -- /bin/sh  -c 'filebeat modules list'

kubectl exec -n log-es -it filebeat-kmrcc  -- /bin/sh -c 'curl http://localhost:5066/?pretty'

kubectl exec -n log-es -it filebeat-hqz9b  -- /bin/sh -c 'curl http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200/_cat/indices?v'
curl -XGET 'http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200/_cat/indices?v'

curl http://10.96.128.50:9200/_cat/indices?v

Error: serviceaccount permissions
Failed to watch v1.Node: failed to list v1.Node: nodes "node02.k8s.local" is forbidden: User "system:serviceaccount:log-es:default" cannot list resource "nodes" in API group "" at the cluster scope
When a namespace is created, a serviceaccount named default is created for it automatically. This error means the pod, running under the namespace's default serviceaccount, has no permission to access that K8s API group. Check with:
kubectl get sa -n log-es

Solution
Create a tiller-ServiceAccount.yaml and apply it with kubectl apply -f tiller-ServiceAccount.yaml to create the account and role binding; here log-es is your namespace.

vi tiller-ServiceAccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: log-es
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: log-es

kubectl apply -f tiller-ServiceAccount.yaml
Associate it with the currently running pods (and make the same change in the yaml):
kubectl patch ds --namespace log-es filebeat -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

vi filebeat-ServiceAccount.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: log-es
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat
  namespace: log-es
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: log-es
roleRef:
  kind: Role
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat-kubeadm-config
  namespace: log-es
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: log-es
roleRef:
  kind: Role
  name: filebeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
- apiGroups: ["apps"]
  resources:
    - replicasets
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat
  # should be the namespace where filebeat is running
  namespace: log-es
  labels:
    k8s-app: filebeat
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat-kubeadm-config
  namespace: log-es
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""]
    resources:
      - configmaps
    resourceNames:
      - kubeadm-config
    verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: log-es
  labels:
    k8s-app: filebeat
kubectl apply -f filebeat-ServiceAccount.yaml 
kubectl get sa -n log-es

The nginx module is awkward to use
Enable the module in filebeat.yml:

```
    filebeat.modules:
      - module: nginx
```

Otherwise filebeat exits with: `Exiting: module nginx is configured but has no enabled filesets`
You also have to enable the module's filesets:

./filebeat modules enable nginx

./filebeat --modules nginx

The fileset configuration must be written in modules.d/nginx.yml.
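A minimal sketch of what modules.d/nginx.yml might look like once the filesets are enabled; the log paths here are assumptions and must match where your nginx actually writes:

```yaml
- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]   # assumed path
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log*"]    # assumed path
```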

Filebeat outputs
  1. Elasticsearch Output (ship collected data to es; present in the default config, also documented on the official site)
  2. Logstash Output (ship collected data to logstash; present in the default config, also documented on the official site)
  3. Redis Output (ship collected data to redis; not in the default config, see the official site)
  4. File Output (ship collected data to a file; present in the default config)
  5. Console Output (ship collected data to the console; present in the default config)

https://www.elastic.co/guide/en/beats/filebeat/8.12/configuring-howto-filebeat.html
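As a sketch, the two outputs relevant to this setup would look roughly like this in filebeat.yml; the hosts reuse the service names from this article and only one output can be active at a time:

```yaml
output.elasticsearch:
  hosts: ["http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200"]
# or ship to logstash instead (used later in this article):
#output.logstash:
#  hosts: ["logstash.log-es.svc.cluster.local:5044"]
```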

### Adding logstash to clean the collected raw logs as the business requires
vi log-es-logstash-deploy.yaml

apiVersion: apps/v1
#kind: DaemonSet
#kind: StatefulSet
kind: Deployment
metadata:
  name: logstash
  namespace: log-es
  labels:
    app: logstash
spec:
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      #dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: logstash
        ports:
        - containerPort: 5044
          name: logstash
        command:
        - logstash
        - '-f'
        - '/etc/logstash_c/logstash.conf'
        image: repo.k8s.local/docker.elastic.co/logstash/logstash:8.11.0
        env:
        - name: TZ
          value: "Asia/Shanghai"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - name: config-volume
          mountPath: /etc/logstash_c/
        - name: config-yml-volume
          mountPath: /usr/share/logstash/config/
        resources: # always set resource limits on logstash so it cannot starve other workloads
          limits:
            cpu: 1000m
            memory: 2048Mi
          requests:
            cpu: 512m
            memory: 512Mi
      volumes:
      - name: config-volume
        configMap:
          name: logstash-conf
          items:
          - key: logstash.conf
            path: logstash.conf
      - name: config-yml-volume
        configMap:
          name: logstash-yml
          items:
          - key: logstash.yml
            path: logstash.yml
      nodeSelector:
        ingresstype: ingress-nginx
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

vi log-es-logstash-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: logstash
  namespace: log-es
  labels:
    app: logstash
spec:
  #type: NodePort
  type: ClusterIP
  clusterIP: None
  ports:
  - name: http
    port: 5044
    #nodePort: 30044
    protocol: TCP
    targetPort: 5044
  selector:
    app: logstash


vi log-es-logstash-ConfigMap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-conf
  namespace: log-es
  labels:
    app: logstash
data:
  logstash.conf: |-
input {
  beats {
    port => 5044
  }
}
filter {
  if [agent][type] == "filebeat" {
    mutate {
      remove_field => "[agent]"
      remove_field => "[ecs]"
      remove_field => "[log][offset]"
    }
  }
  if [input][type] == "container" {
    mutate {
      remove_field => "[kubernetes][node][hostname]"
      remove_field => "[kubernetes][labels]"
      remove_field => "[kubernetes][namespace_labels]"
      remove_field => "[kubernetes][node][labels]"
    }
  }

  # handle ingress-nginx logs

  if [kubernetes][container][name] == "nginx-ingress-controller" {
    json {
      source => "message"
      target => "ingress_log"
    }
    if [ingress_log][requesttime] {
        mutate {
        convert => ["[ingress_log][requesttime]", "float"]
        }
    }
    if [ingress_log][upstremtime] {
        mutate {
        convert => ["[ingress_log][upstremtime]", "float"]
        }
    }
    if [ingress_log][status] {
        mutate {
        convert => ["[ingress_log][status]", "float"]
        }
    }
    if  [ingress_log][httphost] and [ingress_log][uri] {
        mutate {
          add_field => {"[ingress_log][entry]" => "%{[ingress_log][httphost]}%{[ingress_log][uri]}"}
        }
        mutate{
          split => ["[ingress_log][entry]","/"]
        }
        if [ingress_log][entry][1] {
          mutate{
          add_field => {"[ingress_log][entrypoint]" => "%{[ingress_log][entry][0]}/%{[ingress_log][entry][1]}"}
          remove_field => "[ingress_log][entry]"
          }
        }
        else{
          mutate{
          add_field => {"[ingress_log][entrypoint]" => "%{[ingress_log][entry][0]}/"}
          remove_field => "[ingress_log][entry]"
          }
        }
    }
  }
  # handle business-service logs whose container names start with srv
  if [kubernetes][container][name] =~ /^srv*/ {
    json {
      source => "message"
      target => "tmp"
    }
    if [kubernetes][namespace] == "kube-system" {
      drop{}
    }
    if [tmp][level] {
      mutate{
        add_field => {"[applog][level]" => "%{[tmp][level]}"}
      }
      if [applog][level] == "debug"{
        drop{}
      }
    }
    if [tmp][msg]{
      mutate{
        add_field => {"[applog][msg]" => "%{[tmp][msg]}"}
      }
    }
    if [tmp][func]{
      mutate{
      add_field => {"[applog][func]" => "%{[tmp][func]}"}
      }
    }
    if [tmp][cost]{
      if "ms" in [tmp][cost]{
        mutate{
          split => ["[tmp][cost]","m"]
          add_field => {"[applog][cost]" => "%{[tmp][cost][0]}"}
          convert => ["[applog][cost]", "float"]
        }
      }
      else{
        mutate{
        add_field => {"[applog][cost]" => "%{[tmp][cost]}"}
        }
      }
    }
    if [tmp][method]{
      mutate{
      add_field => {"[applog][method]" => "%{[tmp][method]}"}
      }
    }
    if [tmp][request_url]{
      mutate{
        add_field => {"[applog][request_url]" => "%{[tmp][request_url]}"}
      }
    }
    if [tmp][meta._id]{
      mutate{
        add_field => {"[applog][traceId]" => "%{[tmp][meta._id]}"}
      }
    }
    if [tmp][project] {
      mutate{
        add_field => {"[applog][project]" => "%{[tmp][project]}"}
      }
    }
    if [tmp][time] {
      mutate{
      add_field => {"[applog][time]" => "%{[tmp][time]}"}
      }
    }
    if [tmp][status] {
      mutate{
        add_field => {"[applog][status]" => "%{[tmp][status]}"}
      convert => ["[applog][status]", "float"]
      }
    }
  }
  mutate{
    rename => ["kubernetes", "k8s"]
    remove_field => "beat"
    remove_field => "tmp"
    remove_field => "[k8s][labels][app]"
    remove_field => "[event][original]"      
  }
}
output{
    if [source] == "container" {      
      elasticsearch {
        hosts => ["http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200"]
        codec => json
        index => "k8s-logstash-container-%{+YYYY.MM.dd}"
      }
      #stdout { codec => rubydebug }
    }
    if [source] == "ingress-nginx-access" {
      elasticsearch {
        hosts => ["http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200"]
        codec => json
        index => "k8s-logstash-ingress-nginx-access-%{+YYYY.MM.dd}"
      }
      #stdout { codec => rubydebug }
    }
    if [source] == "ingress-nginx-error" {
      elasticsearch {
        hosts => ["http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200"]
        codec => json
        index => "k8s-logstash-ingress-nginx-error-%{+YYYY.MM.dd}"
      }
      #stdout { codec => rubydebug }
    }
    if [source] == "nginx-access" {
      elasticsearch {
        hosts => ["http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200"]
        codec => json
        index => "k8s-logstash-nginx-access-%{+YYYY.MM.dd}"
      }
      #stdout { codec => rubydebug }
    }                  
    #stdout { codec => rubydebug }
}
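The entrypoint extraction in the ingress filter above (concatenate httphost and uri, split on "/", keep the first path segment) can be sketched and checked in Python; this is only an illustration of the logic, not part of the pipeline:

```python
def entrypoint(httphost: str, uri: str) -> str:
    # logstash: entry = "%{httphost}%{uri}", then split => ["[ingress_log][entry]","/"]
    parts = (httphost + uri).split("/")
    # if entry[1] exists (a non-empty first path segment), keep "host/segment"
    if len(parts) > 1 and parts[1]:
        return parts[0] + "/" + parts[1]
    # otherwise just "host/"
    return parts[0] + "/"

print(entrypoint("data.c1gstudio.net", "/admin/imgcode/imgcode.php"))  # data.c1gstudio.net/admin
print(entrypoint("data.c1gstudio.net", "/"))                           # data.c1gstudio.net/
```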

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-yml
  namespace: log-es
  labels:
    app: logstash
data:
  logstash.yml: |-
    http.host: "0.0.0.0"
    xpack.monitoring.elasticsearch.hosts: http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200

kubectl apply -f log-es-logstash-ConfigMap.yaml
kubectl delete -f log-es-logstash-ConfigMap.yaml
kubectl apply -f log-es-logstash-deploy.yaml
kubectl delete -f log-es-logstash-deploy.yaml
kubectl apply -f log-es-logstash-svc.yaml
kubectl delete -f log-es-logstash-svc.yaml

kubectl apply -f log-es-filebeat-configmap.yaml

kubectl get pod -n log-es -o wide
kubectl get cm -n log-es
kubectl get ds -n log-es
kubectl edit configmap log-es-filebeat-config -n log-es
kubectl get service -n log-es

View details

kubectl -n log-es describe pod filebeat-97l85
kubectl -n log-es logs -f logstash-847d7f5b56-jv5jj
kubectl logs -n log-es $(kubectl get pod -n log-es -o jsonpath='{.items[3].metadata.name}') -f
kubectl exec -n log-es -it filebeat-97l85 -- /bin/sh -c 'cat /tmp/filebeat*'

After updating the configmap, restart the pods manually

kubectl rollout restart ds/filebeat -n log-es
kubectl rollout restart deploy/logstash -n log-es

Force delete

kubectl delete pod filebeat-fncq9 -n log-es --force --grace-period=0

kubectl exec -it pod/test-pod-1 -n test -- ping logstash.log-es.svc.cluster.local
kubectl exec -it pod/test-pod-1 -n test -- curl http://logstash.log-es.svc.cluster.local:5044

...svc.cluster.local
```

```
#stop the services
kubectl delete -f log-es-filebeat-daemonset.yaml
kubectl delete -f log-es-logstash-deploy.yaml
kubectl delete -f log-es-kibana-sts.yaml
kubectl delete -f log-es-kibana-svc.yaml

```

# Receiving logs in filebeat via syslog
log-es-filebeat-configmap.yaml
```
    - type: syslog
      format: auto
      id: syslog-id
      enabled: true
      max_message_size: 20KiB
      timeout: 10
      keep_null: true
      processors:
        - drop_fields:
            fields: ["input","agent","ecs.version","log.offset","event","syslog"]
            ignore_missing: true
      protocol.udp:
        host: "0.0.0.0:33514"
      tags: ["web-access"]
      fields:
        source: "syslog-web-access"
```
On the nodes running the DaemonSet, check whether the host port is listening:
netstat -anup
lsof -i

## Installing ping inside the pod for testing
```
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-6.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
sed -i 's/mirrors.aliyun.com/vault.centos.org/g' /etc/yum.repos.d/CentOS-Base.repo
sed -i 's/gpgcheck=1/gpgcheck=0/g' /etc/yum.repos.d/CentOS-Base.repo

yum clean all && yum makecache
yum install iputils

ping 192.168.244.4
```
https://nginx.org/en/docs/syslog.html
nginx can send syslog over UDP (and unix sockets), but not TCP.
nginx configuration:
```
access_log syslog:server=192.168.244.7:33514,facility=local5,tag=data_c1gstudiodotnet,severity=info access;
```
filebeat receives it:
```
{
  "@timestamp": "2024-02-20T02:35:41.000Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "8.11.0",
    "truncated": false
  },
  "hostname": "openresty-php5.2-6cbdff6bbd-7fjdc",
  "process": {
    "program": "data_c1gstudiodotnet"
  },
  "host": {
    "name": "node02.k8s.local"
  },
  "agent": {
    "id": "cf964318-5fdc-493e-ae2c-d2acb0bc6ca8",
    "name": "node02.k8s.local",
    "type": "filebeat",
    "version": "8.11.0",
    "ephemeral_id": "42789eee-3658-4f0f-982e-cb96d18fd9a2"
  },
  "message": "10.100.3.80 - - [20/Feb/2024:10:35:41 +0800] \"GET /admin/imgcode/imgcode.php HTTP/1.1\" 200 1271 \"https://data.c1gstudio.net:31443/admin/login.php?1\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.5735.289 Safari/537.36\" ",
  "ecs": {
    "version": "8.0.0"
  },
  "log": {
    "source": {
      "address": "10.244.2.216:37255"
    }
  },
  "tags": [
    "syslog-web-log"
  ],
  "fields": {
    "source": "syslog-nginx-access"
  }
}
```

Test filebeat's syslog input:
echo "hello" > /dev/udp/192.168.244.4/1514
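The same smoke test can be done from Python; this sketch sends a syslog-style line over UDP to a local stand-in listener and reads it back (port 0 picks a free port for the demo; against the real DaemonSet you would target port 33514 from the config above):

```python
import socket

# stand-in listener for filebeat's protocol.udp input (filebeat binds 0.0.0.0:33514)
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))            # port 0 = pick any free port for the demo
port = srv.getsockname()[1]

# stand-in sender for nginx's access_log syslog:server=...
# <174> = facility local5 (21*8) + severity info (6)
msg = b"<174>Feb 20 10:35:41 host data_c1gstudiodotnet: GET /admin/imgcode/imgcode.php 200"
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.sendto(msg, ("127.0.0.1", port))

data, addr = srv.recvfrom(2048)
print(data == msg)
srv.close()
cli.close()
```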

Advanced: solving the filebeat host-IP problem.
At deploy time, write the node's IP into the pod's hosts file, and let nginx address it by hostname.
```
containers:
- name: openresty-php-fpm5-2-17
  env:
  - name: MY_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  - name: MY_NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  command: ["/bin/sh", "-c", "echo \"$(MY_NODE_IP) MY_NODE_IP\" >> /etc/hosts;/opt/lemp start;cd /opt/init/ && ./inotify_reload.sh "]
```
nginx configuration:
```
access_log syslog:server=MY_NODE_IP:33514,facility=local5,tag=data_c1gstudiodotnet,severity=info access;
```

# Receiving logs in filebeat via a unix socket
filebeat shares its socket with the host; nginx mounts the host socket and writes messages to it.
nginx configuration:
```
access_log syslog:server=unix:/usr/local/filebeat/filebeat.sock,facility=local5,tag=data_c1gstudiodotnet,severity=info access;
```

Create the shared directory on every node; the auto-created one is root 0755 and not writable from inside the pod:
mkdir -m 0777 /localdata/filebeat/socket
chmod 0777 /localdata/filebeat/socket
filebeat configuration:
```
- type: unix
  enabled: true
  id: unix-id
  max_message_size: 100KiB
  path: "/usr/share/filebeat/socket/filebeat.sock"
  socket_type: datagram
  #group: "website"
  processors:
    - syslog:
        field: message
    - drop_fields:
        fields: ["input","agent","ecs","log.syslog.severity","log.syslog.facility","log.syslog.priority"]
        ignore_missing: true
  tags: ["web-access"]
  fields:
    source: "unix-web-access"
```
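The datagram unix-socket handoff above can be sketched in Python: one side binds the socket file like filebeat's unix input, the other writes to it like nginx's syslog output (the temp path here stands in for /usr/share/filebeat/socket/filebeat.sock):

```python
import os
import socket
import tempfile

# stand-in for filebeat's unix input with socket_type: datagram
sock_path = os.path.join(tempfile.mkdtemp(), "filebeat.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
srv.bind(sock_path)            # filebeat binds /usr/share/filebeat/socket/filebeat.sock
os.chmod(sock_path, 0o777)     # same idea as the 0777 shared directory above

# stand-in for nginx: access_log syslog:server=unix:/usr/local/filebeat/filebeat.sock,...
cli = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
cli.sendto(b"<174>openresty-pod data_c1gstudiodotnet: GET / 200", sock_path)

data = srv.recv(2048)
print(data)
srv.close()
cli.close()
os.unlink(sock_path)
```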
filebeat DaemonSet:
```
volumeMounts:
- name: fb-socket
  mountPath: /usr/share/filebeat/socket
volumes:
- name: fb-socket
  hostPath:
    path: /localdata/filebeat/socket
    type: DirectoryOrCreate
```

nginx Deployment:
```
volumeMounts:
- name: host-filebeat-socket
  mountPath: "/usr/local/filebeat"
volumes:
- name: host-filebeat-socket
  hostPath:
    path: /localdata/filebeat/socket
    type: Directory
```
Example:
```
{
  "@timestamp": "2024-02-20T06:17:08.000Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "8.11.0"
  },
  "agent": {
    "type": "filebeat",
    "version": "8.11.0",
    "ephemeral_id": "4546cf71-5f33-4f5d-bc91-5f0a58c9b0fd",
    "id": "cf964318-5fdc-493e-ae2c-d2acb0bc6ca8",
    "name": "node02.k8s.local"
  },
  "message": "10.100.3.80 - - [20/Feb/2024:14:17:08 +0800] \"GET /admin/imgcode/imgcode.php HTTP/1.1\" 200 1356 \"https://data.c1gstudio.net:31443/admin/login.php?1\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.5735.289 Safari/537.36\" ",
  "tags": [
    "web-access"
  ],
  "ecs": {
    "version": "8.0.0"
  },
  "fields": {
    "source": "unix-web-access"
  },
  "log": {
    "syslog": {
      "hostname": "openresty-php5.2-78cb7cb54b-bsgt6",
      "appname": "data_c1gstudiodotnet"
    }
  },
  "host": {
    "name": "node02.k8s.local"
  }
}
```

#Configuring syslog for ingress
This can be set in the controller's ConfigMap; you still have to solve the node-IP problem and decide whether the external ingress shares the endpoint.
It does not support a tag, though, so the source cannot be labelled; you only get process.program=nginx.
Neither http-snippet nor access-log-params works for this.
```
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: int-ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.5
  name: int-ingress-nginx-controller
  namespace: int-ingress-nginx
data:
  allow-snippet-annotations: "true"
  error-log-level: "warn"
  enable-syslog: "true"
  syslog-host: "192.168.244.7"
  syslog-port: "10514"
```

```
{
  "@timestamp": "2024-02-21T03:42:40.000Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "8.11.0",
    "truncated": false
  },
  "process": {
    "program": "nginx"
  },
  "upstream": {
    "status": "200",
    "response_length": "1281",
    "proxy_alternative": "",
    "addr": "10.244.2.228:80",
    "name": "data-c1gstudio-net-svc-web-http",
    "response_time": "0.004"
  },
  "timestamp": "2024-02-21T11:42:40+08:00",
  "req_id": "16b868da1aba50a72f32776b4a2f5cb2",
  "agent": {
    "ephemeral_id": "64c4b6d1-3d5c-4079-8bda-18d1a0d063a5",
    "id": "3bd77823-c801-4dd1-a3e5-1cf25874c09f",
    "name": "master01.k8s.local",
    "type": "filebeat",
    "version": "8.11.0"
  },
  "log": {
    "source": {
      "address": "192.168.244.4:49244"
    }
  },
  "request": {
    "status": 200,
    "bytes_sent": "1491",
    "request_time": "0.004",
    "request_length": "94",
    "referer": "https://data.c1gstudio.net:31443/admin/login.php?1",
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.5735.289 Safari/537.36",
    "request_method": "GET",
    "request_uri": "/admin/imgcode/imgcode.php",
    "remote_user": "",
    "protocol": "HTTP/2.0",
    "remote_port": "",
    "real_port": "11464",
    "x-forward-for": "10.100.3.80",
    "remote_addr": "10.100.3.80",
    "hostname": "data.c1gstudio.net",
    "body_bytes_sent": "1269",
    "real_ip": "192.168.244.2",
    "server_name": "data.c1gstudio.net"
  },
  "ingress": {
    "service_port": "http",
    "hostname": "master01.k8s.local",
    "addr": "192.168.244.4",
    "port": "443",
    "namespace": "data-c1gstudio-net",
    "ingress_name": "ingress-data-c1gstudio-net",
    "service_name": "svc-web"
  },
  "message": "{\"timestamp\": \"2024-02-21T11:42:40+08:00\", \"source\": \"int-ingress\", \"req_id\": \"16b868da1aba50a72f32776b4a2f5cb2\", \"ingress\":{ \"hostname\": \"master01.k8s.local\", \"addr\": \"192.168.244.4\", \"port\": \"443\",\"namespace\": \"data-c1gstudio-net\",\"ingress_name\": \"ingress-data-c1gstudio-net\",\"service_name\": \"svc-web\",\"service_port\": \"http\" }, \"upstream\":{ \"addr\": \"10.244.2.228:80\", \"name\": \"data-c1gstudio-net-svc-web-http\", \"response_time\": \"0.004\", \"status\": \"200\", \"response_length\": \"1281\", \"proxy_alternative\": \"\"}, \"request\":{ \"remote_addr\": \"10.100.3.80\", \"real_ip\": \"192.168.244.2\", \"remote_port\": \"\", \"real_port\": \"11464\", \"remote_user\": \"\", \"request_method\": \"GET\", \"server_name\": \"data.c1gstudio.net\",\"hostname\": \"data.c1gstudio.net\", \"request_uri\": \"/admin/imgcode/imgcode.php\", \"status\": 200, \"body_bytes_sent\": \"1269\", \"bytes_sent\": \"1491\", \"request_time\": \"0.004\", \"request_length\": \"94\", \"referer\": \"https://data.c1gstudio.net:31443/admin/login.php?1\", \"user_agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.5735.289 Safari/537.36\", \"x-forward-for\": \"10.100.3.80\", \"protocol\": \"HTTP/2.0\"}}",
  "fields": {
    "source": "syslog-web-access"
  },
  "source": "int-ingress",
  "ecs": {
    "version": "8.0.0"
  },
  "hostname": "master01.k8s.local",
  "host": {
    "name": "master01.k8s.local"
  },
  "tags": [
    "web-access"
  ]
}
```

## filebeat @timestamp timezone issue
The default is UTC, 8 hours off from local time, and @timestamp itself cannot be changed.
Option 1: format another field and substitute it.
Option 2: add a locale field:
```
processors:
  - add_locale: ~
```
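Option 1 can be checked with plain Python: keep @timestamp in UTC and derive a local-time value from it using a fixed +08:00 offset (Asia/Shanghai has no DST, so a fixed offset is safe here):

```python
from datetime import datetime, timezone, timedelta

# Asia/Shanghai is a fixed UTC+8 offset (no DST)
CST = timezone(timedelta(hours=8))

ts = "2024-02-20T02:35:41.000Z"   # a filebeat @timestamp, always UTC
utc = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc)
local = utc.astimezone(CST)
print(local.isoformat())  # 2024-02-20T10:35:41+08:00
```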

https://www.elastic.co/guide/en/beats/filebeat/current/processor-timestamp.html

Posted in 安装k8s/kubernetes.



k8s_安装9_ingress-nginx

9. ingress-nginx

Deploying an Ingress controller

Deployment options

An Ingress controller can be deployed in three ways to receive external request traffic.

Option 1: Deployment + NodePort-mode Service

The Ingress controller's pods are managed by a Deployment, and external traffic is brought in through a NodePort or LoadBalancer type Service, so this mode requires defining a dedicated Service in front of the controller.
Traffic enters a node through the NodePort, is forwarded by iptables (the Service) to the ingress-controller container, and is then routed by the Ingress rules to each backend business container.

The controller is deployed with a Deployment as usual, and a corresponding Service of type NodePort is created, exposing the ingress on a specific port of the cluster nodes' IPs. Because NodePort ports are random by default, a load balancer is usually placed in front to forward requests. This approach generally suits environments where the hosts are relatively fixed and their IPs do not change.
Exposing the ingress via NodePort is simple and convenient, but NodePort adds a layer of NAT, which can affect performance when the request volume is large.
The traffic path is: DNS resolution, then the LB, then the ingress does one load-balancing step to the Service, and the Service does another load-balancing step to the pods.

With a fixed nodePort, the LB can point at any nodeip+nodeport. If traffic lands on a node that does not host an Ingress Controller pod, that node's kube-proxy forwards it to the controller pod, adding an extra hop; the source IP in the HTTP request that nginx sees is then the IP of the node that accepted the request, not the real client IP.

A Deployment's replica pods are spread across the nodes, and each node may run several replicas; control this with the replicas count (no larger than the number of nodes) plus nodeSelector / podAntiAffinity. A DaemonSet differs in that each node runs at most one replica.

NodePort advantages: a cluster only needs a few of them, and you can map one group of Services to one ingress, so each group maintains the nginx configuration of its own ingress; the ingresses do not affect one another and each reloads its own configuration. The drawback is lower efficiency. With the hostNetwork approach, you instead have to maintain the nginx configuration on all nodes.

Option 2: DaemonSet + HostNetwork + nodeSelector

A DaemonSet controller ensures that every node (or a selected subset of worker nodes) runs exactly one Ingress controller pod, and these pods are configured with HostPort or HostNetwork to accept external traffic on their node.
Each node runs an ingress-controller container whose network mode is set to hostNetwork. Requests arriving on ports 80/443 go straight into the nginx pod, and nginx then forwards the traffic to the corresponding web application containers according to the Ingress rules.

Use a DaemonSet together with a nodeSelector to deploy the ingress-controller onto specific nodes, and use HostNetwork to wire the pod directly into the host node's network, so the service is reachable on the host's ports 80/443. The ingress-controller nodes then resemble the edge nodes of a traditional architecture, such as the nginx servers at a datacenter entrance. This gives the simplest request path and better performance than NodePort mode. The drawback is that, because it uses the host node's network and ports directly, each node can run only one ingress-controller pod. It is well suited to high-concurrency production environments.

No nginx svc is created, which is the most efficient option (the service is not exposed via NodePort). With NodePort, traffic flows NodeIP -> svc -> ingress-controller (pod), and the extra svc layer, whether it uses iptables or LVS, reduces efficiency. With hostNetwork, traffic goes directly over the node's host network. The one thing to watch is that under hostNetwork the pod inherits the host's network configuration, including its DNS, so Service lookups would go through the host's upstream DNS servers rather than the cluster's DNS server; setting the pod's dnsPolicy: ClusterFirstWithHostNet solves this.

What gets written into the proxy configuration (e.g. nginx.conf) is not the backend Service's address but the addresses of the backend Service's pods, avoiding an extra load-balancing hop through the Service.

hostNetwork occupies the host's ports 80 and 443; those ports are reachable as nodeip+80 only on the nodes the controller is bound to, while unbound nodes can still be reached via nodeip+nodeport.

This approach may run into problems with node-to-node communication and in-cluster DNS resolution.
You can deploy multiple ingress stacks to separate internal and external access and to split workloads so that reloads do not affect each other.

Option 3: Deployment + LoadBalancer-mode Service

This is the appropriate choice when running ingress on a public cloud. Deploy the ingress-controller with a Deployment and create a Service of type LoadBalancer associated with those pods. Most public clouds automatically create a load balancer for a LoadBalancer Service, usually bound to a public address; point your domain's DNS at that address and the cluster's services are exposed externally.

ingress-nginx

The Kubernetes community's ingress-nginx: https://github.com/kubernetes/ingress-nginx  
Ingress参考文档:https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/

kubectl get cs  

Deploy and configure Ingress  
ingress-nginx v1.9.3
k8s supported versions: 1.28, 1.27, 1.26, 1.25
Nginx version 1.21.6
wget -k https://github.com/kubernetes/ingress-nginx/raw/main/deploy/static/provider/kind/deploy.yaml -O ingress-nginx.yaml

cat ingress-nginx.yaml

apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resourceNames:
  - ingress-nginx-leader
  resources:
  - leases
  verbs:
  - get
  - update
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-admission
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-admission
rules:
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - validatingwebhookconfigurations
  verbs:
  - get
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-admission
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-admission
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: v1
data:
  allow-snippet-annotations: "false"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-controller
  namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
    #nodePort: 30080
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
    #nodePort: 30443
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  ports:
  - appProtocol: https
    name: https-webhook
    port: 443
    targetPort: webhook
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  minReadySeconds: 0
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  strategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.9.3
    spec:
      dnsPolicy: ClusterFirstWithHostNet  #use the host's DNS as well as the cluster DNS
      hostNetwork: true                   #share the host's network namespace
      #nodeName: node01.k8s.local         #pin the pod to node01 only
      containers:
      - args:
        - /nginx-ingress-controller
        - --election-id=ingress-nginx-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        - --watch-ingress-without-class=true
        - --publish-status-address=localhost
        - --logtostderr=false
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        image:  repo.k8s.local/registry.k8s.io/ingress-nginx/controller:v1.9.3
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: controller
        ports:
        - containerPort: 80
          hostPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          hostPort: 443
          name: https
          protocol: TCP
        - containerPort: 8443
          name: webhook
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 101
        volumeMounts:
        - mountPath: /usr/local/certificates/
          name: webhook-cert
          readOnly: true
        - name: timezone
          mountPath: /etc/localtime  
        - name: vol-ingress-logdir
          mountPath: /var/log/nginx
      #dnsPolicy: ClusterFirst
      nodeSelector:
        ingresstype: ingress-nginx
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 0
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Equal
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
        operator: Equal
      volumes:
      - name: timezone       
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai  
      - name: vol-ingress-logdir
        hostPath:
          path: /var/log/nginx
          type: DirectoryOrCreate
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.9.3
      name: ingress-nginx-admission-create
    spec:
      containers:
      - args:
        - create
        - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
        - --namespace=$(POD_NAMESPACE)
        - --secret-name=ingress-nginx-admission
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image:  repo.k8s.local/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
        imagePullPolicy: IfNotPresent
        name: create
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.9.3
      name: ingress-nginx-admission-patch
    spec:
      containers:
      - args:
        - patch
        - --webhook-name=ingress-nginx-admission
        - --namespace=$(POD_NAMESPACE)
        - --patch-mutating=false
        - --secret-name=ingress-nginx-admission
        - --patch-failure-policy=Fail
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image:  repo.k8s.local/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
        imagePullPolicy: IfNotPresent
        name: patch
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-admission
  namespace: ingress-nginx
spec:
  egress:
  - {}
  podSelector:
    matchLabels:
      app.kubernetes.io/component: admission-webhook
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
  - v1
  clientConfig:
    service:
      name: ingress-nginx-controller-admission
      namespace: ingress-nginx
      path: /networking/v1/ingresses
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: validate.nginx.ingress.kubernetes.io
  rules:
  - apiGroups:
    - networking.k8s.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
  sideEffects: None
Extract the image names and import them into Harbor
cat ingress-nginx.yaml |grep image:|sed -e 's/.*image: //'

registry.k8s.io/ingress-nginx/controller:v1.9.3@sha256:8fd21d59428507671ce0fb47f818b1d859c92d2ad07bb7c947268d433030ba98
registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80
registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80
#switch to the Harbor host
docker pull registry.aliyuncs.com/google_containers/nginx-ingress-controller:v1.9.3
docker pull registry.aliyuncs.com/google_containers/kube-webhook-certgen:v20231011-8b53cabe0

docker tag registry.aliyuncs.com/google_containers/nginx-ingress-controller:v1.9.3  repo.k8s.local/registry.k8s.io/ingress-nginx/controller:v1.9.3
docker tag registry.aliyuncs.com/google_containers/kube-webhook-certgen:v20231011-8b53cabe0  repo.k8s.local/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
docker push repo.k8s.local/registry.k8s.io/ingress-nginx/controller:v1.9.3
docker push repo.k8s.local/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0

docker images |grep ingress

Point the YAML at the private registry
Option 1
Prefix image: with the private registry address

sed -n "/image:/{s/image: /image: repo.k8s.local\//p}" ingress-nginx.yaml
sed -i "/image:/{s/image: /image: repo.k8s.local\//}" ingress-nginx.yaml
Strip the @sha256 digest
sed -rn "s/(\s*image:.*)@sha256:.*$/\1 /gp" ingress-nginx.yaml
sed -ri "s/(\s*image:.*)@sha256:.*$/\1 /g" ingress-nginx.yaml

Option 2

Combined: substitute the private registry and strip the sha256 digest in one pass

sed -rn "s/(\s*image: )(.*)@sha256:.*$/\1 repo.k8s.local\/\2/gp" ingress-nginx.yaml
sed -ri "s/(\s*image: )(.*)@sha256:.*$/\1 repo.k8s.local\/\2/g" ingress-nginx.yaml

Option 3
Edit the file by hand
vi ingress-nginx.yaml

cat ingress-nginx.yaml |grep image:
        image:  repo.k8s.local/registry.k8s.io/ingress-nginx/controller:v1.9.3
        image:  repo.k8s.local/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
        image:  repo.k8s.local/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0

Deployment nodeName approach

Usually for debugging: pins the pod to a specified node, with no automatic scheduling

vi ingress-nginx.yaml

kind: Deployment

      dnsPolicy: ClusterFirstWithHostNet  #use the host's DNS as well as the cluster DNS
      hostNetwork: true                   #share the host network; ports 80/443 open directly on the host, no extra hop, faster
      nodeName: node01.k8s.local          #pin the pod to node01 only

Deployment+nodeSelector approach

Schedules onto nodes that carry the specified label, so placement is controlled by labeling nodes; for deploying ingress, a DaemonSet is the more recommended choice

vi ingress-nginx.yaml

kind: Deployment
spec:
  replicas: 1 #replica count
      dnsPolicy: ClusterFirstWithHostNet  #use the host's DNS as well as the cluster DNS
      hostNetwork: true                   #share the host network; ports 80/443 open directly on the host, no extra hop, faster
      #nodeName: node01.k8s.local         #pin the pod to node01 only
      tolerations:                        #tolerate the master taint
      - key: node-role.kubernetes.io/master
        operator: Exists

      nodeSelector:
        ingresstype: ingress-nginx
        kubernetes.io/os: linux

kind: Service
  ports:
     - name: http
       nodePort: 30080   #fixed port
     - name: https
       nodePort: 30443   #fixed port

DaemonSet approach

One controller pod runs on each node that carries the matching label

Select the target nodes in the (former) Deployment spec
Change kind: Deployment to kind: DaemonSet
While editing, also comment out:

  #replicas: 1
  #strategy:
  #  rollingUpdate:
  #    maxUnavailable: 1
  #  type: RollingUpdate
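The edit above can also be scripted; a hedged sketch with sed (the patterns assume the exact field layout of the manifest shown earlier — inspect the resulting file before applying it):

```shell
# Work on a copy so the original manifest is kept.
cp ingress-nginx.yaml ingress-nginx-ds.yaml
# Change the controller workload kind.
sed -i 's/^kind: Deployment$/kind: DaemonSet/' ingress-nginx-ds.yaml
# Comment out the Deployment-only fields (replicas and the RollingUpdate strategy block),
# preserving each line's indentation.
sed -i -E 's/^([[:space:]]*)(replicas: 1|strategy:|rollingUpdate:|maxUnavailable: 1|type: RollingUpdate)$/\1#\2/' ingress-nginx-ds.yaml
```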

Set the DaemonSet's nodeSelector to ingresstype=ingress-nginx. Then you can quickly add or remove ingress-controller instances simply by adding or removing the ingresstype=ingress-nginx label on a node.

vi ingress-nginx.yaml

kind: DaemonSet

      dnsPolicy: ClusterFirstWithHostNet  #use the host's DNS as well as the cluster DNS
      hostNetwork: true                   #share the host network; ports 80/443 open directly on the host (visible with netstat), no extra hop, faster
      #nodeName: node01.k8s.local         #pin the pod to node01 only
      tolerations:                        #tolerate the master taint, allowing pods on masters
      - key: node-role.kubernetes.io/master
        operator: Exists
      nodeSelector:
        ingresstype: ingress-nginx
        kubernetes.io/os: linux

nodeSelector is only a simple scheduling mechanism; more advanced placement can be expressed with node affinity and node taints/tolerations
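As a sketch, the nodeSelector used above can be rewritten as required node affinity (same ingresstype label as before; nothing new is assumed beyond the standard field names):

```yaml
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: ingresstype
                operator: In
                values:
                - ingress-nginx
```

Unlike plain nodeSelector, matchExpressions also supports operators such as NotIn and Exists.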

kubectl apply -f ingress-nginx.yaml

kubectl delete -f ingress-nginx.yaml

kubectl get pods -A
NAMESPACE              NAME                                                    READY   STATUS      RESTARTS        AGE
default                nfs-client-provisioner-db4f6fb8-gnnbm                   1/1     Running     0               24h
ingress-nginx          ingress-nginx-admission-create-wxtlz                    0/1     Completed   0               103s
ingress-nginx          ingress-nginx-admission-patch-8fw72                     0/1     Completed   1               103s
ingress-nginx          ingress-nginx-controller-57c98745dd-2rn7m               0/1     Pending     0               103s

Inspect the pod

kubectl -n ingress-nginx describe pod ingress-nginx-controller-57c98745dd-2rn7m

didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
The pod cannot be scheduled onto a suitable node.
Scheduling is influenced by: nodeName, nodeSelector, node affinity, pod affinity, and taints/tolerations.
No node carries the ingresstype=ingress-nginx label yet, so the pod cannot be scheduled.

Once deployment succeeds, describe shows which node the pod was assigned to

Service Account: ingress-nginx
Node: node01.k8s.local/192.168.244.5

List the nodes

kubectl get nodes

NAME                 STATUS   ROLES           AGE     VERSION
master01.k8s.local   Ready    control-plane   9d      v1.28.2
node01.k8s.local     Ready    <none>          9d      v1.28.2
node02.k8s.local     Ready    <none>          2d23h   v1.28.2

Check whether any node is tainted

kubectl describe nodes | grep Tain

Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Taints:             <none>
Taints:             <none>

Show the node labels

kubectl get node --show-labels

NAME                 STATUS   ROLES           AGE     VERSION   LABELS
master01.k8s.local   Ready    control-plane   12d     v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01.k8s.local,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node01.k8s.local     Ready    <none>          12d     v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01.k8s.local,kubernetes.io/os=linux
node02.k8s.local     Ready    <none>          6d16h   v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02.k8s.local,kubernetes.io/os=linux
kubectl get node -l "ingresstype=ingress-nginx" --show-labels
kubectl get node -l "beta.kubernetes.io/arch=amd64" --show-labels

Check whether the nodes are short on resources

Show the CPU and memory actually allocated on each node:

kubectl describe node |grep -E '((Name|Roles):\s{6,})|(\s+(memory|cpu)\s+[0-9]+\w{0,2}.+%\))'

Deploy ingress-controller on the nodes that need it:

Because we use DaemonSet mode, every node would in principle get an instance; but since the selector filters on the ingresstype=ingress-nginx label, no node has had one installed yet;
kubectl get ds -n ingress-nginx

NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                     AGE
ingress-nginx-controller   0         0         0       0            0           ingresstype=ingress-nginx,kubernetes.io/os=linux   60s

Show the details

kubectl describe ds -n ingress-nginx

To install ingress-controller on the desired nodes, simply label them ingresstype=ingress-nginx:

kubectl label node node01.k8s.local ingresstype=ingress-nginx
kubectl label node node02.k8s.local ingresstype=ingress-nginx
kubectl label node master01.k8s.local ingresstype=ingress-nginx

To change an existing label, add the --overwrite flag (which distinguishes the command from adding a new label)

kubectl label node node01.k8s.local ingresstype=ingress-nginx --overwrite

Remove a label from a node by appending - to its key

kubectl label node node01.k8s.local ingresstype-

kubectl get node --show-labels

NAME                 STATUS   ROLES           AGE     VERSION   LABELS
master01.k8s.local   Ready    control-plane   13d     v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingresstype=ingress-nginx,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01.k8s.local,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node01.k8s.local     Ready    <none>          13d     v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingresstype=ingress-nginx,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01.k8s.local,kubernetes.io/os=linux
node02.k8s.local     Ready    <none>          6d19h   v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingresstype=ingress-nginx,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02.k8s.local,kubernetes.io/os=linux

Now the DaemonSet runs on all 3 nodes

kubectl get ds -n ingress-nginx

NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR               AGE
ingress-nginx-controller   3         3         3       3            3           ingresstype=ingress-nginx,kubernetes.io/os=linux   25s

After the label is removed, the master node is dropped and only 2 instances remain
kubectl label node master01.k8s.local ingresstype-

kubectl get pods -A

NAMESPACE              NAME                                                    READY   STATUS      RESTARTS       AGE
default                nfs-client-provisioner-db4f6fb8-gnnbm                   1/1     Running     10 (26h ago)   4d20h
ingress-nginx          ingress-nginx-admission-create-zxz7j                    0/1     Completed   0              3m35s
ingress-nginx          ingress-nginx-admission-patch-xswhk                     0/1     Completed   1              3m35s
ingress-nginx          ingress-nginx-controller-7j4nz                          1/1     Running     0              3m35s
ingress-nginx          ingress-nginx-controller-g285w                          1/1     Running     0              3m35s

kubectl get svc -n ingress-nginx

NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.96.116.106   <none>        80:30080/TCP,443:30443/TCP   8m26s
ingress-nginx-controller-admission   ClusterIP   10.96.104.116   <none>        443/TCP                      8m26s

kubectl -n ingress-nginx describe pod ingress-nginx-controller-7f6c656666-gn4f2

Warning  FailedMount  112s  kubelet                   MountVolume.SetUp failed for volume "webhook-cert" : secret "ingress-nginx-admission" not found

When switching from Deployment to DaemonSet, you can edit the YAML and re-apply directly if there are enough resources. If resources (here: free host ports) are insufficient, the new pods stay Pending while the old pods keep running, with a message like:
Warning FailedScheduling 33s default-scheduler 0/3 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..
After manually deleting an old pod, the new pod starts automatically; until the old workload is removed, it keeps generating fresh Pending pods
kubectl -n ingress-nginx delete pod ingress-nginx-controller-6c95999b7f-njzvr

Create an nginx test deployment

Prepare the image
docker pull docker.io/library/nginx:1.21.4
docker tag docker.io/library/nginx:1.21.4 repo.k8s.local/library/nginx:1.21.4
docker push repo.k8s.local/library/nginx:1.21.4
nginx YAML files
#Deployment+nodeName+hostPath: pin the pod to node01
cat > test-nginx-hostpath.yaml << EOF
...
EOF
cat > svc-test-nginx-nodeport.yaml << EOF
...
EOF
cat > svc-test-nginx-clusterip.yaml << EOF
...
EOF
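The heredoc bodies above are truncated; as a sketch, the files plausibly looked like the following (the Deployment name nginx-deploy, namespace test, image, nodeName, hostPath paths under /nginx, and the svc-test-nginx ports 31080/30003 come from the surrounding commands and output; the app label and every other field are assumptions):

```yaml
# test-nginx-hostpath.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-nginx            # assumed label
  template:
    metadata:
      labels:
        app: test-nginx
    spec:
      nodeName: node01.k8s.local           # pin to node01 (debugging only)
      containers:
      - name: nginx
        image: repo.k8s.local/library/nginx:1.21.4
        ports:
        - containerPort: 80
        volumeMounts:
        - {name: html, mountPath: /usr/share/nginx/html}
        - {name: logs, mountPath: /var/log/nginx}
        - {name: confd, mountPath: /etc/nginx/conf.d}
      volumes:
      - name: html
        hostPath: {path: /nginx/html, type: DirectoryOrCreate}
      - name: logs
        hostPath: {path: /nginx/logs, type: DirectoryOrCreate}
      - name: confd
        hostPath: {path: /nginx/conf.d, type: DirectoryOrCreate}
---
# svc-test-nginx-nodeport.yaml (sketch); the clusterip variant
# would drop type: NodePort and nodePort
apiVersion: v1
kind: Service
metadata:
  name: svc-test-nginx
  namespace: test
spec:
  type: NodePort
  selector:
    app: test-nginx
  ports:
  - port: 31080
    targetPort: 80
    nodePort: 30003
```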

Ingress rule: binds the Ingress and the Service together
Pod IPs and ClusterIPs change, but the Service name is stable
The namespaces must match

Note: before v1.22 the YAML schema differed

apiVersion: extensions/v1beta1
    backend:
      serviceName: svc-test-nginx
      servicePort: 80
cat > ingress-svc-test-nginx.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc-test-nginx
  annotations:
    #kubernetes.io/ingress.class: "nginx"
  namespace: test
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: svc-test-nginx
            port:
              number: 31080
EOF
Create the local directories on node01; the pod will be placed there because of spec.nodeName.
mkdir -p /nginx/{html,logs,conf.d}

#generate a home page
hostname > /nginx/html/index.html
date >> /nginx/html/index.html

#generate the ingress test page
mkdir  /nginx/html/testpath/
hostname > /nginx/html/testpath/index.html

kubectl apply -f test-nginx-hostpath.yaml
kubectl delete -f test-nginx-hostpath.yaml

#service: nodeport or clusterip, pick one
kubectl apply -f svc-test-nginx-nodeport.yaml
kubectl delete -f svc-test-nginx-nodeport.yaml

#service clusterip
kubectl apply -f svc-test-nginx-clusterip.yaml
kubectl delete -f svc-test-nginx-clusterip.yaml

kubectl apply -f ingress-svc-test-nginx.yaml
kubectl delete -f ingress-svc-test-nginx.yaml
kubectl describe ingress ingress-svc-test-nginx -n test

kubectl get pods -n test
kubectl describe  -n test pod nginx-deploy-5bc84b775f-hnqll 
kubectl get svc -A

Note: on pod restart the files in conf.d/ are rewritten; html/ and logs/ are not overwritten

ll /nginx/conf.d/ 

total 4
-rw-r--r-- 1 root root 1072 Oct 26 11:06 default.conf

cat /nginx/conf.d/default.conf
server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;

    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

Using NodePort
kubectl get service -n test

NAME             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
svc-test-nginx   NodePort   10.96.148.126   <none>        31080:30003/TCP   20s

Using ClusterIP
kubectl get service -n test

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
svc-test-nginx   ClusterIP   10.96.209.131   <none>        31080/TCP   80s

Summary: how to reach the web service inside the pod

IP layout
nodes: 192.168.244.0/24
pod CIDR: 10.244.0.0/16
service (cluster) CIDR: 10.96.0.0/12

Ingress in hostNetwork mode

From inside or outside the cluster, reach nodeIP:80/443 on any node the ingress matched
curl http://192.168.244.5:80/

From inside or outside the cluster via Service NodePort: any nodeIP + nodePort

the ingress Service's nodeIP+nodePort
in this example 30080 is the ingress nodePort
curl http://192.168.244.4:30080/testpath/
node01.k8s.local

the nginx Service's nodeIP+nodePort
with a NodePort Service, any nodeIP+nodePort reaches nginx in the pod, from inside or outside the cluster
curl http://192.168.244.5:30003
node01.k8s.local
Thu Oct 26 11:11:00 CST 2023

From inside the cluster via Service ClusterIP

the ingress Service's clusterIP+port
curl http://10.96.111.201:80/testpath/
node01.k8s.local

the nginx Service's clusterIP+port
inside the cluster, clusterIP+port (i.e. the Service) reaches the internal nginx; it is only reachable in-cluster, and the clusterIP changes whenever the Service is recreated, so this is mainly for testing
curl http://10.96.148.126:31080
node01.k8s.local
Thu Oct 26 11:11:00 CST 2023

From inside the cluster via pod IP

nginx podIP+port
curl http://10.244.1.93:80

From inside a pod, the Service DNS name also works

curl http://svc-test-nginx:31080
curl http://svc-test-nginx.test:31080
curl http://svc-test-nginx.test.svc.cluster.local:31080
curl http://10.96.148.126:31080


On node01 you can watch the access log; note the timestamp's timezone is wrong

tail -f /nginx/logs/access.log

10.244.0.0 - - [26/Oct/2023:03:11:04 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.5735.289 Safari/537.36" "-"

Timezone inside the pod
The timezone can be fixed in the YAML with a hostPath pointing at the host's zoneinfo file

        volumeMounts:
        - name: timezone
          mountPath: /etc/localtime  
      volumes:
      - name: timezone       
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai  

kubectl get pods -o wide -n test

Exec into the container

kubectl exec -it pod/nginx-deploy-886d78bd5-wlk5l -n test -- /bin/sh

Adding and setting headers with ingress-nginx

ingress-nginx can be configured through snippet annotations, but for safety reasons snippet annotations are disallowed by default.
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#allow-snippet-annotations
Snippets can only configure individual Ingress resources; applying the same setting to every Ingress that way is tedious and ugly to maintain, so the official custom-header mechanism is recommended instead.
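A sketch of that custom-header mechanism (the ConfigMap name custom-headers is an assumed, arbitrary choice; the controller ConfigMap points at it through the proxy-set-headers key, whose value is namespace/name):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers              # assumed name, referenced below
  namespace: ingress-nginx
data:
  X-Request-Start: t=${msec}        # header name -> value set on every proxied request
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller    # the controller's own ConfigMap
  namespace: ingress-nginx
data:
  proxy-set-headers: "ingress-nginx/custom-headers"
```

Because this lives in the controller ConfigMap, the headers apply to all Ingresses at once, with no per-resource snippets.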

https://help.aliyun.com/zh/ack/ack-managed-and-ack-dedicated/user-guide/install-the-nginx-ingress-controller-in-high-load-scenarios
By default ingress drops non-standard HTTP headers
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#proxy-real-ip-cidr
Fix: add to the ConfigMap

data:
 enable-underscores-in-headers: "true"
#The text contains nginx variables, so edit it in vi rather than pasting through a cat heredoc.
#realip takes effect in the http block; the snippet takes effect in the server block
vi ingress-nginx-ConfigMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: "true"
  worker-processes: "auto" #worker_processes
  server-name-hash-bucket-size: "128" #server_names_hash_bucket_size
  variables-hash-bucket-size: "256" #variables_hash_bucket_size
  variables-hash-max-size: "2048" #variables_hash_max_size
  client-header-buffer-size: "32k" #client_header_buffer_size
  proxy-body-size: "8m" #client_max_body_size
  large-client-header-buffers: "4 512k" #large_client_header_buffers
  client-body-buffer-size: "512k" #client_body_buffer_size
  proxy-connect-timeout: "5" #proxy_connect_timeout
  proxy-read-timeout: "60" #proxy_read_timeout
  proxy-send-timeout: "5" #proxy_send_timeout
  proxy-buffer-size: "32k" #proxy_buffer_size
  proxy-buffers-number: "8 32k" #proxy_buffers
  keep-alive: "60" #keepalive_timeout
  enable-real-ip: "true" 
  #use-forwarded-headers: "true"
  forwarded-for-header: "ns_clientip" #real_ip_header
  compute-full-forwarded-for: "true"
  enable-underscores-in-headers: "true" #underscores_in_headers on
  proxy-real-ip-cidr: 192.168.0.0/16,10.244.0.0/16  #set_real_ip_from
  access-log-path: "/var/log/nginx/access_ext_ingress_$hostname.log"
  error-log-path: "/var/log/nginx/error_ext_ingress.log"
  log-format-escape-json: "true"
  log-format-upstream: '{"timestamp": "$time_iso8601", "req_id": "$req_id", 
    "geoip_country_code": "$geoip_country_code", "request_time": "$request_time", 
    "ingress":{ "hostname": "$hostname", "addr": "$server_addr", "port": "$server_port","namespace": "$namespace","ingress_name": "$ingress_name","service_name": "$service_name","service_port": "$service_port" }, 
    "upstream":{ "addr": "$upstream_addr", "name": "$proxy_upstream_name", "response_time": "$upstream_response_time", 
    "status": "$upstream_status", "response_length": "$upstream_response_length", "proxy_alternative": "$proxy_alternative_upstream_name"}, 
    "request":{ "remote_addr": "$remote_addr", "real_ip": "$realip_remote_addr", "remote_port": "$remote_port", "real_port": "$realip_remote_port", 
    "remote_user": "$remote_user", "request_method": "$request_method", "hostname": "$host", "request_uri": "$request_uri", "status": $status, 
    "body_bytes_sent": "$body_bytes_sent", "request_length": "$request_length", "referer": "$http_referer", "user_agent": "$http_user_agent",
    "x-forward-for": "$proxy_add_x_forwarded_for", "protocol": "$server_protocol"}}'

Apply / remove the ConfigMap

kubectl apply -f ingress-nginx-ConfigMap.yaml -n ingress-nginx
Takes effect immediately; no pod restart needed
kubectl delete -f ingress-nginx-ConfigMap.yaml

kubectl get pods -o wide -n ingress-nginx

ingress-nginx-controller-kr8jd         1/1     Running     6 (7m26s ago)   13m     192.168.244.7   node02.k8s.local   

Inspect the ingress-nginx configuration

kubectl describe pod/ingress-nginx-controller-z5b4f -n ingress-nginx 
kubectl exec -it pod/ingress-nginx-controller-z5b4f -n ingress-nginx -- /bin/sh
kubectl exec -it pod/ingress-nginx-controller-kr8jd -n ingress-nginx -- head -n200 /etc/nginx/nginx.conf
kubectl exec -it pod/ingress-nginx-controller-kr8jd -n ingress-nginx -- cat /etc/nginx/nginx.conf
kubectl exec -it pod/ingress-nginx-controller-z5b4f -n ingress-nginx -- tail /var/log/nginx/access.log

kubectl exec -it pod/ingress-nginx-controller-kr8jd -n ingress-nginx -- head -n200 /etc/nginx/nginx.conf|grep client_body_buffer_size

Client -> CDN -> WAF -> SLB -> Ingress -> Pod

realip

Option 1: kind: ConfigMap

  enable-real-ip: "true" 
  #use-forwarded-headers: "true"
  forwarded-for-header: "ns_clientip" #real_ip_header
  compute-full-forwarded-for: "true"
  enable-underscores-in-headers: "true" #underscores_in_headers on
  proxy-real-ip-cidr: 192.168.0.0/16,10.244.0.0/16  #set_real_ip_from

Option 2: server-snippet
Enable it in the kind: ConfigMap first:
allow-snippet-annotations: "true"

kubectl edit configmap -n ingress-nginx ingress-nginx-controller

#attach a server-snippet to the Ingress
#realip applies in the server block, i.e. for the whole host
#the whitelist-source-range IP allowlist applies at location = /showvar and is judged against remoteaddr; use allow 223.2.2.0/24;deny all; only when you need a host-wide allowlist

test-openresty-ingress-snippet.yaml
With cat, $ must be escaped as \$ when the text contains variables; with vi no escaping is needed
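A small self-contained demonstration of that escaping rule (writes a throwaway file under /tmp):

```shell
# Unquoted delimiter: the shell expands $HOME, while \$var survives as a
# literal nginx variable. A quoted delimiter ('EOF') would instead disable
# all expansion inside the heredoc.
cat > /tmp/heredoc-demo.txt << EOF
expanded: $HOME
literal: \$proxy_add_x_forwarded_for
EOF
cat /tmp/heredoc-demo.txt
```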
cat > test-openresty-ingress-snippet.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc-openresty
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/server-snippet: |
      underscores_in_headers on;
      set_real_ip_from 10.244.0.0/16;
      set_real_ip_from 192.168.0.0/16;
      real_ip_header ns_clientip;
      #real_ip_recursive on;
      proxy_set_header                X-Forwarded-For \$proxy_add_x_forwarded_for;
      add_header      Access-Control-Allow-Headers    \$http_Access_Control_Request_Headers    always;
      add_header      Access-Control-Allow-Origin     \$http_Origin    always;
      add_header      Access-Control-Allow-Credentials        'false' always;
      add_header      Access-Control-Allow-Methods    '*'     always;
      if (\$request_method = 'OPTIONS') {
            return 204;
      }
    nginx.ingress.kubernetes.io/whitelist-source-range: 127.0.0.1/32,192.168.0.0/16,10.244.0.0/16,223.2.2.0/24
  namespace: test
spec:
  rules:
  - http:
      paths:
      - path: /showvar
        pathType: Prefix
        backend:
          service:
            name: svc-openresty
            port:
              number: 31080
EOF

kubectl apply -f test-openresty-ingress-snippet.yaml
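The `\$` escaping rule noted above can be checked in isolation: with an unquoted heredoc delimiter the shell expands `$` variables (hence the backslashes), while a quoted delimiter keeps every `$` literal. A small sketch using throwaway files under /tmp:

```shell
# Unquoted delimiter: the shell would expand $proxy_add_x_forwarded_for
# (to an empty string here), so the nginx variable must be written as \$.
cat > /tmp/snippet-escaped.conf << EOF
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
EOF

# Quoted delimiter: no expansion happens, so no escaping is needed.
cat > /tmp/snippet-quoted.conf << 'EOF'
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
EOF

# Both files now contain the literal nginx variable.
diff /tmp/snippet-escaped.conf /tmp/snippet-quoted.conf && echo same
# -> same
```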
  1. enable-real-ip:
    enable-real-ip: "true"
    Enables realip.
    Generated nginx config:

        real_ip_header      X-Forwarded-For;
        real_ip_recursive   on;
        set_real_ip_from    0.0.0.0/0;
  2. use-forwarded-headers:
    use-forwarded-headers: "false" suits an Ingress with no proxy layer in front, e.g. hanging directly off a layer-4 SLB; the controller then rewrites X-Forwarded-For to $remote_addr by default, which prevents spoofed X-Forwarded-For headers.
    use-forwarded-headers: "true" suits an Ingress behind a proxy layer; the risk is that X-Forwarded-For can be spoofed.
    Generated nginx config:

        real_ip_header      X-Forwarded-For;
        real_ip_recursive   on;
        set_real_ip_from    0.0.0.0/0;
  3. enable-underscores-in-headers:
    enable-underscores-in-headers: "true"
    Whether to accept non-standard header names containing underscores. Defaults to "false"; set it to "true" to allow headers such as X_FORWARDED_FOR.
    Equivalent to nginx underscores_in_headers on;

  4. forwarded-for-header
    Default X-Forwarded-For. The header field that identifies the client's original IP address, e.g. a custom header X_FORWARDED_FOR.
    forwarded-for-header: "X_FORWARDED_FOR"
    Equivalent to nginx real_ip_header

  5. compute-full-forwarded-for
    By default the remote address replaces X-Forwarded-For.
    Set to "true" to append the remote address to the X-Forwarded-For header instead of replacing it.

  6. proxy-real-ip-cidr
    Usable when use-forwarded-headers or use-proxy-protocol is enabled. Defines the addresses of external load balancers, proxies, CDNs, etc., as comma-separated CIDRs. Default: "0.0.0.0/0".
    Equivalent to nginx set_real_ip_from

  7. external-traffic-policy
    Cluster mode: the default. kube-proxy forwards fairly regardless of which node hosts the Pod and applies one SNAT, so the source IP becomes the node's IP address.
    Local mode: traffic only goes to Pods on the local node; kube-proxy preserves the source IP when forwarding, and latency is better.
    In this mode the Service must carry external traffic, i.e. be of type LoadBalancer or NodePort; other types report an error.

With realip enabled, the http_x_forwarded_for value is replaced by the remote address.
With compute-full-forwarded-for: "true", the remote address is appended on the right instead.
Because Local mode does not forward packets across nodes, load balancing across Pods on all nodes depends on the load balancer one level up.
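As a sketch, switching to Local mode is a one-field change on the controller's Service (the Service name, labels, and ports here assume the standard ingress-nginx install):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller    # assumed name from the standard install
  namespace: ingress-nginx
spec:
  type: NodePort                    # must be NodePort or LoadBalancer
  externalTrafficPolicy: Local      # keep the client source IP; no cross-node SNAT
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
```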

Logging

access-log-path: /var/log/nginx/access.log
/var/log/nginx/access.log -> /dev/stdout

error-log-path:/var/log/nginx/error.log
/var/log/nginx/error.log->/dev/stderr

kubectl get ds -A
kubectl get ds -n ingress-nginx ingress-nginx-controller -o=yaml
Export the DaemonSet-deployed ingress-nginx-controller as a standalone yaml for easier modification
kubectl get ds -n ingress-nginx ingress-nginx-controller -o=yaml > ingress-nginx-deployment.yaml

On every node:
The auto-created directory is owned by root:root, and the ingress controller has no permission to write to it
mkdir -p /var/log/nginx/
chmod 777 /var/log/nginx/

Error: exit status 1
nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied)
nginx: the configuration file /tmp/nginx/nginx-cfg1271722019 syntax is ok
2023/11/02 14:05:02 [emerg] 34#34: open() "/var/log/nginx/access.log" failed (13: Permission denied)
nginx: configuration file /tmp/nginx/nginx-cfg1271722019 test failed

In the controller's kind: Deployment, turn off logtostderr:

  --logtostderr=false

Example:

      containers:
      - args:
        - /nginx-ingress-controller
        - --election-id=ingress-nginx-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        - --watch-ingress-without-class=true
        - --publish-status-address=localhost
        - --logtostderr=false

Mount the log directory to the host

        volumeMounts:
        - name: timezone
          mountPath: /etc/localtime  
        - name: vol-ingress-logdir
          mountPath: /var/log/nginx
      volumes:
      - name: timezone       
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai  
      - name: vol-ingress-logdir
        hostPath:
          path: /var/log/nginx
          type: DirectoryOrCreate

Create/remove the ingress-nginx deployment

kubectl apply -f ingress-nginx-deployment.yaml

kubectl get pods -o wide -n ingress-nginx

Default log format

        log_format upstreaminfo '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';

tail -f /var/log/nginx/access.log
3.2.1.5 - - [02/Nov/2023:14:11:26 +0800] "GET /showvar/?2 HTTP/1.1" 200 316 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.5735.289 Safari/537.36" 764 0.000 [test-svc-openresty-31080] [] 10.244.2.46:8089 316 0.000 200 a9051a75e20e164f1838740e12fa95e3
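Because the quoted user agent contains spaces, whitespace-splitting a line in the upstreaminfo format above is only stable for fields counted from either end: $status is field 9, while $upstream_status and $req_id are the last two tokens. A quick awk sketch against the sample line:

```shell
# Extract status, upstream status and request id from a sample line in the
# upstreaminfo format; trailing fields are addressed from the end ($(NF-1), $NF)
# so the spaces inside the quoted user agent do not matter.
line='3.2.1.5 - - [02/Nov/2023:14:11:26 +0800] "GET /showvar/?2 HTTP/1.1" 200 316 "-" "Mozilla/5.0" 764 0.000 [test-svc-openresty-31080] [] 10.244.2.46:8089 316 0.000 200 a9051a75e20e164f1838740e12fa95e3'
echo "$line" | awk '{print "status=" $9, "upstream_status=" $(NF-1), "req_id=" $NF}'
# -> status=200 upstream_status=200 req_id=a9051a75e20e164f1838740e12fa95e3
```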

SpringCloud microservice RuoYi-Cloud deployment guide (DevOps edition) (2023-10-18): argo-rollouts + istio canary releases (progressive delivery)
https://blog.csdn.net/weixin_44797299/article/details/133923956

server-snippet access control and URL redirect (permanent):

Configure a location through the Ingress annotation nginx.ingress.kubernetes.io/server-snippet so that requests to /sre return HTTP 401.

cat > test-openresty-ingress-snippet.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc-openresty
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/server-snippet: |
      underscores_in_headers on;
      set_real_ip_from 10.244.0.0/16;
      set_real_ip_from 192.168.0.0/16;
      real_ip_header ns_clientip;
      #real_ip_recursive on;      
      location /sre {
        return 401;
      }
      rewrite ^/baidu.com$ https://www.baidu.com redirect;
    nginx.ingress.kubernetes.io/whitelist-source-range: 127.0.0.1/32,192.168.0.0/16,10.244.0.0/16,223.2.2.0/24
  namespace: test
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /showvar
        pathType: Prefix
        backend:
          service:
            name: svc-openresty
            port:
              number: 31080
EOF

kubectl apply -f test-openresty-ingress-snippet.yaml

curl http://192.168.244.7:80/sre/

401 Authorization Required
nginx

curl http://192.168.244.7:80/baidu.com

302 Found
nginx

configuration-snippet

nginx.ingress.kubernetes.io/denylist-source-range
configuration-snippet extends configuration into the location block
cat > test-openresty-ingress-snippet.yaml << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc-openresty
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/server-snippet: |
      underscores_in_headers on;
      set_real_ip_from 10.244.0.0/16;
      set_real_ip_from 192.168.0.0/16;
      real_ip_header ns_clientip;
      real_ip_recursive on;
      location /sre {
        return 401;
      }
      rewrite ^/baidu.com$ https://www.baidu.com redirect;
    nginx.ingress.kubernetes.io/whitelist-source-range: 127.0.0.1/32,192.168.0.0/16,10.244.0.0/16,223.2.2.0/24
    nginx.ingress.kubernetes.io/denylist-source-range: 223.2.3.0/24
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Pass \$proxy_x_pass;
      rewrite ^/v6/(.*)/card/query http://foo.bar.com/v7/#!/card/query permanent;
  namespace: test
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /showvar
        pathType: Prefix
        backend:
          service:
            name: svc-openresty
            port:
              number: 31080
EOF

Forwarding HTTPS to an HTTPS backend

The Nginx Ingress Controller forwards requests to backend containers over HTTP by default. If your backend container speaks HTTPS, add the annotation nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" so the controller forwards over HTTPS.


apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-https
  annotations:
    #Note: the backend service must be an HTTPS service.
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - 
    secretName: 
  rules:
  - host: 
    http:
      paths:
      - path: /
        backend:
          service:
            name: 
            port: 
              number: 
        pathType: ImplementationSpecific

Configuring regular-expression host names

In Kubernetes, the Ingress resource itself does not allow regular expressions in host names, but the nginx.ingress.kubernetes.io/server-alias annotation does.
Create an Nginx Ingress using the regular expression ~^www.\d+.example.com as an example.

cat <<-EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-regex
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/server-alias: '~^www\.\d+\.example\.com$, abc.example.com'
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          service:
            name: http-svc1
            port:
              number: 80
        pathType: ImplementationSpecific
EOF

Configuring wildcard host names

In Kubernetes, Nginx Ingress resources support wildcard host names, e.g. *.ingress-regex.com.

cat <<-EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-regex
  namespace: default
spec:
  rules:
  - host: "*.ingress-regex.com"
    http:
      paths:
      - path: /foo
        backend:
          service:
            name: http-svc1
            port:
              number: 80
        pathType: ImplementationSpecific
EOF

Canary releases via annotations

Canary releasing is enabled by setting the annotation nginx.ingress.kubernetes.io/canary: "true"; different companion annotations then implement different canary strategies:

nginx.ingress.kubernetes.io/canary-weight: the percentage of requests routed to the canary service (an integer from 0 to 100).

nginx.ingress.kubernetes.io/canary-by-header: header-based traffic splitting. When the configured header has the value always, the request is routed to the canary; when it is never, it is not; any other value is ignored and the request is matched against the remaining canary rules in priority order.

nginx.ingress.kubernetes.io/canary-by-header-value together with nginx.ingress.kubernetes.io/canary-by-header: when the request's header and value match the configured pair, the request is routed to the canary; other values are ignored and fall through to the remaining canary rules in priority order.

nginx.ingress.kubernetes.io/canary-by-cookie: cookie-based traffic splitting. When the configured cookie is always, the request is routed to the canary; when it is never, it is not.
Header-based canary with a custom value: requests carrying the header ack: alibaba go to the canary service; all other requests are split according to the canary weight.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"
    nginx.ingress.kubernetes.io/canary-by-header: "ack"
    nginx.ingress.kubernetes.io/canary-by-header-value: "alibaba"
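A full canary setup needs two Ingresses for the same host: the stable one and a canary one carrying the annotations. A sketch with hypothetical names (demo-canary, svc-canary, demo.example.com are placeholders, not from this cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-canary                  # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"
spec:
  rules:
  - host: demo.example.com           # same host as the stable Ingress
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-canary         # hypothetical canary service
            port:
              number: 80
```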

Default backend

nginx.ingress.kubernetes.io/default-backend: <svc name>

Passing the header ns_clientip to the backend

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header ns-clientip $remote_addr;

Or, alternatively:

nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "Request-Id: $req_id";

Because realip is enabled with forwarded-for-header: "ns_clientip",
ns_clientip is no longer passed to the upstream, so it is set again explicitly here.

Global ConfigMap: CORS (cross-origin)

nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS" #Default: GET, PUT, POST, DELETE, PATCH, OPTIONS
nginx.ingress.kubernetes.io/cors-allow-headers: "Origin,User-Agent,Authorization, Content-Type, If-Match, If-Modified-Since, If-None-Match, If-Unmodified-Since, X-CSRF-TOKEN, X-Requested-With,token" #Default: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization
nginx.ingress.kubernetes.io/cors-expose-headers: "" #Default: empty
nginx.ingress.kubernetes.io/cors-allow-origin: "http://wap.bbs.yingjiesheng.com, https://wap.bbs.yingjiesheng.com " # Default: *
nginx.ingress.kubernetes.io/cors-allow-credentials: "true" #Default: true
nginx.ingress.kubernetes.io/cors-max-age: "1728000" #Default: 1728000

main-snippet string ""
http-snippet string ""
server-snippet string ""
stream-snippet string ""
location-snippet string ""

otel-service-name string "nginx"
otel-service-name : "gateway"

Add custom headers
proxy-set-headers
https://kubernetes.github.io/ingress-nginx/examples/customization/custom-headers/
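Per the linked custom-headers example, a second ConfigMap holds the header/value pairs and the controller's main ConfigMap points at it via proxy-set-headers. A sketch (the ConfigMap name custom-headers and the example header are illustrative; the controller names assume the standard install):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers              # holds the header/value pairs to set
  namespace: ingress-nginx
data:
  X-Different-Name: "true"          # example header, any pairs work
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller    # the controller's main ConfigMap
  namespace: ingress-nginx
data:
  proxy-set-headers: "ingress-nginx/custom-headers"
```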

Enable realip
enable-real-ip bool "false"
enable-real-ip: "true"

Trust the incoming realip header
use-forwarded-headers bool "false"
use-forwarded-headers: "true"

The header realip reads the client IP from
forwarded-for-header string "X-Forwarded-For"
forwarded-for-header: ns_clientip

Trusted realip CIDR ranges
proxy-real-ip-cidr
proxy-real-ip-cidr: 192.168.0.0/16,10.244.0.0/16

Append the remote address to the X-Forwarded-For header instead of replacing it.
compute-full-forwarded-for bool "false"
compute-full-forwarded-for: "true"

Global IP denylist; takes precedence over annotations and Ingress rules
denylist-source-range []string []string{}
denylist-source-range: "223.2.4.0/24"

Global IP allowlist; takes precedence over the denylist. If set, only these IPs may connect. Traffic inside k8s does not pass through the ingress, so internal IPs need not be added.
Can be combined with per-server annotations to further block a range: nginx.ingress.kubernetes.io/denylist-source-range: 223.2.2.0/24
whitelist-source-range []string []string{}
whitelist-source-range: "127.0.0.1,192.168.244.1,223.0.0.0/8"

Global IP block; takes precedence over server allowlists
block-cidrs []string ""
Block 223.2.4.0/24; separate multiple entries with commas
block-cidrs: "223.2.4.0/24"
https://nginx.org/en/docs/http/ngx_http_access_module.html#deny

Global User-Agent block
block-user-agents []string ""
Block UAs containing "spider", case-insensitively
block-user-agents: "~*spider"

Global Referer block
block-referers []string ""
block-referers: "~*chinahr.com"

IPs allowed to view the nginx status page
nginx-status-ipv4-whitelist []string "127.0.0.1"
http://127.0.0.1/nginx_status/

Posted in 技术.


k8s_安装8_UI_Dashboard

8. Dashboard

kubernetes/dashboard

Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to a Kubernetes cluster, troubleshoot them, and manage cluster resources. Dashboard gives an overview of applications running in the cluster and lets you create or modify Kubernetes resources (Deployments, Jobs, DaemonSets, and so on). For example, you can scale a Deployment, trigger a rolling update, restart a Pod, or use a wizard to deploy a new application.

Official docs: https://kubernetes.io/zh-cn/docs/tasks/access-application-cluster/web-ui-dashboard/
Source: https://github.com/kubernetes/dashboard
Version compatibility: https://github.com/kubernetes/dashboard/releases

cert-manager

https://cert-manager.io/docs/installation/
wget --no-check-certificate https://github.com/cert-manager/cert-manager/releases/download/v1.13.1/cert-manager.yaml -O cert-manager-1.13.1.yaml
wget --no-check-certificate  https://github.com/cert-manager/cert-manager/releases/download/v1.13.2/cert-manager.yaml -O cert-manager-1.13.2.yaml

cat cert-manager-1.13.2.yaml|grep image:|sed -e 's/.*image: //'
"quay.io/jetstack/cert-manager-cainjector:v1.13.2"
"quay.io/jetstack/cert-manager-controller:v1.13.2"
"quay.io/jetstack/cert-manager-webhook:v1.13.2"

docker pull fishchen/cert-manager-controller:v1.13.2
docker pull quay.io/jetstack/cert-manager-webhook:v1.13.2
docker pull quay.io/jetstack/cert-manager-controller:v1.13.2
docker pull quay.nju.edu.cn/jetstack/cert-manager-controller:v1.13.2

quay.dockerproxy.com/
docker pull quay.dockerproxy.com/jetstack/cert-manager-controller:v1.13.1
docker pull quay.dockerproxy.com/jetstack/cert-manager-cainjector:v1.13.1
docker pull quay.dockerproxy.com/jetstack/cert-manager-webhook:v1.13.1

quay.io
docker pull quay.io/jetstack/cert-manager-cainjector:v1.13.1
docker pull quay.io/jetstack/cert-manager-controller:v1.13.1
docker pull quay.io/jetstack/cert-manager-webhook:v1.13.1

quay.nju.edu.cn
docker pull quay.nju.edu.cn/jetstack/cert-manager-cainjector:v1.13.1
docker pull quay.nju.edu.cn/jetstack/cert-manager-controller:v1.13.1
docker pull quay.nju.edu.cn/jetstack/cert-manager-webhook:v1.13.1

docker tag quay.dockerproxy.com/jetstack/cert-manager-cainjector:v1.13.1 repo.k8s.local/quay.io/jetstack/cert-manager-cainjector:v1.13.1
docker tag quay.nju.edu.cn/jetstack/cert-manager-webhook:v1.13.1  repo.k8s.local/quay.io/jetstack/cert-manager-webhook:v1.13.1
docker tag quay.io/jetstack/cert-manager-controller:v1.13.1  repo.k8s.local/quay.io/jetstack/cert-manager-controller:v1.13.1

docker push repo.k8s.local/quay.io/jetstack/cert-manager-cainjector:v1.13.1
docker push repo.k8s.local/quay.io/jetstack/cert-manager-webhook:v1.13.1
docker push repo.k8s.local/quay.io/jetstack/cert-manager-controller:v1.13.1
Import steps omitted; see the harbor installation notes
docker pull ...
docker tag ...
docker push ...
docker images

Prepare the yaml file

cp cert-manager-1.13.1.yaml  cert-manager-1.13.1.org.yaml

sed -n 's/quay\.io/repo.k8s.local\/quay\.io/p'  cert-manager-1.13.1.yaml
sed -i 's/quay\.io/repo.k8s.local\/quay\.io/'  cert-manager-1.13.1.yaml
cat cert-manager-1.13.1.yaml|grep image:|sed -e 's/.*image: //'
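The substitution can also be dry-run on a single sample line before touching the real manifest with sed -i; a quick sketch:

```shell
# Dry-run the quay.io -> repo.k8s.local/quay.io rewrite on one sample line.
echo 'image: "quay.io/jetstack/cert-manager-controller:v1.13.1"' \
  | sed 's/quay\.io/repo.k8s.local\/quay\.io/'
# -> image: "repo.k8s.local/quay.io/jetstack/cert-manager-controller:v1.13.1"
```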

kubectl apply -f cert-manager-1.13.1.yaml

customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
configmap/cert-manager created
configmap/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-cluster-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created

kubectl get pods --namespace cert-manager

NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-7f46fcb774-gfvjm              1/1     Running   0          14s
cert-manager-cainjector-55f76bd446-nxkrt   1/1     Running   0          14s
cert-manager-webhook-799cbdc68-4t9zw       1/1     Running   0          14s
Prepare the yaml file and list the image locations

On the control node.
dashboard 3.0.0-alpha0 requires cert-manager. Installed behind a plain Service or NodePort the page does not render (/api/v1/login/status returns 404); it must be accessed through an ingress.

wget  --no-check-certificate  https://raw.githubusercontent.com/kubernetes/dashboard/v3.0.0-alpha0/charts/kubernetes-dashboard.yaml -O kubernetes-dashboard.yaml
cat kubernetes-dashboard.yaml |grep image:|sed -e 's/.*image: //'

docker.io/kubernetesui/dashboard-api:v1.0.0
docker.io/kubernetesui/dashboard-web:v1.0.0
docker.io/kubernetesui/metrics-scraper:v1.0.9
Extract the image names and import them into harbor
cat kubernetes-dashboard.yaml |grep image:|awk -F'/' '{print $NF}'
dashboard-api:v1.0.0
dashboard-web:v1.0.0
metrics-scraper:v1.0.9

#Import steps omitted; see the harbor installation notes
docker pull ...
docker tag ...
docker push ...
docker images
After importing into the private harbor registry, replace docker.io with the private registry address repo.k8s.local

If pulls are slow (stuck at Pulling fs layer) and there is no private registry, Alibaba's mirror can be used:
registry.aliyuncs.com/google_containers/

cp kubernetes-dashboard.yaml kubernetes-dashboard.org.yaml
sed -n 's/docker\.io\/kubernetesui/repo.k8s.local\/google_containers/p'  kubernetes-dashboard.yaml
sed -i 's/docker\.io\/kubernetesui/repo.k8s.local\/google_containers/'  kubernetes-dashboard.yaml
cat  kubernetes-dashboard.yaml|grep -C2 image:
Install kubernetes-dashboard

kubectl apply -f kubernetes-dashboard.yaml

namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
service/kubernetes-dashboard-web created
service/kubernetes-dashboard-api created
service/kubernetes-dashboard-metrics-scraper created
ingress.networking.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard-api created
deployment.apps/kubernetes-dashboard-web created
deployment.apps/kubernetes-dashboard-metrics-scraper created
error: resource mapping not found for name: "selfsigned" namespace: "kubernetes-dashboard" from "kubernetes-dashboard.yaml": no matches for kind "Issuer" in version "cert-manager.io/v1"
ensure CRDs are installed first
Check the status
kubectl get pod -n kubernetes-dashboard -o wide
NAME                                                    READY   STATUS             RESTARTS   AGE
kubernetes-dashboard-api-6bfd48fcf6-njg9s               0/1     ImagePullBackOff   0          12m
kubernetes-dashboard-metrics-scraper-7d8c76dc88-6rn2w   0/1     ImagePullBackOff   0          12m
kubernetes-dashboard-web-7776cdb89f-jdwqt               0/1     ImagePullBackOff   0          12m
No logs found without specifying the namespace
kubectl logs kubernetes-dashboard-api-6bfd48fcf6-njg9s
Error from server (NotFound): pods "kubernetes-dashboard-api-6bfd48fcf6-njg9s" not found
Specify the namespace to view the details
kubectl describe pods/kubernetes-dashboard-api-6bfd48fcf6-njg9s -n kubernetes-dashboard
Name:             kubernetes-dashboard-api-6bfd48fcf6-njg9s
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             node01.k8s.local/192.168.244.5
Start Time:       Tue, 17 Oct 2023 13:24:23 +0800
Labels:           app.kubernetes.io/component=api
                  app.kubernetes.io/name=kubernetes-dashboard-api
                  app.kubernetes.io/part-of=kubernetes-dashboard
                  app.kubernetes.io/version=v1.0.0
                  pod-template-hash=6bfd48fcf6

Normal   Scheduled  39m                    default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-api-6bfd48fcf6-njg9s to node01.k8s.local
  Normal   Pulling    38m (x4 over 39m)      kubelet            Pulling image "repo.k8s.local/kubernetesui/dashboard-api:v1.0.0"
  Warning  Failed     38m (x4 over 39m)      kubelet            Failed to pull image "repo.k8s.local/kubernetesui/dashboard-api:v1.0.0": failed to pull and unpack image "repo.k8s.local/kubernetesui/dashboard-api:v1.0.0": failed to resolve reference "repo.k8s.local/kubernetesui/dashboard-api:v1.0.0": unexpected status from HEAD request to https://repo.k8s.local/v2/kubernetesui/dashboard-api/manifests/v1.0.0: 401 Unauthorized
  Warning  Failed     38m (x4 over 39m)      kubelet            Error: ErrImagePull
  Warning  Failed     37m (x6 over 39m)      kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m29s (x150 over 39m)  kubelet            Back-off pulling image "repo.k8s.local/kubernetesui/dashboard-api:v1.0.0"
Fix and reinstall

repo.k8s.local/kubernetesui/dashboard-api:v1.0.0 should be repo.k8s.local/google_containers/dashboard-api:v1.0.0

kubectl delete -f kubernetes-dashboard.yaml
kubectl apply -f kubernetes-dashboard.yaml

Now running normally

kubectl get pod -n kubernetes-dashboard  -o wide
NAME                                                    READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-api-5fcfcfd7b-nlrnh                1/1     Running   0          15s
kubernetes-dashboard-metrics-scraper-585685f868-f7g5j   1/1     Running   0          15s
kubernetes-dashboard-web-57bd66fd9f-hbc62               1/1     Running   0          15s

kubectl describe pods/kubernetes-dashboard-api-5fcfcfd7b-nlrnh -n kubernetes-dashboard

Check the ports exposed by the Services; we will use them for access:
kubectl get svc -n kubernetes-dashboard
NAME                                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
kubernetes-dashboard-api               ClusterIP   10.96.175.209   <none>        9000/TCP   14s
kubernetes-dashboard-metrics-scraper   ClusterIP   10.96.69.44     <none>        8000/TCP   14s
kubernetes-dashboard-web               ClusterIP   10.96.49.99     <none>        8000/TCP   14s
Test against the ClusterIP first
curl http://10.96.175.209:9000/api/v1/login/status
{
 "tokenPresent": false,
 "headerPresent": false,
 "httpsMode": true,
 "impersonationPresent": false,
 "impersonatedUser": ""
}
curl http://10.96.49.99:8000/
<!--
Copyright 2017 The Kubernetes Authors.
Create an admin role for kubernetes-dashboard

The default kubernetes-dashboard account has too few permissions.

cat > dashboard-svc-account.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1      #field that needs modification
metadata:
  name: dashboard-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kubernetes-dashboard
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF

kubectl apply -f dashboard-svc-account.yaml 
serviceaccount/dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
No token for the Kubernetes service account?

kubectl get secret -n kubernetes-dashboard|grep admin|awk '{print $1}'
Looking the token up in a secret no longer works.

Earlier versions auto-generated a secret when a serviceAccount was created (viewable with kubectl get secret -n kube-system); now one extra step is needed:
Since Kubernetes 1.22, tokens are no longer generated for ServiceAccounts by default; run the command below to generate one (created this way it is temporary):

kubectl create token dashboard-admin --namespace kubernetes-dashboard
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik9EWUpmSzcyLUdzRlJnQWNhdHpOYWhNX0E4RDZ6Zl9id0JMcXZyMng5bkUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzAwNTM4ODQzLCJpYXQiOjE3MDA1MzUyNDMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkYXNoYm9hcmQtYWRtaW4iLCJ1aWQiOiI3ZmUwYjFiZi05ZDhlLTRjOGItYWEzMy0xZWU3ZDU2YjE2NjUifX0sIm5iZiI6MTcwMDUzNTI0Mywic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.cUGX77qAdY7Mqo3tPbWgcCLD2zmRoNUSlFG1EHlCRBwiA7ffL1PGbOHazE6eTmLrRo5if6nm9ILAK1Mv4Co2woOEW8qIJBVXClpZomvkj7BC2bGd-0X5W1s87CnEX7RnKcBqFVcP6zJY_ycLy1o9X9g4Y1wtMm8mptBgos5xmVAb8HecTgOWHt80W736K3WSB9ovuoAGVZe7-ahQ7DX8WJ_qYqbEE5v9laqYBIddcoJtfAYf8U8yaW-MQsJq46xp_sxU164WDozw_sSe4PIxHHqaG4tulJy3J2fn6D_0xbC8fupX3l8FPLcPQm1rWMFGPjsLhU8i_0ihnvyEmvsA6w

#default account
kubectl create token kubernetes-dashboard --namespace kubernetes-dashboard
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik9EWUpmSzcyLUdzRlJnQWNhdHpOYWhNX0E4RDZ6Zl9id0JMcXZyMng5bkUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzAwNTM5MzgwLCJpYXQiOjE3MDA1MzU3ODAsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInVpZCI6ImU2NWUzODRhLTI5ZDYtNGYwYy04OGI0LWJlZWVkYmRhODMxNiJ9fSwibmJmIjoxNzAwNTM1NzgwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQifQ.rpE8aVSVWGcydSJy_QcCg6LjdxPvE2M45AspWqC-u406HSznOby1cTvpa9c7scQ7KooyrjSdlzW-1JVd4U6aFSt8sKQLmLXSTUoGi7ACkI105wTGUU4WQmB5CaPynPC68hhrNPTrEXvM4fichCDykp2hWaVCKOwSQPU-cMsCrIeg-Jqeikckdbpfr7m5XDW8_ydb-_X49hwDVqJeA8eJ5Qn-qlkts8Lj3m3rWjVTKlVeMJARR6LCbZUFZ3uwmOFyUzIX0UDKUHGktt5-k33LbLMMpvKKRzhwfu9o5WSTQdvFux1EpVskYxtjpsyKW_PEwcz6UzxvaLwToxV4uDq5_w

# The lifetime can be set with --duration; see kubectl create token -h for details
kubectl create token dashboard-admin --namespace kubernetes-dashboard --duration 10h
Ways to expose the dashboard

Deploy an ingress.
Proxying the service with kube proxy: the page stays blank because the API is unreachable.
A NodePort Service: the page stays blank because the API is unreachable.

Only through an ingress does the page render correctly.
Edit kind: Ingress and remove localhost:
kubectl edit Ingress kubernetes-dashboard -n kubernetes-dashboard

    #- host: localhost
     - http:

Access via the ingress port

curl http://127.0.0.1:30180/

curl http://127.0.0.1:30180/api/v1/login/status
{
 "tokenPresent": false,
 "headerPresent": false,
 "httpsMode": true,
 "impersonationPresent": false,
 "impersonatedUser": ""
}

kubectl delete Ingress kubernetes-dashboard -n kubernetes-dashboard
kubectl apply -f kubernetes-dashboard.yaml

Extending the Kubernetes Dashboard session

The default token TTL is 900 seconds, i.e. 15 minutes, which means you must re-authenticate every 15 minutes.
Change it to 12 hours with --token-ttl=43200:
kubectl edit deployment kubernetes-dashboard-api -n kubernetes-dashboard

          args:
            - --enable-insecure-login
            - --namespace=kubernetes-dashboard
            - --token-ttl=43200
kubectl get pod -n kubernetes-dashboard  -o wide
NAME                                                    READY   STATUS    RESTARTS   AGE   IP             NODE               NOMINATED NODE   READINESS GATES
kubernetes-dashboard-api-55cf847b6b-7sctx               1/1     Running   0          20h   10.244.2.251   node02.k8s.local   <none>           <none>
kubernetes-dashboard-metrics-scraper-585685f868-hqgpc   1/1     Running   0          40h   10.244.1.254   node01.k8s.local   <none>           <none>
kubernetes-dashboard-web-57bd66fd9f-pghct               1/1     Running   0          40h   10.244.1.253   node01.k8s.local   <none>           <none>

kubectl delete pod kubernetes-dashboard-api-55cf847b6b-7sctx -n kubernetes-dashboard

Using a domain name

Add the domain to the local hosts file:
dashboard.k8s.local

kubectl edit Ingress kubernetes-dashboard -n kubernetes-dashboard

  rules:
    #- host: 127.0.0.1 #Invalid value: "127.0.0.1": must be a DNS name, not an IP address
    #- host: localhost
     - host: dashboard.k8s.local
       http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard-web
                port:
                  #number: 8000
                  name: web

curl -k -H "Host:dashboard.k8s.local" http://10.96.49.99:8000/

    - host: dashboard.k8s.local
      http:

kubectl apply -f kubernetes-dashboard.yaml

metrics-server installation

metrics-server collects node and Pod CPU/memory usage; the data is kept locally in the container and not persisted. It serves kubectl top, scheduler decisions, HPA autoscaling, and the native dashboard's monitoring graphs.
metrics-server has nothing whatsoever to do with prometheus; there is no data or API dependency between them.

Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io

https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server

wget --no-check-certificate   https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.3/components.yaml -O metrics-server-0.6.3.yaml

In the Deployment, add the --kubelet-insecure-tls option to the spec.template.spec.containers args field, meaning kubelet client certificates are not verified
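As a sketch, the edited args list might look like this (only --kubelet-insecure-tls is the addition; the other flags are whatever the shipped components.yaml already contains, shown here only as an assumed example):

```yaml
spec:
  template:
    spec:
      containers:
      - name: metrics-server
        args:
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  # as shipped (assumed)
        - --kubelet-insecure-tls          # added: do not verify kubelet serving certs
```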

cat metrics-server-0.6.3.yaml|grep image:
image: registry.k8s.io/metrics-server/metrics-server:v0.6.3

docker pull registry.aliyuncs.com/google_containers/metrics-server:v0.6.3
docker tag registry.aliyuncs.com/google_containers/metrics-server:v0.6.3  repo.k8s.local/registry.k8s.io/metrics-server/metrics-server:v0.6.3
docker push repo.k8s.local/registry.k8s.io/metrics-server/metrics-server:v0.6.3

sed -n "/image:/{s/image: /image: repo.k8s.local\//p}" metrics-server-0.6.3.yaml
sed -i "/image:/{s/image: /image: repo.k8s.local\//}" metrics-server-0.6.3.yaml

kubectl top nodes

kubectl apply -f metrics-server-0.6.3.yaml

kubectl get pods -n=kube-system |grep metrics
metrics-server-8fc7dd595-n2s6b               1/1     Running   6 (9d ago)      16d

kubectl api-versions|grep metrics
metrics.k8s.io/v1beta1

#top values run higher than those shown in the dashboard
kubectl top pods
kubectl top nodes
NAME                 CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master01.k8s.local   145m         3%     1490Mi          38%       
node01.k8s.local     54m          2%     1770Mi          46%       
node02.k8s.local     63m          3%     2477Mi          64%       

kubectl -n kube-system describe pod metrics-server-8fc7dd595-n2s6b 
kubectl logs metrics-server-8fc7dd595-n2s6b -n kube-system

kubectl describe  pod kube-apiserver-master01.k8s.local -n kube-system
  Type     Reason     Age                   From     Message
  ----     ------     ----                  ----     -------
  Warning  Unhealthy  35m (x321 over 10d)   kubelet  Liveness probe failed: HTTP probe failed with statuscode: 500
  Warning  Unhealthy  34m (x1378 over 10d)  kubelet  Readiness probe failed: HTTP probe failed with statuscode: 500
error: Metrics API not available

Re-apply the yaml
kubectl top pods
kubectl apply -f metrics-server-0.6.3.yaml

kubectl -n kube-system describe pod metrics-server-8fc7dd595-lz5kz

Posted in 安装k8s/kubernetes.
