

k8s_install_12_operator_Prometheus+grafana

12. Installing Prometheus + Grafana with the Operator

Prometheus

Prometheus itself only supports single-node deployment; it has no built-in clustering, no high availability and no horizontal scaling, and its storage is limited by the capacity of the local disk. As the scrape volume grows, the number of time series a single Prometheus instance can handle becomes a bottleneck: CPU and memory usage both rise, and memory usually hits the limit first. The main reasons are:

  • Prometheus flushes a block of data to disk every 2 hours; until then all of that data lives in memory, so memory consumption grows with the scrape volume.
  • Historical data is loaded from disk into memory for queries, so the larger the query range, the more memory is used. There is some room for optimization here.
  • Unreasonable query conditions also increase memory usage, such as Group operations or rate() over a large range.
    At that point you either add memory or shard the cluster so that each instance scrapes fewer metrics.
    Prometheus recommends splitting along functional or service boundaries: if there are many services to scrape, configure each Prometheus instance to scrape and store only one service or a subset of services. Splitting Prometheus into multiple instances according to the services they scrape achieves a degree of horizontal scaling.
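One common way to implement this kind of sharding is hashmod relabeling, where each instance keeps only its share of the discovered targets. The snippet below is only a sketch, not part of this article's setup; the job name and shard count are illustrative:

scrape_configs:
- job_name: 'sharded-kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__address__]
    modulus: 2              # total number of Prometheus shards
    target_label: __tmp_hash
    action: hashmod
  - source_labels: [__tmp_hash]
    regex: "0"              # this instance keeps only shard 0
    action: keep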

Installation options

  • Vanilla Prometheus
    Build everything yourself.
    If you already have the Prometheus components and their prerequisites ready, you can deploy the YAML manifests for every piece (Prometheus, Alertmanager, Grafana, all the Secrets, ConfigMaps and so on) by hand, in the right order, after working out the dependencies between them. This approach is usually very time-consuming, takes a lot of effort to deploy and operate the Prometheus ecosystem, and requires solid documentation so the setup can be reproduced in other environments.

  • prometheus-operator
    The Prometheus Operator is not an official Prometheus component; it was developed by CoreOS.
    It uses Kubernetes Custom Resources to simplify deploying and configuring Prometheus, Alertmanager and related monitoring components.
    Official installation docs: https://prometheus-operator.dev/docs/user-guides/getting-started/
    The Prometheus Operator requires Kubernetes v1.16.x and up.
    Official GitHub repo: https://github.com/prometheus-operator/prometheus-operator

  • kube-prometheus
    kube-prometheus provides a complete example configuration for cluster monitoring based on Prometheus & the Prometheus Operator: multi-replica Prometheus & Alertmanager deployment and configuration, node-exporter metrics collection, scraping of a variety of metrics endpoints, Grafana, and a set of example alerting rules for potential cluster problems.
    Official installation docs: https://prometheus-operator.dev/docs/prologue/quick-start/
    Compatibility requirements: https://github.com/prometheus-operator/kube-prometheus#compatibility
    Official GitHub repo: https://github.com/prometheus-operator/kube-prometheus

  • helm chart prometheus-community/kube-prometheus-stack
    Provides functionality similar to kube-prometheus, but the project is maintained by prometheus-community;
    see https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#kube-prometheus-stack for details.

Installing Prometheus + Grafana with the operator on k8s

Deploying Prometheus + Grafana with the operator is a very straightforward approach.
Open the kube-prometheus GitHub page, https://github.com/prometheus-operator/kube-prometheus, and first check which operator release matches your Kubernetes version.
My Kubernetes here is 1.28, so the operator branch to use is release-0.13:
https://github.com/prometheus-operator/kube-prometheus/tree/release-0.13
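To double-check the cluster version before picking a branch (a quick sanity check, assuming kubectl already points at the cluster):

kubectl version
kubectl get nodes -o wide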

Resource preparation

Download the installation resources

wget --no-check-certificate https://github.com/prometheus-operator/kube-prometheus/archive/refs/tags/v0.13.0.zip -O prometheus-0.13.0.zip
unzip prometheus-0.13.0.zip
cd kube-prometheus-0.13.0

Extract the image list

cat manifests/*.yaml|grep image:|sed -e 's/.*image: //'|sort|uniq
This yields the following image addresses:

grafana/grafana:9.5.3
jimmidyson/configmap-reload:v0.5.0
quay.io/brancz/kube-rbac-proxy:v0.14.2
quay.io/prometheus/alertmanager:v0.26.0
quay.io/prometheus/blackbox-exporter:v0.24.0
quay.io/prometheus/node-exporter:v1.6.1
quay.io/prometheus-operator/prometheus-operator:v0.67.1
quay.io/prometheus/prometheus:v2.46.0
registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.9.2
registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.11.1

Push to the private registry

Manually pull the images that are slow to download and push them to the private registry repo.k8s.local.
Note: set up the private registry repo.k8s.local in advance and create the corresponding projects and permissions.

#images under registry.k8s.io
registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.9.2
registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.11.1

docker pull k8s.dockerproxy.com/kube-state-metrics/kube-state-metrics:v2.9.2
docker pull k8s.dockerproxy.com/prometheus-adapter/prometheus-adapter:v0.11.1

docker tag k8s.dockerproxy.com/kube-state-metrics/kube-state-metrics:v2.9.2 repo.k8s.local/registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.9.2
docker tag k8s.dockerproxy.com/prometheus-adapter/prometheus-adapter:v0.11.1 repo.k8s.local/registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.11.1

docker push repo.k8s.local/registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.9.2
docker push repo.k8s.local/registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.11.1
#retag the docker.io images
docker pull jimmidyson/configmap-reload:v0.5.0
docker pull grafana/grafana:9.5.3

docker tag jimmidyson/configmap-reload:v0.5.0 repo.k8s.local/docker.io/jimmidyson/configmap-reload:v0.5.0
docker tag grafana/grafana:9.5.3 repo.k8s.local/docker.io/grafana/grafana:9.5.3
docker push repo.k8s.local/docker.io/jimmidyson/configmap-reload:v0.5.0
docker push repo.k8s.local/docker.io/grafana/grafana:9.5.3
#prometheus-config-reloader is referenced as a container argument in
#kube-prometheus-0.13.0/manifests/prometheusOperator-deployment.yaml:
#     - --prometheus-config-reloader=repo.k8s.local/quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1
#this quay.io image is handled on its own
docker pull  quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1
docker tag quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1 repo.k8s.local/quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1
docker push repo.k8s.local/quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1
#batch-pull the quay.io images with a script
vi images.txt
quay.io/prometheus/alertmanager:v0.26.0
quay.io/prometheus/blackbox-exporter:v0.24.0
quay.io/brancz/kube-rbac-proxy:v0.14.2
quay.io/prometheus/node-exporter:v1.6.1
quay.io/prometheus-operator/prometheus-operator:v0.67.1
quay.io/prometheus/prometheus:v2.46.0

vim auto-pull-and-push-images.sh

#!/bin/bash
# new image tag: defaults to the current timestamp (not actually used below)
imageNewTag=`date +%Y%m%d-%H%M%S`
# private registry address
registryAddr="repo.k8s.local/"

# read images.txt (skipping comment lines) into a list
n=0

for line in $(cat images.txt | grep ^[^#])
do
    list[$n]=$line
    ((n+=1))
done

echo "Images to be pushed:"
for variable in ${list[@]}
do
    echo ${variable}
done

for variable in ${list[@]}
do
    # pull the image
    echo "Pulling image: $variable"
    docker pull $variable

    # get the ID of the pulled image
    imageId=`docker images -q $variable`
    echo "[$variable] image ID after pull: $imageId"

    # get the full image name in repository:tag:id form
    imageFormatName=`docker images --format "{{.Repository}}:{{.Tag}}:{{.ID}}" |grep $variable`
    echo "imageFormatName: $imageFormatName"

    # leading registry host
    # e.g. quay.io/prometheus-operator/prometheus-operator:v0.67.1 -> quay.io
    repository=${imageFormatName}
    repositoryurl=${imageFormatName%%/*}
    echo "repositoryurl: $repositoryurl"

    # strip the last ':' and everything after it (the image ID)
    # e.g. quay.io/prometheus-operator/prometheus-operator:v0.67.1:b6ec194a1a0 -> quay.io/prometheus-operator/prometheus-operator:v0.67.1
    repository=${repository%:*}

    echo "New image address: $registryAddr$repository"

    # retag the image
    docker tag $imageId $registryAddr$repository

    # push the image
    docker push $registryAddr$repository
    echo -e "\n"
done

chmod 755 auto-pull-and-push-images.sh
./auto-pull-and-push-images.sh

Replace the image addresses in the YAML with the private registry

#dry run (print the result only)
sed -n "/image:/{s/image: jimmidyson/image: repo.k8s.local\/docker.io\/jimmidyson/p}" `grep 'image: jimmidyson' ./manifests/ -rl`
sed -n "/image:/{s/image: grafana/image: repo.k8s.local\/docker.io\/grafana/p}" `grep 'image: grafana' ./manifests/ -rl`
sed -n "/image:/{s/image: registry.k8s.io/image: repo.k8s.local\/registry.k8s.io/p}" `grep 'image: registry.k8s.io' ./manifests/ -rl`
sed -n "/image:/{s/image: quay.io/image: repo.k8s.local\/quay.io/p}" `grep 'image: quay.io' ./manifests/ -rl`

#replace in place
sed -i "/image:/{s/image: jimmidyson/image: repo.k8s.local\/docker.io\/jimmidyson/}" `grep 'image: jimmidyson' ./manifests/ -rl`
sed -i "/image:/{s/image: grafana/image: repo.k8s.local\/docker.io\/grafana/}" `grep 'image: grafana' ./manifests/ -rl`
sed -i "/image:/{s/image: registry.k8s.io/image: repo.k8s.local\/registry.k8s.io/}" `grep 'image: registry.k8s.io' ./manifests/ -rl`
sed -i "/image:/{s/image: quay.io/image: repo.k8s.local\/quay.io/}" `grep 'image: quay.io' ./manifests/ -rl`

#verify again
cat manifests/*.yaml|grep image:|sed -e 's/.*image: //'
manifests/prometheusOperator-deployment.yaml
      containers:
      - args:
        - --kubelet-service=kube-system/kubelet
        - --prometheus-config-reloader=repo.k8s.local/quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1
        image: repo.k8s.local/quay.io/prometheus-operator/prometheus-operator:v0.67.1
        name: prometheus-operator
Also make sure the prometheus-config-reloader argument was updated:
       - --prometheus-config-reloader=repo.k8s.local/quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1

Install Prometheus + Grafana (install and start)

First, go back to the kube-prometheus-0.13.0 directory and run the following command to start the installation:

kubectl apply --server-side -f manifests/setup

customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheusagents.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/scrapeconfigs.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com serverside-applied
namespace/monitoring serverside-applied
kubectl apply -f manifests/
alertmanager.monitoring.coreos.com/main created
networkpolicy.networking.k8s.io/alertmanager-main created
poddisruptionbudget.policy/alertmanager-main created
prometheusrule.monitoring.coreos.com/alertmanager-main-rules created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager-main created
clusterrole.rbac.authorization.k8s.io/blackbox-exporter created
clusterrolebinding.rbac.authorization.k8s.io/blackbox-exporter created
configmap/blackbox-exporter-configuration created
deployment.apps/blackbox-exporter created
networkpolicy.networking.k8s.io/blackbox-exporter created
service/blackbox-exporter created
serviceaccount/blackbox-exporter created
servicemonitor.monitoring.coreos.com/blackbox-exporter created
secret/grafana-config created
secret/grafana-datasources created
configmap/grafana-dashboard-alertmanager-overview created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-grafana-overview created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-multicluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes-darwin created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
networkpolicy.networking.k8s.io/grafana created
prometheusrule.monitoring.coreos.com/grafana-rules created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
prometheusrule.monitoring.coreos.com/kube-prometheus-rules created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
networkpolicy.networking.k8s.io/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kube-state-metrics-rules created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kubernetes-monitoring-rules created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
networkpolicy.networking.k8s.io/node-exporter created
prometheusrule.monitoring.coreos.com/node-exporter-rules created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
networkpolicy.networking.k8s.io/prometheus-k8s created
poddisruptionbudget.policy/prometheus-k8s created
prometheus.monitoring.coreos.com/k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-k8s created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io configured
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader configured
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
networkpolicy.networking.k8s.io/prometheus-adapter created
poddisruptionbudget.policy/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
servicemonitor.monitoring.coreos.com/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
networkpolicy.networking.k8s.io/prometheus-operator created
prometheusrule.monitoring.coreos.com/prometheus-operator-rules created
service/prometheus-operator created
serviceaccount/prometheus-operator created
servicemonitor.monitoring.coreos.com/prometheus-operator created
kubectl get pods -o wide -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE    IP              NODE                 NOMINATED NODE   READINESS GATES
alertmanager-main-0                    2/2     Running   0          82s    10.244.1.6      node01.k8s.local     <none>           <none>
alertmanager-main-1                    2/2     Running   0          82s    10.244.1.7      node01.k8s.local     <none>           <none>
alertmanager-main-2                    2/2     Running   0          82s    10.244.2.3      node02.k8s.local     <none>           <none>
blackbox-exporter-76847bbff-wt77c      3/3     Running   0          104s   10.244.2.252    node02.k8s.local     <none>           <none>
grafana-5955685bfd-shf4s               1/1     Running   0          103s   10.244.2.253    node02.k8s.local     <none>           <none>
kube-state-metrics-7dddfffd96-2ktrs    3/3     Running   0          103s   10.244.1.4      node01.k8s.local     <none>           <none>
node-exporter-g8d5k                    2/2     Running   0          102s   192.168.244.4   master01.k8s.local   <none>           <none>
node-exporter-mqqkc                    2/2     Running   0          102s   192.168.244.7   node02.k8s.local     <none>           <none>
node-exporter-zpfl2                    2/2     Running   0          102s   192.168.244.5   node01.k8s.local     <none>           <none>
prometheus-adapter-6db6c659d4-25lgm    1/1     Running   0          100s   10.244.1.5      node01.k8s.local     <none>           <none>
prometheus-adapter-6db6c659d4-ps5mz    1/1     Running   0          100s   10.244.2.254    node02.k8s.local     <none>           <none>
prometheus-k8s-0                       2/2     Running   0          81s    10.244.1.8      node01.k8s.local     <none>           <none>
prometheus-k8s-1                       2/2     Running   0          81s    10.244.2.4      node02.k8s.local     <none>           <none>
prometheus-operator-797d795d64-4wnw2   2/2     Running   0          99s    10.244.2.2      node02.k8s.local     <none>           <none>
kubectl get svc -n monitoring -o wide
NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE     SELECTOR
alertmanager-main       ClusterIP   10.96.71.121   <none>        9093/TCP,8080/TCP            2m10s   app.kubernetes.io/component=alert-router,app.kubernetes.io/instance=main,app.kubernetes.io/name=alertmanager,app.kubernetes.io/part-of=kube-prometheus
alertmanager-operated   ClusterIP   None           <none>        9093/TCP,9094/TCP,9094/UDP   108s    app.kubernetes.io/name=alertmanager
blackbox-exporter       ClusterIP   10.96.33.150   <none>        9115/TCP,19115/TCP           2m10s   app.kubernetes.io/component=exporter,app.kubernetes.io/name=blackbox-exporter,app.kubernetes.io/part-of=kube-prometheus
grafana                 ClusterIP   10.96.12.88    <none>        3000/TCP                     2m9s    app.kubernetes.io/component=grafana,app.kubernetes.io/name=grafana,app.kubernetes.io/part-of=kube-prometheus
kube-state-metrics      ClusterIP   None           <none>        8443/TCP,9443/TCP            2m9s    app.kubernetes.io/component=exporter,app.kubernetes.io/name=kube-state-metrics,app.kubernetes.io/part-of=kube-prometheus
node-exporter           ClusterIP   None           <none>        9100/TCP                     2m8s    app.kubernetes.io/component=exporter,app.kubernetes.io/name=node-exporter,app.kubernetes.io/part-of=kube-prometheus
prometheus-adapter      ClusterIP   10.96.24.212   <none>        443/TCP                      2m7s    app.kubernetes.io/component=metrics-adapter,app.kubernetes.io/name=prometheus-adapter,app.kubernetes.io/part-of=kube-prometheus
prometheus-k8s          ClusterIP   10.96.57.42    <none>        9090/TCP,8080/TCP            2m8s    app.kubernetes.io/component=prometheus,app.kubernetes.io/instance=k8s,app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=kube-prometheus
prometheus-operated     ClusterIP   None           <none>        9090/TCP                     107s    app.kubernetes.io/name=prometheus
prometheus-operator     ClusterIP   None           <none>        8443/TCP                     2m6s    app.kubernetes.io/component=controller,app.kubernetes.io/name=prometheus-operator,app.kubernetes.io/part-of=kube-prometheus
kubectl  get svc  -n monitoring 
NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
alertmanager-main       ClusterIP   10.96.71.121   <none>        9093/TCP,8080/TCP            93m
alertmanager-operated   ClusterIP   None           <none>        9093/TCP,9094/TCP,9094/UDP   93m
blackbox-exporter       ClusterIP   10.96.33.150   <none>        9115/TCP,19115/TCP           93m
grafana                 ClusterIP   10.96.12.88    <none>        3000/TCP                     93m
kube-state-metrics      ClusterIP   None           <none>        8443/TCP,9443/TCP            93m
node-exporter           ClusterIP   None           <none>        9100/TCP                     93m
prometheus-adapter      ClusterIP   10.96.24.212   <none>        443/TCP                      93m
prometheus-k8s          ClusterIP   10.96.57.42    <none>        9090/TCP,8080/TCP            93m
prometheus-operated     ClusterIP   None           <none>        9090/TCP                     93m
prometheus-operator     ClusterIP   None           <none>        8443/TCP                     93m

blackbox_exporter: an official Prometheus project for network probing: DNS, ping and HTTP checks.
node-exporter: a Prometheus exporter that collects node-level metrics such as CPU, memory and disk.
prometheus: the monitoring server; it scrapes data from node-exporter and the other exporters and stores it as time series.
kube-state-metrics: exposes the state of Kubernetes objects (pods, deployments, etc.) as metrics that can be queried with PromQL in Prometheus.
prometheus-adapter: aggregated into the apiserver, i.e. a custom-metrics-apiserver implementation.
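Since prometheus-adapter registers the v1beta1.metrics.k8s.io APIService (visible in the apply output above), a quick sanity check is to query the resource-metrics API through the apiserver; a sketch:

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
kubectl top nodes
kubectl top pods -n monitoring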

Create the Ingresses

These make the UIs reachable by domain name; an ingress controller must already be installed.

cat > prometheus-ingress.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-prometheus
  namespace: monitoring
  labels:
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/part-of: monitoring
  annotations:
    #kubernetes.io/ingress.class: "nginx"
    #nginx.ingress.kubernetes.io/rewrite-target: /  #rewrite
spec:
  ingressClassName: nginx
  rules:
  - host: prometheus.k8s.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-k8s
            port:
              name: web
              #number: 9090
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-grafana
  namespace: monitoring
  labels:
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/part-of: monitoring
  annotations:
    #kubernetes.io/ingress.class: "nginx"
    #nginx.ingress.kubernetes.io/rewrite-target: /  #rewrite
spec:
  ingressClassName: nginx
  rules:
  - host: grafana.k8s.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              name: http
              #number: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-alertmanager
  namespace: monitoring
  labels:
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/part-of: monitoring
  annotations:
    #kubernetes.io/ingress.class: "nginx"
    #nginx.ingress.kubernetes.io/rewrite-target: /  #rewrite
spec:
  ingressClassName: nginx
  rules:
  - host: alertmanager.k8s.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: alertmanager-main
            port:
              name: web
              #number: 9093

EOF
kubectl delete -f  prometheus-ingress.yaml  
kubectl apply -f  prometheus-ingress.yaml  
kubectl get ingress -A

Add the domain names to the hosts file

127.0.0.1 prometheus.k8s.local
127.0.0.1 grafana.k8s.local
127.0.0.1 alertmanager.k8s.local
#test via ClusterIP
curl -k  -H "Host:prometheus.k8s.local"  http://10.96.57.42:9090/graph
curl -k  -H "Host:grafana.k8s.local"  http://10.96.12.88:3000/login
curl -k  -H "Host:alertmanager.k8s.local"  http://10.96.71.121:9093/
#test via DNS
curl -k  http://prometheus-k8s.monitoring.svc:9090
#test from inside a test pod
kubectl exec -it pod/test-pod-1 -n test -- ping prometheus-k8s.monitoring

Access in a browser:
http://prometheus.k8s.local:30180/
http://grafana.k8s.local:30180/
admin/admin
http://alertmanager.k8s.local:30180/#/alerts

#restart the pods
kubectl get pods -n monitoring

kubectl rollout restart deployment/grafana -n monitoring
kubectl rollout restart sts/prometheus-k8s -n monitoring

Uninstall

kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup

Changing the displayed time zone in Prometheus

To avoid time zone confusion, Prometheus deliberately uses Unix time and UTC throughout all of its components. It does not support setting a time zone in the configuration file, nor does it read the host's /etc/timezone.

In practice this limitation is not a problem:

For visualization, Grafana can convert time zones.

If you call the API, you get timestamps back and can process them however you like.

If the UTC display in Prometheus's own UI bothers you, the new web UI introduced in version 2.16 has a Local Timezone option.

Changing the displayed time zone in Grafana

By default the dashboards display UTC, which is 8 hours behind Shanghai.
For dashboards that have already been imported, changing the time zone in the general settings or in your user profile has no effect.

If installing via helm, change values.yaml:

   ##defaultDashboardsTimezone: utc
   defaultDashboardsTimezone: "Asia/Shanghai"

Option 1
Change the time zone on each query.

Option 2
Export a separate copy of the dashboard with the time zone changed.

Option 3
Change the time zone in the dashboard definitions before importing them:
cat grafana-dashboardDefinitions.yaml|grep -C 2 timezone

              ]
          },
          "timezone": "utc",
          "title": "Alertmanager / Overview",
          "uid": "alertmanager-overview",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / API server",
          "uid": "09ec8aa1e996d6ffcd6817bbaff4db1b",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Networking / Cluster",
          "uid": "ff635a025bcfea7bc3dd4f508990a3e9",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Controller Manager",
          "uid": "72e0e05bef5099e5f049b05fdc429ed4",
--
              ]
          },
          "timezone": "",
          "title": "Grafana Overview",
          "uid": "6be0s85Mk",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Compute Resources / Cluster",
          "uid": "efa86fd1d0c121a26444b636a3f509a8",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Compute Resources /  Multi-Cluster",
          "uid": "b59e6c9f2fcbe2e16d77fc492374cc4f",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Compute Resources / Namespace (Pods)",
          "uid": "85a562078cdf77779eaa1add43ccec1e",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Compute Resources / Node (Pods)",
          "uid": "200ac8fdbfbb74b39aff88118e4d1c2c",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Compute Resources / Pod",
          "uid": "6581e46e4e5c7ba40a07646395ef7b23",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Compute Resources / Workload",
          "uid": "a164a7f0339f99e89cea5cb47e9be617",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Compute Resources / Namespace (Workloads)",
          "uid": "a87fb0d919ec0ea5f6543124e16c42a5",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Kubelet",
          "uid": "3138fa155d5915769fbded898ac09fd9",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Networking / Namespace (Pods)",
          "uid": "8b7a8b326d7a6f1f04244066368c67af",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Networking / Namespace (Workload)",
          "uid": "bbb2a765a623ae38130206c7d94a160f",
--
              ]
          },
          "timezone": "utc",
          "title": "Node Exporter / USE Method / Cluster",
          "version": 0
--
              ]
          },
          "timezone": "utc",
          "title": "Node Exporter / USE Method / Node",
          "version": 0
--
              ]
          },
          "timezone": "utc",
          "title": "Node Exporter / MacOS",
          "version": 0
--
              ]
          },
          "timezone": "utc",
          "title": "Node Exporter / Nodes",
          "version": 0
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Persistent Volumes",
          "uid": "919b92a8e8041bd567af9edab12c840c",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Networking / Pod",
          "uid": "7a18067ce943a40ae25454675c19ff5c",
--
              ]
          },
          "timezone": "browser",
          "title": "Prometheus / Remote Write",
          "version": 0
--
              ]
          },
          "timezone": "utc",
          "title": "Prometheus / Overview",
          "uid": "",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Proxy",
          "uid": "632e265de029684c40b21cb76bca4f94",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Scheduler",
          "uid": "2e6b6a3b4bddf1427b3a55aa1311c656",
--
              ]
          },
          "timezone": "UTC",
          "title": "Kubernetes / Networking / Workload",
          "uid": "728bf77cc1166d2f3133bf25846876cc",

Clear the UTC time zone:
sed -rn '/"timezone":/{s/"timezone": ".*"/"timezone": ""/p}' grafana-dashboardDefinitions.yaml
sed -i '/"timezone":/{s/"timezone": ".*"/"timezone": ""/}' grafana-dashboardDefinitions.yaml

Data persistence

By default nothing is persisted; the configuration is lost once the pods restart.

Prepare the PVC

Prepare a StorageClass in advance.
Note: the namespace must match the one used by the service.

cat > grafana-pvc.yaml  << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
  namespace: monitoring
spec:
  storageClassName: managed-nfs-storage  
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF

Modify the YAML

Grafana storage
grafana-deployment.yaml

      serviceAccountName: grafana
      volumes:
      - emptyDir: {}
        name: grafana-storage

Change it to:

      serviceAccountName: grafana
      volumes:
      - persistentVolumeClaim:
          claimName: grafana-pvc
        name: grafana-storage

Add storage under spec:
prometheus-prometheus.yaml

  namespace: monitoring
spec:
  storage:
      volumeClaimTemplate:
        spec:
          storageClassName: managed-nfs-storage
          resources:
            requests:
              storage: 10Gi
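With persistence in place it can also be worth bounding how much data Prometheus keeps. A minimal sketch: retention and retentionSize are fields of the Prometheus custom resource, but the values here are illustrative, not part of this setup:

  namespace: monitoring
spec:
  retention: 15d
  retentionSize: 8GB
  storage:
      volumeClaimTemplate:
        spec:
          storageClassName: managed-nfs-storage
          resources:
            requests:
              storage: 10Gi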

Add some extra RBAC rules. The rules are shown below, first as originally shipped and then with the additions; the new entries are mainly under resources and verbs.
prometheus-clusterRole.yaml

rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - /metrics
  verbs:
  - get

Then run the following to give Prometheus cluster-admin rights (optional, use at your discretion):

kubectl create clusterrolebinding kube-state-metrics-admin-binding \
--clusterrole=cluster-admin  \
--user=system:serviceaccount:monitoring:kube-state-metrics
kubectl apply -f grafana-pvc.yaml
kubectl apply -f prometheus-clusterRole.yaml

kubectl apply -f grafana-deployment.yaml
kubectl apply -f prometheus-prometheus.yaml

kubectl get pv,pvc -o wide
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                           STORAGECLASS          REASON   AGE   VOLUMEMODE
persistentvolume/pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19   10Gi       RWX            Delete           Bound      monitoring/grafana-pvc                          managed-nfs-storage            25h   Filesystem
persistentvolume/pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e   10Gi       RWO            Delete           Bound      monitoring/prometheus-k8s-db-prometheus-k8s-1   managed-nfs-storage            25h   Filesystem
persistentvolume/pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca   10Gi       RWO            Delete           Bound      monitoring/prometheus-k8s-db-prometheus-k8s-0   managed-nfs-storage            25h   Filesystem

Change the reclaim policy of the dynamically provisioned PVs to Retain, otherwise the data is deleted when the pods are recreated

kubectl edit pv -n default pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19 
persistentVolumeReclaimPolicy: Retain

kubectl edit pv -n default pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e
kubectl edit pv -n default pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca
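The same change can be made non-interactively with kubectl patch (PVs are cluster-scoped, so the -n flag is not actually needed); a sketch using the PV names above:

kubectl patch pv pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl patch pv pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl patch pv pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy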

kubectl get pods -n monitoring

Check on the NFS server whether data has been written

ll /nfs/k8s/dpv/
total 0
drwxrwxrwx. 2 root root  6 Oct 24 18:19 default-test-pvc2-pvc-f9153444-5653-4684-a845-83bb313194d1
drwxrwxrwx. 2 root root  6 Nov 22 15:45 monitoring-grafana-pvc-pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19
drwxrwxrwx. 3 root root 27 Nov 22 15:52 monitoring-prometheus-k8s-db-prometheus-k8s-0-pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca
drwxrwxrwx. 3 root root 27 Nov 22 15:52 monitoring-prometheus-k8s-db-prometheus-k8s-1-pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e

kubectl logs -f prometheus-k8s-0 prometheus -n monitoring

Custom pod/service auto-discovery configuration

Goal:
a service or pod started by a user should be discovered by Prometheus automatically once the appropriate annotations are added:

annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "9121"
  1. Store the discovery configuration in a secret
    For these annotations to be picked up, Prometheus needs the following additional configuration:
    prometheus-additional.yaml

cat > prometheus-additional.yaml << EOF
- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
    action: replace
    target_label: __scheme__
    regex: (https?)
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
    action: replace
    target_label: __address__
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: \$1:\$2
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_service_name]
    action: replace
    target_label: kubernetes_name
EOF
Because the heredoc escapes shell variables, view the file afterwards to confirm the content:
cat prometheus-additional.yaml

The configuration above keeps only endpoints whose service carries prometheus.io/scrape=true.

Add the annotation to any service that should be monitored (the value must be lowercase "true" to match the regex above):

  annotations: 
     prometheus.io/scrape: "true"
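For example, a Service exposing a metrics port could be annotated like this. This is only a sketch; the name, namespace, port and selector are illustrative and not part of this installation:

apiVersion: v1
kind: Service
metadata:
  name: redis-exporter
  namespace: test
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9121"
    prometheus.io/path: "/metrics"
spec:
  selector:
    app: redis-exporter
  ports:
  - name: metrics
    port: 9121
    targetPort: 9121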

Save the configuration above as a secret:

kubectl delete secret additional-configs -n monitoring
kubectl create secret generic additional-configs --from-file=prometheus-additional.yaml -n monitoring
secret "additional-configs" created
kubectl get secret additional-configs -n monitoring  -o yaml 
  2. Add the configuration to the Prometheus instance
    Edit the Prometheus CRD and reference the secret created above:

vi prometheus-prometheus.yaml

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: k8s
  name: k8s
  namespace: monitoring
spec:
  ......
  additionalScrapeConfigs:
    name: additional-configs
    key: prometheus-additional.yaml
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: 2.46.0

kubectl apply -f prometheus-prometheus.yaml

Once the Prometheus CRD has been modified, open the Prometheus dashboard to check whether the config was updated:
http://prometheus.k8s.local:30180/targets?search=#pool-kubernetes-service-endpoints

kubectl get pods -n monitoring -o wide
kubectl rollout restart sts/prometheus-k8s -n monitoring
kubectl logs -f prometheus-k8s-0 prometheus -n monitoring
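To confirm the operator actually picked up the additional scrape config (a quick check; the output format may vary by version):

kubectl -n monitoring get prometheus k8s -o yaml | grep -A 2 additionalScrapeConfigs
kubectl -n monitoring get secret additional-configs -o jsonpath='{.data.prometheus-additional\.yaml}' | base64 -d | head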

Services return 503 after an NFS restart and pods cannot be terminated

#df -h hangs and NFS is stuck; the client server needs a reboot
kubectl get pods -n monitoring

kubectl delete -f prometheus-prometheus.yaml
kubectl delete pod prometheus-k8s-1  -n monitoring
kubectl delete pod prometheus-k8s-1 --grace-period=0 --force --namespace monitoring

kubectl delete -f grafana-deployment.yaml
kubectl apply -f grafana-deployment.yaml

kubectl apply -f prometheus-prometheus.yaml
kubectl logs -n monitoring pod prometheus-k8s-0 
kubectl describe -n monitoring pod prometheus-k8s-0 
kubectl describe -n monitoring pod prometheus-k8s-1 
kubectl describe -n monitoring pod grafana-65fdddb9c7-xml6m  

kubectl get pv,pvc -o wide

persistentvolume/pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19   10Gi       RWX            Delete           Bound      monitoring/grafana-pvc                          managed-nfs-storage            25h   Filesystem
persistentvolume/pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e   10Gi       RWO            Delete           Bound      monitoring/prometheus-k8s-db-prometheus-k8s-1   managed-nfs-storage            25h   Filesystem
persistentvolume/pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca   10Gi       RWO            Delete           Bound      monitoring/prometheus-k8s-db-prometheus-k8s-0   managed-nfs-storage            25h   Filesystem
persistentvolume/pvc-f9153444-5653-4684-a845-83bb313194d1   300Mi      RWX            Retain           Released   default/test-pvc2                               managed-nfs-storage            29d   Filesystem

#delete everything and reinstall
kubectl delete -f manifests/
kubectl apply -f manifests/

When NFS misbehaves, processes reading the NFS mount block on I/O until the threads are exhausted and the pod can no longer respond to Kubernetes health checks. After a while Kubernetes restarts the pod, but because NFS is broken the umount hangs too, and the pod stays in Terminating state.

Unmount the NFS mount points on the node where the pod was originally scheduled:
mount -l | grep nfs

sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
192.168.244.6:/nfs/k8s/dpv/monitoring-prometheus-k8s-db-prometheus-k8s-1-pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e on /var/lib/kubelet/pods/67309a97-b69c-4423-9353-74863d55b3be/volumes/kubernetes.io~nfs/pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e type nfs4 (rw,relatime,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.244.7,local_lock=none,addr=192.168.244.6)
192.168.244.6:/nfs/k8s/dpv/monitoring-prometheus-k8s-db-prometheus-k8s-1-pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e/prometheus-db on /var/lib/kubelet/pods/67309a97-b69c-4423-9353-74863d55b3be/volume-subpaths/pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e/prometheus/2 type nfs4 (rw,relatime,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.244.7,local_lock=none,addr=192.168.244.6)
umount -l -f /var/lib/kubelet/pods/67309a97-b69c-4423-9353-74863d55b3be/volumes/kubernetes.io~nfs/pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e

Change the default mount to soft

vi /etc/nfsmount.conf
Soft=True

soft: when an NFS client mounts the server with soft and the network or server fails so that client and server can no longer exchange data, the client keeps retrying until the timeout, then reports an error and stops. With a soft mount, data may be lost when the timeout hits, so it is generally not recommended.
hard: the default. The opposite of soft: the client keeps trying to reach the server; if the server responds, the operation resumes; if not, the client retries forever and cannot be umounted or killed, which is why it is usually combined with intr.
intr: when a hard-mounted resource times out, intr allows the operation to be interrupted, which prevents the whole system from being locked up by NFS when something goes wrong. Recommended.
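Inside Kubernetes the same effect can be achieved per volume with mountOptions on the PV (or on the StorageClass) instead of editing /etc/nfsmount.conf on every node; a sketch with illustrative name and values:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: managed-nfs-storage
  mountOptions:
  - soft
  - timeo=100
  - retrans=3
  nfs:
    path: /nfs/k8s/dpv
    server: 192.168.244.6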

After a StatefulSet's PV is deleted, the recreated pod still looks for the original PV, so deleting PVs is not recommended

kubectl get pv -o wide
pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e   10Gi       RWO            Retain           Bound      monitoring/prometheus-k8s-db-prometheus-k8s-1   managed-nfs-storage            26h   Filesystem
pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca   10Gi       RWO            Retain           Bound      monitoring/prometheus-k8s-db-prometheus-k8s-0   managed-nfs-storage            26h   Filesystem

kubectl patch pv pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e -p '{"metadata":{"finalizers":null}}'
kubectl patch pv pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca -p '{"metadata":{"finalizers":null}}'
kubectl delete pv pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e
kubectl delete pv pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca 

kubectl describe pvc pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19 | grep Mounted
kubectl patch pv pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19 -p '{"metadata":{"finalizers":null}}'
kubectl delete pv pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19

Recover the PVs

Recover grafana-pvc: under the dynamic PV directory on the NFS server, find the original mount point monitoring-grafana-pvc-pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19.

kubectl describe -n monitoring pod grafana-65fdddb9c7-xml6m
default-scheduler 0/3 nodes are available: persistentvolumeclaim "grafana-pvc" bound to non-existent persistentvolume "pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19". preemption: 0/3 nodes are available:
3 Preemption is not helpful for scheduling..

cat > rebuid-grafana-pvc.yaml  << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19
  labels:
    pv: pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName:  managed-nfs-storage 
  nfs:
    path: /nfs/k8s/dpv/monitoring-grafana-pvc-pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19
    server: 192.168.244.6
EOF
kubectl apply -f ../k8s/rebuid-grafana-pvc.yaml 

Recover prometheus-k8s-0

kubectl describe -n monitoring pod prometheus-k8s-0
Warning FailedScheduling 14m (x3 over 24m) default-scheduler 0/3 nodes are available: persistentvolumeclaim "prometheus-k8s-db-prometheus-k8s-0" bound to non-existent persistentvolume "pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca". preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..

cat > rebuid-prometheus-k8s-0-pv.yaml  << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca
  labels:
    pv: pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName:  managed-nfs-storage 
  nfs:
    path: /nfs/k8s/dpv/monitoring-prometheus-k8s-db-prometheus-k8s-0-pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca
    server: 192.168.244.6
EOF

kubectl describe -n monitoring pod prometheus-k8s-1
Warning FailedScheduling 19m (x3 over 29m) default-scheduler 0/3 nodes are available: persistentvolumeclaim "prometheus-k8s-db-prometheus-k8s-1" bound to non-existent persistentvolume "pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e". preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..

cat > rebuid-prometheus-k8s-1-pv.yaml  << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e
  labels:
    pv: pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName:  managed-nfs-storage 
  nfs:
    path: /nfs/k8s/dpv/monitoring-prometheus-k8s-db-prometheus-k8s-1-pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e
    server: 192.168.244.6
EOF
kubectl apply -f rebuid-prometheus-k8s-0-pv.yaml 
kubectl apply -f rebuid-prometheus-k8s-1-pv.yaml 

kubectl get pv -o wide
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                           STORAGECLASS          REASON   AGE     VOLUMEMODE
pvc-6dfcbb35-dd1a-4784-8c97-34affe78fe19   10Gi       RWX            Retain           Bound    monitoring/grafana-pvc                          managed-nfs-storage            9m17s   Filesystem
pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e   10Gi       RWX            Retain           Bound    monitoring/prometheus-k8s-db-prometheus-k8s-1   managed-nfs-storage            17s     Filesystem
pvc-c57701e8-6ee1-48f0-b23c-a966fd8a18ca   10Gi       RWX            Retain           Bound    monitoring/prometheus-k8s-db-prometheus-k8s-0   managed-nfs-storage            2m37s   Filesystem
kubectl get pods -n monitoring
kubectl -n monitoring logs -f prometheus-k8s-1

Error from server (BadRequest): container "prometheus" in pod "prometheus-k8s-1" is waiting to start: PodInitializing
iowait is very high
iostat -kx 1
There are many stuck mount processes
ps aux|grep mount

mount -t nfs 192.168.244.6:/nfs/k8s/dpv/monitoring-prometheus-k8s-db-prometheus-k8s-1-pvc-bb00943e-c32c-4972-9fb8-e8862fb92d9e ./tmp
showmount -e 192.168.244.6
Export list for 192.168.244.6:
/nfs/k8s/dpv     *
/nfs/k8s/spv_003 *
/nfs/k8s/spv_002 *
/nfs/k8s/spv_001 *
/nfs/k8s/web     *

mount -v -t nfs 192.168.244.6:/nfs/k8s/web ./tmp
mount.nfs: timeout set for Fri Nov 24 14:33:04 2023
mount.nfs: trying text-based options 'soft,vers=4.1,addr=192.168.244.6,clientaddr=192.168.244.5'

mount -v -t nfs -o vers=3  192.168.244.6:/nfs/k8s/web ./tmp
#NFS v3 can be mounted

If the server-side NFS service suddenly stops while a client has the share mounted and in use, df -h will hang on the client.
You can kill the stuck mount, restart the NFS services on both client and server and remount, or reboot the server.



k8s_install_11_deploy_openresty

11. Deploying OpenResty

Prepare the image

docker search openresty 
#pick whichever image you prefer

#official OpenResty image
docker pull openresty/openresty 
docker images |grep openresty
openresty/openresty                                                            latest                eaeb31afac25   4 weeks ago     93.2MB
docker inspect openresty/openresty:latest
docker tag docker.io/openresty/openresty:latest repo.k8s.local/docker.io/openresty/openresty:latest
docker tag docker.io/openresty/openresty:latest repo.k8s.local/docker.io/openresty/openresty:1.19.9.1
docker push repo.k8s.local/docker.io/openresty/openresty:latest
docker push repo.k8s.local/docker.io/openresty/openresty:1.19.9.1

Inspect the image

docker inspect openresty/openresty

                "resty_deb_version": "=1.19.9.1-1~bullseye1",
docker run -it openresty/openresty sh

apt-get update
apt-get install tree procps inetutils-ping net-tools

nginx -v
nginx version: openresty/1.19.9.1

ls /usr/local/openresty/nginx/conf

Copy the configuration file out of the image

docker run --name openresty -d openresty/openresty
docker cp openresty:/usr/local/openresty/nginx/conf/nginx.conf ./
docker stop openresty

Create a ConfigMap from the file

kubectl get cm -ntest
kubectl create -ntest configmap test-openresty-nginx-conf --from-file=nginx.conf=./nginx.conf
configmap/test-openresty-nginx-conf created

Modify the ConfigMap directly with the edit command

kubectl edit -ntest cm test-openresty-nginx-conf

Replace it via replace

Since ConfigMaps are usually created from files rather than hand-written YAML manifests, we also edit the source file when making changes. However, replace has no --from-file flag, so it cannot replace directly from the source file; the command below works around this.
The key is the --dry-run=client flag, which builds the object without sending it to the apiserver; combined with -oyaml it prints the fully rendered YAML. Piping that output into replace, with - as the input file, performs the replacement.

kubectl create -ntest cm  test-openresty-nginx-conf --from-file=nginx.conf --dry-run=client -oyaml | kubectl -ntest replace -f-
configmap/test-openresty-nginx-conf replaced

Delete

kubectl delete cm test-openresty-nginx-conf

Making the ConfigMap immutable

Add immutable: true to the ConfigMap:

kubectl edit -ntest cm test-openresty-nginx-conf
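A minimal sketch of what the ConfigMap looks like once the field is set (the data content is abbreviated; note that an immutable ConfigMap can no longer be edited, only deleted and recreated):

apiVersion: v1
kind: ConfigMap
metadata:
  name: test-openresty-nginx-conf
  namespace: test
immutable: true
data:
  nginx.conf: |
    # ... contents of nginx.conf ...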

Prepare the YAML files

#test-openresty-deploy.yaml
#the bitnami image uses host directories and runs as user 1001 by default
#to run as user 500 (www) instead, add under spec:
      securityContext:
        runAsUser: 500
#to allow binding to ports below 1024:
      containers:
        capabilities:
          add:
          - NET_BIND_SERVICE
#mount the host's time zone file and the nginx directories
cat > test-openresty-deploy.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openresty
  namespace: test
spec:
  selector :
    matchLabels:
      app: openresty
  replicas: 1
  template:
    metadata:
      labels:
        app: openresty
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      securityContext:
        runAsUser: 500
      containers:
      - name: openresty
        resources:
          limits:
            cpu: "20m"
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 10Mi
        env:
        - name: TZ
          value: Asia/Shanghai
        image: repo.k8s.local/docker.io/openresty/openresty:1.19.9.1
        #command: ["/bin/bash", "-ce", "tail -f /dev/null"]
        #command: ["/opt/bitnami/scripts/openresty/run.sh"]
        ports:
        - containerPort: 80
        volumeMounts:
        #- name: vol-opresty-conf
        #  mountPath: /opt/bitnami/openresty/nginx/conf/
        - name: nginx-conf   # volume name
          mountPath: /usr/local/openresty/nginx/conf/nginx.conf      # mount path
          subPath: etc/nginx/nginx.conf         # must match volumes[0].items.path
        - name: vol-opresty-html
          mountPath: "/usr/share/nginx/html/"
        - name: vol-opresty-log
          mountPath: "/var/log/nginx/"
      volumes:
      - name: nginx-conf  # volume name
        configMap:        # the volume type is configMap
          name: test-openresty-nginx-conf    # configMap name
          items:       # which keys from the configMap to mount
          - key: nginx.conf    # file name inside the configMap
            path: etc/nginx/nginx.conf           # subPath path
      #- name: vol-opresty-conf
      #  hostPath:
      #    path: /nginx/openresty/conf/
      #    type: DirectoryOrCreate
      - name: vol-opresty-html
        hostPath:
          path: /nginx/html/
          type: DirectoryOrCreate
      - name: vol-opresty-log
        hostPath:
          path: /nginx/logs/
          type: DirectoryOrCreate
      affinity: # approach 4: spread pods across different nodes where possible
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - openresty
                topologyKey: kubernetes.io/hostname
EOF

#openresty NodePort service
test-openresty-svc-nodeport.yaml
cat > test-openresty-svc-nodeport.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: svc-openresty
  namespace: test
spec:
  ports:
  - {name: http, nodePort: 32080, port: 31080, protocol: TCP, targetPort: 8089}
  selector: {app: openresty}
  type: NodePort
EOF
#ingress binding
test-openresty-ingress.yaml
cat > test-openresty-ingress.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc-openresty
  annotations:
    kubernetes.io/ingress.class: "nginx"
  namespace: test
spec:
  rules:
  - http:
      paths:
      - path: /showvar
        pathType: Prefix
        backend:
          service:
            name: svc-openresty
            port:
              number: 31080
EOF

Error:
server-snippet annotation cannot be used. Snippet directives are disabled by the Ingress administrator

Enable it in the ingress-nginx ConfigMap (kind: ConfigMap):
allow-snippet-annotations: "true"

cat > ingress-nginx-ConfigMap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: "true"
  worker-processes: "auto" #worker_processes
  server-name-hash-bucket-size: "128" #server_names_hash_bucket_size
  variables-hash-bucket-size: "256" #variables_hash_bucket_size
  variables-hash-max-size: "2048" #variables_hash_max_size
  client-header-buffer-size: "32k" #client_header_buffer_size
  proxy-body-size: "8m" #client_max_body_size
  large-client-header-buffers: "4 512k" #large_client_header_buffers
  client-body-buffer-size: "512k" #client_body_buffer_size
  proxy-connect-timeout : "5" #proxy_connect_timeout
  proxy-read-timeout: "60" #proxy_read_timeout
  proxy-send-timeout: "5" #proxy_send_timeout
  proxy-buffer-size: "32k" #proxy_buffer_size
  proxy-buffers-number: "8 32k" #proxy_buffers
  keep-alive: "60" #keepalive_timeout
  enable-real-ip: "true" 
  use-forwarded-headers: "true"
  forwarded-for-header: "ns_clientip" #real_ip_header
  compute-full-forwarded-for: "true"
  enable-underscores-in-headers: "true" #underscores_in_headers on
  proxy-real-ip-cidr: 192.168.0.0/16,10.244.0.0/16  #set_real_ip_from
  access-log-path: "/var/log/nginx/access_$hostname.log"
  error-log-path: "/var/log/nginx/error.log"
  #log-format-escape-json: "true"
  log-format-upstream: '{"timestamp": "$time_iso8601", "requestID": "$req_id", "proxyUpstreamName":
    "$proxy_upstream_name","hostname": "$hostname","host": "$host","body_bytes_sent": "$body_bytes_sent","proxyAlternativeUpstreamName": "$proxy_alternative_upstream_name","upstreamStatus":
    "$upstream_status", "geoip_country_code": "$geoip_country_code","upstreamAddr": "$upstream_addr","request_time":
    "$request_time","httpRequest":{ "remoteIp": "$remote_addr","realIp": "$realip_remote_addr","requestMethod": "$request_method", "requestUrl":
    "$request_uri", "status": $status,"requestSize": "$request_length", "responseSize":
    "$upstream_response_length", "userAgent": "$http_user_agent",
    "referer": "$http_referer","x-forward-for":"$proxy_add_x_forwarded_for","latency": "$upstream_response_time", "protocol":"$server_protocol"}}'
EOF

kubectl delete -f ingress-nginx-ConfigMap.yaml
kubectl apply -f ingress-nginx-ConfigMap.yaml
kubectl edit configmap -n ingress-nginx ingress-nginx-controller
#ingress binding with server-snippet
#real_ip takes effect in the server block, i.e. for the whole domain
#the IP whitelist (whitelist-source-range) takes effect in location = /showvar and is matched against remote_addr; use it only when a domain-wide whitelist is needed. It renders as: allow 223.2.2.0/24; deny all;

test-openresty-ingress-snippet.yaml
cat > test-openresty-ingress-snippet.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc-openresty
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/server-snippet: |
      underscores_in_headers on;
      set_real_ip_from 10.244.0.0/16;
      set_real_ip_from 192.168.0.0/16;
      real_ip_header ns_clientip;
      #real_ip_recursive on;
    nginx.ingress.kubernetes.io/whitelist-source-range: 127.0.0.1/32,192.168.0.0/16,10.244.0.0/16,223.2.2.0/24
  namespace: test
spec:
  rules:
  - http:
      paths:
      - path: /showvar
        pathType: Prefix
        backend:
          service:
            name: svc-openresty
            port:
              number: 31080
EOF

kubectl apply -f test-openresty-ingress-snippet.yaml

Create the mount directories on the worker node

mkdir -p /nginx/openresty/conf
mkdir -p /nginx/{html,logs}
chown -R www:website /nginx/

Start the pod once first

kubectl apply -f test-openresty-deploy.yaml

#查看pod并得到name
kubectl get pods -o wide -n test
NAME                           READY   STATUS    RESTARTS        AGE     IP             NODE               NOMINATED NODE   READINESS GATES
nginx-deploy-7c9674d99-v92pd   1/1     Running   0               4d17h   10.244.1.97    node01.k8s.local   <none>           <none>
openresty-fdc45bdbc-jh67k      1/1     Running   0               8m41s   10.244.2.59    node02.k8s.local   <none>           <none>

Copy the configuration files from inside the pod to the master, then transfer them to the worker node

kubectl -n test logs -f openresty-76cf797cfc-gccsl

cd /nginx/openresty/conf
#any of the following three ways works
kubectl cp test/openresty-76cf797cfc-gccsl:/opt/bitnami/openresty/nginx/conf /nginx/openresty/conf

kubectl cp test/openresty-76cf797cfc-gccsl:/opt/bitnami/openresty/nginx/conf ./

kubectl exec "openresty-76cf797cfc-gccsl" -n "test" -- tar cf - "/opt/bitnami/openresty/nginx/conf" | tar xf - 

chown -R www:website .

#copy from the master to node2
scp -r * [email protected]:/nginx/openresty/conf/

include "/opt/bitnami/openresty/nginx/conf/server_blocks/*.conf";

Create the web virtual-host file on node2

It corresponds to /opt/bitnami/openresty/nginx/conf/server_blocks/ inside the pod.

vi /nginx/openresty/nginx/conf/server_blocks/default.conf 
server {
    listen       8089;
    server_name  localhost;

    access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    location /showvar {
        default_type text/plain;           
        echo time_local: $time_local;
        echo hostname: $hostname;
        echo server_addr: $server_addr;
        echo server_port: $server_port;
        echo host: $host;
        echo scheme: $scheme;
        echo http_host: $http_host;
        echo uri: $uri;
        echo remote_addr: $remote_addr;
        echo remote_port: $remote_port;
        echo remote_user: $remote_user;
        echo realip_remote_addr: $realip_remote_addr;
        echo realip_remote_port: $realip_remote_port;
        echo http_ns_clientip: $http_ns_clientip;
        echo http_user_agent: $http_user_agent;
        echo http_x_forwarded_for: $http_x_forwarded_for;
        echo proxy_add_x_forwarded_for: $proxy_add_x_forwarded_for;
        echo X-Request-ID: $http_x_request_id;
        echo X-Real-IP: $http_x_real_ip;
        echo X-Forwarded-Host: $http_x_forwarded_host;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

cat > /nginx/openresty/nginx/conf/server_blocks/ngxrealip.conf <<EOF
    underscores_in_headers on;
    #ignore_invalid_headers off;
    set_real_ip_from   10.244.0.0/16;
    real_ip_header    ns_clientip;

EOF

After changing the OpenResty configuration directory in test-openresty-deploy.yaml, restart the pod

cat > test-openresty-deploy.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openresty
  namespace: test
spec:
  selector :
    matchLabels:
      app: openresty
  replicas: 1
  template:
    metadata:
      labels:
        app: openresty
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      securityContext:
        runAsNonRoot: true
        runAsUser: 500
        #runAsGroup: 500
      nodeName: 
        node02.k8s.local
      containers:
      - name: openresty
        image: repo.k8s.local/docker.io/bitnami/openresty:latest
        #command: ["/bin/bash", "-ce", "tail -f /dev/null"]
        command: ["/opt/bitnami/scripts/openresty/run.sh"]
        ports:
        - containerPort: 80
        volumeMounts:
        - name: timezone
          mountPath: /etc/localtime  
        - name: vol-opresty-conf
          mountPath: /opt/bitnami/openresty/nginx/conf/
          readOnly: true
        - name: vol-opresty-html
          mountPath: "/usr/share/nginx/html/"
        - name: vol-opresty-log
          mountPath: "/var/log/nginx/"
      volumes:
      - name: timezone       
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai  
      - name: vol-opresty-conf
        hostPath:
          path: /nginx/openresty/nginx/conf/
          type: DirectoryOrCreate
      - name: vol-opresty-html
        hostPath:
          path: /nginx/html/
          type: DirectoryOrCreate
      - name: vol-opresty-log
        hostPath:
          path: /nginx/logs/
          type: DirectoryOrCreate
      #nodeSelector:
        #ingresstype: ingress-nginx
EOF
#创建/关闭 test-openresty-deploy.yaml 
kubectl apply -f test-openresty-deploy.yaml
kubectl delete -f test-openresty-deploy.yaml

#创建/关闭 openresty nodeport 服务
kubectl apply -f test-openresty-svc-nodeport.yaml
kubectl delete -f test-openresty-svc-nodeport.yaml

#创建/关闭 openresty ingress 关联服务
kubectl apply -f test-openresty-ingress.yaml
kubectl delete -f test-openresty-ingress.yaml

#查看pod
kubectl get pods -o wide -n test
NAME                            READY   STATUS    RESTARTS     AGE   IP            NODE               NOMINATED NODE   READINESS GATES
openresty-b6d7798f8-h47xj       1/1     Running   0            64m   10.244.2.24   node02.k8s.local   <none>           <none>

#查看service
kubectl get service -n test
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
svc-openresty    NodePort    10.96.30.145    <none>        31080:32080/TCP   3d6h

#查看 ingress 关联
kubectl get  Ingress -n test
NAME                     CLASS    HOSTS   ADDRESS     PORTS   AGE
ingress-svc-openresty    <none>   *       localhost   80      2m22s
ingress-svc-test-nginx   <none>   *       localhost   80      3d22h

#查看ingress
kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.96.111.201   <none>        80:30080/TCP,443:30443/TCP   4d2h
ingress-nginx-controller-admission   ClusterIP   10.96.144.105   <none>        443/TCP                      4d2h

#查看详情
kubectl -n test describe pod openresty-b6d7798f8-h47xj
kubectl -n test logs -f openresty-b6d7798f8-h47xj

kubectl -n test describe pod openresty-59449454db-6knwv 
kubectl -n test logs -f  openresty-59449454db-mj74d
#进入容器
kubectl exec -it pod/openresty-b6d7798f8-h47xj -n test -- /bin/sh
kubectl exec -it pod/openresty-fdc45bdbc-jh67k -n test -- /bin/sh

#test-openresty-deploy.yaml 中已将pod内openresty 的配置、web根目录、日志都映射到宿主node2上.
#可以在宿主上操作相应文件,创建一个输出node名的首页
echo `hostname` >  /nginx/html/index.html

#不进入容器重启服务
kubectl exec -it pod/openresty-b6d7798f8-h47xj -n test -- /bin/sh -c '/opt/bitnami/openresty/bin/openresty -t'
kubectl exec -it pod/openresty-b6d7798f8-h47xj -n test -- /bin/sh -c '/opt/bitnami/openresty/bin/openresty -s reload'
kubectl exec -it pod/openresty-6b59dd984d-bj2xt -n test -- /bin/sh -c '/opt/bitnami/openresty/bin/openresty -t'
kubectl exec -it pod/openresty-6b59dd984d-bj2xt -n test -- /bin/sh -c '/opt/bitnami/openresty/bin/openresty -s reload'
kubectl exec -it pod/openresty-6b59dd984d-bj2xt -n test -- /bin/sh -c 'ls /opt/bitnami/openresty/nginx/conf/server_blocks/'

查看流量

列出当前节点网卡及ip

for i in `ifconfig | grep -o ^[a-z0-9\.@]*`; do echo -n "$i : ";ifconfig $i|sed -n 2p|awk '{ print $2 }'; done

master01
cni0 : 10.244.0.1
enp0s3 : 192.168.244.4
flannel.1 : 10.244.0.0

node01
cni0 : 10.244.1.1
enp0s3 : 192.168.244.5
flannel.1 : 10.244.1.0

node02
cni0 : 10.244.2.1
enp0s3 : 192.168.244.7
flannel.1 : 10.244.2.0

当前ingress 开了HostNetwork+nodeport 30080,关联到node01,node02,不关联master
当前openresty service 开了 nodeport 32080,只关联到node02.

pod ip+pod targetPort 可以在集群内任意节点访问
在集群内访问openresty clusterip+clusterport,请求会先经过VIP,再由kube-proxy分发到各个pod上面,使用ipvsadm命令来查看这些负载均衡的转发
在集群内外通过任意 nodeip+nodeport 访问openresty service时,没有部署对应pod的节点会转发一次;NodePort服务的请求路径是从K8S节点IP直接到Pod,并不经过ClusterIP,但这个转发逻辑依旧由kube-proxy实现
在集群内外访问ingress的nodeport时,NodePort的解析结果是一个CLUSTER-IP,在集群内部请求的负载均衡逻辑和实现与ClusterIP Service是一致的
在集群内外访问ingress的hostnetwork时,如果目标pod不在本机,ingress nginx会跨node反代一次,否则本机直接代理,即0~1次转发;没有部署ingress的节点不能访问
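
下面给出一种查看这些转发规则的方式(示意,假设kube-proxy为ipvs模式,且节点上已安装ipvsadm;ClusterIP/NodePort取自上文):

#查看 openresty service 的 ClusterIP 转发表,后端应为 pod ip:8089
ipvsadm -Ln -t 10.96.30.145:31080

#查看本机 NodePort 32080 的转发规则
ipvsadm -Ln | grep -A 2 32080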

openresty pod pod ip+pod targetPort

openresty pod ip+pod port 可以在集群内任意节点访问,集群外不能访问

在node02 访问node02 上 openresty 的pod ip+pod targetPort

curl http://10.244.2.24:8089/showvar
time_local: 30/Oct/2023:16:51:23 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 10.244.2.24
scheme: http
http_host: 10.244.2.24:8089
uri: /showvar
remote_addr: 10.244.2.1
remote_port: 42802
remote_user: 
http_x_forwarded_for: 

流量在本机node02上
10.244.2.1->10.244.2.24

在node01 访问node02 上 openresty 的pod ip+pod targetPort

curl http://10.244.2.24:8089/showvar
time_local: 30/Oct/2023:16:51:25 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 10.244.2.24
scheme: http
http_host: 10.244.2.24:8089
uri: /showvar
remote_addr: 10.244.1.0
remote_port: 39108
remote_user: 
http_x_forwarded_for:

流量在从node01上到node02
10.244.1.0->10.244.2.24

openresty service clusterip+clusterport

在master 访问node02 上 openresty service 的clusterip+clusterport

curl http://10.96.30.145:31080/showvar
time_local: 31/Oct/2023:10:15:49 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 10.96.30.145
scheme: http
http_host: 10.96.30.145:31080
uri: /showvar
remote_addr: 10.244.0.0
remote_port: 1266
remote_user: 
http_x_forwarded_for: 

在node2 访问node02 上 openresty service 的clusterip+clusterport

curl http://10.96.30.145:31080/showvar
time_local: 31/Oct/2023:10:18:01 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 10.96.30.145
scheme: http
http_host: 10.96.30.145:31080
uri: /showvar
remote_addr: 10.244.2.1
remote_port: 55374
remote_user: 
http_x_forwarded_for: 
http_user_agent: curl/7.29.0

在node2中pode内使用service域名访问

kubectl exec -it pod/test-pod-86df6cd59b-x8ndr -n test -- curl http://svc-openresty.test.svc.cluster.local:31080/showvar
time_local: 01/Nov/2023:11:11:46 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: svc-openresty.test.svc.cluster.local
scheme: http
http_host: svc-openresty.test.svc.cluster.local:31080
uri: /showvar
remote_addr: 10.244.2.41
remote_port: 43594
remote_user: 
http_x_forwarded_for: 
http_user_agent: curl/7.81.0

host自动补充

kubectl exec -it pod/test-pod-86df6cd59b-x8ndr -n test -- curl http://svc-openresty:31080/showvar     
time_local: 01/Nov/2023:11:15:26 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: svc-openresty
scheme: http
http_host: svc-openresty:31080
uri: /showvar
remote_addr: 10.244.2.41
remote_port: 57768
remote_user: 
http_x_forwarded_for: 
http_user_agent: curl/7.81.0
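
短名称之所以能解析,是因为pod内 /etc/resolv.conf 的 search 列表会自动补全域名,可以这样确认(输出为示意):

kubectl exec -it pod/test-pod-86df6cd59b-x8ndr -n test -- cat /etc/resolv.conf
#search test.svc.cluster.local svc.cluster.local cluster.local
#nameserver 10.96.0.10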

在集群内外通过任意nodeip+nodeport访问 openresty service

pod所在的node不需转发,没有对应pod的node会转发一次

curl http://192.168.244.4:32080
node02.k8s.local

curl http://192.168.244.4:32080/showvar
time_local: 30/Oct/2023:16:27:09 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 127.0.0.1
scheme: http
http_host: 127.0.0.1:32080
uri: /showvar
remote_addr: 10.244.0.0
remote_port: 22338
remote_user: 
http_x_forwarded_for: 

流量从master nodeip到node02 clusterip,跨node转发了一次
192.168.244.4->10.244.0.0->10.244.2.24

指向node2的32080
curl http://192.168.244.7:32080/showvar
流量从node2 nodeip到clusterip
192.168.244.7->10.244.2.1->10.244.2.24

在集群内外通过任意nodeip+nodeport访问 ingress service

访问没有部署ingress 的master nodeip+nodeport,会转发一次或两次(负载均衡的原因)

curl http://192.168.244.4:30080/showvar/
time_local: 30/Oct/2023:16:36:16 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 127.0.0.1
scheme: http
http_host: 127.0.0.1:30080
uri: /showvar/
remote_addr: 10.244.1.0
remote_port: 58680
remote_user: 
http_x_forwarded_for: 192.168.244.4

从master nodeip经flannel到ingress所在的node01或node02,再到node02,http_x_forwarded_for多了192.168.244.4
192.168.244.4->10.244.0.0->10.244.1.0->10.244.2.24
http_x_forwarded_for固定192.168.244.4
remote_addr会是10.244.1.0或10.244.2.0

在master上访问有部署ingress 的node02 nodeip+nodeport,会转发一次或两次

#两种情况

curl http://192.168.244.7:30080/showvar/
time_local: 30/Oct/2023:17:07:10 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 127.0.0.1
scheme: http
http_host: 127.0.0.1:30080
uri: /showvar/
remote_addr: 10.244.1.0
remote_port: 44200
remote_user: 
http_x_forwarded_for: 192.168.244.7

在master访问node02 nodeip经过node01再回到node02
192.168.244.7->10.244.2.0->10.244.1.0->10.244.2.24

curl http://192.168.244.7:30080/showvar/
time_local: 30/Oct/2023:17:18:06 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 192.168.244.7
scheme: http
http_host: 192.168.244.7:30080
uri: /showvar/
remote_addr: 10.244.2.1
remote_port: 45772
remote_user: 
http_x_forwarded_for: 192.168.244.4
从node02 nodeip经kubeproxy调度到master再到node02的ingress,再回到node02 
192.168.244.4->192.168.244.7->10.244.2.1->10.244.2.24

ingress hostnetwork

在master上访问node02的hostnetwork ,直接访问,没有跨node转发

curl http://192.168.244.7:80/showvar/
time_local: 30/Oct/2023:17:26:37 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 192.168.244.7
scheme: http
http_host: 192.168.244.7
uri: /showvar/
remote_addr: 10.244.2.1
remote_port: 45630
remote_user: 
http_x_forwarded_for: 192.168.244.4

192.168.244.7->10.244.2.1->10.244.2.24

在master上访问node01的hostnetwork ,经ingress转发一次

curl http://192.168.244.5:80/showvar/
time_local: 30/Oct/2023:17:28:10 +0800
hostname: openresty-b6d7798f8-h47xj
server_addr: 10.244.2.24
server_port: 8089
host: 192.168.244.5
scheme: http
http_host: 192.168.244.5
uri: /showvar/
remote_addr: 10.244.1.0
remote_port: 48512
remote_user: 
http_x_forwarded_for: 192.168.244.4

192.168.244.5->10.244.1.0->10.244.2.24

在master上访问master自身,因为master没有部署ingress,所以不能访问

curl http://192.168.244.4:80/showvar/
curl: (7) Failed connect to 192.168.244.4:80; Connection refused

错误:

2023/10/30 14:43:23 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /opt/bitnami/openresty/nginx/conf/nginx.conf:2
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /opt/bitnami/openresty/nginx/conf/nginx.conf:2
2023/10/30 14:43:23 [emerg] 1#1: mkdir() "/opt/bitnami/openresty/nginx/tmp/client_body" failed (13: Permission denied)

nginx: [emerg] mkdir() "/opt/bitnami/openresty/nginx/tmp/client_body" failed (13: Permission denied)
权限不对,去除securityContext中runAsGroup

      securityContext:
        runAsUser: 1000
        #runAsGroup: 1000

deployment多个pod

openresty使用nfs存放配置文件及web文件
PV和StorageClass是集群级资源,不受限于Namespace;PVC受限于Namespace,必须与使用它的Pod处于同一个Namespace

#准备pv
cat > test-openresty-spv.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-openresty-cfg-spv
  namespace: test
  labels:
    pv: test-openresty-cfg-spv
spec:
  capacity:
    storage: 300Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /nfs/k8s/cfg/openresty
    server: 192.168.244.6
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-openresty-web-spv
  namespace: test
  labels:
    pv: test-openresty-web-spv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /nfs/k8s/web/openresty
    server: 192.168.244.6
EOF
#准备pvc
cat > test-openresty-pvc.yaml  << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-openresty-cfg-pvc
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 300Mi
  selector:
    matchLabels:
      pv: test-openresty-cfg-spv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-openresty-web-pvc
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      pv: test-openresty-web-spv
EOF
#准备openresty Deployment
cat > test-openresty-deploy.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openresty
  namespace: test
spec:
  selector :
    matchLabels:
      app: openresty
  replicas: 2
  template:
    metadata:
      labels:
        app: openresty
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      securityContext:
        runAsNonRoot: true
        runAsUser: 500
        #runAsGroup: 500
      #nodeName: 
      #  node02.k8s.local
      containers:
      - name: openresty
        image: repo.k8s.local/docker.io/bitnami/openresty:latest
        #command: ["/bin/bash", "-ce", "tail -f /dev/null"]
        command: ["/opt/bitnami/scripts/openresty/run.sh"]
        ports:
        - containerPort: 80
        volumeMounts:
        - name: timezone
          mountPath: /etc/localtime  
        - name: vol-opresty-conf
          mountPath: /opt/bitnami/openresty/nginx/conf/
          readOnly: true
        - name: vol-opresty-html
          mountPath: "/usr/share/nginx/html/"
        - name: vol-opresty-log
          mountPath: "/var/log/nginx/"
      volumes:
      - name: timezone       
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai  
      - name: vol-opresty-conf
        #hostPath:
        #  path: /nginx/openresty/nginx/conf/
        #  type: DirectoryOrCreate
        persistentVolumeClaim:
          claimName: test-openresty-cfg-pvc
      - name: vol-opresty-html
        #hostPath:
        #  path: /nginx/html/
        #  type: DirectoryOrCreate
        persistentVolumeClaim:
          claimName: test-openresty-web-pvc
      - name: vol-opresty-log
        hostPath:
          path: /nginx/logs/
          type: DirectoryOrCreate
      nodeSelector:
        ingresstype: ingress-nginx
EOF
kubectl apply -f test-openresty-spv.yaml
kubectl delete -f test-openresty-spv.yaml
kubectl apply -f test-openresty-pvc.yaml
kubectl delete -f test-openresty-pvc.yaml

#创建/关闭 test-openresty-deploy.yaml 
kubectl apply -f test-openresty-deploy.yaml
kubectl delete -f test-openresty-deploy.yaml

#创建/关闭 openresty nodeport 服务
kubectl apply -f test-openresty-svc-nodeport.yaml
kubectl delete -f test-openresty-svc-nodeport.yaml

#创建/关闭 openresty ingress 关联服务
kubectl apply -f test-openresty-ingress.yaml
kubectl delete -f test-openresty-ingress.yaml

#查看pod
kubectl get pods -o wide -n test
NAME                            READY   STATUS    RESTARTS     AGE   IP            NODE               NOMINATED NODE   READINESS GATES
openresty-6b5c6c6966-h6z6d     1/1     Running   7 (47m ago)     53m     10.244.1.107   node01.k8s.local   <none>           <none>
openresty-6b5c6c6966-l667p     1/1     Running   6 (50m ago)     53m     10.244.2.69    node02.k8s.local   <none>           <none>

kubectl get pv,pvc -n test

kubectl get sc

kubectl describe pvc -n test
若报错 storageclass.storage.k8s.io "nfs" not found
则从pv和pvc定义中去除 storageClassName: nfs

#查看详情
kubectl -n test describe pod openresty-6b5c6c6966-l667p
kubectl -n test logs -f openresty-6b5c6c6966-h6z6d

kubectl exec -it pod/openresty-b4475b994-m72qg -n test -- /bin/sh -c '/opt/bitnami/openresty/bin/openresty -t'
kubectl exec -it pod/openresty-b4475b994-m72qg -n test -- /bin/sh -c '/opt/bitnami/openresty/bin/openresty -s reload'
kubectl exec -it pod/openresty-6b5c6c6966-h6z6d -n test -- /bin/sh -c 'ls /opt/bitnami/openresty/nginx/conf/server_blocks/'

修改
sed -E -n 's/remote_user:.*/remote_user:test2;/p' /nfs/k8s/cfg/openresty/server_blocks/default.conf 
sed -E -i 's/remote_user:.*/remote_user:test5;/' /nfs/k8s/cfg/openresty/server_blocks/default.conf 

1.通过 Rollout 平滑重启 Pod
kubectl rollout restart deployment/openresty -n test

2.kubectl set env
kubectl set env deployment openresty -n test DEPLOY_DATE="$(date)"

3.调整 Pod 的副本数(扩缩容)
kubectl scale deployment/openresty -n test --replicas=3

4.删除单个pod
kubectl delete pod openresty-7ccbdd4f6c-9l566  -n test

kubectl annotate pods openresty-7ccbdd4f6c-wrbl9 restartversion="2" -n test --overwrite

-------------
https://gitee.com/mirrors_openresty/docker-openresty?skip_mobile=true

kubectl create configmap test2-openresty-reload --from-literal=reloadnginx.log=1 -n test2
kubectl get configmaps test2-openresty-reload -o yaml -n test2
kubectl delete configmap test2-openresty-reload -n test2

#隐藏版本信息
#响应信息
sed -i 's/"Server: nginx" CRLF;/"Server:" CRLF;/g' /opt/nginx-1.20.2/src/http/ngx_http_header_filter_module.c
sed -i 's/"Server: " NGINX_VER CRLF;/"Server:" CRLF;/g' /opt/nginx-1.20.2/src/http/ngx_http_header_filter_module.c
#报错页面
sed -i 's/>" NGINX_VER "</></g' /opt/nginx-1.20.2/src/http/ngx_http_special_response.c
sed -i 's/>" NGINX_VER_BUILD "</></g' /opt/nginx-1.20.2/src/http/ngx_http_special_response.c
sed -i 's/>nginx</></g' /opt/nginx-1.20.2/src/http/ngx_http_special_response.c



k8s_安装10_日志_elk

十、日志_elk

日志收集内容

在日常使用控制过程中,一般需要收集的日志为以下几类:

服务器系统日志:

/var/log/messages
/var/log/kube-xxx.log

Kubernetes组件日志:

kube-apiserver日志
kube-controller-manager日志
kube-scheduler日志
kubelet日志
kube-proxy日志

应用程序日志

云原生:控制台日志
非云原生:容器内日志文件
网关日志(如ingress-nginx)

服务之间调用链日志

日志收集工具

日志收集技术栈一般分为ELK,EFK,Grafana+Loki
ELK是由Elasticsearch、Logstash、Kibana三者组成
EFK是由Elasticsearch、Fluentd、Kibana三者组成
Filebeat+Kafka+Logstash+ES
Grafana+Loki Loki负责日志的存储和查询、Promtail负责收集日志并将其发送给Loki、Grafana用来展示或查询相关日志

ELK 日志流程可以有多种方案(不同组件可自由组合,根据自身业务配置),常见有以下:

Filebeat、Logstash、Fluentd(采集、处理)—> ElasticSearch (存储)—>Kibana (展示)

Filebeat、Logstash、Fluentd(采集)—> Logstash(聚合、处理)—> ElasticSearch (存储)—>Kibana (展示)

Filebeat、Logstash、Fluentd(采集)—> Kafka/Redis(消峰) —> Logstash(聚合、处理)—> ElasticSearch (存 储)—>Kibana (展示)

Logstash

Logstash 是一个开源的数据收集、处理和传输工具。它可以从多种来源(如日志文件、消息队列等)收集数据,并对数据进行过滤、解析和转换,最终将数据发送到目标存储(如 Elasticsearch)。
优势:有很多插件
缺点:性能以及资源消耗(默认的堆大小是 1GB)

Fluentd/FluentBit

语言:(Ruby + C)
GitHub 地址:https://github.com/fluent/fluentd-kubernetes-daemonset
在线文档:https://docs.fluentd.org/
由于Logstash比较“重”,并且配置稍微有些复杂,所以出现了EFK的日志收集解决方案。相对于ELK中Logstash,Fluentd采用“一锅端”的形式,可以直接将某些日志文件中的内容存储至Elasticsearch,然后通过Kibana进行展示。其中Fluentd只能收集控制台日志(使用logs命令查出来的日志),不能收集非控制台日志,不能很好的满足生产环境的需求。大部分情况下,没有遵循云原生理念开发的程序,往往都会输出很多日志文件,这些容器内的日志无法采集,除非在每个Pod内添加一个Sidecar,将日志文件的内容进行tail -f转成控制台日志,但这也是非常麻烦的。
另外,用来存储日志的Elasticsearch集群是不建议搭建在Kubernetes集群中的,因为会非常浪费Kubernetes集群资源,所以大部分情况下通过Fluentd采集日志输出到外部的Elasticsearch集群中。
优点: Fluentd占用资源小,语法简单
缺点:解析前没有缓冲,可能会导致日志管道出现背压,对转换数据的支持有限,就像您可以使用 Logstash 的 mutate 过滤器或 rsyslog 的变量和模板一样.
Fluentd只能收集控制台日志(使用logs命令查出来的日志),不能收集非控制台日志,不能很好的满足生产环境的需求,依赖Elasticsearch,维护难度和资源使用都是偏高.
和 syslog-ng 一样,它的缓冲只存在于输出端,单线程核心以及 Ruby GIL 实现的插件意味着它在负载大的节点下性能是受限的

Fluent-bit

语言:C
fluentd精简版
在线文档:https://docs.fluentbit.io/manual/about/fluentd-and-fluent-bit

Filebeat

语言:Golang
GitHub 地址:https://github.com/elastic/beats/tree/master/filebeat
在线文档:https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html
优势:只是一个二进制文件没有任何依赖。占用系统的CPU和内存小支持发送Logstash ,Elasticsearch,Kafka 和 Redis
缺点:有限的解析和丰富功能,可以在后端再加一层Logstash。
在早期的ELK架构中,日志收集均以Logstash为主,Logstash负责收集和解析日志,它对内存、CPU、IO资源的消耗比较高,但是Filebeat所占系统的CPU和内存几乎可以忽略不计。

由于Filebeat本身是比较轻量级的日志采集工具,因此Filebeat经常被用于以Sidecar的形式配置在Pod中,用来采集容器内程序输出的自定义日志文件。当然,Filebeat同样可以采用DaemonSet的形式部署在Kubernetes集群中,用于采集系统日志和程序控制台输出的日志。至于Filebeat为什么采用DaemonSet的形式部署而不是采用Deployment和StatefulSet部署,原因有以下几点:

收集节点级别的日志:Filebeat需要能够访问并收集每个节点上的日志文件,包括系统级别的日志和容器日志。Deployment和STS的主要目标是部署和管理应用程序的Pod,而不是关注节点级别的日志收集。因此,使用DaemonSet更适合收集节点级别的日志
自动扩展:Deployment和STS旨在管理应用程序的副本数,并确保所需的Pod数目在故障恢复和水平扩展时保持一致。但对于Filebeat来说,并不需要根据负载或应用程序的副本数来调整Pod数量。Filebeat只需在每个节点上运行一个实例即可,因此使用DaemonSet可以更好地满足这个需求
高可用性:Deployment和STS提供了副本管理和故障恢复的机制,确保应用程序的高可用性。然而,对于Filebeat而言,它是作为一个日志收集代理来收集日志,不同于应用程序,其故障恢复的机制和需求通常不同。使用DaemonSet可以确保在每个节点上都有一个运行中的Filebeat实例,即使某些节点上的Filebeat Pod不可用,也能保持日志收集的连续性
Fluentd和Logstash可以将采集的日志输出到Elasticsearch集群,Filebeat同样可以将日志直接存储到Elasticsearch中,Filebeat 也会和 Logstash 一样记住上次读取的偏移,但是为了更好地分析日志或者减轻Elasticsearch的压力,一般都是将日志先输出到Kafka,再由Logstash进行简单的处理,最后输出到Elasticsearch中。

LogAgent:

语言:JS
GitHub 地址:https://github.com/sematext/logagent-js
在线文档:https://sematext.com/docs/logagent/
优势:可以获取 /var/log 下的所有信息,解析各种格式(Elasticsearch、Solr、MongoDB、Apache HTTPD等等),可以掩盖敏感的数据信息;Logagent 有本地缓冲,所以不像 Logstash 那样在数据传输目的地不可用时丢失日志
劣势:没有 Logstash 灵活

logtail:

阿里云日志服务的生产者,目前在阿里集团内部机器上运行,经过 3 年多时间的考验,目前为阿里公有云用户提供日志收集服务
采用 C++语言实现,对稳定性、资源控制、管理等下过很大的功夫,性能良好。相比于 logstash、fluentd 的社区支持,logtail 功能较为单一,专注日志收集功能。
优势:
  logtail 占用机器 cpu、内存资源最少,结合阿里云日志服务的 E2E 体验良好
劣势:
  logtail 目前对特定日志类型解析的支持较弱,后续需要把这一块补起来。

rsyslog

绝大多数 Linux 发布版本默认的 syslog 守护进程
优势:是经测试过的最快的传输工具
rsyslog 适合那些非常轻的应用(应用,小 VM,Docker 容器)。如果需要在另一个传输工具(例 如,Logstash)中进行处理,可以直接通过 TCP 转发 JSON ,或者连接 Kafka/Redis 缓冲

syslog-ng

优势:和 rsyslog 一样,作为一个轻量级的传输工具,它的性能也非常好

Grafana Loki

Loki 及其生态系统是 ELK 堆栈的替代方案,与 ELK 相比,摄取速度更快:索引更少,无需合并
优势:小存储占用:较小的索引,数据只写入一次到长期存储
缺点:与 ELK 相比,较长时间范围内的查询和分析速度较慢,log shippers选项更少(例如 Promtail 或 Fluentd)

ElasticSearch

一个正常es集群中只有一个主节点(Master),主节点负责管理整个集群。如创建或删除索引,跟踪哪些节点是群集的一部分,并决定哪些分片分配给相关的节点。集群的所有节点都会选择同一个节点作为主节点

脑裂现象:

脑裂问题的出现就是因为从节点在选择主节点上出现分歧导致一个集群出现多个主节点从而使集群分裂,使得集群处于异常状态。主节点的角色既为master又为data。数据访问量较大时,可能会导致Master节点停止响应(假死状态)

避免脑裂:

1.网络原因:discovery.zen.ping.timeout 超时时间配置大一点。默认是3S
2.节点负载:角色分离策略
3.JVM内存回收:修改 config/jvm.options 文件的 -Xms 和 -Xmx 为服务器的内存一半。

5个管理节点,其中一个是工作主节点,其余4个是备选节点,集群脑裂因子设置是3.

节点类型/角色

Elasticsearch 7.9 之前的版本中的节点类型主要有4种:数据节点、协调节点、候选主节点、ingest 节点.
7.9 以及之后节点类型升级为节点角色(Node roles)。

ES集群由多节点组成,每个节点通过node.name指定节点的名称
一个节点可以支持多个角色,也可以支持一种角色。

1、master节点

配置文件中node.master属性为true,就有资格被选为master节点
master节点用于控制整个集群的操作,比如创建和删除索引,以及管理非master节点,管理集群元数据信息,集群节点信息,集群索引元数据信息;
node.master: true
node.data: false

2、data数据节点

配置文件中node.data属性为true,就有资格被选为data节点,存储实际数据,提供初步联合查询,初步聚合查询,也可以作为协调节点
主要用于执行数据相关的操作
node.master: false
node.data: true

3、客户端节点

配置文件中node.master和node.data均为false(既不能为master也不能为data)
用于响应客户的请求,把请求转发到其他节点
node.master: false
node.data: false

4、部落节点

当一个节点配置tribe.*的时候,它是一个特殊的客户端,可以连接多个集群,在所有集群上执行索引和操作

其它角色汇总
7.9 以后 角色缩写 英文释义 中文释义
c cold node 冷数据节点
d data node 数据节点
f frozen node 冷冻数据节点
h hot node 热数据节点
i ingest node 数据预处理节点
l machine learning node 机器学习节点
m master-eligible node 候选主节点
r remote cluster client node 远程节点
s content node 内容数据节点
t transform node 转换节点
v voting-only node 仅投票节点
w warm node 温数据节点
coordinating node only 仅协调节点

新版使用node.roles 定义
node.roles: [data,master]

关于节点角色和硬件配置的关系,也是经常被提问的问题,推荐配置参考: 角色 描述 存储 内存 计算 网络
数据节点 存储和检索数据 极高
主节点 管理集群状态
Ingest 节点 转换输入数据
机器学习节点 机器学习 极高 极高
协调节点 请求转发和合并检索结果

集群选举

主从架构模式,一个集群只能有一个工作状态的管理节点,其余管理节点是备选,备选数量原则上不限制。很多大数据产品管理节点仅支持一主一从,如Greenplum、Hadoop、Prestodb;
工作管理节点自动选举,工作管理节点关闭之后自动触发集群重新选举,无需外部三方应用,无需人工干预。很多大数据产品需要人工切换或者借助第三方软件应用,如Greenplum、Hadoop、Prestodb。
discovery.zen.minimum_master_nodes = (master_eligible_nodes / 2) + 1
以1个主节点+4个候选节点为例设为3

conf/elasticsearch.yml:
    discovery.zen.minimum_master_nodes: 3

协调路由

Elasticsearch集群中有多个节点,其中任一节点都可以查询数据或者写入数据,集群内部节点会有路由机制协调,转发请求到索引分片所在的节点。我们在迁移集群时采用应用代理切换,外部访问从旧集群数据节点切换到新集群数据节点,就是基于此特点。

查询主节点

http://192.168.111.200:9200/_cat/nodes?v
含有 * 的代表当前主节点
http://192.168.111.200:9200/_cat/master

排查
集群数据平衡

Elastic自身设计了集群分片的负载平衡机制,当有新数据节点加入集群或者离开集群,集群会自动平衡分片的负载分布。
索引分片会在数据节点之间平衡漂移,达到平均分布之后停止。频繁的集群节点加入或者下线会严重影响集群的IO,影响集群响应速度,所以要尽量避免此类情况发生;如果频繁关闭重启,很容易造成集群问题。

#集群迁移时先关,迁移后再开
#禁用集群分片分配(有效值为 all/primaries/new_primaries/none)
cluster.routing.allocation.enable: none
#禁用集群分片自动平衡(有效值为 all/primaries/replicas/none)
cluster.routing.rebalance.enable: none
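
迁移时更常见的做法是通过集群设置API动态开关(示意,9200地址按实际ES访问入口替换):

#迁移前关闭分片分配与自动平衡
curl -X PUT "http://localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d '
{"persistent":{"cluster.routing.allocation.enable":"none","cluster.routing.rebalance.enable":"none"}}'

#迁移完成后恢复
curl -X PUT "http://localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d '
{"persistent":{"cluster.routing.allocation.enable":"all","cluster.routing.rebalance.enable":"all"}}'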

ES 慢查询日志 打开

切换集群访问

Hadoop

Hadoop平台离线数据写入ES,从ES抽取数据。Elastic提供了Hadoop直连访问驱动。如Hive是通过创建映射表与Elasticsearch索引关联的,新的数据节点启动之后,原有所有Hive-Es映射表需要全部重新创建,更换其中的IP+PORT指向;由于Hive有很多与Elastic关联的表,所以短时间内没有那么快替换完成,新旧数据节点需要共存一段时间,不能在数据迁移完成之后马上关闭

#Hive指定连接
es.nodes=多个数据节点IP+PORT

业务系统应用实时查询

Elastic集群对外提供了代理访问

数据写入

kafka队列

安装

ELK数据处理流程
数据由Beats采集后,可以选择直接推送给Elasticsearch检索,或者先发送给Logstash处理,再推送给Elasticsearch,最后都通过Kibana进行数据可视化的展示

镜像文件准备

docker pull docker.io/fluent/fluentd-kubernetes-daemonset:v1.16.2-debian-elasticsearch8-amd64-1.1
wget https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/fluentd-daemonset-elasticsearch.yaml

在hub.docker.com 查询elasticsearch 可用版本
elasticsearch7.17.14
elasticsearch8.11.0

docker pull docker.elastic.co/elasticsearch/elasticsearch:8.11.0
docker pull docker.elastic.co/kibana/kibana:8.11.0
docker pull docker.elastic.co/logstash/logstash:8.11.0
docker pull docker.elastic.co/beats/filebeat:8.11.0

docker tag docker.elastic.co/elasticsearch/elasticsearch:8.11.0 repo.k8s.local/docker.elastic.co/elasticsearch/elasticsearch:8.11.0
docker tag docker.elastic.co/kibana/kibana:8.11.0 repo.k8s.local/docker.elastic.co/kibana/kibana:8.11.0
docker tag docker.elastic.co/beats/filebeat:8.11.0 repo.k8s.local/docker.elastic.co/beats/filebeat:8.11.0
docker tag docker.elastic.co/logstash/logstash:8.11.0 repo.k8s.local/docker.elastic.co/logstash/logstash:8.11.0

docker push repo.k8s.local/docker.elastic.co/elasticsearch/elasticsearch:8.11.0
docker push repo.k8s.local/docker.elastic.co/kibana/kibana:8.11.0
docker push repo.k8s.local/docker.elastic.co/beats/filebeat:8.11.0
docker push repo.k8s.local/docker.elastic.co/logstash/logstash:8.11.0

docker rmi docker.elastic.co/elasticsearch/elasticsearch:8.11.0
docker rmi docker.elastic.co/kibana/kibana:8.11.0
docker rmi docker.elastic.co/beats/filebeat:8.11.0
docker rmi docker.elastic.co/logstash/logstash:8.11.0

搭建elasticsearch+kibana

node.name定义节点名,使用metadata.name名称,需要能dns解析
cluster.initial_master_nodes 对应metadata.name名称加编号,编号从0开始
elasticsearch配置文件:

cat > log-es-elasticsearch.yml <<EOF
cluster.name: log-es
node.name: "log-es-elastic-sts-0"
path.data: /usr/share/elasticsearch/data
#path.logs: /var/log/elasticsearch
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
#transport.tcp.port: 9300
#discovery.seed_hosts: ["127.0.0.1", "[::1]"]
cluster.initial_master_nodes: ["log-es-elastic-sts-0"]
xpack.security.enabled: "false"
xpack.security.transport.ssl.enabled: "false"
#增加参数,使head插件可以访问es
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
EOF

kibana配置文件:
statefulset管理的Pod名称是有序的,删除指定Pod后自动创建的Pod名称不会改变。
statefulset创建时必须指定server名称,如果server没有IP地址,则会对server进行DNS解析,找到对应的Pod域名。
statefulset具有volumeclaimtemplate卷管理模板,创建出来的Pod都具有独立卷,相互没有影响。
statefulset创建出来的Pod,拥有独立域名,我们在指定访问Pod资源时,可以使用域名指定,IP会发生改变,但是域名不会(域名组成:Pod名称.svc名称.svc名称空间.svc.cluster.local)

cat > log-es-kibana.yml <<EOF
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: "http://localhost:9200"
i18n.locale: "zh-CN"
EOF

创建k8s命名空间

kubectl create namespace log-es
kubectl get namespace log-es
kubectl describe namespace  log-es

创建elasticsearch和kibana的配置文件configmap:

1、configmap是以明文的形式将配置信息给pod内使用的办法。它的大小有限,不能超过1Mi

2、可以将文件、目录等多种形式做成configmap,并且通过env或者volume的形式供pod内使用。

3、它可以在不重新构建镜像或重启容器的情况下在线更新,但是需要一定时间间隔
以subPath方式挂载时,configmap更新后,容器内对应的文件不会随之更新。

kubectl create configmap log-es-elastic-config -n log-es --from-file=log-es-elasticsearch.yml
kubectl create configmap log-es-kibana-config -n log-es --from-file=log-es-kibana.yml

更新方式1
#kubectl create configmap log-es-kibana-config --from-file log-es-kibana.yml -o yaml --dry-run=client | kubectl apply -f -
#kubectl get cm log-es-kibana-config -n log-es -o yaml > log-es-kibana.yaml  && kubectl replace -f log-es-kibana.yaml -n log-es
#测试下来不行

更新方式2
kubectl edit configmap log-es-elastic-config -n log-es
kubectl edit configmap log-es-kibana-config -n log-es
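
更新后可在容器内确认文件是否已刷新(示意;本文的 elasticsearch.yml、kibana.yml 都以subPath方式挂载,不会热更新,需重启pod才能生效):

kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- cat /usr/share/elasticsearch/config/elasticsearch.yml
kubectl rollout restart sts/log-es-elastic-sts -n log-es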

查看列表
kubectl get configmap  -n log-es 

删除
kubectl delete cm log-es-elastic-config -n log-es
kubectl delete cm log-es-kibana-config -n log-es

kibana

kibana为有状态的固定节点,不需负载均衡,可以建无头服务
这个 Service 被创建后并不会被分配一个 VIP,而是会以 DNS 记录的方式暴露出它所代理的 Pod

<pod-name>.<svc-name>.<namespace>.svc.cluster.local  
$(podname)-$(ordinal).$(servicename).$(namespace).svc.cluster.local  
log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local  
cat > log-es-kibana-svc.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    app: log-es-svc
  name: es-kibana-svc
  namespace: log-es
spec:
  ports:
  - name: 9200-9200
    port: 9200
    protocol: TCP
    targetPort: 9200
    nodePort: 9200
  - name: 5601-5601
    port: 5601
    protocol: TCP
    targetPort: 5601
    nodePort: 5601
  #clusterIP: None 
  selector:
    app: log-es-elastic-sts
  type: NodePort
  #type: ClusterIP
EOF

创建es-kibana的有状态资源 yaml配置文件:

cat > log-es-kibana-sts.yaml <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: log-es-elastic-sts
  name: log-es-elastic-sts
  namespace: log-es
spec:
  replicas: 1
  selector:
    matchLabels:
      app: log-es-elastic-sts
  serviceName: "es-kibana-svc"  #关联svc名称
  template:
    metadata:
      labels:
        app: log-es-elastic-sts
    spec:
      #imagePullSecrets:
      #- name: registry-pull-secret
      containers:
      - name: log-es-elasticsearch
        image: repo.k8s.local/docker.elastic.co/elasticsearch/elasticsearch:8.11.0
        imagePullPolicy: IfNotPresent
#        lifecycle:
#          postStart:
#            exec:
#              command: [ "/bin/bash", "-c",  touch /tmp/start" ] #sysctl -w vm.max_map_count=262144;ulimit -HSn 65535; 请在宿主机设定
        #command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        #command: [ "/bin/bash", "-c", "--" ]
        #args: [ "while true; do sleep 30; done;" ]
        #command: [ "/bin/bash", "-c","ulimit -HSn 65535;" ]
        resources:
          requests:
            memory: "800Mi"
            cpu: "800m"
          limits:
            memory: "1.2Gi"
            cpu: "2000m"
        ports:
        - containerPort: 9200
        - containerPort: 9300
        volumeMounts:
        - name: log-es-elastic-config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          subPath: log-es-elasticsearch.yml  #对应configmap log-es-elastic-config 中文件名称
        - name: log-es-persistent-storage
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: ES_JAVA_OPTS
          value: -Xms512m -Xmx512m
      - image: repo.k8s.local/docker.elastic.co/kibana/kibana:8.11.0
        imagePullPolicy: IfNotPresent
        #command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
        name: log-es-kibana
        ports:
        - containerPort: 5601
        env:
        - name: TZ
          value: Asia/Shanghai
        volumeMounts:
        - name: log-es-kibana-config
          mountPath: /usr/share/kibana/config/kibana.yml
          subPath: log-es-kibana.yml   #对应configmap log-es-kibana-config 中文件名称
      volumes:
      - name: log-es-elastic-config
        configMap:
          name: log-es-elastic-config
      - name: log-es-kibana-config
        configMap:
          name: log-es-kibana-config
      - name: log-es-persistent-storage
        hostPath:
          path: /localdata/es/data
          type: DirectoryOrCreate
      #hostNetwork: true
      #dnsPolicy: ClusterFirstWithHostNet
      nodeSelector:
         kubernetes.io/hostname: node02.k8s.local
EOF

单独调试文件,测试ulimit失败问题

cat > log-es-kibana-sts.yaml <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: log-es-elastic-sts
  name: log-es-elastic-sts
  namespace: log-es
spec:
  replicas: 1
  selector:
    matchLabels:
      app: log-es-elastic-sts
  serviceName: "es-kibana-svc"  #关联svc名称
  template:
    metadata:
      labels:
        app: log-es-elastic-sts
    spec:
      #imagePullSecrets:
      #- name: registry-pull-secret
#      initContainers:        # 初始化容器
#      - name: init-vm-max-map
#        image: repo.k8s.local/google_containers/busybox:9.9 
#        imagePullPolicy: IfNotPresent
#        command: ["sysctl","-w","vm.max_map_count=262144"]
#        securityContext:
#          privileged: true
#      - name: init-fd-ulimit
#        image: repo.k8s.local/google_containers/busybox:9.9 
#        imagePullPolicy: IfNotPresent
#        command: ["sh","-c","ulimit -HSn 65535;ulimit -n >/tmp/index/init.log"]
#        securityContext:
#          privileged: true
#        volumeMounts:
#        - name: init-test
#          mountPath: /tmp/index
#        terminationMessagePath: /dev/termination-log
#        terminationMessagePolicy: File
      containers:
      - name: log-es-elasticsearch
        image: repo.k8s.local/docker.elastic.co/elasticsearch/elasticsearch:8.11.0
        imagePullPolicy: IfNotPresent
#        securityContext:
#          privileged: true
#          capabilities:
#          add: ["SYS_RESOURCE"]
#        lifecycle:
#          postStart:
#            exec:
#              command: [ "/bin/bash", "-c", "sysctl -w vm.max_map_count=262144; ulimit -l unlimited;echo 'Container started';" ]
#        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        resources:
          requests:
            memory: "800Mi"
            cpu: "800m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
        ports:
        - containerPort: 9200
        - containerPort: 9300
        volumeMounts:
#        - name: ulimit-config
#          mountPath: /etc/security/limits.conf
#          #readOnly: true
#          #subPath: limits.conf
        - name: log-es-elastic-config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          subPath: log-es-elasticsearch.yml  #对应configmap log-es-elastic-config 中文件名称
        - name: log-es-persistent-storage
          mountPath: /usr/share/elasticsearch/data
#        - name: init-test
#          mountPath: /tmp/index
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: ES_JAVA_OPTS
          value: -Xms512m -Xmx512m
      volumes:
#      - name: init-test
#        emptyDir: {}
      - name: log-es-elastic-config
        configMap:
          name: log-es-elastic-config
      - name: log-es-persistent-storage
        hostPath:
          path: /localdata/es/data
          type: DirectoryOrCreate
#      - name: ulimit-config
#        hostPath:
#          path: /etc/security/limits.conf 
      #hostNetwork: true
      #dnsPolicy: ClusterFirstWithHostNet
      nodeSelector:
         kubernetes.io/hostname: node02.k8s.local
EOF

elastic

elastic索引目录
在node02上,pod默认运行用户id=1000

mkdir -p /localdata/es/data
chmod 777 /localdata/es/data
chown 1000:1000 /localdata/es/data

在各节点上建filebeat registry 目录,包括master
使用daemonSet运行filebeat需要挂载/usr/share/filebeat/data,该目录下有一个registry文件,里面记录了filebeat采集日志位置的相关内容,比如文件offset、source、timestamp等,如果Pod发生异常后K8S自动将Pod进行重启,不挂载的情况下registry会被重置,将导致日志文件又从offset=0开始采集,结果就是es中日志重复一份,这点非常重要.

mkdir -p /localdata/filebeat/data
chown 1000:1000 /localdata/filebeat/data
chmod 777 /localdata/filebeat/data
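
filebeat 运行后,可在宿主机上确认 registry 已被持久化到该目录(示意;这是 filebeat 7/8 默认的 registry 目录布局,具体文件名以实际为准):

ls /localdata/filebeat/data/registry/filebeat/
tail -n 3 /localdata/filebeat/data/registry/filebeat/log.json
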
kubectl apply -f log-es-kibana-sts.yaml
kubectl delete -f log-es-kibana-sts.yaml
kubectl apply -f log-es-kibana-svc.yaml
kubectl delete -f log-es-kibana-svc.yaml

kubectl get pods -o wide -n log-es
NAME               READY   STATUS              RESTARTS   AGE   IP       NODE               NOMINATED NODE   READINESS GATES
log-es-elastic-sts-0   0/2     ContainerCreating   0          66s   <none>   node02.k8s.local   <none>           <none>

#查看详情
kubectl -n log-es describe pod log-es-elastic-sts-0 
kubectl -n log-es logs -f log-es-elastic-sts-0 -c log-es-elasticsearch
kubectl -n log-es logs -f log-es-elastic-sts-0 -c log-es-kibana

kubectl -n log-es logs log-es-elastic-sts-0 -c log-es-elasticsearch
kubectl -n log-es logs log-es-elastic-sts-0 -c log-es-kibana

kubectl -n log-es logs -f --tail=20 log-es-elastic-sts-0 -c log-es-elasticsearch

kubectl exec -it log-es-elasticsearch -n log-es -- /bin/sh 
进入指定 pod 中指定容器
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh 
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-kibana  -- /bin/sh 

kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'cat /etc/security/limits.conf'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'ulimit -HSn 65535'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'ulimit -n'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'cat /etc/security/limits.d/20-nproc.conf'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'cat /etc/pam.d/login|grep  pam_limits'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'ls /etc/pam.d/sshd'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'cat /etc/profile'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c '/usr/share/elasticsearch/bin/elasticsearch'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'ls /tmp/'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'cat /dev/termination-log'

kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-elasticsearch -- /bin/sh  -c 'ps -aux'
kubectl exec -n log-es -it log-es-elastic-sts-0 -c log-es-kibana -- /bin/sh  -c 'ps -aux'

启动失败主要原因

文件句柄不够
ulimit -HSn 65535
最大虚拟内存太小
sysctl -w vm.max_map_count=262144
数据目录权限不对
/usr/share/elasticsearch/data
xpack权限不对
xpack.security.enabled: "false"

给容器添加如下调试命令,让容器不退出,便于进入容器中排查

        #command: [ "/bin/bash", "-c", "--" ]
        #args: [ "while true; do sleep 30; done;" ]

查看运行用户
id
uid=1000(elasticsearch) gid=1000(elasticsearch) groups=1000(elasticsearch),0(root)

查看数据目录权限
touch /usr/share/elasticsearch/data/test
ls /usr/share/elasticsearch/data/

测试启动
/usr/share/elasticsearch/bin/elasticsearch

{"@timestamp":"2023-11-09T05:20:55.298Z", "log.level":"ERROR", "message":"node validation exception\n[2] bootstrap checks failed. You must address the points described in the following [2] lines before starting Elasticsearch. For more information see [https://www.elastic.co/guide/en/elasticsearch/reference/8.11/bootstrap-checks.html]\nbootstrap check failure [1] of [2]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]; for more information see [https://www.elastic.co/guide/en/elasticsearch/reference/8.11/_file_descriptor_check.html]\nbootstrap check failure [2] of [2]: Transport SSL must be enabled if security is enabled. Please set [xpack.security.transport.ssl.enabled] to [true] or disable security by setting [xpack.security.enabled] to [false]; for more information see [https://www.elastic.co/guide/en/elasticsearch/reference/8.11/bootstrap-checks-xpack.html#bootstrap-checks-tls]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.Elasticsearch","elasticsearch.node.name":"node-1","elasticsearch.cluster.name":"log-es"}
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/log-es.log

启动elasticsearch时,提示 max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

sysctl: setting key "vm.max_map_count", ignoring: Read-only file system

ulimit: max locked memory: cannot modify limit: Operation not permitted

错误
master not discovered yet, this node has not previously joined a bootstrapped cluster, and this node must discover master-eligible nodes
没有发现主节点
在/etc/elasticsearch/elasticsearch.yml文件中加入主节点名:cluster.initial_master_nodes: ["master","node"]

  1. 找到pod的CONTAINER 名称
    在pod对应node下运行

    crictl ps
    CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
    d3f8f373be7cb       7f03b6ec0c6f6       About an hour ago   Running             log-es-elasticsearch                   0                   cd3cebe50807f       log-es-elastic-sts-0
  2. 找到pod的pid

    crictl inspect d3f8f373be7cb |grep -i pid
    "pid": 8420,
            "pid": 1
            "type": "pid"
  3. 容器外执行容器内命令

    
    nsenter -t 8420 -n hostname
    node02.k8s.local

cat /proc/8420/limits |grep "open files"
Max open files 4096 4096 files


参考
https://imroc.cc/kubernetes/trick/deploy/set-sysctl/
https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/
解决:  
postStart中执行无效,改成宿主机上执行,映射到容器  
在node02上执行

sysctl -w vm.max_map_count=262144

sysctl -a|grep vm.max_map_count

cat /etc/security/limits.conf

cat > /etc/security/limits.d/20-nofile.conf <<EOF
root soft nofile 65535
root hard nofile 65535
* soft nofile 65535
* hard nofile 65535
EOF

cat > /etc/security/limits.d/20-nproc.conf <<EOF
* soft nproc 65535
root soft nproc unlimited
root hard nproc unlimited
EOF

在CentOS 7版本中为/etc/security/limits.d/20-nproc.conf,在CentOS 6版本中为/etc/security/limits.d/90-nproc.conf

echo "* soft nofile 65535" >> /etc/security/limits.conf
echo "* hard nofile 65535" >> /etc/security/limits.conf
echo "andychu soft nofile 65535" >> /etc/security/limits.conf
echo "andychu hard nofile 65535" >> /etc/security/limits.conf
echo "ulimit -HSn 65535" >> /etc/rc.local

ulimit -a
sysctl -p

systemctl show sshd |grep LimitNOFILE

cat /etc/systemd/system.conf|grep DefaultLimitNOFILE
sed -n 's/#DefaultLimitNOFILE=/DefaultLimitNOFILE=65535/p' /etc/systemd/system.conf
sed -i 's/^#DefaultLimitNOFILE=/DefaultLimitNOFILE=65535/' /etc/systemd/system.conf

systemctl daemon-reexec

systemctl restart containerd
systemctl restart kubelet

crictl inspect c30a814bcf048 |grep -i pid

cat /proc/53657/limits |grep "open files"
Max open files 65335 65335 files

kubectl get pods -o wide -n log-es
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
log-es-elastic-sts-0 2/2 Running 0 2m19s 10.244.2.131 node02.k8s.local <none> <none>

[root@node01 nginx]# curl http://10.244.2.131:9200
{
"name" : "node-1",
"cluster_name" : "log-es",
"cluster_uuid" : "Agfoz8qmS3qob_R6bp2cAw",
"version" : {
"number" : "8.11.0",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "d9ec3fa628c7b0ba3d25692e277ba26814820b20",
"build_date" : "2023-11-04T10:04:57.184859352Z",
"build_snapshot" : false,
"lucene_version" : "9.8.0",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}

kubectl get pod -n log-es
kubectl get pod -n test

查看service

kubectl get service -n log-es
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
es-kibana-svc ClusterIP None <none> 9200/TCP,5601/TCP 53s

kubectl apply -f log-es-kibana-svc.yaml
kubectl delete -f log-es-kibana-svc.yaml

kubectl exec -it pod/test-pod-1 -n test -- ping www.c1gstudio.com
kubectl exec -it pod/test-pod-1 -n test -- ping svc-openresty.test
kubectl exec -it pod/test-pod-1 -n test -- nslookup log-es-elastic-sts-0.es-kibana-svc.log-es
kubectl exec -it pod/test-pod-1 -n test -- ping log-es-elastic-sts-0.es-kibana-svc.log-es

kubectl exec -it pod/test-pod-1 -n test -- curl http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200
kubectl exec -it pod/test-pod-1 -n test -- curl -L http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:5601

cat > log-es-kibana-svc.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    app: log-es-svc
  name: es-kibana-svc
  namespace: log-es
spec:
  ports:
  - name: 9200-9200
    port: 9200
    protocol: TCP
    targetPort: 9200
    #nodePort: 9200
  - name: 5601-5601
    port: 5601
    protocol: TCP
    targetPort: 5601
    #nodePort: 5601
  #clusterIP: None
  selector:
    app: log-es-elastic-sts
  type: NodePort
  #type: ClusterIP
EOF

kubectl get service -n log-es
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
es-kibana-svc   NodePort   10.96.128.50   <none>        9200:30118/TCP,5601:31838/TCP   16m

使用nodeip+port访问,本次端口为31838
curl -L http://192.168.244.7:31838
curl -L http://10.96.128.50:5601

外部nat转发后访问
http://127.0.0.1:5601/

ingress

cat > log-es-kibana-ingress.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-kibana
  namespace: log-es
  labels:
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/part-of: kibana
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  ingressClassName: nginx
  rules:
  - host: kibana.k8s.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: es-kibana-svc
            port:
              number: 5601
EOF             

kubectl apply -f log-es-kibana-ingress.yaml

kubectl get ingress -n log-es
curl -L -H "Host:kibana.k8s.local" http://10.96.128.50:5601

filebeat

#https://www.elastic.co/guide/en/beats/filebeat/8.11/drop-fields.html
#https://raw.githubusercontent.com/elastic/beats/7.9/deploy/kubernetes/filebeat-kubernetes.yaml
cat > log-es-filebeat-configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: log-es-filebeat-config
  namespace: log-es
data:
  filebeat.yml: |-
    filebeat.inputs:
      - type: container
        containers.ids:
          - "*"
        id: 99
        enabled: true
        tail_files: true
        paths:
          - /var/log/containers/*.log
          #- /var/lib/docker/containers/*/*.log
          #- /var/log/pods/*/*/*.log   
        processors:
        - add_kubernetes_metadata:
            in_cluster: true
            matchers:
              - logs_path:
                  logs_path: "/var/log/containers/"     
        fields_under_root: true
        exclude_files: ['\.gz$']
        tags: ["k8s"]  
        fields:
          source: "container"     
      - type: filestream
        id: 100
        enabled: true
        tail_files: true
        paths:
          - /var/log/nginx/access*.log
        processors:
          - decode_json_fields:
              fields: [ 'message' ]
              target: "" # 指定日志字段message,头部以json标注,如果不要json标注则设置为空如:target: ""
              overwrite_keys: false # 默认情况下,解码后的 JSON 位于输出文档中的“json”键下。如果启用此设置,则键将在输出文档中的顶层复制。默认值为 false
              process_array: false
              max_depth: 1
          - drop_fields: 
              fields: ["agent","ecs.version"]
              ignore_missing: true
        fields_under_root: true
        tags: ["ingress-nginx-access"]
        fields:
          source: "ingress-nginx-access"          
      - type: filestream
        id: 101
        enabled: true
        tail_files: true
        paths:
          - /var/log/nginx/error.log
        close_inactive: 5m
        ignore_older: 24h
        clean_inactive: 96h
        clean_removed: true
        fields_under_root: true
        tags: ["ingress-nginx-error"]   
        fields:
          source: "ingress-nginx-error"             
      - type: filestream
        id: 102
        enabled: true
        tail_files: true
        paths:
          - /nginx/logs/*.log
        exclude_files: ['\.gz$','error.log']
        close_inactive: 5m
        ignore_older: 24h
        clean_inactive: 96h
        clean_removed: true
        fields_under_root: true
        tags: ["web-log"]  
        fields:
          source: "nginx-access"            
    output.logstash:
      hosts: ["logstash.log-es.svc.cluster.local:5044"]                           
    #output.elasticsearch:
      #hosts: ["http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200"]
      #index: "log-%{[fields.tags]}-%{+yyyy.MM.dd}"
      #indices:
        #- index: "log-ingress-nginx-access-%{+yyyy.MM.dd}"
          #when.contains:
            #tags: "ingress-nginx-access"
        #- index: "log-ingress-nginx-error-%{+yyyy.MM.dd}"
          #when.contains:
            #tags: "ingress-nginx-error" 
        #- index: "log-web-log-%{+yyyy.MM.dd}"
          #when.contains:
            #tags: "web-log" 
        #- index: "log-k8s-%{+yyyy.MM.dd}"
          #when.contains:
            #tags: "k8s"                                
    json.keys_under_root: true # 默认情况下,解码后的 JSON 位于输出文档中的“json”键下。如果启用此设置,则键将在输出文档中的顶层复制。默认值为 false
    json.overwrite_keys: true # 如果启用了此设置,则解码的 JSON 对象中的值将覆盖 Filebeat 在发生冲突时通常添加的字段(类型、源、偏移量等)
    setup.template.enabled: false  #false不使用默认的filebeat-%{[agent.version]}-%{+yyyy.MM.dd}索引
    setup.template.overwrite: true #开启新设置的模板
    setup.template.name: "log" #设置一个新的模板,模板的名称
    setup.template.pattern: "log-*" #模板匹配那些索引
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
    #setup.template.settings:
    #  index.number_of_shards: 1
    #  index.number_of_replicas: 1
    setup.ilm.enabled: false # 修改索引名称,要关闭索引生命周期管理ilm功能
    logging.level: warning #debug、info、warning、error
    logging.to_syslog: false
    logging.metrics.period: 300s
    logging.to_files: true
    logging.files:
      path: /tmp/
      name: "filebeat.log"
      rotateeverybytes: 10485760
      keepfiles: 7
EOF
cat > log-es-filebeat-daemonset.yaml <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: log-es
spec:
  #replicas: 1
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccount: filebeat
      containers:
      - name: filebeat
        image: repo.k8s.local/docker.elastic.co/beats/filebeat:8.11.0
        imagePullPolicy: IfNotPresent
        env:
        - name: TZ
          value: "Asia/Shanghai"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi              
        volumeMounts:
        - name: filebeat-config
          readOnly: true
          mountPath: /config/filebeat.yml    # Filebeat 配置
          subPath: filebeat.yml
        - name: fb-data
          mountPath: /usr/share/filebeat/data         
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          readOnly: true          
          mountPath: /var/lib/docker/containers
        - name: varlogingress
          readOnly: true          
          mountPath: /var/log/nginx      
        - name: varlogweb
          readOnly: true          
          mountPath: /nginx/logs              
        args:
        - -c
        - /config/filebeat.yml
      volumes:
      - name: fb-data
        hostPath:
          path: /localdata/filebeat/data 
          type: DirectoryOrCreate    
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlogingress
        hostPath:
          path: /var/log/nginx
      - name: varlogweb
        hostPath:
          path: /nginx/logs      
      - name: filebeat-config
        configMap:
          name: log-es-filebeat-config
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
EOF
kubectl apply -f log-es-filebeat-configmap.yaml
kubectl delete -f log-es-filebeat-configmap.yaml
kubectl apply -f log-es-filebeat-daemonset.yaml
kubectl delete -f log-es-filebeat-daemonset.yaml 
kubectl get pod -n log-es  -o wide 
kubectl get cm -n log-es
kubectl get ds -n log-es
kubectl edit configmap log-es-filebeat-config -n log-es
kubectl get service -n log-es

#更新configmap后需手动重启pod
kubectl rollout restart  ds/filebeat -n log-es
kubectl patch  ds filebeat  -n log-es   --patch '{"spec": {"template": {"metadata": {"annotations": {"version/config": "202311141" }}}}}'

#重启es
kubectl rollout restart  sts/log-es-elastic-sts -n log-es
#删除 congfig
kubectl delete cm filebeat -n log-es
kubectl delete cm log-es-filebeat-config -n log-es

#查看详细
kubectl -n log-es describe pod filebeat-mdldl
kubectl -n log-es logs -f filebeat-4kgpl

#查看pod
kubectl exec -n log-es -it filebeat-hgpnl  -- /bin/sh 
kubectl exec -n log-es -it filebeat-q69f5   -- /bin/sh  -c 'ps aux'
kubectl exec -n log-es -it filebeat-wx4x2  -- /bin/sh  -c 'cat  /config/filebeat.yml'
kubectl exec -n log-es -it filebeat-4j2qd -- /bin/sh  -c 'cat  /tmp/filebeat*'
kubectl exec -n log-es -it filebeat-9qx6f -- /bin/sh  -c 'cat  /tmp/filebeat*'
kubectl exec -n log-es -it filebeat-9qx6f -- /bin/sh  -c 'ls /usr/share/filebeat/data'
kubectl exec -n log-es -it filebeat-9qx6f -- /bin/sh  -c 'filebeat modules list'

kubectl exec -n log-es -it filebeat-kmrcc  -- /bin/sh -c 'curl http://localhost:5066/?pretty'

kubectl exec -n log-es -it filebeat-hqz9b  -- /bin/sh -c 'curl http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200/_cat/indices?v'
curl -XGET 'http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200/_cat/indices?v'

curl http://10.96.128.50:9200/_cat/indices?v

错误 serviceaccount
Failed to watch v1.Node: failed to list v1.Node: nodes "node02.k8s.local" is forbidden: User "system:serviceaccount:log-es:default" cannot list resource "nodes" in API group "" at the cluster scope
当你创建namespace的时候,会默认为该namespace创建一个名为default的serviceaccount。这个错误信息的意思是,pod使用namespace默认的serviceaccount时没有权限访问K8s的API group。可以通过命令查看:
kubectl get sa -n log-es

解决方法
创建一个 tiller-ServiceAccount.yaml,并使用 kubectl apply -f tiller-ServiceAccount.yaml 创建账号和角色绑定,其中的命名空间换成你自己的(本文为log-es)

vi tiller-ServiceAccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: log-es
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: log-es

kubectl apply -f tiller-ServiceAccount.yaml
和当前运行pod关联,同时修改yaml
kubectl patch ds --namespace log-es filebeat -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

vi filebeat-ServiceAccount.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: log-es
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat
  namespace: log-es
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: log-es
roleRef:
  kind: Role
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat-kubeadm-config
  namespace: log-es
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: log-es
roleRef:
  kind: Role
  name: filebeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
- apiGroups: ["apps"]
  resources:
    - replicasets
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat
  # should be the namespace where filebeat is running
  namespace: log-es
  labels:
    k8s-app: filebeat
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat-kubeadm-config
  namespace: log-es
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""]
    resources:
      - configmaps
    resourceNames:
      - kubeadm-config
    verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: log-es
  labels:
    k8s-app: filebeat

kubectl apply -f filebeat-ServiceAccount.yaml
kubectl get sa -n log-es

nginx模块不好用
在filebeat.yml中启用模块

    filebeat.modules:
      - module: nginx

会报错:
Exiting: module nginx is configured but has no enabled filesets
还需要启用模块的fileset(即把 modules.d/nginx.yml.disabled 改名为 nginx.yml 并启用其中的 access/error):

./filebeat modules enable nginx
./filebeat --modules nginx

相关配置需写在modules.d/nginx.yml中,最小示例见下
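
modules.d/nginx.yml 的一个最小示例(示意,日志路径按实际情况调整):

    - module: nginx
      access:
        enabled: true
        var.paths: ["/var/log/nginx/access*.log"]
      error:
        enabled: true
        var.paths: ["/var/log/nginx/error.log"]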

Filebeat的output
  1、Elasticsearch Output   (Filebeat收集到数据,输出到es里。默认的配置文件里是有的,也可以去官网上去找)
  2、Logstash Output      (Filebeat收集到数据,输出到logstash里。默认的配置文件里是有的,也可以得去官网上去找)
  3、Redis Output       (Filebeat收集到数据,输出到redis里。默认的配置文件里是没有的,得去官网上去找)
  4、File Output         (Filebeat收集到数据,输出到file里。默认的配置文件里是有的,也可以去官网上去找)
  5、Console Output       (Filebeat收集到数据,输出到console里。默认的配置文件里是有的,也可以去官网上去找)

https://www.elastic.co/guide/en/beats/filebeat/8.12/configuring-howto-filebeat.html
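
调试采集结果时,可以临时把输出切换成上面提到的 Console Output(示意;filebeat 同一时刻只能启用一种 output,需先注释掉 output.logstash):

    #output.logstash:
    #  hosts: ["logstash.log-es.svc.cluster.local:5044"]
    output.console:
      pretty: true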

增加logstash来对采集到的原始日志进行业务需要的清洗

vi log-es-logstash-deploy.yaml

apiVersion: apps/v1
#kind: DaemonSet
#kind: StatefulSet
kind: Deployment
metadata:
  name: logstash
  namespace: log-es
  labels:
    app: logstash
spec:
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      terminationGracePeriodSeconds: 30
      #hostNetwork: true
      #dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: logstash
        ports:
        - containerPort: 5044
          name: logstash
        command:
        - logstash
        - '-f'
        - '/etc/logstash_c/logstash.conf'
        image: repo.k8s.local/docker.elastic.co/logstash/logstash:8.11.0
        env:
        - name: TZ
          value: "Asia/Shanghai"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - name: config-volume
          mountPath: /etc/logstash_c/
        - name: config-yml-volume
          mountPath: /usr/share/logstash/config/
        resources: #logstash一定要加上资源限制,避免对其他业务造成资源抢占影响
          limits:
            cpu: 1000m
            memory: 2048Mi
          requests:
            cpu: 512m
            memory: 512Mi
      volumes:
      - name: config-volume
        configMap:
          name: logstash-conf
          items:
          - key: logstash.conf
            path: logstash.conf
      - name: config-yml-volume
        configMap:
          name: logstash-yml
          items:
          - key: logstash.yml
            path: logstash.yml
      nodeSelector:
        ingresstype: ingress-nginx
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

vi log-es-logstash-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: logstash
  annotations: {}
  labels:
    app: logstash
  namespace: log-es
spec:
  #type: NodePort
  type: ClusterIP
  clusterIP: None
  ports:
  - name: http
    port: 5044
    #nodePort: 30044
    protocol: TCP
    targetPort: 5044
  selector:
    app: logstash


vi log-es-logstash-ConfigMap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-conf
  namespace: log-es
  labels:
    app: logstash
data:
  logstash.conf: |-
input {
  beats {
    port => 5044
  }
}
filter {
  if [agent][type] == "filebeat" {
    mutate {
      remove_field => "[agent]"
      remove_field => "[ecs]"
      remove_field => "[log][offset]"
    }
  }
  if [input][type] == "container" {
    mutate {
      remove_field => "[kubernetes][node][hostname]"
      remove_field => "[kubernetes][labels]"
      remove_field => "[kubernetes][namespace_labels]"
      remove_field => "[kubernetes][node][labels]"
    }
  }

  # 处理ingress日志
  if [kubernetes][container][name] == "nginx-ingress-controller" {
    json {
      source => "message"
      target => "ingress_log"
    }
    if [ingress_log][requesttime] {
        mutate {
        convert => ["[ingress_log][requesttime]", "float"]
        }
    }
    if [ingress_log][upstremtime] {
        mutate {
        convert => ["[ingress_log][upstremtime]", "float"]
        }
    }
    if [ingress_log][status] {
        mutate {
        convert => ["[ingress_log][status]", "float"]
        }
    }
    if  [ingress_log][httphost] and [ingress_log][uri] {
        mutate {
          add_field => {"[ingress_log][entry]" => "%{[ingress_log][httphost]}%{[ingress_log][uri]}"}
        }
        mutate{
          split => ["[ingress_log][entry]","/"]
        }
        if [ingress_log][entry][1] {
          mutate{
          add_field => {"[ingress_log][entrypoint]" => "%{[ingress_log][entry][0]}/%{[ingress_log][entry][1]}"}
          remove_field => "[ingress_log][entry]"
          }
        }
        else{
          mutate{
          add_field => {"[ingress_log][entrypoint]" => "%{[ingress_log][entry][0]}/"}
          remove_field => "[ingress_log][entry]"
          }
        }
    }
  }
  # 处理以srv进行开头的业务服务日志
  if [kubernetes][container][name] =~ /^srv*/ {
    json {
      source => "message"
      target => "tmp"
    }
    if [kubernetes][namespace] == "kube-system" {
      drop{}
    }
    if [tmp][level] {
      mutate{
        add_field => {"[applog][level]" => "%{[tmp][level]}"}
      }
      if [applog][level] == "debug"{
        drop{}
      }
    }
    if [tmp][msg]{
      mutate{
        add_field => {"[applog][msg]" => "%{[tmp][msg]}"}
      }
    }
    if [tmp][func]{
      mutate{
      add_field => {"[applog][func]" => "%{[tmp][func]}"}
      }
    }
    if [tmp][cost]{
      if "ms" in [tmp][cost]{
        mutate{
          split => ["[tmp][cost]","m"]
          add_field => {"[applog][cost]" => "%{[tmp][cost][0]}"}
          convert => ["[applog][cost]", "float"]
        }
      }
      else{
        mutate{
        add_field => {"[applog][cost]" => "%{[tmp][cost]}"}
        }
      }
    }
    if [tmp][method]{
      mutate{
      add_field => {"[applog][method]" => "%{[tmp][method]}"}
      }
    }
    if [tmp][request_url]{
      mutate{
        add_field => {"[applog][request_url]" => "%{[tmp][request_url]}"}
      }
    }
    if [tmp][meta._id]{
      mutate{
        add_field => {"[applog][traceId]" => "%{[tmp][meta._id]}"}
      }
    }
    if [tmp][project] {
      mutate{
        add_field => {"[applog][project]" => "%{[tmp][project]}"}
      }
    }
    if [tmp][time] {
      mutate{
      add_field => {"[applog][time]" => "%{[tmp][time]}"}
      }
    }
    if [tmp][status] {
      mutate{
        add_field => {"[applog][status]" => "%{[tmp][status]}"}
      convert => ["[applog][status]", "float"]
      }
    }
  }
  mutate{
    rename => ["kubernetes", "k8s"]
    remove_field => "beat"
    remove_field => "tmp"
    remove_field => "[k8s][labels][app]"
    remove_field => "[event][original]"      
  }
}
output{
    if [source] == "container" {      
      elasticsearch {
        hosts => ["http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200"]
        codec => json
        index => "k8s-logstash-container-%{+YYYY.MM.dd}"
      }
      #stdout { codec => rubydebug }
    }
    if [source] == "ingress-nginx-access" {
      elasticsearch {
        hosts => ["http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200"]
        codec => json
        index => "k8s-logstash-ingress-nginx-access-%{+YYYY.MM.dd}"
      }
      #stdout { codec => rubydebug }
    }
    if [source] == "ingress-nginx-error" {
      elasticsearch {
        hosts => ["http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200"]
        codec => json
        index => "k8s-logstash-ingress-nginx-error-%{+YYYY.MM.dd}"
      }
      #stdout { codec => rubydebug }
    }
    if [source] == "nginx-access" {
      elasticsearch {
        hosts => ["http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200"]
        codec => json
        index => "k8s-logstash-nginx-access-%{+YYYY.MM.dd}"
      }
      #stdout { codec => rubydebug }
    }                  
    #stdout { codec => rubydebug }
}

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-yml
  namespace: log-es
  labels:
    app: logstash
data:
  logstash.yml: |-
    http.host: "0.0.0.0"
    xpack.monitoring.elasticsearch.hosts: http://log-es-elastic-sts-0.es-kibana-svc.log-es.svc.cluster.local:9200

kubectl apply -f log-es-logstash-ConfigMap.yaml
kubectl delete -f log-es-logstash-ConfigMap.yaml
kubectl apply -f log-es-logstash-deploy.yaml
kubectl delete -f log-es-logstash-deploy.yaml
kubectl apply -f log-es-logstash-svc.yaml
kubectl delete -f log-es-logstash-svc.yaml

kubectl apply -f log-es-filebeat-configmap.yaml

kubectl get pod -n log-es -o wide
kubectl get cm -n log-es
kubectl get ds -n log-es
kubectl edit configmap log-es-filebeat-config -n log-es
kubectl get service -n log-es

查看详细

kubectl -n log-es describe pod filebeat-97l85
kubectl -n log-es logs -f logstash-847d7f5b56-jv5jj
kubectl logs -n log-es $(kubectl get pod -n log-es -o jsonpath='{.items[3].metadata.name}') -f
kubectl exec -n log-es -it filebeat-97l85 -- /bin/sh -c 'cat /tmp/filebeat*'

更新configmap后需手动重启pod

kubectl rollout restart ds/filebeat -n log-es
kubectl rollout restart deploy/logstash -n log-es
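
To wait for the restart to finish before checking logs, kubectl rollout status can be used, for example:

```
kubectl rollout status ds/filebeat -n log-es
kubectl rollout status deploy/logstash -n log-es
```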

强制关闭

kubectl delete pod filebeat-fncq9 -n log-es --force --grace-period=0

kubectl exec -it pod/test-pod-1 -n test -- ping logstash.log-es.svc.cluster.local
kubectl exec -it pod/test-pod-1 -n test -- curl http://logstash.log-es.svc.cluster.local:5044


```
#停止服务
kubectl delete -f log-es-filebeat-daemonset.yaml
kubectl delete -f log-es-logstash-deploy.yaml
kubectl delete -f log-es-kibana-sts.yaml
kubectl delete -f log-es-kibana-svc.yaml
```

# filebeat使用syslog接收
log-es-filebeat-configmap.yaml
```
- type: syslog
  format: auto
  id: syslog-id
  enabled: true
  max_message_size: 20KiB
  timeout: 10
  keep_null: true
  processors:
    - drop_fields:
        fields: ["input","agent","ecs.version","log.offset","event","syslog"]
        ignore_missing: true
  protocol.udp:
    host: "0.0.0.0:33514"
  tags: ["web-access"]
  fields:
    source: "syslog-web-access"
```
On the node where the DaemonSet pod runs, check whether the hostPort is actually listening:
netstat -anup
lsof -i
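
For example, filtered to the 33514 UDP port configured above (ss shown as an alternative if netstat is unavailable):

```
netstat -anup | grep 33514
lsof -i UDP:33514
ss -lunp | grep 33514
```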

## pod中安装ping测试
```
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-6.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
sed -i 's/mirrors.aliyun.com/vault.centos.org/g' /etc/yum.repos.d/CentOS-Base.repo
sed -i 's/gpgcheck=1/gpgcheck=0/g' /etc/yum.repos.d/CentOS-Base.repo

yum clean all && yum makecache
yum install iputils

ping 192.168.244.4
```
https://nginx.org/en/docs/syslog.html
nginx支持udp发送,不支持tcp
nginx配置文件
```
access_log syslog:server=192.168.244.7:33514,facility=local5,tag=data_c1gstudiodotnet,severity=info access;
```
filebeat 可以接收到。
```
{
“@timestamp”: “2024-02-20T02:35:41.000Z”,
“@metadata”: {
“beat”: “filebeat”,
“type”: “_doc”,
“version”: “8.11.0”,
“truncated”: false
},
“hostname”: “openresty-php5.2-6cbdff6bbd-7fjdc”,
“process”: {
“program”: “data_c1gstudiodotnet”
},
“host”: {
“name”: “node02.k8s.local”
},
“agent”: {
“id”: “cf964318-5fdc-493e-ae2c-d2acb0bc6ca8”,
“name”: “node02.k8s.local”,
“type”: “filebeat”,
“version”: “8.11.0”,
“ephemeral_id”: “42789eee-3658-4f0f-982e-cb96d18fd9a2”
},
“message”: “10.100.3.80 – – [20/Feb/2024:10:35:41 +0800] \”GET /admin/imgcode/imgcode.php HTTP/1.1\” 200 1271 \”https://data.c1gstudio.net:31443/admin/login.php?1\” \”Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.5735.289 Safari/537.36\” “,
“ecs”: {
“version”: “8.0.0”
},
“log”: {
“source”: {
“address”: “10.244.2.216:37255”
}
},
“tags”: [
“syslog-web-log”
],
“fields”: {
“source”: “syslog-nginx-access”
}
}
```

测试filebeat使用syslog接收
echo "hello" > /dev/udp/192.168.244.4/1514

进阶版,解决filebeat宿主机ip问题。
部署时将node的ip写入hosts中,nginx中使用主机名来通信
```
containers:
- name: openresty-php-fpm5-2-17
  env:
  - name: MY_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  - name: MY_NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  command: ["/bin/sh", "-c", "echo \"$(MY_NODE_IP) MY_NODE_IP\" >> /etc/hosts;/opt/lemp start;cd /opt/init/ && ./inotify_reload.sh "]
```
nginx配置
```
access_log syslog:server=MY_NODE_IP:33514,facility=local5,tag=data_c1gstudiodotnet,severity=info access;
```

# filebeat使用unix socket接收.
filebeat将socket共享给宿主,nginx挂载宿主socket,将消息发送给socket
nginx 配置文件
```
access_log syslog:server=unix:/usr/local/filebeat/filebeat.sock,facility=local5,tag=data_c1gstudiodotnet,severity=info access;
```

Create the shared directory on each node beforehand; if it is created automatically it ends up owned by root with mode 0755 and the pod cannot write to it:
mkdir -m 0777 /localdata/filebeat/socket
chmod 0777 /localdata/filebeat/socket
filebeat配置文件
```
- type: unix
  enabled: true
  id: unix-id
  max_message_size: 100KiB
  path: "/usr/share/filebeat/socket/filebeat.sock"
  socket_type: datagram
  #group: "website"
  processors:
    - syslog:
        field: message
    - drop_fields:
        fields: ["input","agent","ecs","log.syslog.severity","log.syslog.facility","log.syslog.priority"]
        ignore_missing: true
  tags: ["web-access"]
  fields:
    source: "unix-web-access"
```
filebeat的DaemonSet
```
volumeMounts:
- name: fb-socket
  mountPath: /usr/share/filebeat/socket
volumes:
- name: fb-socket
  hostPath:
    path: /localdata/filebeat/socket
    type: DirectoryOrCreate
```

nginx的deployment
```
volumeMounts:
- name: host-filebeat-socket
  mountPath: "/usr/local/filebeat"
volumes:
- name: host-filebeat-socket
  hostPath:
    path: /localdata/filebeat/socket
    type: Directory
```
示例
```
{
“@timestamp”: “2024-02-20T06:17:08.000Z”,
“@metadata”: {
“beat”: “filebeat”,
“type”: “_doc”,
“version”: “8.11.0”
},
“agent”: {
“type”: “filebeat”,
“version”: “8.11.0”,
“ephemeral_id”: “4546cf71-5f33-4f5d-bc91-5f0a58c9b0fd”,
“id”: “cf964318-5fdc-493e-ae2c-d2acb0bc6ca8”,
“name”: “node02.k8s.local”
},
“message”: “10.100.3.80 – – [20/Feb/2024:14:17:08 +0800] \”GET /admin/imgcode/imgcode.php HTTP/1.1\” 200 1356 \”https://data.c1gstudio.net:31443/admin/login.php?1\” \”Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.5735.289 Safari/537.36\” “,
“tags”: [
“web-access”
],
“ecs”: {
“version”: “8.0.0”
},
“fields”: {
“source”: “unix-web-access”
},
“log”: {
“syslog”: {
“hostname”: “openresty-php5.2-78cb7cb54b-bsgt6”,
“appname”: “data_c1gstudiodotnet”
}
},
“host”: {
“name”: “node02.k8s.local”
}
}
```

# ingress 配置syslog
Syslog output can be enabled in the controller ConfigMap, but you still have to solve the node IP problem and decide whether the external ingress shares the same receiver.
It does not support a tag, so you cannot mark the log source; the only identifier is process.program=nginx.
Neither http-snippet nor access-log-params helps here.
```
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: int-ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.5
  name: int-ingress-nginx-controller
  namespace: int-ingress-nginx
data:
  allow-snippet-annotations: "true"
  error-log-level: "warn"
  enable-syslog: "true"
  syslog-host: "192.168.244.7"
  syslog-port: "10514"
```

```
{
“@timestamp”: “2024-02-21T03:42:40.000Z”,
“@metadata”: {
“beat”: “filebeat”,
“type”: “_doc”,
“version”: “8.11.0”,
“truncated”: false
},
“process”: {
“program”: “nginx”
},
“upstream”: {
“status”: “200”,
“response_length”: “1281”,
“proxy_alternative”: “”,
“addr”: “10.244.2.228:80”,
“name”: “data-c1gstudio-net-svc-web-http”,
“response_time”: “0.004”
},
“timestamp”: “2024-02-21T11:42:40+08:00”,
“req_id”: “16b868da1aba50a72f32776b4a2f5cb2”,
“agent”: {
“ephemeral_id”: “64c4b6d1-3d5c-4079-8bda-18d1a0d063a5”,
“id”: “3bd77823-c801-4dd1-a3e5-1cf25874c09f”,
“name”: “master01.k8s.local”,
“type”: “filebeat”,
“version”: “8.11.0”
},
“log”: {
“source”: {
“address”: “192.168.244.4:49244”
}
},
“request”: {
“status”: 200,
“bytes_sent”: “1491”,
“request_time”: “0.004”,
“request_length”: “94”,
“referer”: “https://data.c1gstudio.net:31443/admin/login.php?1”,
“user_agent”: “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.5735.289 Safari/537.36”,
“request_method”: “GET”,
“request_uri”: “/admin/imgcode/imgcode.php”,
“remote_user”: “”,
“protocol”: “HTTP/2.0”,
“remote_port”: “”,
“real_port”: “11464”,
“x-forward-for”: “10.100.3.80”,
“remote_addr”: “10.100.3.80”,
“hostname”: “data.c1gstudio.net”,
“body_bytes_sent”: “1269”,
“real_ip”: “192.168.244.2”,
“server_name”: “data.c1gstudio.net”
},
“ingress”: {
“service_port”: “http”,
“hostname”: “master01.k8s.local”,
“addr”: “192.168.244.4”,
“port”: “443”,
“namespace”: “data-c1gstudio-net”,
“ingress_name”: “ingress-data-c1gstudio-net”,
“service_name”: “svc-web”
},
“message”: “{\”timestamp\”: \”2024-02-21T11:42:40+08:00\”, \”source\”: \”int-ingress\”, \”req_id\”: \”16b868da1aba50a72f32776b4a2f5cb2\”, \”ingress\”:{ \”hostname\”: \”master01.k8s.local\”, \”addr\”: \”192.168.244.4\”, \”port\”: \”443\”,\”namespace\”: \”data-c1gstudio-net\”,\”ingress_name\”: \”ingress-data-c1gstudio-net\”,\”service_name\”: \”svc-web\”,\”service_port\”: \”http\” }, \”upstream\”:{ \”addr\”: \”10.244.2.228:80\”, \”name\”: \”data-c1gstudio-net-svc-web-http\”, \”response_time\”: \”0.004\”, \”status\”: \”200\”, \”response_length\”: \”1281\”, \”proxy_alternative\”: \”\”}, \”request\”:{ \”remote_addr\”: \”10.100.3.80\”, \”real_ip\”: \”192.168.244.2\”, \”remote_port\”: \”\”, \”real_port\”: \”11464\”, \”remote_user\”: \”\”, \”request_method\”: \”GET\”, \”server_name\”: \”data.c1gstudio.net\”,\”hostname\”: \”data.c1gstudio.net\”, \”request_uri\”: \”/admin/imgcode/imgcode.php\”, \”status\”: 200, \”body_bytes_sent\”: \”1269\”, \”bytes_sent\”: \”1491\”, \”request_time\”: \”0.004\”, \”request_length\”: \”94\”, \”referer\”: \”https://data.c1gstudio.net:31443/admin/login.php?1\”, \”user_agent\”: \”Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.5735.289 Safari/537.36\”, \”x-forward-for\”: \”10.100.3.80\”, \”protocol\”: \”HTTP/2.0\”}}”,
“fields”: {
“source”: “syslog-web-access”
},
“source”: “int-ingress”,
“ecs”: {
“version”: “8.0.0”
},
“hostname”: “master01.k8s.local”,
“host”: {
“name”: “master01.k8s.local”
},
“tags”: [
“web-access”
]
}
```

## filebeat @timestamp 时区问题
@timestamp is always stored in UTC, 8 hours behind local (CST) time, and cannot simply be switched to another time zone.
Option 1: parse or format another time field and use it to replace @timestamp.
Option 2: add a locale field with the add_locale processor:
```
processors:
  - add_locale: ~
```
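
For option 1, the timestamp processor can parse an application time field into @timestamp. A sketch, assuming the event carries a hypothetical app.time field in the layout shown (both the field name and the layout are assumptions to adjust):

```
processors:
  - timestamp:
      field: app.time                     # hypothetical field holding the application timestamp
      layouts:
        - '2006-01-02T15:04:05+08:00'     # Go-style reference layout, adjust to the real format
      timezone: 'Asia/Shanghai'
      ignore_missing: true
      ignore_failure: true
```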

https://www.elastic.co/guide/en/beats/filebeat/current/processor-timestamp.html



k8s_安装9_ingress-nginx

九、 ingress-nginx

部署Ingress控制器

部署方式

Ingress控制器可以通过三种方式部署以接入外部请求流量。

第一种方式 Deployment+NodePort模式的Service

通过Deployment控制器管理Ingress控制器的Pod资源,通过NodePort或LoadBalancer类型的Service对象为其接入集群外部的请求流量,所以这种方式在定义一个Ingress控制器时必须在其前端定义一个专用的Service资源.
访问流量先通过nodeport进入到node节点,经iptables (svc) 转发至ingress-controller容器,再根据rule转发至后端各业务的容器中。

同用deployment模式部署ingress-controller,并创建对应的service,但是type为NodePort。这样,ingress就会暴露在集群节点ip的特定端口上。由于nodeport暴露的端口是随机端口,一般会在前面再搭建一套负载均衡器来转发请求。该方式一般用于宿主机是相对固定的环境ip地址不变的场景。
NodePort方式暴露ingress虽然简单方便,但是NodePort多了一层NAT,在请求量级很大时可能对性能会有一定影响。
流量先经过DNS域名解析,然后到达LB,然后流量经过ingress做一次负载分发到service,最后再由service做一次负载分发到对应的pod中

固定nodePort后,LB端指向nodeip+nodeport 任意一个都可以,如果当流量进来负载到某个node上的时候因为Ingress Controller的pod不在这个node上,会走这个node的kube-proxy转发到Ingress Controller的pod上,多转发了一次。nginx接收到的http请求中的source ip将会被转换为接受该请求的node节点的ip,而非真正的client端ip

Deployment 部署的副本 Pod 会分布在各个 Node 上,每个 Node 都可能运行好几个副本,replicas数量(不能大于node节点数) + nodeSelector / podAntiAffinity。DaemonSet 的不同之处在于:每个 Node 上最多只能运行一个副本。

Advantages of the NodePort approach: a cluster only needs a few ingress controllers, and you can map one group of Services to one controller, so each group maintains only its own NGINX configuration inside its own ingress; controllers do not affect each other and each reloads only its own config. The drawback is lower efficiency. With the hostNetwork approach, by contrast, every node's NGINX carries the full configuration that has to be maintained.

第二种方式 DaemonSet+HostNetwork+nodeSelector

通过DaemonSet控制器确保集群的所有或部分工作节点中每个节点上只运行Ingress控制器的一个Pod资源,并配置这类Pod对象以HostPort或HostNetwork的方式在当前节点接入外部流量。
每个节点都创建一个ingress-controller的容器,容器的网络模式设为hostNetwork。访问请求通过80/443端口将直接进入到pod-nginx中。而后nginx根据ingress规则再将流量转发到对应的web应用容器中。

用DaemonSet结合nodeselector来部署ingress-controller到特定的node上,然后使用HostNetwork直接把该pod与宿主机node的网络打通,直接使用宿主机的80/443端口就能访问服务。这时,ingress-controller所在的node机器就很类似传统架构的边缘节点,比如机房入口的nginx服务器。该方式整个请求链路最简单,性能相对NodePort模式更好。缺点是由于直接利用宿主机节点的网络和端口,一个node只能部署一个ingress-controller pod。 比较适合大并发的生产环境使用。

不创建nginx svc,效率最高(不使用nodeport的方式进行暴露)。如果我们使用Nodeport的方式,流量是NodeIP->svc->ingress-controller(pod)这样的话会多走一层svc层,不管svc层是使用iptables还是lvs都会降低效率。如果使用hostNetwork的方式就是直接走Node节点的主机网络,唯一要注意的是hostNetwork下pod会继承宿主机的网络协议,也就是使用了主机的dns,会导致svc的请求直接走宿主机的上到公网的dns服务器而非集群里的dns server,需要设置pod的dnsPolicy: ClusterFirstWithHostNet即可解决

写入proxy 配置文件如 nginx.conf 的不是backend service的地址,而是backend service 的 pod 的地址,避免在 service 在增加一层负载均衡转发

hostNetwork需要占用物理机的80和443端口,80和443端口只能在绑定了的node上访问nodeip+80,没绑定的可以用nodeip+nodeport访问

这种方式可能会存在node间无法通信和集群内域名解析的问题
可以布署多套ingress,区分内外网访问,对业务进行拆分壁免reload影响

第三种方式 Deployment+LoadBalancer 模式的 Service

如果要把ingress部署在公有云,那用这种方式比较合适。用Deployment部署ingress-controller,创建一个 type为 LoadBalancer 的 service 关联这组 pod。大部分公有云,都会为 LoadBalancer 的 service 自动创建一个负载均衡器,通常还绑定了公网地址。 只要把域名解析指向该地址,就实现了集群服务的对外暴露
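
A minimal sketch of such a Service; on a real cloud the LoadBalancer controller fills in the external address (the selector labels follow the ingress-nginx defaults used later in this article):

```
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
```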

ingress-nginx

k8s社区的ingress-nginx https://github.com/kubernetes/ingress-nginx
Ingress参考文档:https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/

kubectl get cs  

部署配置Ingress  
ingress-nginx v1.9.3
k8s supported version 1.28, 1.27,1.26, 1.25
Nginx Version 1.21.6
wget --no-check-certificate https://github.com/kubernetes/ingress-nginx/raw/main/deploy/static/provider/kind/deploy.yaml -O ingress-nginx.yaml

cat ingress-nginx.yaml

apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resourceNames:
  - ingress-nginx-leader
  resources:
  - leases
  verbs:
  - get
  - update
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-admission
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-admission
rules:
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - validatingwebhookconfigurations
  verbs:
  - get
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-admission
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-admission
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: v1
data:
  allow-snippet-annotations: "false"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-controller
  namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
    #nodePort: 30080
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
    #nodePort: 30443
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  ports:
  - appProtocol: https
    name: https-webhook
    port: 443
    targetPort: webhook
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  minReadySeconds: 0
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  strategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.9.3
    spec:
      dnsPolicy: ClusterFirstWithHostNet  #既能使用宿主机DNS,又能使用集群DNS
      hostNetwork: true                   #与宿主机共享网络
      #nodeName: node01.k8s.local              #设置只能在k8snode1节点运行
      tolerations:                        #设置能容忍master污点
      - key: node-role.kubernetes.io/master
        operator: Exists
      containers:
      - args:
        - /nginx-ingress-controller
        - --election-id=ingress-nginx-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        - --watch-ingress-without-class=true
        - --publish-status-address=localhost
        - --logtostderr=false
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        image:  repo.k8s.local/registry.k8s.io/ingress-nginx/controller:v1.9.3
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: controller
        ports:
        - containerPort: 80
          hostPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          hostPort: 443
          name: https
          protocol: TCP
        - containerPort: 8443
          name: webhook
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 101
        volumeMounts:
        - mountPath: /usr/local/certificates/
          name: webhook-cert
          readOnly: true
        - name: timezone
          mountPath: /etc/localtime  
        - name: vol-ingress-logdir
          mountPath: /var/log/nginx
      #dnsPolicy: ClusterFirst
      nodeSelector:
        ingresstype: ingress-nginx
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 0
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Equal
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
        operator: Equal
      volumes:
      - name: timezone       
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai  
      - name: vol-ingress-logdir
        hostPath:
          path: /var/log/nginx
          type: DirectoryOrCreate
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.9.3
      name: ingress-nginx-admission-create
    spec:
      containers:
      - args:
        - create
        - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
        - --namespace=$(POD_NAMESPACE)
        - --secret-name=ingress-nginx-admission
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image:  repo.k8s.local/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
        imagePullPolicy: IfNotPresent
        name: create
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.9.3
      name: ingress-nginx-admission-patch
    spec:
      containers:
      - args:
        - patch
        - --webhook-name=ingress-nginx-admission
        - --namespace=$(POD_NAMESPACE)
        - --patch-mutating=false
        - --secret-name=ingress-nginx-admission
        - --patch-failure-policy=Fail
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image:  repo.k8s.local/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
        imagePullPolicy: IfNotPresent
        name: patch
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-admission
  namespace: ingress-nginx
spec:
  egress:
  - {}
  podSelector:
    matchLabels:
      app.kubernetes.io/component: admission-webhook
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
  - v1
  clientConfig:
    service:
      name: ingress-nginx-controller-admission
      namespace: ingress-nginx
      path: /networking/v1/ingresses
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: validate.nginx.ingress.kubernetes.io
  rules:
  - apiGroups:
    - networking.k8s.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
  sideEffects: None
提取image名称,并在harbor 导入
cat ingress-nginx.yaml |grep image:|sed -e 's/.*image: //'

registry.k8s.io/ingress-nginx/controller:v1.9.3@sha256:8fd21d59428507671ce0fb47f818b1d859c92d2ad07bb7c947268d433030ba98
registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80
registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80
#切换到harbor
docker pull registry.aliyuncs.com/google_containers/nginx-ingress-controller:v1.9.3
docker pull registry.aliyuncs.com/google_containers/kube-webhook-certgen:v20231011-8b53cabe0

docker tag registry.aliyuncs.com/google_containers/nginx-ingress-controller:v1.9.3  repo.k8s.local/registry.k8s.io/ingress-nginx/controller:v1.9.3
docker tag registry.aliyuncs.com/google_containers/kube-webhook-certgen:v20231011-8b53cabe0  repo.k8s.local/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
docker push repo.k8s.local/registry.k8s.io/ingress-nginx/controller:v1.9.3
docker push repo.k8s.local/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0

docker images |grep ingress

修改yaml为私仓地址
方式一
修改image: 为私仓地址

sed -n "/image:/{s/image: /image: repo.k8s.local\//p}" ingress-nginx.yaml
sed -i "/image:/{s/image: /image: repo.k8s.local\//}" ingress-nginx.yaml
去除@sha256验证码
sed -rn "s/(\s*image:.*)@sha256:.*$/\1 /gp" ingress-nginx.yaml
sed -ri "s/(\s*image:.*)@sha256:.*$/\1 /g" ingress-nginx.yaml

方式二

合并执行,替换为私仓并删除SHA256验证

sed -rn "s/(\s*image: )(.*)@sha256:.*$/\1 repo.k8s.local\/\2/gp" ingress-nginx.yaml
sed -ri "s/(\s*image: )(.*)@sha256:.*$/\1 repo.k8s.local\/\2/g" ingress-nginx.yaml

方式三
手动编辑文件
vi ingress-nginx.yaml

cat ingress-nginx.yaml |grep image:
        image:  repo.k8s.local/registry.k8s.io/ingress-nginx/controller:v1.9.3
        image:  repo.k8s.local/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
        image:  repo.k8s.local/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0

Deployment nodeName 方式

通常调式用,分配到指定的node,无法自动调度

vi ingress-nginx.yaml

kind: Deployment

      dnsPolicy: ClusterFirstWithHostNet  #既能使用宿主机DNS,又能使用集群DNS
      hostNetwork: true                   #与宿主机共享网络,接在主机上开辟 80,443端口,无需中间解析,速度更快
      nodeName: node01.k8s.local              #设置只能在k8snode1节点运行

Deployment+nodeSelector 方式

可以调度到带有指定标签的node,可以给node打标签来调度,这里布署ingress更推荐daemonset

vi ingress-nginx.yaml

kind: Deployment
spec:
  replicas: 1 #副本数,默认一个node一个
      dnsPolicy: ClusterFirstWithHostNet  #既能使用宿主机DNS,又能使用集群DNS
      hostNetwork: true                   #与宿主机共享网络,接在主机上开辟 80,443端口,无需中间解析,速度更快
      #nodeName: node01.k8s.local              #设置只能在k8snode1节点运行
      tolerations:                        #设置能容忍master污点
      - key: node-role.kubernetes.io/master
        operator: Exists

      nodeSelector:
        ingresstype: ingress-nginx
        kubernetes.io/os: linux

kind: Service
  ports:
     - name: http
       nodePort: 30080   #固定端口
     - name: https
       nodePort: 30443   #固定端口

DaemonSet 方式

每个node带有标签的node都分配一个

在Deployment:spec指定布署节点
修改Deployment为DaemonSet
修改时同步注释掉

  #replicas: 1
  #strategy:
  #  rollingUpdate:
  #    maxUnavailable: 1
  #  type: RollingUpdate

修改DaemonSet的nodeSelector为 ingresstype: ingress-nginx 。这样只需要给node节点打上 ingresstype=ingress-nginx 标签,即可快速地加入/剔除 ingress-controller 节点

vi ingress-nginx.yaml

kind: DaemonSet

      dnsPolicy: ClusterFirstWithHostNet  #既能使用宿主机DNS,又能使用集群DNS
      hostNetwork: true                   #与宿主机共享网络,接在主机上开辟 80,443端口,无需中间解析,速度更快,netstat 可以看到端口
      #nodeName: node01.k8s.local              #设置只能在k8snode1节点运行
      tolerations:                        #设置能容忍master污点,充许布到master
      - key: node-role.kubernetes.io/master
        operator: Exists
      nodeSelector:
        ingresstype: ingress-nginx
        kubernetes.io/os: linux

NodeSelector 只是一种简单的调度策略,更高级的调度策略可以使用 Node Affinity 和 Node Taints 等机制来实现
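
For comparison, the same node selection written as required node affinity, using the ingresstype label from this article (a sketch, not part of the official manifest):

```
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: ingresstype
            operator: In
            values:
            - ingress-nginx
```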

kubectl apply -f ingress-nginx.yaml

kubectl delete -f ingress-nginx.yaml

kubectl get pods -A
NAMESPACE              NAME                                                    READY   STATUS      RESTARTS        AGE
default                nfs-client-provisioner-db4f6fb8-gnnbm                   1/1     Running     0               24h
ingress-nginx          ingress-nginx-admission-create-wxtlz                    0/1     Completed   0               103s
ingress-nginx          ingress-nginx-admission-patch-8fw72                     0/1     Completed   1               103s
ingress-nginx          ingress-nginx-controller-57c98745dd-2rn7m               0/1     Pending     0               103s

查看详情

kubectl -n ingress-nginx describe pod ingress-nginx-controller-57c98745dd-2rn7m

didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
无法调度到正确Node
影响调度的因素:nodename、NodeSelector、Node Affinity、Pod Affinity、taint和tolerations
这里还没有给node打标签ingresstype=ingress-nginx,所以不能调度

如果布署成功可以看到分配到哪台node

Service Account: ingress-nginx
Node: node01.k8s.local/192.168.244.5

查看nodes

kubectl get nodes

NAME                 STATUS   ROLES           AGE     VERSION
master01.k8s.local   Ready    control-plane   9d      v1.28.2
node01.k8s.local     Ready    <none>          9d      v1.28.2
node02.k8s.local     Ready    <none>          2d23h   v1.28.2

查看node是否被打污点

kubectl describe nodes | grep Tain

Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Taints:             <none>
Taints:             <none>

查看node的标签

kubectl get node --show-labels

NAME                 STATUS   ROLES           AGE     VERSION   LABELS
master01.k8s.local   Ready    control-plane   12d     v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01.k8s.local,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node01.k8s.local     Ready    <none>          12d     v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01.k8s.local,kubernetes.io/os=linux
node02.k8s.local     Ready    <none>          6d16h   v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02.k8s.local,kubernetes.io/os=linux
kubectl get node -l "ingresstype=ingress-nginx" -n ingress-nginx --show-labels
kubectl get node -l "beta.kubernetes.io/arch=amd64" -n ingress-nginx --show-labels

查看是否是node资源不足

在 Linux 中查看实际剩余的 cpu

kubectl describe node |grep -E '((Name|Roles):\s{6,})|(\s+(memory|cpu)\s+[0-9]+\w{0,2}.+%\))'

给需要的node节点上部署ingress-controller:

因为我们使用的是DaemonSet模式,所以理论上会为所有的节点都安装,但是由于我们的selector使用了筛选标签:ingresstype=ingress-nginx ,所以此时所有的node节点都没有被执行安装;
kubectl get ds -n ingress-nginx

NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                     AGE
ingress-nginx-controller   0         0         0       0            0           ingresstype=ingress-nginx,kubernetes.io/os=linux   60s

查询详细

kubectl describe ds -n ingress-nginx

当我们需要为所有的node节点安装ingress-controller的时候,只需要为对应的节点打上标签:ingresstype=ingress-nginx

kubectl label node node01.k8s.local ingresstype=ingress-nginx
kubectl label node node02.k8s.local ingresstype=ingress-nginx
kubectl label node master01.k8s.local ingresstype=ingress-nginx

修改标签,需要增加--overwrite这个参数(为了与增加标签的命令做区分)

kubectl label node node01.k8s.local ingresstype=ingress-nginx --overwrite

删除node的标签

kubectl label node node01.k8s.local node-role-
kubectl label node node01.k8s.local ingress-

kubectl get node --show-labels

NAME                 STATUS   ROLES           AGE     VERSION   LABELS
master01.k8s.local   Ready    control-plane   13d     v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingresstype=ingress-nginx,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01.k8s.local,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node01.k8s.local     Ready    <none>          13d     v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingresstype=ingress-nginx,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01.k8s.local,kubernetes.io/os=linux
node02.k8s.local     Ready    <none>          6d19h   v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingresstype=ingress-nginx,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02.k8s.local,kubernetes.io/os=linux

查看当前有3个节点

kubectl get ds -n ingress-nginx

NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR               AGE
ingress-nginx-controller   3         3         3       3            3           ingresstype=ingress-nginx,kubernetes.io/os=linux   25s

删除标签后,主节点就被移除了,只有2个节点
kubectl label node master01.k8s.local ingresstype-

kubectl get pods -A

NAMESPACE              NAME                                                    READY   STATUS      RESTARTS       AGE
default                nfs-client-provisioner-db4f6fb8-gnnbm                   1/1     Running     10 (26h ago)   4d20h
ingress-nginx          ingress-nginx-admission-create-zxz7j                    0/1     Completed   0              3m35s
ingress-nginx          ingress-nginx-admission-patch-xswhk                     0/1     Completed   1              3m35s
ingress-nginx          ingress-nginx-controller-7j4nz                          1/1     Running     0              3m35s
ingress-nginx          ingress-nginx-controller-g285w                          1/1     Running     0              3m35s

kubectl get svc -n ingress-nginx

NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.96.116.106   <none>        80:30080/TCP,443:30443/TCP   8m26s
ingress-nginx-controller-admission   ClusterIP   10.96.104.116   <none>        443/TCP                      8m26s

kubectl -n ingress-nginx describe pod ingress-nginx-controller-7f6c656666-gn4f2

Warning  FailedMount  112s  kubelet                   MountVolume.SetUp failed for volume "webhook-cert" : secret "ingress-nginx-admission" not found

When changing from Deployment to DaemonSet, you can edit the yaml and apply it directly if the nodes have enough free resources. If they do not (for example the host ports are still held by the old pods), the new pods stay Pending while the old ones keep running, with a message like:
Warning FailedScheduling 33s default-scheduler 0/3 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..
After manually deleting the old pods, the new pods start automatically; note that until the old Deployment itself is deleted it keeps recreating pods that stay Pending.
kubectl -n ingress-nginx delete pod ingress-nginx-controller-6c95999b7f-njzvr

创建一个nginx测试

准备镜像
docker pull docker.io/library/nginx:1.21.4
docker tag docker.io/library/nginx:1.21.4 repo.k8s.local/library/nginx:1.21.4
docker push repo.k8s.local/library/nginx:1.21.4
nginx yaml文件
#使用Deployment+nodeName+hostPath,指定分配到node01上
cat > test-nginx-hostpath.yaml << EOF ... EOF
cat > svc-test-nginx-nodeport.yaml << EOF ... EOF
cat > svc-test-nginx-clusterip.yaml << EOF ... EOF

Ingress规则,将ingress和service绑一起
podip和clusterip都不固定,但是service name是固定的
namespace 要一致

注意1.22版前,yaml格式有差异

apiVersion: extensions/v1beta1
    backend:
      serviceName: svc-test-nginx
      servicePort: 80
cat > ingress-svc-test-nginx.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc-test-nginx
  annotations:
    #kubernetes.io/ingress.class: "nginx"
  namespace: test
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: svc-test-nginx
            port:
              number: 31080
EOF
在node1 上创建本地文件夹,后续pod因spec:nodeName: 会分配到此机。  
mkdir -p /nginx/{html,logs,conf.d}

#生成一个首页
hostname > /nginx/html/index.html
date >> /nginx/html/index.html

#生成ingress测试页
mkdir  /nginx/html/testpath/
hostname > /nginx/html/testpath/index.html

kubectl apply -f test-nginx-hostpath.yaml
kubectl delete -f test-nginx-hostpath.yaml

#service nodeport/clusterip 两者选一
kubectl apply -f svc-test-nginx-nodeport.yaml
kubectl delete -f svc-test-nginx-nodeport.yaml

#service clusterip
kubectl apply -f svc-test-nginx-clusterip.yaml
kubectl delete -f svc-test-nginx-clusterip.yaml

kubectl apply -f ingress-svc-test-nginx.yaml
kubectl delete -f ingress-svc-test-nginx.yaml
kubectl describe ingress ingress-svc-test-nginx -n test

kubectl get pods -n test
kubectl describe  -n test pod nginx-deploy-5bc84b775f-hnqll 
kubectl get svc -A

注意:pod重启后 conf.d/ 下的文件会被重写,html/ 和 logs/ 不会被覆盖

ll /nginx/conf.d/ 

total 4
-rw-r--r-- 1 root root 1072 Oct 26 11:06 default.conf

cat /nginx/conf.d/default.conf
server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;

    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

使用nodeport
kubectl get service -n test

NAME             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
svc-test-nginx   NodePort   10.96.148.126   <none>        31080:30003/TCP   20s

使用clusterip
kubectl get service -n test

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
svc-test-nginx   ClusterIP   10.96.209.131   <none>        31080/TCP   80s

总结一下如何访问到pod内web服务

ip规则
node节点:192.168.244.0/24
pod网段:10.244.0.0/16
service网段(集群网段):10.96.0.0/12

ingress为HostNetwork模式

集群内外可以访问到ingress匹配到的node上的nodeip+80和443
curl http://192.168.244.5:80/

 集群内外通过service nodeport访问任意nodeip+nodePort

ingress service 的nodeip+nodeport
此例中30080为ingress的nodeport
curl http://192.168.244.4:30080/testpath/
node01.k8s.local

nginx service 的nodeip+nodeport
service为 nodeport 在集群内或外部使用任意nodeip+nodeport,访问pod上的nginx
curl http://192.168.244.5:30003
node01.k8s.local
Thu Oct 26 11:11:00 CST 2023

 集群内通过service clusterip

ingress service 的clusterip+clusterport
curl http://10.96.111.201:80/testpath/
node01.k8s.local

nginx service 的clusterip+clusterport
在集群内使用clusterip+cluster port,也就是service 访问内部nginx,只有集群内能访问;ClusterIP 在 Service 重建时才会变化,Pod 重启并不影响,测试使用
curl http://10.96.148.126:31080
node01.k8s.local
Thu Oct 26 11:11:00 CST 2023

集群内通过pod ip

nginx podip+port
curl http://10.244.1.93:80

pod内可以用service域名来访问

curl http://svc-test-nginx:31080
curl http://svc-test-nginx.test:31080
curl http://svc-test-nginx.test.svc.cluster.local:31080
curl http://10.96.148.126:31080


在node1上可以看到访问日志,注意日期的时区是不对的

tail -f /nginx/logs/access.log

10.244.0.0 - - [26/Oct/2023:03:11:04 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.5735.289 Safari/537.36" "-"

pod中时区问题
时区可以在yaml中用hostpath指到宿主机的时区文件

        volumeMounts:
        - name: timezone
          mountPath: /etc/localtime  
      volumes:
      - name: timezone       
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai  

kubectl get pods -o wide -n test

进入容器

kubectl exec -it pod/nginx-deploy-886d78bd5-wlk5l -n test -- /bin/sh

Ingress-nginx 组件添加和设置 header

Ingress-nginx 可以通过 snippets注解 的方式配置,但为了安全起见,默认情况下,snippets注解 不允许的
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#allow-snippet-annotations
这种方式只能给具体的 ingress 资源配置,如果需要给所有ingress 接口配置就很麻烦, 维护起来很不优雅.所以推荐通过官方提供的 自定义Header 的方式来配置
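
That presumably refers to the proxy-set-headers / add-headers mechanism: a separate ConfigMap holds the headers and the controller ConfigMap points at it. A sketch (the ConfigMap name is an assumption):

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-proxy-headers        # assumed name
  namespace: ingress-nginx
data:
  X-Custom-Header: "some-value"     # headers to pass to all upstream pods
---
# Then reference it from the ingress-nginx-controller ConfigMap:
# data:
#   proxy-set-headers: "ingress-nginx/custom-proxy-headers"
```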

https://help.aliyun.com/zh/ack/ack-managed-and-ack-dedicated/user-guide/install-the-nginx-ingress-controller-in-high-load-scenarios
ingress默认会丢弃不标准的http头
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#proxy-real-ip-cidr
解决:configmaps添加

data:
 enable-underscores-in-headers: "true"
#注意文本中含有nginx变量,使用vi 编辑模式创建/修改,不要用heredoc,以免shell展开$变量.
#realip 生效在http段,snippet生效在server段
vi ingress-nginx-ConfigMap.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: "true"
  worker-processes: "auto" #worker_processes
  server-name-hash-bucket-size: "128" #server_names_hash_bucket_size
  variables-hash-bucket-size: "256" #variables_hash_bucket_size
  variables-hash-max-size: "2048" #variables_hash_max_size
  client-header-buffer-size: "32k" #client_header_buffer_size
  proxy-body-size: "8m" #client_max_body_size
  large-client-header-buffers: "4 512k" #large_client_header_buffers
  client-body-buffer-size: "512k" #client_body_buffer_size
  proxy-connect-timeout : "5" #proxy_connect_timeout
  proxy-read-timeout: "60" #proxy_read_timeout
  proxy-send-timeout: "5" #proxy_send_timeout
  proxy-buffer-size: "32k" #proxy_buffer_size
  proxy-buffers-number: "8 32k" #proxy_buffers
  keep-alive: "60" #keepalive_timeout
  enable-real-ip: "true" 
  #use-forwarded-headers: "true"
  forwarded-for-header: "ns_clientip" #real_ip_header
  compute-full-forwarded-for: "true"
  enable-underscores-in-headers: "true" #underscores_in_headers on
  proxy-real-ip-cidr: 192.168.0.0/16,10.244.0.0/16  #set_real_ip_from
  access-log-path: "/var/log/nginx/access_ext_ingress_$hostname.log"
  error-log-path: "/var/log/nginx/error_ext_ingress.log"
  log-format-escape-json: "true"
  log-format-upstream: '{"timestamp": "$time_iso8601", "req_id": "$req_id", 
    "geoip_country_code": "$geoip_country_code", "request_time": "$request_time", 
    "ingress":{ "hostname": "$hostname", "addr": "$server_addr", "port": "$server_port","namespace": "$namespace","ingress_name": "$ingress_name","service_name": "$service_name","service_port": "$service_port" }, 
    "upstream":{ "addr": "$upstream_addr", "name": "$proxy_upstream_name", "response_time": "$upstream_response_time", 
    "status": "$upstream_status", "response_length": "$upstream_response_length", "proxy_alternative": "$proxy_alternative_upstream_name"}, 
    "request":{ "remote_addr": "$remote_addr", "real_ip": "$realip_remote_addr", "remote_port": "$remote_port", "real_port": "$realip_remote_port", 
    "remote_user": "$remote_user", "request_method": "$request_method", "hostname": "$host", "request_uri": "$request_uri", "status": $status, 
    "body_bytes_sent": "$body_bytes_sent", "request_length": "$request_length", "referer": "$http_referer", "user_agent": "$http_user_agent",
    "x-forward-for": "$proxy_add_x_forwarded_for", "protocol": "$server_protocol"}}'

创建/关闭 ConfigMap

kubectl apply -f ingress-nginx-ConfigMap.yaml -n ingress-nginx
直接生效,不需重启pod
kubectl delete -f ingress-nginx-ConfigMap.yaml

kubectl get pods -o wide -n ingress-nginx

ingress-nginx-controller-kr8jd         1/1     Running     6 (7m26s ago)   13m     192.168.244.7   node02.k8s.local   

查看ingress-nginx 配制文件

kubectl describe pod/ingress-nginx-controller-z5b4f -n ingress-nginx 
kubectl exec -it pod/ingress-nginx-controller-z5b4f -n ingress-nginx -- /bin/sh
kubectl exec -it pod/ingress-nginx-controller-kr8jd -n ingress-nginx -- head -n200 /etc/nginx/nginx.conf
kubectl exec -it pod/ingress-nginx-controller-kr8jd -n ingress-nginx -- cat /etc/nginx/nginx.conf
kubectl exec -it pod/ingress-nginx-controller-z5b4f -n ingress-nginx -- tail /var/log/nginx/access.log

kubectl exec -it pod/ingress-nginx-controller-kr8jd -n ingress-nginx -- head -n200 /etc/nginx/nginx.conf|grep client_body_buffer_size

客户端->CDN->WAF->SLB->Ingress->Pod

realip

方式一 kind: ConfigMap

  enable-real-ip: "true" 
  #use-forwarded-headers: "true"
  forwarded-for-header: "ns_clientip" #real_ip_header
  compute-full-forwarded-for: "true"
  enable-underscores-in-headers: "true" #underscores_in_headers on
  proxy-real-ip-cidr: 192.168.0.0/16,10.244.0.0/16  #set_real_ip_from

方式二 server-snippet
kind: ConfigMap中打开
allow-snippet-annotations: "true"

kubectl edit configmap -n ingress-nginx ingress-nginx-controller

#ingress关联server-snippet
#realip 会在server 段对全域名生效
#ip 白名单 whitelist-source-range 只会在 location = /showvar 段生效(生成 allow 223.2.2.0/24;deny all;),使用 remote_addr 判定,需要全域名白名单时才用。

test-openresty-ingress-snippet.yaml
用 cat 写入时,内容中含有变量需转义为 \$;用 vi 编辑则不用转义
cat > test-openresty-ingress-snippet.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc-openresty
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/server-snippet: |
      underscores_in_headers on;
      set_real_ip_from 10.244.0.0/16;
      set_real_ip_from 192.168.0.0/16;
      real_ip_header ns_clientip;
      #real_ip_recursive on;
      proxy_set_header                X-Forwarded-For \$proxy_add_x_forwarded_for;
      add_header      Access-Control-Allow-Headers    \$http_Access_Control_Request_Headers    always;
      add_header      Access-Control-Allow-Origin     \$http_Origin    always;
      add_header      Access-Control-Allow-Credentials        'false' always;
      add_header      Access-Control-Allow-Methods    '*'     always;
      if (\$request_method = 'OPTIONS') {
            return 204;
      }
    nginx.ingress.kubernetes.io/whitelist-source-range: 127.0.0.1/32,192.168.0.0/16,10.244.0.0/16,223.2.2.0/24
  namespace: test
spec:
  rules:
  - http:
      paths:
      - path: /showvar
        pathType: Prefix
        backend:
          service:
            name: svc-openresty
            port:
              number: 31080
EOF

kubectl apply -f test-openresty-ingress-snippet.yaml
  1. enable-real-ip:
    enable-real-ip: "true"
    打开real-ip
    生成的代码

        real_ip_header      X-Forwarded-For;
        real_ip_recursive   on;
        set_real_ip_from    0.0.0.0/0;
  2. use-forwarded-headers:
    use-forwarded-headers: "false" 适用于 Ingress 前无代理层,例如直接挂在 4 层 SLB 上,ingress 默认重写 X-Forwarded-For 为 $remote_addr ,可防止伪造 X-Forwarded-For
    use-forwarded-headers: "true" 适用于 Ingress 前有代理层,风险是可以伪造X-Forwarded-For
    生成的代码

        real_ip_header      X-Forwarded-For;
        real_ip_recursive   on;
        set_real_ip_from    0.0.0.0/0;
  3. enable-underscores-in-headers:
    enable-underscores-in-headers: "true"
    是否允许 header 头中使用非标准的下划线 _,缺省默认为"false";如需允许 X_FORWARDED_FOR 这类头,请设为"true"。
    相当于nginx的 underscores_in_headers on;

  4. forwarded-for-header
    默认值 X-Forwarded-For,标识客户端的原始 IP 地址的 Header 字段, 如自定义的header头 X_FORWARDED_FOR
    forwarded-for-header: "X_FORWARDED_FOR"
    相当于nginx的real_ip_header

  5. compute-full-forwarded-for
    默认情况下会用 remote address 替换 X-Forwarded-For;
    设为 "true" 时会将 remote address 附加到 X-Forwarded-For Header 而不是替换它。

  6. proxy-real-ip-cidr
    如果启用 use-forwarded-headers 或 use-proxy-protocol,可以用该参数定义外部负载均衡器、网络代理、CDN 等的地址,多个地址以逗号分隔的 CIDR 表示。默认值:"0.0.0.0/0"
    相当于nginx的 set_real_ip_from

  7. external-traffic-policy
    Cluster模式:默认模式,kube-proxy 不管容器实例在哪个节点,公平转发,会做一次 SNAT,所以源 IP 变成了转发节点的 IP 地址。
    Local模式:流量只发给本机的 Pod,kube-proxy 转发时会保留源 IP,性能(时延)好。
    这种模式下的 Service 类型只能为外部流量型,即 LoadBalancer 或者 NodePort 两种,否则会报错。
    由于 Local 模式下本机不会跨节点转发报文,要想所有节点上的容器都有负载,需要依赖上一级的 LoadBalancer 来实现。

开启 realip 后,http_x_forwarded_for 的值会被 remote_addr 取代;
如果 compute-full-forwarded-for: "true",那么 remote_addr 会被追加到其右侧。
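
按方式一的 ConfigMap(enable-real-ip、forwarded-for-header: ns_clientip、proxy-real-ip-cidr)渲染后,nginx.conf 中大致会生成如下指令(示意,实际以容器内 /etc/nginx/nginx.conf 为准):

        real_ip_header      ns_clientip;
        real_ip_recursive   on;
        set_real_ip_from    192.168.0.0/16;
        set_real_ip_from    10.244.0.0/16;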

日志

access-log-path: /var/log/nginx/access.log
/var/log/nginx/access.log -> /dev/stdout

error-log-path:/var/log/nginx/error.log
/var/log/nginx/error.log->/dev/stderr

kubectl get ds -A
kubectl get ds -n ingress-nginx ingress-nginx-controller -o=yaml
将当前 daemonset 部署的 ingress-nginx-controller 导出成单独的 yaml,方便修改
kubectl get ds -n ingress-nginx ingress-nginx-controller -o=yaml > ingress-nginx-deployment.yaml

在每个 node 上创建日志目录;hostPath 自动创建的目录属主为 root:root,ingress 没有权限写入,所以手动创建并放开权限
mkdir -p /var/log/nginx/
chmod 777 /var/log/nginx/

Error: exit status 1
nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied)
nginx: the configuration file /tmp/nginx/nginx-cfg1271722019 syntax is ok
2023/11/02 14:05:02 [emerg] 34#34: open() "/var/log/nginx/access.log" failed (13: Permission denied)
nginx: configuration file /tmp/nginx/nginx-cfg1271722019 test failed

kind: Deployment 中 关闭 logtostderr

  • --logtostderr=false
    示例:

      containers:
      - args:
        - /nginx-ingress-controller
        - --election-id=ingress-nginx-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        - --watch-ingress-without-class=true
        - --publish-status-address=localhost
        - --logtostderr=false

挂载到宿主目录

        volumeMounts:
        - name: timezone
          mountPath: /etc/localtime  
        - name: vol-ingress-logdir
          mountPath: /var/log/nginx
      volumes:
      - name: timezone       
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai  
      - name: vol-ingress-logdir
        hostPath:
          path: /var/log/nginx
          type: DirectoryOrCreate

创建/关闭 ingress-nginx-deployment

kubectl apply -f ingress-nginx-deployment.yaml

kubectl get pods -o wide -n ingress-nginx

默认日志格式

        log_format upstreaminfo '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';

tail -f /var/log/nginx/access.log
3.2.1.5 - - [02/Nov/2023:14:11:26 +0800] "GET /showvar/?2 HTTP/1.1" 200 316 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.5735.289 Safari/537.36" 764 0.000 [test-svc-openresty-31080] [] 10.244.2.46:8089 316 0.000 200 a9051a75e20e164f1838740e12fa95e3
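
如果按前文 ConfigMap 配置了 JSON 格式的 log-format-upstream,可以在挂载了日志目录的 node 上快速校验落盘日志是否为合法 JSON(示例命令,假设节点上装有 python3,也可换用 jq;access-log-path 中的 $hostname 会被替换为实际节点名):

tail -n1 /var/log/nginx/access_ext_ingress_*.log | python3 -m json.tool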

SpringCloud 微服务 RuoYi-Cloud 部署文档(DevOps版)(2023-10-18) argo-rollouts + istio(金丝雀发布)(渐进式交付)
https://blog.csdn.net/weixin_44797299/article/details/133923956

server-snippet 访问验证 和URL重定向(permanent):

通过Ingress注解nginx.ingress.kubernetes.io/server-snippet配置location,访问/sre,返回401错误代码

cat > test-openresty-ingress-snippet.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc-openresty
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/server-snippet: |
      underscores_in_headers on;
      set_real_ip_from 10.244.0.0/16;
      set_real_ip_from 192.168.0.0/16;
      real_ip_header ns_clientip;
      #real_ip_recursive on;      
      location /sre {
        return 401;
      }
      rewrite ^/baidu.com$ https://www.baidu.com redirect;
    nginx.ingress.kubernetes.io/whitelist-source-range: 127.0.0.1/32,192.168.0.0/16,10.244.0.0/16,223.2.2.0/24
  namespace: test
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /showvar
        pathType: Prefix
        backend:
          service:
            name: svc-openresty
            port:
              number: 31080
EOF

kubectl apply -f test-openresty-ingress-snippet.yaml

curl http://192.168.244.7:80/sre/
返回 nginx 的 401 Authorization Required 页面

curl http://192.168.244.7:80/baidu.com
返回 302 Found,跳转到 https://www.baidu.com

configuration-snippet

本例在上面 server-snippet 示例的基础上,增加 nginx.ingress.kubernetes.io/denylist-source-range 注解,并用 configuration-snippet 把配置扩展到 location 段:
cat > test-openresty-ingress-snippet.yaml << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc-openresty
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/server-snippet: |
      underscores_in_headers on;
      set_real_ip_from 10.244.0.0/16;
      set_real_ip_from 192.168.0.0/16;
      real_ip_header ns_clientip;
      real_ip_recursive on;
      location /sre {
        return 401;
      }
      rewrite ^/baidu.com$ https://www.baidu.com redirect;
    nginx.ingress.kubernetes.io/whitelist-source-range: 127.0.0.1/32,192.168.0.0/16,10.244.0.0/16,223.2.2.0/24
    nginx.ingress.kubernetes.io/denylist-source-range: 223.2.3.0/24
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Pass \$proxy_x_pass;
      rewrite ^/v6/(.*)/card/query http://foo.bar.com/v7/#!/card/query permanent;
  namespace: test
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /showvar
        pathType: Prefix
        backend:
          service:
            name: svc-openresty
            port:
              number: 31080
EOF
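
该 yaml 可按前文同样方式应用并检查注解是否生效(示例命令,pod 名以实际为准):

kubectl apply -f test-openresty-ingress-snippet.yaml
kubectl describe ingress ingress-svc-openresty -n test
kubectl exec -it pod/ingress-nginx-controller-kr8jd -n ingress-nginx -- cat /etc/nginx/nginx.conf | grep -A3 'location /sre'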

配置HTTPS服务转发到后端容器为HTTPS协议

Nginx Ingress Controller默认使用HTTP协议转发请求到后端业务容器。当您的业务容器为HTTPS协议时,可以通过使用注解nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"来使得Nginx Ingress Controller使用HTTPS协议转发请求到后端业务容器。


apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-https
  annotations:
    #注意这里:必须指定后端服务为HTTPS服务。
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - <YOUR-DOMAIN>                #示例占位:替换为实际域名
    secretName: <YOUR-TLS-SECRET>  #示例占位:替换为实际的 TLS Secret 名称
  rules:
  - host: <YOUR-DOMAIN>
    http:
      paths:
      - path: /
        backend:
          service:
            name: <YOUR-SERVICE>   #示例占位:替换为后端 Service 名称
            port:
              number: <YOUR-PORT>  #示例占位:替换为后端 Service 端口
        pathType: ImplementationSpecific
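
布署前可以先在集群内直接验证后端确实是 HTTPS 服务(示例命令,镜像与占位符均为假设,按实际环境替换):

kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
  curl -k https://<YOUR-SERVICE>.<NAMESPACE>.svc:<YOUR-PORT>/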

配置域名支持正则化

在Kubernetes集群中,Ingress资源不支持对域名配置正则表达式,但是可以通过nginx.ingress.kubernetes.io/server-alias注解来实现。
创建Nginx Ingress,以正则表达式~^www.\d+.example.com为例。

cat <<-EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-regex
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/server-alias: '~^www\.\d+\.example\.com$, abc.example.com'
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          service:
            name: http-svc1
            port:
              number: 80
        pathType: ImplementationSpecific
EOF

配置域名支持泛化

在Kubernetes集群中,Nginx Ingress资源支持对域名配置泛域名,例如,可配置 *.ingress-regex.com 泛域名。

cat <<-EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-regex
  namespace: default
spec:
  rules:
  - host: "*.ingress-regex.com"
    http:
      paths:
      - path: /foo
        backend:
          service:
            name: http-svc1
            port:
              number: 80
        pathType: ImplementationSpecific
EOF

通过注解实现灰度发布

灰度发布功能可以通过设置注解来实现,为了启用灰度发布功能,需要设置注解nginx.ingress.kubernetes.io/canary: "true",通过不同注解可以实现不同的灰度发布功能:

nginx.ingress.kubernetes.io/canary-weight:设置请求到指定服务的百分比(值为0~100的整数)。

nginx.ingress.kubernetes.io/canary-by-header:基于Request Header的流量切分。当配置的header值为always时,请求流量会被分配到灰度服务入口;当header值为never时,请求流量不会分配到灰度服务;其他header值将被忽略,并按灰度优先级将请求流量分配到其他规则设置的灰度服务。

nginx.ingress.kubernetes.io/canary-by-header-value和nginx.ingress.kubernetes.io/canary-by-header:当请求中的header和header-value与设置的值匹配时,请求流量会被分配到灰度服务入口;其他header值将被忽略,并按灰度优先级将请求流量分配到其他规则设置的灰度服务。

nginx.ingress.kubernetes.io/canary-by-cookie:基于Cookie的流量切分。当配置的cookie值为always时,请求流量将被分配到灰度服务入口;当配置的cookie值为never时,请求流量将不会分配到灰度服务入口。
基于Header灰度(自定义header值):当请求Header为ack: alibaba时将访问灰度服务;其它Header将根据灰度权重将流量分配给灰度服务。

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"
    nginx.ingress.kubernetes.io/canary-by-header: "ack"
    nginx.ingress.kubernetes.io/canary-by-header-value: "alibaba"
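
一个较完整的灰度 Ingress 草图如下(服务名、域名均为假设,需要与线上稳定版 Ingress 的 host、path 保持一致才会按权重/Header 分流):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-canary
  namespace: test
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"
spec:
  ingressClassName: nginx
  rules:
  - host: demo.k8s.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-canary
            port:
              number: 80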

默认后端

nginx.ingress.kubernetes.io/default-backend: <svc name>

给后端传递header ns_clientip

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header ns-clientip $remote_addr;

或者是如下这种:

nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "Request-Id: $req_id";

由于开启了realip,forwarded-for-header: "ns_clientip".
ns_clientip不再传给上游,这里再次指定传递
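
可以借助前文的 svc-openresty /showvar 接口做个简单验证(示例命令,假设 /showvar 会回显收到的请求头,入口 IP 按实际环境替换):

curl -H "ns_clientip: 223.2.2.10" http://192.168.244.7:80/showvar/
#在返回内容中确认后端收到的 ns-clientip / x-forwarded-for 等头是否符合预期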

跨域(CORS)注解与全局 ConfigMap 选项

nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS" #Default: GET, PUT, POST, DELETE, PATCH, OPTIONS
nginx.ingress.kubernetes.io/cors-allow-headers: "Origin,User-Agent,Authorization, Content-Type, If-Match, If-Modified-Since, If-None-Match, If-Unmodified-Since, X-CSRF-TOKEN, X-Requested-With,token" #Default: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization
nginx.ingress.kubernetes.io/cors-expose-headers: "" #Default: empty
nginx.ingress.kubernetes.io/cors-allow-origin: "http://wap.bbs.yingjiesheng.com, https://wap.bbs.yingjiesheng.com " # Default: *
nginx.ingress.kubernetes.io/cors-allow-credentials: "true" #Default: true
nginx.ingress.kubernetes.io/cors-max-age: "1728000" #Default: 1728000
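
假设已把上述 CORS 注解加到 /showvar 的 Ingress 上,可以用 OPTIONS 预检请求验证响应头(示例命令,Origin 取自上面 cors-allow-origin 的配置):

curl -i -X OPTIONS http://192.168.244.7/showvar/ \
  -H "Origin: http://wap.bbs.yingjiesheng.com" \
  -H "Access-Control-Request-Method: POST"
#确认响应中带有 Access-Control-Allow-Origin / Access-Control-Allow-Methods 等头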

main-snippet string ""
http-snippet string ""
server-snippet string ""
stream-snippet string ""
location-snippet string ""

otel-service-name string "nginx"
otel-service-name : "gateway"

添加自定义header
proxy-set-headers
https://kubernetes.github.io/ingress-nginx/examples/customization/custom-headers/

打开realip
enable-real-ip bool "false"
enable-real-ip: "true"

realip 的header头打开
use-forwarded-headers bool "false"
use-forwarded-headers: "true"

realip 的认证header头
forwarded-for-header string "X-Forwarded-For"
forwarded-for-header: ns_clientip

realip ip段
proxy-real-ip-cidr
proxy-real-ip-cidr: 192.168.0.0/16,10.244.0.0/16

将 remote address 附加到 X-Forwarded-For Header而不是替换它。
compute-full-forwarded-for bool "false"
compute-full-forwarded-for: "true"

全局ip封禁,优先于annotation或ingress规则
denylist-source-range []string []string{}
denylist-source-range: "223.2.4.0/24"

全局ip白名单,优先于denylist;如果设定,则只有名单内的ip能访问。k8s内部流量不经过ingress,所以内网ip可不添加。
可以和server的annotations配合再封禁其中某一段,如 nginx.ingress.kubernetes.io/denylist-source-range: 223.2.2.0/24
whitelist-source-range []string []string{}
whitelist-source-range: "127.0.0.1,192.168.244.1,223.0.0.0/8"

全局ip封禁,优先于server白名单
block-cidrs []string ""
封禁223.2.4.0/24,如有多个用,分割
block-cidrs: "223.2.4.0/24"
https://nginx.org/en/docs/http/ngx_http_access_module.html#deny

全局ua封禁
block-user-agents []string ""
封禁含有spider 的ua,不区分大小写
block-user-agents: "~*spider"

全局封禁referer
block-referers []string ""
block-referers: "~*chinahr.com"

查看运行状态的ip
nginx-status-ipv4-whitelist []string "127.0.0.1"
http://127.0.0.1/nginx_status/
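
上面这些全局项都写在 ingress-nginx-controller 这个 ConfigMap 的 data 段里,例如把封禁相关的几项合并在一起(取值仅为示例,沿用上文数值):

data:
  block-cidrs: "223.2.4.0/24"
  block-user-agents: "~*spider"
  block-referers: "~*chinahr.com"
  denylist-source-range: "223.2.4.0/24"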



k8s_安装8_UI_Dashboard

八、 dashboard

kubernetes/dashboard

Dashboard 是基于网页的 Kubernetes 用户界面。 你可以使用 Dashboard 将容器应用部署到 Kubernetes 集群中,也可以对容器应用排错,还能管理集群资源。 你可以使用 Dashboard 获取运行在集群中的应用的概览信息,也可以创建或者修改 Kubernetes 资源 (如 Deployment,Job,DaemonSet 等等)。 例如,你可以对 Deployment 实现弹性伸缩、发起滚动升级、重启 Pod 或者使用向导创建新的应用。

官方文档:https://kubernetes.io/zh-cn/docs/tasks/access-application-cluster/web-ui-dashboard/
开源地址:https://github.com/kubernetes/dashboard
版本兼容性确认:https://github.com/kubernetes/dashboard/releases

cert-manager

https://cert-manager.io/docs/installation/
wget --no-check-certificate https://github.com/cert-manager/cert-manager/releases/download/v1.13.1/cert-manager.yaml -O cert-manager-1.13.1.yaml
wget --no-check-certificate  https://github.com/cert-manager/cert-manager/releases/download/v1.13.2/cert-manager.yaml -O cert-manager-1.13.2.yaml

cat cert-manager-1.13.2.yaml|grep image:|sed -e 's/.*image: //'
"quay.io/jetstack/cert-manager-cainjector:v1.13.2"
"quay.io/jetstack/cert-manager-controller:v1.13.2"
"quay.io/jetstack/cert-manager-webhook:v1.13.2"

docker pull fishchen/cert-manager-controller:v1.13.2
docker pull quay.io/jetstack/cert-manager-webhook:v1.13.2
docker pull quay.io/jetstack/cert-manager-controller:v1.13.2
docker pull quay.nju.edu.cn/jetstack/cert-manager-controller:v1.13.2

quay.dockerproxy.com/
docker pull quay.dockerproxy.com/jetstack/cert-manager-controller:v1.13.1
docker pull quay.dockerproxy.com/jetstack/cert-manager-cainjector:v1.13.1
docker pull quay.dockerproxy.com/jetstack/cert-manager-webhook:v1.13.1

quay.io
docker pull quay.io/jetstack/cert-manager-cainjector:v1.13.1
docker pull quay.io/jetstack/cert-manager-controller:v1.13.1
docker pull quay.io/jetstack/cert-manager-webhook:v1.13.1

quay.nju.edu.cn
docker pull quay.nju.edu.cn/jetstack/cert-manager-cainjector:v1.13.1
docker pull quay.nju.edu.cn/jetstack/cert-manager-controller:v1.13.1
docker pull quay.nju.edu.cn/jetstack/cert-manager-webhook:v1.13.1

docker tag quay.dockerproxy.com/jetstack/cert-manager-cainjector:v1.13.1 repo.k8s.local/quay.io/jetstack/cert-manager-cainjector:v1.13.1
docker tag quay.nju.edu.cn/jetstack/cert-manager-webhook:v1.13.1  repo.k8s.local/quay.io/jetstack/cert-manager-webhook:v1.13.1
docker tag quay.io/jetstack/cert-manager-controller:v1.13.1  repo.k8s.local/quay.io/jetstack/cert-manager-controller:v1.13.1

docker push repo.k8s.local/quay.io/jetstack/cert-manager-cainjector:v1.13.1
docker push repo.k8s.local/quay.io/jetstack/cert-manager-webhook:v1.13.1
docker push repo.k8s.local/quay.io/jetstack/cert-manager-controller:v1.13.1
导入省略,可以参见harbor安装
docker pull ...
docker tag ...
docker push ...
docker images

准备yaml文件

cp cert-manager-1.13.1.yaml  cert-manager-1.13.1.org.yaml

sed -n 's/quay\.io/repo.k8s.local\/quay\.io/p'  cert-manager-1.13.1.yaml
sed -i 's/quay\.io/repo.k8s.local\/quay\.io/'  cert-manager-1.13.1.yaml
cat cert-manager-1.13.1.yaml|grep image:|sed -e 's/.*image: //'

kubectl apply -f cert-manager-1.13.1.yaml

customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
configmap/cert-manager created
configmap/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-cluster-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created

kubectl get pods --namespace cert-manager

NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-7f46fcb774-gfvjm              1/1     Running   0          14s
cert-manager-cainjector-55f76bd446-nxkrt   1/1     Running   0          14s
cert-manager-webhook-799cbdc68-4t9zw       1/1     Running   0          14s
准备yaml文件,并显示images地址

在控制节点
dashboard 3.0.0-alpha0 需要 cert-manager;用 service 或 nodeport 方式安装后无法显示页面(/api/v1/login/status 返回404),要用 ingress 方式访问。

wget  --no-check-certificate  https://raw.githubusercontent.com/kubernetes/dashboard/v3.0.0-alpha0/charts/kubernetes-dashboard.yaml -O kubernetes-dashboard.yaml
cat kubernetes-dashboard.yaml |grep image:|sed -e 's/.*image: //'

docker.io/kubernetesui/dashboard-api:v1.0.0
docker.io/kubernetesui/dashboard-web:v1.0.0
docker.io/kubernetesui/metrics-scraper:v1.0.9
提取image名称,并导入harbor
cat kubernetes-dashboard.yaml |grep image:|awk -F'/' '{print $NF}'
dashboard-api:v1.0.0
dashboard-web:v1.0.0
metrics-scraper:v1.0.9

#导入省略,可以参见harbor安装
docker pull ...
docker tag ...
docker push ...
docker images
导入harbor私仓后,替换docker.io为私仓repo.k8s.local 地址

如果拉取缓慢,Pulling fs layer,没有私仓可以用阿里的
registry.aliyuncs.com/google_containers/

cp kubernetes-dashboard.yaml kubernetes-dashboard.org.yaml
sed -n 's/docker\.io\/kubernetesui/repo.k8s.local\/google_containers/p'  kubernetes-dashboard.yaml
sed -i 's/docker\.io\/kubernetesui/repo.k8s.local\/google_containers/'  kubernetes-dashboard.yaml
cat  kubernetes-dashboard.yaml|grep -C2 image:
开始安装kubernetes-dashboard

kubectl apply -f kubernetes-dashboard.yaml

namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
service/kubernetes-dashboard-web created
service/kubernetes-dashboard-api created
service/kubernetes-dashboard-metrics-scraper created
ingress.networking.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard-api created
deployment.apps/kubernetes-dashboard-web created
deployment.apps/kubernetes-dashboard-metrics-scraper created
error: resource mapping not found for name: "selfsigned" namespace: "kubernetes-dashboard" from "kubernetes-dashboard.yaml": no matches for kind "Issuer" in version "cert-manager.io/v1"
ensure CRDs are installed first
查看状态
kubectl get pod -n kubernetes-dashboard -o wide
NAME                                                    READY   STATUS             RESTARTS   AGE
kubernetes-dashboard-api-6bfd48fcf6-njg9s               0/1     ImagePullBackOff   0          12m
kubernetes-dashboard-metrics-scraper-7d8c76dc88-6rn2w   0/1     ImagePullBackOff   0          12m
kubernetes-dashboard-web-7776cdb89f-jdwqt               0/1     ImagePullBackOff   0          12m
没有查到日志
kubectl logs kubernetes-dashboard-api-6bfd48fcf6-njg9s
Error from server (NotFound): pods "kubernetes-dashboard-api-6bfd48fcf6-njg9s" not found
指定namespace查看日志
kubectl describe pods/kubernetes-dashboard-api-6bfd48fcf6-njg9s -n kubernetes-dashboard
Name:             kubernetes-dashboard-api-6bfd48fcf6-njg9s
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             node01.k8s.local/192.168.244.5
Start Time:       Tue, 17 Oct 2023 13:24:23 +0800
Labels:           app.kubernetes.io/component=api
                  app.kubernetes.io/name=kubernetes-dashboard-api
                  app.kubernetes.io/part-of=kubernetes-dashboard
                  app.kubernetes.io/version=v1.0.0
                  pod-template-hash=6bfd48fcf6

Normal   Scheduled  39m                    default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-api-6bfd48fcf6-njg9s to node01.k8s.local
  Normal   Pulling    38m (x4 over 39m)      kubelet            Pulling image "repo.k8s.local/kubernetesui/dashboard-api:v1.0.0"
  Warning  Failed     38m (x4 over 39m)      kubelet            Failed to pull image "repo.k8s.local/kubernetesui/dashboard-api:v1.0.0": failed to pull and unpack image "repo.k8s.local/kubernetesui/dashboard-api:v1.0.0": failed to resolve reference "repo.k8s.local/kubernetesui/dashboard-api:v1.0.0": unexpected status from HEAD request to https://repo.k8s.local/v2/kubernetesui/dashboard-api/manifests/v1.0.0: 401 Unauthorized
  Warning  Failed     38m (x4 over 39m)      kubelet            Error: ErrImagePull
  Warning  Failed     37m (x6 over 39m)      kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m29s (x150 over 39m)  kubelet            Back-off pulling image "repo.k8s.local/kubernetesui/dashboard-api:v1.0.0"
修正后重新安装

repo.k8s.local/kubernetesui/dashboard-api:v1.0.0 应为 repo.k8s.local/google_containers/dashboard-api:v1.0.0
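
可以用和前面一致的 sed 方式批量修正镜像前缀(示例命令,私仓项目名按实际情况调整):

sed -n 's/repo.k8s.local\/kubernetesui/repo.k8s.local\/google_containers/p' kubernetes-dashboard.yaml
sed -i 's/repo.k8s.local\/kubernetesui/repo.k8s.local\/google_containers/' kubernetes-dashboard.yaml
cat kubernetes-dashboard.yaml|grep image: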

kubectl delete -f kubernetes-dashboard.yaml
kubectl apply -f kubernetes-dashboard.yaml

运行正常

kubectl get pod -n kubernetes-dashboard  -o wide
NAME                                                    READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-api-5fcfcfd7b-nlrnh                1/1     Running   0          15s
kubernetes-dashboard-metrics-scraper-585685f868-f7g5j   1/1     Running   0          15s
kubernetes-dashboard-web-57bd66fd9f-hbc62               1/1     Running   0          15s

kubectl describe pods/kubernetes-dashboard-api-5fcfcfd7b-nlrnh -n kubernetes-dashboard

查看Service暴露端口,我们使用这个端口进行访问:
kubectl get svc -n kubernetes-dashboard
NAME                                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
kubernetes-dashboard-api               ClusterIP   10.96.175.209   <none>        9000/TCP   14s
kubernetes-dashboard-metrics-scraper   ClusterIP   10.96.69.44     <none>        8000/TCP   14s
kubernetes-dashboard-web               ClusterIP   10.96.49.99     <none>        8000/TCP   14s
ClusterIP先行测试
curl http://10.96.175.209:9000/api/v1/login/status
{
 "tokenPresent": false,
 "headerPresent": false,
 "httpsMode": true,
 "impersonationPresent": false,
 "impersonatedUser": ""
}
curl http://10.96.49.99:8000/
<!--
Copyright 2017 The Kubernetes Authors.
创建kubernetes-dashboard管理员角色

默认账号kubernetes-dashboard权限过小

cat > dashboard-svc-account.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1      #需要修改的地方
metadata:
  name: dashboard-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kubernetes-dashboard
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF

kubectl apply -f dashboard-svc-account.yaml 
serviceaccount/dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
Kubernetes服务帐户没有token?

kubectl get secret -n kubernetes-dashboard|grep admin|awk '{print $1}'
之前secret中查token已不适用

之前的版本在创建serviceAccount之后会自动生成secret,可以通过kubectl get secret -n kube-system命令查看,现在需要多执行一步:
自 Kubernetes 1.24 起,默认不再为 ServiceAccount 自动生成 token Secret,需要手动运行命令生成 token,这种方式创建的 token 是临时的

kubectl create token dashboard-admin --namespace kubernetes-dashboard
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik9EWUpmSzcyLUdzRlJnQWNhdHpOYWhNX0E4RDZ6Zl9id0JMcXZyMng5bkUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzAwNTM4ODQzLCJpYXQiOjE3MDA1MzUyNDMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkYXNoYm9hcmQtYWRtaW4iLCJ1aWQiOiI3ZmUwYjFiZi05ZDhlLTRjOGItYWEzMy0xZWU3ZDU2YjE2NjUifX0sIm5iZiI6MTcwMDUzNTI0Mywic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.cUGX77qAdY7Mqo3tPbWgcCLD2zmRoNUSlFG1EHlCRBwiA7ffL1PGbOHazE6eTmLrRo5if6nm9ILAK1Mv4Co2woOEW8qIJBVXClpZomvkj7BC2bGd-0X5W1s87CnEX7RnKcBqFVcP6zJY_ycLy1o9X9g4Y1wtMm8mptBgos5xmVAb8HecTgOWHt80W736K3WSB9ovuoAGVZe7-ahQ7DX8WJ_qYqbEE5v9laqYBIddcoJtfAYf8U8yaW-MQsJq46xp_sxU164WDozw_sSe4PIxHHqaG4tulJy3J2fn6D_0xbC8fupX3l8FPLcPQm1rWMFGPjsLhU8i_0ihnvyEmvsA6w

#默认账号
kubectl create token kubernetes-dashboard --namespace kubernetes-dashboard
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik9EWUpmSzcyLUdzRlJnQWNhdHpOYWhNX0E4RDZ6Zl9id0JMcXZyMng5bkUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzAwNTM5MzgwLCJpYXQiOjE3MDA1MzU3ODAsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInVpZCI6ImU2NWUzODRhLTI5ZDYtNGYwYy04OGI0LWJlZWVkYmRhODMxNiJ9fSwibmJmIjoxNzAwNTM1NzgwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQifQ.rpE8aVSVWGcydSJy_QcCg6LjdxPvE2M45AspWqC-u406HSznOby1cTvpa9c7scQ7KooyrjSdlzW-1JVd4U6aFSt8sKQLmLXSTUoGi7ACkI105wTGUU4WQmB5CaPynPC68hhrNPTrEXvM4fichCDykp2hWaVCKOwSQPU-cMsCrIeg-Jqeikckdbpfr7m5XDW8_ydb-_X49hwDVqJeA8eJ5Qn-qlkts8Lj3m3rWjVTKlVeMJARR6LCbZUFZ3uwmOFyUzIX0UDKUHGktt5-k33LbLMMpvKKRzhwfu9o5WSTQdvFux1EpVskYxtjpsyKW_PEwcz6UzxvaLwToxV4uDq5_w

# 可以加上 --duration 参数设置时间 kubectl create token account -h查看具体命令
kubectl create token dashboard-admin --namespace kubernetes-dashboard --duration 10h
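
如果不想频繁生成临时 token,也可以给 ServiceAccount 手动创建一个长期有效的 token Secret(示意做法,基于 kubernetes.io/service-account-token 类型的 Secret,名称为假设):

cat > dashboard-admin-secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: dashboard-admin-token
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: dashboard-admin
type: kubernetes.io/service-account-token
EOF
kubectl apply -f dashboard-admin-secret.yaml
#读取token
kubectl -n kubernetes-dashboard get secret dashboard-admin-token -o jsonpath='{.data.token}' | base64 -d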
dashboard暴露方式

  • kube proxy 代理 service:无法访问 api,会显示白板
  • service NodePort 类型:无法访问 api,会显示白板
  • 部署 ingress:可以正常访问

需要通过ingress,才能正常显示页面
修改kind: Ingress,将localhost去掉
kubectl edit Ingress kubernetes-dashboard -n kubernetes-dashboard

    #- host: localhost
     - http:

使用ingress的端口访口

curl http://127.0.0.1:30180/

curl http://127.0.0.1:30180/api/v1/login/status
{
 "tokenPresent": false,
 "headerPresent": false,
 "httpsMode": true,
 "impersonationPresent": false,
 "impersonatedUser": ""
}

kubectl delete Ingress kubernetes-dashboard -n kubernetes-dashboard
kubectl apply -f kubernetes-dashboard.yaml

Kubernetes Dashboard 认证时间延长

默认的Token失效时间是900秒,也就是15分钟,这意味着你每隔15分钟就要认证一次。
改成12小时:--token-ttl=43200
kubectl edit deployment kubernetes-dashboard-api -n kubernetes-dashboard

          args:
            - --enable-insecure-login
            - --namespace=kubernetes-dashboard
            - --token-ttl=43200
kubectl get pod -n kubernetes-dashboard  -o wide
NAME                                                    READY   STATUS    RESTARTS   AGE   IP             NODE               NOMINATED NODE   READINESS GATES
kubernetes-dashboard-api-55cf847b6b-7sctx               1/1     Running   0          20h   10.244.2.251   node02.k8s.local   <none>           <none>
kubernetes-dashboard-metrics-scraper-585685f868-hqgpc   1/1     Running   0          40h   10.244.1.254   node01.k8s.local   <none>           <none>
kubernetes-dashboard-web-57bd66fd9f-pghct               1/1     Running   0          40h   10.244.1.253   node01.k8s.local   <none>           <none>

kubectl delete pod kubernetes-dashboard-api-55cf847b6b-7sctx -n kubernetes-dashboard

使用域名

本机host中添加域名
dashboard.k8s.local

kubectl edit Ingress kubernetes-dashboard -n kubernetes-dashboard

  rules:
    #- host: 127.0.0.1 #Invalid value: "127.0.0.1": must be a DNS name, not an IP address
    #- host: localhost
     - host: dashboard.k8s.local
       http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard-web
                port:
                  #number: 8000
                  name: web

curl -k -H "Host:dashboard.k8s.local" http://10.96.49.99:8000/

    - host: dashboard.k8s.local
      http:

kubectl apply -f kubernetes-dashboard.yaml

metrics-server 安装

metrics-server 采集node 和pod 的cpu/mem,数据存在容器本地,不做持久化。这些数据的使用场景有 kubectl top 和scheduler 调度、hpa 弹性伸缩,以及原生的dashboard 监控数据展示。
metrics-server 和prometheus 没有半毛钱关系。 也没有任何数据或者接口互相依赖关系。

Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)

https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server

wget --no-check-certificate   https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.3/components.yaml -O metrics-server-0.6.3.yaml

在deploy中,spec.template.containers.args字段中加上--kubelet-insecure-tls选项,表示不验证客户端证书
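
参考片段如下(节选自 Deployment 的 args,最后一行为新增项,其余参数以实际下载的 components.yaml 为准):

    spec:
      containers:
      - name: metrics-server
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls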

cat metrics-server-0.6.3.yaml|grep image:
image: registry.k8s.io/metrics-server/metrics-server:v0.6.3

docker pull registry.aliyuncs.com/google_containers/metrics-server:v0.6.3
docker tag registry.aliyuncs.com/google_containers/metrics-server:v0.6.3  repo.k8s.local/registry.k8s.io/metrics-server/metrics-server:v0.6.3
docker push repo.k8s.local/registry.k8s.io/metrics-server/metrics-server:v0.6.3

sed -n "/image:/{s/image: /image: repo.k8s.local\//p}" metrics-server-0.6.3.yaml
sed -i "/image:/{s/image: /image: repo.k8s.local\//}" metrics-server-0.6.3.yaml

kubectl top nodes

kubectl apply -f metrics-server-0.6.3.yaml

kubectl get pods -n=kube-system |grep metrics
metrics-server-8fc7dd595-n2s6b               1/1     Running   6 (9d ago)      16d

kubectl api-versions|grep metrics
metrics.k8s.io/v1beta1

#top会比dashboard中看到的要高
kubectl top pods
kubectl top nodes
NAME                 CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master01.k8s.local   145m         3%     1490Mi          38%       
node01.k8s.local     54m          2%     1770Mi          46%       
node02.k8s.local     63m          3%     2477Mi          64%       

kubectl -n kube-system describe pod metrics-server-8fc7dd595-n2s6b 
kubectl logs metrics-server-8fc7dd595-n2s6b -n kube-system

kubectl describe  pod kube-apiserver-master01.k8s.local -n kube-system
  Type     Reason     Age                   From     Message
  ----     ------     ----                  ----     -------
  Warning  Unhealthy  35m (x321 over 10d)   kubelet  Liveness probe failed: HTTP probe failed with statuscode: 500
  Warning  Unhealthy  34m (x1378 over 10d)  kubelet  Readiness probe failed: HTTP probe failed with statuscode: 500
error: Metrics API not available

重新执行yaml
kubectl top pods
kubectl apply -f metrics-server-0.6.3.yaml

kubectl -n kube-system describe pod metrics-server-8fc7dd595-lz5kz



k8s_安装7_存储

七、存储

持久化存储pv,pvc
k8s存储支持多种模式:

  • 本地存储:hostPath/emptyDir
  • 传统网络存储:iscsi/nfs
  • 分布式网络存储:glusterfs/rbd/cephfs等
  • 云存储:AWS,EBS等
  • k8s资源: configmap,secret等

emptyDir 数据卷

临时数据卷,与pod生命周期绑定在一起,pod删除了,数据卷也会被删除。
作用:数据随着pod的生命周期而存在;删除容器不代表pod被删除,数据不受影响,但pod删除后数据也随之删除
不同pod之间不能共享数据,同一pod中的容器可以共享数据

spec:
  containers:
  - name: nginx1
    image: nginx
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  - name: nginx2
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /data/
    command: ['/bin/bash','-c','while true;do echo $(date) >> /data/index.html;sleep 10;done']
  volumes:
  - name: html
    emptyDir: {}

hostPath 数据卷

实现同一个node节点的多个pod实现数据共享
也可以设置type字段,支持的类型有File、FileOrCreate、 Directory、DirectoryOrCreate、Socket、CharDevice和BlockDevice。

spec:
  containers:
  - name: nginx1
    image: nginx
    ports:
    - name: http
      containerPort: 80
    volumeMounts:        
    - name: html    ##使用的存储卷名称,如果跟下面volume字段name值相同,则表示使用volume的这个存储卷
      mountPath: /usr/share/nginx/html/   ##挂载至容器中哪个目录
      readOnly: false     #读写挂载方式,默认为读写模式false
  volumes: 
  - name: html     #存储卷名称
    hostPath:      
      path: /data/pod/volume   #在宿主机上目录的路径
      type: DirectoryOrCreate  #定义类型,这表示如果宿主机没有此目录则会自动创建

NFS数据卷

不同的node节点的pod实现数据共享,但是存在单点故障

spec:
  containers:
  - name: nginx1
    image: nginx
    ports:
    - name: http
      containerPort: 80
    volumeMounts:        
    - name: html    ##使用的存储卷名称,如果跟下面volume字段name值相同,则表示使用volume的这个存储卷
      mountPath: /usr/share/nginx/html/   ##挂载至容器中哪个目录
      readOnly: false     #读写挂载方式,默认为读写模式false
  volumes: 
  - name: html     #存储卷名称
    nfs:      
      path: /nfs/k8s/data/volume   #在宿主机上目录的路径
      server: 192.168.244.6

NFS

NFS安装

k8s集群所有节点都需要安装NFS服务。本章节实验我们选用k8s的harbor节点作为NFS服务的server端.

nfs服务提供机安装

yum install -y nfs-utils rpcbind

创建nfs共享目录
mkdir -p /nfs/k8s/{yml,data,cfg,log,web}

#pv
mkdir -p /nfs/k8s/{spv_r1,spv_w1,spv_w2,dpv}  
mkdir -p /nfs/k8s/{spv_001,spv_002,spv_003,dpv}

#修改权限
chmod -R 777 /nfs/k8s
开始共享
#编辑export文件
/nfs/k8s:是共享的数据目录
*:               #表示任何人都有权限连接,当然也可以是一个网段,一个 IP,也可以是域名
rw:              #读写的权限
sync              #表示文件同时写入硬盘和内存
async             # 非同步模式,也就是每隔一段时间才会把内存的数据写入硬盘,能保证磁盘效率,但当异常宕机/断电时,会丢失内存里的数据
no_root_squash    # 当登录 NFS 主机使用共享目录的使用者是 root 时,其权限将被转换成为匿名使用者,通常它的 UID 与 GID,都会变成 nobody 身份
root_squash       # 跟no_root_squash相反,客户端上的root用户受到这些挂载选项的限制,被当成普通用户
all_squash        # 客户端上的所有用户在使用NFS共享目录时都被限定为一个普通用户
anonuid           # 上面的几个squash用于把客户端的用户限定为普通用户,而anonuid用于限定这个普通用户的uid,这个uid与服务端的/etc/passwd文件相对应,如:anonuid=1000 
                  # 比如我客户端用xiaoming这个用户去创建文件,那么服务端同步这个文件的时候,文件的属主会变成服务端的uid(1000)所对应的用户
anongid           # 同上,用于限定这个普通用户的gid

vi /etc/exports
/nfs/k8s/yml *(rw,no_root_squash,sync)  
/nfs/k8s/data *(rw,no_root_squash,sync)  
/nfs/k8s/cfg *(rw,no_root_squash,sync)  
/nfs/k8s/log *(rw,no_root_squash,sync)  
/nfs/k8s/web *(rw,no_root_squash,sync)  
/nfs/k8s/spv_001 *(rw,no_root_squash,sync)  
/nfs/k8s/spv_002 *(rw,no_root_squash,sync)  
/nfs/k8s/spv_003 *(rw,no_root_squash,sync)  
/nfs/k8s/dpv *(rw,no_root_squash,sync)  

#配置生效
exportfs -r
启动服务
#启动顺序,先启动rpc,再启动nfs
systemctl enable rpcbind
systemctl start rpcbind
systemctl status rpcbind
systemctl restart rpcbind

systemctl enable nfs
systemctl start nfs
systemctl status nfs
systemctl restart nfs

#查看相关信息
rpcinfo -p|grep nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049  nfs_acl

#查看
cat /var/lib/nfs/etab 
/nfs/k8s/log    *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)
/nfs/k8s/cfg    *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)
/nfs/k8s/data   *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)
/nfs/k8s/yml    *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)

#查看
showmount -e 
Export list for 192.168.244.6:
Export list for repo.k8s.local:
/nfs/k8s/log  *
/nfs/k8s/cfg  *
/nfs/k8s/data *
/nfs/k8s/yml  *
客户端操作
安装服务

yum -y install nfs-utils

查看

showmount -e 192.168.244.6
Export list for 192.168.244.6:
/nfs/k8s/log  *
/nfs/k8s/cfg  *
/nfs/k8s/data *
/nfs/k8s/yml  *
/nfs/k8s/web  *

工作节点创建挂载点

mkdir -p /data/k8s/{yml,data,cfg,log,web}

systemctl enable rpcbind
systemctl start rpcbind
systemctl status rpcbind

手动挂载
mount 192.168.244.6:/nfs/k8s/yml /data/k8s/yml
mount 192.168.244.6:/nfs/k8s/data /data/k8s/data
mount 192.168.244.6:/nfs/k8s/cfg /data/k8s/cfg
mount 192.168.244.6:/nfs/k8s/log /data/k8s/log

df -Th | grep /data/k8s/
192.168.244.6:/nfs/k8s/data nfs4       26G  8.9G   18G  35% /data/k8s/data
192.168.244.6:/nfs/k8s/cfg  nfs4       26G  8.9G   18G  35% /data/k8s/cfg
192.168.244.6:/nfs/k8s/log  nfs4       26G  8.9G   18G  35% /data/k8s/log
192.168.244.6:/nfs/k8s/yml  nfs4       26G  8.9G   18G  35% /data/k8s/yml

touch /data/k8s/cfg/test

#加入启动
# 追加插入以下内容
cat >> /etc/fstab <<EOF
192.168.244.6:/nfs/k8s/yml /data/k8s/yml nfs rw,rsize=8192,wsize=8192,soft,intr 0 0
192.168.244.6:/nfs/k8s/data /data/k8s/data nfs rw,rsize=8192,wsize=8192,soft,intr 0 0
192.168.244.6:/nfs/k8s/cfg /data/k8s/cfg nfs rw,rsize=8192,wsize=8192,soft,intr 0 0
192.168.244.6:/nfs/k8s/log /data/k8s/log nfs rw,rsize=8192,wsize=8192,soft,intr 0 0
EOF
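
写入 fstab 后可以先验证挂载配置再重启(示例命令):

mount -a
df -Th | grep /data/k8s/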

PV与PVC的简介

PV(PersistentVolume)是持久化卷的意思,是对底层的共享存储的一种抽象,管理员已经提供好的一块存储。在k8s集群中,PV像Node一样,是一个资源。
PV类型:NFS iSCSI CephFS Glusterfs HostPath AzureDisk 等等
PVC(PersistentVolumeClaim)是持久化卷声明的意思,是用户对于存储需求的一种声明。PVC对于PV就像Pod对于Node一样,Pod可以申请CPU和Memory资源,而PVC也可以申请PV的大小与权限。
有了PersistentVolumeClaim,用户只需要告诉Kubernetes需要什么样的存储资源,而不必关心真正的空间从哪里分配,如何访问等底层细节信息。这些Storage Provider的底层信息交给管理员来处理,只有管理员才应该关心创建PersistentVolume的细节信息

PVC和PV是一一对应的。
PV和StorageClass不受限于Namespace,PVC受限于Namespace,Pod在引用PVC时同样受Namespace的限制,只有相同Namespace中的PVC才能挂载到Pod内。
虽然 PVC 让用户可以申请抽象的存储资源,但用户往往还需要具有不同属性(例如性能)的 PV。集群管理员需要能够提供各种不同的 PersistentVolume,这些差异不只体现在大小和访问模式上,同时又不希望向用户暴露这些卷的实现细节。为了满足这类需求,就有了 StorageClass 资源。
StorageClass 为管理员提供了一种描述所提供存储"类"的方法。不同的类可能对应不同的服务质量级别、备份策略,或由集群管理员确定的任意策略。Kubernetes 本身并不限定这些类代表什么含义,这个概念在其他存储系统中有时被称为"配置文件(profile)"。
一个PVC只能绑定一个PV,一个PV只能对应一种后端存储,一个Pod可以使用多个PVC,一个PVC也可以给多个Pod使用

生命周期

Provisioning -> Binding -> Using -> Releasing -> Recycling

供应准备Provisioning
静态供给 Static Provision

集群管理员通过手动方式先创建好应用所需的不同大小、不同读写模式的 PV,创建 PVC 时会匹配相近的 PV。
创建 PVC 时会根据需求属性(大小、只读等)匹配最合适的 PV,匹配结果有一定随机性,可以定义回收时清除 PV 中的数据。
如申请一个 6G 的存储,不会匹配 5G 的 PV,只会匹配比 6G 大的 PV,如 10G 的 PV。

如何将PVC绑定到特定的PV上?给PV打上一个label,然后让PVC去匹配这个label即可,
方式一,在 PVC 的 YAML 文件中指定 spec.volumeName 字段
方式二,spec.selector.matchLabels.pv
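
方式一的一个最小草图如下(PV 名沿用下文静态 PV 示例中的 nfs-spv001,仅作示意):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-byname
spec:
  volumeName: nfs-spv001
  storageClassName: nfs
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi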

动态供给 Dynamic Provision

当管理员创建的静态 PV 都不匹配用户的 PVC 时,集群会尝试为 PVC 动态供给卷,实现存储卷的按需创建,不需要提前创建 PV,此机制基于 StorageClass。
用户创建 PVC 需求后,由供给组件(provisioner)动态创建 PV。PVC 必须请求一个存储类,并且管理员必须已创建并配置该类才能进行动态供给;如果 PVC 把存储类声明为 "",则相当于为自己禁用动态供给。

Binding 绑定

用户创建pvc并指定需要的资源和访问模式。在找到可用pv之前,pvc会保持未绑定状态。

Using 使用

用户可在pod中像volume一样使用pvc。

Releasing 释放

用户删除pvc来回收存储资源,pv将变成“released”状态。由于还保留着之前的数据,这些数据需要根据不同的策略来处理,否则这些存储资源无法被其他pvc使用。

回收Recycling

pv可以设置三种回收策略:保留(Retain),回收(Recycle)和删除(Delete)。

  • 保留策略Retain :允许人工处理保留的数据。
  • 删除策略Delete :将删除pv和外部关联的存储资源,需要插件支持。
  • 回收策略Recycle :将执行清除操作,之后可以被新的pvc使用,需要插件支持。

PVC 只能和 Available 状态的 PV 进行绑定。
PV卷阶段状态

Available – 资源尚未被pvc使用
Bound – 卷已经被绑定到pvc了
Released – pvc被删除,卷处于释放状态,但未被集群回收。
Failed – 卷自动回收失败

StorageClass运行原理及部署流程

  • 1.自动创建的 PV 以${namespace}-${pvcName}-${pvName}这样的命名格式创建在 NFS 服务器上的共享数据目录中

  • 2.而当这个 PV 被回收后会以archived-${namespace}-${pvcName}-${pvName}这样的命名格式存在 NFS 服务器上
    StorageClass一旦被创建,就无法修改,如需修改,只能删除重建。

配置说明:

capacity 指定 PV 的容量为 1G。
accessModes 指定访问模式为 ReadWriteOnce,支持的访问模式有:

    ReadWriteOnce – PV 能以 read-write 模式 mount 到单个节点。
    ReadOnlyMany – PV 能以 read-only 模式 mount 到多个节点。
    ReadWriteMany – PV 能以 read-write 模式 mount 到多个节点。

 persistentVolumeReclaimPolicy 指定当 PV 的回收策略为 Recycle,支持的策略有:

    Retain – 需要管理员手工回收。
    Recycle – 清除 PV 中的数据,效果相当于执行 rm -rf /thevolume/*。目前只支持NFS和hostPath支持此操作
    Delete – 删除 Storage Provider 上的对应存储资源,仅部分支持云端存储系统支持,例如 AWS EBS、GCE PD、Azure
    Disk、OpenStack Cinder Volume 等。

storageClassName 指定 PV 的 class 为 nfs。相当于为 PV 设置了一个分类,PVC 可以指定 class 申请相应 class 的 PV。

静态创建PV卷

# 注意命名规则和域名一样,只允许小写字母、数字、-、.
# 1g大小,多节点只读,保留数据
# cat nfs-pv001.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-spv001
  labels:
    pv: nfs-spv001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs/k8s/spv_001
    server: 192.168.244.6

# 2g大小,单节点读写,清除数据
# cat nfs-pv002.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-spv002
  labels:
    pv: nfs-spv002
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfs/k8s/spv_002
    server: 192.168.244.6

# 3g大小,多节点读写,保留数据
pv 上可以设置 nfs 挂载参数(mountOptions),pod 中不可以
# cat nfs-pv003.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-spv003
  labels:
    pv: nfs-spv003
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  mountOptions:
    - hard
    - intr
    - timeo=60
    - retrans=2
    - noresvport
    - nfsvers=4.1
  nfs:
    path: /nfs/k8s/spv_003
    server: 192.168.244.6
pv生效
kubectl apply -f nfs-pv001.yaml
persistentvolume/nfs-spv001 created
kubectl apply -f nfs-pv002.yaml
persistentvolume/nfs-spv002 created
kubectl apply -f nfs-pv003.yaml
persistentvolume/nfs-spv003 created

kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-spv001r   1Gi        ROX            Retain           Available           nfs                     36s
nfs-spv002    2Gi        RWO            Recycle          Available           nfs                     21s
nfs-spv003    3Gi        RWX            Retain           Available           nfs                     18s
删pv

kubectl delete -f nfs-pv001.yaml

回收pv
# 当pv的回收策略为Recycle时,删除pvc后,pv 的 STATUS 为 Released/Failed,删除claimRef段后恢复为 Available
kubectl edit pv nfs-spv002
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: nfs-pvc002
    namespace: default
    resourceVersion: "835243"
    uid: 3bd83223-fd84-4b53-a0db-1a5f62e433fa

kubectl patch pv nfs-spv002 --patch '{"spec": {"claimRef":null}}'

创建 PVC

### 1G大小,只读,指定pv
# cat nfs-pvc1.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc001
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nfs-spv001

# 指定pv,注意pvc申请的空间比pv小,实际容量以匹配到的pv为准
# cat nfs-pvc2.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc002
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nfs-spv002

# 指定pv,注意pvc空间比pv小
# cat nfs-pvc3.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc003
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nfs-spv003
生效pvc
kubectl apply -f nfs-pvc1.yaml
persistentvolumeclaim/nfs-pvc001 created
kubectl apply -f nfs-pvc2.yaml
persistentvolumeclaim/nfs-pvc002 created
kubectl apply -f nfs-pvc3.yaml
persistentvolumeclaim/nfs-pvc003 created

kubectl get pvc
NAME         STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc001   Bound    nfs-spv001   1Gi        ROX            nfs            4s
nfs-pvc002   Bound    nfs-spv002   2Gi        RWO            nfs            5m3s
nfs-pvc003   Bound    nfs-spv003   3Gi        RWX            nfs            5m1s
删除pvc

kubectl delete -f nfs-pvc1.yaml

如果删 pvc 前先删了 pv,pv 的 STATUS 会变为 Terminating,pvc 看不出异常;删除 pvc 后 pv 才会被真正删除。
在没有 pv 之前创建 pvc,pvc 的 STATUS 为 Pending;创建出匹配的 pv 后,pvc 恢复正常。
删除回收策略为 Recycle 的 pvc 后,在回收完成前无法再使用相关 pv。

k8s进行pvc扩容

确认 pv 的回收策略,十分重要!!!回收策略必须为 Retain 而不能是 Delete,不然解除绑定后 PV 就被删了;确认后再修改 pv 大小。

  • 1.根据pod找到绑定的pvc信息
    kubectl edit pods -n namespace podName
    在spec.volumes.persistentVolumeClaim.claimName可以找到我们需要的pvc名称

  • 2.根据PVC找到绑定的PV
    kubectl edit pvc -n namespace pvcName
    在spec.volumeName可以找到我们需要的pv名称

  • 3.确认pv信息,修改回收策略为Retain,修改pv大小,修改labels信息
    kubectl edit pv -n namespace nfs-spv001

修改spec.persistentVolumeReclaimPolicy为Retain
修改spec.capacity.storage的大小例如30Gi

或用patch
kubectl patch pv nfs-spv001 --type merge --patch '{"spec": {"capacity": {"storage": "1.3Gi"},"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv nfs-spv001

注意:静态创建的 pvc 不能动态扩容。
动态 pvc 能否扩容取决于 storageclass 是否带有 allowVolumeExpansion 字段;只能扩容不能缩容,扩容后需重启服务生效。

k8s里面pod使用pv的大小跟pv的容量没有直接关系,而是跟挂载的文件系统大小有关系,pv的容量可以理解为给pv预留容量,当文件系统的可用容量<PV预留容量 时,pv创建失败,Pod也会Pending。
pv的实际使用量主要看挂载的文件系统大小,会出现nfs 存储类pv使用容量超出pv预留容量的情况。

基于NFS动态创建PV、PVC

https://kubernetes.io/docs/concepts/storage/storage-classes/
因为NFS不支持动态存储,所以我们需要借用这个存储插件nfs-client
https://github.com/kubernetes-retired/external-storage/tree/master/nfs-client/deploy

搭建StorageClass+NFS,大致有以下几个步骤:

  • 1).创建一个可用的NFS Server
  • 2).创建Service Account.这是用来管控NFS provisioner在k8s集群中运行的权限
  • 3).创建StorageClass.负责建立PVC并调用NFS provisioner进行预定的工作,并让PV与PVC建立管理
  • 4).创建NFS provisioner.有两个功能,一个是在NFS共享目录下创建挂载点(volume),另一个则是创建PV并将PV与NFS的挂载点建立关联

nfs-provisioner-rbac.yaml 集群角色,普通角色,sa用户

wget https://github.com/kubernetes-retired/external-storage/raw/master/nfs-client/deploy/rbac.yaml -O nfs-provisioner-rbac.yaml
如果之前部署过,请先修改 namespace: default

# Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed
$ NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
$ NAMESPACE=${NS:-default}
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
$ kubectl create -f deploy/rbac.yaml
cat nfs-provisioner-rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

StorageClass


cat > nfs-StorageClass.yaml << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'  #这里的名称要和provisioner配置文件中的环境变量PROVISIONER_NAME保持一致
parameters:
  archiveOnDelete: "false"
  type: nfs
reclaimPolicy: Retain
allowVolumeExpansion: true     #允许扩容
mountOptions:
  - hard
  - intr
  - timeo=60
  - retrans=2
  - noresvport
  - nfsvers=4.1
volumeBindingMode: Immediate     #pv和pvc绑定
EOF

请确认reclaimPolicy 是否为 Retain
请确认reclaimPolicy 是否为 Retain
请确认reclaimPolicy 是否为 Retain
重要事情三遍

在启用动态供应模式的情况下,一旦用户删除了PVC,与之绑定的PV也将根据其默认的回收策略“Delete”被删除。如果需要保留PV(用户数据),则在动态绑定成功后,用户需要将系统自动生成PV的回收策略从“Delete”改为“Retain”(保留)。
kubectl edit pv -n default nfs-spv001

使用如下命令可以修改pv回收策略:
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'   # <pv-name> 为要修改的 PV 名称

provisioner 提供者 deployment.yaml

wget https://github.com/kubernetes-retired/external-storage/raw/master/nfs-client/deploy/deployment.yaml -O nfs-provisioner-deployment.yaml
namespace: default #与RBAC文件中的namespace保持一致
修改的参数包括NFS服务器所在的IP地址,共享的路径

cat nfs-provisioner-deployment.yaml


apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.244.6
            - name: NFS_PATH
              value: /nfs/k8s/dpv
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.244.6
            path: /nfs/k8s/dpv

准备镜像

cat nfs-provisioner-deployment.yaml  |grep image:|sed -e 's/.*image: //'
quay.io/external_storage/nfs-client-provisioner:latest

docker pull quay.io/external_storage/nfs-client-provisioner:latest
docker tag quay.io/external_storage/nfs-client-provisioner:latest repo.k8s.local/google_containers/nfs-client-provisioner:latest
docker push repo.k8s.local/google_containers/nfs-client-provisioner:latest
docker rmi quay.io/external_storage/nfs-client-provisioner:latest

替换镜像地址

cp nfs-provisioner-deployment.yaml nfs-provisioner-deployment.org.yaml
sed -n '/image:/{s/quay.io\/external_storage/repo.k8s.local\/google_containers/p}' nfs-provisioner-deployment.yaml
sed -i '/image:/{s/quay.io\/external_storage/repo.k8s.local\/google_containers/}' nfs-provisioner-deployment.yaml
cat nfs-provisioner-deployment.yaml  |grep image:

执行

kubectl apply -f nfs-provisioner-rbac.yaml 
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

#kubectl delete -f nfs-StorageClass.yaml
kubectl apply -f nfs-StorageClass.yaml
storageclass.storage.k8s.io/managed-nfs-storage created

kubectl apply -f nfs-provisioner-deployment.yaml 
deployment.apps/nfs-client-provisioner created

#查看sa
kubectl get sa

#查看storageclass
kubectl get sc

#查看deploy控制器
kubectl get deploy

#查看pod
kubectl get pod

[root@master01 k8s]# kubectl get sa
NAME                     SECRETS   AGE
default                  0         8d
nfs-client-provisioner   0         2m42s
[root@master01 k8s]# kubectl get sc
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  60s

注意 RECLAIMPOLICY 如果是Delete,数据会被删除

[root@master01 k8s]# kubectl get deploy
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           30s
[root@master01 k8s]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-54698bfc75-ld8fj   1/1     Running   0          37s

创建测试

#storageClassName 与nfs-StorageClass.yaml metadata.name保持一致
cat > test-pvc.yaml  << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage  
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Mi
EOF

#执行
kubectl apply -f test-pvc.yaml

kubectl get pvc
NAME         STATUS    VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Pending                                          managed-nfs-storage   14m

#可以看到一直是Pending状态

kubectl describe pvc test-claim
Name:          test-claim
Namespace:     default
StorageClass:  managed-nfs-storage
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: fuseim.pri/ifs
               volume.kubernetes.io/storage-provisioner: fuseim.pri/ifs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                Age                 From                         Message
  ----    ------                ----                ----                         -------
  Normal  ExternalProvisioning  35s (x63 over 15m)  persistentvolume-controller  Waiting for a volume to be created either by the external provisioner 'fuseim.pri/ifs' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-54698bfc75-ld8fj   1/1     Running   0          21m

kubectl logs nfs-client-provisioner-54698bfc75-ld8fj 
E1019 07:43:08.625350       1 controller.go:1004] provision "default/test-claim" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference

这是 Kubernetes 1.20 及以后版本废弃了 selfLink 所致。
相关issue链接:https://github.com/kubernetes/kubernetes/pull/94397

解决方案一 (不适用1.26.6以上)

在 kube-apiserver 启动参数中添加 --feature-gates=RemoveSelfLink=false 后重启 apiserver(示例中 --advertise-address=192.168.244.4 为原有参数)

vi /etc/kubernetes/manifests/kube-apiserver.yaml
  - command:
    - kube-apiserver
    - --feature-gates=RemoveSelfLink=false
    - --advertise-address=192.168.244.4
    - --allow-privileged=true

systemctl daemon-reload
systemctl restart kubelet

kubectl get nodes
The connection to the server 192.168.244.4:6443 was refused - did you specify the right host or port?

#先预览再注释掉已废弃的 insecure-port 参数(如存在)
sed -n '/insecure-port/s/^/#/gp' /etc/kubernetes/manifests/kube-apiserver.yaml
sed -e '/insecure-port/s/^/#/g' -i /etc/kubernetes/manifests/kube-apiserver.yaml

解决方案二 (适用1.20.x以上所有版本)

不用修改 --feature-gates=RemoveSelfLink=false
修改 nfs-provisioner-deployment.yaml 镜像为nfs-subdir-external-provisioner:v4.0.2

原镜像
registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
国内加速
m.daocloud.io/gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.2

拉取推送私仓

docker pull m.daocloud.io/gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.2
docker tag m.daocloud.io/gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.2 repo.k8s.local/google_containers/nfs-subdir-external-provisioner:v4.0.2
docker push repo.k8s.local/google_containers/nfs-subdir-external-provisioner:v4.0.2
vi nfs-provisioner-deployment.yaml
image: repo.k8s.local/google_containers/nfs-subdir-external-provisioner:v4.0.2

#重新生效
kubectl apply -f nfs-provisioner-deployment.yaml 

kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-db4f6fb8-gnnbm   1/1     Running   0          16m

kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-16c63bbf-11c4-49aa-8289-7e4c64c78c2b   2Mi        RWX            managed-nfs-storage   99m

kubectl get pvc test-claim -o yaml | grep phase        
  phase: Bound

如果显示 phase 为 Bound,则说明已经创建 PV 且与 PVC 进行了绑定

kubectl describe pvc test-claim  
Normal  ProvisioningSucceeded  18m                fuseim.pri/ifs_nfs-client-provisioner-db4f6fb8-gnnbm_7f8d1cb0-c840-4198-afb0-f066f0ca86da  Successfully provisioned volume pvc-16c63bbf-11c4-49aa-8289-7e4c64c78c2b

可以看到创建了pv pvc-16c63bbf-11c4-49aa-8289-7e4c64c78c2b

在nfs目录下可以看自动创建的目录,
创建的目录命名方式为 namespace名称-pvc名称-pv名称。
PV 名称是随机字符串,只要不删除 PVC,Kubernetes 中与存储的绑定关系就不会丢失;一旦删除 PVC,就意味着删除了绑定的文件夹。即使之后重新创建同名 PVC,由于 PV 名是随机生成的,而文件夹命名又跟 PV 名有关,新生成的文件夹名称也不会和原来一致,所以删除 PVC 需谨慎。
ll /nfs/k8s/dpv/default-test-claim-pvc-16c63bbf-11c4-49aa-8289-7e4c64c78c2b/
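如果只知道 PVC 名称,也可以反查它在 NFS 上对应的后端目录,下面是一个示意(假设 NFS 导出路径就是本文的 /nfs/k8s/dpv,目录命名规则如上所述):

#取出 PVC 绑定的 PV 名称,再按 namespace-pvc名-pv名 拼出后端目录
pvname=$(kubectl -n default get pvc test-claim -o jsonpath='{.spec.volumeName}')
ls -ld /nfs/k8s/dpv/default-test-claim-${pvname}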

helm方式 安装

https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=x.x.x.x \
    --set nfs.path=/exported/path
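helm 安装时同样可以通过 values 指定 StorageClass 名称、回收策略等,下面是一个示意(nfs.server、nfs.path 取本文环境的值;storageClass.name、storageClass.reclaimPolicy 等参数名请以该 chart 的 values.yaml 为准):

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.244.6 \
    --set nfs.path=/nfs/k8s/dpv \
    --set storageClass.name=managed-nfs-storage \
    --set storageClass.reclaimPolicy=Retain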

创建测试的 Pod 资源文件

创建一个用于测试的 Pod 资源文件 test-pod.yaml,文件内容如下:

cat > test-nfs-pod.yaml << EOF
kind: Pod
apiVersion: v1
metadata:
  name: test-nfs-pod
spec:
  containers:
  - name: test-pod
    image: busybox:latest
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"  ## 创建一个名称为"SUCCESS"的文件
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
EOF

kubectl apply -f test-nfs-pod.yaml

ll /nfs/k8s/dpv/default-test-claim-pvc-16c63bbf-11c4-49aa-8289-7e4c64c78c2b/
total 0
-rw-r--r--. 1 root root 0 Oct 19 17:23 SUCCESS

[root@master01 k8s]# kubectl get pv,pvc,pods
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
persistentvolume/nfs-spv001                                 1Gi        ROX            Retain           Bound    default/nfs-pvc001   nfs                            25h
persistentvolume/nfs-spv002                                 2Gi        RWO            Recycle          Bound    default/nfs-pvc002   nfs                            25h
persistentvolume/nfs-spv003                                 3Gi        RWX            Retain           Bound    default/nfs-pvc003   nfs                            25h
persistentvolume/pvc-16c63bbf-11c4-49aa-8289-7e4c64c78c2b   2Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            32m

NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/nfs-pvc001   Bound    nfs-spv001                                 1Gi        ROX            nfs                   25h
persistentvolumeclaim/nfs-pvc002   Bound    nfs-spv002                                 2Gi        RWO            nfs                   25h
persistentvolumeclaim/nfs-pvc003   Bound    nfs-spv003                                 3Gi        RWX            nfs                   25h
persistentvolumeclaim/test-claim   Bound    pvc-16c63bbf-11c4-49aa-8289-7e4c64c78c2b   2Mi        RWX            managed-nfs-storage   116m

NAME                                        READY   STATUS      RESTARTS   AGE
pod/nfs-client-provisioner-db4f6fb8-gnnbm   1/1     Running     0          32m
pod/test-nfs-pod                            0/1     Completed   0          5m48s

pod 执行完就退出,pod 状态是 Completed;对应 PV 的回收策略(RECLAIM POLICY)是 Delete

测试写入空间限制

cat > test-nfs-pod1.yaml << EOF
kind: Pod
apiVersion: v1
metadata:
  name: test-nfs-pod1
spec:
  containers:
  - name: test-pod
    image: busybox:latest
    command: [ "sleep", "3600" ]
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
EOF
kubectl apply -f test-pvc.yaml

kubectl apply -f test-nfs-pod1.yaml
kubectl get pods -o wide 
kubectl describe pod test-nfs-pod1
kubectl exec -it test-nfs-pod1   /bin/sh

cd /mnt/
写入8M大小文件
time dd if=/dev/zero of=./test_w bs=8k count=100000

ll /nfs/k8s/dpv/default-test-claim-pvc-505ec1a4-4b79-4fb4-a1b6-4a26bccdd65a
-rw-r--r--. 1 root root 7.9M Oct 20 13:36 test_w

当初 PVC 申请的是 2Mi(storage: 2Mi),当前却写入了约 8M,可见这种 NFS 动态供给的 PVC 并不能限制实际使用空间,可用空间仍取决于 NFS 共享本身的大小。
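可以用下面两条命令对比 PVC 申请量和 NFS 上的实际占用来确认这一点(目录名以上一步实际生成的为准):

kubectl get pvc test-claim -o jsonpath='{.spec.resources.requests.storage}{"\n"}'
du -sh /nfs/k8s/dpv/default-test-claim-pvc-505ec1a4-4b79-4fb4-a1b6-4a26bccdd65a/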

kubectl delete -f test-nfs-pod1.yaml
kubectl delete -f test-pvc.yaml

创建一个nginx测试

准备镜像
docker pull docker.io/library/nginx:1.21.4
docker tag docker.io/library/nginx:1.21.4 repo.k8s.local/library/nginx:1.21.4
docker push repo.k8s.local/library/nginx:1.21.4
nginx yaml文件
cat > test-nginx.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  labels: {app: nginx}
  name: test-nginx
  namespace: test
spec:
  ports:
  - {name: t9080, nodePort: 30002, port: 30080, protocol: TCP, targetPort: 80}
  selector: {app: nginx}
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  namespace: test
  labels: {app: nginx}
spec:
  replicas: 1
  selector:
    matchLabels: {app: nginx}
  template:
    metadata:
      name: nginx
      labels: {app: nginx}
    spec:
      containers:
      - name: nginx
        #image: docker.io/library/nginx:1.21.4
        image: repo.k8s.local/library/nginx:1.21.4
        volumeMounts:
        - name: volv
          mountPath: /data
      volumes:
      - name: volv
        persistentVolumeClaim:
          claimName: test-pvc2
      nodeSelector:
        ingresstype: ingress-nginx
EOF

准备pvc

注意 namespace 要和 Service 一致

cat > test-pvc2.yaml  << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc2
  namespace: test
spec:
  storageClassName: managed-nfs-storage  
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 300Mi
EOF
创建命名空间并执行
kubectl create ns test
kubectl apply -f  test-pvc2.yaml
kubectl apply -f  test-nginx.yaml
kubectl get -f test-nginx.yaml

#修改动态 pv 回收策略为 Retain
kubectl edit pv -n default pvc-f9153444-5653-4684-a845-83bb313194d1
persistentVolumeReclaimPolicy: Retain
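除了 kubectl edit,也可以直接用 kubectl patch 一条命令把回收策略改为 Retain,效果相同:

kubectl patch pv pvc-f9153444-5653-4684-a845-83bb313194d1 \
  --patch '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'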

kubectl get pods -o wide -n test                   
NAME                     READY   STATUS    RESTARTS   AGE    IP       NODE     NOMINATED NODE   READINESS GATES
nginx-5ccbddff9d-k2lgs   0/1     Pending   0          8m6s   <none>   <none>   <none>           <none>

kubectl -n test describe pod nginx-5ccbddff9d-k2lgs 
 Warning  FailedScheduling  53s (x2 over 6m11s)  default-scheduler  0/3 nodes are available: persistentvolumeclaim "test-claim" not found. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..

kubectl get pv -o wide   #pv 是集群级资源,不区分 namespace
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE   VOLUMEMODE
pvc-16c63bbf-11c4-49aa-8289-7e4c64c78c2b   2Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            84m   Filesystem
NAME                     READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
nginx-5bc65c5745-rsp6s   0/1     Pending   0          17h   <none>   <none>   <none>           <none>

kubectl -n test describe pod/nginx-5bc65c5745-rsp6s 
Warning  FailedScheduling  4m54s (x206 over 17h)  default-scheduler  0/3 nodes are available: persistentvolumeclaim "test-pvc2" not found. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
#注意检查pvc的namespace和service是否一致

当前 PV 回收策略是 Delete,尚未回收,先清理测试资源
kubectl delete -f test-pvc2.yaml  
kubectl delete -f test-nginx.yaml  

kubectl get pv,pvc -o wide  
kubectl get pvc -o wide  
kubectl get pods -o wide -n test
#进入容器,修改内容
kubectl exec -it nginx-5c5c944c4f-4v5g7  -n test -- /bin/sh
cat /etc/nginx/conf.d/default.conf

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

  access_log  /var/log/nginx/access.log  main;
echo `hostname` >> /usr/share/nginx/html/index.html
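退出容器后,可以通过 NodePort 简单验证刚才写入的内容(30002 即上面 Service 中配置的 nodePort,任一节点 IP 均可,这里以本文的 node01 为例):

curl http://192.168.244.5:30002/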
Ingress资源

namespace 要一致

cat > ingress_svc_test.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc-test
  annotations:
    kubernetes.io/ingress.class: "nginx"
  namespace: test
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test-nginx
            port:
              number: 30080
EOF
#以下为 1.19 之前的旧版写法(extensions/v1beta1),在当前 1.28 集群中已不可用,仅作对比,不要覆盖上面的文件
cat > ingress_svc_test_old.yaml  << EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: test
spec:
  backend:
    serviceName: test-nginx
    servicePort: 30080
EOF

kubectl apply -f ingress_svc_test.yaml
kubectl describe ingress ingress-svc-test -n test

cat > ingress_svc_dashboard.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  namespace: kube-system
spec:
  rules:
  - http:
      paths:
      - path: /dashboard
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
EOF

查看系统service资源

kubectl get service --all-namespaces

备份pv数据

首先根据pvc找到对应的pv:

kubectl get pv,pvc
kubectl -n default get pvc nfs-pvc001 -o jsonpath='{.spec.volumeName}'
nfs-spv001

找到pv的挂载目录:

kubectl -n default get pv nfs-spv001
kubectl -n default describe pv nfs-spv001
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.244.6
    Path:      /nfs/k8s/spv_001

使用rsync命令备份数据:

rsync -avp --delete /nfs/k8s/spv_001 /databak/pvc-data-bak/default/spv_001/

还原

rsync -avp --delete /databak/pvc-data-bak/default/spv_001/ /nfs/k8s/spv_001/

批量回收pv

kubectl get pv 
当pv标记为Recycle时,删除pvc,pv STATUS 为Released/Failed ,删除claimRef段后恢复 Available
kubectl edit pv pvc-6f57e98a-dcc8-4d65-89c6-49826b2a3f18
kubectl patch pv pvc-91e17178-5e99-4ef1-90c2-1c9c2ce33af9 --patch '{"spec": {"claimRef":null}}'

kubectl delete pv pvc-e99c588a-8834-45b2-a1a9-85e31dc211ff
1.导出废弃pv在nfs服务器上的对应路径
kubectl get pv \
    -o custom-columns=STATUS:.status.phase,PATH:.spec.nfs.path \
    |grep Released  \
    |awk '{print $2}' \
> nfsdir.txt
2 清理k8s中的废弃pv

vi k8s_cleanpv.sh

#!/bin/bash
whiteList=`kubectl get pv |grep  Released |awk '{print $1}'`
echo "${whiteList}" | while read line
do
  kubectl patch pv ${line}  -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
done
3 清理nfs服务器上的废弃文件

vi ./k8s_pv_cleaner.sh

#!/bin/bash
whiteList=`cat $1`
echo "${whiteList}" | while read line
do
  rm -rf  "$line"
done

./k8s_pv_cleaner.sh nfsdir.txt

错误

no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
1.19版本以后,Ingress 所在的 apiVersion 和配置文件参数都有所更改
apiVersion: networking.k8s.io/v1beta1 改为 apiVersion: networking.k8s.io/v1
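backend 字段的写法也随之变化,新旧对照大致如下(服务名、端口沿用本文示例,v1 中 pathType 为必填):

# 旧(networking.k8s.io/v1beta1)
backend:
  serviceName: test-nginx
  servicePort: 30080

# 新(networking.k8s.io/v1)
backend:
  service:
    name: test-nginx
    port:
      number: 30080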

进入容器,创建数据

kubectl exec -it pod/test-nfs-pod -- sh
kubectl debug -it pod/test-nfs-pod --image=busybox:1.28 --target=pod_debug

kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh

删除测试的 Pod 资源文件

kubectl delete -f test-nfs-pod.yaml 

删除测试的 PVC 资源文件

kubectl delete -f test-pvc.yaml

错误
当nfs服务器重启后,原有挂nfs的pod不能关闭不能重启。
当nfs异常时,网元进程读nfs挂载目录超时卡住,导致线程占满,无法响应k8s心跳检测,一段时间后,k8s重启该网元pod,在终止pod时,由于nfs异常,umount卡住,导致pod一直处于Terminating状态。
为解决上述问题,可以参考如下几点:
1、若pod一直处于Terminating,也可以强制删除:kubectl delete pod foo --grace-period=0 --force
2、可使用mount -l | grep nfs,umount -l -f 执行强制umount操作
3、也可以先建立本地目录,让本地目录mount nfs设备,再将本地目录通过hostpath挂载到容器中,这样不管参数是否设置,pod均能删除,但是如果不设置参数,读取还会挂住

修改node默认挂载为Soft方式
vi /etc/nfsmount.conf

Soft=True
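除了修改节点全局的 /etc/nfsmount.conf,也可以只针对这个 StorageClass 指定软挂载参数。mountOptions 是 StorageClass 的标准字段,下面是一个示意(provisioner 和其余字段沿用你实际的 nfs-StorageClass.yaml,超时参数按需调整):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
mountOptions:
  - soft
  - timeo=30
  - retrans=3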

nfs对于高可用来说存在一个隐患:客户端nfs中有一个内核级别的线程,nfsv4.1-svc,该线程会一直和nfs服务端进行通信,且无法被kill掉。(停止客户端Nfs服务,设置开机不自启动,并卸载nfs,重启主机才能让该线程停掉)。一旦nfs服务端停掉,或者所在主机关机,那么nfs客户端就会找不到nfs服务端,导致nfs客户端所在主机一直处于卡死状态,表现为无法ssh到该主机,不能使用 df -h 等命令,会对客户造成比较严重的影响。



k8s_安装6_ipvs

安装六 ipvs

kube-proxy修改为ipvs模式
IPVS是Linux内核实现的四层负载均衡
kube-proxy 的三种模式中,现在常用的只有 iptables 模式或 ipvs 模式,不管哪种,这两个模式都依赖 node 节点上的 iptables 规则
kube-proxy 的功能是将发往 svc 的请求转发到 pod 上,无论是 iptables 模式还是 ipvs 模式,这个功能都是通过 iptables 链来完成的,iptables -t nat -L 可以打印出 kube-proxy 相关的链

查看当前模式

通过 kubectl 命令查看 kube-proxy 的配置

kubectl get configmap kube-proxy -n kube-system -o yaml | grep mode
    mode: ""
#空表示iptables

#查看日志
kubectl get pods -n kube-system -o wide| grep proxy
kube-proxy-58mfp                             1/1     Running   0             6d3h   192.168.244.5   node01.k8s.local     <none>           <none>
kube-proxy-z9tpc                             1/1     Running   0             6d3h   192.168.244.4   master01.k8s.local   <none>           <none>

kubectl logs kube-proxy-z9tpc -n kube-system 
I1011 03:20:57.180789       1 server_others.go:69] "Using iptables proxy"

#查看端口
netstat -lntp 
tcp        0      0 127.0.0.1:10257         0.0.0.0:*               LISTEN      8204/kube-controlle 
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      22903/kube-proxy 

将kube_proxy 切换为ipvs

kube-proxy 的 metrics 默认监听地址是 127.0.0.1:10249,想要修改监听地址,
把 metricsBindAddress 这段修改成 metricsBindAddress: "0.0.0.0:10249"
切换 ipvs 则把 mode 修改为 mode: "ipvs"

#在master节点,修改编辑kube-proxy 这个configmap文件,修改模式为ipvs,同时改监听ip为全部:
    metricsBindAddress: ""
    mode: ""

kubectl edit configmaps kube-proxy -n kube-system

    kind: KubeProxyConfiguration
    logging:
      flushFrequency: 0
      options:
        json:
          infoBufferSize: "0"
      verbosity: 0
    metricsBindAddress: "0.0.0.0:10249"
    mode: "ipvs"
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""

重启kube-proxy

# 我们发现修改kube-proxy 这个configmap文件后,查看pod的日志,发现ipvs模式并没有立即生效,所以我们需要删除kube-proxy的pod,这些pod是
# 由DaemonSet控制,删除之后DaemonSet会重新在每个节点创建的

#kubectl get pods -n kube-system | grep kube-proxy |awk '{print $1}' | xargs kubectl delete pods -n kube-system
kubectl  delete pods -n kube-system -l k8s-app=kube-proxy
pod "kube-proxy-58mfp" deleted
pod "kube-proxy-z9tpc" deleted

kubectl get pods -n kube-system -o wide| grep proxy       
kube-proxy-pgr2j                             1/1     Running   0             3s     192.168.244.4   master01.k8s.local   <none>           <none>
kube-proxy-xgmqz                             1/1     Running   0             3s     192.168.244.5   node01.k8s.local     <none>           <none>
kubectl logs kube-proxy-pgr2j -n kube-system 
I1017 07:21:40.118590       1 server_others.go:218] "Using ipvs Proxier"
I1017 07:21:40.118809       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"

kubectl logs kube-proxy-xgmqz -n kube-system 
I1017 07:21:39.951567       1 server_others.go:218] "Using ipvs Proxier"
I1017 07:21:39.951590       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"

#curl请求内部确认
curl 127.0.0.1:10249/proxyMode
ipvs

#查看端口
ss  -antulp |grep :10249
tcp    LISTEN     0      4096   [::]:10249              [::]:*                   users:(("kube-proxy",pid=5764,fd=10))

外部访问测试

http://192.168.244.4:10249/proxyMode
ipvs

查看当前ipvs转发规则

ipvsadm

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  node02.k8s.local:30080 rr
  -> node01.k8s.local:http        Masq    1      0          0         
  -> node02.k8s.local:http        Masq    1      0          0         
TCP  node02.k8s.local:30443 rr
  -> node01.k8s.local:https       Masq    1      0          0         
  -> node02.k8s.local:https       Masq    1      0          0         
TCP  node02.k8s.local:30801 rr
  -> 10.244.2.9:irdmi             Masq    1      0          0         
TCP  node02.k8s.local:32080 rr
  -> 10.244.2.24:8089             Masq    1      0          0         
TCP  node02.k8s.local:https rr
  -> master01.k8s.local:sun-sr-ht Masq    1      4          0         
TCP  node02.k8s.local:domain rr
  -> 10.244.0.10:domain           Masq    1      0          0         
  -> 10.244.0.11:domain           Masq    1      0          0         
TCP  node02.k8s.local:9153 rr
  -> 10.244.0.10:9153             Masq    1      0          0         
  -> 10.244.0.11:9153             Masq    1      0          0         
TCP  node02.k8s.local:irdmi rr
  -> 10.244.2.9:irdmi             Masq    1      0          0         
TCP  node02.k8s.local:31080 rr
  -> 10.244.2.24:8089             Masq    1      0          0         
TCP  node02.k8s.local:cslistener rr
  -> 10.244.2.8:cslistener        Masq    1      0          0         
TCP  node02.k8s.local:http rr
  -> node01.k8s.local:http        Masq    1      0          0         
  -> node02.k8s.local:http        Masq    1      0          0         
TCP  node02.k8s.local:https rr
  -> node01.k8s.local:https       Masq    1      0          0         
  -> node02.k8s.local:https       Masq    1      0          0         
TCP  node02.k8s.local:https rr
  -> node01.k8s.local:pcsync-http Masq    1      0          0         
  -> node02.k8s.local:pcsync-http Masq    1      0          0         
TCP  node02.k8s.local:irdmi rr
  -> 10.244.2.10:irdmi            Masq    1      0          0         
TCP  node02.k8s.local:31080 rr
  -> 10.244.1.93:http             Masq    1      0          0         
TCP  node02.k8s.local:30080 rr
  -> node01.k8s.local:http        Masq    1      0          0         
  -> node02.k8s.local:http        Masq    1      0          0         
TCP  node02.k8s.local:30443 rr
  -> node01.k8s.local:https       Masq    1      0          0         
  -> node02.k8s.local:https       Masq    1      0          0         
TCP  node02.k8s.local:30801 rr
  -> 10.244.2.9:irdmi             Masq    1      0          0         
TCP  node02.k8s.local:32080 rr
  -> 10.244.2.24:8089             Masq    1      0          0         
TCP  node02.k8s.local:30080 rr
  -> node01.k8s.local:http        Masq    1      0          0         
  -> node02.k8s.local:http        Masq    1      0          0         
TCP  node02.k8s.local:30443 rr
  -> node01.k8s.local:https       Masq    1      0          0         
  -> node02.k8s.local:https       Masq    1      0          0         
TCP  node02.k8s.local:30801 rr
  -> 10.244.2.9:irdmi             Masq    1      0          0         
TCP  node02.k8s.local:32080 rr
  -> 10.244.2.24:8089             Masq    1      0          0         
UDP  node02.k8s.local:domain rr
  -> 10.244.0.10:domain           Masq    1      0          0         
  -> 10.244.0.11:domain           Masq    1      0          0  

IPVS支持三种负载均衡模式

Direct Routing(简称DR),Tunneling(也称ipip模式)和NAT(也称Masq模式)

DR

IPVS的DR模式是最广泛的IPVS模式,它工作在L2,即通过Mac地址做LB,而非IP地址。在DR模式下,回程报文不会经过IPVS director,而是直接返回给客户端。因此,DR在带来高性能的同时,对网络也有一定的限制,即要求IPVS的director和客户端在同一个局域网。另外,比较遗憾的是,DR不支持端口映射,无法支持kubernetes service的所有场景。

TUNNELING

IPVS的Tunneling模式就是用IP包封装IP包,因此也称ipip模式。Tunneling模式下的报文不经过IPVS director,而是直接回复给客户端。Tunneling模式同样不支持端口映射,因此很难被用在kubernetes的service场景中。

NAT

IPVS的NAT模式支持端口映射,回程报文需要经过IPVS director,因此也称Masq(伪装)模式。kubernetes在用IPVS实现Service时用的正是NAT模式。当使用NAT模式时,需要注意对报文进行一次SNAT,这也是kubernetes使用IPVS实现Service的微妙之处。

IPVS 的调度算法

rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq,其中常用的八种说明如下:

  1. 轮叫调度 rr
    这种算法是最简单的,就是按依次循环的方式将请求调度到不同的服务器上,该算法最大的特点就是简单。轮询算法假设所有的服务器处理请求的能力都是一样的,调度器会将所有的请求平均分配给每个真实服务器,不管后端 RS 配置和处理能力,非常均衡地分发下去。

  2. 加权轮叫 wrr
    这种算法比 rr 的算法多了一个权重的概念,可以给 RS 设置权重,权重越高,那么分发的请求数越多,权重的取值范围 0 – 100。主要是对rr算法的一种优化和补充, LVS 会考虑每台服务器的性能,并给每台服务器添加要给权值,如果服务器A的权值为1,服务器B的权值为2,则调度到服务器B的请求会是服务器A的2倍。权值越高的服务器,处理的请求越多。

  3. 最少链接 lc
    这个算法会根据后端 RS 的连接数来决定把请求分发给谁,比如 RS1 连接数比 RS2 连接数少,那么请求就优先发给 RS1

  4. 加权最少链接 wlc
    这个算法比 lc 多了一个权重的概念。

  5. 基于局部性的最少连接调度算法 lblc
    这个算法是请求数据包的目标 IP 地址的一种调度算法,该算法先根据请求的目标 IP 地址寻找最近的该目标 IP 地址所有使用的服务器,如果这台服务器依然可用,并且有能力处理该请求,调度器会尽量选择相同的服务器,否则会继续选择其它可行的服务器

  6. 复杂的基于局部性最少的连接算法 lblcr
    记录的不是要给目标 IP 与一台服务器之间的连接记录,它会维护一个目标 IP 到一组服务器之间的映射关系,防止单点服务器负载过高。

  7. 目标地址散列调度算法 dh
    该算法是根据目标 IP 地址通过散列函数将目标 IP 与服务器建立映射关系,出现服务器不可用或负载过高的情况下,发往该目标 IP 的请求会固定发给该服务器。

  8. 源地址散列调度算法 sh
    与目标地址散列调度算法类似,但它是根据源地址散列算法进行静态分配固定的服务器资源。

kube-proxy IPVS参数

在运行基于IPVS的kube-proxy时,需要注意以下参数:

  • --proxy-mode:除了现有的userspace和iptables模式,IPVS模式通过--proxy-mode=ipvs进行配置。
  • --ipvs-scheduler:用来指定ipvs负载均衡算法,如果不配置则默认使用round-robin(rr)算法。
  • --cleanup-ipvs:类似于--cleanup-iptables参数。如果设置为true,则清除在IPVS模式下创建的IPVS规则;
  • --ipvs-sync-period:表示kube-proxy刷新IPVS规则的最大间隔时间,例如5秒、1分钟等,要求大于0;
  • --ipvs-min-sync-period:表示kube-proxy刷新IPVS规则最小时间间隔,例如5秒、1分钟等,要求大于0;
  • --ipvs-exclude-cidrs:用于清除IPVS规则时告知kube-proxy不要清理该参数配置的网段的IPVS规则。因为我们无法区别某条IPVS规则到底是kube-proxy创建的,还是其他用户进程的,配置该参数是为了避免删除用户自己的IPVS规则。
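kubeadm 部署的集群里,这些参数一般通过 kube-proxy 的 configmap(KubeProxyConfiguration)来配置,例如指定调度算法为 wrr 时的片段大致如下(示意,字段名以实际版本的 KubeProxyConfiguration 为准):

    kind: KubeProxyConfiguration
    mode: "ipvs"
    ipvs:
      scheduler: "wrr"
      syncPeriod: 30s
      minSyncPeriod: 5s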

一旦创建一个Service和Endpoint,IPVS模式的kube-proxy会做以下三件事:

  • 1)确保一块dummy网卡(kube-ipvs0)存在,为什么要创建dummy网卡?因为IPVS的netfilter钩子挂载INPUT链,我们需要把Service的访问绑定在dummy网卡上让内核“觉得”虚IP就是本机IP,进而进入INPUT链。
  • 2)把Service的访问IP绑定在dummy网卡上
  • 3)通过socket调用,创建IPVS的virtual server和real server,分别对应kubernetes的Service和Endpoint。
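可以在节点上验证这几步的结果:kube-ipvs0 网卡上绑定了各个 Service 的 ClusterIP,ipvsadm 中能看到对应的 virtual server 和 real server:

ip addr show kube-ipvs0
ipvsadm -Ln | head -20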

IPVS模式中的iptables和ipset
IPVS用于流量转发,它无法处理kube-proxy中的其他问题,例如包过滤、SNAT等。具体来说,IPVS模式的kube-proxy将在以下4种情况依赖iptables

kube-proxy 配置启动参数masquerade-all=true,即集群中所有经过Kube-proxy的包都将做一次SNAT
kube-proxy 启动参数指定集群IP地址范围
支持Load Balance 类型的服务,用于配置白名单
支持NodePort类型的服务,用于在包跨节点前配置MASQUERADE,类似于上文提到的iptables模式



k8s_安装5_主体和CNI

五、k8s集群初始化

 安装 kubernetes 组件

所有节点均执行
由于kubernetes的镜像源在国外,速度比较慢,这里切换成国内的镜像源
创建/etc/yum.repos.d/kubernetes.repo文件并添加如下内容:

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes] 
name=Kubernetes 
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 
enabled=1 
gpgcheck=0 
repo_gpgcheck=0 
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg 
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg 
EOF

wget -P /etc/yum.repos.d/ http://mirrors.aliyun.com/repo/epel-archive-6.repo

安装kubeadm、kubelet和kubectl

此处指定版本为 kubectl-1.28.2,如果不指定默认下载最新版本。

yum install --setopt=obsoletes=0 kubelet-1.28.2  kubeadm-1.28.2  kubectl-1.28.2 

配置kubelet的cgroup

# 编辑/etc/sysconfig/kubelet,添加下面的配置 
cat > /etc/sysconfig/kubelet << EOF
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd" 
KUBE_PROXY_MODE="ipvs" 
EOF

设置kubelet开机自启

systemctl enable kubelet
kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.2", GitCommit:"89a4ea3e1e4ddd7f7572286090359983e0387b2f", GitTreeState:"clean", BuildDate:"2023-09-13T09:34:32Z", GoVersion:"go1.20.8", Compiler:"gc", Platform:"linux/amd64"}

准备集群镜像

以下步骤是在 master 节点进行,如果你可以访问到 http://k8s.gcr.io ,那么就可以直接跳过该步骤。

这种方法是先下载镜像,由于默认拉取镜像地址k8s.gcr.io国内无法访问,
所以在安装kubernetes集群之前,提前准备好集群需要的镜像,所需镜像可以通过下面命令查看

# 要下载镜像版本列表
kubeadm config images list
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1

方式一

推荐在harbor仓库机进行镜像操作,操作docker比contained更方便
节省带宽、时间、空间、不暴露密码、命令更简单。

docker images

docker添加私仓地址

vi /etc/docker/daemon.json
"registry-mirrors":["http://hub-mirror.c.163.com"],
"insecure-registries":["repo.k8s.local:5100"],

重启docker

systemctl daemon-reload
systemctl restart docker
docker info
Client: Docker Engine - Community
 Version:    24.0.6

登录harbor

使用harbor中新建的开发或维护用户登录

docker login  http://repo.k8s.local:5100               
Username: k8s_user1
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

登录成功后,下次就不需再登录

测试

手动推送私仓
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.2 
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.2  repo.k8s.local:5100/google_containers/kube-apiserver:v1.28.2 
docker push repo.k8s.local:5100/google_containers/kube-apiserver:v1.28.2

docker pull busybox
docker tag busybox repo.k8s.local:5100/google_containers/busybox:9.9
docker images |tail -1
docker push repo.k8s.local:5100/google_containers/busybox:9.9

去Harbor查看是否有对应的镜像,如果有表示成功

在master上测试从私仓拉取

ctr -n k8s.io i pull -k --plain-http repo.k8s.local:5100/google_containers/busybox:9.9
ctr -n k8s.io images ls  |grep busybox
repo.k8s.local:5100/google_containers/busybox:9.9                                                                                       application/vnd.docker.distribution.manifest.v2+json      sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee 2.1 MiB   linux/amd64 
批量打标

不想改布署yml文件

# 下载镜像,重新标记为官方TAG,删除被标记的阿里云的镜像
#!/bin/bash
images=$(kubeadm config images list --kubernetes-version=1.28.2 | awk -F'/' '{print $NF}')
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName 
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName registry.k8s.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
批量打标并推送私仓

repo.k8s.local:5100下已准备好 google_containers 仓库名
如在harbor仓库安装,可以在master导出文本到本地images.txt中,本地脚本中注释kubeadm 行,直接读images.txt

#下载镜像,重新标记为私有仓库,并推送到私有仓库,删除被标记的阿里云的镜像
vi docker_images.sh
#!/bin/bash
imagesfile=images.txt
$(kubeadm config images list --kubernetes-version=1.28.2 | awk -F'/' '{print $NF}' > ${imagesfile})
images=$(cat ${imagesfile})
for i in ${images}
do
echo ${i}
docker pull registry.aliyuncs.com/google_containers/$i
docker tag registry.aliyuncs.com/google_containers/$i repo.k8s.local:5100/google_containers/$i
docker push repo.k8s.local:5100/google_containers/$i
docker rmi registry.aliyuncs.com/google_containers/$i
done
chmod +x docker_images.sh
sh docker_images.sh

kubeadm config images list --kubernetes-version=1.28.2 | awk -F'/' '{print $NF}' > images.txt

查看下载的镜像
docker images

方式二

在master和node上用containerd操作
containerd 镜像导入命令和 docker 有所不同,推私仓/拉取时需带 --all-platforms,此时会占用大量空间和带宽。

# 导入好镜像以后用crictl查看ctr也能查看,但是不直观
crictl images
ctr -n k8s.io images ls  
ctr -n k8s.io image import 镜像文件名 # 或者推到自己的镜像仓库再 pull 下来

#测试打标成官方,
ctr -n k8s.io i pull -k registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.2 registry.k8s.io/kube-apiserver:v1.28.2
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.2 

测试推送私仓,拉取时需加--all-platforms
ctr -n k8s.io i pull -k --all-platforms registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.2 repo.k8s.local:5100/google_containers/kube-apiserver:v1.28.2
ctr -n k8s.io i push -k --user k8s_user1:k8s_Uus1 --plain-http repo.k8s.local:5100/google_containers/kube-apiserver:v1.28.2

错误1

ctr: content digest sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d: not found

拉取镜像、导出镜像时,都加上 --all-platforms

错误2

ctr: failed to do request: Head "https://repo.k8s.local:5100/v2/google_containers/kube-apiserver/blobs/sha256:2248d40e3af29ab47f33357e4ecdc9dca9a89daea07ac3a5a76de583ed06c776": http: server gave HTTP response to HTTPS client

如harbor没开https,推送时加上 --user ${harboruser}:${harborpwd} --plain-http

#下载镜像,重新标记为私有仓库,并推送到私有仓库,删除被标记的阿里云的镜像
vi ctr_images.sh
#!/bin/bash
harboruser=k8s_user1
harborpwd=k8s_Uus1
images=$(kubeadm config images list --kubernetes-version=1.28.2 | awk -F'/' '{print $NF}')
for imageName in ${images[@]} ; do
    ctr -n k8s.io i pull -k --all-platforms registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName 
    #ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName registry.k8s.io/$imageName
    ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName repo.k8s.local:5100/google_containers/$imageName
    ctr -n k8s.io i push --user ${harboruser}:${harborpwd} --plain-http repo.k8s.local:5100/google_containers/$imageName
    #ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
chmod +x ctr_images.sh
./ctr_images.sh
crictl images
FATA[0000] validate service connection: CRI v1 image API is not implemented for endpoint "unix:///var/run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 

修复错误

#找到runtime_type 写入"io.containerd.runtime.v1.linux"

vi /etc/containerd/config.toml
      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        privileged_without_host_devices_all_devices_allowed = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = "io.containerd.runtime.v1.linux"
        sandbox_mode = ""
        snapshotter = ""
systemctl restart containerd
crictl images               
IMAGE               TAG                 IMAGE ID            SIZE

还有种方法是关闭cri插件,还没试过

cat /etc/containerd/config.toml |grep cri
sed -i -r '/cri/s/(.*)/#\1/' /etc/containerd/config.toml

创建集群

目前生产部署Kubernetes集群主要有两种方式:

    1. kubeadm安装
      Kubeadm 是一个 K8s 部署工具,提供 kubeadm init 和 kubeadm join,用于快速部 署 Kubernetes 集群。
      官方地址:https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
    1. 二进制包安装
      从 github 下载发行版的二进制包,手动部署每个组件,组成 Kubernetes 集群。
      Kubeadm 降低部署门槛,但屏蔽了很多细节,遇到问题很难排查。如果想更容易可 控,推荐使用二进制包部署 Kubernetes 集群,虽然手动部署麻烦点,期间可以学习很 多工作原理,也利于后期维护。

kubeadm 安装

初始化集群

下面的操作只需要在master节点上执行即可

systemctl start kubelet && systemctl enable kubelet && systemctl is-active kubelet
kubeadm config print init-defaults | tee kubernetes-init.yaml

kubeadm 创建集群

参考: https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/

A. 命令行:
kubeadm init \
--kubernetes-version=v1.28.2 \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/16 \
--apiserver-advertise-address=192.168.244.4 \
--cri-socket unix:///var/run/containerd/containerd.sock
kubeadm init 选项说明

--kubernetes-version string 默认值:"stable-1"
    为控制平面选择一个特定的 Kubernetes 版本。
--image-repository string 默认值:"registry.k8s.io"
    选择用于拉取控制平面镜像的容器仓库。
--pod-network-cidr string
    指明 Pod 网络可以使用的 IP 地址段。如果设置了这个参数,控制平面将会为每一个节点自动分配 CIDR。
--service-cidr string 默认值:"10.96.0.0/12"
    为服务的虚拟 IP 地址另外指定 IP 地址段。
--apiserver-advertise-address string
    API 服务器所公布的其正在监听的 IP 地址。如果未设置,则使用默认网络接口。
--cri-socket string
    要连接的 CRI 套接字的路径。如果为空,则 kubeadm 将尝试自动检测此值;仅当安装了多个 CRI 或具有非标准 CRI 套接字时,才使用此选项。
--dry-run
    不做任何更改;只输出将要执行的操作。
--node-name string
    指定节点的名称。
--service-dns-domain string 默认值:"cluster.local"
    为服务另外指定域名,例如:"myorg.internal"。

B. 配置文件

命令行方式指定参数执行起来非常冗余,尤其参数比较多的情况,此时可以将所有的配置要求写到配置文件中,部署的时候指定对应的配置文件即可,假设取名kubernetes-init.yaml:
写好配置文件后,直接执行:

vi kubernetes-init.yaml

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.244.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.28.2
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/16
scheduler: {}

kubeadm init --config kubernetes-init.yaml

查看启动service

cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

防雪崩,同时限制pod、k8s系统组件、linux系统守护进程资源

https://kubernetes.io/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/
当工作节点的磁盘,内存资源使用率达到阈值时会出发POD驱逐
工作节点修改/var/lib/kubelet/config.yaml 无效.需修改 /var/lib/kubelet/kubeadm-flags.env

默认的驱逐阈值如下:
--eviction-hard 默认值:
imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%

vi /var/lib/kubelet/config.yaml
只限pod,线上应增加memory.available值,留出kube和system和使用内存。
硬驱逐1G,软驱逐2G

enforceNodeAllocatable:
- pods
evictionHard:
  memory.available: "300Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "10%"
evictionMinimumReclaim:
  memory.available: "0Mi"
  nodefs.available: "500Mi"
  imagefs.available: "2Gi"
evictionSoft:
  memory.available: "800Mi"
  nodefs.available: "15%"
  nodefs.inodesFree: "10%"
  imagefs.available: "15%"
evictionSoftGracePeriod:
  memory.available: "120s"
  nodefs.available: "120s"
  nodefs.inodesFree: "120s"
  imagefs.available: "120s"
evictionMaxPodGracePeriod: 30

systemctl restart kubelet
systemctl status kubelet

报错
grace period must be specified for the soft eviction threshold nodefs.available"
如果配制了软驱逐evictionSoft那么相应的evictionSoftGracePeriod也需配制

evictionSoft:
  nodefs.available: "15%"
  nodefs.inodesFree: "10%"
  imagefs.available: "15%"
  • enforceNodeAllocatable []string
    此标志设置 kubelet 需要执行的各类节点可分配资源策略。此字段接受一组选项列表。 可接受的选项有 none、pods、system-reserved 和 kube-reserved。
    如果设置了 none,则字段值中不可以包含其他选项。
    如果列表中包含 system-reserved,则必须设置 systemReservedCgroup。
    如果列表中包含 kube-reserved,则必须设置 kubeReservedCgroup。
    这个字段只有在 cgroupsPerQOS被设置为 true 才被支持。
    默认值:["pods"]

  • evictionHard map[string]string
    evictionHard 是一个映射,是从信号名称到定义硬性驱逐阈值的映射。 例如:{"memory.available": "300Mi"}。 如果希望显式地禁用,可以在任意资源上将其阈值设置为 0% 或 100%。
    默认值:
    memory.available: "100Mi"
    nodefs.available: "10%"
    nodefs.inodesFree: "5%"
    imagefs.available: "15%"

  • evictionSoft map[string]string
    evictionSoft 是一个映射,是从信号名称到定义软性驱逐阈值的映射。 例如:{"memory.available": "300Mi"}。
    默认值:nil

  • evictionSoftGracePeriod map[string]string
    evictionSoftGracePeriod 是一个映射,是从信号名称到每个软性驱逐信号的宽限期限。 例如:{"memory.available": "30s"}。
    默认值:nil

  • evictionPressureTransitionPeriod
    evictionPressureTransitionPeriod 设置 kubelet 离开驱逐压力状况之前必须要等待的时长。
    默认值:"5m"

  • evictionMaxPodGracePeriod int32
    evictionMaxPodGracePeriod 是指达到软性逐出阈值而引起 Pod 终止时, 可以赋予的宽限期限最大值(按秒计)。这个值本质上限制了软性逐出事件发生时, Pod 可以获得的 terminationGracePeriodSeconds。
    注意:由于 Issue #64530 的原因,系统中存在一个缺陷,即此处所设置的值会在软性逐出时覆盖 Pod 的宽限期设置,从而有可能增加 Pod 上原本设置的宽限期限时长。 这个缺陷会在未来版本中修复。
    默认值:0

  • evictionMinimumReclaim map[string]string
    evictionMinimumReclaim 是一个映射,定义信号名称与最小回收量数值之间的关系。 最小回收量指的是资源压力较大而执行 Pod 驱逐操作时,kubelet 对给定资源的最小回收量。 例如:{"imagefs.available": "2Gi"}。
    默认值:nil

根据提示创建必要文件

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

此步骤执行完成之后,使用命令docker images查看系统中的镜像,可以看到我们需要的镜像均已安装完成。

记下输出,后面加入时使用

kubeadm join 192.168.244.4:6443 --token rynvll.2rdb5z78if3mtlgd \
        --discovery-token-ca-cert-hash sha256:42739a7aaff927af9dc3b77a5e684a93d1c6485a79b8d23c33d978476c6902e2 

删除或重新配置集群

kubeadm reset
rm -fr ~/.kube/ /etc/kubernetes/* /var/lib/etcd/*

node节点也可以执行kubectl,如重置也一同删除

scp -r ~/.kube node01.k8s.local:~/
scp -r ~/.kube node02.k8s.local:~/

node节点加入集群

将 node 节点加入 master 中的集群(该命令在工作节点node中执行)。

yum install --setopt=obsoletes=0 kubelet-1.28.2  kubeadm-1.28.2  kubectl-1.28.2 

 Package                                                    Arch                                       Version                                           Repository                                      Size
==============================================================================================================================================================================================================
Installing:
 kubeadm                                                    x86_64                                     1.28.2-0                                          kubernetes                                      11 M
 kubectl                                                    x86_64                                     1.28.2-0                                          kubernetes                                      11 M
 kubelet                                                    x86_64                                     1.28.2-0                                          kubernetes                                      21 M
Installing for dependencies:
 conntrack-tools                                            x86_64                                     1.4.4-7.el7                                       base                                           187 k
 cri-tools                                                  x86_64                                     1.26.0-0                                          kubernetes                                     8.6 M
 kubernetes-cni                                             x86_64                                     1.2.0-0                                           kubernetes                                      17 M
 libnetfilter_cthelper                                      x86_64                                     1.0.0-11.el7                                      base                                            18 k
 libnetfilter_cttimeout                                     x86_64                                     1.0.0-7.el7                                       base                                            18 k
 libnetfilter_queue                                         x86_64                                     1.0.2-2.el7_2                                     base                                            23 k
 socat                                                      x86_64                                     1.7.3.2-2.el7                                     base                                           290 k

Transaction Summary
==============================================================================================================================================================================================================
systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

将node1节点加入集群

如果没有令牌,可以通过在控制平面节点上运行以下命令来获取令牌:

kubeadm token list
#默认情况下,令牌会在24小时后过期。如果要在当前令牌过期后将节点加入集群
#kubeadm token create
kubeadm token create --print-join-command

kubeadm join 192.168.244.4:6443 --token rynvll.2rdb5z78if3mtlgd \
        --discovery-token-ca-cert-hash sha256:42739a7aaff927af9dc3b77a5e684a93d1c6485a79b8d23c33d978476c6902e2 

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

将node2节点加入集群

kubeadm join 192.168.244.4:6443 --token rynvll.2rdb5z78if3mtlgd \
        --discovery-token-ca-cert-hash sha256:42739a7aaff927af9dc3b77a5e684a93d1c6485a79b8d23c33d978476c6902e2 

查看集群状态 此时的集群状态为NotReady,这是因为还没有配置网络插件

kubectl get nodes
NAME                 STATUS     ROLES           AGE    VERSION
master01.k8s.local   NotReady   control-plane   5m2s   v1.28.2
node01.k8s.local     NotReady   <none>          37s    v1.28.2
kubectl get pods -A
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-66f779496c-9x9cf                     0/1     Pending   0          5m21s
kube-system   coredns-66f779496c-xwgtx                     0/1     Pending   0          5m21s
kube-system   etcd-master01.k8s.local                      1/1     Running   0          5m34s
kube-system   kube-apiserver-master01.k8s.local            1/1     Running   0          5m34s
kube-system   kube-controller-manager-master01.k8s.local   1/1     Running   0          5m34s
kube-system   kube-proxy-58mfp                             1/1     Running   0          72s
kube-system   kube-proxy-z9tpc                             1/1     Running   0          5m21s
kube-system   kube-scheduler-master01.k8s.local            1/1     Running   0          5m36s

删除子节点(在master主节点上操作)

# 假设这里删除 node3 子节点
kubectl drain node3 --delete-emptydir-data --force --ignore-daemonsets   #旧版本参数名为 --delete-local-data
kubectl delete node node3

然后在删除的子节点上操作重置k8s(重置k8s会删除一些配置文件),这里在node3子节点上操作

kubeadm reset

然后在被删除的子节点上手动删除k8s配置文件、flannel网络配置文件 和 flannel网口:

rm -rf /etc/cni/net.d/
rm -rf /root/.kube/config
#删除cni网络
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1

安装 Pod 网络插件

flannel/calico/cilium

如需将flannel网络切换calico网络
kubelet 配置必须增加 --network-plugin=cni 选项
kube-proxy 组件不能采用 --masquerade-all 启动,因为会与 Calico policy 冲突,并且需要加上 --proxy-mode=ipvs(ipvs模式)、--masquerade-all=true(表示ipvs proxier将伪装访问服务群集IP的所有流量)

安装flannel

# 最好提前下载镜像(所有节点)
wget -k https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.2.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.22.3
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.22.3
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

修改Network项的值,改为和 --pod-network-cidr 一样的值

vxlan/host-gw
host-gw 的性能损失大约在 10% 左右,而其他所有基于 VXLAN“隧道”机制的网络方案,性能损失都在 20%~30% 左右
Flannel host-gw 模式必须要求集群宿主机之间是二层连通的.就是node1和node2在一个局域网.通过arp协议可以访问到.

工作方式首选vxlan+DirectRouting

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "Directrouting": true
      }
    }

准备镜像

由于有时国内网络的问题,需要修改image的地址,把所有的docker.io改为dockerproxy.com,或者下载下来后打标签

方式一。改地址

修改kube-flannel.yml中地址为能下载的

image: docker.io/flannel/flannel-cni-plugin:v1.2.0
image: docker.io/flannel/flannel:v0.22.3
方式二。下载到本地
docker pull docker.io/flannel/flannel-cni-plugin:v1.2.0
docker pull docker.io/flannel/flannel:v0.22.3

container 操作下载到本地

ctr -n k8s.io i pull -k docker.io/flannel/flannel-cni-plugin:v1.2.0
ctr -n k8s.io i pull -k docker.io/flannel/flannel:v0.22.3
方式三。下载后推送到本地私人仓库

批量打标签
如在harbor仓库安装,可以导出到images.txt中再执行

# cat docker_flannel.sh 
#!/bin/bash
imagesfile=images.txt
$(grep image kube-flannel.yml | grep -v '#' | awk -F '/' '{print $NF}' > ${imagesfile})
images=$(cat ${imagesfile})

for i in ${images}
do
docker pull flannel/$i
docker tag flannel/$i repo.k8s.local:5100/google_containers/$i
docker push repo.k8s.local:5100/google_containers/$i
docker rmi flannel/$i
done
#执行脚本文件:
sh docker_flannel.sh
查看镜像
crictl images 
IMAGE                                                                         TAG                 IMAGE ID            SIZE
docker.io/flannel/flannel-cni-plugin                                          v1.2.0              a55d1bad692b7       3.88MB
docker.io/flannel/flannel                                                     v0.22.3             e23f7ca36333c       27MB
应用安装
kubectl apply -f kube-flannel.yml

namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
查看pods
kubectl get pods -n kube-flannel
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-flgpn   1/1     Running   0          58s
kube-flannel-ds-qdw64   1/1     Running   0          58s
验证安装结果,仅在master节点执行,二进制tar.gz包安装所有节点都要操作
kubectl get pods -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
coredns-66f779496c-9x9cf                     1/1     Running   0          118m
coredns-66f779496c-xwgtx                     1/1     Running   0          118m
etcd-master01.k8s.local                      1/1     Running   0          118m
kube-apiserver-master01.k8s.local            1/1     Running   0          118m
kube-controller-manager-master01.k8s.local   1/1     Running   0          118m
kube-proxy-58mfp                             1/1     Running   0          114m
kube-proxy-z9tpc                             1/1     Running   0          118m
kube-scheduler-master01.k8s.local            1/1     Running   0          118m

systemctl status containerd
systemctl status kubelet
systemctl restart kubelet
kubectl get nodes

run.go:74] "command failed" err="failed to run Kubelet: validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/containerd/containerd.sock\

检查containerd服务有无错误

failed to load plugin io.containerd.grpc.v1.cri" error="invalid plugin config: `mirrors` cannot be set when `config_path` is provided"

/etc/containerd/config.toml中config_path 和 registry.mirrors 取一配制

    [plugins."io.containerd.grpc.v1.cri".registry]
      #config_path = "/etc/containerd/certs.d"

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
         endpoint = ["https://registry.cn-hangzhou.aliyuncs.com"]
transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory

/usr/lib/systemd/system/kubelet.service 里 [Unit] 段的 After 改成 After=containerd.service(如果用的是 docker,则改成 After=docker.service)
我这里修改之前的值是 After=network-online.target
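如果不想直接改 /usr/lib/systemd/system 下的单元文件,也可以用 systemd drop-in 的方式覆盖(示意,文件名可自取,改完记得 daemon-reload):

mkdir -p /etc/systemd/system/kubelet.service.d
cat > /etc/systemd/system/kubelet.service.d/10-after-containerd.conf << EOF
[Unit]
After=containerd.service
EOF
systemctl daemon-reload
systemctl restart kubelet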

err="failed to run Kubelet: running with swap on is not supported, please disable swap

永久关闭swap

swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab
"Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\""

经过分析后发现,是因为"kubernetes默认设置cgroup驱动为systemd,而docker服务的cgroup驱动为cgroupfs",有两种解决方式:方式一,将docker服务的配置文件修改为和kubernetes的相同;方式二,修改kubernetes的配置文件为cgroupfs。这里采用第一种。
修改docker服务的配置文件,“/etc/docker/daemon.json ”文件,添加如下
"exec-opts": ["native.cgroupdriver=systemd"]

node节点可执行kubectl

scp -r ~/.kube node02.k8s.local:~/

此节可先跳过,待安装自动补全后再设置

vi ~/.bashrc

source <(kubectl completion bash)
command -v kubecolor >/dev/null 2>&1 && alias k="kubecolor"
complete -o default -F __start_kubectl k
export PATH="/root/.krew/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin"

[ -f ~/.kube/aliases.sh ] && source ~/.kube/aliases.sh

source ~/.bashrc



k8s_安装4_私仓harbor

四、harbor仓库

k8s经常使用的镜像地址

从 Kubernetes 1.25 开始,我们的容器镜像注册中心已经从 k8s.gcr.io 更改为 registry.k8s.io .
registry.aliyuncs.com/google_containers是定时同步kubernetes的镜像到阿里镜像仓库服务的,但只是K8S组件的镜像,阿里云镜像仓库有谷歌和RedHat的镜像,但是不全。
当我们下载k8s.gcr.io,gcr.io镜像和quay.io镜像,可以把k8s.gcr.io,gcr.io, quay.io镜像换成阿里云或其它国内镜像加速地址下,如下所示:

k8s中相关镜像

#k8s.io  不能访问
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2

docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
docker pull registry.k8s.io/dns/k8s-dns-node-cache:1.22.28
docker pull k8s.mirror.nju.edu.cn/dns/k8s-dns-node-cache:1.22.28

# gcr.io
gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0
docker pull m.daocloud.io/gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0
docker pull m.daocloud.io/gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.2

# k8s.gcr.io
k8s.gcr.io/pause:3.2
registry.aliyuncs.com/google_containers/pause:3.2

#docker.io
docker.io/flannel/flannel-cni-plugin:v1.2.0
https://registry.cn-hangzhou.aliyuncs.com

#quay.io 可以下载
quay.io/external_storage/nfs-client-provisioner:latest
docker pull quay.nju.edu.cn/jetstack/cert-manager-webhook:v1.13.1

#docker.elastic.co 可以下载,有时慢
docker.elastic.co/beats/filebeat:8.11.0

#没有登录,需先登录
Error response from daemon: pull access denied for registry.aliyuncs.com/google_containers/cert-manager-webhook, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
docker login registry.cn-hangzhou.aliyuncs.com
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/cert-manager-webhook:v1.13.2

私仓

Docker比较流行使用的三种私有仓库
Docker Registry,Nexus,Harbor

Docker Registry

通过使用 Docker Compose,我们可以轻松地在本地部署 Docker Registry.
Harbor完全是在Registry上的封装,目前比Registry功能主要的强化在于:

  • 提供UI界面
  • 提供基于角色管理的用户权限
  • 提供用户操作记录审计确认
  • 提供镜像远程复制(同步)
  • 提供对Helm Chart等的支持
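回到上面说的用 Docker Compose 本地部署 Docker Registry,一个最小化的 docker-compose.yml 大致如下(示意,端口和数据目录按需调整):

version: "3"
services:
  registry:
    image: registry:2
    restart: always
    ports:
      - "5000:5000"
    volumes:
      - ./registry-data:/var/lib/registry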

Nexus

Java 开发中的那个 Maven 私服,对对对,就是这个 Nexus。Nexus它也可以应用于 docker 仓库。

优势

docker就可以安装,可以代理其它仓库并缓存到本地。
nexus3覆盖更全面,啥都可以做,是一个混合仓库maven、yum、npm的私服

repositories说明:(参考maven已有repositories介绍,因为刚开始进来maven相关的库是已经存在的)

  • maven-public:它的类型是group(分组)。即:仓库所有的访问读取入口,都是从 public 开始,读取会分别从 snapshots、releases、central 中都去找,只要其中一个找到就读取回来),就是本地仓库找不到,就去配置的网络仓库中去找【网络仓库地址,会在docker-public中配置阿里云加速镜像地址】
  • maven-releases 发布后的jar包,放到release中
  • maven-snapshots 测试的jar包,放到snapshots中
  • maven-central 它的类型是proxy(代理)。代理不存数据,是只读的。这个类似配置maven仓库时我们配置的一个aliyun仓库,帮我们去代理到aliyun仓库

nexus中有个默认自带的仓库列表,里面包含了各种各样的仓库。
这些仓库主要分为三类,代理仓库、宿主仓库和仓库组。

  • 代理仓库:代理仓库主要是让使用者通过代理仓库来间接访问外部的第三方远程仓库的。代理仓库会从被代理的仓库中下载构件,缓存在代理仓库中以供maven用户使用。

  • 宿主仓库: 宿主仓库主要是给我们自己用的,主要有2点作用:
        将私有的一些构件通过nexus中网页的方式上传到宿主仓库中给其他同事使用
        将自己开发好的一些构件发布到nexus的宿主仓库中以供其他同事使用。

  • 仓库组:(默认maven-public)仓库组中可以有多个代理仓库和宿主仓库,而maven用户只用访问一个仓库组就可以间接地访问这个组内所有的仓库,仓库组中多个仓库是有顺序的,当maven用户从仓库组下载构件时,仓库组会按顺序依次在组内的仓库中查找组件,查找到了立即返回给本地仓库,所以一般情况我们会将速度快的放在前面。仓库组内部实际上是没有构件内容的,他只是起到一个请求转发的作用,将maven用户下载构件的请求转发给组内的其他仓库处理。

Harbor 私仓

它是Docker Registry的更高级封装,它除了提供友好的Web UI界面,角色和用户权限管理,用户操作审计等功能外,它还整合了K8s的插件(Add-ons)仓库,即Helm通过chart方式下载,管理,安装K8s插件,而chartmuseum可以提供存储chart数据的仓库【注:helm就相当于k8s的yum】。另外它还整合了两个开源的安全组件,一个是Notary,另一个是Clair,Notary类似于私有CA中心,而Clair则是容器安全扫描工具,它通过各大厂商提供的CVE漏洞库来获取最新漏洞信息,并扫描用户上传的容器是否存在已知的漏洞信息,这两个安全功能对于企业级私有仓库来说是非常具有意义的。
harbor 的优势很明显,特别是可以自建文件夹进行分组这点就非常好。其实说实话,作为一个私有的镜像仓库,harbor 已经做得很好的了,唯一的缺点是它无法帮你下载镜像。
相比Nexus要费资源。使用 Harbor 必须要先安装 docker 以及 docker-compose。

安装参考:
https://agou-ops.cn/post/containerdharbor%E7%A7%81%E6%9C%89%E4%BB%93https/

Harbor相关地址

官网:https://goharbor.io/

Github地址:https://github.com/goharbor/harbor

操作文档:https://goharbor.io/docs/

Harbor安装有多种方式:

在线安装:从Docker Hub下载Harbor相关镜像,因此安装软件包非常小
离线安装:安装包包含部署的相关镜像,因此安装包比较大

只在仓库机器上执行
Harbor安装前提条件
Harbor被部署为几个Docker容器。因此,您可以在任何支持Docker的Linux发行版上部署它。目标主机需要安装Docker,并安装Docker Compose。
安装Harbor前,需要安装Docker engine,Docker Compose和Openssl。

私有库默认是不支持删除镜像的,需要修改config.yml配置文件,在storage节点下加入 delete: enabled: true
删tag不会回收空间
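
以 Docker Registry(registry:2)自带的 /etc/docker/registry/config.yml 为例,storage 节点大致如下(filesystem 路径按实际情况调整):

storage:
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true

删除 tag 后还需执行 registry 的垃圾回收(registry garbage-collect /etc/docker/registry/config.yml)才会真正回收磁盘空间。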

Harbor 高可用方式

http://t.csdnimg.cn/k43Ao

  1. 安装两台 Harbor 仓库,他们共同使用一个存储(一般常见的便是 NFS 共享存储)需要额外配置 Redis 和 PostgreSQL 以及 NFS 服务
  2. 安装两台 Harbor 仓库,并互相配置同步关系。

1.docker环境的安装

docker-ce安装

首先先把服务停止了,不要直接卸载

systemctl stop docker

重命名数据目录

把默认的docker目录改一下名称。

mv /var/lib/docker /var/lib/docker-bak

卸载旧版本

Docker 的旧版本被称为 docker 或 docker-engine。如果这些已安装,请卸载它们以及关联 的依赖关系。

yum remove docker docker-common docker-selinux docker-engine -y

安装docker-ce

## 阿里云源
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce

docker version
Client: Docker Engine - Community
Version: 26.1.0
API version: 1.45
Go version: go1.21.9
Git commit: 9714adc
Built: Mon Apr 22 17:09:57 2024
OS/Arch: linux/amd64
Context: default
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://docker.nju.edu.cn/"],
  "insecure-registries": ["repo.k8s.local"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "data-root": "/var/lib/docker",
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 5,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "300m",
    "max-file": "2"
  },
  "experimental": true,
  "live-restore": true
}
EOF

报错
Failed to chown socket at step GROUP: No such process
可能是文件被锁定,用户组没添加成功

lsattr /etc/passwd /etc/shadow /etc/group /etc/gshadow /etc/services
----i----------- /etc/passwd
----i----------- /etc/shadow
----i----------- /etc/group
----i----------- /etc/gshadow
----i----------- /etc/services
chattr -i /etc/passwd /etc/shadow /etc/group /etc/gshadow /etc/services
groupadd docker

恢复数据目录

安装完docker-ce后,系统会创建新的docker目录,删除新建的目录,然后把备份的目录改回docker名称。
切记:不要启动docker!!

rm -rf /var/lib/docker
mv /var/lib/docker-bak /var/lib/docker

启动docker服务

systemctl enable containerd

docker -v
Docker version 26.1.0, build 9714adc

ctr version
Client:
  Version:  1.6.31
  Revision: e377cd56a71523140ca6ae87e30244719194a521
  Go version: go1.21.9

Server:
  Version:  1.6.31
  Revision: e377cd56a71523140ca6ae87e30244719194a521
  UUID: 07d08019-3523-4f01-90db-367b21874598

cat /etc/containerd/config.toml

报错1
Error response from daemon: Unknown runtime specified docker-runc
需要针对容器里面的docker-runc改一下名称,用runc替换docker-runc。
grep -rl 'docker-runc' /var/lib/docker/containers/ | xargs sed -i 's/docker-runc/runc/g'
systemctl restart docker

报错2
level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
使用
systemctl start containerd
不要用
systemctl start docker

2.安装docker-compose:

https://github.com/docker/compose/releases
安装方式

  • 二进制安装docker-compose
  • yum方式安装docker-compose
  • python3+pip 安装

二进制方式安装docker-compose:

简单快速

wget https://github.com/docker/compose/releases/download/v2.12.2/docker-compose-linux-x86_64
wget https://github.com/docker/compose/releases/download/v2.22.0/docker-compose-linux-x86_64
sudo cp -arf docker-compose-linux-x86_64 /usr/bin/docker-compose
sudo chmod +x /usr/bin/docker-compose
ln -s /usr/bin/docker-compose /usr/local/bin/docker-compose

卸载
sudo rm /usr/bin/docker-compose

yum方式安装docker-compose:

安装复杂

相关包安装

yum -y install libjpeg zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel
yum install -y libffi-devel zlib1g-dev

openssl升级

centos7默认的openssl版本和python3.10以上的版本不兼容
openssl version

OpenSSL 1.0.2k-fips  26 Jan 2017
wget https://www.openssl.org/source/openssl-1.1.1q.tar.gz
tar -zxvf openssl-1.1.1q.tar.gz
cd openssl-1.1.1q
./config --prefix=/usr/local/openssl 
make -j && make install

ln -s /usr/local/openssl/lib/libcrypto.so.1.1  /usr/lib64/libcrypto.so.1.1
ln -s /usr/local/openssl/lib/libssl.so.1.1 /usr/lib64/libssl.so.1.1
/usr/local/openssl/bin/openssl version
OpenSSL 1.1.1q  5 Jul 2022

备份原来的openssl文件,可通过whereis openssl查询位置

mv /usr/bin/openssl /usr/bin/openssl.old
mv /usr/include/openssl /usr/include/openssl.old

用新的文件替换旧的文件,执行命令如下:

ln -sf /usr/local/openssl/bin/openssl /usr/bin/openssl
ln -s /usr/local/openssl/include/openssl /usr/include/openssl

openssl version
OpenSSL 1.1.1q  5 Jul 2022
python3+pip 安装
yum install epel-release 
yum install -y python3 python3-pip python3-devel

pip3 --version

pip 9.0.3 from /usr/lib/python3.6/site-packages (python 3.6)
pip 国内镜像加速
mkdir ~/.pip/
vi ~/.pip/pip.conf
[global]
index-url = http://mirrors.aliyun.com/pypi/simple/
trusted-host = mirrors.aliyun.com         
disable-pip-version-check = true        
timeout = 120
pip升级
pip3 install --upgrade pip --trusted-host mirrors.aliyun.com

pip3 --version
WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.
Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.
To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
pip 21.3.1 from /usr/local/lib/python3.6/site-packages/pip (python 3.6)

docker-compose 安装
pip3 install docker-compose --trusted-host mirrors.aliyun.com

ln -s /usr/local/python3/bin/docker-compose /usr/bin/docker-compose 

docker-compose version
/usr/local/lib/python3.6/site-packages/paramiko/transport.py:32: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography. The next release of cryptography will remove support for Python 3.6.
  from cryptography.hazmat.backends import default_backend
docker-compose version 1.29.2, build unknown
docker-py version: 5.0.3
CPython version: 3.6.8
OpenSSL version: OpenSSL 1.0.2k-fips  26 Jan 2017
docker-compose 卸载

pip uninstall docker-compose

安装错误

setuptools-rust

File "/tmp/pip-build-6plwm63o/bcrypt/setup.py", line 11, in <module>
        from setuptools_rust import RustExtension
    ModuleNotFoundError: No module named 'setuptools_rust'
pip3 install setuptools-rust
The wheel package is not available.
pip3 install wheel

cffi

Could not find a version that satisfies the requirement cffi>=1.4.1 (from versions: )
pip3 install cffi
Successfully installed cffi-1.15.1 pycparser-2.21

Rust

This package requires Rust >=1.48.0.
pip3 install Rust

zlib,bzip

configure: error: zlib development files not found
yum install zlib zlib-devel
yum install bzip2 bzip2-devel bzip2-libs

libjpeg

The headers or library files could not be found for jpeg,
    a required dependency when compiling Pillow from source.

yum install libjpeg zlib libtiff

3.harbor 安装

准备文件

wget https://github.com/goharbor/harbor/releases/download/v2.8.4/harbor-offline-installer-v2.8.4.tgz
tar xf harbor-offline-installer-v2.8.4.tgz
mv harbor /usr/local/
cd /usr/local/
cd harbor
cp harbor.yml.tmpl harbor.yml

设定访问域名,端口,admin密码,db密码,存储目录

vi harbor.yml
hostname: repo.k8s.local
http:
  port: 5100
#https:
  # https port for harbor, default is 443
  #port: 443
  # The path of cert and key files for nginx
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path

harbor_admin_password: Harbor12345

database:
  # The password for the root user of Harbor DB. Change this before any production use.
  password: root123

# The default data volume
data_volume: /data_harbor

初始化

其实这个 prepare 的作用是做初始化,把 harbor.yml 转化为 /usr/local/harbor/docker-compose.yml 等配置文件

./prepare

prepare base dir is set to /usr/local/harbor
Unable to find image 'goharbor/prepare:v2.8.4' locally
v2.8.4: Pulling from goharbor/prepare
b73ab88bdeef: Pull complete 
0a4647ff4f26: Pull complete 
ac87c0a6beec: Pull complete 
58290933e402: Pull complete 
2dd75ae2b8d6: Pull complete 
0432d14b35c2: Pull complete 
1d94d426c05b: Pull complete 
9242105872e7: Pull complete 
7079d6fb028f: Pull complete 
e3e737964616: Pull complete 
Digest: sha256:80837b160b3a876748c4abb68b35389485b4ddcd5de39bb82a7541d3f3051cae
Status: Downloaded newer image for goharbor/prepare:v2.8.4
WARNING:root:WARNING: HTTP protocol is insecure. Harbor will deprecate http protocol in the future. Please make sure to upgrade to https
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir

开始安装

./install.sh
[Step 0]: checking if docker is installed ...

Note: docker version: 24.0.6

[Step 1]: checking docker-compose is installed ...

Note: Docker Compose version v2.21.0

[Step 2]: loading Harbor images ...
a074a02dfff1: Loading layer [==================================================>]  37.79MB/37.79MB
a1845a3d89a2: Loading layer [==================================================>]  9.188MB/9.188MB
3f06bc32288c: Loading layer [==================================================>]  3.584kB/3.584kB
245244bd15d4: Loading layer [==================================================>]   2.56kB/2.56kB
42ca8ea5af72: Loading layer [==================================================>]  47.58MB/47.58MB
8d1a6771e613: Loading layer [==================================================>]  48.37MB/48.37MB
Loaded image: goharbor/harbor-jobservice:v2.8.4
9e404a035c29: Loading layer [==================================================>]  84.62MB/84.62MB
8a45a3e2d467: Loading layer [==================================================>]  3.072kB/3.072kB
50103680c597: Loading layer [==================================================>]   59.9kB/59.9kB
7da34aa8a12d: Loading layer [==================================================>]  61.95kB/61.95kB
Loaded image: goharbor/redis-photon:v2.8.4
5d6d0147b133: Loading layer [==================================================>]  89.19MB/89.19MB
f7f30f0432f2: Loading layer [==================================================>]  3.584kB/3.584kB
b895ffa154de: Loading layer [==================================================>]  3.072kB/3.072kB
9fb8c7a01498: Loading layer [==================================================>]   2.56kB/2.56kB
8a232dc48045: Loading layer [==================================================>]  3.072kB/3.072kB
839e0de14204: Loading layer [==================================================>]  3.584kB/3.584kB
3f683bb644b2: Loading layer [==================================================>]  20.48kB/20.48kB
Loaded image: goharbor/harbor-log:v2.8.4
627fc8f29b12: Loading layer [==================================================>]  115.9MB/115.9MB
b4faf8a74f36: Loading layer [==================================================>]   25.2MB/25.2MB
22c2b4c49c70: Loading layer [==================================================>]   5.12kB/5.12kB
98c144348806: Loading layer [==================================================>]  6.144kB/6.144kB
6f34146f1977: Loading layer [==================================================>]  3.072kB/3.072kB
8dd9b9af7425: Loading layer [==================================================>]  2.048kB/2.048kB
04498149158d: Loading layer [==================================================>]   2.56kB/2.56kB
7600d3f327f6: Loading layer [==================================================>]   2.56kB/2.56kB
e30935897ec8: Loading layer [==================================================>]   2.56kB/2.56kB
b91c1501abe9: Loading layer [==================================================>]  9.728kB/9.728kB
Loaded image: goharbor/harbor-db:v2.8.4
736147cbb70a: Loading layer [==================================================>]  81.13MB/81.13MB
Loaded image: goharbor/nginx-photon:v2.8.4
3ee113d617fa: Loading layer [==================================================>]  72.75MB/72.75MB
8f8c635f3d64: Loading layer [==================================================>]  38.64MB/38.64MB
50ede47ef7b6: Loading layer [==================================================>]  19.94MB/19.94MB
bbe4550fbed9: Loading layer [==================================================>]  65.54kB/65.54kB
6a6c08954476: Loading layer [==================================================>]   2.56kB/2.56kB
4fcee09b3045: Loading layer [==================================================>]  1.536kB/1.536kB
cd9e13a0fadf: Loading layer [==================================================>]  12.29kB/12.29kB
5c4cf244ed4a: Loading layer [==================================================>]  2.123MB/2.123MB
2f207d2f7a63: Loading layer [==================================================>]  419.8kB/419.8kB
Loaded image: goharbor/prepare:v2.8.4
e4e75b52265a: Loading layer [==================================================>]  9.188MB/9.188MB
6ca0b8687881: Loading layer [==================================================>]  3.584kB/3.584kB
2efe438491fa: Loading layer [==================================================>]   2.56kB/2.56kB
6c8c2dc9cf24: Loading layer [==================================================>]  59.31MB/59.31MB
70aa7368b062: Loading layer [==================================================>]  5.632kB/5.632kB
1ad1e6d7b7f2: Loading layer [==================================================>]  116.7kB/116.7kB
fdf3c64c43d4: Loading layer [==================================================>]  44.03kB/44.03kB
af312371ff9e: Loading layer [==================================================>]  60.26MB/60.26MB
2ef0db7a0b49: Loading layer [==================================================>]   2.56kB/2.56kB
Loaded image: goharbor/harbor-core:v2.8.4
0cbd65e4d842: Loading layer [==================================================>]  6.699MB/6.699MB
dfd5a1cf5002: Loading layer [==================================================>]  4.096kB/4.096kB
793940424064: Loading layer [==================================================>]  3.072kB/3.072kB
44888bf86da0: Loading layer [==================================================>]    196MB/196MB
561960448b05: Loading layer [==================================================>]   14.1MB/14.1MB
deb1d83b4cbd: Loading layer [==================================================>]  210.9MB/210.9MB
Loaded image: goharbor/trivy-adapter-photon:v2.8.4
dd52e9dde638: Loading layer [==================================================>]  81.13MB/81.13MB
8cfe6bb78139: Loading layer [==================================================>]    6.1MB/6.1MB
8aebde8774f2: Loading layer [==================================================>]  1.233MB/1.233MB
Loaded image: goharbor/harbor-portal:v2.8.4
e2bb7b6f858e: Loading layer [==================================================>]  6.172MB/6.172MB
b58529f5727f: Loading layer [==================================================>]  4.096kB/4.096kB
15b87640160a: Loading layer [==================================================>]  3.072kB/3.072kB
f8cc13293a41: Loading layer [==================================================>]  17.57MB/17.57MB
51175195e0e4: Loading layer [==================================================>]  18.36MB/18.36MB
Loaded image: goharbor/registry-photon:v2.8.4
8c3c80de8e46: Loading layer [==================================================>]  6.167MB/6.167MB
ba247990a26d: Loading layer [==================================================>]  9.143MB/9.143MB
78c730633955: Loading layer [==================================================>]  15.88MB/15.88MB
901f70ff7f25: Loading layer [==================================================>]  29.29MB/29.29MB
e91438791db8: Loading layer [==================================================>]  22.02kB/22.02kB
eb4f8ee41ee3: Loading layer [==================================================>]  15.88MB/15.88MB
Loaded image: goharbor/notary-server-photon:v2.8.4
d3bc6746e3a0: Loading layer [==================================================>]  6.167MB/6.167MB
e9dc9957d190: Loading layer [==================================================>]  9.143MB/9.143MB
6548f7c0890e: Loading layer [==================================================>]  14.47MB/14.47MB
8ab2eab06c9c: Loading layer [==================================================>]  29.29MB/29.29MB
475002e6da05: Loading layer [==================================================>]  22.02kB/22.02kB
70a417415ad1: Loading layer [==================================================>]  14.47MB/14.47MB
Loaded image: goharbor/notary-signer-photon:v2.8.4
bfe8ceaf89b9: Loading layer [==================================================>]  6.172MB/6.172MB
cf503352618a: Loading layer [==================================================>]  4.096kB/4.096kB
21e09698bb69: Loading layer [==================================================>]  17.57MB/17.57MB
7afc834ab33e: Loading layer [==================================================>]  3.072kB/3.072kB
14752e2b2fbf: Loading layer [==================================================>]  31.13MB/31.13MB
8daa88c089ea: Loading layer [==================================================>]  49.49MB/49.49MB
Loaded image: goharbor/harbor-registryctl:v2.8.4
4b4afa104b42: Loading layer [==================================================>]  9.188MB/9.188MB
4ef2e0c082a7: Loading layer [==================================================>]  26.03MB/26.03MB
8eb9f5ee0436: Loading layer [==================================================>]  4.608kB/4.608kB
d449c6ac0cd4: Loading layer [==================================================>]  26.82MB/26.82MB
Loaded image: goharbor/harbor-exporter:v2.8.4

[Step 3]: preparing environment ...

[Step 4]: preparing harbor configs ...
prepare base dir is set to /root/src/harbor
WARNING:root:WARNING: HTTP protocol is insecure. Harbor will deprecate http protocol in the future. Please make sure to upgrade to https
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir

Note: stopping existing Harbor instance ...

[Step 5]: starting Harbor ...
[+] Running 1/1

--Harbor has been installed and started successfully.----

测试

在外部机器,本机,k8s节点中修改host文件,增加指向
192.168.244.6 repo.k8s.local

vi /etc/hosts

192.168.244.6 repo.k8s.local

在浏览器访问 http://repo.k8s.local:5100/

输入初始密码登录
admin/Harbor12345

忘记密码

docker ps

f11056420a99   goharbor/harbor-core:v2.8.4          "/harbor/entrypoint.…"   3 weeks ago    Up About an hour (healthy)                                                                                        harbor-core  
f69e921a4464   goharbor/harbor-db:v2.8.4            "/docker-entrypoint.…"   3 weeks ago    Up About an hour (healthy)                                                                                        harbor-db                                                                                   

方式一 查看yaml密码

cat /usr/local/harbor/harbor.yml|grep password

harbor_admin_password: Harbor12345
  # The password for the root user of Harbor DB. Change this before any production use.
  password: local_ROOT_!!88

方式二 容器内查看

第一步、 进入容器
docker exec -it "<容器ID>" bash
docker exec -it "f11056420a99" bash

第二步、查看密码
env | grep HARBOR_ADMIN_PASSWORD

HARBOR_ADMIN_PASSWORD=Harbor12345

重置 admin 密码
docker exec -it "<容器ID>" bash
docker exec -it "f69e921a4464" bash

psql -U postgres -d registry
> select * from harbor_user;
> update harbor_user set salt='', password='' where user_id = <admin用户的user_id>;
exit

重新启动Harbor私有镜像仓库后,密码就会自动重置为之前安装时配置的Harbor12345

重新启动Harbor私有镜像仓库

# docker-compose down
#./prepare 
# docker-compose up -d

docker-compose ps
no configuration file provided: not found
最常见的原因是没有在docker-compose.yml文件的路径下执行该命令。

cd /usr/local/harbor/
docker-compose ps

web登录时会一直提示用户名密码错误

如果之前清理过镜像可能把镜像删除了。

docker-compose down
./prepare 
prepare base dir is set to /usr/local/harbor
Unable to find image 'goharbor/prepare:v2.8.4' locally
docker-compose up -d

项目用户角色说明

受限访客:受限访客没有项目的完全读取权限。他们可以拉取镜像但不能推送,而且他们看不到日志或项目的其他成员。例如,你可以为来自不同组织的共享项目访问权限的用户创建受限访客。
访客:访客对指定项目具有只读权限。他们可以拉取和重新标记镜像,但不能推送。
开发者:开发者拥有项目的读写权限。
维护者:维护者拥有超越“开发者”的权限,包括扫描镜像、查看复制作业以及删除镜像和helm charts的能力。
项目管理员:创建新项目时,你将被分配给项目的“ProjectAdmin”角色。“ProjectAdmin”除了读写权限外,还有一些管理权限,如添加和删除成员、启动漏洞扫描等。

新建用户

用户管理/创建用户

k8s_user1
k8s_Uus1

k8s_pull
k8s_Pul1

项目

将项目设为公开,并将用户加入项目
library 公开
项目/成员/+用户
k8s_user1
维护者
k8s_pull
访客

项目/新建
k8s 公开
项目/成员/+用户
k8s_user1
维护者
k8s_pull
访客

项目命名规则
方式一 全地址做对应,前面增加私仓地址
docker.io/library/nginx:1.21.4
docker tag docker.io/library/nginx:1.21.4 repo.k8s.local/docker.io/library/nginx:1.21.4

方式二 忽略域名,目录对应
docker.io/library/nginx:1.21.4
docker tag docker.io/library/nginx:1.21.4 repo.k8s.local/library/nginx:1.21.4

方式三 忽略域名和目录,镜像对应
按属性放入目录,如基础服务放入google_containers,组件放library ,应用放app
repo.k8s.local/library/nginx:1.21.4
docker tag docker.io/library/nginx:1.21.4 repo.k8s.local/library/nginx:1.21.4
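
打好 tag 之后,用前面创建的账号登录私仓再推送即可(下面假设已按后文配置好 https/443 访问):

docker login repo.k8s.local -u k8s_user1 -p k8s_Uus1
docker push repo.k8s.local/library/nginx:1.21.4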

关闭:

cd /usr/local/harbor/

docker-compose down -v   
ERROR: 
        Can't find a suitable configuration file in this directory or any
        parent. Are you in the right directory?

        Supported filenames: docker-compose.yml, docker-compose.yaml, compose.yml, compose.yaml

cd /usr/local/harbor
docker-compose down -v   

开启:

cd /usr/local/harbor/
docker-compose up -d

Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-portal ... done
Creating registry      ... done
Creating registryctl   ... done
Creating harbor-db     ... done
Creating redis         ... done
Creating harbor-core   ... done
Creating nginx             ... done
Creating harbor-jobservice ... done
docker ps
CONTAINER ID   IMAGE                                COMMAND                  CREATED         STATUS                   PORTS                                       NAMES
7ee76b324f96   goharbor/harbor-jobservice:v2.8.4    "/harbor/entrypoint.…"   3 minutes ago   Up 3 minutes (healthy)                                               harbor-jobservice
e1f9af0dfec1   goharbor/nginx-photon:v2.8.4         "nginx -g 'daemon of…"   3 minutes ago   Up 3 minutes (healthy)   0.0.0.0:5100->8080/tcp, :::5100->8080/tcp   nginx
55212a4181c5   goharbor/harbor-core:v2.8.4          "/harbor/entrypoint.…"   3 minutes ago   Up 3 minutes (healthy)                                               harbor-core
bfd166244ad3   goharbor/redis-photon:v2.8.4         "redis-server /etc/r…"   3 minutes ago   Up 3 minutes (healthy)                                               redis
beb5c0c77832   goharbor/harbor-registryctl:v2.8.4   "/home/harbor/start.…"   3 minutes ago   Up 3 minutes (healthy)                                               registryctl
17bd8d7f8a02   goharbor/harbor-db:v2.8.4            "/docker-entrypoint.…"   3 minutes ago   Up 3 minutes (healthy)                                               harbor-db
c7c665923196   goharbor/registry-photon:v2.8.4      "/home/harbor/entryp…"   3 minutes ago   Up 3 minutes (healthy)                                               registry
a08329e11be2   goharbor/harbor-portal:v2.8.4        "nginx -g 'daemon of…"   3 minutes ago   Up 3 minutes (healthy)                                               harbor-portal
d8716d3159a0   goharbor/harbor-log:v2.8.4           "/bin/sh -c /usr/loc…"   3 minutes ago   Up 3 minutes (healthy)   127.0.0.1:1514->10514/tcp                   harbor-log

注册成服务

cat > /lib/systemd/system/harbor.service <<EOF
[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service
Documentation=http://github.com/vmware/harbor

[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStart=/usr/local/bin/docker-compose -f /usr/local/harbor/docker-compose.yml up
ExecStop=/usr/local/bin/docker-compose -f /usr/local/harbor/docker-compose.yml down

[Install]
WantedBy=multi-user.target
EOF

开机启动

systemctl daemon-reload 
systemctl status harbor
systemctl stop harbor
systemctl start harbor
systemctl enable harbor

防火墙

iptables -A INPUT -p tcp -m tcp --dport 5100 -j ACCEPT
iptables-save

配置对Harbor的HTTPS访问

注意一旦启用https 那么http会强跳到https,http不再可用
要配置HTTPS,必须创建SSL证书。您可以使用由受信任的第三方CA签名的证书,也可以使用openssl进行自签名证书。
本节介绍如何使用 OpenSSL 创建 CA,以及如何使用 CA 签署服务器证书和客户端证书。您可以使用其他 CA 工具进行自签名。
在生产环境中,一般是应该从CA获得证书,例如:在阿里云购买域名之后就可以下载相关域名的CA证书了。但是在测试或开发环境中,对于这种自己定义的内网域名,就可以自己生成自己的CA证书。

1. 生成CA证书私钥 ca.key。Generate a CA certificate private key.

openssl genrsa -out ca.key 4096

2. 根据上面生成的CA证书私钥,再来生成CA证书 ca.crt。 Generate the CA certificate.

设置 -subj 选项中的值来反映你的组织信息,例如:省份、地市、域名等等。如果使用 FQDN【(Fully Qualified Domain Name)全限定域名:同时带有主机名和域名的名称】连接 Harbor 主机,则必须将其指定为通用名称(CN)属性。官方示例写的是 yourdomain.com,这里用的是 repo.k8s.local。

openssl req -x509 -new -nodes -sha512 -days 3650 \
 -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=repo.k8s.local" \
 -key ca.key \
 -out ca.crt

 # 参数说明:
-new 指生成证书请求
-x509 表示直接输出证书
-key 指定私钥文件
-days 指定证书过期时间为3650天
-out 指定生成的证书文件
-subj 输入证书拥有者信息
生成服务器证书 Generate a Server Certificate

证书通常包含一个 .crt 文件和一个 .key 文件,例如 yourdomain.com.crt 和 yourdomain.com.key。

在这里,因为我上面设置的服务器域名为 repo.k8s.local,所以将要生成的证书为 repo.k8s.local.crt 和 repo.k8s.local.key。

1.生成服务器证书私钥
openssl genrsa -out harbor.key 4096
2.生成证书签名请求(CSR)

注意:这里实际使用的是 harbor.key 和 harbor.csr(即对应官方文档中的 repo.k8s.local.key 和 repo.k8s.local.csr)

openssl req -sha512 -new \
    -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=repo.k8s.local" \
    -key harbor.key \
    -out harbor.csr
3.生成一个x509 v3扩展文件。Generate an x509 v3 extension file.
cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=repo.k8s.local
DNS.2=repo.k8s
EOF
4. 使用该 v3.ext 文件为您的 Harbor 主机生成证书 repo.k8s.local.crt。
openssl x509 -req -sha512 -days 3650 \
    -extfile v3.ext \
    -CA ca.crt -CAkey ca.key -CAcreateserial \
    -in harbor.csr \
    -out repo.k8s.local.crt

将服务器证书 repo.k8s.local.crt 和密钥 harbor.key 复制到 Harbor 主机上的 certificate 文件夹中。

mkdir /usr/local/harbor/certificate
cp repo.k8s.local.crt /usr/local/harbor/certificate/
cp harbor.key /usr/local/harbor/certificate/

双机时也复制到从机

scp  /usr/local/harbor/certificate/* [email protected]:.

编辑配置,打开https支持

vi  /usr/local/harbor/harbor.yml
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /usr/local/harbor/certificate/repo.k8s.local.crt 
  private_key: /usr/local/harbor/certificate/harbor.key
cd /usr/local/harbor/
./prepare 
#重新生成/usr/local/harbor/docker-compose.yml

systemctl daemon-reload 
systemctl stop harbor
systemctl start harbor
systemctl status harbor

netstat -lntp
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      31217/docker-proxy  
tcp        0      0 127.0.0.1:1514          0.0.0.0:*               LISTEN      30700/docker-proxy  
tcp        0      0 0.0.0.0:5100            0.0.0.0:*               LISTEN      31235/docker-proxy  

防火墙

iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
iptables-save

http://repo.k8s.local:5100

https://repo.k8s.local

在浏览器访问,使用nat 5443指到192.168.244.6:443
https://repo.k8s.local:5443/

docker
将服务器证书 repo.k8s.local.crt 转换为 repo.k8s.local.cert 格式,供 Docker 使用

openssl x509 -inform PEM -in repo.k8s.local.crt -out repo.k8s.local.cert

mkdir -p /etc/docker/certs.d/repo.k8s.local/
cp repo.k8s.local.cert /etc/docker/certs.d/repo.k8s.local/
cp harbor.key /etc/docker/certs.d/repo.k8s.local/repo.k8s.local.key
cp ca.crt /etc/docker/certs.d/repo.k8s.local/

双机时也复制到从机

scp  /etc/docker/certs.d/repo.k8s.local/* [email protected]:.

测试docker登录

docker login repo.k8s.local
Username: k8s_user1
Password: k8s_Uus1

修改http为https
vi /etc/docker/daemon.json

#"insecure-registries":["repo.k8s.local:5100"],
"insecure-registries":["https://repo.k8s.local"],

错误
ctr: failed to resolve reference "xxx.local/library/docker/getting-started
需要拷贝一份上面harbor的ca到系统ca目录并更新
cp ca.crt /usr/local/share/ca-certificates/
/usr/sbin/update-ca-certificates
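
上面的 update-ca-certificates 是 Debian/Ubuntu 上的做法;若节点是 CentOS,可以改用 ca-trust 方式(目录为 CentOS 的默认位置):

cp ca.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust extract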

错误
Error response from daemon: missing client certificate harbor.cert for key harbor.key
注意 harbor.key key文件名和 cert一致

错误

Error response from daemon: Get "https://repo.k8s.local/v2/": Get "https://repo.k8s.local:5433/service/token?account=k8s_user1&client_id=docker&offline_token=true&service=harbor-registry": dial tcp 192.168.244.6:5433: connect: connection refused

harbor 配置文件 /usr/local/harbor/harbor.yml
修正:注释掉 external_url: https://repo.k8s.local:5433 即可

错误
Error response from daemon: Get "https://repo.k8s.local/v2/": tls: failed to verify certificate: x509: certificate signed by unknown authority

错误
Job harbor.service/start failed with result ‘dependency’.

systemctl start harbor
systemctl status harbor

ctr测试推送

ctr -n k8s.io images ls  |grep busybox
repo.k8s.local:5100/google_containers/busybox:9.9                                                                                       application/vnd.docker.distribution.manifest.v2+json      sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee 2.1 MiB   linux/amd64   

ctr -n k8s.io i tag  repo.k8s.local:5100/google_containers/busybox:9.9 repo.k8s.local/google_containers/busybox:9.9

ctr -n k8s.io i push --user k8s_user1:k8s_Uus1 repo.k8s.local/google_containers/busybox:9.9 
ctr: failed to do request: Head "https://repo.k8s.local/v2/google_containers/busybox/blobs/sha256:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824": tls: failed to verify certificate: x509: certificate signed by unknown authority

#解决办法1.指定 -k 参数跳过证书校验。
ctr -n k8s.io i push --user k8s_user1:k8s_Uus1 -k repo.k8s.local/google_containers/busybox:9.9 

# 解决办法2.指定CA证书、Harbor 相关证书文件路径。
#将harbor的ca.crt 复制到当前节点
scp /root/src/harbor/ca.crt [email protected]:.
ctr -n k8s.io i push --user k8s_user1:k8s_Uus1 --tlscacert ca.crt repo.k8s.local/google_containers/busybox:9.9 

准备测试创建pod文件
vi test-harbor.yaml

# test-harbor.yaml
apiVersion: v1
kind: Pod
metadata:
  name: harbor-registry-test
spec:
  containers:
  - name: test
    image: repo.k8s.local/google_containers/busybox:9.9 
    args:
    - sleep
    - "3600"
  imagePullSecrets:
  - name: harbor-auth

创建pod
kubectl apply -f test-harbor.yaml

pod/harbor-registry-test created

查看
kubectl describe pod harbor-registry-test

Name:             harbor-registry-test
Namespace:        default
Priority:         0
Service Account:  default
Node:             node01.k8s.local/192.168.244.5
Start Time:       Mon, 16 Oct 2023 17:32:15 +0800
Labels:           
Annotations:      
Status:           Running
IP:               10.244.1.4
IPs:
  IP:  10.244.1.4
Containers:
  test:
    Container ID:  containerd://5fa473b370c56d71f0466e14960a974ead4559106495a79eb9b2a21a7ecee52f
    Image:         repo.k8s.local/google_containers/busybox:9.9
    Image ID:      repo.k8s.local/google_containers/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee
    Port:          

kubectl create secret docker-registry harbor-auth --docker-server=https://repo.k8s.local --docker-username=k8s_pull --docker-password=k8s_Pul1 --docker-email=[email protected] -n default

k8s节点配置

containerd 配置私有仓库
Containerd 目前没有直接配置镜像加速的功能,但 containerd 中可以修改 docker.io 对应的 endpoint,所以可以通过修改 endpoint 来实现镜像加速下载。因为 endpoint 是轮询访问,所以可以给 docker.io 配置多个仓库地址来实现 加速地址+默认仓库地址
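
例如给 docker.io 同时配置加速地址和官方默认地址(endpoint 按顺序轮询,这里的加速地址仅作示例):

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://registry.cn-hangzhou.aliyuncs.com", "https://registry-1.docker.io"]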

注意:这个配置文件是给 crictl 和 kubelet 使用的;ctr 不使用 CRI,因此它不读取 plugins."io.containerd.grpc.v1.cri" 配置,这些设置对 ctr 不生效。
直接修改 /etc/containerd/config.toml 中 registry 相关配置的方式,在较新版本的 containerd 中已经被废弃,将来肯定会被移除。
以下两种方式取一即可,参考:https://github.com/containerd/cri/blob/master/docs/registry.md
注意 [plugins."io.containerd.grpc.v1.cri".registry.configs."repo.k8s.local".tls]、
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
这类配置和 config_path 会冲突,不能同时使用。

方式一 registry.mirrors
vi /etc/containerd/config.toml

[plugins."io.containerd.grpc.v1.cri".registry.configs]
      [plugins."io.containerd.grpc.v1.cri".registry.configs."repo.k8s.local".tls]
        insecure_skip_verify = true
      [plugins."io.containerd.grpc.v1.cri".registry.configs."repo.k8s.local".auth]
        username = "k8s_pull"
        password = "k8s_Pul1"

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."repo.k8s.local"]
        endpoint = ["http://repo.k8s.local:5100"]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint = ["https://registry.cn-hangzhou.aliyuncs.com"]

保存并关闭配置文件。

方式二 config_path
vi /etc/containerd/config.toml

[plugins."io.containerd.grpc.v1.cri".registry]
   config_path = "/etc/containerd/certs.d"
      [plugins."io.containerd.grpc.v1.cri".registry.configs."repo.k8s.local".auth]
        username = "k8s_pull"
        password = "k8s_Pul1"
#注意:创建的目录要与上面 config_path 指定的路径一致
mkdir -p /etc/containerd/certs.d/docker.io/

cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://registry.cn-hangzhou.aliyuncs.com"]
  capabilities = ["pull", "resolve"]
EOF

http/https 两种配置取一即可

mkdir -p /etc/containerd/certs.d/repo.k8s.local

#harbor私仓使用http方式
cat > /etc/containerd/certs.d/repo.k8s.local/hosts.toml << EOF
server = "http://repo.k8s.local"
[host."http://repo.k8s.local"]
  capabilities = ["pull", "resolve", "push"]
EOF

#harbor私仓使用http 非常规端口方式
cat > /etc/containerd/certs.d/repo.k8s.local:5100/hosts.toml << EOF
server = "http://repo.k8s.local:5100"
[host."http://repo.k8s.local:5100"]
  capabilities = ["pull", "resolve", "push"]
EOF

从harbor复制证书到节点
scp [email protected]:/etc/docker/certs.d/repo.k8s.local/* /etc/containerd/certs.d/repo.k8s.local/

#harbor私仓使用https方式 带证书
cat > /etc/containerd/certs.d/repo.k8s.local/hosts.toml << EOF
server = "https://repo.k8s.local"
[host."https://repo.k8s.local"]
  capabilities = ["pull", "resolve", "push"]
  ca = "/etc/containerd/certs.d/repo.k8s.local/ca.crt"
EOF

#harbor私仓使用https方式 跳过证书校验
cat > /etc/containerd/certs.d/repo.k8s.local/hosts.toml << EOF
server = "https://repo.k8s.local"
[host."https://repo.k8s.local"]
  capabilities = ["pull", "resolve", "push"]
  skip_verify = true
EOF

重新启动containerd服务以使更改生效。可以使用以下命令之一:
使用systemd:

systemctl daemon-reload
systemctl restart containerd
systemctl status containerd

使用Docker Compose(如果您是使用Docker Compose运行的):
docker-compose restart

现在,containerd将使用您配置的私有Harbor源作为容器镜像的默认来源。您可以验证配置是否生效,通过拉取和运行一个位于Harbor Registry上的镜像来测试。

在harbor推送证书到节点
从harbor复制证书到节点
scp [email protected]:/etc/docker/certs.d/repo.k8s.local/* /etc/containerd/certs.d/repo.k8s.local/

节点上测试
ctr -n k8s.io i pull --user k8s_user1:k8s_Uus1 --tlscacert /etc/containerd/certs.d/repo.k8s.local/ca.crt repo.k8s.local/google_containers/busybox:9.9
ctr -n k8s.io images ls

ctr -n k8s.io i pull --user k8s_user1:k8s_Uus1 -k repo.k8s.local/google_containers/busybox:9.9

https 私库

ctr -n k8s.io i push --user k8s_user1:k8s_Uus1 -k repo.k8s.local/google_containers/busybox:9.9

推送到http

ctr -n k8s.io i push --user k8s_user1:k8s_Uus1 --plain-http repo.k8s.local/google_containers/busybox:9.9

ctr image pull --platform linux/amd64 docker.io/library/nginx:1.18.0

jq安装

jq安装
https://pkgs.org/centos-7/epel-x86_64/jq-1.5-1.el7.x86_64.rpm.html

wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -ivh epel-release-latest-7.noarch.rpm
yum install jq
  Installing : oniguruma-6.8.2-2.el7.x86_64                                                                                    1/2 
  Installing : jq-1.6-2.el7.x86_64                                                                                             2/2

卸载

卸载harbor步骤:(针对脚本一键安装)

docker-compose down   #停止 docker-compose 
rm -rf /data_harbor/*    #删除挂载的harbor持久化数据的目录内容
rm -rf /opt/harbor   #删除harbor目录
docker-compose up -d   #重启

配置Harbor的高可用(双组复制)

新建用户
admin_sync/sync_9OK

新建目标(仓库管理-新建):
目标名:s2
url:https://10.100.5.6
用户名:admin
密码:Harbor12345
远程验证:x
连接测试:√

新建规则:
登陆S1的Harbor管理页面,进入kubernetes项目
点击“复制”,新建规则
名称:cp_to_s2
目标:https://10.100.5.6
触发模式:即刻

在S2的Harbor做相同配置

harbor-连接其他仓库报错
pkg/reg/adapter/native/adapter.go:126]: failed to ping registry

docker exec -it -u root $(docker ps | awk '/core/{print $1}') /bin/bash
echo "192.168.244.6    repo.k8s.local">>/etc/hosts
curl http://repo.k8s.local:5100

cat > /etc/hosts <

两个 harbor 配置相同主机名时会同步失败,先将备机的 hostname 配置成 IP
vi /usr/local/harbor/harbor.yml

hostname: 10.100.5.6

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 5100
systemctl stop harbor
cd /usr/local/harbor/
./prepare 
systemctl start harbor

harbor从机需要重新分配用户及项目的权限

各项目添加 k8s_pull 为访客
各项目添加 k8s_user1 为开发者



k8s_安装3_容器

三、容器环境操作

容器选择
安装Docker或Containered

Kubernetes 默认容器运行时(CRI)为 Docker,所以需要先在各个节点中安装 Docker。另可选安装containerd
从kubernetes 1.24开始,dockershim已经从kubelet中移除,但因为历史问题docker却不支持kubernetes主推的CRI(容器运行时接口)标准,所以docker不能再作为kubernetes的容器运行时了,即从kubernetesv1.24开始不再使用docker了。

但是如果想继续使用docker的话,可以在kubelet和docker之间加上一个中间层cri-docker。cri-docker是一个支持CRI标准的shim(垫片)。一头通过CRI跟kubelet交互,另一头跟docker api交互,从而间接的实现了kubernetes以docker作为容器运行时。但是这种架构缺点也很明显,调用链更长,效率更低。
在安装Docker前需要确保操作系统内核版本为 3.10以上,因此需要CentOS7 ,CentOS7内核版本为3.10.

推荐使用containerd作为kubernetes的容器运行。

containerd 配置起来比较麻烦:
拉取镜像时需使用 ctr、crictl 或安装 nerdctl,推送镜像不方便。
下载镜像的时候要增加 --all-platforms 参数,譬如:ctr i pull --all-platforms,否则推送时出错;而加上 --all-platforms 又太费带宽和空间,本来几秒的事,搞了几分钟,600 多 M。
推送镜像的时候要带上用户名密码和 --plain-http,如:ctr i push --plain-http=true -u admin:xxxxxx。不想每次 ctr pull/ctr push 都带 -u user:password,可以使用 nerdctl(示例见下)。
nerdctl 构建镜像的机制和 docker 是完全不同的:
docker 首先会检查本地是否有 Dockerfile 中 FROM 的镜像。如果有,直接使用;没有则通过网络下载镜像;
nerdctl 会根据 Dockerfile FROM 参数指定的镜像域名去网上找这个镜像,找到后确认和本地同名镜像校验无误之后,才会使用本地的镜像构建新镜像。
而且 harbor 要求一定要 https,http 不行。
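
nerdctl 的用法和 docker 基本一致,下面是一个登录并推送到私仓的简单示例(假设已安装 nerdctl,仓库地址沿用本文后面用到的 harbor.k8s.local,用户名密码请替换成实际值):

nerdctl login harbor.k8s.local -u <用户名> -p <密码>
nerdctl -n k8s.io pull docker.io/library/nginx:alpine
nerdctl -n k8s.io tag docker.io/library/nginx:alpine harbor.k8s.local/course/nginx:alpine
nerdctl -n k8s.io push harbor.k8s.local/course/nginx:alpine

若仓库用的是自签名证书,登录和推送时可加 --insecure-registry 参数,或把 CA 加入系统信任。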

方式一 安装docker

docker不支持centos6
推荐在harbor安装docker,k8s安装containered
如果之前安装过低版本docker,卸载

yum remove -y docker \
docker-client \
docker-client-latest \
docker-ce-cli \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine

在master和node上操作

1 切换镜像源

方式一

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

方式二
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
   --add-repo \
   https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's/download.docker.com/mirrors.aliyun.com\/docker-ce/g' /etc/yum.repos.d/docker-ce.repo
yum makecache fast

2 查看当前镜像源中支持的docker版本

yum list docker-ce --showduplicates
yum list docker-ce-cli --showduplicates

3 安装特定版本的docker-ce

kubernetes各版本对应支持的docker版本列表
https://github.com/kubernetes/kubernetes/releases
kubernetes v1.28.2 支持 docker 24.0.5 ,安装docker-ce-24.0.6-1.el7

必须指定 --setopt=obsoletes=0,否则yum会自动安装更高版本

yum install --setopt=obsoletes=0 docker-ce-24.0.6-1.el7 -y

Installing:
 docker-ce                                                            x86_64                                            3:24.0.6-1.el7                                                        docker-ce-stable                                             24 M
Installing for dependencies:
 audit-libs-python                                                    x86_64                                            2.8.5-4.el7                                                           base                                                         76 k
 checkpolicy                                                          x86_64                                            2.5-8.el7                                                             base                                                        295 k
 container-selinux                                                    noarch                                            2:2.119.2-1.911c772.el7_8                                             extras                                                       40 k
 containerd.io                                                        x86_64                                            1.6.24-3.1.el7                                                        docker-ce-stable                                             34 M
 docker-buildx-plugin                                                 x86_64                                            0.11.2-1.el7                                                          docker-ce-stable                                             13 M
 docker-ce-cli                                                        x86_64                                            1:24.0.6-1.el7                                                        docker-ce-stable                                             13 M
 docker-ce-rootless-extras                                            x86_64                                            24.0.6-1.el7                                                          docker-ce-stable                                            9.1 M
 docker-compose-plugin                                                x86_64                                            2.21.0-1.el7                                                          docker-ce-stable                                             13 M
 fuse-overlayfs                                                       x86_64                                            0.7.2-6.el7_8                                                         extras                                                       54 k
 fuse3-libs                                                           x86_64                                            3.6.1-4.el7                                                           extras                                                       82 k
 libcgroup                                                            x86_64                                            0.41-21.el7                                                           base                                                         66 k
 libsemanage-python                                                   x86_64                                            2.5-14.el7                                                            base                                                        113 k
 policycoreutils-python                                               x86_64                                            2.5-34.el7                                                            base                                                        457 k
 python-IPy                                                           noarch                                            0.75-6.el7                                                            base                                                         32 k
 setools-libs                                                         x86_64                                            3.3.8-4.el7                                                           base                                                        620 k
 slirp4netns                                                          x86_64                                            0.4.3-4.el7_8                                                         extras                                                       81 k

Transaction Summary
================================================================================================================================================================================================================================================================

4 添加一个配置文件

Docker在默认情况下使用的Cgroup Driver为cgroupfs,而kubernetes推荐使用systemd来代替cgroupfs;
第二行配置第三方docker仓库

mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF 
{
    "registry-mirrors":["http://hub-mirror.c.163.com"],
    "exec-opts":["native.cgroupdriver=systemd"],
    "data-root": "/var/lib/docker",
    "max-concurrent-downloads": 10,
    "max-concurrent-uploads": 5,
    "log-driver":"json-file",
    "log-opts": {
        "max-size": "300m",
        "max-file": "2"
    },
    "live-restore": true
}
EOF

5 启动docker,并设置为开机自启

systemctl daemon-reload
systemctl restart docker && systemctl enable docker

6 检查docker状态和版本

docker version
Client: Docker Engine - Community
 Version:           24.0.6
 API version:       1.43
 Go version:        go1.20.7
 Git commit:        ed223bc
 Built:             Mon Sep  4 12:35:25 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          24.0.6
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.7
  Git commit:       1a79695
  Built:            Mon Sep  4 12:34:28 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.24
  GitCommit:        61f9fd88f79f081d64d6fa3bb1a0dc71ec870523
 runc:
  Version:          1.1.9
  GitCommit:        v1.1.9-0-gccaecfc
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

systemctl start docker  # 启动docker服务
systemctl stop docker  # 停止docker服务
systemctl restart docker  # 重启docker服务

#测试
sudo docker run hello-world

在所有节点安装cri-docker

yum install -y libcgroup 

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd-0.3.4-3.el8.x86_64.rpm

rpm -ivh cri-dockerd-0.3.4-3.el8.x86_64.rpm

vim /usr/lib/systemd/system/cri-docker.service
----
#修改第10行内容
ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --container-runtime-endpoint fd://
----

systemctl start cri-docker
systemctl enable cri-docker

方式二 安装containerd

containerd 安装

wget https://github.com/containerd/containerd/releases/download/v1.7.11/cri-containerd-cni-1.7.11-linux-amd64.tar.gz
wget https://github.com/containerd/containerd/releases/download/v1.7.1/containerd-1.7.1-linux-amd64.tar.gz
tar xvf containerd-1.7.1-linux-amd64.tar.gz
mv bin/* /usr/local/bin/

tar xzf cri-containerd-cni-1.7.11-linux-amd64.tar.gz

#生成containerd 配置文件
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml

配置 containerd cgroup 驱动程序 systemd(所有节点)
当 systemd 是选定的初始化系统时,要将 systemd 设置为 cgroup 驱动
kubernetes 自 v1.24.0 后,就不再使用 dockershim,改为采用 containerd 作为容器运行时端点

vi /etc/containerd/config.toml
#SystemdCgroup的值改为true
SystemdCgroup = true
#由于国内下载不到registry.k8s.io的镜像,修改sandbox_image的值为:
#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml

grep sandbox_image  /etc/containerd/config.toml
sed -i "s#registry.k8s.io/pause:3.8#registry.aliyuncs.com/google_containers/pause:3.9#g"       /etc/containerd/config.toml
grep sandbox_image  /etc/containerd/config.toml

vi /etc/containerd/config.toml

  address = "/run/containerd/containerd.sock"
    socket_path = "/var/run/nri/nri.sock"

启动containerd服务

mkdir -p /usr/local/lib/systemd/system
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
mv containerd.service /usr/lib/systemd/system/

cat /usr/lib/systemd/system/containerd.service 

cat > /usr/lib/systemd/system/containerd.service << EOF
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
#uncomment to fallback to legacy CRI plugin implementation with podsandbox support.
#Environment="DISABLE_CRI_SANDBOXES=1"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

modprobe overlay

modprobe: FATAL: Module overlay not found.

lsmod | grep overlay
重新编译内核支持overlay

systemctl daemon-reload
systemctl enable --now containerd
systemctl status containerd

验证安装

ctr version
Client:
  Version:  v1.7.1
  Revision: 1677a17964311325ed1c31e2c0a3589ce6d5c30d
  Go version: go1.20.4

Server:
  Version:  v1.7.1
  Revision: 1677a17964311325ed1c31e2c0a3589ce6d5c30d
  UUID: 0a65fe08-25a6-4bda-a66f-c6f52e334e70

安装runc

安装runc

以下步骤所有节点都执行。

准备文件

wget https://github.com//opencontainers/runc/releases/download/v1.1.7/runc.amd64
chmod +x runc.amd64

查找containerd安装时已安装的runc所在的位置,如果不存在runc文件,则直接进行下一步
which runc
/usr/bin/runc
替换上一步的结果文件

cp  runc.amd64 /usr/bin/runc

验证runc安装

runc -v
runc version 1.1.7
commit: v1.1.7-0-g860f061b
spec: 1.0.2-dev
go: go1.20.3
libseccomp: 2.5.4

安装CNI插件

安装CNI插件

下载地址:https://github.com/containernetworking/plugins/releases

wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.3.0.tgz

安装crictl

安装 kubernetes 社区提供的 containerd 客户端工具 crictl
根据 https://www.downloadkubernetes.com/ 确定即将安装的Kubernetes版本, 本次即将安装 Kubernetes v1.28.0。 客户端工具 crictl 的版本号需和即将安装的 Kubernetes 版本号一致。
crictl 命令基本和docker一样的用法
下载地址:https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md

#wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.1/crictl-v1.27.1-linux-amd64.tar.gz
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.28.0/crictl-v1.28.0-linux-amd64.tar.gz
#tar -xf crictl-v1.27.1-linux-amd64.tar.gz -C /usr/local/bin
tar -xf crictl-v1.28.0-linux-amd64.tar.gz -C /usr/local/bin

# 编辑配置文件
cat >  /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 30
debug: false
pull-image-on-create: false
EOF

systemctl daemon-reload && systemctl restart containerd && systemctl status containerd

crictl --version
crictl version v1.28.0

配置Containerd运行时镜像加速器

Containerd通过在启动时指定一个配置文件夹,使后续所有镜像仓库相关的配置都可以在里面热加载,无需重启Containerd。
在/etc/containerd/config.toml配置文件中插入如下config_path:

config_path = "/etc/containerd/certs.d"

说明
config_path 指向的 /etc/containerd/certs.d 并非固定路径,您可以根据实际使用情况进行调整。

若已有plugins."io.containerd.grpc.v1.cri".registry,则在下面添加一行,注意要有Indent。若没有,则可以在任意地方写入。

[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/etc/containerd/certs.d"

之后需要检查配置文件中是否有原有mirror相关的配置,如下:

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://registry-1.docker.io"]

若有原有mirror相关的配置,则需要清理。
执行systemctl restart containerd重启Containerd。
若启动失败,执行journalctl -u containerd检查为何失败,通常是配置文件仍有冲突导致,您可以依据报错做相应调整。
在步骤一中指定的config_path路径中创建docker.io/hosts.toml文件。
在文件中写入如下配置。

mkdir /etc/containerd/certs.d/docker.io -pv
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://xxx.mirror.aliyuncs.com"]
  capabilities = ["pull", "resolve"]
EOF

systemctl restart containerd

ctr和crictl

ctr是由containerd提供的一个客户端工具。
crictl是CRI兼容的容器运行时命令接口,和containerd无关,由kubernetes提供,可以使用它来检查和调试k8s节点上的容器运行时和应用程序。

crictl pull docker.io/library/nginx
crictl images
crictl rmi docker.io/library/nginx

ctr image pull docker.io/library/nginx:alpine
ctr image ls
ctr image check
ctr image tag docker.io/library/nginx:alpine harbor.k8s.local/course/nginx:alpine
ctr container create docker.io/library/nginx:alpine nginx
ctr container ls
ctr container info nginx
ctr container rm nginx
