9. ingress-nginx
Deploying the Ingress controller
Deployment approaches
An Ingress controller can be deployed in three ways to receive traffic from outside the cluster.
Method 1: Deployment + NodePort Service
The Ingress controller's Pods are managed by a Deployment, and a Service of type NodePort or LoadBalancer brings in traffic from outside the cluster, so with this approach a dedicated Service must be defined in front of the Ingress controller.
Incoming traffic enters a node through the nodePort, is forwarded by the iptables (Service) rules to the ingress-controller container, and is then routed by the Ingress rules to the backend application containers.
The ingress-controller is deployed with a Deployment as usual and a matching Service is created, but with type NodePort, so the ingress is exposed on a specific port of every cluster node. Because nodePort assigns a random port by default, a load balancer is usually placed in front to forward requests. This approach suits environments where the hosts are relatively fixed and their IP addresses do not change.
Exposing the ingress via NodePort is simple and convenient, but NodePort adds an extra layer of NAT, which can hurt performance when request volume is high.
Traffic is first resolved by DNS, reaches the LB, is load-balanced once by the ingress to a Service, and is then load-balanced again by the Service to the target Pod.
With a fixed nodePort, the LB can point at any nodeIP:nodePort. If traffic lands on a node that does not host an Ingress Controller Pod, that node's kube-proxy forwards it to a controller Pod on another node, adding one extra hop. The source IP in the HTTP request that nginx receives is then the IP of the node that accepted the request, not the real client IP.
Pods created by a Deployment are spread across the nodes, and one node may run several replicas; control this with the replicas count (which should not exceed the node count) plus nodeSelector / podAntiAffinity. A DaemonSet differs in that each node runs at most one replica.
NodePort advantages: a cluster only needs a few controllers, and one group of Services can map to one ingress, so each group maintains the NGINX configuration of its own ingress; the ingresses do not affect one another and each reloads only its own configuration. The drawback is lower efficiency. With the hostNetwork approach, by contrast, you maintain the NGINX configuration on every node.
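If preserving the client source IP matters in NodePort mode, one common mitigation (a sketch, not part of the stock manifest) is setting externalTrafficPolicy: Local on the controller Service, so kube-proxy only accepts traffic on nodes that actually run a controller Pod and skips the extra hop and SNAT:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local   # keep the client source IP; drop traffic on nodes without a controller Pod
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
    nodePort: 30080              # fixed port, as discussed above
```

The front-end LB must then health-check the nodePort, because nodes without a local controller Pod will refuse the traffic instead of forwarding it.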
Method 2: DaemonSet + HostNetwork + nodeSelector
A DaemonSet ensures that all (or a labeled subset of) worker nodes each run exactly one Ingress controller Pod, and those Pods are configured with HostPort or HostNetwork to receive external traffic directly on their node.
An ingress-controller container is created on every such node with its network mode set to hostNetwork. Requests on ports 80/443 go straight into the nginx Pod, and nginx then forwards the traffic to the target web application containers according to the Ingress rules.
A DaemonSet combined with a nodeSelector deploys the ingress-controller onto specific nodes, and HostNetwork connects those Pods directly to the host network, so services are reachable on the host's ports 80/443. The nodes running the ingress-controller then act much like the edge nodes of a traditional architecture, for example the nginx servers at a data-center entrance. This approach has the shortest request path and better performance than NodePort mode. The drawback is that, because the host's network and ports are used directly, each node can run only one ingress-controller Pod. It is well suited to high-concurrency production environments.
No nginx Service is created, which makes this the most efficient option (no NodePort exposure). With NodePort, traffic flows NodeIP -> svc -> ingress-controller (pod), and the extra Service layer lowers efficiency whether it uses iptables or LVS. With hostNetwork the traffic uses the node's host network directly. The one caveat is that a hostNetwork Pod inherits the host's network configuration, including its DNS, so Service lookups would go to the host's upstream public DNS servers instead of the cluster DNS server; setting dnsPolicy: ClusterFirstWithHostNet on the Pod fixes this.
What gets written into the proxy configuration (e.g. nginx.conf) are the addresses of the backend Service's Pods, not the Service address itself, avoiding an extra load-balancing hop at the Service layer.
hostNetwork occupies ports 80 and 443 on the physical machine; those ports are only reachable as nodeIP:80/443 on the nodes the controller is bound to, while unbound nodes can still be reached via nodeIP:nodePort.
This approach may run into problems with inter-node communication and in-cluster DNS resolution.
Multiple ingress deployments can run side by side to separate internal and external access and to split traffic by business, so reloading one does not affect the others.
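Separating internal and external access as described above boils down to a second IngressClass plus a controller started with a matching class pair; the names below are illustrative, not from the stock manifest:

```yaml
# Hypothetical second IngressClass for an internal-only controller
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-internal                        # illustrative name
spec:
  controller: k8s.io/ingress-nginx-internal   # must match the controller's --controller-class
---
# The internal controller's Deployment/DaemonSet then starts with:
#   --ingress-class=nginx-internal
#   --controller-class=k8s.io/ingress-nginx-internal
#   --election-id=ingress-nginx-internal-leader
# and internal Ingress resources select it with spec.ingressClassName: nginx-internal
```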
Method 3: Deployment + LoadBalancer Service
This approach fits best when the ingress runs on a public cloud. Deploy the ingress-controller with a Deployment and create a Service of type LoadBalancer targeting those Pods. Most public clouds automatically provision a load balancer for a LoadBalancer Service, usually with a public address attached. Point the DNS records at that address and the cluster's services are exposed externally.
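A minimal sketch of such a Service (cloud-specific annotations for address type, bandwidth, or reusing an existing LB differ per provider and are omitted):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer          # the cloud provider provisions an external LB for this Service
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
```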
ingress-nginx
The Kubernetes community's ingress-nginx: https://github.com/kubernetes/ingress-nginx
Ingress reference: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
kubectl get cs
Deploy and configure Ingress
ingress-nginx v1.9.3
Supported Kubernetes versions: 1.28, 1.27, 1.26, 1.25
Nginx version 1.21.6
wget https://github.com/kubernetes/ingress-nginx/raw/main/deploy/static/provider/kind/deploy.yaml -O ingress-nginx.yaml
cat ingress-nginx.yaml
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx
namespace: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- coordination.k8s.io
resourceNames:
- ingress-nginx-leader
resources:
- leases
verbs:
- get
- update
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx-admission
namespace: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
- namespaces
verbs:
- list
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx-admission
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx-admission
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx-admission
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: v1
data:
allow-snippet-annotations: "false"
kind: ConfigMap
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx-controller
namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- appProtocol: http
name: http
port: 80
protocol: TCP
targetPort: http
#nodePort: 30080
- appProtocol: https
name: https
port: 443
protocol: TCP
targetPort: https
#nodePort: 30443
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
type: NodePort
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx-controller-admission
namespace: ingress-nginx
spec:
ports:
- appProtocol: https
name: https-webhook
port: 443
targetPort: webhook
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
replicas: 1
minReadySeconds: 0
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
strategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
spec:
dnsPolicy: ClusterFirstWithHostNet #use the cluster DNS even while on the host network
hostNetwork: true #share the host's network
#nodeName: node01.k8s.local #pin to node01.k8s.local only
tolerations: #tolerate the master taint
- key: node-role.kubernetes.io/master
operator: Exists
containers:
- args:
- /nginx-ingress-controller
- --election-id=ingress-nginx-leader
- --controller-class=k8s.io/ingress-nginx
- --ingress-class=nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
- --watch-ingress-without-class=true
- --publish-status-address=localhost
- --logtostderr=false
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
image: repo.k8s.local/registry.k8s.io/ingress-nginx/controller:v1.9.3
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: controller
ports:
- containerPort: 80
hostPort: 80
name: http
protocol: TCP
- containerPort: 443
hostPort: 443
name: https
protocol: TCP
- containerPort: 8443
name: webhook
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
requests:
cpu: 100m
memory: 90Mi
securityContext:
allowPrivilegeEscalation: true
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 101
volumeMounts:
- mountPath: /usr/local/certificates/
name: webhook-cert
readOnly: true
- name: timezone
mountPath: /etc/localtime
- name: vol-ingress-logdir
mountPath: /var/log/nginx
#dnsPolicy: ClusterFirst
nodeSelector:
ingresstype: ingress-nginx
kubernetes.io/os: linux
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 0
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Equal
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
operator: Equal
volumes:
- name: timezone
hostPath:
path: /usr/share/zoneinfo/Asia/Shanghai
- name: vol-ingress-logdir
hostPath:
path: /var/log/nginx
type: DirectoryOrCreate
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx-admission-create
namespace: ingress-nginx
spec:
template:
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx-admission-create
spec:
containers:
- args:
- create
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
- --namespace=$(POD_NAMESPACE)
- --secret-name=ingress-nginx-admission
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: repo.k8s.local/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
imagePullPolicy: IfNotPresent
name: create
securityContext:
allowPrivilegeEscalation: false
nodeSelector:
kubernetes.io/os: linux
restartPolicy: OnFailure
securityContext:
fsGroup: 2000
runAsNonRoot: true
runAsUser: 2000
serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx-admission-patch
namespace: ingress-nginx
spec:
template:
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx-admission-patch
spec:
containers:
- args:
- patch
- --webhook-name=ingress-nginx-admission
- --namespace=$(POD_NAMESPACE)
- --patch-mutating=false
- --secret-name=ingress-nginx-admission
- --patch-failure-policy=Fail
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: repo.k8s.local/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
imagePullPolicy: IfNotPresent
name: patch
securityContext:
allowPrivilegeEscalation: false
nodeSelector:
kubernetes.io/os: linux
restartPolicy: OnFailure
securityContext:
fsGroup: 2000
runAsNonRoot: true
runAsUser: 2000
serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: nginx
spec:
controller: k8s.io/ingress-nginx
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx-admission
namespace: ingress-nginx
spec:
egress:
- {}
podSelector:
matchLabels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
policyTypes:
- Ingress
- Egress
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.3
name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: ingress-nginx-controller-admission
namespace: ingress-nginx
path: /networking/v1/ingresses
failurePolicy: Fail
matchPolicy: Equivalent
name: validate.nginx.ingress.kubernetes.io
rules:
- apiGroups:
- networking.k8s.io
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- ingresses
sideEffects: None
Extract the image names and import them into Harbor
cat ingress-nginx.yaml |grep image:|sed -e 's/.*image: //'
registry.k8s.io/ingress-nginx/controller:v1.9.3@sha256:8fd21d59428507671ce0fb47f818b1d859c92d2ad07bb7c947268d433030ba98
registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80
registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80
#switch to the harbor host
docker pull registry.aliyuncs.com/google_containers/nginx-ingress-controller:v1.9.3
docker pull registry.aliyuncs.com/google_containers/kube-webhook-certgen:v20231011-8b53cabe0
docker tag registry.aliyuncs.com/google_containers/nginx-ingress-controller:v1.9.3 repo.k8s.local/registry.k8s.io/ingress-nginx/controller:v1.9.3
docker tag registry.aliyuncs.com/google_containers/kube-webhook-certgen:v20231011-8b53cabe0 repo.k8s.local/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
docker push repo.k8s.local/registry.k8s.io/ingress-nginx/controller:v1.9.3
docker push repo.k8s.local/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
docker images |grep ingress
Point the YAML at the private registry
Option 1
Rewrite image: to the private registry address
sed -n "/image:/{s/image: /image: repo.k8s.local\//p}" ingress-nginx.yaml
sed -i "/image:/{s/image: /image: repo.k8s.local\//}" ingress-nginx.yaml
Strip the @sha256 digests
sed -rn "s/(\s*image:.*)@sha256:.*$/\1 /gp" ingress-nginx.yaml
sed -ri "s/(\s*image:.*)@sha256:.*$/\1 /g" ingress-nginx.yaml
Option 2
Combined: rewrite to the private registry and drop the sha256 digest in one pass
sed -rn "s/(\s*image: )(.*)@sha256:.*$/\1 repo.k8s.local\/\2/gp" ingress-nginx.yaml
sed -ri "s/(\s*image: )(.*)@sha256:.*$/\1 repo.k8s.local\/\2/g" ingress-nginx.yaml
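Before running sed -i, the Option 2 expression can be dry-run on a single sample line:

```shell
# Dry-run the rewrite on one image line before editing ingress-nginx.yaml in place
line='image: registry.k8s.io/ingress-nginx/controller:v1.9.3@sha256:8fd21d59428507671ce0fb47f818b1d859c92d2ad07bb7c947268d433030ba98'
echo "$line" | sed -r 's/(\s*image: )(.*)@sha256:.*$/\1repo.k8s.local\/\2/'
# -> image: repo.k8s.local/registry.k8s.io/ingress-nginx/controller:v1.9.3
```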
Option 3
Edit the file by hand
vi ingress-nginx.yaml
cat ingress-nginx.yaml |grep image:
image: repo.k8s.local/registry.k8s.io/ingress-nginx/controller:v1.9.3
image: repo.k8s.local/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
image: repo.k8s.local/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
Deployment with nodeName
Usually for debugging: the Pod is pinned to the named node and is not rescheduled automatically.
vi ingress-nginx.yaml
kind: Deployment
dnsPolicy: ClusterFirstWithHostNet #use the cluster DNS even while on the host network
hostNetwork: true #share the host's network; nginx listens on host ports 80/443 directly, no intermediate hop, faster
nodeName: node01.k8s.local #run only on node01.k8s.local
Deployment + nodeSelector
Pods are scheduled onto nodes that carry a given label, which you add with kubectl label; for an ingress, though, a DaemonSet is the better fit.
vi ingress-nginx.yaml
kind: Deployment
spec:
replicas: 1 #replica count
dnsPolicy: ClusterFirstWithHostNet #use the cluster DNS even while on the host network
hostNetwork: true #share the host's network; nginx listens on host ports 80/443 directly, no intermediate hop, faster
#nodeName: node01.k8s.local #pin to node01.k8s.local only
tolerations: #tolerate the master taint
- key: node-role.kubernetes.io/master
operator: Exists
nodeSelector:
ingresstype: ingress-nginx
kubernetes.io/os: linux
kind: Service
ports:
- name: http
nodePort: 30080 #fixed port
- name: https
nodePort: 30443 #fixed port
DaemonSet approach
One Pod is scheduled onto every node that carries the label.
The target nodes are selected in the spec.
Change kind: Deployment to DaemonSet
and at the same time comment out:
#replicas: 1
#strategy:
# rollingUpdate:
# maxUnavailable: 1
# type: RollingUpdate
Keep the DaemonSet's nodeSelector at ingresstype=ingress-nginx. Then adding or removing that label on a node is all it takes to grow or shrink the set of ingress-controller instances.
vi ingress-nginx.yaml
kind: DaemonSet
dnsPolicy: ClusterFirstWithHostNet #use the cluster DNS even while on the host network
hostNetwork: true #share the host's network; nginx listens on host ports 80/443 directly, no intermediate hop, faster; netstat shows the ports on the host
#nodeName: node01.k8s.local #pin to node01.k8s.local only
tolerations: #tolerate the master taint so Pods may also land on masters
- key: node-role.kubernetes.io/master
operator: Exists
nodeSelector:
ingresstype: ingress-nginx
kubernetes.io/os: linux
nodeSelector is only a simple scheduling mechanism; more advanced placement can be expressed with node affinity and taints/tolerations.
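For reference, the same node placement expressed as node affinity (a sketch; required affinity behaves like nodeSelector but also supports operators such as In and NotIn):

```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: ingresstype        # same label used by the nodeSelector above
            operator: In
            values:
            - ingress-nginx
```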
kubectl apply -f ingress-nginx.yaml
kubectl delete -f ingress-nginx.yaml
kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default nfs-client-provisioner-db4f6fb8-gnnbm 1/1 Running 0 24h
ingress-nginx ingress-nginx-admission-create-wxtlz 0/1 Completed 0 103s
ingress-nginx ingress-nginx-admission-patch-8fw72 0/1 Completed 1 103s
ingress-nginx ingress-nginx-controller-57c98745dd-2rn7m 0/1 Pending 0 103s
Inspect the Pod for details
kubectl -n ingress-nginx describe pod ingress-nginx-controller-57c98745dd-2rn7m
didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
The Pod cannot be scheduled onto a suitable node.
Scheduling is affected by: nodeName, nodeSelector, node affinity, pod affinity, taints and tolerations.
No node has been labeled ingresstype=ingress-nginx yet, so the Pod cannot be scheduled.
Once deployment succeeds, the describe output shows which node the Pod landed on:
Service Account: ingress-nginx
Node: node01.k8s.local/192.168.244.5
List the nodes
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01.k8s.local Ready control-plane 9d v1.28.2
node01.k8s.local Ready <none> 9d v1.28.2
node02.k8s.local Ready <none> 2d23h v1.28.2
Check whether the nodes are tainted
kubectl describe nodes | grep Taints
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Taints: <none>
Taints: <none>
Check the node labels
kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master01.k8s.local Ready control-plane 12d v1.28.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01.k8s.local,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node01.k8s.local Ready <none> 12d v1.28.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01.k8s.local,kubernetes.io/os=linux
node02.k8s.local Ready <none> 6d16h v1.28.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02.k8s.local,kubernetes.io/os=linux
kubectl get node -l "ingresstype=ingress-nginx" --show-labels
kubectl get node -l "beta.kubernetes.io/arch=amd64" --show-labels
Check whether the nodes are short of resources
Show the actual remaining CPU and memory on each node:
kubectl describe node |grep -E '((Name|Roles):\s{6,})|(\s+(memory|cpu)\s+[0-9]+\w{0,2}.+%\))'
Deploy the ingress-controller onto the desired nodes:
In DaemonSet mode every node would in principle get an instance, but because the selector filters on the label ingresstype=ingress-nginx, no node has one installed yet;
kubectl get ds -n ingress-nginx
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ingress-nginx-controller 0 0 0 0 0 ingresstype=ingress-nginx,kubernetes.io/os=linux 60s
Show the details
kubectl describe ds -n ingress-nginx
To install the ingress-controller on the desired nodes, label each of them ingresstype=ingress-nginx:
kubectl label node node01.k8s.local ingresstype=ingress-nginx
kubectl label node node02.k8s.local ingresstype=ingress-nginx
kubectl label node master01.k8s.local ingresstype=ingress-nginx
To change an existing label, add --overwrite (this distinguishes the command from adding a new label):
kubectl label node node01.k8s.local ingresstype=ingress-nginx --overwrite
Remove labels from a node
kubectl label node node01.k8s.local node-role-
kubectl label node node01.k8s.local ingress-
kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master01.k8s.local Ready control-plane 13d v1.28.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingresstype=ingress-nginx,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01.k8s.local,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node01.k8s.local Ready <none> 13d v1.28.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingresstype=ingress-nginx,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01.k8s.local,kubernetes.io/os=linux
node02.k8s.local Ready <none> 6d19h v1.28.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingresstype=ingress-nginx,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02.k8s.local,kubernetes.io/os=linux
The DaemonSet now covers 3 nodes
kubectl get ds -n ingress-nginx
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ingress-nginx-controller 3 3 3 3 3 ingresstype=ingress-nginx,kubernetes.io/os=linux 25s
After removing the label, the master node is dropped, leaving 2
kubectl label node master01.k8s.local ingresstype-
kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default nfs-client-provisioner-db4f6fb8-gnnbm 1/1 Running 10 (26h ago) 4d20h
ingress-nginx ingress-nginx-admission-create-zxz7j 0/1 Completed 0 3m35s
ingress-nginx ingress-nginx-admission-patch-xswhk 0/1 Completed 1 3m35s
ingress-nginx ingress-nginx-controller-7j4nz 1/1 Running 0 3m35s
ingress-nginx ingress-nginx-controller-g285w 1/1 Running 0 3m35s
kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.96.116.106 <none> 80:30080/TCP,443:30443/TCP 8m26s
ingress-nginx-controller-admission ClusterIP 10.96.104.116 <none> 443/TCP 8m26s
kubectl -n ingress-nginx describe pod ingress-nginx-controller-7f6c656666-gn4f2
Warning FailedMount 112s kubelet MountVolume.SetUp failed for volume "webhook-cert" : secret "ingress-nginx-admission" not found
When switching from Deployment to DaemonSet, you can edit the YAML and apply it directly if resources allow. If resources are insufficient (here, the host ports), the newly created Pods stay Pending while the old ones keep running, with a message like the following.
Warning FailedScheduling 33s default-scheduler 0/3 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..
After the old Pod is deleted manually, the new Pod starts automatically, but until the old workload is removed it keeps producing new Pending Pods.
kubectl -n ingress-nginx delete pod ingress-nginx-controller-6c95999b7f-njzvr
Create an nginx app for testing
Prepare the image
docker pull docker.io/library/nginx:1.21.4
docker tag docker.io/library/nginx:1.21.4 repo.k8s.local/library/nginx:1.21.4
docker push repo.k8s.local/library/nginx:1.21.4
nginx YAML files
#Deployment + nodeName + hostPath, pinned to node01
cat > test-nginx-hostpath.yaml <<EOF
…
EOF
cat > svc-test-nginx-nodeport.yaml <<EOF
…
EOF
cat > svc-test-nginx-clusterip.yaml <<EOF
…
EOF
Ingress rule: bind the ingress to the service
Pod IPs and cluster IPs are not stable, but the Service name is.
The namespace must match.
Note: before v1.22 the Ingress YAML format differed:
apiVersion: extensions/v1beta1
backend:
serviceName: svc-test-nginx
servicePort: 80
cat > ingress-svc-test-nginx.yaml << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-svc-test-nginx
annotations:
#kubernetes.io/ingress.class: "nginx"
namespace: test
spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /testpath
pathType: Prefix
backend:
service:
name: svc-test-nginx
port:
number: 31080
EOF
On node1, create the local directories; the Pod will be scheduled onto this machine because of spec: nodeName:.
mkdir -p /nginx/{html,logs,conf.d}
#generate a home page
echo $(hostname) > /nginx/html/index.html
echo $(date) >> /nginx/html/index.html
#generate the ingress test page
mkdir /nginx/html/testpath/
echo $(hostname) > /nginx/html/testpath/index.html
kubectl apply -f test-nginx-hostpath.yaml
kubectl delete -f test-nginx-hostpath.yaml
#service: choose one of nodeport / clusterip
kubectl apply -f svc-test-nginx-nodeport.yaml
kubectl delete -f svc-test-nginx-nodeport.yaml
#service clusterip
kubectl apply -f svc-test-nginx-clusterip.yaml
kubectl delete -f svc-test-nginx-clusterip.yaml
kubectl apply -f ingress-svc-test-nginx.yaml
kubectl delete -f ingress-svc-test-nginx.yaml
kubectl describe ingress ingress-svc-test-nginx -n test
kubectl get pods -n test
kubectl describe -n test pod nginx-deploy-5bc84b775f-hnqll
kubectl get svc -A
Note: when the Pod restarts, the config file is rewritten; html/ and logs/ are not overwritten.
ll /nginx/conf.d/
total 4
-rw-r--r-- 1 root root 1072 Oct 26 11:06 default.conf
cat /nginx/conf.d/default.conf
server {
listen 80;
listen [::]:80;
server_name localhost;
#access_log /var/log/nginx/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
Using nodeport
kubectl get service -n test
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc-test-nginx NodePort 10.96.148.126 <none> 31080:30003/TCP 20s
Using clusterip
kubectl get service -n test
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc-test-nginx ClusterIP 10.96.209.131 <none> 31080/TCP 80s
Summary: the ways to reach the web service in the Pod
IP layout
node network: 192.168.244.0/24
pod CIDR: 10.244.0.0/16
service CIDR (cluster): 10.96.0.0/12
With the ingress in HostNetwork mode
From inside or outside the cluster, nodeIP:80 and :443 are reachable on the nodes matched by the ingress
curl http://192.168.244.5:80/
From inside or outside the cluster via the Service nodePort on any nodeIP
the ingress Service's nodeIP:nodePort
in this example 30080 is the ingress nodePort
curl http://192.168.244.4:30080/testpath/
node01.k8s.local
the nginx Service's nodeIP:nodePort
with a NodePort Service, any nodeIP:nodePort reaches nginx in the Pod from inside or outside the cluster
curl http://192.168.244.5:30003
node01.k8s.local
Thu Oct 26 11:11:00 CST 2023
From inside the cluster via the Service clusterIP
the ingress Service's clusterIP:port
curl http://10.96.111.201:80/testpath/
node01.k8s.local
the nginx Service's clusterIP:port
inside the cluster, clusterIP:port (i.e. the Service) reaches nginx; it is only reachable in-cluster, and the clusterIP changes whenever the Service is recreated, so this is mainly for testing
curl http://10.96.148.126:31080
node01.k8s.local
Thu Oct 26 11:11:00 CST 2023
From inside the cluster via the Pod IP
nginx podIP:port
curl http://10.244.1.93:80
From inside a Pod, the Service DNS name can be used
curl http://svc-test-nginx:31080
curl http://svc-test-nginx.test:31080
curl http://svc-test-nginx.test.svc.cluster.local:31080
curl http://10.96.148.126:31080
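The three name forms follow the cluster DNS schema <service>.<namespace>.svc.<cluster-domain>, with cluster.local as the default domain:

```shell
# Assemble the fully-qualified service name from its parts
# (assumes the default cluster domain cluster.local)
svc=svc-test-nginx; ns=test; domain=cluster.local
echo "${svc}.${ns}.svc.${domain}"
# -> svc-test-nginx.test.svc.cluster.local
```

The short forms work because the Pod's /etc/resolv.conf lists the namespace and svc search domains.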
On node1 the access log shows the requests; note the log timestamps are in the wrong timezone
tail -f /nginx/logs/access.log
10.244.0.0 - - [26/Oct/2023:03:11:04 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.5735.289 Safari/537.36" "-"
Timezone issues in Pods
The timezone can be fixed in the YAML by hostPath-mounting the host's timezone file:
volumeMounts:
- name: timezone
mountPath: /etc/localtime
volumes:
- name: timezone
hostPath:
path: /usr/share/zoneinfo/Asia/Shanghai
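An alternative that avoids the hostPath mount is a TZ environment variable on the container; this works for glibc-based images such as this nginx one, though minimal images without tzdata ignore it (a sketch):

```yaml
containers:
- name: nginx
  image: repo.k8s.local/library/nginx:1.21.4
  env:
  - name: TZ                  # honored by glibc; requires tzdata in the image
    value: Asia/Shanghai
```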
kubectl get pods -o wide -n test
Enter the container
kubectl exec -it pod/nginx-deploy-886d78bd5-wlk5l -n test -- /bin/sh
Adding and setting headers in ingress-nginx
ingress-nginx can be configured through snippet annotations, but for safety, snippet annotations are disabled by default.
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#allow-snippet-annotations
A snippet annotation applies only to one specific Ingress resource, so configuring every ingress that way is tedious and inelegant to maintain; the officially supported custom-header (ConfigMap) approach is recommended instead.
https://help.aliyun.com/zh/ack/ack-managed-and-ack-dedicated/user-guide/install-the-nginx-ingress-controller-in-high-load-scenarios
By default, ingress-nginx drops non-standard HTTP headers.
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#proxy-real-ip-cidr
Fix: add to the ConfigMap:
data:
enable-underscores-in-headers: "true"
#The file contains nginx $variables, so edit it with vi rather than a shell heredoc.
#realip settings take effect in the http block; snippets take effect in the server block.
vi ingress-nginx-ConfigMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: "true"
  worker-processes: "auto"              #worker_processes
  server-name-hash-bucket-size: "128"   #server_names_hash_bucket_size
  variables-hash-bucket-size: "256"     #variables_hash_bucket_size
  variables-hash-max-size: "2048"       #variables_hash_max_size
  client-header-buffer-size: "32k"      #client_header_buffer_size
  proxy-body-size: "8m"                 #client_max_body_size
  large-client-header-buffers: "4 512k" #large_client_header_buffers
  client-body-buffer-size: "512k"       #client_body_buffer_size
  proxy-connect-timeout: "5"            #proxy_connect_timeout
  proxy-read-timeout: "60"              #proxy_read_timeout
  proxy-send-timeout: "5"               #proxy_send_timeout
  proxy-buffer-size: "32k"              #proxy_buffer_size
  proxy-buffers-number: "8"             #proxy_buffers count (buffer size comes from proxy_buffer_size)
  keep-alive: "60"                      #keepalive_timeout
  enable-real-ip: "true"
  #use-forwarded-headers: "true"
  forwarded-for-header: "ns_clientip"   #real_ip_header
  compute-full-forwarded-for: "true"
  enable-underscores-in-headers: "true" #underscores_in_headers on
  proxy-real-ip-cidr: 192.168.0.0/16,10.244.0.0/16 #set_real_ip_from
  access-log-path: "/var/log/nginx/access_ext_ingress_$hostname.log"
  error-log-path: "/var/log/nginx/error_ext_ingress.log"
  log-format-escape-json: "true"
  log-format-upstream: '{"timestamp": "$time_iso8601", "req_id": "$req_id",
    "geoip_country_code": "$geoip_country_code", "request_time": "$request_time",
    "ingress":{ "hostname": "$hostname", "addr": "$server_addr", "port": "$server_port","namespace": "$namespace","ingress_name": "$ingress_name","service_name": "$service_name","service_port": "$service_port" },
    "upstream":{ "addr": "$upstream_addr", "name": "$proxy_upstream_name", "response_time": "$upstream_response_time",
    "status": "$upstream_status", "response_length": "$upstream_response_length", "proxy_alternative": "$proxy_alternative_upstream_name"},
    "request":{ "remote_addr": "$remote_addr", "real_ip": "$realip_remote_addr", "remote_port": "$remote_port", "real_port": "$realip_remote_port",
    "remote_user": "$remote_user", "request_method": "$request_method", "hostname": "$host", "request_uri": "$request_uri", "status": $status,
    "body_bytes_sent": "$body_bytes_sent", "request_length": "$request_length", "referer": "$http_referer", "user_agent": "$http_user_agent",
    "x-forward-for": "$proxy_add_x_forwarded_for", "protocol": "$server_protocol"}}'
Create / remove the ConfigMap
kubectl apply -f ingress-nginx-ConfigMap.yaml -n ingress-nginx
Takes effect immediately; no pod restart needed.
kubectl delete -f ingress-nginx-ConfigMap.yaml
kubectl get pods -o wide -n ingress-nginx
ingress-nginx-controller-kr8jd 1/1 Running 6 (7m26s ago) 13m 192.168.244.7 node02.k8s.local
Inspect the ingress-nginx configuration
kubectl describe pod/ingress-nginx-controller-z5b4f -n ingress-nginx
kubectl exec -it pod/ingress-nginx-controller-z5b4f -n ingress-nginx -- /bin/sh
kubectl exec -it pod/ingress-nginx-controller-kr8jd -n ingress-nginx -- head -n200 /etc/nginx/nginx.conf
kubectl exec -it pod/ingress-nginx-controller-kr8jd -n ingress-nginx -- cat /etc/nginx/nginx.conf
kubectl exec -it pod/ingress-nginx-controller-z5b4f -n ingress-nginx -- tail /var/log/nginx/access.log
kubectl exec -it pod/ingress-nginx-controller-kr8jd -n ingress-nginx -- head -n200 /etc/nginx/nginx.conf|grep client_body_buffer_size
Client -> CDN -> WAF -> SLB -> Ingress -> Pod
realip
Method 1: in kind: ConfigMap
enable-real-ip: "true"
#use-forwarded-headers: "true"
forwarded-for-header: "ns_clientip" #real_ip_header
compute-full-forwarded-for: "true"
enable-underscores-in-headers: "true" #underscores_in_headers on
proxy-real-ip-cidr: 192.168.0.0/16,10.244.0.0/16 #set_real_ip_from
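What these settings do can be sketched outside nginx: given trusted proxy CIDRs (set_real_ip_from), the real client IP is found by walking the forwarded header from the right and skipping trusted hops, which is what real_ip_recursive on does. Illustrative Python, not the controller's code:

```python
import ipaddress

# Trusted proxy ranges, mirroring proxy-real-ip-cidr / set_real_ip_from above.
TRUSTED = [ipaddress.ip_network(c) for c in ("192.168.0.0/16", "10.244.0.0/16")]

def _trusted(ip):
    return any(ipaddress.ip_address(ip) in net for net in TRUSTED)

def real_client_ip(remote_addr, header_ips, recursive=True):
    """Walk the forwarded header right-to-left, skipping trusted hops."""
    if not _trusted(remote_addr):
        return remote_addr          # direct peer is not a trusted proxy
    if not recursive:               # real_ip_recursive off: take the last entry
        return header_ips[-1] if header_ips else remote_addr
    for ip in reversed(header_ips):
        if not _trusted(ip):
            return ip               # first untrusted hop from the right
    return remote_addr

# SLB at 192.168.244.5 forwarded for the real client 223.2.2.9:
print(real_client_ip("192.168.244.4", ["223.2.2.9", "192.168.244.5"]))
# 223.2.2.9
```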
Method 2: server-snippet
Enable it in the kind: ConfigMap first:
allow-snippet-annotations: "true"
kubectl edit configmap -n ingress-nginx ingress-nginx-controller
#The server-snippet is attached to the Ingress.
#realip in a server-snippet applies to the whole server block, i.e. every path of the domain.
#The whitelist-source-range allowlist takes effect in location = /showvar and is judged on remote_addr; use allow 223.2.2.0/24; deny all; only when a server-wide allowlist is needed.
test-openresty-ingress-snippet.yaml
With a cat heredoc, $ must be escaped as \$; with vi no escaping is needed.
cat > test-openresty-ingress-snippet.yaml << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc-openresty
  namespace: test
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/server-snippet: |
      underscores_in_headers on;
      set_real_ip_from 10.244.0.0/16;
      set_real_ip_from 192.168.0.0/16;
      real_ip_header ns_clientip;
      #real_ip_recursive on;
      proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
      add_header Access-Control-Allow-Headers \$http_Access_Control_Request_Headers always;
      add_header Access-Control-Allow-Origin \$http_Origin always;
      add_header Access-Control-Allow-Credentials 'false' always;
      add_header Access-Control-Allow-Methods '*' always;
      if (\$request_method = 'OPTIONS') {
        return 204;
      }
    nginx.ingress.kubernetes.io/whitelist-source-range: 127.0.0.1/32,192.168.0.0/16,10.244.0.0/16,223.2.2.0/24
spec:
  rules:
  - http:
      paths:
      - path: /showvar
        pathType: Prefix
        backend:
          service:
            name: svc-openresty
            port:
              number: 31080
EOF
kubectl apply -f test-openresty-ingress-snippet.yaml
-
enable-real-ip:
enable-real-ip: "true"
Turns on realip.
With the defaults this generates: real_ip_header X-Forwarded-For; real_ip_recursive on; set_real_ip_from 0.0.0.0/0;
-
use-forwarded-headers:
use-forwarded-headers: "false" suits an Ingress with no proxy layer in front, e.g. hanging directly off a layer-4 SLB; ingress then rewrites X-Forwarded-For to $remote_addr, which prevents X-Forwarded-For spoofing.
use-forwarded-headers: "true" suits an Ingress with a proxy layer in front; the risk is that X-Forwarded-For can be forged.
With the defaults this generates: real_ip_header X-Forwarded-For; real_ip_recursive on; set_real_ip_from 0.0.0.0/0;
-
enable-underscores-in-headers:
enable-underscores-in-headers: "true"
Whether to allow non-standard underscores in header names; defaults to "false". To accept headers such as X_FORWARDED_FOR, set it to "true".
Equivalent to nginx's underscores_in_headers on;
-
forwarded-for-header
Default X-Forwarded-For. Names the header field that carries the client's original IP address, e.g. a custom header X_FORWARDED_FOR.
forwarded-for-header: "X_FORWARDED_FOR"
Equivalent to nginx's real_ip_header
-
compute-full-forwarded-for
By default the remote address replaces X-Forwarded-For.
With this on, the remote address is appended to the X-Forwarded-For header instead of replacing it.
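The difference can be sketched as follows (hypothetical helper, not ingress-nginx code):

```python
def forward_header(existing_xff, remote_addr, compute_full):
    """X-Forwarded-For value passed upstream (illustrative sketch)."""
    if compute_full and existing_xff:
        # append, like nginx's $proxy_add_x_forwarded_for
        return f"{existing_xff}, {remote_addr}"
    return remote_addr  # default: replace with $remote_addr

print(forward_header("223.2.2.9", "192.168.244.4", compute_full=True))
# 223.2.2.9, 192.168.244.4
```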
- proxy-real-ip-cidr
Usable when use-forwarded-headers or use-proxy-protocol is enabled; defines the addresses of the external load balancer, reverse proxies, CDN, etc. as comma-separated CIDRs. Default: "0.0.0.0/0"
Equivalent to nginx's set_real_ip_from
- external-traffic-policy
Cluster mode (the default): kube-proxy forwards fairly regardless of which node runs the container, doing one SNAT, so the source IP becomes the node's IP address.
Local mode: traffic is only sent to Pods on the local node; kube-proxy preserves the source IP, and latency is better.
Only applies to externally-facing Service types, i.e. LoadBalancer or NodePort; otherwise it is rejected.
With realip enabled, http_x_forwarded_for is replaced by remote_addr.
With compute-full-forwarded-for: "true", remote_addr is appended on the right instead.
Because Local mode never forwards across nodes, balancing load across all nodes' Pods depends on the load balancer in front.
Logging
access-log-path: /var/log/nginx/access.log
/var/log/nginx/access.log -> /dev/stdout (a symlink by default)
error-log-path: /var/log/nginx/error.log
/var/log/nginx/error.log -> /dev/stderr (a symlink by default)
kubectl get ds -A
kubectl get ds -n ingress-nginx ingress-nginx-controller -o=yaml
Export the DaemonSet-deployed ingress-nginx-controller to a standalone yaml, which is easier to edit
kubectl get ds -n ingress-nginx ingress-nginx-controller -o=yaml > ingress-nginx-deployment.yaml
On every node:
The automatically created directory is owned by root:root and ingress cannot write into it, so create and open it up first:
mkdir -p /var/log/nginx/
chmod 777 /var/log/nginx/
Error: exit status 1
nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied)
nginx: the configuration file /tmp/nginx/nginx-cfg1271722019 syntax is ok
2023/11/02 14:05:02 [emerg] 34#34: open() "/var/log/nginx/access.log" failed (13: Permission denied)
nginx: configuration file /tmp/nginx/nginx-cfg1271722019 test failed
In kind: Deployment, disable logtostderr:
- --logtostderr=false
Example:
containers:
- args:
  - /nginx-ingress-controller
  - --election-id=ingress-nginx-leader
  - --controller-class=k8s.io/ingress-nginx
  - --ingress-class=nginx
  - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
  - --validating-webhook=:8443
  - --validating-webhook-certificate=/usr/local/certificates/cert
  - --validating-webhook-key=/usr/local/certificates/key
  - --watch-ingress-without-class=true
  - --publish-status-address=localhost
  - --logtostderr=false
Mount to host directories:
volumeMounts:
- name: timezone
  mountPath: /etc/localtime
- name: vol-ingress-logdir
  mountPath: /var/log/nginx
volumes:
- name: timezone
  hostPath:
    path: /usr/share/zoneinfo/Asia/Shanghai
- name: vol-ingress-logdir
  hostPath:
    path: /var/log/nginx
    type: DirectoryOrCreate
Create / remove ingress-nginx-deployment
kubectl apply -f ingress-nginx-deployment.yaml
kubectl get pods -o wide -n ingress-nginx
Default log format
log_format upstreaminfo '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';
tail -f /var/log/nginx/access.log
3.2.1.5 - - [02/Nov/2023:14:11:26 +0800] "GET /showvar/?2 HTTP/1.1" 200 316 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.5735.289 Safari/537.36" 764 0.000 [test-svc-openresty-31080] [] 10.244.2.46:8089 316 0.000 200 a9051a75e20e164f1838740e12fa95e3
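A sketch of parsing this default upstreaminfo format. The regex is illustrative: on some errors the upstream fields can be "-" or repeated lists, which this simple pattern does not handle. The sample line abbreviates the user agent.

```python
import re

# Field order follows the upstreaminfo log_format shown above.
LOG_RE = re.compile(
    r'^(?P<remote_addr>\S+) - (?P<remote_user>\S+) \[(?P<time_local>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d+) (?P<body_bytes_sent>\d+) '
    r'"(?P<referer>[^"]*)" "(?P<user_agent>[^"]*)" '
    r'(?P<request_length>\d+) (?P<request_time>[\d.]+) '
    r'\[(?P<upstream_name>[^\]]*)\] \[(?P<alt_name>[^\]]*)\] '
    r'(?P<upstream_addr>\S+) (?P<upstream_response_length>\d+) '
    r'(?P<upstream_response_time>[\d.]+) (?P<upstream_status>\d+) (?P<req_id>\S+)$'
)

line = ('3.2.1.5 - - [02/Nov/2023:14:11:26 +0800] "GET /showvar/?2 HTTP/1.1" '
        '200 316 "-" "Mozilla/5.0" 764 0.000 [test-svc-openresty-31080] [] '
        '10.244.2.46:8089 316 0.000 200 a9051a75e20e164f1838740e12fa95e3')

m = LOG_RE.match(line)
print(m.group("upstream_addr"), m.group("status"))
```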
server-snippet: access control and URL redirect (permanent):
Use the nginx.ingress.kubernetes.io/server-snippet annotation to add a location; requests to /sre return a 401 error code.
cat > test-openresty-ingress-snippet.yaml << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc-openresty
  namespace: test
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/server-snippet: |
      underscores_in_headers on;
      set_real_ip_from 10.244.0.0/16;
      set_real_ip_from 192.168.0.0/16;
      real_ip_header ns_clientip;
      #real_ip_recursive on;
      location /sre {
        return 401;
      }
      rewrite ^/baidu.com$ https://www.baidu.com redirect;
    nginx.ingress.kubernetes.io/whitelist-source-range: 127.0.0.1/32,192.168.0.0/16,10.244.0.0/16,223.2.2.0/24
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /showvar
        pathType: Prefix
        backend:
          service:
            name: svc-openresty
            port:
              number: 31080
EOF
kubectl apply -f test-openresty-ingress-snippet.yaml
curl http://192.168.244.7:80/sre/
401 Authorization Required (nginx)
curl http://192.168.244.7:80/baidu.com
302 Found (nginx)
configuration-snippet
Extends configuration into the location block; this example also demonstrates nginx.ingress.kubernetes.io/denylist-source-range.
cat > test-openresty-ingress-snippet.yaml << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc-openresty
  namespace: test
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/server-snippet: |
      underscores_in_headers on;
      set_real_ip_from 10.244.0.0/16;
      set_real_ip_from 192.168.0.0/16;
      real_ip_header ns_clientip;
      real_ip_recursive on;
      location /sre {
        return 401;
      }
      rewrite ^/baidu.com$ https://www.baidu.com redirect;
    nginx.ingress.kubernetes.io/whitelist-source-range: 127.0.0.1/32,192.168.0.0/16,10.244.0.0/16,223.2.2.0/24
    nginx.ingress.kubernetes.io/denylist-source-range: 223.2.3.0/24
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Pass \$proxy_x_pass;
      rewrite ^/v6/(.*)/card/query http://foo.bar.com/v7/#!/card/query permanent;
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /showvar
        pathType: Prefix
        backend:
          service:
            name: svc-openresty
            port:
              number: 31080
EOF
Forward HTTPS to a backend container that itself serves HTTPS
Nginx Ingress Controller forwards requests to backend containers over HTTP by default. When the business container serves HTTPS, add the annotation nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" so that the controller forwards requests to the backend over HTTPS instead.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-https
  annotations:
    #Note: the backend must be declared as an HTTPS service here.
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    -             # your domain
    secretName:   # your TLS secret
  rules:
  - host:         # your domain
    http:
      paths:
      - path: /
        backend:
          service:
            name:     # backend Service name
            port:
              number: # backend Service port
        pathType: ImplementationSpecific
Regular expressions in domain names
In a Kubernetes cluster, the Ingress resource itself does not support regular expressions in hosts, but the nginx.ingress.kubernetes.io/server-alias annotation can be used to achieve this.
Create an Nginx Ingress using the regular expression ~^www.\d+.example.com as an example.
cat <<-EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-regex
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/server-alias: '~^www\.\d+\.example\.com$, abc.example.com'
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          service:
            name: http-svc1
            port:
              number: 80
        pathType: ImplementationSpecific
EOF
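The alias pattern above is an ordinary nginx regex; its matching behavior can be checked locally:

```python
import re

# The same pattern used in the server-alias annotation above.
alias = re.compile(r'^www\.\d+\.example\.com$')

print(bool(alias.match("www.42.example.com")))   # True
print(bool(alias.match("www.abc.example.com")))  # False: \d+ requires digits
```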
Wildcard domain names
In a Kubernetes cluster, the Nginx Ingress resource supports wildcard hosts, e.g. *.ingress-regex.com.
cat <<-EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-regex
  namespace: default
spec:
  rules:
  - host: "*.ingress-regex.com"
    http:
      paths:
      - path: /foo
        backend:
          service:
            name: http-svc1
            port:
              number: 80
        pathType: ImplementationSpecific
EOF
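Note that in the Kubernetes Ingress spec a wildcard host matches exactly one DNS label, so *.ingress-regex.com matches foo.ingress-regex.com but not a.b.ingress-regex.com. A sketch of that single-label rule:

```python
def wildcard_host_matches(pattern, host):
    """Kubernetes Ingress wildcard hosts: '*' matches exactly one DNS label."""
    if not pattern.startswith("*."):
        return pattern == host
    if "." not in host:
        return False
    label, rest = host.split(".", 1)
    return bool(label) and rest == pattern[2:]

print(wildcard_host_matches("*.ingress-regex.com", "foo.ingress-regex.com"))  # True
print(wildcard_host_matches("*.ingress-regex.com", "a.b.ingress-regex.com"))  # False
```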
Canary releases via annotations
Canary releases are enabled by setting the annotation nginx.ingress.kubernetes.io/canary: "true"; different annotations then select the canary strategy:
nginx.ingress.kubernetes.io/canary-weight: the percentage of requests sent to the canary service (an integer from 0 to 100).
nginx.ingress.kubernetes.io/canary-by-header: header-based traffic split. When the configured header's value is always, the request goes to the canary; when it is never, it does not; any other value is ignored and the request falls through to the remaining canary rules by priority.
nginx.ingress.kubernetes.io/canary-by-header-value together with nginx.ingress.kubernetes.io/canary-by-header: when the request's header and value match the configured pair, the request goes to the canary; other values are ignored and fall through to the remaining canary rules by priority.
nginx.ingress.kubernetes.io/canary-by-cookie: cookie-based traffic split. When the configured cookie's value is always, the request goes to the canary; when it is never, it does not.
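The precedence described above (header match first, weight as fallback) can be sketched as follows. This is an illustrative model, not the controller's code; `roll` stands in for the per-request random draw in [0, 100):

```python
def route_to_canary(req_headers, header="ack", header_value="alibaba",
                    weight=20, roll=50):
    """Sketch of canary precedence: header match first, then weight."""
    v = req_headers.get(header)
    if header_value is not None:
        if v == header_value:      # canary-by-header-value match
            return True
    elif v == "always":            # canary-by-header without a value
        return True
    elif v == "never":
        return False
    return roll < weight           # canary-weight fallback

print(route_to_canary({"ack": "alibaba"}))  # True: header value matches
print(route_to_canary({}, roll=10))         # True: 10 < 20 weight
print(route_to_canary({}, roll=90))         # False
```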
Header-based canary with a custom value: requests carrying the header ack: alibaba go to the canary service; for other headers, traffic is split between canary and stable by the canary weight.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"
    nginx.ingress.kubernetes.io/canary-by-header: "ack"
    nginx.ingress.kubernetes.io/canary-by-header-value: "alibaba"
Default backend
nginx.ingress.kubernetes.io/default-backend: <svc name>
Pass the ns_clientip header to the backend
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header ns-clientip $remote_addr;
or alternatively:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Request-Id: $req_id";
Because realip is enabled with forwarded-for-header: "ns_clientip",
the ns_clientip header is no longer passed upstream; here it is passed again explicitly.
Cross-origin (CORS) annotations
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS" #Default: GET, PUT, POST, DELETE, PATCH, OPTIONS
nginx.ingress.kubernetes.io/cors-allow-headers: "Origin,User-Agent,Authorization, Content-Type, If-Match, If-Modified-Since, If-None-Match, If-Unmodified-Since, X-CSRF-TOKEN, X-Requested-With,token" #Default: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization
nginx.ingress.kubernetes.io/cors-expose-headers: "" #Default: empty
nginx.ingress.kubernetes.io/cors-allow-origin: "http://wap.bbs.yingjiesheng.com, https://wap.bbs.yingjiesheng.com" #Default: *
nginx.ingress.kubernetes.io/cors-allow-credentials: "true" #Default: true
nginx.ingress.kubernetes.io/cors-max-age: "1728000" #Default: 1728000
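With a fixed cors-allow-origin list like the one above, the server echoes back only whitelisted origins; anything else gets no CORS headers and the browser blocks the response. A minimal sketch of that decision (illustrative, not the controller's generated config):

```python
# Origins from the cors-allow-origin annotation above.
ALLOWED = {"http://wap.bbs.yingjiesheng.com", "https://wap.bbs.yingjiesheng.com"}

def cors_headers(origin):
    """Response headers for a request carrying the given Origin."""
    if origin not in ALLOWED:
        return {}  # no CORS headers: the browser refuses the response
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": "PUT, GET, POST, OPTIONS",
        "Access-Control-Allow-Credentials": "true",
        "Access-Control-Max-Age": "1728000",
    }

print(bool(cors_headers("https://wap.bbs.yingjiesheng.com")))  # True
print(bool(cors_headers("https://evil.example.com")))          # False
```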
Global snippets can also be set in the ConfigMap (key, type, default):
main-snippet     string ""
http-snippet     string ""
server-snippet   string ""
stream-snippet   string ""
location-snippet string ""
otel-service-name string "nginx"
otel-service-name: "gateway"
Add custom headers
proxy-set-headers
https://kubernetes.github.io/ingress-nginx/examples/customization/custom-headers/
Enable realip
enable-real-ip bool "false"
enable-real-ip: "true"
Trust the forwarded header for realip
use-forwarded-headers bool "false"
use-forwarded-headers: "true"
The header realip reads the client IP from
forwarded-for-header string "X-Forwarded-For"
forwarded-for-header: ns_clientip
Trusted realip CIDRs
proxy-real-ip-cidr
proxy-real-ip-cidr: 192.168.0.0/16,10.244.0.0/16
Append the remote address to the X-Forwarded-For header instead of replacing it
compute-full-forwarded-for bool "false"
compute-full-forwarded-for: "true"
Global IP denylist; takes precedence over annotations and ingress rules
denylist-source-range []string []string{}
denylist-source-range: "223.2.4.0/24"
Global IP allowlist; takes precedence over the denylist. If set, only these IPs can access. Traffic inside k8s does not pass through ingress, so internal IPs need not be listed.
Can be combined with per-server annotations to additionally deny a range: nginx.ingress.kubernetes.io/denylist-source-range: 223.2.2.0/24
whitelist-source-range []string []string{}
whitelist-source-range: "127.0.0.1,192.168.244.1,223.0.0.0/8"
Global IP ban; takes precedence over the server allowlist
block-cidrs []string ""
Ban 223.2.4.0/24; separate multiple entries with commas
block-cidrs: "223.2.4.0/24"
https://nginx.org/en/docs/http/ngx_http_access_module.html#deny
Global User-Agent ban
block-user-agents []string ""
Ban any UA containing "spider", case-insensitively
block-user-agents: "~*spider"
Global referer ban
block-referers []string ""
block-referers: "~*chinahr.com"
IPs allowed to view the status page
nginx-status-ipv4-whitelist []string "127.0.0.1"
http://127.0.0.1/nginx_status/