

k8s Installation 7: Storage

7. Storage

Persistent storage: PV and PVC
Kubernetes supports several storage modes:

  • Local storage: hostPath / emptyDir
  • Network-attached storage: iSCSI / NFS
  • Distributed network storage: GlusterFS / RBD / CephFS, etc.
  • Cloud storage: AWS EBS, etc.
  • Kubernetes resources: ConfigMap, Secret, etc.

emptyDir volumes

A temporary volume whose lifetime is tied to the Pod: when the Pod is deleted, the volume is deleted with it.
Purpose: the data persists for the lifetime of the Pod (deleting a container does not delete the Pod, so the data is unaffected).
Data cannot be shared between different Pods, but containers within the same Pod can share it.

spec:
  containers:
  - name: nginx1
    image: nginx
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  - name: nginx2
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /data/
    command: ['/bin/bash','-c','while true;do echo $(date) >> /data/index.html;sleep 10;done']
  volumes:
  - name: html
    emptyDir: {}

hostPath volumes

Lets multiple Pods on the same node share data.
An optional type field is supported; valid types are File, FileOrCreate, Directory, DirectoryOrCreate, Socket, CharDevice and BlockDevice.

spec:
  containers:
  - name: nginx1
    image: nginx
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html    # name of the volume to use; must match a name under volumes below
      mountPath: /usr/share/nginx/html/   # directory inside the container to mount into
      readOnly: false     # read-write mount; false (read-write) is the default
  volumes: 
  - name: html     # volume name
    hostPath:      
      path: /data/pod/volume   # path of the directory on the host
      type: DirectoryOrCreate  # create the directory on the host if it does not exist

NFS volumes

Let Pods on different nodes share data, but the NFS server is a single point of failure.

spec:
  containers:
  - name: nginx1
    image: nginx
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html    # name of the volume to use; must match a name under volumes below
      mountPath: /usr/share/nginx/html/   # directory inside the container to mount into
      readOnly: false     # read-write mount; false (read-write) is the default
  volumes: 
  - name: html     # volume name
    nfs:      
      path: /nfs/k8s/data/volume   # exported path on the NFS server
      server: 192.168.244.6

NFS

Installing NFS

All nodes in the k8s cluster need the NFS packages installed. For the lab in this chapter we use the cluster's harbor node as the NFS server.

Installation on the NFS server

yum install -y nfs-utils rpcbind

Create the NFS shared directories
mkdir -p /nfs/k8s/{yml,data,cfg,log,web}

#pv
mkdir -p /nfs/k8s/{spv_r1,spv_w1,spv_w2,dpv}  
mkdir -p /nfs/k8s/{spv_001,spv_002,spv_003,dpv}

# set permissions
chmod -R 777 /nfs/k8s
Configure the exports
# edit the exports file
/nfs/k8s          # the directory being exported
*                 # any host may connect; this can also be a subnet, a single IP, or a hostname
rw                # read-write access
sync              # data is written to disk as well as memory before the write is acknowledged
async             # asynchronous mode: data is flushed to disk only periodically, which improves throughput but can lose in-memory data on a crash or power failure
no_root_squash    # the client's root user keeps root privileges on the shared directory
root_squash       # the opposite of no_root_squash: the client's root user is squashed to the anonymous user, whose UID and GID are typically nobody
all_squash        # all client users are mapped to the anonymous user when accessing the share
anonuid           # the squash options above map client users to an anonymous user; anonuid sets that user's UID, matching /etc/passwd on the server, e.g. anonuid=1000
                  # for example, if a client user xiaoming creates a file, the server records its owner as the user whose UID is 1000
anongid           # same as above, but sets the anonymous user's GID

vi /etc/exports
/nfs/k8s/yml *(rw,no_root_squash,sync)  
/nfs/k8s/data *(rw,no_root_squash,sync)  
/nfs/k8s/cfg *(rw,no_root_squash,sync)  
/nfs/k8s/log *(rw,no_root_squash,sync)  
/nfs/k8s/web *(rw,no_root_squash,sync)  
/nfs/k8s/spv_001 *(rw,no_root_squash,sync)  
/nfs/k8s/spv_002 *(rw,no_root_squash,sync)  
/nfs/k8s/spv_003 *(rw,no_root_squash,sync)  
/nfs/k8s/dpv *(rw,no_root_squash,sync)  

# apply the export configuration
exportfs -r
Start the services
# start rpcbind first, then nfs
systemctl enable rpcbind
systemctl start rpcbind
systemctl status rpcbind
systemctl restart rpcbind

systemctl enable nfs
systemctl start nfs
systemctl status nfs
systemctl restart nfs

# check the registered services
rpcinfo -p|grep nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049  nfs_acl

# view the export table
cat /var/lib/nfs/etab 
/nfs/k8s/log    *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)
/nfs/k8s/cfg    *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)
/nfs/k8s/data   *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)
/nfs/k8s/yml    *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)

# list the exports
showmount -e 
Export list for 192.168.244.6:
Export list for repo.k8s.local:
/nfs/k8s/log  *
/nfs/k8s/cfg  *
/nfs/k8s/data *
/nfs/k8s/yml  *
Client-side setup
Install the packages

yum -y install nfs-utils

Check the exports from a client

showmount -e 192.168.244.6
Export list for 192.168.244.6:
/nfs/k8s/log  *
/nfs/k8s/cfg  *
/nfs/k8s/data *
/nfs/k8s/yml  *
/nfs/k8s/web  *

Create mount points on the worker nodes

mkdir -p /data/k8s/{yml,data,cfg,log,web}

systemctl enable rpcbind
systemctl start rpcbind
systemctl status rpcbind

Mount manually
mount 192.168.244.6:/nfs/k8s/yml /data/k8s/yml
mount 192.168.244.6:/nfs/k8s/data /data/k8s/data
mount 192.168.244.6:/nfs/k8s/cfg /data/k8s/cfg
mount 192.168.244.6:/nfs/k8s/log /data/k8s/log

df -Th | grep /data/k8s/
192.168.244.6:/nfs/k8s/data nfs4       26G  8.9G   18G  35% /data/k8s/data
192.168.244.6:/nfs/k8s/cfg  nfs4       26G  8.9G   18G  35% /data/k8s/cfg
192.168.244.6:/nfs/k8s/log  nfs4       26G  8.9G   18G  35% /data/k8s/log
192.168.244.6:/nfs/k8s/yml  nfs4       26G  8.9G   18G  35% /data/k8s/yml

touch /data/k8s/cfg/test

# make the mounts persistent across reboots
# append the following to /etc/fstab
cat >> /etc/fstab <<EOF
192.168.244.6:/nfs/k8s/yml /data/k8s/yml nfs rw,rsize=8192,wsize=8192,soft,intr 0 0
192.168.244.6:/nfs/k8s/data /data/k8s/data nfs rw,rsize=8192,wsize=8192,soft,intr 0 0
192.168.244.6:/nfs/k8s/cfg /data/k8s/cfg nfs rw,rsize=8192,wsize=8192,soft,intr 0 0
192.168.244.6:/nfs/k8s/log /data/k8s/log nfs rw,rsize=8192,wsize=8192,soft,intr 0 0
EOF

Introduction to PV and PVC

A PV (PersistentVolume) is a persistent volume: an abstraction over the underlying shared storage, a piece of storage that the administrator has already provisioned. In a k8s cluster a PV is a resource, just like a Node.
PV types: NFS, iSCSI, CephFS, GlusterFS, HostPath, AzureDisk, and so on.
A PVC (PersistentVolumeClaim) is a claim for persistent storage: a user's declaration of their storage requirements. A PVC is to a PV what a Pod is to a Node: a Pod requests CPU and memory, while a PVC requests a PV's size and access mode.
With a PersistentVolumeClaim, users only have to tell Kubernetes what kind of storage they need; they do not have to care where the space actually comes from or how it is accessed. Those low-level Storage Provider details are left to the administrator, who is the only one who should care about how PersistentVolumes are created.

A PVC and a PV are bound one-to-one.
PVs and StorageClasses are not namespaced; PVCs are. A Pod referencing a PVC is likewise bound by the namespace: only a PVC in the same namespace can be mounted into the Pod.
Although PersistentVolumeClaims let users consume storage as an abstract resource, users often need PersistentVolumes with different properties (such as performance) for different problems. Cluster administrators need a way to offer PersistentVolumes that differ in more than just size and access mode, without exposing users to how those volumes are implemented. For such needs there is the StorageClass resource.
A StorageClass gives administrators a way to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, backup policies, or arbitrary policies determined by the cluster administrators. Kubernetes itself is unopinionated about what the classes represent. This concept is sometimes called "profiles" in other storage systems.
A PVC binds to exactly one PV, and a PV maps to exactly one backend store; a Pod can use multiple PVCs, and a PVC can be shared by multiple Pods (see the sketch below).
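A minimal sketch of the last point: one Pod consuming two PVCs. The claim names web-data and web-logs are hypothetical and would have to exist in the Pod's namespace.

apiVersion: v1
kind: Pod
metadata:
  name: two-pvc-pod
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html/
    - name: logs
      mountPath: /var/log/nginx/
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: web-data      # hypothetical PVC in the same namespace
  - name: logs
    persistentVolumeClaim:
      claimName: web-logs      # hypothetical PVC in the same namespace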

Lifecycle

Provisioning -> Binding -> Using -> Releasing -> Recycling

Provisioning
Static provisioning

The cluster administrator manually creates, in advance, PVs of the various sizes and access modes the applications will need; when a PVC is created, a close match is bound.
When a PVC is created it is matched to the most suitable PV according to the requested attributes (size, read-only, etc.); the match has some randomness. The reclaim policy can be set to wipe the PV's data on release.
For example, a claim for 6G will not match a 5G PV; it will match a PV of at least 6G, such as a 10G one.

How do you bind a PVC to a specific PV? Label the PV and have the PVC match that label, or reference the PV by name, as in the sketch below.
Option 1: set the spec.volumeName field in the PVC's YAML.
Option 2: use spec.selector.matchLabels to match a label on the PV.
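
A minimal sketch of option 1, reusing the nfs-spv001 PV defined later in this chapter; the claim name nfs-pvc-direct is hypothetical:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-direct
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  volumeName: nfs-spv001   # bind directly to this PV instead of matching by label

Option 2 is what the nfs-pvc*.yaml examples later in this chapter use.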

Dynamic provisioning

When none of the statically created PVs match a user's PVC, the cluster may try to provision a volume for the PVC dynamically. Volumes are created on demand and no PV has to be created in advance; this mechanism is based on StorageClasses.
The user creates a PVC describing what is needed, and the provisioner component then creates a PV for it dynamically. The PVC must request a class, and the administrator must already have created and configured that class for dynamic provisioning to happen. A claim that requests the class "" effectively disables dynamic provisioning for itself.

Binding

The user creates a PVC specifying the required resources and access mode. Until a usable PV is found, the PVC stays unbound.

Using

The user consumes the PVC in a Pod just like a volume.

Releasing

The user deletes the PVC to release the storage; the PV moves to the "Released" state. Because it still holds the previous data, that data has to be handled according to the chosen policy before the storage can be used by another PVC.

Recycling

A PV can have one of three reclaim policies: Retain, Recycle, or Delete.

  • Retain: the data is kept for manual handling.
  • Delete: deletes the PV and the associated external storage resource; requires plugin support.
  • Recycle: wipes the volume so that it can be used by a new PVC; requires plugin support.
    A PVC can only bind to a PV in the Available state.
PV phases

Available – the volume is not yet bound to any PVC
Bound – the volume is bound to a PVC
Released – the PVC has been deleted, but the volume has not yet been reclaimed by the cluster
Failed – automatic reclamation of the volume failed

How StorageClass works and the deployment flow

  • 1. Automatically created PVs are placed in the shared directory on the NFS server under names of the form ${namespace}-${pvcName}-${pvName}.

  • 2. When such a PV is reclaimed, the directory remains on the NFS server under the name archived-${namespace}-${pvcName}-${pvName}.
    Once created, a StorageClass cannot be modified; to change it, delete and recreate it.

Field reference:

capacity specifies the PV's size, e.g. 1G.
accessModes specifies the access mode, e.g. ReadWriteOnce. The supported modes are:

    ReadWriteOnce – the PV can be mounted read-write by a single node.
    ReadOnlyMany – the PV can be mounted read-only by many nodes.
    ReadWriteMany – the PV can be mounted read-write by many nodes.

 persistentVolumeReclaimPolicy specifies the reclaim policy, e.g. Recycle. The supported policies are:

    Retain – the administrator reclaims the volume manually.
    Recycle – wipes the data in the PV, equivalent to running rm -rf /thevolume/*. Currently only NFS and hostPath support this.
    Delete – deletes the corresponding storage resource on the Storage Provider; only supported by some cloud storage systems, such as AWS EBS, GCE PD, Azure
    Disk, OpenStack Cinder Volume, etc.

storageClassName sets the PV's class, e.g. nfs. It effectively categorizes the PV; a PVC can specify a class to request a PV of that class.

Creating PVs statically

# Note: names follow DNS naming rules; only lowercase letters, digits, '-' and '.' are allowed
# 1G, read-only on many nodes, data retained
# cat nfs-pv001.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-spv001
  labels:
    pv: nfs-spv001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs/k8s/spv_001
    server: 192.168.244.6

# 2G, read-write on a single node, data recycled
# cat nfs-pv002.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-spv002
  labels:
    pv: nfs-spv002
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfs/k8s/spv_002
    server: 192.168.244.6

# 3G, read-write on many nodes, data retained
# NFS mount options can be set on a PV, but not on a volume defined directly in a Pod
# cat nfs-pv003.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-spv003
  labels:
    pv: nfs-spv003
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  mountOptions:
    - hard
    - intr
    - timeo=60
    - retrans=2
    - noresvport
    - nfsvers=4.1
  nfs:
    path: /nfs/k8s/spv_003
    server: 192.168.244.6
Apply the PVs
kubectl apply -f nfs-pv001.yaml
persistentvolume/nfs-spv001 created
kubectl apply -f nfs-pv002.yaml
persistentvolume/nfs-spv002 created
kubectl apply -f nfs-pv003.yaml
persistentvolume/nfs-spv003 created

kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-spv001    1Gi        ROX            Retain           Available           nfs                     36s
nfs-spv002    2Gi        RWO            Recycle          Available           nfs                     21s
nfs-spv003    3Gi        RWX            Retain           Available           nfs                     18s
Delete a PV

kubectl delete -f nfs-pv001.yaml

Reclaiming a PV
# When a PV with the Recycle policy has its PVC deleted, the PV STATUS becomes Released/Failed; removing the claimRef section returns it to Available
kubectl edit pv nfs-spv002
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: nfs-pvc002
    namespace: default
    resourceVersion: "835243"
    uid: 3bd83223-fd84-4b53-a0db-1a5f62e433fa

kubectl patch pv nfs-spv002 --patch '{"spec": {"claimRef":null}}'

Creating PVCs

### 1G, read-only, targets a specific PV
# cat nfs-pvc1.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc001
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nfs-spv001

# targets a specific PV; the PVC requests less space than the PV, and the bound capacity is that of the matched PV
# cat nfs-pvc2.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc002
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nfs-spv002

# targets a specific PV; the PVC requests less space than the PV
# cat nfs-pvc3.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc003
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nfs-spv003
Apply the PVCs
kubectl apply -f nfs-pvc1.yaml
persistentvolumeclaim/nfs-pvc001 created
kubectl apply -f nfs-pvc2.yaml
persistentvolumeclaim/nfs-pvc002 created
kubectl apply -f nfs-pvc3.yaml
persistentvolumeclaim/nfs-pvc003 created

kubectl get pvc
NAME         STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc001   Bound    nfs-spv001   1Gi        ROX            nfs            4s
nfs-pvc002   Bound    nfs-spv002   2Gi        RWO            nfs            5m3s
nfs-pvc003   Bound    nfs-spv003   3Gi        RWX            nfs            5m1s
Deleting PVCs

kubectl delete -f nfs-pvc1.yaml

If the PV is deleted before its PVC, the PV STATUS shows Terminating while the PVC appears normal; once the PVC is deleted, the PV is removed.
If a PVC is created before any matching PV exists, its STATUS is Pending; once the PV is created, the PVC binds normally.
After deleting a PVC bound to a Recycle PV, the PV cannot be reused until it has been reclaimed.

Expanding a PVC in k8s

Checking the PV's reclaim policy first is extremely important: it must be Retain, not Delete, otherwise the PV is deleted as soon as it is unbound. Then adjust the PV size.

  • 1. Find the PVC bound to the Pod:
    kubectl edit pods -n <namespace> <podName>
    The PVC name is in spec.volumes.persistentVolumeClaim.claimName.

  • 2. Find the PV bound to the PVC:
    kubectl edit pvc -n <namespace> <pvcName>
    The PV name is in spec.volumeName.

  • 3. Check the PV, change its reclaim policy to Retain, change its size, and adjust its labels:
    kubectl edit pv nfs-spv001

Set spec.persistentVolumeReclaimPolicy to Retain.
Set spec.capacity.storage to the new size, e.g. 30Gi.

Or use patch:
kubectl patch pv nfs-spv001 --type merge --patch '{"spec": {"capacity": {"storage": "1.3Gi"},"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv nfs-spv001

Note that statically provisioned PVCs cannot be expanded dynamically.
Dynamically provisioned PVCs can be expanded if the StorageClass has allowVolumeExpansion set; shrinking is not possible, and the workload needs a restart for the new size to take effect.
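
A minimal sketch of expanding a dynamically provisioned claim, assuming a StorageClass with allowVolumeExpansion: true and the test-claim PVC created later in this chapter:

kubectl patch pvc test-claim -p '{"spec":{"resources":{"requests":{"storage":"4Mi"}}}}'
kubectl get pvc test-claim    # CAPACITY shows the new size once the resize completes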

The amount of space a Pod can actually use through a PV has no direct relationship to the PV's declared capacity; it depends on the size of the mounted filesystem. The PV capacity is best understood as a reservation: if the filesystem's available space is smaller than the PV's reserved capacity, PV creation fails and the Pod stays Pending.
Actual usage is bounded only by the mounted filesystem, so with NFS-backed PVs it is possible to use more space than the PV's declared capacity.
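A quick way to observe this, assuming the test-nfs-pod1 Pod created later in this chapter: df inside the Pod reports the size of the underlying NFS share, not the PVC's 2Mi request.

kubectl exec -it test-nfs-pod1 -- df -h /mnt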

Dynamically creating PVs and PVCs on NFS

https://kubernetes.io/docs/concepts/storage/storage-classes/
Because NFS has no built-in dynamic provisioner, we use the nfs-client storage plugin:
https://github.com/kubernetes-retired/external-storage/tree/master/nfs-client/deploy

Setting up StorageClass + NFS involves roughly these steps:

  • 1) Create a working NFS server.
  • 2) Create the ServiceAccount, which governs the permissions the NFS provisioner runs with in the cluster.
  • 3) Create the StorageClass, which PVCs reference; it invokes the NFS provisioner to do the provisioning and binds the PV to the PVC.
  • 4) Create the NFS provisioner, which does two things: it creates mount points (volumes) under the NFS shared directory, and it creates PVs and associates them with those NFS mount points.

nfs-provisioner-rbac.yaml: ClusterRole, Role, and ServiceAccount

wget https://github.com/kubernetes-retired/external-storage/raw/master/nfs-client/deploy/rbac.yaml -O nfs-provisioner-rbac.yaml
If this has been deployed before, change namespace: default first.

# Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed
$ NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
$ NAMESPACE=${NS:-default}
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
$ kubectl create -f deploy/rbac.yaml
cat nfs-provisioner-rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

StorageClass


cat > nfs-StorageClass.yaml << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; must match the PROVISIONER_NAME env var in the provisioner deployment
parameters:
  archiveOnDelete: "false"
  type: nfs
reclaimPolicy: Retain
allowVolumeExpansion: true     # allow volume expansion
mountOptions:
  - hard
  - intr
  - timeo=60
  - retrans=2
  - noresvport
  - nfsvers=4.1
volumeBindingMode: Immediate     # bind PV and PVC immediately
EOF

Make sure reclaimPolicy is set to Retain.
Make sure reclaimPolicy is set to Retain.
Make sure reclaimPolicy is set to Retain.
Important things are said three times.

With dynamic provisioning enabled, as soon as a user deletes a PVC, the PV bound to it is deleted according to its default reclaim policy "Delete". If the PV (i.e. the user data) must be kept, then after the dynamic binding succeeds, change the reclaim policy of the automatically generated PV from "Delete" to "Retain".
kubectl edit pv -n default nfs-spv001

The reclaim policy can also be changed with the following command:
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

Provisioner deployment.yaml

wget https://github.com/kubernetes-retired/external-storage/raw/master/nfs-client/deploy/deployment.yaml -O nfs-provisioner-deployment.yaml
namespace: default   # keep consistent with the namespace in the RBAC file
The parameters to change are the NFS server's IP address and the shared path.

cat nfs-provisioner-deployment.yaml


apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.244.6
            - name: NFS_PATH
              value: /nfs/k8s/dpv
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.244.6
            path: /nfs/k8s/dpv

Prepare the image

cat nfs-provisioner-deployment.yaml  |grep image:|sed -e 's/.*image: //'
quay.io/external_storage/nfs-client-provisioner:latest

docker pull quay.io/external_storage/nfs-client-provisioner:latest
docker tag quay.io/external_storage/nfs-client-provisioner:latest repo.k8s.local/google_containers/nfs-client-provisioner:latest
docker push repo.k8s.local/google_containers/nfs-client-provisioner:latest
docker rmi quay.io/external_storage/nfs-client-provisioner:latest

Replace the image address

cp nfs-provisioner-deployment.yaml nfs-provisioner-deployment.org.yaml
sed -n '/image:/{s/quay.io\/external_storage/repo.k8s.local\/google_containers/p}' nfs-provisioner-deployment.yaml
sed -i '/image:/{s/quay.io\/external_storage/repo.k8s.local\/google_containers/}' nfs-provisioner-deployment.yaml
cat nfs-provisioner-deployment.yaml  |grep image:

Apply the manifests

kubectl apply -f nfs-provisioner-rbac.yaml 
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

#kubectl delete -f nfs-StorageClass.yaml
kubectl apply -f nfs-StorageClass.yaml
storageclass.storage.k8s.io/managed-nfs-storage created

kubectl apply -f nfs-provisioner-deployment.yaml 
deployment.apps/nfs-client-provisioner created

# check the ServiceAccount
kubectl get sa

# check the StorageClass
kubectl get sc

# check the Deployment
kubectl get deploy

# check the Pod
kubectl get pod

[root@master01 k8s]# kubectl get sa
NAME                     SECRETS   AGE
default                  0         8d
nfs-client-provisioner   0         2m42s
[root@master01 k8s]# kubectl get sc
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  60s

Note: if RECLAIMPOLICY is Delete, the data will be deleted.

[root@master01 k8s]# kubectl get deploy
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           30s
[root@master01 k8s]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-54698bfc75-ld8fj   1/1     Running   0          37s

Create a test PVC

# storageClassName must match metadata.name in nfs-StorageClass.yaml
cat > test-pvc.yaml  << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage  
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Mi
EOF

# apply
kubectl apply -f test-pvc.yaml

kubectl get pvc
NAME         STATUS    VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Pending                                          managed-nfs-storage   14m

# the PVC stays in Pending

kubectl describe pvc test-claim
Name:          test-claim
Namespace:     default
StorageClass:  managed-nfs-storage
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: fuseim.pri/ifs
               volume.kubernetes.io/storage-provisioner: fuseim.pri/ifs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                Age                 From                         Message
  ----    ------                ----                ----                         -------
  Normal  ExternalProvisioning  35s (x63 over 15m)  persistentvolume-controller  Waiting for a volume to be created either by the external provisioner 'fuseim.pri/ifs' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-54698bfc75-ld8fj   1/1     Running   0          21m

kubectl logs nfs-client-provisioner-54698bfc75-ld8fj 
E1019 07:43:08.625350       1 controller.go:1004] provision "default/test-claim" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference

This is caused by Kubernetes 1.20 and later deprecating selfLink.
Related issue: https://github.com/kubernetes/kubernetes/pull/94397

Workaround 1 (not applicable to 1.26.6 and later)

Add --feature-gates=RemoveSelfLink=false to the kube-apiserver manifest (next to - --advertise-address=192.168.244.4), then restart the apiserver.

vi /etc/kubernetes/manifests/kube-apiserver.yaml
  - command:
    - kube-apiserver
    - --feature-gates=RemoveSelfLink=false
    - --advertise-address=192.168.244.4
    - --allow-privileged=true

systemctl daemon-reload
systemctl restart kubelet

kubectl get nodes
The connection to the server 192.168.244.4:6443 was refused - did you specify the right host or port?

# preview the lines that would be commented out
sed -n '/insecure-port/s/^/#/gp' /etc/kubernetes/manifests/kube-apiserver.yaml
# comment out the insecure-port flag in place
sed -e '/insecure-port/s/^/#/g' -i /etc/kubernetes/manifests/kube-apiserver.yaml

Workaround 2 (works on 1.20.x and all later versions)

There is no need to set --feature-gates=RemoveSelfLink=false.
Change the image in nfs-provisioner-deployment.yaml to nfs-subdir-external-provisioner:v4.0.2.

Original image:
registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
Mirror for mainland China:
m.daocloud.io/gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.2

Pull and push to the private registry

docker pull m.daocloud.io/gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.2
docker tag m.daocloud.io/gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.2 repo.k8s.local/google_containers/nfs-subdir-external-provisioner:v4.0.2
docker push repo.k8s.local/google_containers/nfs-subdir-external-provisioner:v4.0.2
vi nfs-provisioner-deployment.yaml
image: repo.k8s.local/google_containers/nfs-subdir-external-provisioner:v4.0.2

# re-apply
kubectl apply -f nfs-provisioner-deployment.yaml 

kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-db4f6fb8-gnnbm   1/1     Running   0          16m

kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-16c63bbf-11c4-49aa-8289-7e4c64c78c2b   2Mi        RWX            managed-nfs-storage   99m

kubectl get pvc test-claim -o yaml | grep phase        
  phase: Bound

If phase shows Bound, the PV has been created and bound to the PVC.

kubectl describe pvc test-claim  
Normal  ProvisioningSucceeded  18m                fuseim.pri/ifs_nfs-client-provisioner-db4f6fb8-gnnbm_7f8d1cb0-c840-4198-afb0-f066f0ca86da  Successfully provisioned volume pvc-16c63bbf-11c4-49aa-8289-7e4c64c78c2b

The PV pvc-16c63bbf-11c4-49aa-8289-7e4c64c78c2b has been created.

The automatically created directory can be seen under the NFS directory.
It is named <namespace>-<pvc name>-<pv name>.
The PV name is a random string, so as long as the PVC is not deleted, the binding between Kubernetes and the storage is preserved. Deleting the PVC effectively abandons the bound directory: even if a PVC with the same name is recreated, the new directory name will not match, because a fresh random PV name is generated and the directory name is derived from it. So delete PVCs with care.
ll /nfs/k8s/dpv/default-test-claim-pvc-16c63bbf-11c4-49aa-8289-7e4c64c78c2b/

Installing with Helm

https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=x.x.x.x \
    --set nfs.path=/exported/path

Create a test Pod resource file

Create a Pod resource file for testing, test-nfs-pod.yaml, with the following content:

cat > test-nfs-pod.yaml << EOF
kind: Pod
apiVersion: v1
metadata:
  name: test-nfs-pod
spec:
  containers:
  - name: test-pod
    image: busybox:latest
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"  ## 创建一个名称为"SUCCESS"的文件
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
EOF

kubectl apply -f test-nfs-pod.yaml

ll /nfs/k8s/dpv/default-test-claim-pvc-16c63bbf-11c4-49aa-8289-7e4c64c78c2b/
total 0
-rw-r--r--. 1 root root 0 Oct 19 17:23 SUCCESS

[root@master01 k8s]# kubectl get pv,pvc,pods
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
persistentvolume/nfs-spv001                                 1Gi        ROX            Retain           Bound    default/nfs-pvc001   nfs                            25h
persistentvolume/nfs-spv002                                 2Gi        RWO            Recycle          Bound    default/nfs-pvc002   nfs                            25h
persistentvolume/nfs-spv003                                 3Gi        RWX            Retain           Bound    default/nfs-pvc003   nfs                            25h
persistentvolume/pvc-16c63bbf-11c4-49aa-8289-7e4c64c78c2b   2Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            32m

NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/nfs-pvc001   Bound    nfs-spv001                                 1Gi        ROX            nfs                   25h
persistentvolumeclaim/nfs-pvc002   Bound    nfs-spv002                                 2Gi        RWO            nfs                   25h
persistentvolumeclaim/nfs-pvc003   Bound    nfs-spv003                                 3Gi        RWX            nfs                   25h
persistentvolumeclaim/test-claim   Bound    pvc-16c63bbf-11c4-49aa-8289-7e4c64c78c2b   2Mi        RWX            managed-nfs-storage   116m

NAME                                        READY   STATUS      RESTARTS   AGE
pod/nfs-client-provisioner-db4f6fb8-gnnbm   1/1     Running     0          32m
pod/test-nfs-pod                            0/1     Completed   0          5m48s

The Pod exits as soon as its command finishes, so its STATUS is Completed; the dynamically provisioned PV's reclaim policy is Delete.

Testing the write-size limit

cat > test-nfs-pod1.yaml << EOF
kind: Pod
apiVersion: v1
metadata:
  name: test-nfs-pod1
spec:
  containers:
  - name: test-pod
    image: busybox:latest
    command: [ "sleep", "3600" ]
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
EOF
kubectl apply -f test-pvc.yaml

kubectl apply -f test-nfs-pod1.yaml
kubectl get pods -o wide 
kubectl describe pod test-nfs-pod1
kubectl exec -it test-nfs-pod1 -- /bin/sh

cd /mnt/
Write a file of about 8M:
time dd if=/dev/zero of=./test_w bs=8k count=1000

ll /nfs/k8s/dpv/default-test-claim-pvc-505ec1a4-4b79-4fb4-a1b6-4a26bccdd65a
-rw-r--r--. 1 root root 7.9M Oct 20 13:36 test_w

The PVC requested only 2M (storage: 2Mi), yet 8M was written successfully; clearly the PVC does not limit the space actually used, which is bounded only by the size of the NFS share.

kubectl delete -f test-nfs-pod1.yaml
kubectl delete -f test-pvc.yaml

Create an nginx test

Prepare the image
docker pull docker.io/library/nginx:1.21.4
docker tag docker.io/library/nginx:1.21.4 repo.k8s.local/library/nginx:1.21.4
docker push repo.k8s.local/library/nginx:1.21.4
nginx YAML file
cat > test-nginx.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  labels: {app: nginx}
  name: test-nginx
  namespace: test
spec:
  ports:
  - {name: t9080, nodePort: 30002, port: 30080, protocol: TCP, targetPort: 80}
  selector: {app: nginx}
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  namespace: test
  labels: {app: nginx}
spec:
  replicas: 1
  selector:
    matchLabels: {app: nginx}
  template:
    metadata:
      name: nginx
      labels: {app: nginx}
    spec:
      containers:
      - name: nginx
        #image: docker.io/library/nginx:1.21.4
        image: repo.k8s.local/library/nginx:1.21.4
        volumeMounts:
        - name: volv
          mountPath: /data
      volumes:
      - name: volv
        persistentVolumeClaim:
          claimName: test-pvc2
      nodeSelector:
        ingresstype: ingress-nginx
EOF

Prepare the PVC

Note: the namespace must match the Service's.

cat > test-pvc2.yaml  << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc2
  namespace: test
spec:
  storageClassName: managed-nfs-storage  
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 300Mi
EOF
Create the namespace and apply
kubectl create ns test
kubectl apply -f  test-pvc2.yaml
kubectl apply -f  test-nginx.yaml
kubectl get -f test-nginx.yaml

# change the dynamic PV's reclaim policy to Retain
kubectl edit pv -n default pvc-f9153444-5653-4684-a845-83bb313194d1
persistentVolumeReclaimPolicy: Retain

kubectl get pods -o wide -n test                   
NAME                     READY   STATUS    RESTARTS   AGE    IP       NODE     NOMINATED NODE   READINESS GATES
nginx-5ccbddff9d-k2lgs   0/1     Pending   0          8m6s   <none>   <none>   <none>           <none>

kubectl -n test describe pod nginx-5ccbddff9d-k2lgs 
 Warning  FailedScheduling  53s (x2 over 6m11s)  default-scheduler  0/3 nodes are available: persistentvolumeclaim "test-claim" not found. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..

kubectl get pv -o wide -n test
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE   VOLUMEMODE
pvc-16c63bbf-11c4-49aa-8289-7e4c64c78c2b   2Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            84m   Filesystem
NAME                     READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
nginx-5bc65c5745-rsp6s   0/1     Pending   0          17h   <none>   <none>   <none>           <none>

kubectl -n test describe pod/nginx-5bc65c5745-rsp6s 
Warning  FailedScheduling  4m54s (x206 over 17h)  default-scheduler  0/3 nodes are available: persistentvolumeclaim "test-pvc2" not found. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
# make sure the PVC's namespace matches the Service's

The dynamically provisioned PV still has the Delete reclaim policy and has not been reclaimed yet.
kubectl delete -f test-pvc2.yaml  
kubectl delete -f test-nginx.yaml  

kubectl get pv,pvc -o wide  
kubectl get pvc -o wide  
kubectl get pods -o wide -n test
# exec into the container and modify the content
kubectl exec -it nginx-5c5c944c4f-4v5g7  -n test -- /bin/sh
cat /etc/nginx/conf.d/default.conf

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

  access_log  /var/log/nginx/access.log  main;
echo `hostname` >> /usr/share/nginx/html/index.html
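
To verify from outside the cluster (assuming 192.168.244.4 is one of the node IPs, as used elsewhere in this series), the NodePort defined above should now return the page containing the hostname:

curl http://192.168.244.4:30002/
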
Ingress resources

The namespaces must match.

cat > ingress_svc_test.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc-test
  annotations:
    kubernetes.io/ingress.class: "nginx"
  namespace: test
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test-nginx
            port:
              number: 30080
EOF
# legacy format for older clusters (extensions/v1beta1; removed in k8s 1.22)
cat > ingress_svc_test.yaml  << EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: test
spec:
  backend:
    serviceName: test-nginx
    servicePort: 30080
EOF

kubectl apply -f ingress_svc_test.yaml
kubectl describe ingress ingress-svc-test -n test

cat > ingress_svc_dashboard.yaml  << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  namespace: kube-system
spec:
  rules:
  - http:
      paths:
      - path: /dashboard
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
EOF

View the cluster's Service resources

kubectl get service --all-namespaces

Backing up PV data

First find the PV corresponding to the PVC:

kubectl get pv,pvc
kubectl -n default get pvc nfs-pvc001 -o jsonpath='{.spec.volumeName}'
nfs-spv001

Find the PV's mount directory:

kubectl -n default get pv nfs-spv001
kubectl -n default describe pv nfs-spv001
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.244.6
    Path:      /nfs/k8s/spv_001

Back up the data with rsync:

rsync -avp --delete /nfs/k8s/spv_001 /databak/pvc-data-bak/default/spv_001/

Restore

rsync -avp --delete /databak/pvc-data-bak/default/spv_001/ /nfs/k8s/spv_001/

Batch-reclaiming PVs

kubectl get pv
When a PV with the Recycle policy has its PVC deleted, the PV STATUS becomes Released/Failed; removing the claimRef section returns it to Available.
kubectl edit pv pvc-6f57e98a-dcc8-4d65-89c6-49826b2a3f18
kubectl patch pv pvc-91e17178-5e99-4ef1-90c2-1c9c2ce33af9 --patch '{"spec": {"claimRef":null}}'

kubectl delete pv pvc-e99c588a-8834-45b2-a1a9-85e31dc211ff
1. Export the NFS paths of the abandoned PVs
kubectl get pv \
    -o custom-columns=STATUS:.status.phase,PATH:.spec.nfs.path \
    |grep Released  \
    |awk '{print $2}' \
> nfsdir.txt
2. Clean up the abandoned PVs in k8s

vi k8s_cleanpv.sh

#!/bin/bash
# switch every Released PV's reclaim policy to Delete so the cluster removes it
whiteList=`kubectl get pv |grep  Released |awk '{print $1}'`
echo "${whiteList}" | while read line
do
  kubectl patch pv ${line}  -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
done
3. Clean up the abandoned directories on the NFS server

vi ./k8s_pv_cleaner.sh

#!/bin/bash
# remove every directory listed in the file passed as the first argument
whiteList=`cat $1`
echo "${whiteList}" | while read line
do
  rm -rf  "$line"
done

./k8s_pv_cleaner.sh nfsdir.txt

Errors

no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
From version 1.19 on, the API group and the manifest fields for Ingress changed:
change apiVersion: networking.k8s.io/v1beta1 to apiVersion: networking.k8s.io/v1

Exec into the container and create data

kubectl exec -it pod/test-nfs-pod -- sh
kubectl debug -it pod/test-nfs-pod --image=busybox:1.28 --target=pod_debug

kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh

Delete the test Pod resource

kubectl delete -f test-nfs-pod.yaml 

Delete the test PVC resource

kubectl delete -f test-pvc.yaml

Errors
After the NFS server restarts, Pods with NFS mounts can neither be stopped nor restarted.
When NFS fails, the workload process hangs on reads of the NFS-mounted directory until its threads are exhausted and it stops responding to the k8s health checks. After a while k8s restarts the Pod, but during termination the umount also hangs because NFS is down, leaving the Pod stuck in Terminating.
To work around this:
1. If the Pod is stuck in Terminating, force-delete it: kubectl delete pod foo --grace-period=0 --force
2. Use mount -l | grep nfs to find the mounts and umount -l -f to force-unmount them.
3. Alternatively, create a local directory on the node, mount the NFS export onto that local directory, and expose the local directory to the container via hostPath (see the sketch below). The Pod can then be deleted regardless of the mount parameters, although reads will still hang if a soft mount is not configured.
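
A minimal sketch of approach 3, assuming the /nfs/k8s/web export from earlier in this chapter and a hypothetical node-local mount point /data/nfs-web:

# on each node where the Pod may run: soft-mount the export onto a local directory
mkdir -p /data/nfs-web
mount -t nfs -o soft,timeo=30,retrans=2 192.168.244.6:/nfs/k8s/web /data/nfs-web

The Pod then uses a hostPath volume instead of an nfs volume:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-localnfs
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: web
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: web
    hostPath:
      path: /data/nfs-web   # node-local directory that is itself an NFS mount
      type: Directory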

Change the nodes' default NFS mount to soft mode:
vi /etc/nfsmount.conf

Soft=True

NFS has a hidden risk for high availability: the NFS client runs a kernel-level thread, nfsv4.1-svc, that keeps communicating with the NFS server and cannot be killed (only stopping the client NFS service, disabling it at boot, unmounting NFS, and rebooting the host will stop it). Once the NFS server goes down, or its host is powered off, the client can no longer reach the server and the client host effectively hangs: you cannot ssh to it and commands such as df -h stop responding, which can have a serious impact on users.
