k8s persistent storage: PVC
Reference (official docs): https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
Force-delete a PVC:
kubectl patch pvc jenkins-k8s-pvc -p '{"metadata":{"finalizers":null}}' -n jenkins-k8s
What is a k8s PV?
A PersistentVolume (PV) is a piece of storage in the cluster, provisioned by an administrator or dynamically provisioned through a StorageClass. It is a cluster resource, just as a Pod is a k8s cluster resource. PVs are volume plugins, like Volumes, but their lifecycle is independent of any individual Pod that uses them.
What is a k8s PVC?
A PersistentVolumeClaim (PVC) is a request for persistent storage; when we create a Pod we can define a volume of this type. The analogy: Pods consume node resources, and PVCs consume PV resources. Just as a Pod can request specific levels of resources (CPU and memory), a PVC can request a specific size and specific access modes when claiming a PV (for example, read-write by one node, or read-only by many).
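As a sketch of that request model, a minimal claim might look like the following (the name here is illustrative; real, applied examples follow later in this article):

```yaml
# Hypothetical minimal claim: asks for at least 1Gi of storage,
# mountable read-write by a single node (ReadWriteOnce)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc            # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```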
How PV and PVC work together
A PV is a resource in the cluster; a PVC is a request for that resource.
The interaction between PVs and PVCs follows this lifecycle:
- (1) Provisioning
PVs can be provisioned in two ways: statically or dynamically.
Static:
- A cluster administrator creates a number of PVs. They carry the details of the real storage available to cluster users, exist in the Kubernetes API, and are ready to be consumed.
Dynamic:
- When none of the administrator's static PVs match a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume specifically for that PVC. This provisioning is based on StorageClasses: the PVC must request a storage class that the administrator has created and configured for dynamic provisioning.
- (2) Binding
A user creates a PVC specifying the required size and access modes. The PVC remains unbound until a matching PV is found.
- (3) Using
- a) Set up a storage server and divide it into multiple storage spaces.
- b) The k8s administrator defines these storage spaces as PVs.
- c) Before a Pod can use a PVC-type volume, the PVC must be created; it declares the PV size and access modes it needs and is matched to a suitable PV.
- d) Once the PVC is created, it can be used as a volume when defining a Pod.
- e) PVC and PV have a one-to-one relationship: once a PV is bound by a PVC, no other PVC can use it.
- f) When creating a PVC, make sure a PV exists that it can bind to; if there is no suitable PV, the PVC stays in Pending state.
- (4) Reclaim policy
[root@master01 ~]# kubectl explain pv.spec.persistentVolumeReclaimPolicy
KIND:     PersistentVolume
VERSION:  v1
FIELD:    persistentVolumeReclaimPolicy <string>
DESCRIPTION:
     What happens to a persistent volume when released from its claim. Valid
     options are Retain (default for manually created PersistentVolumes),
     Delete (default for dynamically provisioned PersistentVolumes), and
     Recycle (deprecated). Recycle must be supported by the volume plugin
     underlying this PersistentVolume. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#reclaiming
When we create a Pod that uses a PVC as a volume, the PVC is bound to a PV. When the PVC is deleted, the binding between PVC and PV is released, and the reclaim policy decides what happens to the data on the formerly bound PV. Currently a volume can be retained, recycled, or deleted:
- Retain: for static provisioning the default is usually fine
- When the PVC is deleted, the PV still exists, in Released state. It cannot be bound by another PVC, but its data is preserved, so the data is still there the next time the volume is used. This is the default reclaim policy.
- Delete
- Deleting the PVC both removes the PV from Kubernetes and deletes the storage asset from the external infrastructure.
- Changing the reclaim policy to Delete (with static NFS PVs this has little practical effect: the data in the export directory remains)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v1
spec:
  persistentVolumeReclaimPolicy: Delete   #the value is case-sensitive: "Delete", not "delete"
  capacity:
    storage: 1Gi                          #storage capacity of the PV
  accessModes: ["ReadWriteOnce"]
  nfs:
    path: /data/volume_test/v1            #turn the NFS export into a PV
    server: 192.168.1.180                 #address of the NFS server
Create a Pod that uses a static PVC as its persistent volume
1. Create the NFS shared directories
[root@master01 volumes]# mkdir /data/volume_test/v{1,2,3,4,5,6,7,8,9,10} -p
#Export the host directories /data/volume_test/v1..v10 over NFS
[root@master01 volumes]# cat /etc/exports
/data/volume_test/v1 192.168.1.0/24(rw,no_root_squash)
/data/volume_test/v2 192.168.1.0/24(rw,no_root_squash)
/data/volume_test/v3 192.168.1.0/24(rw,no_root_squash)
/data/volume_test/v4 192.168.1.0/24(rw,no_root_squash)
/data/volume_test/v5 192.168.1.0/24(rw,no_root_squash)
/data/volume_test/v6 192.168.1.0/24(rw,no_root_squash)
/data/volume_test/v7 192.168.1.0/24(rw,no_root_squash)
/data/volume_test/v8 192.168.1.0/24(rw,no_root_squash)
/data/volume_test/v9 192.168.1.0/24(rw,no_root_squash)
/data/volume_test/v10 192.168.1.0/24(rw,no_root_squash)
#Reload the configuration so it takes effect
[root@master01 volumes]# exportfs -arv
2. Create the PVs
Reference: https://kubernetes.io/zh/docs/concepts/storage/persistent-volumes/#reclaiming
[root@master01 ~]# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v1
spec:
  capacity:
    storage: 1Gi                   #storage capacity of the PV
  accessModes: ["ReadWriteOnce"]
  nfs:
    path: /data/volume_test/v1     #turn the NFS export into a PV
    server: 192.168.1.180          #address of the NFS server
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v2
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteMany"]
  nfs:
    path: /data/volume_test/v2
    server: 192.168.1.180
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v3
spec:
  capacity:
    storage: 3Gi
  accessModes: ["ReadOnlyMany"]
  nfs:
    path: /data/volume_test/v3
    server: 192.168.1.180
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v4
spec:
  capacity:
    storage: 4Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v4
    server: 192.168.1.180
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v5
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v5
    server: 192.168.1.180
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v6
spec:
  capacity:
    storage: 6Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v6
    server: 192.168.1.180
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v7
spec:
  capacity:
    storage: 7Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v7
    server: 192.168.1.180
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v8
spec:
  capacity:
    storage: 8Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v8
    server: 192.168.1.180
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v9
spec:
  capacity:
    storage: 9Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v9
    server: 192.168.1.180
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v10
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v10
    server: 192.168.1.180
- ReadWriteOnce -- the volume can be mounted read-write by a single node;
- ReadOnlyMany -- the volume can be mounted read-only by many nodes;
- ReadWriteMany -- the volume can be mounted read-write by many nodes.
#Apply the manifest (the YAML indentation must be correct)
[root@master01 ~]# kubectl apply -f pv.yaml
persistentvolume/v1 unchanged
persistentvolume/v2 unchanged
persistentvolume/v3 unchanged
persistentvolume/v4 unchanged
persistentvolume/v5 unchanged
persistentvolume/v6 unchanged
persistentvolume/v7 unchanged
persistentvolume/v8 unchanged
persistentvolume/v9 unchanged
persistentvolume/v10 unchanged
#List the PV resources
[root@master01 ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
v1     1Gi        RWO            Retain           Available                                   42m
v10    10Gi       RWO,RWX        Retain           Available                                   67s
v2     2Gi        RWX            Retain           Available                                   42m
v3     3Gi        ROX            Retain           Available                                   42m
v4     4Gi        RWO,RWX        Retain           Available                                   42m
v5     5Gi        RWO,RWX        Retain           Available                                   42m
v6     6Gi        RWO,RWX        Retain           Available                                   42m
v7     7Gi        RWO,RWX        Retain           Available                                   42m
v8     8Gi        RWO,RWX        Retain           Available                                   42m
v9     9Gi        RWO,RWX        Retain           Available                                   42m
#STATUS Available means the PV is free to be claimed
3. Create a PVC and bind it to a matching PV
[root@master01 ~]# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Gi
#Apply the manifest
[root@master01 ~]# kubectl apply -f pvc.yaml
persistentvolumeclaim/my-pvc created
#Check the PVs and the PVC
[root@master01 ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM            STORAGECLASS   REASON   AGE
v1     1Gi        RWO            Retain           Available                                            45m
v10    10Gi       RWO,RWX        Retain           Available                                            4m12s
v2     2Gi        RWX            Retain           Bound       default/my-pvc                           45m
v3     3Gi        ROX            Retain           Available                                            45m
v4     4Gi        RWO,RWX        Retain           Available                                            45m
v5     5Gi        RWO,RWX        Retain           Available                                            45m
v6     6Gi        RWO,RWX        Retain           Available                                            45m
v7     7Gi        RWO,RWX        Retain           Available                                            45m
v8     8Gi        RWO,RWX        Retain           Available                                            45m
v9     9Gi        RWO,RWX        Retain           Available                                            45m
#STATUS Bound means this PV has been bound by my-pvc
[root@master01 ~]# kubectl get pvc
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-pvc   Bound    v2       2Gi        RWX                           66s
The PVC named my-pvc is Bound to the PV v2, and the PVC can use 2Gi of capacity.
4. Create a Pod that mounts the PVC
[root@master01 ~]# kubectl explain deploy.spec.template.spec.volumes.persistentVolumeClaim
KIND:     Deployment
VERSION:  apps/v1
RESOURCE: persistentVolumeClaim <Object>
DESCRIPTION:
     PersistentVolumeClaimVolumeSource represents a reference to a
     PersistentVolumeClaim in the same namespace. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
     PersistentVolumeClaimVolumeSource references the user's PVC in the same
     namespace. This volume finds the bound PV and mounts that volume for the
     pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around
     another type of volume that is owned by someone else (the system).
FIELDS:
   claimName    <string> -required-
     ClaimName is the name of a PersistentVolumeClaim in the same namespace as
     the pod using this volume. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
   readOnly     <boolean>
     Will force the ReadOnly setting in VolumeMounts. Default false.
[root@master01 ~]# cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80      #port the container exposes
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cache-volume     #volume name
      - name: my-tomcat
        image: tomcat:8.5-jre8-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080    #port the container exposes
        volumeMounts:
        - mountPath: /usr/share/tomcat/webapps/ROOT
          name: cache-volume     #volume name
      volumes:
      - name: cache-volume
        persistentVolumeClaim:
          claimName: my-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: web01
  namespace: default
spec:
  ports:
  - name: nginx
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30008
  - name: tomcat
    port: 8080
    protocol: TCP
    targetPort: 8080
    nodePort: 30009
  selector:
    run: my-nginx
  type: NodePort
#Apply the manifest
[root@master01 ~]# kubectl apply -f nginx.yaml
deployment.apps/my-nginx created
service/web01 created
Note: caveats when using PVCs and PVs
1. Every time we create a PVC, a pre-provisioned PV must already exist, which can be inconvenient. Instead, a storage class can create a PV dynamically at PVC-creation time, so the PV does not have to exist in advance.
2. When a PVC is bound to a PV under the default Retain reclaim policy, deleting the PVC leaves the PV in Released state. To reuse that PV, you must delete the PV object manually (kubectl delete pv pv_name). Deleting the PV does not delete the data on it; once the PV is recreated and the PVC is recreated, the PVC binds to the best-matching PV again and the original data is still there, nothing is lost.
[root@master01 ~]# kubectl delete pvc my-pvc
#After the PVC is deleted, the PV is not freed for reuse; before another PVC can use it, the PV must be deleted and recreated
[root@master01 ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
v1 1Gi RWO Retain Available 4h
v10 10Gi RWO,RWX Retain Available 3h19m
v2 2Gi RWX Retain Released default/my-pvc 4h
v3 3Gi ROX Retain Available 4h
v4 4Gi RWO,RWX Retain Available 4h
v5 5Gi RWO,RWX Retain Available 4h
v6 6Gi RWO,RWX Retain Available 4h
v7 7Gi RWO,RWX Retain Available 4h
v8 8Gi RWO,RWX Retain Available 4h
v9 9Gi RWO,RWX Retain Available 4h
k8s storage classes: StorageClass
Summary of the steps:
- 1. Provisioner: deploy an nfs provisioner
- 2. Create a StorageClass that references the provisioner just deployed
- 3. Create a PVC that references the StorageClass
In the PV/PVC model described above, PVs must be created first and each PVC then binds one-to-one with a PV. If there are thousands of PVC requests, thousands of PVs would have to be created, which is a heavy maintenance burden for operators. Kubernetes therefore provides a mechanism for creating PVs automatically, called StorageClass, which acts as a template for PVs. By creating a StorageClass, a cluster administrator enables storage volumes (PVs) to be generated dynamically for k8s PVCs to consume.
Every StorageClass contains the fields provisioner, parameters, and reclaimPolicy.
Concretely, a StorageClass defines two things:
- 1. The attributes of the PV, such as its size and type;
- 2. The storage plugin needed to create such a PV, for example Ceph or NFS.
With these two pieces of information, Kubernetes can take a user's PVC, find the corresponding StorageClass, invoke the storage plugin that the StorageClass declares, and create the required PV.
#Inspect the fields a StorageClass definition accepts
[root@xianchaomaster1 ~]# kubectl explain storageclass
KIND: StorageClass
VERSION: storage.k8s.io/v1
DESCRIPTION:
StorageClass describes the parameters for a class of storage for which
PersistentVolumes can be dynamically provisioned.
StorageClasses are non-namespaced; the name of the storage class according
to etcd is in ObjectMeta.Name.
FIELDS:
allowVolumeExpansion <boolean>
allowedTopologies <[]Object>
apiVersion <string>
kind <string>
metadata <Object>
mountOptions <[]string>
parameters <map[string]string>
provisioner <string> -required-
reclaimPolicy <string>
volumeBindingMode <string>
provisioner:
- A StorageClass needs a provisioner, which determines what kind of storage backend is used to create PVs. Common provisioners are listed at:
https://kubernetes.io/zh/docs/concepts/storage/storage-classes/
- Entries with a check mark are internal provisioners
- Entries without a check mark are external
A provisioner can be supplied internally or externally; for external provisioners, see the methods provided under https://github.com/kubernetes-incubator/external-storage/
https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner
Take NFS as an example: to use NFS we need an nfs-client auto-mounter, called a provisioner, which uses an NFS server we have already configured to create persistent volumes automatically, i.e. it creates PVs on our behalf.
reclaimPolicy: the reclaim policy
allowVolumeExpansion: allow volume expansion
- A PersistentVolume can be configured to be expandable. When this field is set to true on the underlying StorageClass, users can resize a volume by editing the corresponding PVC object, for the volume types that support expansion.
Note: this feature can only grow a volume, never shrink it.
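As a hedged sketch (assuming a class named nfs whose backing driver actually supports resizing; simple NFS subdir provisioners generally do not enforce capacity), enabling expansion on a StorageClass looks like this:

```yaml
# Hypothetical sketch: a StorageClass that permits volume expansion
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs
provisioner: example.com/nfs
allowVolumeExpansion: true
```

With this in place, growing a bound volume is done by editing the PVC's spec.resources.requests.storage to a larger value; a request to shrink it is rejected.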
Install the nfs provisioner that works with the StorageClass to generate PVs dynamically
#Upload nfs-subdir-external-provisioner.tar.gz to node1 and node2 and load it manually
[root@node01 ~]# docker load -i nfs-subdir-external-provisioner.tar.gz
8651333b21e7: Loading layer [==================================================>]  3.031MB/3.031MB
3e8e43177b30: Loading layer [==================================================>]  42.02MB/42.02MB
Loaded image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
1. Create the sa (ServiceAccount) needed to run nfs-provisioner
[root@master01 ~]# cat serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
[root@master01 ~]# kubectl apply -f serviceaccount.yaml
serviceaccount/nfs-provisioner created
Aside: what is an sa?
sa is short for serviceaccount.
A serviceaccount exists so that processes inside a Pod can conveniently call the Kubernetes API or other external services.
After we create a Pod with a serviceaccount specified, the Pod runs with the permissions granted to that account.
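As an illustrative sketch (the Pod name and image here are hypothetical, not part of this article's setup), binding a service account to a Pod takes a single field:

```yaml
# Hypothetical Pod: its processes call the API server with whatever
# RBAC permissions were granted to the nfs-provisioner service account
apiVersion: v1
kind: Pod
metadata:
  name: sa-demo             # illustrative name
spec:
  serviceAccountName: nfs-provisioner
  containers:
  - name: main
    image: nginx
    imagePullPolicy: IfNotPresent
```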
2. Grant the sa permissions
The sa name must match the one created above
[root@master01 ~]# kubectl create clusterrolebinding nfs-provisioner-clusterrolebinding --clusterrole=cluster-admin --serviceaccount=default:nfs-provisioner
3. Install the nfs-provisioner program
mkdir /data/nfs_pro -p
#Export /data/nfs_pro over NFS
[root@master01 ~]# cat /etc/exports
/data/nfs_pro 192.168.1.0/24(rw,no_root_squash)
[root@master01 ~]# exportfs -arv
exporting 192.168.1.0/24:/data/nfs_pro
[root@master01 ~]# cat nfs-deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccount: nfs-provisioner   #must match the sa created earlier
      containers:
      - name: nfs-provisioner
        image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: example.com/nfs
        - name: NFS_SERVER
          value: 192.168.1.180
        - name: NFS_PATH
          value: /data/nfs_pro
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.180
          path: /data/nfs_pro
#Apply the manifest
[root@master01 ~]# kubectl apply -f nfs-deployment.yaml
deployment.apps/nfs-provisioner created
Create the StorageClass for dynamically provisioning PVs
[root@master01 ~]# cat nfs-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs
provisioner: example.com/nfs
[root@master01 ~]# kubectl apply -f nfs-storageclass.yaml
#Verify the StorageClass was created
[root@master01 ~]# kubectl get storageclass
NAME   PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs    example.com/nfs   Delete          Immediate           false                  9m20s
Note:
The example.com/nfs written under provisioner must match the value of the PROVISIONER_NAME env variable set when installing the nfs provisioner, as in:
cat nfs-deployment.yaml
env:
- name: PROVISIONER_NAME
  value: example.com/nfs
Create a PVC that generates a PV dynamically through the StorageClass
[root@master01 ~]# cat claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim1
spec:
  accessModes: ["ReadWriteMany"]
  resources:               #resource requirements
    requests:              #the resources being requested
      storage: 1Gi
  storageClassName: nfs    #must match the name of the StorageClass defined above
Details on how resources is used
[root@master01 ~]# kubectl explain pvc.spec.resources
KIND:     PersistentVolumeClaim
VERSION:  v1
RESOURCE: resources <Object>
DESCRIPTION:
     Resources represents the minimum resources the volume should have. More
     info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
     ResourceRequirements describes the compute resource requirements.
FIELDS:
   limits       <map[string]string>
     Limits describes the maximum amount of compute resources allowed. More
     info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
   requests     <map[string]string>
     Requests describes the minimum amount of compute resources required. If
     Requests is omitted for a container, it defaults to Limits if that is
     explicitly specified, otherwise to an implementation-defined value. More
     info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits:https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests:https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
[root@master01 ~]# kubectl apply -f claim.yaml
#Check that a PV was generated dynamically and that the PVC was created and bound to it
[root@master01 ~]# kubectl get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim1   Bound    pvc-4d4c5493-7b7f-4aeb-8362-42e85675f53e   1Gi        RWX            nfs            53s
Create a Pod that mounts the dynamically generated PVC test-claim1
[root@master01 ~]# cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80      #port the container exposes
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cache-volume     #volume name
      - name: my-tomcat
        image: tomcat:8.5-jre8-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080    #port the container exposes
        volumeMounts:
        - mountPath: /usr/share/tomcat/webapps/ROOT
          name: cache-volume     #volume name
      volumes:
      - name: cache-volume
        persistentVolumeClaim:
          claimName: test-claim1
---
apiVersion: v1
kind: Service
metadata:
  name: web01
  namespace: default
spec:
  ports:
  - name: nginx
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30008
  - name: tomcat
    port: 8080
    protocol: TCP
    targetPort: 8080
    nodePort: 30009
  selector:
    run: my-nginx
  type: NodePort
#Apply the manifest
[root@master01 ~]# kubectl apply -f nginx.yaml
deployment.apps/my-nginx created
service/web01 created
[root@master01 ~]# kubectl describe pod my-nginx-84b4f9586b-h77hl
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  cache-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  test-claim1
    ReadOnly:   false
