NFS Dynamic Storage
Advantages of dynamic storage:
The administrator does not need to create a large number of PVs in advance as storage resources.
With static provisioning, a PVC can only bind if an existing PV satisfies the requested capacity and access modes; with dynamic provisioning, PVs are created on demand to fit the claim, so no such pre-matching is needed.
Installing the NFS Service
Install nfs-utils (it must also be installed on the k8s worker nodes):
[root@test20 ~]# yum install nfs-utils
Create the export directory and edit the exports file:
[root@test20 ~]# mkdir -p /data/loki
[root@test20 ~]# cat /etc/exports
/data/loki *(rw,sync,all_squash)
Because all_squash maps all client users to the anonymous account (UID/GID 65534), look up that default NFS user (nfsnobody) and change the owner of the /data directory to it:
[root@test20 ~]# grep 65534 /etc/passwd
nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin
[root@test20 ~]# chown -R nfsnobody.nfsnobody /data
Start the services and enable them at boot
[root@test20 ~]# systemctl start nfs && systemctl enable nfs
#rpcbind is normally already running; start it if it is not
[root@test20 ~]# systemctl start rpcbind.service && systemctl enable rpcbind.service
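If /etc/exports is edited while the NFS service is already running, the exports can be reloaded and checked without a restart (an optional extra step, not part of the original procedure):
[root@test20 ~]# exportfs -rav   # re-read /etc/exports and re-export all shares
[root@test20 ~]# exportfs -v     # list the active exports and their options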
Client-side verification
[root@test20 ~]# showmount -e 192.168.19.20
Export list for 192.168.19.20:
/data/loki *
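As a further sanity check (not in the original steps), the share can be mounted temporarily from a worker node; the hostname below is only an example:
[root@worker01 ~]# mount -t nfs 192.168.19.20:/data/loki /mnt
[root@worker01 ~]# touch /mnt/write-test && ls -l /mnt   # confirms the anonymous user can write
[root@worker01 ~]# umount /mnt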
Creating the nfs_client provisioner for the k8s cluster
The nfs_client.yaml file below contains three parts: the ServiceAccount with its RBAC rules, the StorageClass, and the provisioner Deployment that serves the volumes.
[root@kubemaster01 ~]# cat nfs_client.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: devops
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: devops
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
reclaimPolicy: Delete   # Delete is the default and can be omitted; choose Retain instead if you want to keep the PV when the PVC is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: devops
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs   # must match the provisioner value in the StorageClass above
            - name: NFS_SERVER
              value: 192.168.19.20    # IP address or domain name of the NFS server
            - name: NFS_PATH
              value: /data/loki       # exported path to mount
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.19.20
            path: /data/loki
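Once this manifest has been applied (the apply commands are shown a little further below), it is worth confirming that the provisioner pod itself is running before testing a claim; a quick check using the app label from the Deployment above:
[root@kubemaster01 ~]# kubectl get pods -n devops -l app=nfs-client-provisioner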
Create a test PVC to verify that the nfs_client.yaml setup above can actually reach the NFS backend.
[root@kubemaster01 ~]# cat test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  namespace: devops
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
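A note on the annotation: volume.beta.kubernetes.io/storage-class is the legacy form; on current clusters the same request is usually written with spec.storageClassName instead. A minimal equivalent sketch, reusing the names above:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  namespace: devops
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi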
Apply the two YAML files above with kubectl
[root@kubemaster01 ~]# kubectl apply -f nfs_client.yaml
[root@kubemaster01 ~]# kubectl apply -f test-claim.yaml
Check the Deployment and PVC status
[root@kubemaster01 ~]# kubectl get deployment -n devops
NAME READY UP-TO-DATE AVAILABLE AGE
contract-deployment 1/1 1 1 27h
nfs-client-provisioner 1/1 1 1 3h23m
[root@kubemaster01 ~]# kubectl get pvc -n devops
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim Pending pvc-7b39471b-9195-4b07-b575-e879f96bc380 1Mi RWX managed-nfs-storage 3h21m
Troubleshooting:
The PVC did not come up. Check the PVC's events and the provisioner Deployment's logs; the Deployment's log contains the following errors:
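Since a PVC has no logs of its own, its events are usually the quickest pointer to the failure; a hedged example of the command (output omitted):
[root@kubemaster01 ~]# kubectl describe pvc test-claim -n devops   # see the Events section at the bottom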
#Investigate:
[root@kubemaster01 ~]# kubectl logs deployment/nfs-client-provisioner -n devops
#Error 1:
error initially creating leader election record: endpoints is forbidden: User "system:serviceaccount:devops:nfs-provisioner" cannot create resource "endpoints" in API group "" in the namespace "devops"
#Fix:
Add "create" to the verbs of the rule for resources: ["services", "endpoints"] in the ClusterRole in nfs_client.yaml (the manifest shown above already reflects this fix).
#Error 2:
error syncing claim "devops/test-claim": failed to provision volume with StorageClass "managed-nfs-storage": unable to create directory to provision new pv: mkdir
#Fix:
This is a directory permission problem on the NFS server; see the NFS server setup above ----> change the owner of the /data directory.
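After applying both fixes, the claim can be re-tested; these verification commands are an assumption, not part of the original text:
[root@kubemaster01 ~]# kubectl delete -f test-claim.yaml && kubectl apply -f test-claim.yaml
[root@kubemaster01 ~]# kubectl get pvc test-claim -n devops   # STATUS should now show Bound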
Loki Data Persistence
The plan is to use the NFS setup above as Loki's backend storage, so that Loki's log data is persisted.
Create the PVC Loki needs for its backend storage, as follows:
[root@kubemaster01 ~]# vim loki_pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: loki
  namespace: devops
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Ti
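The original does not show this claim being applied; presumably it is applied and verified the same way as the test claim, for example:
[root@kubemaster01 ~]# kubectl apply -f loki_pvc.yaml
[root@kubemaster01 ~]# kubectl get pvc loki -n devops   # should become Bound, with a backing directory created under /data/loki on the NFS server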
The Helm-based installation has no data persistence, so here the volume mount in the StatefulSet is changed directly.
Inspect:
[root@kubemaster01 ~]# kubectl get statefulset loki -n devops -o yaml | grep -C 10 volumes
--
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 10001
        runAsGroup: 10001
        runAsNonRoot: true
        runAsUser: 10001
      serviceAccount: loki
      serviceAccountName: loki
      terminationGracePeriodSeconds: 4800
      volumes:
      - name: config
        secret:
          defaultMode: 420
          secretName: loki
      - emptyDir: {}          # note that storage is not persisted here, it is just an emptyDir
        name: storage
  updateStrategy:
    type: RollingUpdate
status:
Modify:
[root@kubemaster01 ~]# kubectl get statefulset loki -n devops -o yaml >> loki_sf.yaml
[root@kubemaster01 ~]# vim loki_sf.yaml
...
      terminationGracePeriodSeconds: 4800
      volumes:
      - name: config
        secret:
          defaultMode: 420
          secretName: loki
      - name: storage
        persistentVolumeClaim:   # replace the emptyDir with the PVC created above
          claimName: loki
  updateStrategy:
    type: RollingUpdate
status:
  collisionCount: 0
...
[root@kubemaster01 ~]# kubectl apply -f loki_sf.yaml
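Instead of editing the full dump, the same change can be made with a JSON patch; this is only a sketch and assumes storage is the second entry (index 1) of the volumes list, as in the dump above:
[root@kubemaster01 ~]# kubectl -n devops patch statefulset loki --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/volumes/1","value":{"name":"storage","persistentVolumeClaim":{"claimName":"loki"}}}]'
Either way, changing .spec.template triggers a rolling restart of the loki pod, which can be followed with kubectl rollout status statefulset/loki -n devops. Depending on the chart version, the Loki Helm chart may also expose persistence settings (check helm show values for your version), which would avoid editing the StatefulSet by hand.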
Document last updated: 2020-07-14 15:10  Author: 子木