k8s Deployment (46): Kubernetes Shared Storage (Part 2)
The previous article covered the concepts behind shared storage; now let's put them into practice. We'll use GlusterFS as the underlying storage service and walk through the setup step by step.
3. Preparing the GlusterFS environment
Requirements:

1. GlusterFS needs at least three nodes. (I only have two nodes here, so I'll configure two. The cluster will start, but problems appear later and this walkthrough cannot be completed with fewer than three nodes; two is purely for demonstration. If you have enough system resources but not enough worker nodes, you can add more nodes as described in an earlier article in this series.)
2. Each node needs one bare (unpartitioned) disk.
3. All three nodes must be members of the Kubernetes cluster.
I don't have any spare servers, so I'll simply attach an extra 1 GB bare disk to each of the two nodes we've been using. After adding them, the state looks like this:
[root@node2 ~]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000d1bb2

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    41943039    19921920   8e  Linux LVM

Disk /dev/sdc: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x8207355b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496   83  Linux

Disk /dev/mapper/centos-root: 34.0 GB, 33978056704 bytes, 66363392 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@node2 ~]#
You can see that /dev/sdc is an empty disk; that is the one we'll be working with.
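As a quick cross-check, the lsblk output can be filtered down to whole-disk devices. This is only a sketch: it assumes any device whose lsblk TYPE is "disk" is a candidate, so always confirm against fdisk -l before handing a disk to GlusterFS, because it will be wiped.

```shell
# Filter lsblk rows, keeping only whole-disk devices (TYPE == "disk");
# partitions ("part") and LVM volumes ("lvm") are dropped.
list_whole_disks() {
  awk '$2 == "disk" { print "/dev/" $1 }'
}

# -d: skip partitions/holders, -n: no header line
if command -v lsblk >/dev/null 2>&1; then
  lsblk -dn -o NAME,TYPE | list_whole_disks
fi
```

On the node above this would print /dev/sda, /dev/sdb, and /dev/sdc; the fdisk output is what tells you that only /dev/sdc is actually unpartitioned.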
Next, a look at the official Gluster site shows what else is needed:
It looks like we also need Heketi. OK, let's get started.
4. Installing GlusterFS
First, install the GlusterFS client on the worker nodes, which in my case are node2 and node3. Run the following command on both node2 and node3:
[root@node2 ~]# yum -y install glusterfs glusterfs-fuse
Next, check whether the apiserver supports privileged containers; the key flag to look for is shown below:
[root@node1 ~]# ps -ef | grep apiserver | grep allow-privileged
root 777 1 6 09:32 ? 00:04:06 /usr/local/bin/kube-apiserver --advertise-address=192.168.112.130 --allow-privileged=true --apiserver-count=2 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/log/audit.log --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --client-ca-file=/etc/kubernetes/ssl/ca.pem --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --etcd-servers=https://192.168.112.130:2379,https://192.168.112.131:2379,https://192.168.112.132:2379 --event-ttl=1h --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kubernetes.pem --kubelet-client-key=/etc/kubernetes/ssl/kubernetes-key.pem --service-account-issuer=api --service-account-key-file=/etc/kubernetes/ssl/service-account.pem --service-account-signing-key-file=/etc/kubernetes/ssl/service-account-key.pem --api-audiences=api,vault,factors --service-cluster-ip-range=10.233.0.0/16 --service-node-port-range=30000-32767 --proxy-client-cert-file=/etc/kubernetes/ssl/proxy-client.pem --proxy-client-key-file=/etc/kubernetes/ssl/proxy-client-key.pem --runtime-config=api/all=true --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem --requestheader-allowed-names=aggregator --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --v=1 --feature-gates=RemoveSelfLink=false
[root@node1 ~]#
Once the apiserver check is done, verify that the kubelet supports it as well; if not, add the flag and restart the kubelet service:
[root@node2 ~]# cat /etc/systemd/system/kubelet.service | grep allow-privileged
  --allow-privileged=true \
[root@node2 ~]#
Next, we need to prepare a DaemonSet; this DaemonSet provides us with the GlusterFS server side:
[root@node1 ~]# cd namespace/
[root@node1 namespace]# mkdir glusterfs
[root@node1 namespace]# cd glusterfs/
[root@node1 glusterfs]#
[root@node1 glusterfs]# vim glusterfs-daemonset.yaml
---
kind: DaemonSet
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: glusterfs
  labels:
    glusterfs: daemonset
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  selector:
    matchLabels:
      glusterfs: pod
      glusterfs-node: pod
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs: pod
        glusterfs-node: pod
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
      - image: gluster/gluster-centos:latest
        imagePullPolicy: IfNotPresent
        name: glusterfs
        env:
        # alternative for /dev volumeMount to enable access to *all* devices
        - name: HOST_DEV_DIR
          value: "/mnt/host-dev"
        # set GLUSTER_BLOCKD_STATUS_PROBE_ENABLE to "1" so the
        # readiness/liveness probe validate gluster-blockd as well
        - name: GLUSTER_BLOCKD_STATUS_PROBE_ENABLE
          value: "1"
        - name: GB_GLFS_LRU_COUNT
          value: "15"
        - name: TCMU_LOGDIR
          value: "/var/log/glusterfs/gluster-block"
        resources:
          requests:
            memory: 100Mi
            cpu: 100m
        volumeMounts:
        - name: glusterfs-heketi
          mountPath: "/var/lib/heketi"
        - name: glusterfs-run
          mountPath: "/run"
        - name: glusterfs-lvm
          mountPath: "/run/lvm"
        - name: glusterfs-etc
          mountPath: "/etc/glusterfs"
        - name: glusterfs-logs
          mountPath: "/var/log/glusterfs"
        - name: glusterfs-config
          mountPath: "/var/lib/glusterd"
        - name: glusterfs-host-dev
          mountPath: "/mnt/host-dev"
        - name: glusterfs-misc
          mountPath: "/var/lib/misc/glusterfsd"
        - name: glusterfs-block-sys-class
          mountPath: "/sys/class"
        - name: glusterfs-block-sys-module
          mountPath: "/sys/module"
        - name: glusterfs-cgroup
          mountPath: "/sys/fs/cgroup"
          readOnly: true
        - name: glusterfs-ssl
          mountPath: "/etc/ssl"
          readOnly: true
        - name: kernel-modules
          mountPath: "/usr/lib/modules"
          readOnly: true
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh readiness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh liveness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
      volumes:
      - name: glusterfs-heketi
        hostPath:
          path: "/var/lib/heketi"
      - name: glusterfs-run
      - name: glusterfs-lvm
        hostPath:
          path: "/run/lvm"
      - name: glusterfs-etc
        hostPath:
          path: "/etc/glusterfs"
      - name: glusterfs-logs
        hostPath:
          path: "/var/log/glusterfs"
      - name: glusterfs-config
        hostPath:
          path: "/var/lib/glusterd"
      - name: glusterfs-host-dev
        hostPath:
          path: "/dev"
      - name: glusterfs-misc
        hostPath:
          path: "/var/lib/misc/glusterfsd"
      - name: glusterfs-block-sys-class
        hostPath:
          path: "/sys/class"
      - name: glusterfs-block-sys-module
        hostPath:
          path: "/sys/module"
      - name: glusterfs-cgroup
        hostPath:
          path: "/sys/fs/cgroup"
      - name: glusterfs-ssl
        hostPath:
          path: "/etc/ssl"
      - name: kernel-modules
        hostPath:
          path: "/usr/lib/modules"
[root@node1 glusterfs]#
Note that the DaemonSet above contains a nodeSelector, shown here:
    spec:
      nodeSelector:
        storagenode: glusterfs
So we need to put this label on the nodes that should run GlusterFS; in my case that is node2 and node3:
[root@node1 glusterfs]# kubectl get node
NAME    STATUS     ROLES    AGE   VERSION
node2   NotReady   <none>   36d   v1.20.2
node3   NotReady   <none>   36d   v1.20.2
[root@node1 glusterfs]# kubectl label node node2 storagenode=glusterfs
node/node2 labeled
[root@node1 glusterfs]# kubectl label node node3 storagenode=glusterfs
node/node3 labeled
[root@node1 glusterfs]#
Then apply the DaemonSet:
[root@node1 glusterfs]# kubectl apply -f glusterfs-daemonset.yaml
daemonset.apps/glusterfs created
[root@node1 glusterfs]# kubectl get pod -o wide
NAME              READY   STATUS    RESTARTS   AGE    IP                NODE    NOMINATED NODE   READINESS GATES
glusterfs-ld4vr   1/1     Running   0          155m   192.168.112.131   node2   <none>           <none>
glusterfs-mz8rt   1/1     Running   0          155m   192.168.112.132   node3   <none>           <none>
[root@node1 glusterfs]#
With the server side up, the disks still haven't been initialized. Disk initialization is delegated to Heketi, so let's see how to configure it:
[root@node1 glusterfs]# vim heketi-security.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heketi-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: heketi-clusterrole
subjects:
- kind: ServiceAccount
  name: heketi-service-account
  namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heketi-service-account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: heketi-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - pods/status
  - pods/exec
  verbs:
  - get
  - list
  - watch
  - create
[root@node1 glusterfs]#
[root@node1 glusterfs]# vim heketi-deployment.yaml
kind: Service
apiVersion: v1
metadata:
  name: heketi
  labels:
    glusterfs: heketi-service
    deploy-heketi: support
  annotations:
    description: Exposes Heketi Service
spec:
  selector:
    name: heketi
  ports:
  - name: heketi
    port: 80
    targetPort: 8080
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "30001": default/heketi:80
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: heketi
  labels:
    glusterfs: heketi-deployment
  annotations:
    description: Defines how to deploy Heketi
spec:
  selector:
    matchLabels:
      name: heketi
      glusterfs: heketi-pod
  replicas: 1
  template:
    metadata:
      name: heketi
      labels:
        name: heketi
        glusterfs: heketi-pod
    spec:
      serviceAccountName: heketi-service-account
      containers:
      - image: heketi/heketi:dev
        imagePullPolicy: Always
        name: heketi
        env:
        - name: HEKETI_EXECUTOR
          value: "kubernetes"
        - name: HEKETI_DB_PATH
          value: "/var/lib/heketi/heketi.db"
        - name: HEKETI_FSTAB
          value: "/var/lib/heketi/fstab"
        - name: HEKETI_SNAPSHOT_LIMIT
          value: "14"
        - name: HEKETI_KUBE_GLUSTER_DAEMONSET
          value: "y"
        - name: HEKETI_ADMIN_KEY
          value: "yunweijia123"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: db
          mountPath: /var/lib/heketi
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 3
          httpGet:
            path: /hello
            port: 8080
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 30
          httpGet:
            path: /hello
            port: 8080
      volumes:
      - name: db
        hostPath:
          path: "/heketi-data"
[root@node1 glusterfs]#
Note that we set a HEKETI_ADMIN_KEY here; keep it handy, because it will be needed later during initialization.
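For reference, this environment variable corresponds to the admin key in Heketi's own configuration file (heketi.json). A rough sketch of the relevant section, using the key from the Deployment above, would look something like this (an illustration of the format, not a file you need to create when using the container image):

```json
{
  "use_auth": true,
  "jwt": {
    "admin": { "key": "yunweijia123" },
    "user": { "key": "" }
  },
  "glusterfs": {
    "executor": "kubernetes",
    "db": "/var/lib/heketi/heketi.db"
  }
}
```

Any later heketi-cli calls against this server will need to authenticate with `--user admin --secret yunweijia123`.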
Then apply the manifests to make them take effect:
[root@node1 glusterfs]# kubectl apply -f heketi-security.yaml
clusterrolebinding.rbac.authorization.k8s.io/heketi-clusterrolebinding created
serviceaccount/heketi-service-account created
clusterrole.rbac.authorization.k8s.io/heketi-clusterrole created
[root@node1 glusterfs]# kubectl apply -f heketi-deployment.yaml
service/heketi unchanged
configmap/tcp-services unchanged
deployment.apps/heketi created
[root@node1 glusterfs]#
[root@node1 glusterfs]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP                NODE    NOMINATED NODE   READINESS GATES
glusterfs-ld4vr          1/1     Running   0          3h27m   192.168.112.131   node2   <none>           <none>
glusterfs-mz8rt          1/1     Running   0          3h27m   192.168.112.132   node3   <none>           <none>
heketi-7d7bc4758-7m6d6   1/1     Running   0          3m49s   10.200.104.47     node2   <none>           <none>
[root@node1 glusterfs]#
Next comes the disk initialization step.
From the output above we can see that the heketi pod was scheduled onto node2, so let's take a look there.
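Heketi's disk initialization is normally driven by a topology file that lists each node and its bare devices. A hedged sketch for this two-node lab (node names and IPs taken from the pod listing above, and assuming the new 1 GB disk shows up as /dev/sdc on both nodes; a real deployment needs three or more nodes) might look like this:

```json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["node2"],
              "storage": ["192.168.112.131"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdc"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["node3"],
              "storage": ["192.168.112.132"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdc"]
        }
      ]
    }
  ]
}
```

It would then typically be loaded with something along the lines of `heketi-cli --server http://<heketi-address> --user admin --secret yunweijia123 topology load --json=topology.json`, which is what actually wipes and initializes the listed devices.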
The remaining content is available on the WeChat public account "yunweijia"; reply "152" to view it.