```shell
git clone https://github.com/rook/rook
cd cluster/examples/kubernetes/ceph
kubectl create -f operator.yaml
```
Check that the operator started successfully:
```shell
[root@dev-86-201 ~]# kubectl get pod -n rook-ceph-system
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-agent-5z6p7                 1/1     Running   0          88m
rook-ceph-agent-6rj7l                 1/1     Running   0          88m
rook-ceph-agent-8qfpj                 1/1     Running   0          88m
rook-ceph-agent-xbhzh                 1/1     Running   0          88m
rook-ceph-operator-67f4b8f67d-tsnf2   1/1     Running   0          88m
rook-discover-5wghx                   1/1     Running   0          88m
rook-discover-lhwvf                   1/1     Running   0          88m
rook-discover-nl5m2                   1/1     Running   0          88m
rook-discover-qmbx7                   1/1     Running   0          88m
```
Then create the Ceph cluster:

```shell
kubectl create -f cluster.yaml
```
Check the Ceph cluster pods:
```shell
[root@dev-86-201 ~]# kubectl get pod -n rook-ceph
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-mgr-a-8649f78d9b-jklbv   1/1     Running   0          64m
rook-ceph-mon-a-5d7fcfb6ff-2wq9l   1/1     Running   0          81m
rook-ceph-mon-b-7cfcd567d8-lkqff   1/1     Running   0          80m
rook-ceph-mon-d-65cd79df44-66rgz   1/1     Running   0          79m
rook-ceph-osd-0-56bd7545bd-5k9xk   1/1     Running   0          63m
rook-ceph-osd-1-77f56cd549-7rm4l   1/1     Running   0          63m
rook-ceph-osd-2-6cf58ddb6f-wkwp6   1/1     Running   0          63m
rook-ceph-osd-3-6f8b78c647-8xjzv   1/1     Running   0          63m
```
Notes on the cluster manifest:
```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # For the latest ceph images, see https://hub.docker.com/r/ceph/ceph/tags
    image: ceph/ceph:v13.2.2-20181023
  dataDirHostPath: /var/lib/rook   # host directory for Ceph data
  mon:
    count: 3
    allowMultiplePerNode: true
  dashboard:
    enabled: true
  storage:
    useAllNodes: true
    useAllDevices: false
    config:
      databaseSizeMB: "1024"
      journalSizeMB: "1024"
```
Access the Ceph dashboard:
```shell
[root@dev-86-201 ~]# kubectl get svc -n rook-ceph
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
rook-ceph-mgr             ClusterIP   10.98.183.33     <none>        9283/TCP         66m
rook-ceph-mgr-dashboard   NodePort    10.103.84.48     <none>        8443:31631/TCP   66m   # change this service to NodePort
rook-ceph-mon-a           ClusterIP   10.99.71.227     <none>        6790/TCP         83m
rook-ceph-mon-b           ClusterIP   10.110.245.119   <none>        6790/TCP         82m
rook-ceph-mon-d           ClusterIP   10.101.79.159    <none>        6790/TCP         81m
```
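One way to switch the dashboard Service to NodePort, instead of editing it interactively with `kubectl edit`, is a one-line patch. This is a sketch against a live cluster; the service name and namespace are taken from the output above:

```shell
# Patch the dashboard Service type from ClusterIP to NodePort
# so it becomes reachable on every node's IP.
kubectl -n rook-ceph patch svc rook-ceph-mgr-dashboard \
  -p '{"spec": {"type": "NodePort"}}'
```

Kubernetes then assigns a node port (31631 in the output above), which is what you browse to.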
Then open https://10.1.86.201:31631 in a browser.
The admin account is `admin`; retrieve the login password with:

```shell
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o yaml | grep "password:" | awk '{print $2}' | base64 --decode
```
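The pipeline above works because Kubernetes stores Secret values base64-encoded, so the final `base64 --decode` step recovers the plaintext. A minimal sketch of just that step, using a made-up value rather than a real dashboard password:

```shell
# "cGFzc3dvcmQxMjM=" is a hypothetical Secret value: the base64
# encoding of "password123". Decoding it prints the plaintext.
echo "cGFzc3dvcmQxMjM=" | base64 --decode
# → password123
```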
Next, define a block pool and a storage class:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool   # the operator watches this and creates a pool, which also shows up in the dashboard
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block   # reference this storage class in a PVC to dynamically provision a PV
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  # The value of "clusterNamespace" MUST be the same as the one in which your rook cluster exist
  clusterNamespace: rook-ceph
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
  fstype: xfs
# Optional, default reclaimPolicy is "Delete". Other options are: "Retain", "Recycle" as documented in https://kubernetes.io/docs/concepts/storage/storage-classes/
reclaimPolicy: Retain
```
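The walkthrough never shows applying this manifest. Assuming you save it to a file (the filename below is my choice, not something from the Rook docs), it is created like any other resource:

```shell
# Filename is an assumption; any name works.
kubectl create -f pool-and-storageclass.yaml
```

The operator then creates the `replicapool` pool, and the `rook-ceph-block` storage class becomes available to PVCs.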
Under the cluster/examples/kubernetes directory, the project ships a WordPress example that you can run directly:

```shell
kubectl create -f mysql.yaml
kubectl create -f wordpress.yaml
```
Check the PVs and PVCs:
```shell
[root@dev-86-201 ~]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-a910f8c2-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            rook-ceph-block   144m
wp-pv-claim      Bound    pvc-af2dfbd4-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            rook-ceph-block   144m

[root@dev-86-201 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS      REASON   AGE
pvc-a910f8c2-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            Retain           Bound    default/mysql-pv-claim   rook-ceph-block            145m
pvc-af2dfbd4-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            Retain           Bound    default/wp-pv-claim      rook-ceph-block            145m
```
Take a look at the YAML:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block   # reference the storage class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi   # ask for a 20Gi volume
...
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim   # reference the PVC defined above
```
It really is that simple.
To access WordPress, change its Service to NodePort; the upstream manifest uses LoadBalancer:
```shell
kubectl edit svc wordpress

[root@dev-86-201 kubernetes]# kubectl get svc
NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
wordpress   NodePort   10.109.30.99   <none>        80:30130/TCP   148m
```
Distributed storage plays a crucial role in container clusters. A core idea of running one is to treat the whole cluster as a single machine; if you still care about individual hosts (pinning workloads to a particular node, mounting a particular node's directories, and so on), you will never unlock the full power of the cloud. Once compute and storage are separated, workloads can truly float freely between nodes, which is a huge win for cluster maintenance. For example, when machines reach end of life and must be decommissioned, an architecture with no single point of state only needs to drain the node and pull it out, without caring what runs on it; both stateless and stateful workloads recover automatically. The biggest remaining challenge is probably the performance of distributed storage, so wherever performance requirements are not strict, I strongly recommend this compute/storage-separated architecture.
Reprinted from: https://blog.51cto.com/fangnux/2345958