[Kubernetes Concepts] Rook Design Analysis

    xiaoxiao  2022-06-26

    1. Ceph Cluster CRD

    apiVersion: ceph.rook.io/v1
    kind: CephCluster
    metadata:
      name: rook-ceph
      namespace: rook-ceph
    spec:
      cephVersion:
        # see the "Cluster Settings" section below for more details on which image of ceph to run
        image: ceph/ceph:v14.2.1-20190430
      dataDirHostPath: /var/lib/rook
      mon:
        count: 3
        allowMultiplePerNode: true
      storage:
        useAllNodes: true
        useAllDevices: true

        1.1 cephVersion: Ceph version information

              image: the Ceph container image to run

              allowUnsupported: whether to allow a Ceph version Rook does not officially support; false is recommended in production

        1.2 dataDirHostPath: the hostPath mount path on the host, usually /var/lib/rook

             When creating a new cluster, data left over from a previous cluster must be wiped first, otherwise the mons will fail to start.

        1.3 mon:

              count: number of mon pods to start, 1 <= count <= 9, default 3

              preferredCount: mainly used when scaling up the number of mons

              allowMultiplePerNode: whether multiple mons may run on the same node, default false

        1.4 dashboard: the cluster dashboard, used to view cluster status
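    A minimal sketch of enabling the dashboard in the CephCluster spec. The enabled field appears in the example later in this post; the commented-out ssl and port values are optional settings shown here for illustration:

    ```yaml
    dashboard:
      enabled: true    # deploy the Ceph dashboard and expose it via a service
      # ssl: true      # optional: serve the dashboard over HTTPS
      # port: 8443     # optional: override the default listening port
    ```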

        1.5 resources: pod resource requests and limits (CPU and memory)
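    As a sketch, requests and limits can be set per daemon type (mon, osd, mgr, ...) in the CephCluster spec; the values below are illustrative placeholders, not sizing recommendations:

    ```yaml
    resources:
      mon:
        requests:
          cpu: "500m"     # illustrative value
          memory: "1Gi"
        limits:
          memory: "2Gi"
      osd:
        requests:
          cpu: "1"
          memory: "4Gi"
    ```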

        1.6 storage: storage configuration

              useAllNodes: true / false — whether all nodes are used for storage; set it to false to select specific storage nodes, which is recommended in production

             nodes: configure individual storage nodes; requires useAllNodes to be false

                   name: matched against the node's kubernetes.io/hostname label

        1.7 useAllDevices: true is not recommended; if deviceFilter is set, this setting is overridden

        1.8 deviceFilter: a regular expression matching device names; ignored on any node that lists its devices explicitly
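    For example, a cluster-level deviceFilter that selects data disks by name; the pattern is illustrative and should be adjusted to the device naming on your nodes:

    ```yaml
    storage:
      useAllNodes: false
      useAllDevices: false
      deviceFilter: "^sd[b-z]"  # match sdb, sdc, ... but not sda (typically the OS disk)
    ```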

        1.9 devices: an explicit list of device names to use on a node

        1.10 config:

                metadataDevice: optionally store metadata on a separate device, e.g. an SSD

                storeType: filestore / bluestore (default)

                databaseSizeMB: size of the bluestore database

                walSizeMB: size of the bluestore write-ahead log (WAL)

                journalSizeMB: size of the filestore journal

                osdsPerDevice: number of OSDs to create on each device

                encryptedDevice: whether to encrypt the device ("true" / "false")
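    A sketch combining these config options at the node level; the node name "node-a" and the device names are hypothetical:

    ```yaml
    storage:
      useAllNodes: false
      useAllDevices: false
      nodes:
      - name: "node-a"               # hypothetical; must match the node's kubernetes.io/hostname label
        devices:
        - name: "sdb"
        config:                      # node-level config overrides the cluster-level config
          storeType: bluestore
          metadataDevice: "nvme0n1"  # hypothetical NVMe device for bluestore metadata
          osdsPerDevice: "2"         # create two OSDs on each device
          encryptedDevice: "true"    # encrypt the OSD data at rest
    ```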

                 

    Example:

    apiVersion: ceph.rook.io/v1
    kind: CephCluster
    metadata:
      name: rook-ceph
      namespace: rook-ceph
    spec:
      cephVersion:
        image: ceph/ceph:v14.2.1-20190430
      dataDirHostPath: /var/lib/rook
      mon:
        count: 3
        allowMultiplePerNode: true
      dashboard:
        enabled: true
      # cluster level storage configuration and selection
      storage:
        useAllNodes: false
        useAllDevices: false
        deviceFilter:
        location:
        config:
          metadataDevice:
          databaseSizeMB: "1024" # this value can be removed for environments with normal sized disks (100 GB or larger)
          journalSizeMB: "1024" # this value can be removed for environments with normal sized disks (20 GB or larger)
        nodes:
        - name: "172.17.4.101"
          directories: # specific directories to use for storage can be specified for each node
          - path: "/rook/storage-dir"
        - name: "172.17.4.201"
          devices: # specific devices to use for storage can be specified for each node
          - name: "sdb"
          - name: "sdc"
          config: # configuration can be specified at the node level which overrides the cluster level config
            storeType: bluestore
        - name: "172.17.4.301"
          deviceFilter: "^sd."

     

    2. Ceph Block Pool CRD

       Rook allows creation and customization of storage pools through the custom resource definitions (CRDs). The following settings are available for pools.

    replicated: number of data replicas, which requires at least that many OSDs; cannot be set together with erasureCoded

    erasureCoded:

        dataChunks: number of chunks to divide the original object into

        codingChunks: number of redundant chunks to store

    failureDomain: the failure domain across which data replicas or chunks are spread, e.g. host or osd
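    To complement the replicated pool example, a sketch of an erasure-coded pool; the pool name "ecpool" is hypothetical, and a 2+1 layout needs at least three OSDs (or three hosts, depending on failureDomain):

    ```yaml
    apiVersion: ceph.rook.io/v1
    kind: CephBlockPool
    metadata:
      name: ecpool              # hypothetical pool name
      namespace: rook-ceph
    spec:
      failureDomain: osd
      erasureCoded:
        dataChunks: 2           # each object is split into 2 data chunks
        codingChunks: 1         # plus 1 redundant chunk; survives the loss of any one chunk
    ```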

    apiVersion: ceph.rook.io/v1
    kind: CephBlockPool
    metadata:
      name: replicapool
      namespace: rook-ceph
    spec:
      failureDomain: host
      replicated:
        size: 3
      deviceClass: hdd

     

     

    References:

       https://github.com/rook/rook/blob/master/Documentation/ceph-cluster-crd.md

       https://github.com/rook/rook/blob/master/Documentation/ceph-pool-crd.md
