
Ceph

1.1. References

1.2. Ceph Architecture

1. CephFS does not go through the librados middle layer; it uses osdc directly, while librados itself is built on top of osdc.
2. ceph-osd is the OSD daemon. Its build target is defined in CMakeLists.txt under the src directory, and it links against the osd, os, and other libraries.
3. ceph-mds is the MDS daemon, which provides metadata services for CephFS. Its build target is defined in CMakeLists.txt under the src directory, and it links against the mds and other libraries.
4. ceph-mon is the monitor daemon. Its build target is defined in CMakeLists.txt under the src directory, and it links against the mon, os, and other libraries.
5. ceph-mgr is the manager daemon. Its build target is defined in src/mgr, and it links against the osdc, client, and other libraries.
6. ceph-fuse is the FUSE-based CephFS client.
7. os is the lowest layer of Ceph storage; it implements the filestore, bluestore, kstore, memstore, and other storage engines.
  • From the official architecture documentation, a Ceph Storage Cluster consists of the following types of daemons:
  • Ceph Monitor: a Ceph Monitor maintains the master copy of the cluster map. A cluster of Ceph Monitors ensures high availability should a monitor daemon fail. Storage cluster clients retrieve a copy of the cluster map from a Ceph Monitor.
  • Ceph OSD Daemon: a Ceph OSD Daemon checks its own state and the state of other OSDs, and reports back to the monitors.
  • Ceph Manager: the Ceph Manager acts as an endpoint for monitoring, orchestration, and plug-in modules.
  • Ceph Metadata Server: when CephFS is used to provide file services, the Ceph Metadata Server (MDS) manages file metadata.

Storage cluster clients and each Ceph OSD Daemon use the CRUSH algorithm to efficiently compute information about data location, instead of having to depend on a central lookup table. Ceph's high-level features include a native interface to the Ceph Storage Cluster via librados, and a number of service interfaces built on top of librados.
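
As a concrete illustration of the librados native interface mentioned above, the following is a minimal sketch of a client that connects to the cluster and writes and reads one object. The pool name "rbd", the object name "greeting", the client id "admin", and the /etc/ceph/ceph.conf path are assumptions made for this example, and error handling is reduced to early exits; a build would look roughly like g++ hello_rados.cc -lrados.

    // Minimal librados client sketch: connect, write one object, read it back.
    // Assumes a reachable cluster, a pool named "rbd", and a client.admin
    // keyring discoverable through /etc/ceph/ceph.conf.
    #include <rados/librados.hpp>
    #include <iostream>
    #include <string>

    int main() {
        librados::Rados cluster;
        if (cluster.init("admin") < 0) return 1;                 // use client.admin
        if (cluster.conf_read_file("/etc/ceph/ceph.conf") < 0) return 1;
        if (cluster.connect() < 0) return 1;                     // contacts the monitors

        librados::IoCtx io_ctx;                                  // I/O context bound to one pool
        if (cluster.ioctx_create("rbd", io_ctx) < 0) return 1;

        librados::bufferlist out;
        out.append("hello ceph");
        if (io_ctx.write_full("greeting", out) < 0) return 1;    // whole-object write

        librados::bufferlist in;
        if (io_ctx.read("greeting", in, out.length(), 0) < 0) return 1;
        std::cout << std::string(in.c_str(), in.length()) << std::endl;

        cluster.shutdown();
        return 0;
    }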

1.3. Glossary

  • Official glossary documentation
  • RADOS(Reliable Autonomic Distributed Object Store、Ceph Storage Cluster、Ceph Object Store、RADOS Cluster)
    • The core set of storage software which stores the user’s data (MON+OSD). It is the reliable, autonomic, distributed object store that forms the core and lowest layer of Ceph.
  • Ceph Cluster Map(Cluster Map)
    • The set of maps comprising the monitor map, OSD map, PG map, MDS map and CRUSH map. See Cluster Map for details.
  • Ceph Object Storage
    • The object storage “product”, service or capabilities, which consists essentially of a Ceph Storage Cluster and a Ceph Object Gateway.
  • RGW(Ceph Object Gateway、RADOS Gateway)
    • The S3/Swift gateway component of Ceph.
  • RBD(Ceph Block Device)
    • The block storage component of Ceph.
  • Ceph Block Storage
    • The block storage “product,” service or capabilities when used in conjunction with librbd, a hypervisor such as QEMU or Xen, and a hypervisor abstraction layer such as libvirt.
  • CephFS(Ceph FS、Ceph File System)
    • The POSIX file system component of Ceph, which provides file services on top of the Ceph Storage Cluster.
  • Cloud Platforms(Cloud Stacks)
    • Third party cloud provisioning platforms such as OpenStack, CloudStack, OpenNebula, Proxmox VE, etc.
  • OSD(Object Storage Device)
    • A physical or logical storage unit (e.g., LUN). Sometimes, Ceph users use the term “OSD” to refer to Ceph OSD Daemon, though the proper term is “Ceph OSD”.
  • Ceph OSD(Ceph OSD Daemon、Ceph OSD Daemons)
    • The Ceph OSD software, which interacts with a logical disk (OSD). Sometimes, Ceph users use the term “OSD” to refer to “Ceph OSD Daemon”, though the proper term is “Ceph OSD”.
  • OSD id
    • The integer that defines an OSD. It is generated by the monitors as part of the creation of a new OSD.
  • OSD fsid
    • This is a unique identifier used to further improve the uniqueness of an OSD, and it is found in the OSD path in a file called osd_fsid. This fsid term is used interchangeably with uuid.
  • OSD uuid
    • Just like the OSD fsid, this is the OSD unique identifier and is used interchangeably with fsid.
  • bluestore
    • OSD BlueStore is a new back end for OSD daemons (kraken and newer versions). Unlike filestore it stores objects directly on the Ceph block devices without any file system interface.
  • filestore
    • A back end for OSD daemons, where a Journal is needed and files are written to the filesystem.
  • MON(Ceph Monitor)
    • The Ceph monitor software.
  • MGR(Ceph Manager)
    • The Ceph manager software, which collects all the state from the whole cluster in one place.
  • MDS(Ceph Metadata Server)
    • The Ceph metadata software.
  • Dashboard(Ceph Manager Dashboard、Ceph Dashboard、Dashboard Module、Dashboard Plugin)
    • A built-in web-based Ceph management and monitoring application to administer various aspects and objects of the cluster. The dashboard is implemented as a Ceph Manager module. See Ceph Dashboard for more details.
  • Ceph Client(Ceph Clients)
    • The collection of Ceph components which can access a Ceph Storage Cluster. These include the Ceph Object Gateway, the Ceph Block Device, the Ceph File System, and their corresponding libraries, kernel modules, and FUSEs.
  • CRUSH
    • Controlled Replication Under Scalable Hashing. It is the algorithm Ceph uses to compute object storage locations; a toy placement sketch illustrating the idea appears at the end of this section.
  • CRUSH rule
    • The CRUSH data placement rule that applies to a particular pool or pools.
  • Pool(Pools)
    • Pools are logical partitions for storing objects.
  • systemd oneshot
    • A systemd unit type where the command defined in ExecStart exits upon completion (it is not intended to daemonize).
  • LVM tags
    • Extensible metadata for LVM volumes and groups. It is used to store Ceph-specific information about devices and their relationship with OSDs.
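
The CRUSH entry above notes that clients compute object locations instead of consulting a central lookup table. The sketch below is a toy illustration of that idea only, not the real CRUSH algorithm: an object name is hashed to a placement group, and the PG is then mapped to OSDs with rendezvous (highest-random-weight) hashing standing in for CRUSH's map-driven bucket selection. The pg_num of 128, the flat OSD list, the replica count of 3, and the use of std::hash (real Ceph uses rjenkins hashing and the CRUSH map) are all assumptions of the example.

    // Toy placement computation (illustrative only, NOT the real CRUSH algorithm).
    #include <algorithm>
    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    // Hash an object name to a placement group id; Ceph conceptually does this
    // with a stable hash of the name modulo the pool's pg_num.
    uint32_t object_to_pg(const std::string& oid, uint32_t pg_num) {
        return static_cast<uint32_t>(std::hash<std::string>{}(oid) % pg_num);
    }

    // Deterministically map a PG to `replicas` OSDs from the OSD list alone,
    // so every client computes the same answer without a central table.
    std::vector<int> pg_to_osds(uint32_t pg, const std::vector<int>& osds,
                                size_t replicas) {
        std::vector<std::pair<uint64_t, int>> scored;
        for (int osd : osds) {
            // Score each OSD against the PG; the highest scores win (HRW hashing).
            uint64_t score = std::hash<std::string>{}(
                std::to_string(pg) + ":" + std::to_string(osd));
            scored.emplace_back(score, osd);
        }
        std::sort(scored.rbegin(), scored.rend());
        std::vector<int> acting;
        for (size_t i = 0; i < replicas && i < scored.size(); ++i)
            acting.push_back(scored[i].second);
        return acting;
    }

    int main() {
        std::vector<int> osds = {0, 1, 2, 3, 4};
        uint32_t pg = object_to_pg("rbd_data.1234", 128);
        std::cout << "pg " << pg << " ->";
        for (int osd : pg_to_osds(pg, osds, 3))
            std::cout << " osd." << osd;
        std::cout << std::endl;
        return 0;
    }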