RBD images can be asynchronously mirrored between two Ceph clusters. This
capability uses the RBD journaling image feature to ensure crash-consistent
replication between clusters. The charm automatically creates pools used for
RBD images on the remote cluster and configures mirroring. Pools tagged with
the 'rbd' application are selected.
NOTE: The charm requires Ceph Luminous or later.
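As a brief, hedged illustration of the pool selection above: for pools created outside of the charms, the 'rbd' application tag can be inspected or applied with the Ceph CLI (the pool name 'mypool' is hypothetical):

    # Show the application tags currently set on the pool
    ceph osd pool application get mypool
    # Tag the pool for RBD use so that it is selected for mirroring
    ceph osd pool application enable mypool rbd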
Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability.
The ceph-rbd-mirror charm deploys the Ceph
rbd-mirror daemon and helps
automate remote creation and configuration of mirroring for Ceph pools used for
hosting RBD images.
Note: RBD mirroring is only one aspect of datacentre redundancy. Refer to Ceph RADOS Gateway Multisite Replication and other work to arrive at a complete solution.
The charm has the following major features:
Support for a maximum of two Ceph clusters. The clusters may reside within a single model or be contained within two separate models.
Specifically written for two-way replication. This provides the ability to fail over and fall back to/from a single secondary site. Ceph does have support for mirroring to any number of clusters but the charm does not support this.
Automatically creates and configures (for mirroring) pools in the remote cluster based on any pools in the local cluster that are labelled with the 'rbd' tag.
Mirroring of whole pools only. Ceph itself has support for the mirroring of individual images but the charm does not support this.
Network space aware. The mirror daemon can be informed about network configuration by binding the public and cluster endpoints. The daemon will use the network associated with the cluster endpoint for mirroring traffic.
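The binding can be supplied at deploy time. A minimal sketch, assuming a hypothetical network space named 'replication-space':

    # Route mirroring traffic over the network associated with 'replication-space'
    juju deploy ceph-rbd-mirror --bind cluster=replication-space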
Other notes on RBD mirroring:
Supports multiple running instances of the mirror daemon in each cluster. Doing so allows for the dynamic re-distribution of the mirroring load amongst the daemons. This addresses both high availability and performance concerns. Leverage this feature by scaling out the ceph-rbd-mirror application (i.e. add more units).
Requires that every RBD image within each pool is created with the journaling and exclusive-lock image features enabled. The charm enables these features by default and the ceph-mon charm will announce them over the client relation when it has units connected to its rbd-mirror relation. For images created outside of the charms, see the example following this list.
The RBD mirroring feature first appeared in Ceph Luminous (OpenStack Queens).
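As referenced above, images created manually (outside of the charms) need the required image features enabled. A rough sketch using the rbd CLI, with 'mypool/myimage' as a hypothetical image name:

    # Create an image with the required features enabled
    rbd create --size 10G --image-feature exclusive-lock --image-feature journaling mypool/myimage
    # Or enable them on an existing image (exclusive-lock must be enabled before journaling)
    rbd feature enable mypool/myimage exclusive-lock
    rbd feature enable mypool/myimage journaling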
See file config.yaml of the built charm (or see the charm in the Charm
Store) for the full list of configuration options, along
with their descriptions and default values. See the Juju
documentation for details on configuring applications.
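For example, options can be inspected and changed at run time with juju config; the value shown is purely illustrative:

    # Show the current configuration of the application
    juju config ceph-rbd-mirror
    # Point the charm at a newer package source (this will trigger a software upgrade)
    juju config ceph-rbd-mirror source=cloud:bionic-rocky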
A standard topology consists of two Ceph clusters with each cluster residing in a separate Juju model. The deployment steps are fairly involved and are therefore covered under Ceph RBD Mirroring in the OpenStack Charms Deployment Guide.
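The guide remains the authoritative reference; the following is only a rough sketch of one direction of such a deployment. The model names (site-a, site-b), offer name, admin user, and ceph-mon application names are assumptions, and ceph-local/ceph-remote are the endpoints the charm uses for the local and remote clusters respectively:

    # In model site-a: deploy the mirror daemon and relate it to the local cluster
    juju switch site-a
    juju deploy ceph-rbd-mirror
    juju add-relation ceph-rbd-mirror:ceph-local ceph-mon
    # Offer the local cluster's rbd-mirror endpoint for cross-model consumption
    juju offer ceph-mon:rbd-mirror site-a-rbd-mirror

    # In model site-b: deploy a mirror daemon there as well, relate it to its own
    # cluster, then consume the site-a offer as that daemon's remote cluster
    juju switch site-b
    juju deploy ceph-rbd-mirror
    juju add-relation ceph-rbd-mirror:ceph-local ceph-mon
    juju consume admin/site-a.site-a-rbd-mirror
    juju add-relation ceph-rbd-mirror:ceph-remote site-a-rbd-mirror

The same offer/consume steps are then performed in the opposite direction to complete the two-way setup.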
This section lists Juju actions supported by the charm.
Actions allow specific operations to be performed on a per-unit basis. To
display action descriptions run
juju actions ceph-rbd-mirror. If the charm is
not deployed then see file actions.yaml.
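For example, assuming a deployed unit named ceph-rbd-mirror/0, the charm's status action reports mirror health for each pool:

    # List the actions the charm supports
    juju actions ceph-rbd-mirror
    # Query mirror status on a particular unit
    juju run-action --wait ceph-rbd-mirror/0 status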
Operational procedures touch upon pool creation, failover & fallback, and recovering from an abrupt shutdown. These topics are also covered under Ceph RBD Mirroring in the OpenStack Charms Deployment Guide.
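As a hedged sketch of the failover flow (the deployment guide has the full procedure), images are demoted on the primary site and promoted on the secondary site via actions; model and unit names are hypothetical:

    # Demote the primary images in site-a ...
    juju switch site-a
    juju run-action --wait ceph-rbd-mirror/0 demote
    # ... then promote the corresponding images in site-b
    juju switch site-b
    juju run-action --wait ceph-rbd-mirror/0 promote

Falling back is the same procedure performed in the opposite direction.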
Please report bugs on Launchpad.
For general charm questions refer to the OpenStack Charm Guide.
- source: (string) Repository from which to install Ceph. May be one of the following: distro (default), ppa:somecustom/ppa (PPA name must include UCA OpenStack Release name), deb url sources entry|key id, or a supported Ubuntu Cloud Archive pocket. Supported Ubuntu Cloud Archive pockets include: cloud:xenial-pike, cloud:xenial-queens, cloud:bionic-rocky. Note that updating this setting to a source that is known to provide a later version of Ceph will trigger a software upgrade.
- use-syslog: (boolean) Setting this to True will allow supporting services to log to syslog.