cinder-ceph #13

Supports: xenial bionic eoan focal trusty

Description

Cinder is the block storage service for the OpenStack project. This charm provides a Ceph storage backend for Cinder.


Ceph Storage Backend for Cinder

Overview

This charm provides a Ceph storage backend for use with the Cinder charm; this allows multiple Ceph storage clusters to be associated with a single Cinder deployment, potentially alongside other storage backends from other vendors.

To use:

juju deploy cinder
juju deploy -n 3 ceph
juju deploy cinder-ceph
juju add-relation cinder-ceph cinder
juju add-relation cinder-ceph ceph
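
Because each cinder-ceph application is an independent backend, a second Ceph cluster can be attached by deploying the charm again under a different application name. The following is a sketch only; the names cinder-ceph-ssd and ceph-ssd are hypothetical and stand in for a second backend and a second Ceph cluster:

juju deploy cinder-ceph cinder-ceph-ssd
juju add-relation cinder-ceph-ssd cinder
juju add-relation cinder-ceph-ssd ceph-ssd

Each related cinder-ceph application is then presented to Cinder as a separate storage backend.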

Configuration

The cinder-ceph charm allows the replica count for the Ceph storage pool to be configured. This must be done in advance of relating to the ceph charm:

juju set cinder-ceph ceph-osd-replication-count=3
juju add-relation cinder-ceph ceph

The default replica count is 3 (see ceph-osd-replication-count below). Increasing this value increases data resilience at the cost of consuming more raw storage in the Ceph cluster.
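
To verify the replica count actually applied, the pool's size attribute can be queried from a Ceph unit. This is a sketch only; it assumes the pool carries the default name cinder-ceph (see rbd-pool-name below):

juju ssh ceph/0 'sudo ceph osd pool get cinder-ceph size'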


Configuration options

ceph-osd-replication-count
(int) This value dictates the number of replicas Ceph must make of any object it stores within the cinder rbd pool. This only applies when Ceph is used as a backend store. Note that once the cinder rbd pool has been created, changing this value will not have any effect (although it can still be changed in Ceph by manually configuring the cluster).
Default: 3
ceph-pool-weight
(int) Defines a relative weighting of the pool as a percentage of the total amount of data in the Ceph cluster. This effectively weights the number of placement groups created for the pool so that it is appropriately proportioned to the amount of data expected. For example, if the ephemeral volumes for the OpenStack compute instances are expected to take up 20% of the overall storage, this value would be specified as 20. Note: it is important to choose an appropriate value for the pool weight, as this directly affects the number of placement groups created for the pool. The number of placement groups for a pool can only be increased, never decreased, so it is important to identify the percentage of data that will likely reside in the pool.
Default: 40
ec-profile-extra-chunk
(int) Ceph provides multiple Erasure Coding plugins, which are used to calculate where chunks should be placed. Most of these plugins define a third type of chunk for the purposes of their own balancing, e.g. the "l" chunk in LRC. For the jerasure and isa plugins this value is ignored. For lrc it defines the l parameter, which determines how many chunks must be read before recovering from an object loss (instead of reading from other hosts). For shec it is the "c" parameter and corresponds to the number of OSDs that can go down before data is lost. For clay it is the "d" parameter and corresponds to the number of OSDs contacted at recovery time.
ec-profile-k
(int) Number of data chunks that will be used for the EC data pool. The K+M factor should never be greater than the number of availability zones available for balancing.
Default: 1
ec-profile-m
(int) Number of coding chunks that will be used for the EC data pool. The K+M factor should never be greater than the number of availability zones available for balancing.
Default: 2
ec-profile-name
(string) Name of the erasure-code-profile to be created for EC pools. If not defined, this defaults to the rbd pool name with "-profile" appended.
ec-profile-plugin
(string) EC plugin to use for this deployment. Acceptable plugins are: jerasure, lrc, isa, shec, clay.
Default: jerasure
ec-profile-technique
(string) EC profile technique used for this deployment.
Default: reed_sol_van
ec-rbd-metadata-pool
(string) Name of the metadata pool to be created. Metadata pools are used in conjunction with erasure-coded data pools; data pools use the rbd-pool-name value. Only alphanumeric characters or dashes should be used. If not defined, this defaults to the rbd pool name with "-metadata" appended.
pool-type
(string) If set to "erasure-coded", the cinder-ceph charm will create two pools: one for metadata (replicated) and one for the actual data (erasure-coded). Any other value will make the charm treat the pool as "replicated". For EC, the K+M factor is defined by the ec-profile-{k,m} options; see the example after this list.
Default: replicated
rbd-flatten-volume-from-snapshot
(boolean) Flatten volumes created from snapshots to remove the dependency of the volume on the snapshot. Supported on Queens+.
rbd-pool-name
(string) Optionally specify an existing rbd pool that cinder should map to.
restrict-ceph-pools
(boolean) Optionally restrict Ceph key permissions to access pools as required.
use-syslog
(boolean) Setting this to True will configure services to log to syslog.
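
As a worked example of the erasure coding options above (referenced under pool-type), the following sketch switches the data pool to erasure coding with a 4+2 jerasure profile. The values are illustrative only and, as with the replication count, must be set before the pools are created; on older Juju releases, juju set replaces juju config:

juju config cinder-ceph pool-type=erasure-coded \
    ec-profile-plugin=jerasure \
    ec-profile-k=4 \
    ec-profile-m=2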