cinder-ceph #18

Supports: xenial bionic eoan focal trusty groovy

Description

Cinder is the block storage service for the OpenStack project. This charm provides a Ceph storage backend for Cinder.


Ceph Storage Backend for Cinder

Overview

This charm provides a Ceph storage backend for use with the Cinder charm; this allows multiple Ceph storage clusters to be associated with a single Cinder deployment, potentially alongside other storage backends from other vendors.

To use:

juju deploy cinder
juju deploy -n 3 ceph
juju deploy cinder-ceph
juju add-relation cinder-ceph cinder
juju add-relation cinder-ceph ceph
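Because each cinder-ceph application provides a separate backend, additional Ceph clusters can be attached to the same Cinder deployment by deploying further instances of the charm under different application names. The names below (ceph-secondary, cinder-ceph-secondary) are illustrative only:

juju deploy -n 3 ceph ceph-secondary
juju deploy cinder-ceph cinder-ceph-secondary
juju add-relation cinder-ceph-secondary cinder
juju add-relation cinder-ceph-secondary ceph-secondary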

Configuration

The cinder-ceph charm allows the replica count for the Ceph storage pool to be configured. This must be done in advance of relating to the ceph charm:

juju config cinder-ceph ceph-osd-replication-count=3
juju add-relation cinder-ceph ceph

By default, the replica count is set to 3 replicas (see the ceph-osd-replication-count option below). Increasing this value increases data resilience at the cost of consuming more raw storage in the Ceph cluster; for example, with 3 replicas a 100 GiB volume consumes roughly 300 GiB of raw cluster capacity.
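Once the pool has been created, the effective replica count can be confirmed directly against the Ceph cluster. The sketch below assumes the default pool name, which follows the cinder-ceph application name, and that the command is run against a ceph monitor unit:

juju ssh ceph/0 'sudo ceph osd pool get cinder-ceph size'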


Configuration options

backend-availability-zone
(string) Availability zone name of this volume backend. If set, it will override the default availability zone. Supported for Pike or newer releases.
ceph-osd-replication-count
(int) This value dictates the number of replicas Ceph must make of any object it stores within the cinder RBD pool. Of course, this only applies if using Ceph as a backend store. Note that once the cinder RBD pool has been created, changing this value will not have any effect (although it can be changed in Ceph by manually configuring your Ceph cluster).
Default: 3
ceph-pool-weight
(int) Defines a relative weighting of the pool as a percentage of the total amount of data in the Ceph cluster. This effectively weights the number of placement groups for the pool created to be appropriately portioned to the amount of data expected. For example, if the ephemeral volumes for the OpenStack compute instances are expected to take up 20% of the overall configuration then this value would be specified as 20. Note - it is important to choose an appropriate value for the pool weight as this directly affects the number of placement groups which will be created for the pool. The number of placement groups for a pool can only be increased, never decreased - so it is important to identify the percent of data that will likely reside in the pool.
Default: 40
ec-profile-crush-locality
(string) (lrc plugin) The type of the crush bucket in which each set of chunks defined by l will be stored. For instance, if it is set to rack, each group of l chunks will be placed in a different rack. It is used to create a CRUSH rule step such as step choose rack. If it is not set, no such grouping is done.
ec-profile-device-class
(string) Device class from CRUSH map to use for placement groups for erasure profile - valid values: ssd, hdd or nvme (or leave unset to not use a device class).
ec-profile-durability-estimator
(int) (shec plugin - c) The number of parity chunks each of which includes each data chunk in its calculation range. The number is used as a durability estimator. For instance, if c=2, 2 OSDs can be down without losing data.
ec-profile-helper-chunks
(int) (clay plugin - d) Number of OSDs requested to send data during recovery of a single chunk. d needs to be chosen such that k+1 <= d <= k+m-1. The larger the value of d, the better the savings.
ec-profile-k
(int) Number of data chunks that will be used for EC data pool. K+M factors should never be greater than the number of available zones (or hosts) for balancing.
Default: 1
ec-profile-locality
(int) (lrc plugin - l) Group the coding and data chunks into sets of size l. For instance, for k=4 and m=2, when l=3 two groups of three are created. Each set can be recovered without reading chunks from another set. Note that using the lrc plugin does incur more raw storage usage than isa or jerasure in order to reduce the cost of recovery operations.
ec-profile-m
(int) Number of coding chunks that will be used for EC data pool. K+M factors should never be greater than the number of available zones (or hosts) for balancing.
Default: 2
ec-profile-name
(string) Name for the EC profile to be created for the EC pools. If not defined a profile name will be generated based on the name of the pool used by the application.
ec-profile-plugin
(string) EC plugin to use for this application's pool. The following plugins are acceptable: jerasure, lrc, isa, shec, clay.
Default: jerasure
ec-profile-scalar-mds
(string) (clay plugin) specifies the plugin that is used as a building block in the layered construction. It can be one of jerasure, isa, shec (defaults to jerasure).
ec-profile-technique
(string) EC profile technique used for this application's pool - will be validated based on the plugin configured via ec-profile-plugin. Supported techniques are 'reed_sol_van', 'reed_sol_r6_op', 'cauchy_orig', 'cauchy_good', 'liber8tion' for jerasure, 'reed_sol_van', 'cauchy' for isa and 'single', 'multiple' for shec.
ec-rbd-metadata-pool
(string) Name of the metadata pool to be created (for RBD use-cases). If not defined a metadata pool name will be generated based on the name of the data pool used by the application. The metadata pool is always replicated, not erasure coded.
pool-type
(string) Ceph pool type to use for storage - valid values include ‘replicated’ and ‘erasure-coded’.
Default: replicated
rbd-flatten-volume-from-snapshot
(boolean) Flatten volumes created from snapshots to remove dependency from volume to snapshot. Supported on Queens+
rbd-pool-name
(string) Optionally specify an existing rbd pool that cinder should map to.
restrict-ceph-pools
(boolean) Optionally restrict Ceph key permissions to access pools as required.
use-syslog
(boolean) Setting this to True will configure services to log to syslog.
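As a worked example of how the erasure coding options fit together, the following sketch switches the backend to an erasure-coded data pool using the jerasure plugin with four data chunks and two coding chunks. The values are illustrative only and, as noted above, k+m (here 6) should not exceed the number of available hosts or zones; as with the replication count, this should be configured before relating to the Ceph cluster:

juju config cinder-ceph pool-type=erasure-coded
juju config cinder-ceph ec-profile-plugin=jerasure ec-profile-k=4 ec-profile-m=2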