ceph-fs

Supports: xenial bionic eoan focal groovy




Overview

Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability.

The ceph-fs charm deploys the metadata server daemon (MDS) for the Ceph distributed file system (CephFS). It is used in conjunction with the ceph-mon and ceph-osd charms.

Highly available CephFS is achieved by deploying multiple MDS servers (i.e. multiple ceph-fs units).



Configuration

This section covers common and/or important configuration options. See file config.yaml for the full list of options, along with their descriptions and default values. A YAML file (e.g. ceph-fs.yaml) is often used to store configuration options. See the Juju documentation for details on configuring applications.
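For illustration, a ceph-fs.yaml file could look like the following sketch (the option values are examples only):

ceph-fs:
  source: cloud:bionic-ussuri
  ceph-osd-replication-count: 3

It would then be passed at deploy time:

juju deploy --config ceph-fs.yaml ceph-fs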


The source option states the software sources. A common value is an OpenStack UCA release (e.g. 'cloud:xenial-queens' or 'cloud:bionic-ussuri'). See Ceph and the UCA. The underlying host's existing apt sources will be used if this option is not specified (this behaviour can be explicitly chosen by using the value of 'distro').
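For example, the option could be set on a deployed application (the UCA release shown is illustrative):

juju config ceph-fs source=cloud:bionic-ussuri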


Deployment

We are assuming a pre-existing Ceph cluster.

To deploy a single MDS node:

juju deploy ceph-fs

Then add a relation to the ceph-mon application:

juju add-relation ceph-fs:ceph-mds ceph-mon:mds
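The highly available setup mentioned earlier amounts to scaling out the application. A sketch, with an arbitrary number of additional units:

juju add-unit -n 2 ceph-fs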


Actions

This section lists Juju actions supported by the charm. Actions allow specific operations to be performed on a per-unit basis. To display action descriptions run juju actions ceph-fs. If the charm is not deployed then see file actions.yaml. An example invocation is shown after the list.

  • get-quota
  • remove-quota
  • set-quota
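As a sketch, a quota could be set on a CephFS directory with the set-quota action (the directory path and max-files value are illustrative; confirm the parameter names against actions.yaml):

juju run-action --wait ceph-fs/0 set-quota max-files=1024 directory=foo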


Bugs

Please report bugs on Launchpad.

For general charm questions refer to the OpenStack Charm Guide.


Configuration options

The full list of options, as defined in the charm's config.yaml, follows:

ceph-osd-replication-count (int): This value dictates the number of replicas ceph must make of any object it stores within the images rbd pool. Of course, this only applies if using Ceph as a backend store. Note that once the images rbd pool has been created, changing this value will not have any effect (although it can be changed in ceph by manually configuring your ceph cluster).

ceph-pool-weight (int): Defines a relative weighting of the pool as a percentage of the total amount of data in the Ceph cluster. This effectively weights the number of placement groups for the pool created to be appropriately portioned to the amount of data expected. For example, if the compute images for the OpenStack compute instances are expected to take up 20% of the overall configuration then this value would be specified as 20. Note: it is important to choose an appropriate value for the pool weight as this directly affects the number of placement groups which will be created for the pool. The number of placement groups for a pool can only be increased, never decreased, so it is important to identify the percent of data that will likely reside in the pool.

ceph-public-network (string): The IP address and netmask of the public (front-side) network (e.g. 192.168.0.0/24). If multiple networks are to be used, a space-delimited list of a.b.c.d/x can be provided.

ec-profile-crush-locality (string): (lrc plugin) The type of the CRUSH bucket in which each set of chunks defined by l will be stored. For instance, if it is set to rack, each group of l chunks will be placed in a different rack. It is used to create a CRUSH rule step such as 'step choose rack'. If it is not set, no such grouping is done.

ec-profile-device-class (string): Device class from CRUSH map to use for placement groups for erasure profile. Valid values: ssd, hdd or nvme (or leave unset to not use a device class).

ec-profile-durability-estimator (int): (shec plugin - c) The number of parity chunks each of which includes each data chunk in its calculation range. The number is used as a durability estimator. For instance, if c=2, 2 OSDs can be down without losing data.

ec-profile-helper-chunks (int): (clay plugin - d) Number of OSDs requested to send data during recovery of a single chunk. d needs to be chosen such that k+1 <= d <= k+m-1. The larger the d, the better the savings.

ec-profile-k (int): Number of data chunks that will be used for EC data pool. K+M factors should never be greater than the number of available zones (or hosts) for balancing.

ec-profile-locality (int): (lrc plugin - l) Group the coding and data chunks into sets of size l. For instance, for k=4 and m=2, when l=3 two groups of three are created. Each set can be recovered without reading chunks from another set. Note that using the lrc plugin does incur more raw storage usage than isa or jerasure in order to reduce the cost of recovery operations.

ec-profile-m (int): Number of coding chunks that will be used for EC data pool. K+M factors should never be greater than the number of available zones (or hosts) for balancing.

ec-profile-name (string): Name for the EC profile to be created for the EC pools. If not defined, a profile name will be generated based on the name of the pool used by the application.

ec-profile-plugin (string): EC plugin to use for this application's pool. The following plugins are acceptable: jerasure, lrc, isa, shec, clay.

ec-profile-scalar-mds (string): (clay plugin) Specifies the plugin that is used as a building block in the layered construction. It can be one of jerasure, isa, shec (defaults to jerasure).

ec-profile-technique (string): EC profile technique used for this application's pool; it will be validated based on the plugin configured via ec-profile-plugin. Supported techniques are 'reed_sol_van', 'reed_sol_r6_op', 'cauchy_orig', 'cauchy_good', 'liber8tion' for jerasure; 'reed_sol_van', 'cauchy' for isa; and 'single', 'multiple' for shec.

key (string): Key ID to import to the apt keyring to support use with arbitrary source configuration from outside of Launchpad archives or PPAs.

loglevel (int): Mon and OSD debug level. Max is 20.

metadata-pool (string): Name of the metadata pool to be created/used. If not defined, a metadata pool name will be generated based on the name of the application. The metadata pool is always replicated, not erasure coded.

pool-type (string): Ceph pool type to use for storage. Valid values include 'replicated' and 'erasure-coded'.

prefer-ipv6 (boolean): If True enables IPv6 support. The charm will expect network interfaces to be configured with an IPv6 address. If set to False (default) IPv4 is expected. NOTE: these charms do not currently support IPv6 privacy extension. In order for this charm to function correctly, the privacy extension must be disabled and a non-temporary address must be configured/available on your network interface.

rbd-pool-name (string): Name of the data pool to be created/used. If not defined, a data pool name will be generated based on the name of the application.

source (string): Optional configuration to support use of additional sources such as:
  - ppa:myteam/ppa
  - cloud:bionic-ussuri
  - cloud:xenial-proposed/queens
  - http://my.archive.com/ubuntu main
The last option should be used in conjunction with the key configuration option.

use-syslog (boolean): If set to True, supporting services will log to syslog.
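For illustration, several of the erasure-coding options above could be combined at deploy time to back CephFS with an erasure-coded data pool (the k and m values are examples only and must not exceed the number of available zones or hosts):

juju deploy --config pool-type=erasure-coded --config ec-profile-k=4 --config ec-profile-m=2 ceph-fs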