ceph-mon #408

Supports: xenial bionic disco eoan trusty

Description

Ceph is a distributed storage and network file system designed to provide
excellent performance, reliability, and scalability.


Overview

Ceph is a unified, distributed storage system designed for
excellent performance, reliability, and scalability.

The ceph-mon charm deploys Ceph monitor nodes, allowing one to create a monitor
cluster. It is used in conjunction with the ceph-osd charm.
Together, these charms can scale out the amount of storage available in a Ceph
cluster.

Usage

Deployment

A cloud with three MON nodes is a typical design whereas three OSD nodes are
considered the minimum. For example, to deploy a Ceph cluster consisting of
three OSDs and three MONs:

juju deploy --config ceph-osd.yaml -n 3 ceph-osd
juju deploy --to lxd:0 ceph-mon
juju add-unit --to lxd:1 ceph-mon
juju add-unit --to lxd:2 ceph-mon
juju add-relation ceph-osd ceph-mon

Here, a containerised MON is running alongside each OSD.

By default, the monitor cluster will not be complete until three ceph-mon units
have been deployed. This is to ensure that a quorum is achieved prior to the
addition of storage devices.
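
For test or development environments, a single-unit monitor cluster can be
bootstrapped by lowering the monitor-count option (see the Configuration
section below). A minimal sketch:

juju deploy -n 1 --config monitor-count=1 ceph-mon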

See the Ceph documentation for notes on monitor cluster
deployment strategies.

Note: Refer to the Install OpenStack page in the
OpenStack Charms Deployment Guide for instructions on installing a monitor
cluster for use with OpenStack.

Network spaces

This charm supports the use of Juju network spaces (available with Juju 2.0
and later). This feature optionally allows specific types of the application's
network traffic to be bound to subnets that the underlying hardware is
connected to.

Note: Spaces must be configured in the backing cloud prior to deployment.

The ceph-mon charm exposes the following Ceph traffic types (bindings):

  • 'public' (front-side)
  • 'cluster' (back-side)

For example, provided that spaces 'data-space' and 'cluster-space' exist, the
deploy command above could look like this:

juju deploy --config ceph-mon.yaml -n 3 ceph-mon \
   --bind "public=data-space cluster=cluster-space"

Alternatively, configuration can be provided as part of a bundle:

    ceph-mon:
      charm: cs:ceph-mon
      num_units: 1
      bindings:
        public: data-space
        cluster: cluster-space

Refer to the Ceph Network Reference to learn about the
implications of segregating Ceph network traffic.

Note: Existing ceph-mon units configured with the ceph-public-network
or ceph-cluster-network options will continue to honour them. Furthermore,
these options override any space bindings, if set.
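
As an illustrative sketch, these options can also be supplied at deploy time
via a configuration file passed with --config (the subnets shown are
placeholders):

    ceph-mon:
      ceph-public-network: 10.20.0.0/24
      ceph-cluster-network: 10.30.0.0/24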

Actions

This section lists Juju actions supported by the charm.
Actions allow specific operations to be performed on a per-unit basis.
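
Actions are invoked with the juju run-action command against a specific unit.
For example, to check cluster health on the first ceph-mon unit (on recent
Juju 2.x releases the --wait flag returns the result directly; otherwise the
output can be retrieved with juju show-action-output):

juju run-action --wait ceph-mon/0 get-health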

copy-pool

Copy contents of a pool to a new pool.

create-cache-tier

Create a new cache tier.

create-crush-rule

Create a new replicated CRUSH rule to use on a pool.

create-erasure-profile

Create a new erasure code profile to use on a pool.

create-pool

Create a pool.
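
Most pool-related actions take parameters; the full parameter schema can be
inspected with 'juju actions ceph-mon --schema'. As an illustrative sketch,
assuming the action accepts a 'name' parameter for the new pool:

juju run-action --wait ceph-mon/0 create-pool name=mypool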

crushmap-update

Apply a new CRUSH map definition.

Warning: This action can break your cluster in unexpected ways if
misused.

delete-erasure-profile

Delete an erasure code profile.

delete-pool

Delete a pool.

get-erasure-profile

Display an erasure code profile.

get-health

Display cluster health.

list-erasure-profiles

List erasure code profiles.

list-pools

List pools.

pause-health

Pause the cluster's health operations.

pool-get

Get a value for a pool.

pool-set

Set a value for a pool.

pool-statistics

Display a pool's utilisation statistics.

remove-cache-tier

Remove a cache tier.

remove-pool-snapshot

Remove a pool's snapshot.

rename-pool

Rename a pool.

resume-health

Resume the cluster's health operations.

security-checklist

Validate the running configuration against the OpenStack security guides
checklist.

set-noout

Set the cluster's 'noout' flag.

set-pool-max-bytes

Set a pool's quota for the maximum number of bytes.

show-disk-free

Show disk utilisation by host and OSD.

snapshot-pool

Create a pool snapshot.

unset-noout

Unset the cluster's 'noout' flag.
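
Together with set-noout, pause-health, and resume-health, an illustrative
maintenance sequence for work on the storage nodes might look like this:

juju run-action --wait ceph-mon/0 set-noout
juju run-action --wait ceph-mon/0 pause-health
# ... perform maintenance ...
juju run-action --wait ceph-mon/0 resume-health
juju run-action --wait ceph-mon/0 unset-noout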

Bugs

Please report bugs on Launchpad.

For general charm questions refer to the OpenStack Charm Guide.


Configuration

auth-supported
(string) Which authentication flavour to use. Valid options are "cephx" and "none". If "none" is specified, keys will still be created and deployed so that authentication can be enabled later.
Default: cephx
ceph-cluster-network
(string) The IP address and netmask of the cluster (back-side) network (e.g., 192.168.0.0/24). If multiple networks are to be used, a space-delimited list of a.b.c.d/x can be provided.
ceph-public-network
(string) The IP address and netmask of the public (front-side) network (e.g., 192.168.0.0/24). If multiple networks are to be used, a space-delimited list of a.b.c.d/x can be provided.
config-flags
(string) User-provided Ceph configuration. Supports a string representation of a python dictionary where each top-level key represents a section in the ceph.conf template. You may only use sections supported in the template. WARNING: this is not the recommended way to configure the underlying services that this charm installs and is used at the user's own risk. This option is mainly provided as a stop-gap for users that either want to test the effect of modifying some config or who have found a critical bug in the way the charm has configured their services and need it fixed immediately. We ask that whenever this is used, the user consider opening a bug on this charm at http://bugs.launchpad.net/charms providing an explanation of why the config was needed so that we may consider it for inclusion as a natively supported config in the charm.
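
As an illustrative sketch only (the section and key shown are standard
ceph.conf entries; any value set this way is at the operator's own risk):

juju config ceph-mon config-flags="{'global': {'mon osd down out interval': 900}}"
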
customize-failure-domain
(boolean) Setting this to true will tell Ceph to replicate across Juju's Availability Zones instead of specifically by host.
default-rbd-features
(int) Default RBD features to use when creating new images. The value of this configuration option will be shared with consumers of the ``ceph-client`` interface and client charms may choose to add this to the Ceph configuration file on the units they manage. Example: rbd default features = 1. NOTE: If you have clients using the kernel RBD driver you must set this configuration option to a value corresponding to the features the driver in your kernel supports. The kernel RBD driver tends to be multiple cycles behind the userspace driver available for libvirt/qemu. Nova LXD is among the clients depending on the kernel RBD driver. NOTE: If you want to use the RBD mirroring feature you must either let this configuration option be the default or make sure the value you set includes the ``exclusive-lock`` and ``journaling`` features.
disable-pg-max-object-skew
(boolean) OpenStack clouds that use Ceph will typically start their life with at least one pool (glance) loaded with a disproportionately high amount of data/objects while other pools remain empty. This can trigger HEALTH_WARN if mon_pg_warn_max_object_skew is exceeded, but that warning is in fact a false positive.
expected-osd-count
(int) Number of OSDs expected to be deployed in the cluster. This value is used for calculating the number of placement groups on pool creation. The number of placement groups for new pools is based on the actual number of OSDs in the cluster or the expected-osd-count, whichever is greater. A value of 0 will cause the charm to only consider the actual number of OSDs in the cluster.
fsid
(string) The unique identifier (fsid) of the Ceph cluster. WARNING: this option should only be used when performing an in-place migration of an existing non-charm deployed Ceph cluster to a charm managed deployment.
harden
(string) Apply system hardening. Supports a space-delimited list of modules to run. Supported modules currently include os, ssh, apache and mysql.
key
(string) Key ID to import to the apt keyring to support use with arbitrary source configuration from outside of Launchpad archives or PPAs.
loglevel
(int) Mon and OSD debug level. Max is 20.
Default: 1
monitor-count
(int) Number of ceph-mon units to wait for before attempting to bootstrap the monitor cluster. For production clusters the default value of 3 ceph-mon units is normally a good choice. For test and development environments you can enable single-unit deployment by setting this to 1. NOTE: To establish quorum and enable partition tolerance an odd number of ceph-mon units is required.
Default: 3
monitor-hosts
(string) A space-separated list of ceph mon hosts to use. This field is only used to migrate an existing cluster to a juju-managed solution and should otherwise be left unset.
monitor-secret
(string) The Ceph secret key used by Ceph monitors. This value will become the mon.key. To generate a suitable value use 'ceph-authtool /dev/stdout --name=mon. --gen-key' (see the example below). If left empty, a secret key will be generated. NOTE: Changing this configuration after deployment is not supported and new service units will not be able to join the cluster.
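
For example, a sketch of generating and supplying the key at deploy time (this
assumes the ceph-authtool utility, typically from the ceph-common package, is
available on the client):

juju deploy -n 3 --config monitor-secret="$(ceph-authtool /dev/stdout --name=mon. --gen-key)" ceph-mon
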
nagios_additional_checks
(string) Dictionary describing additional checks. The key is the name of a check which will be visible in Nagios; the value is a string (regular expression) which is checked against status messages. Example: {'noout_set': 'noout', 'too_few_PGs': 'too few PGs', 'clock': 'clock skew', 'degraded_redundancy': 'Degraded data redundancy'}
nagios_additional_checks_critical
(boolean) Whether a positive match from an additional check is reported as critical (error) rather than warning.
nagios_check_num_osds
(boolean) Whether to report an error when the number of known OSDs does not equal the number of OSDs that are in or up.
nagios_context
(string) Used by the nrpe-external-master subordinate charm. A string that will be prepended to the instance name to set the hostname in Nagios. For instance, the hostname would be something like: juju-myservice-0. If you're running multiple environments with the same services in them, this allows you to differentiate between them.
Default: juju
nagios_degraded_thresh
(float) Threshold for degraded ratio (0.1 = 10%).
Default: 1
nagios_misplaced_thresh
(float) Threshold for misplaced ratio (0.1 = 10%).
Default: 1
nagios_raise_nodeepscrub
(boolean) Whether to report Critical instead of Warning when the nodeep-scrub flag is set.
Default: True
nagios_recovery_rate
(string) Recovery rate (in objects/s) below which we consider recovery to be stalled.
Default: 1
nagios_servicegroups
(string) A comma-separated list of nagios servicegroups. If left empty, the nagios_context will be used as the servicegroup.
no-bootstrap
(boolean) Causes the charm to not do any of the initial bootstrapping of the Ceph monitor cluster. This is only intended to be used when migrating from the ceph all-in-one charm to a ceph-mon / ceph-osd deployment. Refer to the Charm Deployment guide at https://docs.openstack.org/charm-deployment-guide/latest/ for more information.
pg-autotune
(string) The default configuration for pg-autotune will be to automatically enable the module for new cluster installs on Ceph Nautilus, but to leave it disabled for all cluster upgrades to Nautilus. To enable the pg-autotune feature for upgraded clusters, the pg-autotune option should be set to 'true'. To disable the autotuner for new clusters, the pg-autotune option should be set to 'false'.
Default: auto
pgs-per-osd
(int) The number of placement groups per OSD to target. It is important to properly size the number of placement groups per OSD as too many or too few placement groups per OSD may cause resource constraints and performance degradation. This value comes from the recommendation of the Ceph placement group calculator (http://ceph.com/pgcalc/). Recommended values are: 100 if the cluster OSD count is not expected to increase in the foreseeable future; 200 if the cluster OSD count is expected to increase (up to 2x) in the foreseeable future; 300 if the cluster OSD count is expected to increase between 2x and 3x in the foreseeable future.
Default: 100
prefer-ipv6
(boolean) If True enables IPv6 support. The charm will expect network interfaces to be configured with an IPv6 address. If set to False (default) IPv4 is expected. NOTE: these charms do not currently support IPv6 privacy extension. In order for this charm to function correctly, the privacy extension must be disabled and a non-temporary address must be configured/available on your network interface.
source
(string) Optional configuration to support use of additional sources such as 'ppa:myteam/ppa', 'cloud:xenial-proposed/ocata' or 'http://my.archive.com/ubuntu main'. The last option should be used in conjunction with the key configuration option.
sysctl
(string) YAML-formatted associative array of sysctl key/value pairs to be set persistently. By default we set pid_max, max_map_count and threads-max to a high value to avoid problems with large numbers (>20) of OSDs recovering. Very large clusters should set those values even higher (e.g. the max for kernel.pid_max is 4194303).
Default: { kernel.pid_max : 2097152, vm.max_map_count : 524288, kernel.threads-max: 2097152 }
use-direct-io
(boolean) Configure use of direct IO for OSD journals.
Default: True
use-syslog
(boolean) If set to True, supporting services will log to syslog.