cinder #405

Supports: xenial bionic eoan trusty

Description

Cinder is the block storage service for OpenStack.


Overview

This charm provides the Cinder volume service for OpenStack. It is intended to
be used alongside the other OpenStack components.

Usage

Deployment

Two deployment configurations will be shown. Both assume the existence of core
OpenStack services: mysql, rabbitmq-server, keystone, and
nova-cloud-controller.

Storage backed by LVM-iSCSI

With this configuration, a block device (local to the cinder unit) is used as
an LVM physical volume. A logical volume is created (openstack volume create)
and exported to a cloud instance via iSCSI (openstack server add volume).
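
For example, once the cloud is running, a volume could be created and attached
to an instance with commands along the following lines (the volume size, volume
name, and instance name are illustrative):

    openstack volume create --size 10 vol1
    openstack server add volume vm1 vol1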

Note: It is not recommended to use the LVM storage method for anything
other than testing or for small non-production deployments.

A sample cinder.yaml file's contents:

    cinder:
        block-device: sdc

Important: Make sure the designated block device exists and is not
currently in use.

Deploy and add relations in this way:

    juju deploy --config cinder.yaml cinder

    juju add-relation cinder:cinder-volume-service nova-cloud-controller:cinder-volume-service
    juju add-relation cinder:shared-db mysql:shared-db
    juju add-relation cinder:identity-service keystone:identity-service
    juju add-relation cinder:amqp rabbitmq-server:amqp
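
Deployment progress can then be followed with, for example:

    juju status cinder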

Note: It has been reported that the LVM storage method may not properly
initialise the physical volume and volume group. See bug
LP #1862392.

Storage backed by Ceph

Here, storage volumes are backed by Ceph to allow for scalability and
redundancy. This is intended for large-scale production deployments. These
instructions assume a functioning Ceph cluster has been deployed to the cloud.

Note: The Ceph storage method is the recommended method for production
deployments.

File cinder.yaml contains the following:

    cinder:
        block-device: None

Deploy and add relations as in the standard configuration (using the altered
YAML file). However, to use Ceph as the backend the intermediary cinder-ceph
charm is required:

    juju deploy cinder-ceph

Then add relations from the cinder-ceph application to both Cinder and Ceph:

    juju add-relation cinder-ceph:storage-backend cinder:storage-backend
    juju add-relation cinder-ceph:ceph ceph-mon:client
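
Once the relations have settled, the Ceph-backed volume service should appear
in the output of the usual OpenStack client query, for example:

    openstack volume service list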

High availability

This charm supports high availability. There are two mutually exclusive
HA/clustering strategies:

  • virtual IP(s)
  • DNS

In both cases, the hacluster subordinate charm is required. It provides the
corosync back end for HA functionality.

virtual IP(s)

To use virtual IP(s) the clustered nodes and the VIP must be on the same
subnet. That is, the VIP must be a valid IP address on a subnet to which every
clustered node has an interface attached. The VIP becomes a highly-available
API endpoint.

At a minimum, the configuration option vip must be defined. If multiple
networks are in use, provide a space-separated list of VIPs, one per network.
Optionally, the options vip_iface and vip_cidr may also be specified.
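
A minimal sketch of a clustered deployment fronted by a VIP could look like the
following; the unit count, the VIP address, and the hacluster application name
are assumptions for illustration:

    # assumes the cinder.yaml shown earlier; VIP and names are examples
    juju deploy -n 3 --config cinder.yaml --config vip=10.0.0.100 cinder
    juju deploy hacluster cinder-hacluster
    juju add-relation cinder-hacluster:ha cinder:ha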

DNS

DNS high availability does not require the clustered nodes to be on the same
subnet.

It does require:

  • an environment with MAAS 2.0 and Juju 2.0 (as minimum versions)
  • clustered nodes with static or "reserved" IP addresses registered in MAAS
  • DNS hostnames that are pre-registered in MAAS

At a minimum, the configuration option dns-ha must be set to 'true' and at
least one of os-admin-hostname, os-internal-hostname, or
os-public-hostname must be set.
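
For illustration, a DNS HA configuration might be supplied in cinder.yaml like
this (the hostname is a hypothetical value that would need to be pre-registered
in MAAS):

    cinder:
        dns-ha: true
        # hostname must already exist in MAAS
        os-public-hostname: cinder.example.com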

The charm will throw an exception in the following circumstances:

  • if neither vip nor dns-ha is set and the charm has a relation added to
    hacluster
  • if both vip and dns-ha are set
  • if dns-ha is set and none of os-admin-hostname, os-internal-hostname,
    or os-public-hostname are set

Network spaces

This charm supports the use of Juju network spaces (available with Juju 2.0
and later). This feature optionally allows specific types of the application's
network traffic to be bound to subnets that the underlying hardware is
connected to.

Note: Spaces must be configured in the backing cloud prior to deployment.

API endpoints can be bound to distinct network spaces supporting the network
separation of public, internal, and admin endpoints.

Access to the underlying MySQL instance can also be bound to a specific space
using the shared-db relation.

For example, providing that spaces 'public-space', 'internal-space', and
'admin-space' exist, the deploy command above could look like this:

    juju deploy --config cinder.yaml cinder \
       --bind "public=public-space internal=internal-space admin=admin-space shared-db=internal-space"

Alternatively, configuration can be provided as part of a bundle:

    cinder:
      charm: cs:cinder
      num_units: 1
      bindings:
        public: public-space
        internal: internal-space
        admin: admin-space
        shared-db: internal-space

Note: Existing cinder units configured with the os-admin-network,
os-internal-network, or os-public-network options will continue to honour
them. Furthermore, these options override any space bindings, if set.

Actions

This section covers Juju actions supported by the charm.
Actions allow specific operations to be performed on a per-unit basis.

openstack-upgrade

Perform the OpenStack service upgrade. Configuration option
action-managed-upgrade must be set to 'True'.
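
For example, on a Juju 2.x client the upgrade of a single unit might be run as
follows (the unit name is illustrative):

    juju config cinder action-managed-upgrade=true
    juju run-action cinder/0 openstack-upgrade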

pause

Pause the cinder unit. This action will stop the Cinder service.

remove-services

Remove unused service entities from the database after enabling HA with a
stateless backend such as the cinder-ceph application.

rename-volume-host

Update the host attribute of volumes from currenthost to newhost.
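
For example, assuming the action parameters are named currenthost and newhost
as described above (the host values shown are illustrative):

    juju run-action cinder/0 rename-volume-host currenthost=cinder-old newhost=cinder-new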

resume

Resume the cinder unit. This action will start the Cinder service if paused.

security-checklist

Validate the running configuration against the OpenStack security guide
checklist.

volume-host-add-driver

Update the 'os-vol-host-attr:host' volume attribute. Used for migrating volumes
to another backend.

Policy Overrides

Policy overrides is an advanced feature that allows an operator to override the
default policy of an OpenStack service. The policies that the service supports,
the defaults it implements in its code, and the defaults that a charm may
include should all be clearly understood before proceeding.

Caution: It is possible to break the system (for tenants and other
services) if policies are incorrectly applied to the service.

Policy statements are placed in a YAML file. This file (or files) is then (ZIP)
compressed into a single file and used as an application resource. The override
is then enabled via a Boolean charm option.
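
For illustration, an override file could contain a single rule; the policy
target below is only an example and must correspond to a policy that the
running Cinder service actually defines:

    # override-file.yaml (rule shown for illustration only)
    "volume:delete": "rule:admin_api"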

Here are the essential commands (filenames are arbitrary):

    zip overrides.zip override-file.yaml
    juju attach-resource cinder policyd-override=overrides.zip
    juju config cinder use-policyd-override=true

See appendix Policy Overrides in the OpenStack Charms
Deployment Guide
for a thorough treatment of this feature.

Bugs

Please report bugs on Launchpad.

For general charm questions refer to the OpenStack Charm Guide.


Configuration

action-managed-upgrade
(boolean) If True enables openstack upgrades for this charm via juju actions. You will still need to set openstack-origin to the new repository but instead of an upgrade running automatically across all units, it will wait for you to execute the openstack-upgrade action for this charm on each unit. If False it will revert to existing behavior of upgrading all units on config change.
api-listening-port
(int) OpenStack Volume API listening port.
8776
block-device
(string) The block device(s) on which to create the LVM volume group. May be set to None for deployments that will not need local storage (e.g. Ceph/RBD-backed volumes). This can also be a space-delimited list of block devices to attempt to use in the cinder LVM volume group - each block device detected will be added to the available physical volumes in the volume group. May also be set to the path and size of a local file (/path/to/file.img|$sizeG), which will be created and used as a loopback device (for testing only); $sizeG defaults to 5G.
sdb
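
For example, a loopback file suitable for testing could be configured as
follows; the path and size are illustrative, and the quoting prevents the shell
from interpreting the pipe character:

    juju config cinder block-device='/var/lib/cinder-test.img|10G'
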
ceph-osd-replication-count
(int) This value dictates the number of replicas ceph must make of any object it stores within the cinder rbd pool. Of course, this only applies if using Ceph as a backend store. Note that once the cinder rbd pool has been created, changing this value will not have any effect (although the configuration of a pool can always be changed within ceph itself or via the charm used to deploy ceph).
3
config-flags
(string) Comma-separated list of key=value config flags. These values will be placed in the cinder.conf [DEFAULT] section.
database
(string) Database to request access to.
cinder
database-user
(string) Username to request database access.
cinder
debug
(boolean) Enable debug logging.
dns-ha
(boolean) Use DNS HA with MAAS 2.0. Note: if this is set, do not set the vip settings below.
enabled-services
(string) If splitting cinder services between units, define which services to install and configure.
all
ephemeral-unmount
(string) Cloud instances provide ephemeral storage which is normally mounted on /mnt. Providing this option will force an unmount of the ephemeral device so that it can be used as a Cinder storage device. This is useful for testing purposes (cloud deployment is not a typical use case).
glance-api-version
(int) Newer storage drivers may require the v2 Glance API to perform certain actions, e.g. the RBD driver requires this to support COW cloning of images. This option will default to v1 for backwards compatibility with older glance services.
1
ha-bindiface
(string) Default network interface on which the HA cluster will bind for communication with the other members of the HA cluster.
eth0
ha-mcastport
(int) Default multicast port number that will be used to communicate between HA Cluster nodes.
5454
haproxy-client-timeout
(int) Client timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 90000ms is used.
haproxy-connect-timeout
(int) Connect timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 9000ms is used.
haproxy-queue-timeout
(int) Queue timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 9000ms is used.
haproxy-server-timeout
(int) Server timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 90000ms is used.
harden
(string) Apply system hardening. Supports a space-delimited list of modules to run. Supported modules currently include os, ssh, apache and mysql.
nagios_context
(string) Used by the nrpe-external-master subordinate charm. A string that will be prepended to the instance name to set the host name in nagios. For instance, the hostname would be something like 'juju-myservice-0'. If you are running multiple environments with the same services in them this allows you to differentiate between them.
juju
nagios_servicegroups
(string) A comma-separated list of nagios servicegroups. If left empty, the nagios_context will be used as the servicegroup.
notification-topics
(string) A comma-separated list of oslo notification topics. If left empty, the default topic 'cinder' will be used.
openstack-origin
(string) Repository from which to install. May be one of the following: distro (default), ppa:somecustom/ppa, a deb url sources entry, or a supported Ubuntu Cloud Archive pocket, e.g. cloud:<series>-<openstack-release>, cloud:<series>-<openstack-release>/updates, cloud:<series>-<openstack-release>/staging, or cloud:<series>-<openstack-release>/proposed. See https://wiki.ubuntu.com/OpenStack/CloudArchive for info on which cloud archives are available and supported. NOTE: updating this setting to a source that is known to provide a later version of OpenStack will trigger a software upgrade unless action-managed-upgrade is set to True.
distro
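
For example, pointing an existing deployment at a newer Ubuntu Cloud Archive
pocket could be done like this (the pocket shown is an assumption and must
match the unit's Ubuntu series):

    juju config cinder openstack-origin=cloud:bionic-train
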
os-admin-hostname
(string) The hostname or address of the admin endpoints created for cinder in the keystone identity provider. This value will be used for admin endpoints. For example, an os-admin-hostname set to 'cinder.admin.example.com' with ssl enabled will create two admin endpoints for cinder: https://cinder.admin.example.com:443/v2/$(tenant_id)s and https://cinder.admin.example.com:443/v3/$(tenant_id)s
os-admin-network
(string) The IP address and netmask of the OpenStack Admin network (e.g. 192.168.0.0/24). This network will be used for admin endpoints.
os-internal-hostname
(string) The hostname or address of the internal endpoints created for cinder in the keystone identity provider. This value will be used for internal endpoints. For example, an os-internal-hostname set to 'cinder.internal.example.com' with ssl enabled will create two internal endpoints for cinder: https://cinder.internal.example.com:443/v2/$(tenant_id)s and https://cinder.internal.example.com:443/v3/$(tenant_id)s
os-internal-network
(string) The IP address and netmask of the OpenStack Internal network (e.g. 192.168.0.0/24). This network will be used for internal endpoints.
os-public-hostname
(string) The hostname or address of the public endpoints created for cinder in the keystone identity provider. This value will be used for public endpoints. For example, an os-public-hostname set to 'cinder.example.com' with ssl enabled will create two public endpoints for cinder: https://cinder.example.com:443/v2/$(tenant_id)s and https://cinder.example.com:443/v3/$(tenant_id)s
os-public-network
(string) The IP address and netmask of the OpenStack Public network (e.g. 192.168.0.0/24). This network will be used for public endpoints.
overwrite
(string) If true, the charm will attempt to overwrite block devices containing previous filesystems or LVM, assuming they are not in use.
false
prefer-ipv6
(boolean) If True enables IPv6 support. The charm will expect network interfaces to be configured with an IPv6 address. If set to False (default) IPv4 is expected. NOTE: these charms do not currently support IPv6 privacy extension. In order for this charm to function correctly, the privacy extension must be disabled and a non-temporary address must be configured/available on your network interface.
rabbit-user
(string) Username to request access on rabbitmq-server.
cinder
rabbit-vhost
(string) RabbitMQ virtual host to request access on rabbitmq-server.
openstack
region
(string) OpenStack Region
RegionOne
remove-missing
(boolean) If True, the charm will attempt to remove missing physical volumes from the volume group, if logical volumes are not allocated on them.
remove-missing-force
(boolean) If True, the charm will attempt to remove missing physical volumes from the volume group, even when logical volumes are allocated on them. This option overrides 'remove-missing' when set.
restrict-ceph-pools
(boolean) Cinder can optionally restrict the key it asks Ceph for to only be able to access the pools it needs.
ssl_ca
(string) SSL CA to use with the certificate and key provided - this is only required if you are providing a privately signed ssl_cert and ssl_key.
ssl_cert
(string) SSL certificate to install and use for API ports. Setting this value and ssl_key will enable reverse proxying, point Cinder's entry in the Keystone catalog to use https, and override any certificate and key issued by Keystone (if it is configured to do so).
ssl_key
(string) SSL key to use with certificate specified as ssl_cert.
use-internal-endpoints
(boolean) OpenStack mostly defaults to using public endpoints for internal communication between services. If set to True this option will configure services to use internal endpoints where possible.
use-policyd-override
(boolean) If True then use the resource file named 'policyd-override' to install override YAML files in the service's policy.d directory. The resource file should be a ZIP file containing at least one yaml file with a .yaml or .yml extension. If False then remove the overrides.
use-syslog
(boolean) Setting this to True will allow supporting services to log to syslog.
verbose
(boolean) Enable verbose logging.
vip
(string) Virtual IP(s) to use to front API services in HA configuration. If multiple networks are being used, a VIP should be provided for each network, separated by spaces.
vip_cidr
(int) Default CIDR netmask to use for HA vip when it cannot be automatically determined.
24
vip_iface
(string) Default network interface to use for HA vip when it cannot be automatically determined.
eth0
volume-group
(string) Name of volume group to create and store Cinder volumes.
cinder-volumes
volume-usage-audit-period
(string) Time period for which to generate volume usages. The options are hour, day, month, or year.
month
worker-multiplier
(float) The CPU core multiplier to use when configuring worker processes for Cinder. By default, the number of workers for each daemon is set to twice the number of CPU cores a service unit has. When deployed in a LXD container, this default value will be capped to 4 workers unless this configuration option is set.