Cinder is the block storage service for the OpenStack project.

This charm provides the Cinder volume service for OpenStack. It is intended to
be used alongside the other OpenStack components, starting with the Folsom
release.
Cinder is made up of three separate services: an API service, a scheduler and
a volume service. This charm allows them to be deployed in different
combinations, depending on user preference and requirements.
This charm was developed to support deploying Folsom on both
Ubuntu Quantal and Ubuntu Precise. Since Cinder is only available for
Ubuntu 12.04 via the Ubuntu Cloud Archive, deploying this charm to a
Precise machine will by default install Cinder and its dependencies from
the Cloud Archive.
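For example, a minimal deploy config that explicitly pins the install source
to the Cloud Archive updates pocket might look like the following sketch (the
pocket shown is one of the supported sources listed in the configuration
options below):

```yaml
cinder:
  openstack-origin: cloud:precise-folsom/updates
```

Passing this file via `juju deploy --config=cinder.cfg cinder` selects the
repository before any packages are installed.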
Cinder may be deployed in a number of ways. This charm focuses on three main
configurations. All require the existence of the other core OpenStack
services deployed via Juju charms, specifically: mysql, rabbitmq-server,
keystone and nova-cloud-controller. The following assumes these services
have already been deployed.
a. Basic, all-in-one using local storage and iSCSI.
The API server, scheduler and volume service are all deployed into the same
unit. Local storage will be initialized as an LVM physical device, and a
volume group created on it. Instance volumes will be created locally as
logical volumes and exported to instances via iSCSI. This is ideal for
small-scale deployments.
$ cat >cinder.cfg <<END
cinder:
  block-device: sdc
  overwrite: true
END
$ juju deploy --config=cinder.cfg cinder
$ juju add-relation cinder keystone
$ juju add-relation cinder mysql
$ juju add-relation cinder rabbitmq-server
$ juju add-relation cinder nova-cloud-controller
b. Separate volume units for scale out, using local storage and iSCSI.
Separating the volume service from the API service allows the storage pool
to easily scale without the added complexity that accompanies load-balancing
the API server. When local storage on a volume server is exhausted, we can
simply add a unit to expand capacity. Future requests to allocate volumes
will be distributed across the pool of volume servers according to the
availability of storage space.
$ cat >cinder.cfg <<END
cinder-api:
  enabled-services: api, scheduler
cinder-volume:
  enabled-services: volume
  block-device: sdc
  overwrite: true
END
$ juju deploy --config=cinder.cfg cinder cinder-api
$ juju deploy --config=cinder.cfg cinder cinder-volume
$ juju add-relation cinder-api mysql
$ juju add-relation cinder-api rabbitmq-server
$ juju add-relation cinder-api keystone
$ juju add-relation cinder-api nova-cloud-controller
$ juju add-relation cinder-volume mysql
$ juju add-relation cinder-volume rabbitmq-server
# When more storage is needed, simply add more volume servers.
$ juju add-unit cinder-volume
c. All-in-one using Ceph-backed RBD volumes.
All three services can be deployed to the same unit, but instead of relying
on local storage to back volumes, an external Ceph cluster is used. This
allows scalability and redundancy needs to be satisfied, with Cinder's RBD
driver used to create, export and connect volumes to instances. This assumes
a functioning Ceph cluster has already been deployed using the official Ceph
charm and a relation exists between the Ceph service and nova-compute.
$ cat >cinder.cfg <<END
cinder:
  block-device: None
END
$ juju deploy --config=cinder.cfg cinder
$ juju add-relation cinder ceph
$ juju add-relation cinder keystone
$ juju add-relation cinder mysql
$ juju add-relation cinder rabbitmq-server
$ juju add-relation cinder nova-cloud-controller
The default value for most config options should work for most deployments.
Users should be aware of four options, in particular:
openstack-origin: Allows Cinder to be installed from a specific apt repository.
See config.yaml for a list of supported sources.
block-device: When using local storage, a block device should be specified to
              back an LVM volume group. It's important that this device
              exists on all nodes that the service may be deployed to.
overwrite: Whether or not to wipe local storage of data that may prevent it
           from being initialized as an LVM physical device. This includes
           filesystems and partition tables. CAUTION: enabling this will
           destroy any existing data on the block device.
enabled-services: Can be used to separate cinder services between separate
                  service units (see previous section).
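As a concrete illustration of what the overwrite option implies (this is a
sketch of the effect, not the charm's actual code): zeroing the start of a
device destroys any leftover filesystem signature or partition table, which
is what allows the device to then be initialized as an LVM physical volume.
A plain file stands in for the block device here.

```shell
# Create a file-backed stand-in for a block device with a leftover filesystem.
dd if=/dev/zero of=fake-disk.img bs=1M count=8 2>/dev/null
mkfs.ext4 -Fq fake-disk.img
file fake-disk.img            # reports an ext4 filesystem

# Zero the first megabyte: the ext4 superblock (at offset 1024) is destroyed,
# leaving a blank device that LVM can safely claim.
dd if=/dev/zero of=fake-disk.img bs=1M count=1 conv=notrunc 2>/dev/null
file fake-disk.img            # signatures gone: reports plain data
```

This is why overwrite must be used with caution: any data on the device is
unrecoverable afterwards.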
All configuration options (from config.yaml):

- api-listening-port: (int) OpenStack Volume API listening port.
- block-device: (string) The *available* block device on which to create the
  LVM volume group. May also be set to None for deployments that will not
  need local storage (e.g., Ceph/RBD-backed volumes).
- database: (string) Database to request access to.
- database-user: (string) Username to request database access.
- enabled-services: (string) If splitting cinder services between units,
  defines which services to install and configure.
- glance-api-version: (int) Some drivers in Grizzly require the v2 Glance API
  to perform certain actions, e.g. the Ceph RBD driver requires the v2 API to
  perform copy-on-write cloning of images. This option will only be set for
  Grizzly and up, and will default to v1 for backwards compatibility with
  related glance services.
- ha-bindiface: (string) Default network interface on which the HA cluster
  will bind to communicate with the other members of the HA cluster.
- ha-mcastport: (int) Default multicast port number that will be used to
  communicate between HA cluster nodes.
- openstack-origin: (string) Repository from which to install. May be one of
  the following: distro (default), ppa:somecustom/ppa, a deb url sources
  entry, or a supported Cloud Archive release pocket. Supported Cloud Archive
  sources include: cloud:precise-folsom, cloud:precise-folsom/updates,
  cloud:precise-folsom/staging, cloud:precise-folsom/proposed. When deploying
  to Precise, the default distro option will use the
  cloud:precise-folsom/updates repository instead, since Cinder was not
  available in the Ubuntu archive for Precise and is only available via the
  Ubuntu Cloud Archive.
- overwrite: (string) If "true", the charm will attempt to overwrite block
  devices containing previous filesystems or LVM, assuming they are not in
  use.
- rabbit-user: (string) Username to request access on rabbitmq-server.
- rabbit-vhost: (string) RabbitMQ virtual host to request access on
  rabbitmq-server.
- region: (string) OpenStack region name.
- ssl_cert: (string) SSL certificate to install and use for API ports.
  Setting this value and ssl_key will enable reverse proxying, point Cinder's
  entry in the Keystone catalog to use https, and override any certificate
  and key issued by Keystone (if it is configured to do so).
- ssl_key: (string) SSL key to use with the certificate specified as
  ssl_cert.
- vip: (string) Virtual IP used to front the cinder API in an HA
  configuration.
- vip_cidr: (int) Netmask that will be used for the virtual IP.
- vip_iface: (string) Network interface on which to place the virtual IP.
- volume-group: (string) Name of the volume group to create and store Cinder
  volumes.
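Pulling several of these options together, a sketch of a deploy config for a
Ceph-backed, HA-fronted cinder service might look like the following. All
values here are placeholders chosen for illustration; substitute addresses
and interface names from your own environment.

```yaml
cinder:
  openstack-origin: cloud:precise-folsom/updates
  block-device: None        # volumes are backed by Ceph, no local storage
  region: RegionOne
  vip: 192.168.1.100        # placeholder virtual IP fronting the API
  vip_cidr: 24
  vip_iface: eth0
```

As in the earlier examples, such a file is passed at deploy time with
`juju deploy --config=cinder.cfg cinder`.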