manila #96
Description
The Shared File Systems service (manila) provides a set of services for managing shared file systems in a multi-tenant cloud environment. The service resembles the block-based storage management provided by the OpenStack Block Storage (cinder) project. With the Shared File Systems service, you can create a remote file system, mount it on your instances, and then read and write data from your instances to and from that file system.
Tags: openstack
Overview
This charm provides the Manila shared file service for an OpenStack Cloud.
In order to use the manila charm, a suitable backend charm is needed to configure a share backend. Without a backend subordinate charm related to the manila charm, no manila backends will be configured, and the manila charm will remain in the blocked state.
Manila share backends are configured using subordinate charms
A share backend must be configurable independently of the main charm, so plugin (subordinate) charms are used to configure each backend. Multiple backend charms can be related to the manila charm, allowing a single manila (Juju) application to support multiple share backends.
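For example, a minimal sketch using the same charm-store sources as the bundle in the Usage section below:
# Deploy the principal manila charm and the generic backend subordinate,
# then relate them so the backend gets configured.
juju deploy cs:~openstack-charmers/xenial/manila
juju deploy cs:~openstack-charmers/xenial/manila-generic
juju add-relation manila manila-generic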
Essentially, a plugin needs to be able to:
- configure its section in manila.conf, along with any network plugins that it needs (assuming it is a backend that manages its own share instances), and
- ensure that the relevant services are restarted.
This pre-release of manila provides (in the charm store):
- charm-manila: the main charm,
- interface-manila-plugin: the interface for plugging in the generic backend (and other interfaces),
- charm-manila-generic: the plugin for configuring the generic backend.
The backend charm provides the piece of the manila.conf configuration file containing the sections needed to configure that backend. This mostly concerns the share service, rather than the API level.
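For illustration only, the kind of snippet a generic backend might contribute to manila.conf could look roughly like this; the exact option names and values are determined by the backend charm's templates and configuration, so treat this as a sketch rather than the charm's actual output:
[DEFAULT]
# Which backend section(s) the manila-share service should enable.
enabled_share_backends = generic

[generic]
# Illustrative generic-driver settings; real values come from the
# manila-generic charm's configuration.
share_backend_name = GENERIC
share_driver = manila.share.drivers.generic.GenericShareDriver
driver_handles_share_servers = True
service_image_name = manila-service-image
service_instance_user = manila
service_instance_flavor_id = 100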
Usage
Manila (plus manila-generic) relies on services from the mysql/percona-cluster, rabbitmq-server, and keystone charms, and on a storage backend charm. The following YAML bundle will create a small, unconfigured OpenStack system with the components needed to start testing Manila. Note that it targets the 'next' OpenStack charms, which are essentially 'edge' charms.
# vim: set ts=2 et:
# Juju 2.0 deploy bundle for development ('next') charms
# UOSCI relies on this for OS-on-OS deployment testing
series: xenial
automatically-retry-hooks: False
services:
  mysql:
    charm: cs:~openstack-charmers/xenial/percona-cluster
    num_units: 1
    constraints: mem=1G
    options:
      dataset-size: 50%
      root-password: mysql
  rabbitmq-server:
    charm: cs:~openstack-charmers/xenial/rabbitmq-server
    num_units: 1
    constraints: mem=1G
  keystone:
    charm: cs:~openstack-charmers/xenial/keystone
    num_units: 1
    constraints: mem=1G
    options:
      admin-password: openstack
      admin-token: ubuntutesting
      preferred-api-version: "2"
  glance:
    charm: cs:~openstack-charmers/xenial/glance
    num_units: 1
    constraints: mem=1G
  nova-cloud-controller:
    charm: cs:~openstack-charmers/xenial/nova-cloud-controller
    num_units: 1
    constraints: mem=1G
    options:
      network-manager: Neutron
  nova-compute:
    charm: cs:~openstack-charmers/xenial/nova-compute
    num_units: 1
    constraints: mem=4G
  neutron-gateway:
    charm: cs:~openstack-charmers/xenial/neutron-gateway
    num_units: 1
    constraints: mem=1G
    options:
      bridge-mappings: physnet1:br-ex
      instance-mtu: 1300
  neutron-api:
    charm: cs:~openstack-charmers/xenial/neutron-api
    num_units: 1
    constraints: mem=1G
    options:
      neutron-security-groups: True
      flat-network-providers: physnet1
  neutron-openvswitch:
    charm: cs:~openstack-charmers/xenial/neutron-openvswitch
  cinder:
    charm: cs:~openstack-charmers/xenial/cinder
    num_units: 1
    constraints: mem=1G
    options:
      block-device: vdb
      glance-api-version: 2
      overwrite: 'true'
      ephemeral-unmount: /mnt
  manila:
    charm: cs:~openstack-charmers/xenial/manila
    num_units: 1
    options:
      debug: True
  manila-generic:
    charm: cs:~openstack-charmers/xenial/manila-generic
    options:
      debug: True
relations:
  - [ keystone, mysql ]
  - [ manila, mysql ]
  - [ manila, rabbitmq-server ]
  - [ manila, keystone ]
  - [ manila, manila-generic ]
  - [ glance, keystone ]
  - [ glance, mysql ]
  - [ glance, "cinder:image-service" ]
  - [ nova-compute, "rabbitmq-server:amqp" ]
  - [ nova-compute, glance ]
  - [ nova-cloud-controller, rabbitmq-server ]
  - [ nova-cloud-controller, mysql ]
  - [ nova-cloud-controller, keystone ]
  - [ nova-cloud-controller, glance ]
  - [ nova-cloud-controller, nova-compute ]
  - [ cinder, keystone ]
  - [ cinder, mysql ]
  - [ cinder, rabbitmq-server ]
  - [ cinder, nova-cloud-controller ]
  - [ "neutron-gateway:amqp", "rabbitmq-server:amqp" ]
  - [ neutron-gateway, nova-cloud-controller ]
  - [ neutron-api, mysql ]
  - [ neutron-api, rabbitmq-server ]
  - [ neutron-api, nova-cloud-controller ]
  - [ neutron-api, neutron-openvswitch ]
  - [ neutron-api, keystone ]
  - [ neutron-api, neutron-gateway ]
  - [ neutron-openvswitch, nova-compute ]
  - [ neutron-openvswitch, rabbitmq-server ]
  - [ neutron-openvswitch, manila ]
and then (with Juju 2.x), assuming the bundle has been saved as manila.yaml:
juju deploy manila.yaml
Note that this OpenStack system will need to be configured (in terms of networking, images, etc.) before testing can commence.
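Once that configuration is in place, a quick smoke test with the manila CLI might look like the following sketch (the share type name matches this charm's default-share-type default; the share name is arbitrary):
# Create the default share type the charm expects; 'True' indicates the
# driver handles share servers, which is the case for the generic driver.
manila type-create default_share_type True
# Create a small NFS share, then check its status and export location.
manila create NFS 1 --name test-share
manila list
manila show test-share
The export location reported by manila show can then be mounted from an instance in the usual way (e.g. as an NFS mount).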
Bugs
Please report bugs on Launchpad.
For general questions please refer to the OpenStack Charm Guide.
Configuration
- action-managed-upgrade
- (boolean) If True, enables OpenStack upgrades for this charm via Juju actions. You will still need to set openstack-origin to point to the new repository, but instead of an upgrade running automatically across all units, it will wait for you to execute the openstack-upgrade action for this charm on each unit (see the example following this list). If False, it reverts to the existing behavior of upgrading all units on config change.
- database
- (string) Database name for Manila
- manila
- database-user
- (string) Username for Manila database access
- manila
- debug
- (boolean) Enable debug logging
- default-share-backend
- (string) The default backend for this manila set. Must be one of the 'share-backends' or the charm will block.
- default-share-type
- (string) The 'default_share_type' must match the configured default share type set up in manila using 'manila type-create'.
- default_share_type
- dns-ha
- (boolean) Use DNS HA with MAAS 2.0. Note: if this is set, do not set the vip settings below.
- haproxy-client-timeout
- (int) Client timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 90000ms is used.
- haproxy-connect-timeout
- (int) Connect timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 9000ms is used.
- haproxy-queue-timeout
- (int) Queue timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 9000ms is used.
- haproxy-server-timeout
- (int) Server timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 90000ms is used.
- openstack-origin
- (string) Repository from which to install. May be one of the following: distro (default), ppa:somecustom/ppa, a deb url sources entry, or a supported Cloud Archive release pocket. Supported Cloud Archive sources include: cloud:precise-folsom, cloud:precise-folsom/updates, cloud:precise-folsom/staging, cloud:precise-folsom/proposed. Note that updating this setting to a source that is known to provide a later version of OpenStack will trigger a software upgrade.
- distro
- os-admin-hostname
- (string) The hostname or address of the admin endpoints created in the keystone identity provider. This value will be used for admin endpoints. For example, an os-admin-hostname set to 'api-admin.example.com' with ssl enabled will create the following endpoint for neutron-api: https://api-admin.example.com:9696/
- os-admin-network
- (string) The IP address and netmask of the OpenStack Admin network (e.g. 192.168.0.0/24). This network will be used for admin endpoints.
- os-internal-hostname
- (string) The hostname or address of the internal endpoints created in the keystone identity provider. This value will be used for internal endpoints. For example, an os-internal-hostname set to 'api-internal.example.com' with ssl enabled will create the following endpoint for neutron-api: https://api-internal.example.com:9696/
- os-internal-network
- (string) The IP address and netmask of the OpenStack Internal network (e.g. 192.168.0.0/24). This network will be used for internal endpoints.
- os-public-hostname
- (string) The hostname or address of the public endpoints created in the keystone identity provider. This value will be used for public endpoints. For example, an os-public-hostname set to 'api-public.example.com' with ssl enabled will create the following endpoint for neutron-api: https://api-public.example.com:9696/
- os-public-network
- (string) The IP address and netmask of the OpenStack Public network (e.g. 192.168.0.0/24). This network will be used for public endpoints.
- rabbit-user
- (string) Username used to access rabbitmq queue
- manila
- rabbit-vhost
- (string) Rabbitmq vhost
- openstack
- region
- (string) OpenStack Region
- RegionOne
- share-protocols
- (string) The share protocols that the backends will be able to provide. The default is good for the generic backend. Other backends may not support both NFS and CIFS. This is a space-delimited list of protocols.
- NFS CIFS
- ssl_ca
- (string) TLS CA to use to communicate with other components in a deployment. __NOTE__: This configuration option will take precedence over any certificates received over the ``certificates`` relation.
- ssl_cert
- (string) TLS certificate to install and use for any listening services. __NOTE__: This configuration option will take precedence over any certificates received over the ``certificates`` relation.
- ssl_key
- (string) TLS key to use with certificate specified as ``ssl_cert``. __NOTE__: This configuration option will take precedence over any certificates received over the ``certificates`` relation.
- use-internal-endpoints
- (boolean) OpenStack mostly defaults to using public endpoints for internal communication between services. If set to True, this option will configure services to use internal endpoints where possible.
- use-syslog
- (boolean) Setting this to True will allow supporting services to log to syslog.
- verbose
- (boolean) Enable verbose logging
- vip
- (string) Virtual IP(s) to use to front API services in HA configuration. If multiple networks are being used, a VIP should be provided for each network, separated by spaces.
- vip_cidr
- (int) Default CIDR netmask to use for HA vip when it cannot be automatically determined.
- 24
- vip_iface
- (string) Default network interface to use for HA vip when it cannot be automatically determined.
- eth0
- worker-multiplier
- (float) The CPU core multiplier to use when configuring worker processes. By default, the number of workers for each daemon is set to twice the number of CPU cores a service unit has. When deployed in a LXD container, this default value will be capped to 4 workers unless this configuration option is set.
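As referenced under action-managed-upgrade above, a sketch of the per-unit upgrade workflow looks like the following (the target cloud archive pocket is only an example):
# Opt in to action-managed upgrades and point at a newer OpenStack source.
juju config manila action-managed-upgrade=True
juju config manila openstack-origin=cloud:xenial-ocata
# Then upgrade one unit at a time via the charm's openstack-upgrade action.
juju run-action manila/0 openstack-upgrade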