RabbitMQ is an implementation of AMQP, the emerging standard for high
performance enterprise messaging. The RabbitMQ server is a robust and
scalable implementation of an AMQP broker.
This charm deploys RabbitMQ server and provides AMQP connectivity to clients.
To deploy this charm:
juju deploy rabbitmq-server
Deploying multiple units will form a native RabbitMQ cluster:
juju deploy -n 3 rabbitmq-server
juju config rabbitmq-server min-cluster-size=3
To make use of AMQP services, simply relate other charms that support the rabbitmq interface:
juju add-relation rabbitmq-server nova-cloud-controller
When more than one unit of the charm is deployed the charm will bring up a
native RabbitMQ cluster. The process of clustering the units together takes
some time. Due to the nature of asynchronous hook execution, it is possible
that client relation hooks are executed before the cluster is complete.
In some cases, this can lead to client charm errors.
To guarantee client relation hooks will not be executed until clustering is
completed use the min-cluster-size configuration setting:
juju deploy -n 3 rabbitmq-server
juju config rabbitmq-server min-cluster-size=3
When min-cluster-size is not set the charm will still cluster; however,
there is no guarantee that client relation hooks will not execute before
clustering is complete.
Single unit deployments behave as expected.
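The guard described above can be sketched as a simple comparison between the number of clustered peers and the configured minimum. The function and variable names below are illustrative only, not the charm's actual internals:

```shell
#!/bin/sh
# Sketch of the min-cluster-size guard: client relation hooks only
# proceed once at least min-cluster-size units have joined the cluster.
# If min-cluster-size is unset, hooks proceed immediately (no guarantee).

cluster_ready() {
    # $1: number of units currently joined to the RabbitMQ cluster
    # $2: configured min-cluster-size (empty string means unset)
    joined=$1
    min_size=$2
    if [ -z "$min_size" ]; then
        return 0
    fi
    [ "$joined" -ge "$min_size" ]
}

if cluster_ready 2 3; then
    echo "cluster complete: run client relation hooks"
else
    echo "cluster incomplete: defer client relation hooks"
fi
```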
Generate an unencrypted RSA private key for the servers:
openssl genrsa -out rabbit-server-privkey.pem 2048
Get an X.509 certificate. This can be self-signed, for example:
openssl req -batch -new -x509 -key rabbit-server-privkey.pem -out rabbit-server-cert.pem -days 10000
Deploy the service:
juju deploy rabbitmq-server
Enable SSL, passing in the key and certificate as configuration settings:
juju set rabbitmq-server ssl_enabled=True ssl_key="`cat rabbit-server-privkey.pem`" ssl_cert="`cat rabbit-server-cert.pem`"
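Before passing the files to juju, it can be worth confirming that the certificate was actually generated from the private key. A common way to do this (assuming openssl is installed; this check is not part of the charm itself) is to compare the RSA moduli of the two files:

```shell
#!/bin/sh
# Sanity check: a certificate matches a private key if their RSA
# moduli are identical. Compares md5 digests of the two moduli.
certs_match() {
    # $1: path to private key, $2: path to certificate
    key_md=$(openssl rsa -noout -modulus -in "$1" | openssl md5)
    cert_md=$(openssl x509 -noout -modulus -in "$2" | openssl md5)
    [ "$key_md" = "$cert_md" ]
}

# Usage, after the generation steps above:
#   certs_match rabbit-server-privkey.pem rabbit-server-cert.pem \
#       && echo "key and certificate match"
```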
To change the source that the charm uses for packages:
juju set rabbitmq-server source="cloud:precise-icehouse"
This will enable the Icehouse pocket of the Cloud Archive (which contains a new version of RabbitMQ) and upgrade the install to the new version.
The source option can be used in a few different ways:
source="ppa:james-page/testing" - use the testing PPA owned by james-page
source="http://myrepo/ubuntu main" - use the repository located at the provided URL
The charm also supports use of arbitrary archive keys for use with private repositories:
juju set rabbitmq-server key="C6CEA0C9"
Note that in clustered configurations, the upgrade can be a bit racy as the services restart and re-cluster; this is resolvable using (with Juju version < 2.0):
juju resolved --retry rabbitmq-server/1
Or using the following command with Juju 2.0 and above:
juju resolved rabbitmq-server/1
Network Spaces support
This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.
The amqp relation can be bound to a specific network space, allowing client connections to be routed over specific networks:
juju deploy rabbitmq-server --bind "amqp=internal-space"
Alternatively, this can also be provided as part of a Juju native bundle configuration:
rabbitmq-server:
  charm: cs:xenial/rabbitmq-server
  num_units: 1
  bindings:
    amqp: internal-space
NOTE: Spaces must be configured in the underlying provider prior to attempting to use them.
NOTE: Existing deployments using the access-network configuration option will continue to function; this option is preferred over any network space binding provided if set.
Author: OpenStack Charmers email@example.com
- (string) The IP address and netmask of the 'access' network (e.g. 192.168.0.0/24). This network will be used for access to RabbitMQ messaging services.
- (int) This value dictates the number of replicas ceph must make of any object it stores within the rabbitmq rbd pool. Of course, this only applies if using Ceph as a backend store. Note that once the rabbitmq rbd pool has been created, changing this value will not have any effect (although it can be changed in ceph by manually configuring your ceph cluster).
- (string) The IP address and netmask of the 'cluster' network (e.g. 192.168.0.0/24). This network will be used for RabbitMQ clustering.
- (string) RabbitMQ offers three ways to deal with network partitions automatically. Available modes:
  ignore - Your network is reliable. All your nodes are in a rack, connected with a switch, and that switch is also the route to the outside world. You don't want to run any risk of any of your cluster shutting down if any other part of it fails (or you have a two node cluster).
  pause_minority - Your network is maybe less reliable. You have clustered across 3 AZs in EC2, and you assume that only one AZ will fail at once. In that scenario you want the remaining two AZs to continue working and the nodes from the failed AZ to rejoin automatically and without fuss when the AZ comes back.
  autoheal - Your network may not be reliable. You are more concerned with continuity of service than with data integrity. You may have a two node cluster.
  For more information see http://www.rabbitmq.com/partitions.html
- (int) Overrides the size of the connection backlog maintained by the server. Environments with large numbers of clients will want to set this value higher than the default (the default value varies with rabbitmq version, see https://www.rabbitmq.com/networking.html for more info).
- (int) Run a command with a time limit specified in seconds in cron. This timeout governs the rabbitmq stats capture: once the timeout is reached a SIGINT is sent to the program, and if it does not exit within 10 seconds a SIGKILL is sent.
- (int) Multiplier used to calculate the number of threads used in the erl vm worker thread pool using the number of CPU cores extant in the host system. The upstream docs recommend that this multiplier be > 12 per core - we use 24 as default so that we end up with roughly the same as current rabbitmq package defaults and that is what is used internally to the charm if no value is set here. Also, if this value is left unset and this application is running inside a container, the number of threads will be capped based on a maximum of 2 cores.
- (string) Default network interface on which the HA cluster will bind for communication with the other members of the HA cluster.
- (int) Default multicast port number that will be used to communicate between HA Cluster nodes.
- (boolean) By default, without pairing with the hacluster charm, rabbitmq will deploy in active/active/active... HA. When paired with the hacluster charm, it will deploy as active/passive. By enabling this option, pairing with the hacluster charm will keep rabbit in an active/active setup, but in addition it will deploy a VIP that can be used by services that cannot work with multiple AMQPs (like Glance in pre-Icehouse).
- (string) Apply system hardening. Supports a space-delimited list of modules to run. Supported modules currently include os, ssh, apache and mysql.
- (string) Key ID to import to the apt keyring to support use with arbitrary source configuration from outside of Launchpad archives or PPAs.
- (int) Known wait along with modulo nodes is used to help avoid restart collisions. Known wait is the amount of time between one node executing an operation and another. On slower hardware this value may need to be larger than the default of 30 seconds.
- (boolean) Enable the management plugin.
- (int) Number of tries to cluster with other units before giving up and throwing a hook error.
- (int) Minimum number of units expected to exist before charm will attempt to form a rabbitmq cluster.
- (boolean) When set to True the 'ha-mode: all' policy is applied to all the exchanges that match the expression '^(?!amq\.).*'
- (int) This config option is rarely required but is provided for fine tuning, it is safe to leave unset. Modulo nodes is used to help avoid restart collisions as well as distribute load on the cloud at larger scale. During restarts and cluster joins rabbitmq needs to execute these operations serially. By setting modulo-nodes to the size of the cluster and known-wait to a reasonable value, the charm will distribute the operations serially. If this value is unset, the charm will check min-cluster-size or else finally default to the size of the cluster based on peer relations. Setting this value to 0 will execute operations with no wait time. Setting this value to less than the cluster size will distribute load but may lead to restart collisions.
- (string) Used by the nrpe-external-master subordinate charm. A string that will be prepended to the instance name to set the host name in nagios. For instance, the hostname would be something like juju-myservice-0. If you're running multiple environments with the same services in them this allows you to differentiate between them.
- (string) A comma-separated list of nagios servicegroups. If left empty, the nagios_context will be used as the servicegroup.
- (int) TTL in milliseconds for notification queues in the openstack vhost. Defaults to 1 hour, but can be tuned up or down depending on deployment requirements. This ensures that any un-consumed notifications don't build up over time, causing disk capacity issues.
- (boolean) If True enables IPv6 support. The charm will expect network interfaces to be configured with an IPv6 address. If set to False (default) IPv4 is expected. NOTE: these charms do not currently support IPv6 privacy extension. In order for this charm to function correctly, the privacy extension must be disabled and a non-temporary address must be configured/available on your network interface.
- (string) List of RabbitMQ queue size check thresholds. Interpreted as YAML in format [<vhost>, <queue>, <warn>, <crit>]. Per-queue thresholds can be expressed as a multi-line YAML array:
  - ['/', 'queue1', 10, 20]
  - ['/', 'queue2', 200, 300]
  Or as a list of lists:
  [['/', 'queue1', 10, 20], ['/', 'queue2', 200, 300]]
  Wildcards '*' are accepted to monitor all vhosts and/or queues. In case of multiple matches, only the first will apply: wildcards should therefore be used last in order to avoid unexpected behavior.
- [['\*', '\*', 100, 200]]
- (string) The name that will be used to create the Ceph's RBD image with. If the image name exists in Ceph, it will be re-used and the data will be overwritten.
- (string) Default rbd storage size to create when setting up block storage. This value should be specified in GB (e.g. 100G).
- (string) Optional configuration to support use of additional sources such as:
  - ppa:myteam/ppa
  - cloud:xenial-proposed/ocata
  - http://my.archive.com/ubuntu main
  The last option should be used in conjunction with the key configuration option. Changing the source option on an already deployed service/application will trigger the upgrade.
- (string) Enable SSL connections on rabbitmq, valid values are 'off', 'on', 'only'. If ssl_key, ssl_cert, ssl_ca are provided then those values will be used. Otherwise the service will act as its own certificate authority and pass its ca cert to clients. For HA or clustered rabbits the ssl key/cert must be provided.
- (string) Certificate authority cert used to sign the ssl_cert. Optional if the ssl_cert is signed by a CA recognized by the OS. Format is base64 PEM (concatenated certs if needed).
- (string) X.509 certificate in base64 PEM format (i.e. starts with "-----BEGIN CERTIFICATE-----")
- (boolean) (DEPRECATED, see the 'ssl' config option.) Enable SSL.
- (string) Private unencrypted key in base64 PEM format (i.e. starts with "-----BEGIN RSA PRIVATE KEY-----")
- (int) SSL port
- (string) Cron schedule used to generate rabbitmq stats. To disable, either unset this config option or set it to an empty string ('').
- */5 * * * *
- (boolean) If True, services that support it will log to syslog instead of their normal log location.
- (string) Virtual IP to use to front rabbitmq in ha configuration
- (int) Netmask that will be used for the Virtual IP
- (string) Network Interface where to place the Virtual IP
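The interaction of modulo-nodes and known-wait described above can be sketched as a simple stagger calculation: each unit waits a multiple of known-wait based on its position modulo the cluster size, so restarts are serialized rather than colliding. The exact formula below is an assumption for illustration, not lifted from the charm source:

```shell
#!/bin/sh
# Sketch of restart staggering: each unit waits
# (unit_number mod modulo-nodes) * known-wait seconds before restarting.
# modulo-nodes=0 means operate with no wait time, per the option docs.

stagger_wait() {
    unit_number=$1
    modulo_nodes=$2
    known_wait=$3
    if [ "$modulo_nodes" -eq 0 ]; then
        echo 0
        return
    fi
    echo $(( (unit_number % modulo_nodes) * known_wait ))
}

# With a 3-node cluster and the default known-wait of 30 seconds,
# units 0, 1 and 2 wait 0, 30 and 60 seconds respectively.
for unit in 0 1 2; do
    echo "unit $unit waits $(stagger_wait "$unit" 3 30)s"
done
```

Setting modulo-nodes below the cluster size still spreads load, but two units can land on the same wait slot, which is why the option docs warn about restart collisions in that case.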