percona-cluster #302

Supports: xenial bionic


Percona XtraDB Cluster provides an active/active MySQL compatible alternative implemented using the Galera synchronous replication extensions.


Percona XtraDB Cluster is a high availability and high scalability solution for MySQL clustering. Percona XtraDB Cluster integrates Percona Server with the Galera library of MySQL high availability solutions in a single product package which enables you to create a cost-effective MySQL cluster.

The percona-cluster charm deploys Percona XtraDB Cluster and provides DB services to those charms that support the 'mysql-shared' interface. The current list of such charms can be obtained from the Charm Store (the charms officially supported by the OpenStack Charms project are published by 'openstack-charmers').

Series upgrades

Deprecation of percona-cluster charm on focal series

The eoan series is the last series supported by the percona-cluster charm. It is replaced by the mysql-innodb-cluster and mysql-router charms in the focal series. The migration steps are documented in percona-cluster charm: series upgrade to focal.

Caution: Do not upgrade (to the focal series) the machines hosting percona-cluster units. To be clear, if percona-cluster is containerised then it is the LXD container that must not be upgraded.

Upgrades to non-focal series

The procedure to upgrade to a pre-focal series, and thus to a new Percona version, is documented in the OpenStack Charms Deployment Guide.



Configuration

This section covers common configuration options. See file config.yaml for the full list of options, along with their descriptions and default values.


The max-connections option sets the maximum number of allowed connections. The default is 600. This is an important option and is discussed in the Memory section below.
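For example, the limit can be changed on an existing deployment (the value 2000 here is illustrative; evaluate memory implications, as discussed in the Memory section below, before choosing one):

```shell
# Illustrative: raise the allowed connection limit to 2000.
# High values increase memory consumption; size them deliberately.
juju config percona-cluster max-connections=2000
```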


The min-cluster-size option sets the number of percona-cluster units required to form its cluster. It is best practice to use this option as doing so ensures that the charm will wait until the cluster is up before accepting relations from other client applications.


The nrpe-threads-connected option sets the warning and critical thresholds (in percent) for the NRPE check that monitors the number of threads connected to MySQL. If the nrpe-external-master relation is present, a 'nagios' user with no privileges, able to connect only from localhost, is created before the check is set up.
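The option takes the two percentages as a comma-separated string; as a sketch (the thresholds here are illustrative):

```shell
# Illustrative: warn at 80% and go critical at 90% of max-connections.
juju config percona-cluster nrpe-threads-connected="80,90"
```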


Deployment

To deploy a single percona-cluster unit:

juju deploy percona-cluster

To make use of DB services, simply add a relation between percona-cluster and an application that supports the 'mysql-shared' interface. For instance:

juju add-relation percona-cluster:shared-db keystone:shared-db

Passwords required for the correct operation of the deployment are automatically generated and stored by the application leader. The root password for mysql can be retrieved using the following command:

juju run --unit percona-cluster/0 leader-get root-password

Root user DB access is only usable from within one of the deployed units (access to root is restricted to localhost only).
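A root database session can therefore be opened from within a unit; as a sketch, combining the commands above:

```shell
# Retrieve the root password, then open a MySQL session on the unit itself.
juju run --unit percona-cluster/0 leader-get root-password
juju ssh percona-cluster/0
mysql -u root -p   # run on the unit; enter the retrieved password when prompted
```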

Cold boot

Particular steps are required when machines hosting percona-cluster units are started in order for the application to assume a clustered and healthy state. This is documented in the OpenStack Charms Deployment Guide.


Note that Percona XtraDB Cluster is not a 'scale-out' MySQL solution; reads and writes are channelled through a single service unit and synchronously replicated to other nodes in the cluster; reads/writes are as slow as the slowest node you have in your deployment.

High availability

When more than one unit is deployed with the hacluster application the charm will bring up an HA active/active cluster. The min-cluster-size option should be used (see description above).

To deploy a three-node cluster:

juju deploy -n 3 --config min-cluster-size=3 percona-cluster

There are two mutually exclusive high availability options: using virtual IP(s) or DNS. In both cases the hacluster subordinate charm is used to provide the Corosync and Pacemaker backend HA functionality.
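As a sketch, a virtual IP based setup might look like the following (the VIP address is illustrative and must belong to the subnet shared by the percona-cluster units):

```shell
# Illustrative VIP-based HA setup using the hacluster subordinate charm.
juju config percona-cluster vip=10.0.0.100
juju deploy hacluster
juju add-relation percona-cluster hacluster
```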

See the OpenStack high availability appendix in the OpenStack Charms Deployment Guide for details.


Actions

This section lists Juju actions supported by the charm. Actions allow specific operations to be performed on a per-unit basis. To display action descriptions run juju actions percona-cluster. If the charm is not deployed then see file actions.yaml.

  • backup
  • bootstrap-pxc
  • complete-cluster-series-upgrade
  • mysqldump
  • notify-bootstrapped
  • pause
  • resume
  • set-pxc-strict-mode
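Actions are invoked with juju run-action; for example (the mysqldump invocation and its databases parameter value are illustrative):

```shell
# Pause a unit, then resume it.
juju run-action --wait percona-cluster/0 pause
juju run-action --wait percona-cluster/0 resume

# Illustrative: dump a selected database on a unit.
juju run-action --wait percona-cluster/0 mysqldump databases=keystone
```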


Memory

Percona Cluster is extremely memory sensitive. Setting memory values too low will give poor performance. Setting them too high will create problems that are very difficult to diagnose. Please take time to evaluate these settings for each deployment environment rather than copying and pasting bundle configurations.

The Percona Cluster charm needs to be able to be deployed in small low memory development environments as well as high performance production environments. The charm's opinionated configuration defaults favour the developer environment in order to ease initial testing. Production environments need to consider carefully the memory requirements for the hardware or cloud in use. Consult a MySQL memory calculator to understand the implications of the values.

Between the 5.5 and 5.6 releases a significant default changed: the performance schema defaults to on for 5.6 and later. This allocates up front all the memory that would be required to handle max-connections plus several other memory settings. With 5.5, memory was allocated at run time as needed.

The charm now makes performance schema configurable and defaults to off (False). With the performance schema turned off memory is allocated when needed during run-time. It is important to understand this can lead to run-time memory exhaustion if the configuration values are set too high. Consult a MySQL memory calculator to understand the implications of the values.
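If the performance schema is enabled, the memory-related options must be sized explicitly for the environment; as a sketch (the buffer pool and connection values are illustrative):

```shell
# Illustrative: enable the performance schema and size memory-related
# options explicitly, since memory is then allocated at startup.
juju config percona-cluster performance-schema=True \
    innodb-buffer-pool-size=4G max-connections=1000
```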

The value of max-connections should strike a balance between connection exhaustion and memory exhaustion. Occasionally connection exhaustion occurs in large production HA clouds with a value of less than 2000. The common practice became to set it unrealistically high (near 10k or 20k). In the move to 5.6 on Xenial this became a problem as Percona would fail to start up or behave erratically as memory exhaustion occurred on the host due to performance schema being turned on. Even with the default now turned off this value should be carefully considered against the production requirements and resources available.

MySQL asynchronous replication

This charm supports the MySQL asynchronous replication feature, which can be used to replicate databases between multiple Percona XtraDB Clusters. In order to set up master-slave replication of the "database1" and "database2" databases between the "pxc1" and "pxc2" applications, first configure the mandatory options:

juju config pxc1 databases-to-replicate="database1:table1,table2;database2"
juju config pxc2 databases-to-replicate="database1:table1,table2;database2"
juju config pxc1 cluster-id=1
juju config pxc2 cluster-id=2

and then relate them:

juju add-relation pxc1:master pxc2:slave

In order to set up master-master replication, add another relation:

juju add-relation pxc2:master pxc1:slave

In the same way, circular replication can be set up between multiple clusters.
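For instance, circular replication across three hypothetical clusters pxc1, pxc2 and pxc3 (each configured with a distinct cluster-id) could be wired as:

```shell
# Each cluster is a slave of the previous one, closing the ring.
juju add-relation pxc1:master pxc2:slave
juju add-relation pxc2:master pxc3:slave
juju add-relation pxc3:master pxc1:slave
```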

Network Space support

This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.

You can ensure that database connections and cluster peer communication are bound to specific network spaces by binding the appropriate interfaces:

juju deploy percona-cluster --bind "shared-db=internal-space cluster=internal-space"

Alternatively, configuration can be provided as part of a bundle:

  percona-cluster:
    charm: cs:xenial/percona-cluster
    num_units: 1
    bindings:
      shared-db: internal-space
      cluster: internal-space

The 'cluster' endpoint binding is used to determine which network space units within the percona-cluster deployment should use for communication with each other; the 'shared-db' endpoint binding is used to determine which network space should be used for access to MySQL database services from other charms.

Note: Spaces must be configured in the underlying provider prior to attempting to use them.

Note: Existing deployments using the access-network configuration option will continue to function; this option is preferred over any network space binding provided for the 'shared-db' relation if set.
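As a sketch, the legacy option is set as a CIDR (the subnet here is illustrative):

```shell
# Illustrative: restrict database access traffic to a specific subnet.
juju config percona-cluster access-network=10.0.1.0/24
```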


Please report bugs on Launchpad.

For general charm questions refer to the OpenStack Charm Guide.


(string) The IP address and netmask of the 'access' network. This network will be used for access to database services.
(int) Sets the expire_logs_days mysql configuration option, which will make the mysql server automatically remove logs older than the configured number of days.
(string) Sets the max_binlog_size mysql configuration option, which will limit the size of the binary log files. The server will automatically rotate binlogs after they grow to be bigger than this value. Keep in mind that transactions are never split between binary logs, so binary logs might get larger than the configured value.
(string) Location on the filesystem where binlogs are going to be placed. Default mimics what the mysql-common package would do for mysql. Make sure you do not put binlogs inside the mysql datadir (/var/lib/mysql/)!
(int) Cluster ID to be used when using MySQL asynchronous replication. NOTE: This value must be different for each cluster.
(string) The IP address and netmask of the cluster (replication) network. This network will be used for wsrep_cluster replication.
(string) Databases and tables to replicate using MySQL asynchronous replication. The databases should be separated with a semicolon while the tables should be separated with a comma. No tables means that the whole database will be replicated. For example, "database1:table1,table2;database2" will replicate the "table1" and "table2" tables from the "database1" database and all tables from the "database2" database. NOTE: This option should be used only when relating one cluster to the other. It does not affect Galera synchronous replication.
(string) [DEPRECATED] - use innodb-buffer-pool-size. How much data should be kept in memory in the DB. This will be used to tune settings in the database server appropriately. Supported suffixes include K/M/G/T. If suffixed with %, one will get that percentage of RAM allocated to the dataset.
(boolean) Use DNS HA with MAAS 2.0. NOTE: if this is set, do not set the vip settings below.
(boolean) Turns on MySQL binary logs. The placement of the logs is controlled with the binlogs-path config option.
(int) This setting controls when flow control engages. Simply speaking, if the wsrep_local_recv_queue exceeds this size on a given node, a pausing flow control message will be sent. The fc_limit defaults to 16 transactions. This effectively means that this is as far as a given node can be behind committing transactions from the cluster.
(string) Default network interface on which the HA cluster will bind for communication with the other members of the HA cluster.
(int) Default multicast port number that will be used to communicate between HA Cluster nodes.
(string) Apply system hardening. Supports a space-delimited list of modules to run. Supported modules currently include os, ssh, apache and mysql.
(string) By default this value will be set according to 50% of system total memory or 512MB (whichever is lowest) but also can be set to any specific value for the system. Supported suffixes include K/M/G/T. If suffixed with %, one will get that percentage of system total memory allocated.
(string) Configure whether InnoDB performs change buffering, an optimization that delays write operations to secondary indexes so that the I/O operations can be performed sequentially. Permitted values include: none (do not buffer any operations), inserts (buffer insert operations), deletes (buffer delete-marking operations; strictly speaking, the writes that mark index records for later deletion during a purge operation), changes (buffer inserts and delete-marking operations), purges (buffer the physical deletion operations that happen in the background), and all (the default; buffer inserts, delete-marking operations, and purges). For more details see the MySQL documentation.
(boolean) Turns on the innodb_file_per_table option, which will make MySQL put each InnoDB table into a separate .ibd file. Existing InnoDB tables will remain in the ibdata1 file - a full dump/import is needed to get rid of a large ibdata1 file.
(int) Configure the InnoDB IO capacity which sets an upper limit on I/O activity performed by InnoDB background tasks, such as flushing pages from the buffer pool and merging data from the change buffer. This value typically defaults to 200 but can be increased on systems with fast bus-attached SSD based storage to help the server handle the background maintenance work associated with a high rate of row changes. Alternatively it can be decreased to a minimum of 100 on systems with low speed 5400 or 7200 rpm spindles, to reduce the proportion of I/O operations being used for background maintenance work. For more details see the MySQL documentation.
(string) Key ID to import to the apt keyring to support use with arbitrary source configuration from outside of Launchpad archives or PPAs.
(int) Known wait along with modulo nodes is used to help avoid restart collisions. Known wait is the amount of time between one node executing an operation and another. On slower hardware this value may need to be larger than the default of 30 seconds.
(int) This setting limits the number of successive unsuccessful connection requests that a host can make to MySQL. After max-connect-errors successive connection requests from a host are interrupted without a successful connection, the MySQL server blocks that host from making further connections. This setting is only for Ubuntu Xenial and newer releases.
(int) Maximum connections to allow. A value of -1 means use the server's compiled-in default. This is not typically that useful so the charm will configure PXC with a default max-connections value of 600. Note: Connections take up memory resources. Either at startup time with performance-schema=True or during run time with performance-schema=False. This value is a balance between connection exhaustion and memory exhaustion. Consult a MySQL memory calculator to understand the memory resources consumed by connections. See also performance-schema.
(int) Minimum number of units expected to exist before charm will attempt to bootstrap percona cluster. If no value is provided this setting is ignored.
(int) This config option is rarely required but is provided for fine tuning, it is safe to leave unset. Modulo nodes is used to help avoid restart collisions as well as distribute load on the cloud at larger scale. During restarts and cluster joins percona needs to execute these operations serially. By setting modulo-nodes to the size of the cluster and known-wait to a reasonable value, the charm will distribute the operations serially. If this value is unset, the charm will check min-cluster-size or else finally default to the size of the cluster based on peer relations. Setting this value to 0 will execute operations with no wait time. Setting this value to less than the cluster size will distribute load but may lead to restart collisions.
(string) Used by the nrpe-external-master subordinate charm. A string that will be prepended to the instance name to set the host name in nagios. For instance, the hostname would be something like 'juju-myservice-0'. If you are running multiple environments with the same services in them, this allows you to differentiate between them.
(string) A comma-separated list of nagios service groups. If left empty, the nagios_context will be used as the servicegroup.
(string) This configuration option represents the warning and critical percentages that are used to check the number of threads connected to MySQL. The value should be written as a string containing two numbers separated by a comma.
(string) The hostname or address of the access endpoint for percona-cluster.
(string) This setting sets the gmcast.peer_timeout value. Possible values are documented on the Galera Cluster site. For very busy clouds or in resource-restricted environments this value can be changed. WARNING: Please read all documentation before changing the default value, as changes may have unintended consequences. It may be necessary to set this value higher during deploy time (PT15S) and subsequently change it back to the default (PT3S) after deployment.
(boolean) The performance schema attempts to automatically size the values of several of its parameters at server startup if they are not set explicitly. When set to on (True), memory is allocated at startup time. The implication of this is that any memory-related charm config options, such as max-connections and innodb-buffer-pool-size, must be explicitly set for the environment percona is running in, or percona may fail to start. Defaults to off (False) at startup time, giving 5.5-like behaviour. The implication of this is that one can set configuration values that could lead to memory exhaustion during run time, as memory is not allocated at startup time.
(boolean) If True enables IPv6 support. The charm will expect network interfaces to be configured with an IPv6 address. If set to False (the default) IPv4 is expected. NOTE: these charms do not currently support the IPv6 privacy extension. In order for this charm to function correctly, the privacy extension must be disabled and a non-temporary address must be configured/available on your network interface.
(string) Configures pxc_strict_mode. Valid values are 'disabled', 'permissive', 'enforcing' and 'master'. Defaults to 'enforcing', as this is what PXC 5.7 on bionic (and above) does. This option is ignored on PXC < 5.7 (xenial defaults to 5.6, trusty defaults to 5.5).
(string) Root account password for new cluster nodes. Overrides the automatic generation of a password for the root user, but must be set prior to deployment time to have any effect.
(string) Repository from which to install. May be one of the following: distro (default), ppa:somecustom/ppa, a deb url sources entry, or a supported Ubuntu Cloud Archive pocket, e.g. cloud:<series>-<openstack-release>, cloud:<series>-<openstack-release>/updates, cloud:<series>-<openstack-release>/staging, cloud:<series>-<openstack-release>/proposed. See the Ubuntu Cloud Archive documentation for info on which cloud archives are available and supported.
(string) Percona method for taking the State Snapshot Transfer (SST). Can be: 'rsync', 'xtrabackup', 'xtrabackup-v2', 'mysqldump', 'skip'.
(string) SST account password for new cluster nodes. Overrides the automatic generation of a password for the sst user, but must be set prior to deployment time to have any effect.
(int) Sets table_open_cache (formerly known as table_cache) for MySQL.
(string) Valid values are 'safest', 'fast', and 'unsafe'. If set to 'safest', all settings are tuned to have maximum safety at the cost of performance. 'fast' will turn off most controls, but may lose data on crashes. 'unsafe' will turn off all protections but this may be OK in clustered deployments.
(string) Virtual IP to use to front Percona XtraDB Cluster in active/active HA configuration
(int) Netmask that will be used for the Virtual IP.
(string) Network interface on which to place the Virtual IP.
(int) The number of seconds the server waits for activity on a noninteractive connection before closing it. -1 means use the server's compiled in default.
(int) Specifies the number of threads that can apply replication transactions in parallel. Galera supports true parallel replication that applies transactions in parallel only when it is safe to do so. When unset defaults to 48 for >= Bionic or 1 for <= Xenial.