cassandra #60

Supports: focal bionic xenial trusty


Cassandra is a distributed (peer-to-peer) system for the management and storage of structured data.


The Apache Cassandra database is the right choice when you need scalability and high availability without compromising performance. Linear scalability and proven fault-tolerance on commodity hardware or cloud infrastructure make it the perfect platform for mission-critical data. Cassandra's support for replicating across multiple datacenters is best-in-class, providing lower latency for your users and the peace of mind of knowing that you can survive regional outages.

See the Apache Cassandra website for more information.


This charm supports Apache Cassandra 2.x and 3.x, and DataStax Enterprise 4.7, 4.8, 5.0, 5.1 and 6.0. The default is Apache Cassandra 3.11.

To use a particular Apache Cassandra release, specify the relevant deb archive in the install_sources config setting when deploying.

      - deb 311 main

To use DataStax Enterprise, set the edition config setting to dse and dse_version to the major version, such as "5.1". You must also set the DataStax Enterprise archive URL in install_sources; the packages require your personal credentials to download, and the URL must include the username and password.

      - deb stable main
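
For example, a deploy-time configuration file might look like the following sketch. The archive URL, username, and password shown are placeholders, not a working repository; substitute the values from your DataStax account.

```yaml
# config.yaml -- all values below are placeholders
cassandra:
  edition: dse
  dse_version: "5.1"
  install_sources: |
    - deb https://USERNAME:PASSWORD@archive.example.com/dse stable main
```

You would then deploy with a command such as juju deploy cs:cassandra --config config.yaml.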


Cassandra deployments are relatively simple in that they consist of a set of Cassandra nodes which seed from each other to create a ring of servers:

juju deploy -n3 cs:cassandra

The units will deploy and form a single ring.

New nodes can be added to scale up:

juju add-unit cassandra

/!\ Nodes must be manually decommissioned before removing a unit.

juju run --unit cassandra/1 "nodetool decommission"
# Wait until Mode is DECOMMISSIONED
juju run --unit cassandra/1 "nodetool netstats"
juju remove-unit cassandra/1

It is recommended to deploy at least 3 nodes and configure all your keyspaces with a replication factor of three. Using fewer nodes or neglecting your keyspaces' replication settings puts your data at risk and lowers availability, as a failed unit may take the only copy of data with it.

Production systems will normally want to set max_heap_size and heap_newsize to the empty string, to enable automatic memory size tuning. The defaults have been chosen to be suitable for development environments but will perform poorly with real workloads.
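
For example, a production deployment could enable automatic tuning with a config fragment like this:

```yaml
cassandra:
  max_heap_size: ""
  heap_newsize: ""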


  • Do not attempt to store too much data per node. If you need more space, add more nodes. Most workloads work best with a capacity under 1TB per node, so take care with larger deployments. Recommended capacities are vague and version-dependent.

  • You need to keep 50% of your disk space free for Cassandra maintenance operations. If you expect your nodes to hold 500GB of data each, you will need a 1TB partition. Using a non-default compaction strategy such as LeveledCompactionStrategy can reduce this overhead.
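
The sizing rule above can be sketched as a quick calculation. The factor of two is an assumption derived from the 50% free-space guideline; deployments using LeveledCompactionStrategy need less headroom.

```shell
# Rough sizing sketch: keep ~50% of the data disk free for compaction,
# so provision roughly double the expected data volume per node.
expected_data_gb=500                   # expected data per node (example value)
required_gb=$((expected_data_gb * 2))  # 50% free-space rule => 2x headroom
echo "Provision at least ${required_gb}GB per node"
```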

  • Much more information can be found in the Cassandra 2.2 documentation.

Network Access

The default Cassandra packages are installed from the archive. To avoid this download, place a copy of the packages in a local archive and specify its location in the install_sources configuration option. The signing key is automatically added.

When using DataStax Enterprise, you need to specify the archive location containing the DataStax Enterprise .deb packages in the install_sources configuration item, and the signing key in the install_keys configuration item. Place the DataStax packages in a local archive to avoid downloading them from DataStax's servers.

Oracle Java SE

While OpenJDK is now supported, Oracle Java SE 8 is still often recommended. Unfortunately, this software is only accessible after accepting Oracle's click-through license, making deployments that use it much more cumbersome. You will need to download the Oracle Java SE 8 Server JRE for Linux and place the tarball at a URL accessible to your deployed units. The private_jre_url config item must be set to this URL.
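
A config fragment for this might look like the sketch below. The URL is a placeholder for wherever you host the tarball; the filename matches the example given in the private_jre_url option description.

```yaml
cassandra:
  jvm: oracle
  # Placeholder URL -- point this at your own copy of the tarball
  private_jre_url: http://10.0.3.1/server-jre-8u60-linux-x64.tar.gz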


To relate the Cassandra charm to a service that understands how to talk to Cassandra using Thrift or the native Cassandra protocol:

juju deploy cs:~cassandra-charmers/cqlsh
juju add-relation cqlsh cassandra:database

Alternatively, if you require a superuser connection, use the database-admin relation instead of database:

juju deploy cs:~cassandra-charmers/cqlsh cqlsh-admin
juju add-relation cqlsh-admin cassandra:database-admin

Charms using the recommended charms.reactive framework should include 'interface:cassandra' in their layer.yaml. Documentation for using the interface can be seen at

The cluster is configured to use the recommended 'snitch' (GossipingPropertyFileSnitch), so you will need to configure replication of your keyspaces using the NetworkTopologyStrategy replica placement strategy. The datacenter is set in the Cassandra charm configuration, and provided by the client interface if clients need to do this programmatically. For example, using the default datacenter named 'juju':

{ 'class': 'NetworkTopologyStrategy', 'juju': 3};
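
Applied in full, the replication map above becomes a CQL statement such as the following sketch; the keyspace name is a placeholder.

```sql
-- 'mykeyspace' is a hypothetical keyspace name
ALTER KEYSPACE mykeyspace
    WITH replication = {'class': 'NetworkTopologyStrategy', 'juju': 3};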

Although authentication is configured using the standard PasswordAuthenticator, by default no authorization is configured and the provided credentials will have access to all data on the cluster. For more granular permissions, you will need to set the authorizer in the service configuration to CassandraAuthorizer and manually grant permissions to the users.
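
For instance, after setting the authorizer config option to CassandraAuthorizer, a read-only role might be granted access like this. The role, password, and keyspace names are hypothetical.

```sql
-- Hypothetical names; requires the authorizer option set to CassandraAuthorizer
CREATE ROLE reports WITH PASSWORD = 'changeme' AND LOGIN = true;
GRANT SELECT ON KEYSPACE mykeyspace TO reports;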

Contact Information


The Juju mailing list



DataStax Enterprise


(string) Authentication backend. Only PasswordAuthenticator and AllowAllAuthenticator are supported. You should only use AllowAllAuthenticator for legacy applications that cannot provide authentication credentials.
(string) Authorization backend, implementing IAuthorizer; used to limit access and provide permissions. Out of the box, Cassandra provides AllowAllAuthorizer and CassandraAuthorizer. AllowAllAuthorizer allows any action by any user; set it to disable authorization. CassandraAuthorizer stores permissions in the system_auth.permissions table.
(string) Name of the Cassandra cluster. This is mainly used to prevent machines in one logical cluster from joining another. All Cassandra services you wish to cluster together must have the same cluster_name. This setting cannot be changed after service deployment.
(string) Commit log directory. The path is relative to /var/lib/cassandra or the block storage broker external mount point.
(int) Throttles compaction to the given total throughput (in MB/sec) across the entire system. The faster you insert data, the faster you need to compact in order to keep the sstable count down, but in general, setting this to 16 to 32 times the rate you are inserting data is more than sufficient. Setting this to 0 disables throttling. Note that this accounts for all types of compaction, including validation compaction.
(string) Space delimited data directories. Use multiple data directories to split data over multiple physical hardware drive partitions. Paths are relative to /var/lib/cassandra or the block storage broker external mount point.
(string) The node's datacenter used by the endpoint_snitch. e.g. "DC1". It cannot be changed after service deployment.
(string) The major DataStax Enterprise version to track when edition is set to 'dse'. One of "4.7", "4.8", "5.0", "5.1", "6.0".
(string) One of 'community', 'dse', or 'apache-snap'. 'community' uses the Apache Cassandra packages. 'dse' is for DataStax Enterprise. Selecting 'dse' overrides the jvm setting. 'apache-snap' uses a snap package of Apache Cassandra.
(string) Space separated list of extra deb packages to install.
(int) Maximum memory to use for sstable chunk cache and buffer pooling. 32MB of this is reserved for pooling buffers; the rest is used as a cache that holds uncompressed sstable chunks. Defaults to the smaller of 1/4 of heap or 512MB. This pool is allocated off-heap, so it is in addition to the memory allocated for heap. The cache also has on-heap overhead which is roughly 128 bytes per chunk (i.e. 0.2% of the reserved size if the default 64k chunk size is used). Memory is only allocated when needed.
(string) The size of the JVM's young generation in the heap. If you set this, you should also set max_heap_size. If in doubt, go with 100M per physical CPU core. The default is automatically tuned.
(string) DEPRECATED. Use Juju model-config settings. Value for the http_proxy and https_proxy environment variables. This causes pip(1) and other tools to perform downloads via the proxy server, e.g. http://squid.dc1.lan:8080
(string) List of signing keys for install_sources package sources, per charmhelpers standard format (a yaml list of strings encoded as a string). The keys should be the full ASCII armoured GPG public keys. While GPG key ids are also supported and looked up on a keyserver, operators should be aware that this mechanism is insecure. null can be used if a standard package signing key is used that will already be installed on the machine, and for PPA sources where the package signing key is securely retrieved from Launchpad.
- null # Apache and DataStax package signing keys are added automatically.
(string) charm-helpers standard listing of package install sources. If you are using DataStax Enterprise, you will need to override the defaults with your own username and password.
- deb 311x main
(string) Set kernel io scheduler for persistent storage.
(string) Which Java runtime environment to use. May be 'openjdk' or 'oracle'.
(string) Network interface used for connecting to other Cassandra nodes. Must correspond to a single IP address. By default, the unit's public IP address is used.
(string) Total size of Java memory heap, for example 1G or 512M. If you set this, you should also set heap_newsize. The default is automatically tuned.
(string) Used by the nrpe subordinate charms. A string that will be prepended to the instance name to set the host name in Nagios, e.g. juju-myservice-0. If you are running multiple environments with the same services in them, this allows you to differentiate between them.
(int) The percentage of data disk used to trigger a Nagios critical alert
(int) The percentage of data disk used to trigger a Nagios warning
(int) The percentage of heap used to trigger a Nagios critical alert
(int) The percentage of heap used to trigger a Nagios warning
(string) A comma-separated list of nagios servicegroups. If left empty, the nagios_context will be used as the servicegroup
(int) Native protocol port for native protocol clients.
(int) Number of tokens per node.
(string) The status of service-affecting packages will be set to this value in the dpkg database. Valid values are "install" and "hold".
(string) The cassandra partitioner to use. Use Murmur3Partitioner, unless another is required for backwards compatibility.
(string) URL for the private jre tar file. DSE requires Oracle Java SE 8 Server JRE (eg. server-jre-8u60-linux-x64.tar.gz).
(string) The rack used by the endpoint_snitch for all units in this service. e.g. "Rack1". This cannot be changed after deployment. It defaults to the service name. Cassandra will store replicated data in different racks whenever possible.
(string) Network interface used for client connections. Must correspond to a single IP address. By default, the unit's public IP address is used.
(int) DEPRECATED, ignored by Cassandra 3.11+. Thrift protocol port for legacy clients.
(string) Saved caches directory. The path is relative to /var/lib/cassandra or the block storage broker external mount point.
(string) How often snapd handles updates for installed snaps. The default (an empty string) is 4x per day. Set to "max" to check once per month based on the charm deployment date. You may also set a custom string as described in the 'refresh.timer' section here:
(int) Cluster secure communication port. TODO: currently unused; SSL is not yet configured.
(int) Cluster communication port
(int) Throttles all outbound streaming file transfers on nodes to the given total throughput in Mbps. This is necessary because Cassandra does mostly sequential IO when streaming data during bootstrap or repair, which can lead to saturating the network connection and degrading rpc performance. When unset, the default is 200 Mbps or 25 MB/s. 0 to disable throttling.
(int) When executing a scan, within or across a partition, we need to keep the tombstones seen in memory so we can return them to the coordinator, which will use them to make sure other replicas also know about the deleted rows. With workloads that generate a lot of tombstones, this can cause performance problems and even exhaust the server heap. Adjust the thresholds here if you understand the dangers and want to scan more tombstones anyway.
(int) When executing a scan, within or across a partition, we need to keep the tombstones seen in memory so we can return them to the coordinator, which will use them to make sure other replicas also know about the deleted rows. With workloads that generate a lot of tombstones, this can cause performance problems and even exhaust the server heap. Adjust the thresholds here if you understand the dangers and want to scan more tombstones anyway.
(boolean) Do not start the service before external storage has been mounted using the block storage broker relation. If you do not set this and you relate the service to the storage broker, your service will have started up using local disk, and will later be torn down and rebuilt when the external storage becomes available.