Placement

  • By OpenStack Charmers
  • Cloud
Channel           Revision  Published    Runs on
latest/edge       101       25 Mar 2024  Ubuntu 22.04
yoga/stable       94        08 Sep 2023  Ubuntu 22.04, Ubuntu 20.04
zed/stable        95        08 Sep 2023  Ubuntu 22.10, Ubuntu 22.04
xena/stable       93        08 Sep 2023  Ubuntu 20.04
wallaby/stable    96        12 Sep 2023  Ubuntu 20.04
victoria/stable   98        14 Sep 2023  Ubuntu 20.04
ussuri/stable     99        14 Sep 2023  Ubuntu 20.04, Ubuntu 18.04
train/candidate   66        28 Nov 2022  Ubuntu 18.04
train/edge        97        14 Sep 2023  Ubuntu 18.04
2024.1/candidate  91        24 Jan 2024  Ubuntu 23.10, Ubuntu 23.04, Ubuntu 22.04
2023.2/stable     100       30 Nov 2023  Ubuntu 23.10, Ubuntu 22.04
2023.1/stable     90        22 Aug 2023  Ubuntu 23.04, Ubuntu 22.10, Ubuntu 22.04

juju deploy placement --channel yoga/stable

Platform: Ubuntu 22.04, 20.04

Overview

The placement charm deploys Placement, the core OpenStack API service that tracks the inventory and usage of various cloud resources (e.g. compute, storage, network addresses). The charm works alongside other Juju-deployed OpenStack services.

Note: The placement charm is supported starting with OpenStack Train.

Important: This documentation supports version 3.x of the Juju client. See the OpenStack Charm guide if you are using the 2.9.x client.

Usage

Configuration

This section covers common and/or important configuration options. See file config.yaml for the full list of options, along with their descriptions and default values. See the Juju documentation for details on configuring applications.
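
For reference, a deployed application's current option values can be listed with the Juju client (the application name ‘placement’ below assumes the default name used in the deployment example later on this page):

juju config placement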

openstack-origin

The openstack-origin option sets the software sources. A common value is an OpenStack UCA release (e.g. ‘cloud:bionic-ussuri’ or ‘cloud:focal-victoria’). See Ubuntu Cloud Archive. The underlying host’s existing apt sources will be used if this option is not specified (this behaviour can be explicitly selected with the value ‘distro’).
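
As an illustration, the option can be given at deploy time or changed on a running application; the UCA pocket shown here is only an example:

juju deploy --config openstack-origin=cloud:focal-victoria placement
juju config placement openstack-origin=cloud:focal-victoria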

Deployment

Placement is often containerised. Here a single unit is deployed to a new container on machine ‘1’:

juju deploy --to lxd:1 placement

Placement requires these applications to be present: keystone, nova-cloud-controller, and a cloud database.

The database application is determined by the series: prior to focal, percona-cluster is used; otherwise it is mysql-innodb-cluster. In the example deployment below, mysql-innodb-cluster has been chosen.

juju deploy mysql-router placement-mysql-router
juju integrate placement-mysql-router:db-router mysql-innodb-cluster:db-router
juju integrate placement-mysql-router:shared-db placement:shared-db

Add relations to the remaining applications:

juju integrate placement:identity-service keystone:identity-service
juju integrate placement:placement nova-cloud-controller:placement

Upgrading to OpenStack Train

Prior to OpenStack Train, the placement API was managed by the nova-cloud-controller charm. Some extra steps are therefore needed when performing a Stein to Train upgrade. The documented procedure can be found on the Special charm procedures page in the OpenStack Charms Deployment Guide.

High availability

When more than one unit is deployed with the hacluster application, the charm will bring up an HA active/active cluster.

There are two mutually exclusive high availability options: using virtual IP(s) or DNS. In both cases the hacluster subordinate charm is used to provide the Corosync and Pacemaker backend HA functionality.
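
As a rough sketch of the VIP approach (assuming the standard vip option used by OpenStack API charms, an unused address on the API network, and ‘placement-hacluster’ as an illustrative name for the subordinate):

juju config placement vip=10.0.0.100
juju deploy hacluster placement-hacluster
juju integrate placement-hacluster:ha placement:ha
juju add-unit -n 2 placement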

See Infrastructure high availability in the OpenStack Charms Deployment Guide for details.

Documentation

The OpenStack Charms project maintains two documentation guides:

  • OpenStack Charm Guide
  • OpenStack Charms Deployment Guide

Bugs

Please report bugs on Launchpad.

