ironic-api

Supports: bionic, focal

Description

OpenStack bare metal provisioning, a.k.a. Ironic, is an integrated OpenStack program which aims to provision bare metal machines instead of virtual machines, forked from the Nova baremetal driver. It is best thought of as a bare metal hypervisor API and a set of plugins which interact with the bare metal hypervisors. By default, it uses PXE and IPMI to provision machines and to power them on and off, but Ironic also supports vendor-specific plugins which may implement additional functionality.


Overview

This charm provides the Ironic bare metal API service for an OpenStack cloud, starting with OpenStack Train. To get a fully functional Ironic deployment, you will also need to deploy the ironic-conductor and neutron-api-plugin-ironic charms.
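As a rough sketch of that partial topology (the exact relation endpoints are assumptions here and should be confirmed against each charm's own documentation), the companion charms could be added alongside ironic-api like so:

# Deploy the conductor and the Neutron plugin subordinate.
juju deploy ironic-conductor
juju deploy neutron-api-plugin-ironic

# Assumed wiring: the plugin subordinate attaches to an existing
# neutron-api application, and the conductor shares the same
# keystone/rabbitmq/mysql services used by ironic-api below.
juju add-relation neutron-api-plugin-ironic neutron-api
juju add-relation ironic-conductor keystone
juju add-relation ironic-conductor rabbitmq-server
juju add-relation ironic-conductor mysql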

Usage

Configuration

This charm requires no special configuration, outside of what is already required/recommended for any other OpenStack API service charm.
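If any of the common options do need tuning (they are listed under Configuration below), they can be set with juju config in the usual way. The values here are purely illustrative:

# Example only: install from an Ubuntu Cloud Archive pocket and trim the
# number of API worker processes.
juju config ironic-api openstack-origin=cloud:focal-victoria
juju config ironic-api worker-multiplier=0.25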

Deployment

To deploy (partial deployment only):

juju deploy ironic-api

juju add-relation ironic-api keystone
juju add-relation ironic-api rabbitmq-server
juju add-relation ironic-api mysql

The following interface is provided by this charm:

  • ironic-api - Used to signal readiness of the Ironic API to OpenStack services.
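With the relations in place, readiness can be sanity-checked from the client side. This is a sketch only: it assumes admin credentials are loaded and that the OpenStack CLI has the bare metal plugin (python-ironicclient) installed.

# All ironic-api units should settle into an active/idle state.
juju status ironic-api

# The bare metal service should be registered in the catalog and answering.
openstack endpoint list --service baremetal
openstack baremetal driver list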

Configuration

action-managed-upgrade
(boolean) If True, OpenStack upgrades for this charm are managed via Juju actions. You will still need to set openstack-origin to point to the new repository, but instead of the upgrade running automatically across all units, the charm will wait for you to execute the openstack-upgrade action on each unit. If False, it reverts to the existing behaviour of upgrading all units on configuration change. See the upgrade example at the end of this section.
database
(string) Database name for Ironic. Default: ironic
database-user
(string) Username for Ironic database access. Default: ironic
debug
(boolean) Enable debug logging
dns-ha
(boolean) Use DNS HA with MAAS 2.0. Note: if this is set, do not set the vip options below.
haproxy-client-timeout
(int) Client timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 90000ms is used.
haproxy-connect-timeout
(int) Connect timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 9000ms is used.
haproxy-queue-timeout
(int) Queue timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 9000ms is used.
haproxy-server-timeout
(int) Server timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 90000ms is used.
openstack-origin
(string) Repository from which to install. May be one of the following: distro (default), ppa:somecustom/ppa, a deb URL sources entry, or a supported Ubuntu Cloud Archive pocket, e.g. cloud:<series>-<openstack-release>, cloud:<series>-<openstack-release>/updates, cloud:<series>-<openstack-release>/staging, or cloud:<series>-<openstack-release>/proposed. See https://wiki.ubuntu.com/OpenStack/CloudArchive for info on which cloud archives are available and supported. Default: distro
os-admin-hostname
(string) The hostname or address of the admin endpoints created in the keystone identity provider. This value will be used for admin endpoints. For example, an os-admin-hostname set to 'api-admin.example.com' with ssl enabled will create the following endpoint for neutron-api: https://api-admin.example.com:9696/
os-admin-network
(string) The IP address and netmask of the OpenStack Admin network (e.g., 192.168.0.0/24). This network will be used for admin endpoints.
os-internal-hostname
(string) The hostname or address of the internal endpoints created in the keystone identity provider. This value will be used for internal endpoints. For example, an os-internal-hostname set to 'api-internal.example.com' with ssl enabled will create the following endpoint for neutron-api: https://api-internal.example.com:9696/
os-internal-network
(string) The IP address and netmask of the OpenStack Internal network (e.g., 192.168.0.0/24). This network will be used for internal endpoints.
os-public-hostname
(string) The hostname or address of the public endpoints created in the keystone identity provider. This value will be used for public endpoints. For example, an os-public-hostname set to 'api-public.example.com' with ssl enabled will create the following endpoint for neutron-api: https://api-public.example.com:9696/
os-public-network
(string) The IP address and netmask of the OpenStack Public network (e.g., 192.168.0.0/24). This network will be used for public endpoints.
rabbit-user
(string) Username used to access rabbitmq queue. Default: ironic
rabbit-vhost
(string) Rabbitmq vhost. Default: openstack
region
(string) OpenStack Region. Default: RegionOne
ssl_ca
(string) TLS CA to use to communicate with other components in a deployment. NOTE: This configuration option will take precedence over any certificates received over the ``certificates`` relation.
ssl_cert
(string) TLS certificate to install and use for any listening services. NOTE: This configuration option will take precedence over any certificates received over the ``certificates`` relation.
ssl_key
(string) TLS key to use with certificate specified as ``ssl_cert``. NOTE: This configuration option will take precedence over any certificates received over the ``certificates`` relation.
use-internal-endpoints
(boolean) OpenStack mostly defaults to using public endpoints for internal communication between services. If set to True, this option will configure services to use internal endpoints where possible.
use-syslog
(boolean) Setting this to True will allow supporting services to log to syslog.
verbose
(boolean) Enable verbose logging
vip
(string) Virtual IP(s) to use to front API services in an HA configuration. If multiple networks are being used, a VIP should be provided for each network, separated by spaces. See the HA example at the end of this section.
vip_cidr
(int) Default CIDR netmask to use for the HA VIP when it cannot be automatically determined. Default: 24
vip_iface
(string) Default network interface to use for the HA VIP when it cannot be automatically determined. Default: eth0
worker-multiplier
(float) The CPU core multiplier to use when configuring worker processes. By default, the number of workers for each daemon is set to twice the number of CPU cores a service unit has. When deployed in a LXD container, this default value will be capped to 4 workers unless this configuration option is set.
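
As referenced in the action-managed-upgrade entry above, a managed OpenStack upgrade could proceed roughly as follows. The cloud archive pocket and unit numbers are placeholders; upgrade units one at a time, in the order recommended for your deployment.

# Opt in to action-managed upgrades, then point at the new package source.
juju config ironic-api action-managed-upgrade=true
juju config ironic-api openstack-origin=cloud:focal-victoria

# Run the upgrade on each unit in turn (unit numbers are illustrative).
juju run-action --wait ironic-api/0 openstack-upgrade
juju run-action --wait ironic-api/1 openstack-upgrade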
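
As referenced in the vip entry above, the vip, dns-ha and haproxy-* options come into play when ironic-api is made highly available, which for OpenStack API charms is normally done with the hacluster subordinate charm. The sketch below assumes ironic-api already has multiple units and uses an illustrative address:

# Attach the hacluster subordinate to manage the virtual IP.
juju deploy hacluster ironic-api-hacluster
juju add-relation ironic-api ironic-api-hacluster

# A single VIP fronting the API units (address is illustrative).
juju config ironic-api vip=10.0.0.100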