octavia

Supports: bionic


Description

Octavia is an open source, operator-scale load balancing solution designed to
work with OpenStack.

Octavia was borne out of the Neutron LBaaS project. Starting with the Liberty
release of OpenStack, Octavia has become the reference implementation for
Neutron LBaaS version 2.

Octavia delivers load balancing services by managing a fleet of virtual
machines, containers, or bare metal servers, collectively known as amphorae,
which it spins up on demand. This on-demand, horizontal scaling feature
differentiates Octavia from other load balancing solutions, making it truly
suited "for the cloud."

OpenStack Rocky or later is required.


Overview

This charm provides the Octavia load balancer service for an OpenStack Cloud.

OpenStack Rocky or later is required.

Usage

Octavia relies on services from a fully functional OpenStack cloud and expects
to be able to add images to Glance, create networks in Neutron, optionally
store certificate secrets in Vault, and spin up instances with Nova.

juju deploy octavia --config openstack-origin=cloud:bionic-rocky
juju add-relation octavia rabbitmq-server
juju add-relation octavia mysql
juju add-relation octavia keystone
juju add-relation octavia vault
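
Once the relations have settled, the deployment can be verified. As a minimal
sketch, assuming the OpenStack command line client with the
python-octaviaclient plugin is installed on a host with access to the cloud:

juju status octavia
openstack loadbalancer list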

Bugs

Please report bugs on Launchpad.

For general questions please refer to the OpenStack Charm Guide.


Configuration

action-managed-upgrade
(boolean) If True, enables OpenStack upgrades for this charm via Juju actions. You will still need to set openstack-origin to the new repository, but instead of an upgrade running automatically across all units, it will wait for you to execute the openstack-upgrade action for this charm on each unit. If False, it reverts to the existing behaviour of upgrading all units on config change (see the upgrade sketch at the end of this section).
debug
(boolean) Enable debug logging
dns-ha
(boolean) Use DNS HA with MAAS 2.0. Note: if this is set, do not set the vip settings below.
haproxy-client-timeout
(int) Client timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 90000ms is used.
haproxy-connect-timeout
(int) Connect timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 9000ms is used.
haproxy-queue-timeout
(int) Queue timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 9000ms is used.
haproxy-server-timeout
(int) Server timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 90000ms is used.
openstack-origin
(string) Repository from which to install OpenStack. May be one of the following: distro (default), ppa:somecustom/ppa (the PPA name must include the OpenStack release), a deb url sources entry|key id, or a supported Ubuntu Cloud Archive pocket. Supported Ubuntu Cloud Archive pockets include: cloud:trusty-juno, cloud:trusty-kilo, cloud:trusty-liberty, cloud:trusty-mitaka. Note that updating this setting to a source that is known to provide a later version of OpenStack will trigger a software upgrade (see the upgrade sketch at the end of this section).
Default: distro
os-admin-hostname
(string) The hostname or address of the admin endpoints created in the Keystone identity provider. This value will be used for admin endpoints. For example, an os-admin-hostname set to 'api-admin.example.com' with SSL enabled will create the following endpoint for the Octavia API: https://api-admin.example.com:9876/
os-admin-network
(string) The IP address and netmask of the OpenStack Admin network (e.g., 192.168.0.0/24). This network will be used for admin endpoints.
os-internal-hostname
(string) The hostname or address of the internal endpoints created in the Keystone identity provider. This value will be used for internal endpoints. For example, an os-internal-hostname set to 'api-internal.example.com' with SSL enabled will create the following endpoint for the Octavia API: https://api-internal.example.com:9876/
os-internal-network
(string) The IP address and netmask of the OpenStack Internal network (e.g., 192.168.0.0/24). This network will be used for internal endpoints.
os-public-hostname
(string) The hostname or address of the public endpoints created in the Keystone identity provider. This value will be used for public endpoints. For example, an os-public-hostname set to 'api-public.example.com' with SSL enabled will create the following endpoint for the Octavia API: https://api-public.example.com:9876/ (see the endpoint sketch at the end of this section).
os-public-network
(string) The IP address and netmask of the OpenStack Public network (e.g., 192.168.0.0/24). This network will be used for public endpoints.
region
(string) OpenStack Region
Default: RegionOne
ssl_ca
(string) SSL CA to use with the certificate and key provided - this is only required if you are providing a privately signed ssl_cert and ssl_key.
ssl_cert
(string) SSL certificate to install and use for API ports. Setting this value and ssl_key will enable reverse proxying, point this service's entry in the Keystone catalog to use https, and override any certificate and key issued by Keystone (if it is configured to do so).
ssl_key
(string) SSL key to use with certificate specified as ssl_cert.
use-internal-endpoints
(boolean) OpenStack mostly defaults to using public endpoints for internal communication between services. If set to True, this option will configure services to use internal endpoints where possible.
use-syslog
(boolean) Setting this to True will allow supporting services to log to syslog.
vip
(string) Virtual IP(s) to use to front API services in HA configuration. If multiple networks are being used, a VIP should be provided for each network, separated by spaces (see the HA sketch at the end of this section).
vip_cidr
(int) Default CIDR netmask to use for HA vip when it cannot be automatically determined.
Default: 24
vip_iface
(string) Default network interface to use for HA vip when it cannot be automatically determined.
Default: eth0
worker-multiplier
(float) The CPU core multiplier to use when configuring worker processes. By default, the number of workers for each daemon is set to twice the number of CPU cores a service unit has. When deployed in a LXD container, this default value will be capped to 4 workers unless this configuration option is set.
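
As an illustration of worker-multiplier: per the description above, a unit
with 4 cores runs 8 workers per daemon by default, and a multiplier of 0.25
would reduce that to one worker per daemon. The value below is an example
only:

juju config octavia worker-multiplier=0.25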
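
A minimal upgrade sketch for action-managed-upgrade and openstack-origin,
assuming a charm-managed upgrade from Rocky to Stein on Ubuntu 18.04 (the
target pocket is an example):

juju config octavia action-managed-upgrade=true
juju config octavia openstack-origin=cloud:bionic-stein
# then run the upgrade one unit at a time
juju run-action octavia/0 openstack-upgrade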
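
An endpoint sketch for the os-* options; the hostname and network below are
illustrative values, not defaults:

juju config octavia os-public-hostname=api-public.example.com
juju config octavia os-public-network=192.168.0.0/24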
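
An HA sketch for the vip option, pairing the charm with hacluster as is usual
for OpenStack API charms; the VIP and the hacluster application name are
examples:

juju deploy hacluster octavia-hacluster
juju add-relation octavia octavia-hacluster
juju config octavia vip=10.5.100.10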