Principal charm that deploys ovn-northd, the OVN central control daemon, and ovsdb-server, the Open vSwitch Database (OVSDB) server.
The ovn-northd daemon is responsible for translating the high-level OVN configuration into logical configuration consumable by daemons such as ovn-controller.
The ovn-northd process talks to the OVN Northbound and Southbound databases.
The ovsdb-server exposes endpoints over relations implemented by the ovsdb interface.
The charm supports clustering of the OVSDB; you must have an odd number of units for this to work. Note that write performance decreases as the number of units increases.
Running multiple ovn-northd daemons is supported and they will operate in active/passive mode. The daemon uses a locking feature in the OVSDB to automatically choose a single active instance.
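The cluster state and the active ovn-northd instance can be checked from any unit. A minimal sketch, assuming default OVN control socket paths (these vary between OVN packaging versions, so adjust to your release):

```shell
# Inspect Southbound DB Raft cluster state. On older releases the
# control socket may live under /var/run/openvswitch/ instead.
sudo ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound

# Ask ovn-northd whether this instance currently holds the active lock.
sudo ovn-appctl -t ovn-northd status
```

The second command reports either "active" or "standby", reflecting the OVSDB locking behaviour described above.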
The ovn-central charm provides the Northbound and Southbound OVSDB databases
and the Open Virtual Network (OVN) central control daemon (ovn-northd). It is
used in conjunction with either the ovn-chassis subordinate charm or the
ovn-dedicated-chassis principal charm.
Note: The OVN charms are supported starting with OpenStack Train.
OVN makes use of Public Key Infrastructure (PKI) to authenticate and authorize
control plane communication. The charm therefore requires a Certificate
Authority to be present in the model, as represented by the vault charm.
Note: The ovn-central charm requires a minimum of three units to operate.
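A minimal deployment satisfying the requirements above could look as follows; the vault application name and the exact relation endpoint names are assumptions based on common usage of these charms, so verify them against your model:

```shell
# Deploy three units to satisfy the clustering requirement.
juju deploy -n 3 ovn-central

# A Certificate Authority must be present in the model; vault is
# the usual choice (assumed to be deployed already).
juju add-relation ovn-central:certificates vault:certificates
```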
This charm supports the use of Juju network spaces.
By binding the ovsdb, ovsdb-cms, and ovsdb-peer endpoints you can influence
which interface will be used for communication with consumers of the
Southbound DB, Cloud Management Systems (CMS), and cluster-internal traffic.
For example:

    juju deploy -n 3 --series focal \
       --bind "''=oam-space ovsdb=data-space" \
       ovn-central
OVN RBAC and securing the OVN services
The charm enables RBAC in the OVN Southbound database by default. The RBAC feature enforces authorization of individual chassis connecting to the database, and also restricts database operations.
In the event of an individual chassis being compromised, RBAC will make it more difficult to leverage database access for compromising other parts of the network.
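The effect of RBAC can be observed in the Southbound database's connection records. A sketch, to be run on an ovn-central unit; the expectation that the chassis-facing listener carries the ovn-controller role is an assumption based on how OVN RBAC is commonly configured:

```shell
# List the configured database listeners; with RBAC enabled, the
# row for the chassis-facing listener should show a "role" of
# ovn-controller, restricting what connecting chassis may write.
sudo ovn-sbctl list connection
```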
The charm automatically enables the firewall and will allow traffic from its cluster peers to ports 6641, 6643, 6644, and 16642. CMS clients will be allowed to talk to port 6641.
Anyone will be allowed to connect to port 6642.
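Which daemon listens on which port can be verified on a unit. A sketch using ss; the port roles in the comment follow the conventional OVN assignments and are not stated by the charm itself, so treat them as assumptions:

```shell
# Conventionally: 6641 Northbound DB, 6642 Southbound DB,
# 6643/6644 Raft cluster traffic, 16642 cluster-internal
# Southbound listener.
sudo ss -tlnp | grep -E ':(6641|6642|6643|6644|16642)\b'
```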
Please report bugs on Launchpad.
For general questions please refer to the OpenStack Charm Guide.
- (boolean) Allow the charm and packages to restart services automatically when required.
- (string) A string that will be prepended to the instance name to set the host name in Nagios. For example, the host name would be something like juju-myservice-0. If you are running multiple environments with the same services in them, this allows you to differentiate between them.
- (string) Comma separated list of nagios servicegroups for the service checks.
- (int) Raft leader election timeout in seconds. The charm allows a value between 1 and 60 seconds. The Open vSwitch ovsdb-server default of 1 second may not be sufficient for a loaded cluster, where the database server may be too busy serving requests to respond to elections in time. Using a higher value will increase the time to discover a real failure, but you must weigh that against the risk of spurious leader flapping and the unwanted churn that entails. NOTE: The ovsdb-server will refuse to decrease or increase the value of this timer by more than 2x the current value. The charm will compensate for this and decrease/increase the timer in increments, but care should be taken not to change the value too much in one operation.
- (int) Maximum number of seconds of idle time on connection to client before sending an inactivity probe message. The Open vSwitch ovsdb-server default of 5 seconds may not be sufficient depending on type and load of the CMS you want to connect to OVN.
- (string) Repository from which to install OVS+OVN. May be one of the following: distro (default), ppa:somecustom/ppa (PPA name must include UCA OpenStack Release name), a deb URL sources entry|key id, or a supported Ubuntu Cloud Archive pocket. Supported Ubuntu Cloud Archive pockets include: cloud:xenial-pike, cloud:xenial-queens, cloud:bionic-rocky. Note that updating this setting to a source that is known to provide a later version of OVN will trigger a software upgrade.
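The options above are set with juju config. A sketch assuming the option keys are ovsdb-server-election-timer, ovsdb-server-inactivity-probe, and source (verify the exact names with `juju config ovn-central` for your charm revision; the values shown are illustrative):

```shell
# Raise the Raft election timeout for a loaded cluster.
juju config ovn-central ovsdb-server-election-timer=4

# Give slow CMS clients a longer inactivity probe window.
juju config ovn-central ovsdb-server-inactivity-probe=60

# Track a newer OVN via an Ubuntu Cloud Archive pocket.
juju config ovn-central source=cloud:focal-wallaby
```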