neutron gateway #297

Supports: xenial bionic focal groovy hirsute impish


Neutron is a virtual network service for OpenStack, and a part of Netstack. Just as OpenStack Nova provides an API to dynamically request and configure virtual servers, Neutron provides an API to dynamically request and configure virtual networks. These networks connect "interfaces" from other OpenStack services (e.g. virtual NICs from Nova VMs). The Neutron API supports extensions to provide advanced network capabilities (e.g. QoS, ACLs, network monitoring). This charm provides central Neutron networking services as part of a Neutron-based OpenStack deployment.


The neutron-gateway charm deploys the data plane of Neutron, the core OpenStack service that provides software-defined networking (SDN) for Nova instances. It provides the Neutron Gateway service, which in turn supplies two key services: L3 network routing and DHCP. The charm works alongside other Juju-deployed OpenStack applications, in particular neutron-openvswitch, nova-compute, and nova-cloud-controller.

Note: Starting with OpenStack Train, the neutron-gateway and neutron-openvswitch charm combination can be replaced by the OVN charms (e.g. ovn-central, ovn-chassis, and neutron-api-plugin-ovn).



Configuration

This section covers common and/or important configuration options. See file config.yaml for the full list of options, along with their descriptions and default values. See the Juju documentation for details on configuring applications.


data-port

A bridge that Neutron Gateway will bind to, given in the form of a space-delimited bridge:port mapping (e.g. 'br-ex:ens8'). The port will be added to its corresponding bridge.

Note: If network device names are not consistent between hosts (e.g. 'eth1' and 'ens8') a list of values can be provided where a MAC address is used in the place of a device name. The charm will iterate through the list and configure the first matching interface.
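
For example, a MAC-based mapping can be set at deploy time or at runtime; the MAC addresses below are placeholders:

```shell
# Placeholder MACs: each unit resolves the first address that matches
# one of its own interfaces and adds that interface to br-ex.
juju config neutron-gateway \
    data-port='br-ex:aa:bb:cc:dd:ee:01 br-ex:aa:bb:cc:dd:ee:02'
```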

The specified bridge(s) should match the one(s) defined in the bridge-mappings option.

Flat or VLAN network types are supported.

The device itself must not have any L3 configuration. In MAAS, it must have an IP mode of 'Unconfigured'.


bridge-mappings

A space-delimited list of ML2 data provider:bridge mappings (e.g. 'physnet1:br-ex'). The specified bridge(s) should match the one(s) defined in the data-port option.


openstack-origin

The openstack-origin option sets the software sources. A common value is an OpenStack UCA release (e.g. 'cloud:bionic-ussuri' or 'cloud:focal-victoria'). See Ubuntu Cloud Archive. The underlying host's existing apt sources will be used if this option is not specified (this behaviour can be explicitly chosen by using the value 'distro').
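
For example, to select the Victoria UCA release on a Focal host:

```shell
# Point the charm at the focal-victoria Ubuntu Cloud Archive pocket.
juju config neutron-gateway openstack-origin=cloud:focal-victoria
```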


Deployment

These deployment instructions assume the following pre-existing applications: neutron-api, nova-cloud-controller, and rabbitmq-server.

Important: For Neutron Gateway to function properly, the nova-cloud-controller charm must have its network-manager option set to 'Neutron'.

Deploy Neutron Gateway:

juju deploy neutron-gateway
juju add-relation neutron-gateway:quantum-network-service nova-cloud-controller:quantum-network-service
juju add-relation neutron-gateway:neutron-plugin-api neutron-api:neutron-plugin-api
juju add-relation neutron-gateway:amqp rabbitmq-server:amqp

Port configuration

Network ports are configured with the bridge-mappings and data-port options but the neutron-api charm also has several relevant options (e.g. flat-network-providers, vlan-ranges, etc.). Additionally, the network topology can be further defined with supplementary openstack client commands.

Example 1
This configuration has a single external network and is typically used when floating IP addresses are combined with a GRE private network.

Charm option values (YAML):

    bridge-mappings: physnet1:br-ex
    data-port: br-ex:eth1
    flat-network-providers: physnet1

Supplementary commands:

openstack network create --provider-network-type flat \
   --provider-physical-network physnet1 --external external
openstack router set router1 --external-gateway external
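
The external network will typically also need a subnet. A sketch, where the subnet name, address range, gateway, and allocation pool are illustrative values that must match the actual provider network:

```shell
# 'external' is the flat provider network created above; all addresses
# below are examples only.
openstack subnet create external-subnet --network external \
    --subnet-range 10.0.8.0/24 --gateway 10.0.8.1 \
    --allocation-pool start=10.0.8.10,end=10.0.8.200 --no-dhcp
```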

Example 2
This configuration is for two networks, where an internal private network is directly connected to the gateway with public IP addresses but a floating IP address range is also offered.

Charm option values (YAML):

    bridge-mappings: physnet1:br-data external:br-ex
    data-port: br-data:eth1 br-ex:eth2
    flat-network-providers: physnet1 external

Example 3
This configuration has two external networks, where one is for public instance addresses and one is for floating IP addresses. Both networks are on the same physical network connection (but they might be on different VLANs).

Charm option values (YAML):

    bridge-mappings: physnet1:br-data
    data-port: br-data:eth1
    flat-network-providers: physnet1

Supplementary commands:

# Network names 'public' and 'floating' are examples; 'floating' must
# match the router's external gateway set below.
openstack network create --provider-network-type vlan \
   --provider-segment 400 \
   --provider-physical-network physnet1 --share public
openstack network create --provider-network-type vlan \
   --provider-segment 401 \
   --provider-physical-network physnet1 --share --external floating
openstack router set router1 --external-gateway floating

Legacy ext-port option

The ext-port option is deprecated and is superseded by the data-port option. The ext-port option always created a bridge called 'br-ex' for external networks that was used implicitly by external router interfaces.

The following will occur if both the data-port and ext-port options are set:

  • the neutron-gateway unit will be marked as 'blocked' to indicate that the charm is misconfigured
  • the ext-port option will be ignored
  • a warning will be logged

Instance MTU

When using the Open vSwitch plugin with GRE tunnels, the default MTU of 1500 can cause packet fragmentation due to GRE overhead. One solution is to increase the MTU on physical hosts and network equipment. When this is not feasible, the charm's instance-mtu option can be used to reduce instance MTU via DHCP:

juju config neutron-gateway instance-mtu=1400
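
Once instances renew their DHCP leases, the new MTU can be verified from within an instance; the interface name here is illustrative:

```shell
# Expect the link to report 'mtu 1400' after the lease is renewed.
ip link show ens3
```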

Note: The instance-mtu option is supported starting with OpenStack Havana.


Actions

This section covers Juju actions supported by the charm. Actions allow specific operations to be performed on a per-unit basis. To display action descriptions run juju actions --schema neutron-gateway. If the charm is not deployed then see file actions.yaml.

  • cleanup
  • get-status-dhcp
  • get-status-lb
  • get-status-routers
  • openstack-upgrade
  • pause
  • restart-services
  • resume
  • run-deferred-hooks
  • security-checklist
  • show-deferred-events
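
For example, the router status of a unit can be queried with (Juju 2.x syntax, matching the juju add-relation commands above):

```shell
juju run-action --wait neutron-gateway/0 get-status-routers
```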

Deferred service events

Operational or maintenance procedures applied to a cloud often lead to the restarting of various OpenStack services and/or the calling of certain charm hooks. Although normal, such events can be undesirable due to the service interruptions they can cause.

The deferred service events feature provides the operator the choice of preventing these service restarts and hook calls from occurring, which can then be resolved at a more opportune time.

See the Deferred service events page in the OpenStack Charms Deployment Guide for an in-depth treatment of this feature.
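
A typical workflow is to inspect what has been deferred on a unit and then apply the restarts at a convenient time. The deferred-only parameter shown here is an assumption; check the restart-services action schema for your charm revision:

```shell
# Show outstanding deferred events, then restart only the deferred services.
juju run-action --wait neutron-gateway/0 show-deferred-events
juju run-action --wait neutron-gateway/0 restart-services deferred-only=true
```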


The OpenStack Charms project maintains two documentation guides:

  • OpenStack Charm Guide: the primary source of information for OpenStack charms
  • OpenStack Charms Deployment Guide: a step-by-step guide for deploying OpenStack with charms


Please report bugs on Launchpad.


(string) Experimental: enable an AppArmor profile. Valid settings: 'complain', 'enforce', or 'disable'. AppArmor is disabled by default.
(boolean) If True enables openstack upgrades for this charm via juju actions. You will still need to set openstack-origin to the new repository but instead of an upgrade running automatically across all units, it will wait for you to execute the openstack-upgrade action for this charm on each unit. If False it will revert to existing behavior of upgrading all units on config change.
(string) Space-separated list of ML2 data bridge mappings with format <provider>:<bridge>.
(boolean) Juju propagates availability zone information to charms from the underlying machine provider (such as MAAS). This option allows the charm to use JUJU_AVAILABILITY_ZONE to set default_availability_zone for Neutron agents (the DHCP and L3 agents). It overrides the default-availability-zone charm config setting only when the Juju provider sets JUJU_AVAILABILITY_ZONE.
(string) Space-delimited list of bridge:port mappings. Specified ports will be added to their corresponding specified bridge. The bridges will allow usage of flat or VLAN network types with Neutron and should match those defined in bridge-mappings. Ports can be specified through the name or MAC address of the interface to be added to the bridge. If MAC addresses are used, you may provide multiple bridge:mac pairs for the same bridge so as to be able to configure multiple units; in this case the charm will run through the provided MAC addresses for each bridge until it finds one it can resolve to an interface name. Any changes (subsequent to the initial setting) made to the value of this option will merely add the new values along with the existing ones. If removal of old values is desired, it has to be done manually with the "ovs-vsctl" command on the affected units. If the new values conflict with the previous ones, it may cause a network outage, as seen in bug
(boolean) Enable debug logging.
(string) Default availability zone to use for agents (l3, dhcp) on this machine. If this option is not set, the default availability zone 'nova' is used. If customize-failure-domain is set to True, it will override this option only if an AZ is set by the Juju provider. If JUJU_AVAILABILITY_ZONE is not set, the value specified by this option will be used regardless of customize-failure-domain's setting. NOTE: Router and Network objects have a property called availability_zone_hints which can be used to restrict dnsmasq and router namespace placement by DHCP and L3 agents to specific neutron availability zones. Neutron AZs are not tied to Nova AZs but their names can match.
(boolean) Manually disable lbaas services. Set this option to True if Octavia is used with neutron. This option is ignored for Train+ OpenStack.
(string) A comma-separated list of DNS servers which will be used by dnsmasq as forwarders.
(string) Comma-separated list of key=value config flags with the additional dhcp options for neutron dnsmasq.
(boolean) Allow the charm and packages to restart services automatically when required.
(boolean) Enable metadata on an isolated network (no router ports).
(boolean) Optional configuration to support use of the linux router. Note that this is used only for the Cisco n1kv plugin.
(boolean) The metadata network is used by solutions which do not leverage the l3 agent for providing access to the metadata service.
(string) [DEPRECATED] Use bridge-mappings and data-port to create a network which can be used for external connectivity. You can call the network external and the bridge br-ex by convention, but neither is required. Space-delimited list of external ports to use for routing of instance traffic to the external public network. Valid values are either MAC addresses (in which case only MAC addresses for interfaces without an IP address already assigned will be used) or interface names (e.g. eth0). Note that if data-port is used then this config item is ignored, a warning is logged, and the unit is marked as blocked in order to indicate that the charm is misconfigured.
(string) Optional configuration to set the external-network-id. Only needed when configuring multiple external networks and should be used in conjunction with run-internal-router.
(string) Firewall driver to use to support use of security groups with instances; valid values include iptables_hybrid (default) and openvswitch. This config option is ignored for < Queens.
(int) This option sets the maximum queue size for log entries. Can be used to avoid excessive memory consumption. WARNING: Should be NOT LESS than 25. (Available from Stein)
(string) This option allows setting a path for Firewall Group logs. A valid file system path must be provided. If this option is not provided Neutron will use syslog as a destination. (Available from Stein)
(int) Log entries are queued for writing to a log file when a packet rate exceeds the limit set by this option. Possible values: null (no rate limitation), integer values greater than 100. WARNING: Should be NOT LESS than 100, if set (if null logging will not be rate limited). (Available from Stein)
(string) Space-delimited list of Neutron flat network providers.
(string) Default network interface on which HA cluster will bind to communicate with the other members of the HA Cluster.
(boolean) If True will enable Pacemaker to monitor the neutron-ha-monitor daemon on every neutron-gateway unit, which detects neutron agent status and reschedules resources hosted on failed agents, detects local errors and releases resources when the network is unreachable, and performs any necessary recovery tasks. This feature targets releases earlier than Juno, which don't natively support HA in Neutron itself.
(int) Default multicast port number that will be used to communicate between HA Cluster nodes.
(string) Apply system hardening. Supports a space-delimited list of modules to run. Supported modules currently include os, ssh, apache and mysql.
(int) Configure DHCP services to provide MTU configuration to instances within the cloud. This is useful in deployments where it's not possible to increase MTU on switches and physical servers to accommodate the packet overhead of using GRE tunnels.
(string) IPFIX target with the format "IP_Address:Port". This will enable IPFIX exporting on all OVS bridges to the target, including br-int and br-ext.
(int) Specifies the frequency (in seconds) at which HA routers will check their external network gateway by performing an ICMP ping between the virtual routers. When the ping check fails, this will trigger the HA routers to failover to another node. A value of 0 will disable this check. This setting only applies when using l3ha and dvr_snat. . WARNING: Enabling the health checks should be done with caution as it may lead to rapid failovers of HA routers. ICMP pings are low priority and may be dropped or take longer than the 1 second afforded by neutron, which leads to routers failing over to other nodes.
(string) A space-separated list of kernel modules to load before sysctl options are applied by the charm and system boot. This ensures the sysctl options exist and can be set correctly.
(string) Used by the nrpe-external-master subordinate charm. A string that will be prepended to the instance name to set the host name in Nagios, e.g. 'juju-myservice-0'. If you're running multiple environments with the same services in them this allows you to differentiate between them.
(string) A comma-separated list of Nagios service groups. If left empty, the nagios_context will be used as the servicegroup
(string) RabbitMQ Nova user
(string) RabbitMQ Nova Virtual Host
(string) Repository from which to install. May be one of the following: distro (default), ppa:somecustom/ppa, a deb url sources entry, or a supported Ubuntu Cloud Archive pocket, e.g. cloud:<series>-<openstack-release>, cloud:<series>-<openstack-release>/updates, cloud:<series>-<openstack-release>/staging, or cloud:<series>-<openstack-release>/proposed. See the Ubuntu Cloud Archive documentation for info on which cloud archives are available and supported. NOTE: updating this setting to a source that is known to provide a later version of OpenStack will trigger a software upgrade unless action-managed-upgrade is set to True.
(string) The IP address and netmask of the OpenStack Data network, in CIDR notation (e.g. 192.168.0.0/24). This network will be used for tenant network traffic in overlay networks.
(string) "True" or "False" string value. It is safe to leave this option unset. This option allows the DHCP agent to use a veth interface for OVS in order to support kernels with limited namespace support. i.e. Trusty. Changing the value after neutron DHCP agents are created will break access. The charm will go into a blocked state if this is attempted.
(int) Timeout in seconds for ovsdb commands. (Available from Queens)
(string) Network configuration plugin to use for quantum. Supported values include: ovs (ML2 + Open vSwitch), nsx (VMware NSX), n1kv (Cisco N1kv), ovs-odl (ML2 + Open vSwitch with OpenDaylight Controller).
(string) RabbitMQ user
(string) RabbitMQ Virtual Host
(string) Optional configuration for how the L3 agent option handle_internal_only_routers is configured: all => set to true everywhere; none => set to false everywhere; leader => set to true on one node (the leader) and false everywhere else. Use leader and none when configuring multiple floating pools.
(string) YAML-formatted associative array of sysctl key/value pairs to be set persistently, e.g. '{ kernel.pid_max : 4194303 }'. Default: { net.ipv4.neigh.default.gc_thresh1 : 128, net.ipv4.neigh.default.gc_thresh2 : 28672, net.ipv4.neigh.default.gc_thresh3 : 32768, net.ipv6.neigh.default.gc_thresh1 : 128, net.ipv6.neigh.default.gc_thresh2 : 28672, net.ipv6.neigh.default.gc_thresh3 : 32768, net.nf_conntrack_max : 1000000, net.netfilter.nf_conntrack_buckets : 204800, net.netfilter.nf_conntrack_max : 1000000 }
(boolean) Setting this to True will allow supporting services to log to syslog.
(string) A JSON-formatted string that will serve as vendor metadata (via the "StaticJSON" provider) to all VMs within an OpenStack deployment, regardless of project or domain. For deployments of Rocky or later this value is ignored. Please set the corresponding value in the nova-cloud-controller charm.
(string) A URL serving JSON-formatted data that will serve as vendor metadata (via the "DynamicJSON" provider) to all VMs within an OpenStack deployment, regardless of project or domain. Only supported in OpenStack Newton and higher. For deployments of Rocky or later this value is ignored. Please set the corresponding value in the nova-cloud-controller charm.
(boolean) Enable verbose logging.
(string) Space-delimited list of <physical_network>:<vlan_min>:<vlan_max> or <physical_network> specifying physical_network names usable for VLAN provider and tenant networks, as well as ranges of VLAN tags on each available for allocation to tenant networks.
(float) The CPU core multiplier to use when configuring worker processes for this service. By default, the number of workers for each daemon is set to twice the number of CPU cores a service unit has. This default value will be capped to 4 workers unless this configuration option is set.