Neutron is a virtual network service for OpenStack and a part of Netstack. Just as OpenStack Nova provides an API to dynamically request and configure virtual servers, Neutron provides an API to dynamically request and configure virtual networks. These networks connect "interfaces" from other OpenStack services (e.g., virtual NICs from Nova VMs). The Neutron API supports extensions to provide advanced network capabilities (e.g., QoS, ACLs, network monitoring). This charm provides central Neutron networking services as part of a Neutron-based OpenStack deployment.
Neutron provides flexible software defined networking (SDN) for OpenStack.
This charm is designed to be used in conjunction with the rest of the OpenStack-related charms in the charm store to virtualize the network that Nova Compute instances plug into.
It is designed as a replacement for nova-network; however, it does not yet support all of the features of nova-network (such as multi-host), so it may not be suitable for all deployments.
Neutron supports a rich plugin/extension framework for proprietary networking solutions and supports (in core) Nicira NVP, NEC, Cisco and others.
The OpenStack charms currently only support the fully free Open vSwitch plugin and implement the 'Provider Router with Private Networks' use case.
See the upstream Neutron documentation for more details.
In order to use Neutron with OpenStack, you will need to deploy the nova-compute and nova-cloud-controller charms with the network-manager configuration set to 'Neutron':
```
nova-cloud-controller:
  network-manager: Neutron
```
This decision must be made prior to deploying OpenStack with Juju, as Neutron is baked into these charms from install onwards:
```
juju deploy nova-compute
juju deploy --config config.yaml nova-cloud-controller
juju add-relation nova-compute nova-cloud-controller
```
The Neutron Gateway can then be added to the deployment:
```
juju deploy quantum-gateway
juju add-relation quantum-gateway mysql
juju add-relation quantum-gateway rabbitmq-server
juju add-relation quantum-gateway nova-cloud-controller
```
The gateway provides two key services: L3 network routing and DHCP services.
These are both required in a fully functional Neutron OpenStack deployment.
See the upstream Neutron documentation on multiple external networks for more details.
External Port Configuration
If the port to be used for external traffic is consistent across all physical servers, then it can be specified by simply setting ext-port to the NIC id:
```
quantum-gateway:
  ext-port: eth2
```
However, if it varies between hosts, then the MAC addresses of the external NICs for each host can be passed as a space-separated list:
```
quantum-gateway:
  ext-port: <MAC ext port host 1> <MAC ext port host 2> <MAC ext port host 3>
```
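To collect the MAC addresses for each host's external NIC, something like the following can be run on each physical server (a sketch assuming a typical Linux host exposing NICs under /sys/class/net):

```shell
# Print each network interface alongside its MAC address, so the
# values for the external NICs can be pasted into the ext-port list.
for nic in /sys/class/net/*; do
    echo "$(basename "$nic") $(cat "$nic/address")"
done
```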
Multiple Floating Pools
If multiple floating pools are needed, then an L3 agent (which corresponds to a quantum-gateway for the purposes of this charm) is needed for each one. Each gateway needs to be deployed as a separate service so that the external network id can be set differently for each gateway, e.g.:
```
juju deploy quantum-gateway quantum-gateway-extnet1
juju add-relation quantum-gateway-extnet1 mysql
juju add-relation quantum-gateway-extnet1 rabbitmq-server
juju add-relation quantum-gateway-extnet1 nova-cloud-controller
juju deploy quantum-gateway quantum-gateway-extnet2
juju add-relation quantum-gateway-extnet2 mysql
juju add-relation quantum-gateway-extnet2 rabbitmq-server
juju add-relation quantum-gateway-extnet2 nova-cloud-controller
```

Create extnet1 and extnet2 via the neutron client and take note of their ids, then:

```
juju set quantum-gateway-extnet1 "run-internal-router=leader"
juju set quantum-gateway-extnet2 "run-internal-router=none"
juju set quantum-gateway-extnet1 "external-network-id=<extnet1 id>"
juju set quantum-gateway-extnet2 "external-network-id=<extnet2 id>"
```
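Creating the external networks themselves can look roughly like the following (a sketch only: the network names are illustrative, provider-network arguments are omitted, and the exact neutron client syntax varies by release):

```shell
# Create two external networks, then list them to obtain the ids
# to pass to "juju set ... external-network-id=<id>".
neutron net-create extnet1 --router:external=True
neutron net-create extnet2 --router:external=True
neutron net-list
```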
When using the Open vSwitch plugin with GRE tunnels, the default MTU of 1500 can cause packet fragmentation due to GRE overhead. One solution is to increase the MTU on physical hosts and network equipment. When this is not possible or practical, this charm's instance-mtu option can be used to reduce instance MTU via DHCP:
```
juju set quantum-gateway instance-mtu=1400
```
OpenStack upstream documentation recommends an MTU value of 1400 (see the OpenStack documentation).
Note that this option was added in Havana and will be ignored in older releases.
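The arithmetic behind the reduced MTU is straightforward; assuming an IPv4 underlay and a GRE header carrying the optional key field, the per-packet overhead eats into the physical 1500-byte MTU:

```shell
# GRE encapsulation overhead on an IPv4 underlay (assumed values):
#   outer IPv4 header: 20 bytes
#   GRE header:         4 bytes, plus 4 bytes when the key field is used
PHYS_MTU=1500
OVERHEAD=$((20 + 4 + 4))
echo $((PHYS_MTU - OVERHEAD))   # prints 1472: largest inner packet that avoids fragmentation
```

The recommended value of 1400 simply leaves comfortable headroom below that limit.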
- Provide more network configuration use cases.
- Support VLAN in addition to GRE+OpenFlow for L2 separation.
- (string) Database name
- (string) Username for database access
- (boolean) Enable debug logging
- (string) A space-separated list of external ports to use for routing of instance traffic to the external public network. Valid values are either MAC addresses (in which case only MAC addresses for interfaces without an IP address already assigned will be used) or interface names (e.g., eth0).
- (string) Optional configuration to set the external-network-id. Only needed when configuring multiple external networks and should be used in conjunction with run-internal-router.
- (int) Configure DHCP services to provide MTU configuration to instances within the cloud. This is useful in deployments where it is not possible to increase the MTU on switches and physical servers to accommodate the packet overhead of using GRE tunnels.
- (string) RabbitMQ Nova user
- (string) RabbitMQ Nova Virtual Host
- (string) Optional configuration to support use of additional sources such as:
  - ppa:myteam/ppa
  - cloud:precise-folsom/proposed
  - cloud:precise-folsom
  - deb http://my.archive.com/ubuntu main|KEYID

  Note that quantum/neutron is only supported >= Folsom.
- (string) The IP address and netmask of the OpenStack Data network (e.g., 192.168.0.0/24). This network will be used for tenant network traffic in overlay networks.
- (string) Network configuration plugin to use for quantum. Supported values include:
  - ovs: Open vSwitch
  - nvp|nsx: Nicira NVP/VMware NSX
- (string) RabbitMQ user
- (string) RabbitMQ Virtual Host
- (string) Optional configuration controlling how the L3 agent option handle_internal_only_routers is set:
  - all: set to true everywhere
  - none: set to false everywhere
  - leader: set to true on one node (the leader) and false everywhere else

  Use leader and none when configuring multiple floating pools.
- (boolean) If set to True, supporting services will log to syslog.
- (boolean) Enable verbose logging