A combined NexentaEdge OpenStack Swift API gateway and data node.
The combined data node and OpenStack Swift gateway provides data storage capacity to the NexentaEdge cluster and functions as an OpenStack Swift API service.
Whether used with OpenStack or standalone, the Swift gateway provides HTTP/REST-based object storage that conforms to the OpenStack Swift API specification.
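Because the gateway implements the standard Swift API, it can be exercised with any Swift client or a plain HTTP request. A minimal sketch follows; the endpoint URL, account path, and token are placeholders, and in a real deployment the token is obtained from Keystone::

    # Placeholder gateway endpoint and Keystone-issued token.
    TOKEN="AUTH_tk_placeholder"
    GATEWAY="http://swift-gw.example.com:8080"

    # Create a container, then upload an object into it (standard Swift API verbs).
    curl -i -X PUT -H "X-Auth-Token: $TOKEN" "$GATEWAY/v1/AUTH_tenant/mycontainer"
    curl -i -X PUT -H "X-Auth-Token: $TOKEN" \
         -T hello.txt "$GATEWAY/v1/AUTH_tenant/mycontainer/hello.txt"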
This document is intended for Nexenta Partners and Nexenta customers to aid in the planning and execution of a NexentaEdge proof of concept (POC) evaluation. One deployment option for NexentaEdge leverages Canonical's Juju management tool. The functionality of the NexentaEdge deployment tool (NEDEPLOY) and administration tool (NEADM) is contained in a collection of Juju charms. Individual charms allow you to deploy a NexentaEdge cluster with a specified number of nodes, add nodes to the cluster, and configure OpenStack Cinder and Swift storage services.
Use this document together with the NexentaEdge product guides and Configuration Guidelines as these documents outline suitable platforms and components for the solution. Additional product information on NexentaEdge can be found at https://nexenta.com/products/nexentaedge.
To aid in the execution of a POC, it is important to know the planned hardware and software configuration as well as the test cases. At a minimum, Nexenta recommends that a list of all components and suggested configuration is created prior to the actual software installation.
At a high level, this list should include:
- Hardware configuration of all servers as well as their intended use case (gateway, storage node, etc.)
- OS Version and Configuration information such as network connectivity
- Network infrastructure information, switch brand and model, as well as topology layout
Collecting this information and providing it to the person performing the installation greatly improves the efficiency of the process and reduces the risk of misconfiguration.
Specific server configurations designed for NexentaEdge can be found in the Configuration Guidelines document. Nexenta recommends that a minimum of 5 servers be used for POC evaluations that require Block I/O. Functional testing can be done on 3 servers, but NexentaEdge is designed for larger configurations, and production deployments should start at 5 nodes.
At a minimum, each server should meet the following recommendations:
- 4 HDD + 1 SSD
- 64 GB RAM
- Intel Xeon E5 v2/v3 CPU
- Dual 10-GbE
When using Block services (iSCSI or NBD) each node needs to have at least one SSD available for Journaling. Additional detail is available in the NexentaEdge Configuration Guidelines.
Network Equipment Requirements
NexentaEdge makes extensive use of modern networking technologies such as IPv6 and multicasting. As a result, the networking equipment must meet certain requirements for a successful deployment. Both switches and network interface cards must support 10 Gigabit Ethernet, IPv6, and multicasting.
As a general guideline, NexentaEdge should work with any non-blocking, enterprise-class 10 Gigabit Ethernet switch that supports:
- Multicasting over IPv6
- Jumbo frames (9000 MTU)
If the switch supports advanced capabilities like Multicast Listener Discovery (MLD) snooping, Data Center Bridging (DCB), Priority Flow Control (PFC) and Enhanced Transmission Selection (ETS), this functionality can be leveraged by NexentaEdge to improve the networking performance and reliability.
A list of tested network switches can be found in the NexentaEdge Configuration Guidelines.
Running NexentaEdge on low-end/low-cost networking equipment is not recommended: such equipment generally handles IPv6 and multicasting poorly and does not guarantee non-blocking transfers.
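Jumbo frame support must also be enabled on the hosts themselves, not only on the switches. A quick way to check and set the MTU on a Linux node (the interface name 'eth1' is a placeholder) is::

    ip link show eth1                  # the current MTU is printed in the first line
    sudo ip link set eth1 mtu 9000     # raise the MTU for this session

Note that the second command is not persistent; make the setting permanent through the distribution's network configuration.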
The NexentaEdge Juju charms run on the following supported software versions:
- NexentaEdge 1.1.0
- Ubuntu Linux 14.04.3 LTS
- Juju core 1.25 stable or newer
- MAAS 1.9.0+bzr4533-0ubuntu1~trusty1 or newer
- OpenStack releases Icehouse, Juno, Kilo, Liberty, Mitaka
Each NexentaEdge node must have a replicast Ethernet adapter connected to a dedicated replicast network, and the 'replicast_eth' configuration option of each charm must be set accordingly.
For online activation, the NexentaEdge activation key should be entered into the 'activation key' option of the nexentaedge-mgmt charm.
By default, the NexentaEdge charms are deployed without a Docker container. To deploy inside Docker, set the 'nodocker' option to 'false' in the charm or bundle configuration.
By default, the NexentaEdge charms are deployed with the 'balanced' storage profile. To change the storage profile, edit the 'profile' option on each NexentaEdge charm. Available options are 'capacity', 'balanced', and 'performance'.
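The options described above can be collected in a bundle or charm configuration. A minimal sketch follows; the interface name and activation key are placeholders::

    nexentaedge:
      replicast_eth: eth1              # placeholder: dedicated replicast interface
      profile: performance
      nodocker: false                  # perform Docker preparation on the nodes
    nexentaedge-mgmt:
      replicast_eth: eth1
      "activation key": YOUR-KEY-HERE  # placeholder for the online activation key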
The nexentaedge-swift-gw charm has a mandatory configuration option::

    replicast_eth: Network interface name of the dedicated storage network used for cluster communication

It also provides Swift gateway options::

    operator-roles: Comma-separated list of Swift operator roles; used when integrating with OpenStack Keystone. Default value is "Member,Admin".
    region: OpenStack region that the NEDGE gateway serves; used when integrating with OpenStack Keystone. Default value is "RegionOne".

as well as further configuration options with default values::

    profile: Storage profile: 'capacity', 'balanced' or 'performance'. Default value is 'balanced'.
    nodocker: Do not perform Docker preparation on the node. Default is 'True'.
The charm also supports specifying which storage devices to use in the NEdge cluster::

    exclude: Comma-separated list of disks to exclude. The exclude list is stored persistently and tells the system to always skip those disks during automated disk layout. The most common use case is excluding disk(s) used by another application. Note that automated layout skips all partitioned and/or mounted disks by default.
    reserved: Comma-separated list of disks to reserve. Reserved disks are skipped during automated disk layout; the most common use case is reserving disks for future use.

For example::

    nexentaedge-swift-gw:
      exclude: sdd,sde
      reserved: sdf
Boot things up by using::
    juju deploy nexentaedge-swift-gw
You can then deploy a NEdge cluster with a Swift gateway by simply running::
    juju deploy -n 10 nexentaedge
    juju deploy nexentaedge-mgmt
    juju deploy nexentaedge-swift-gw
    juju add-relation nexentaedge nexentaedge-mgmt
    juju add-relation nexentaedge-mgmt nexentaedge-swift-gw
Finally, add a relation from the NEdge Swift gateway to the Keystone service::
    juju add-relation nexentaedge-swift-gw keystone
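Once the relations are added, deployment progress can be followed with standard Juju commands; the gateway is ready when all units report a started state. The unit number below is a placeholder::

    juju status nexentaedge-swift-gw
    juju ssh nexentaedge-swift-gw/0    # placeholder unit; inspect the gateway node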
Author: Nexenta firstname.lastname@example.org
- exclude (string): Comma-separated list of disks to exclude. The exclude list is stored persistently and tells the system to always skip those disks during automated disk layout. The most common use case is excluding disk(s) used by another application. Note that automated layout skips all partitioned and/or mounted disks by default. Example: sdd,sde
- nodocker (boolean): Do not perform Docker preparation on the server node.
- operator-roles (string): Comma-separated list of Swift operator roles; used when integrating with OpenStack Keystone.
- profile (string): Storage profile: 'capacity', 'balanced' or 'performance'. Default is 'balanced'.
- region (string): OpenStack region that the NEDGE gateway serves; used when integrating with OpenStack Keystone.
- replicast_eth (string): Network interface name of the dedicated storage network used for cluster communication.
- reserved (string): Comma-separated list of disks to reserve. Reserved disks are skipped during automated disk layout; the most common use case is reserving disks for future use. Example: sdg,sdk