A combined NexentaEdge management and data node. Every NexentaEdge cluster must have one management node in addition to at least three data nodes. The management node enables command-line management of the NexentaEdge cluster and its services.
This document is intended for Nexenta Partners and Nexenta customers to aid in the planning and execution of a NexentaEdge proof of concept (POC) evaluation. One deployment option for NexentaEdge leverages Canonical's Juju management tool. The functionality of the NexentaEdge deployment tool (NEDEPLOY) and administration tool (NEADM) is contained in a collection of Juju charms. Individual charms allow you to deploy a NexentaEdge cluster with a specified number of nodes, add nodes to the cluster, and configure OpenStack Cinder and Swift storage services.
Use this document together with the NexentaEdge product guides and Configuration Guidelines as these documents outline suitable platforms and components for the solution. Additional product information on NexentaEdge can be found at https://nexenta.com/products/nexentaedge.
To aid in the execution of a POC, it is important to know the planned hardware and software configuration as well as the test cases. At a minimum, Nexenta recommends that a list of all components and the suggested configuration be created prior to the actual software installation.
At a high level, this list should include:
- Hardware configuration of all servers as well as their intended use case (gateway, storage node, etc)
- OS Version and Configuration information such as network connectivity
- Network infrastructure information, switch brand and model, as well as topology layout
Collecting this information and providing it to the resource that performs the installation greatly improves the efficiency of the process and reduces the risk of misconfiguration.
Specific server configurations designed for NexentaEdge can be found in the Configuration Guidelines document, and Nexenta recommends that a minimum of 5 servers are used for POC evaluations that require Block I/O. Functional testing can be done on 3 servers, but it is important to understand that NexentaEdge has been designed to work with larger configurations and production deployments should start at 5 nodes.
At a minimum, the servers should meet the following recommendations:
- 4 HDD + 1 SSD
- 64 GB RAM
- Intel Xeon E5 v2/v3 CPU
- Dual 10-GbE
When using Block services (iSCSI or NBD) each node needs to have at least one SSD available for Journaling. Additional detail is available in the NexentaEdge Configuration Guidelines.
Network Equipment Requirements
NexentaEdge makes extensive use of modern networking technologies such as IPv6 and multicasting. As a result, there are requirements on the networking equipment that must be fulfilled for successful deployments. Both switches and network interface cards must support 10 Gigabit Ethernet, IPv6, and multicasting.
As a general guideline, NexentaEdge should work with any 10 Gigabit Ethernet switch that supports:
- Non-blocking, enterprise-class switching
- Multicasting over IPv6
- 9000 MTU jumbo frame support
If the switch supports advanced capabilities like Multicast Listener Discovery (MLD) snooping, Data Center Bridging (DCB), Priority Flow Control (PFC) and Enhanced Transmission Selection (ETS), this functionality can be leveraged by NexentaEdge to improve the networking performance and reliability.
A list of tested network switches can be found in the NexentaEdge Configuration Guidelines.
It is not recommended to run NexentaEdge on low-end/low-cost networking equipment: in general, such equipment does not handle IPv6 and multicasting well and does not guarantee non-blocking transfers.
The NexentaEdge Juju charms run on the following supported software versions:
- NexentaEdge 1.1.0
- Ubuntu Linux 14.04.3 LTS
- Juju core 1.25 stable or newer
- MAAS 1.9.0+bzr4533-0ubuntu1~trusty1 or newer
- OpenStack releases Icehouse, Juno, Kilo, Liberty, Mitaka
Each NexentaEdge node must have a replicast Ethernet adapter connected to a dedicated replicast network, and the 'replicast_eth' configuration option of the charm must be set accordingly.
The online NexentaEdge activation key must be entered into the 'activation_key' option of the nexentaedge-mgmt charm.
By default, the NexentaEdge charms are deployed without a Docker container (the 'nodocker' option defaults to 'True'). To enable Docker preparation on the nodes, set the 'nodocker' option to 'false' in the charm or bundle configuration.
By default, the NexentaEdge charms are deployed using the 'balanced' storage profile. To change the storage profile, set the 'profile' option on each NexentaEdge charm. Available options are 'capacity', 'balanced', and 'performance'.
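As an illustration, the options discussed above might be supplied together in a charm configuration file. The interface name and activation key below are placeholders, not values from this document::

```yaml
# Illustrative configuration fragment for the nexentaedge-mgmt charm.
# Replace eth2 and the activation key with values for your environment.
nexentaedge-mgmt:
  replicast_eth: eth2          # placeholder: dedicated replicast network interface
  activation_key: "XXXX-XXXX"  # placeholder: online activation key from Nexenta
  nodocker: "True"             # default: no Docker preparation on the node
  profile: performance         # one of: capacity, balanced (default), performance
```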
The nexentaedge-mgmt charm has the following mandatory configuration options:
replicast_eth: Network interface name of the dedicated storage network used for cluster communication.
activation_key: Online activation key for the NexentaEdge cluster.
and the following optional settings with default values:
cluster_name: Internal NexentaEdge cluster name. Default value is 'clu1'.
tenant_name: Internal NexentaEdge tenant name. Default value is 'ten1'.
bucket_name: Internal NexentaEdge bucket name. Default value is 'buc1'.
profile: Storage profile: 'capacity', 'balanced' or 'performance'. Default value is 'balanced'.
nodocker: Do not perform Docker preparation on the node. Default value is 'True'.
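As a sketch, these optional settings could be stated explicitly in a configuration file; the values shown below simply restate the documented defaults::

```yaml
# Illustrative fragment: the documented defaults, written out explicitly.
nexentaedge-mgmt:
  cluster_name: clu1   # default cluster name
  tenant_name: ten1    # default tenant name
  bucket_name: buc1    # default bucket name
  profile: balanced    # default storage profile
```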
The charm also provides Swift gateway options, so the management node can be used as a Swift gateway::
operator-roles: Comma-separated list of Swift operator roles; used when integrating with OpenStack Keystone. Default value is "Member,Admin".
region: OpenStack region that the NEDGE gateway supports; used when integrating with OpenStack Keystone. Default value is "RegionOne".
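For example, the Swift gateway options might be overridden as follows; the role and region values below are illustrative and must match your Keystone deployment::

```yaml
# Illustrative fragment: Swift gateway settings for Keystone integration.
nexentaedge-mgmt:
  operator-roles: "Member,Admin"  # default Swift operator roles
  region: RegionOne               # default Keystone region
```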
The charm also supports specification of the storage devices to use in the NEDGE cluster::
exclude: Comma-separated list of disks to exclude. The exclude list is stored persistently and tells the system to never use these disks and to always skip them during automated disk layout. The most common use case is to exclude disk(s) used by another application. Note that automated disk layout skips all partitioned and/or mounted disks by default.
reserved: Comma-separated list of disks to reserve. The reserved list is used to skip disks during automated disk layout. The most common use case is to reserve disks for future use.
nexentaedge-mgmt:
  exclude: sdd,sde
  reserved: sdf
Boot things up by using::
juju deploy nexentaedge-mgmt
You can then deploy a NexentaEdge cluster by simply doing::
juju deploy -n 10 nexentaedge
juju deploy nexentaedge-mgmt
juju add-relation nexentaedge nexentaedge-mgmt
The management node can also be used as a Swift gateway::
juju add-relation nexentaedge-mgmt keystone
or as a Cinder gateway::
juju add-relation nexentaedge-mgmt cinder-nexentaedge
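The individual commands above can also be captured in a single Juju bundle. The following sketch is illustrative only: the interface name and activation key are placeholders, and the keystone and cinder-nexentaedge services are assumed to be defined elsewhere in the bundle or environment::

```yaml
# Illustrative Juju bundle sketch (values are placeholders, not defaults).
services:
  nexentaedge:
    charm: nexentaedge
    num_units: 10
  nexentaedge-mgmt:
    charm: nexentaedge-mgmt
    num_units: 1
    options:
      replicast_eth: eth2          # placeholder interface name
      activation_key: "XXXX-XXXX"  # placeholder activation key
relations:
  - [nexentaedge, nexentaedge-mgmt]
  - [nexentaedge-mgmt, keystone]            # Swift gateway via Keystone
  - [nexentaedge-mgmt, cinder-nexentaedge]  # Cinder gateway
```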
Author: Nexenta firstname.lastname@example.org
- activation_key (string): Activation key for the NEDGE cluster.
- bucket_name (string): NEDGE bucket name.
- cluster_name (string): NEDGE cluster name.
- exclude (string): Comma-separated list of disks to exclude. The exclude list is stored persistently and tells the system to never use these disks and to always skip them during automated disk layout. The most common use case is to exclude disk(s) used by another application. Note that automated disk layout skips all partitioned and/or mounted disks by default. Example: sdd,sde
- nodocker (boolean): Do not perform Docker preparation on the server node.
- operator-roles (string): Comma-separated list of Swift operator roles; used when integrating with OpenStack Keystone.
- profile (string): Storage profile: capacity, balanced or performance. Default value is 'balanced'.
- region (string): OpenStack region that the NEDGE gateway supports; used when integrating with OpenStack Keystone.
- replicast_eth (string): Network interface name of the dedicated storage network for cluster communication.
- reserved (string): Comma-separated list of disks to reserve. The reserved list is used to skip disks during automated disk layout. The most common use case is to reserve disks for future use. Example: sdg,sdk
- tenant_name (string): NEDGE tenant name.