swift-proxy

Supports: xenial bionic eoan focal trusty groovy

Description

OpenStack Object Storage (code-named Swift) is open source software for creating redundant, scalable object storage using clusters of standardized servers to store petabytes of accessible data. It is not a file system or real-time data storage system, but rather a long-term storage system for a more permanent type of static data that can be retrieved, leveraged, and then updated if necessary. Primary examples of data that best fit this type of storage model are virtual machine images, photo storage, email storage and backup archiving. Having no central "brain" or master point of control provides greater scalability, redundancy and permanence.

This charm deploys the Swift proxy service, providing HTTP-based access onto underlying Swift storage services.


Overview

OpenStack Swift is a highly available, distributed, eventually consistent object/blob store.

The swift-proxy charm deploys Swift's proxy component. The charm's basic function is to manage zone assignment and enforce replica requirements for the storage nodes. It works in tandem with the swift-storage charm, which is used to add storage nodes.

Usage

Configuration

This section covers common configuration options. See file config.yaml for the full list of options, along with their descriptions and default values.

zone-assignment

The zone-assignment option defines the zone assignment method for storage nodes. Values include 'manual' (the default) and 'auto'.

replicas

The replicas option stipulates how many data replicas are needed. This value should be equal to the number of zones. The default value is '3'.

Deployment

Let file swift.yaml contain the deployment configuration:

    swift-proxy:
        zone-assignment: manual
        replicas: 3
    swift-storage-zone1:
        zone: 1
        block-device: /dev/sdb
    swift-storage-zone2:
        zone: 2
        block-device: /dev/sdb
    swift-storage-zone3:
        zone: 3
        block-device: /dev/sdb

Deploy the proxy and storage nodes:

juju deploy --config swift.yaml swift-proxy
juju deploy --config swift.yaml swift-storage swift-storage-zone1
juju deploy --config swift.yaml swift-storage swift-storage-zone2
juju deploy --config swift.yaml swift-storage swift-storage-zone3

Add relations between the proxy node and all storage nodes:

juju add-relation swift-proxy:swift-storage swift-storage-zone1:swift-storage
juju add-relation swift-proxy:swift-storage swift-storage-zone2:swift-storage
juju add-relation swift-proxy:swift-storage swift-storage-zone3:swift-storage

This will result in a three-zone cluster, with each zone consisting of a single storage node, thereby satisfying the replica requirement of three.
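
Deployment progress can be monitored with the juju status command, here filtered to the applications defined above:

juju status swift-proxy swift-storage-zone1 swift-storage-zone2 swift-storage-zone3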

Storage capacity is increased by adding swift-storage units to a zone. For example, to add two storage nodes to zone '3':

juju add-unit -n 2 swift-storage-zone3

Note: When scaling out, ensure the candidate machines are equipped with the block devices currently configured for the associated application.

This charm will not balance the storage ring until there are enough storage zones to meet its minimum replica requirement, in this case three.
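
Ring state can be inspected on the proxy unit with Swift's swift-ring-builder tool. As a sketch, assuming the default builder file location:

juju run --unit swift-proxy/0 'swift-ring-builder /etc/swift/object.builder'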

Appendix 'Swift usage' in the OpenStack Charms Deployment Guide offers in-depth guidance for deploying Swift with charms. In particular, it shows how to set up a multi-region (global) cluster.

Swift as backend for Glance

Swift may be used as a storage backend for the Glance image service. To do so, add a relation between the swift-proxy and glance applications:

juju add-relation swift-proxy:object-store glance:object-store

Telemetry

Starting with OpenStack Mitaka, improved telemetry collection support can be achieved by adding a relation to rabbitmq-server:

juju add-relation swift-proxy rabbitmq-server

Doing the above in a busy Swift deployment can add a significant amount of load to the underlying message bus.

High availability

When more than one unit is deployed with the hacluster application, the charm will bring up an HA active/active cluster.

There are two mutually exclusive high availability options: using virtual IP(s) or DNS. In both cases the hacluster subordinate charm is used to provide the Corosync and Pacemaker backend HA functionality.

See OpenStack high availability in the OpenStack Charms Deployment Guide for details.
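
As a minimal sketch of the virtual IP approach, where the VIP address and the subordinate application name swift-proxy-hacluster are examples:

juju config swift-proxy vip=10.0.0.100
juju deploy hacluster swift-proxy-hacluster
juju add-relation swift-proxy:ha swift-proxy-hacluster:ha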

Network spaces

This charm supports the use of Juju network spaces (Juju v.2.0). This feature optionally allows specific types of the application's network traffic to be bound to subnets that the underlying hardware is connected to.

Note: Spaces must be configured in the backing cloud prior to deployment.

API endpoints can be bound to distinct network spaces supporting the network separation of public, internal, and admin endpoints.

For example, provided that spaces 'public-space', 'internal-space', and 'admin-space' exist, the deploy command above could look like this:

juju deploy --config swift.yaml swift-proxy \
   --bind "public=public-space internal=internal-space admin=admin-space"

Alternatively, configuration can be provided as part of a bundle:

    swift-proxy:
      charm: cs:swift-proxy
      num_units: 1
      bindings:
        public: public-space
        internal: internal-space
        admin: admin-space

Note: Existing swift-proxy units configured with the os-admin-network, os-internal-network, or os-public-network options will continue to honour them. Furthermore, these options override any space bindings, if set.

Actions

This section lists Juju actions supported by the charm. Actions allow specific operations to be performed on a per-unit basis.

  • add-user
  • diskusage
  • dispersion-populate
  • dispersion-report
  • openstack-upgrade
  • pause
  • remove-devices
  • resume
  • set-weight

To display action descriptions, run juju actions swift-proxy.
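
For example, a dispersion report could be generated on a unit like so (Juju 2.x syntax):

juju run-action --wait swift-proxy/0 dispersion-report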

Policy Overrides

This feature allows for policy overrides using the policy.d directory. This is an advanced feature and the policies that the OpenStack service supports should be clearly and unambiguously understood before trying to override, or add to, the default policies that the service uses. The charm also has some policy defaults. They should also be understood before being overridden.

Caution: It is possible to break the system (for tenants and other services) if policies are incorrectly applied to the service.

Policy overrides are YAML files that contain rules that will add to, or override, existing policy rules in the service. The policy.d directory is a place to put the YAML override files. This charm owns the /etc/swift/policy.d directory, and as such, any manual changes to it will be overwritten on charm upgrades.
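
Any number of override files can be packaged into the ZIP resource described next. For example (the file names here are illustrative):

zip overrides.zip override-a.yaml override-b.yaml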

Overrides are provided to the charm using a Juju resource called policyd-override. The resource is a ZIP file. This file, say overrides.zip, is attached to the charm by:

juju attach-resource swift-proxy policyd-override=overrides.zip

The policy override is enabled in the charm using:

juju config swift-proxy use-policyd-override=true

When use-policyd-override is True the status line of the charm will be prefixed with PO:, indicating that policies have been overridden. If the installation of the policy override YAML files failed for any reason then the status line will be prefixed with PO (broken):. The log file for the charm will indicate the reason. No policy override files are installed while PO (broken): is shown. The status line indicates that the overrides are broken, not that the policy for the service has failed; the policy will be the defaults for the charm and service.

Policy overrides on one service may affect the functionality of another service. Therefore, it may be necessary to provide policy overrides for multiple service charms to achieve a consistent set of policies across the OpenStack system. The charms for the other services that may need overrides should be checked to ensure that they support overrides before proceeding.

Bugs

Please report bugs on Launchpad.

For general charm questions refer to the OpenStack Charm Guide.


Configuration

action-managed-upgrade
(boolean) If True enables openstack upgrades for this charm via juju actions. You will still need to set openstack-origin to the new repository but instead of an upgrade running automatically across all units, it will wait for you to execute the openstack-upgrade action for this charm on each unit. If False it will revert to existing behavior of upgrading all units on config change.
auth-type
(string) Auth method to use: tempauth, swauth or keystone. Note that swauth is not supported for OpenStack Train and later.
tempauth
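For example, to use Keystone-based authentication, a sketch assuming a deployed keystone application:

juju config swift-proxy auth-type=keystone
juju add-relation swift-proxy:identity-service keystone:identity-service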
bind-port
(int) TCP port to listen on.
8080
debug
(boolean) Enable debug level logging.
delay-auth-decision
(boolean) Delay authentication to downstream WSGI services.
True
disable-ring-balance
(boolean) This provides similar support to min-hours but without having to modify the builders. If True, any changes to the builders will not result in a ring re-balance and sync until this value is set back to False.
dns-ha
(boolean) Use DNS HA with MAAS 2.0. Note: if this is set, do not set the vip option below.
enable-multi-region
(boolean) Enables the Swift Global Cluster feature, as described at https://docs.openstack.org/swift/latest/overview_global_cluster.html. Should be used in conjunction with the 'read-affinity', 'write-affinity' and 'write-affinity-node-count' options.
ha-bindiface
(string) Default network interface on which the HA cluster will bind for communication with the other members of the HA cluster.
eth0
ha-mcastport
(int) Default multicast port number that will be used to communicate between HA Cluster nodes.
5414
haproxy-client-timeout
(int) Client timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 90000ms is used.
haproxy-connect-timeout
(int) Connect timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 9000ms is used.
haproxy-queue-timeout
(int) Queue timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 9000ms is used.
haproxy-server-timeout
(int) Server timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 90000ms is used.
harden
(string) Apply system hardening. Supports a space-delimited list of modules to run. Supported modules currently include os, ssh, apache and mysql.
keystone-admin-password
(string) Keystone admin password
keystone-admin-tenant-name
(string) Keystone admin tenant name
service
keystone-admin-user
(string) Keystone admin username
keystone-auth-host
(string) Keystone authentication host
keystone-auth-port
(int) Keystone authentication port
35357
keystone-auth-protocol
(string) Keystone authentication protocol
http
log-headers
(boolean) Enable logging of all request headers.
min-hours
(int) This is the Swift ring builder min_part_hours parameter. This setting represents the amount of time in hours that Swift will wait between subsequent ring re-balances in order to avoid large i/o loads as data is re-balanced when new devices are added to the cluster. Once your cluster has been built, you can set this to a higher value e.g. 1 (upstream default). Note that changing this value will result in an attempt to re-balance and if successful, rings will be redistributed.
nagios_context
(string) Used by the nrpe-external-master subordinate charm. A string that will be prepended to instance name to set the host name in nagios. So for instance the hostname would be something like 'juju-myservice-0'. If you are running multiple environments with the same services in them this allows you to differentiate between them.
juju
nagios_servicegroups
(string) A comma-separated list of nagios servicegroups. If left empty, the nagios_context will be used as the servicegroup.
node-timeout
(int) How long the proxy server will wait on responses from the account/container/object servers.
60
openstack-origin
(string) Repository from which to install. May be one of the following: distro (default), ppa:somecustom/ppa, a deb url sources entry, or a supported Ubuntu Cloud Archive pocket such as cloud:<series>-<openstack-release>, cloud:<series>-<openstack-release>/updates, cloud:<series>-<openstack-release>/staging, or cloud:<series>-<openstack-release>/proposed. See https://wiki.ubuntu.com/OpenStack/CloudArchive for info on which cloud archives are available and supported. NOTE: updating this setting to a source that is known to provide a later version of OpenStack will trigger a software upgrade unless action-managed-upgrade is set to True.
distro
operator-roles
(string) Comma-separated list of Swift operator roles.
Member,Admin
os-admin-hostname
(string) The hostname or address of the admin endpoints created for swift-proxy in the keystone identity provider. This value will be used for admin endpoints. For example, an os-admin-hostname set to 'files.admin.example.com' will create the following admin endpoint for swift-proxy: https://files.admin.example.com:80/swift/v1
os-admin-network
(string) The IP address and netmask of the OpenStack Admin network (e.g. 192.168.0.0/24). This network will be used for admin endpoints.
os-internal-hostname
(string) The hostname or address of the internal endpoints created for swift-proxy in the keystone identity provider. This value will be used for internal endpoints. For example, an os-internal-hostname set to 'files.internal.example.com' will create the following internal endpoint for swift-proxy: https://files.internal.example.com:80/swift/v1
os-internal-network
(string) The IP address and netmask of the OpenStack Internal network (e.g. 192.168.0.0/24). This network will be used for internal endpoints.
os-public-hostname
(string) The hostname or address of the public endpoints created for swift-proxy in the keystone identity provider. This value will be used for public endpoints. For example, an os-public-hostname set to 'files.example.com' will create the following public endpoint for swift-proxy: https://files.example.com:80/swift/v1
os-public-network
(string) The IP address and netmask of the OpenStack Public network (e.g. 192.168.0.0/24). This network will be used for public endpoints.
partition-power
(int) This value needs to be set according to the parameters of the cluster being deployed. In order to achieve an optimal distribution of objects within your cluster without over-consuming system resources, it is important that this value be neither too low nor too high. It must also be high enough to account for future expansion of your cluster, since it cannot be changed once the rings have been built. A rough calculation for this value should be no less than log2(total_disks * 100).
8
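As a worked example of the guideline above, a cluster expected to grow to 60 disks gives log2(60 * 100) ≈ 12.6, suggesting a partition power of 13. One way to compute it (a sketch):

python3 -c 'import math; print(math.ceil(math.log2(60 * 100)))'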
prefer-ipv6
(boolean) If True enables IPv6 support. The charm will expect network interfaces to be configured with an IPv6 address. If set to False (default) IPv4 is expected. NOTE: these charms do not currently support the IPv6 privacy extension. In order for this charm to function correctly, the privacy extension must be disabled and a non-temporary address must be configured/available on your network interface.
rabbit-user
(string) Username used to access rabbitmq queue.
swift
rabbit-vhost
(string) Rabbitmq vhost name.
openstack
read-affinity
(string) Which backend servers to prefer on reads. Format is r<N> for region N or r<N>z<M> for region N, zone M. The value after the equals sign is the priority; lower numbers are higher priority. For example, to first read from region 1 zone 1, then region 1 zone 2, then anything in region 2, then everything else: read_affinity = r1z1=100, r1z2=200, r2=300. Default is empty, meaning no preference. NOTE: use only when 'enable-multi-region=True'.
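For example, to apply the read preference described above (a sketch):

juju config swift-proxy enable-multi-region=true read-affinity='r1z1=100, r1z2=200, r2=300'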
recoverable-node-timeout
(int) How long the proxy server will wait for an initial response and to read a chunk of data from the object servers while serving GET / HEAD requests. Timeouts from these requests can be recovered from, so setting this to something lower than node-timeout would provide quicker error recovery while allowing for a longer timeout for non-recoverable requests (PUTs).
30
region
(string) OpenStack region that this swift-proxy supports.
RegionOne
replicas
(int) Minimum replicas for each item stored in the cluster.
3
replicas-account
(int) Minimum replicas for each account stored in the cluster. NOTE: use only when you want to override the global 'replicas' option.
replicas-container
(int) Minimum replicas for each container stored in the cluster. NOTE: use only when you want to override the global 'replicas' option.
ssl_ca
(string) Base64-encoded SSL CA to use with the certificate and key provided - only required if you are providing a privately signed ssl_cert and ssl_key.
ssl_cert
(string) Base64-encoded SSL certificate to install and use for API ports, e.g. juju config swift-proxy ssl_cert="$(cat cert | base64)" ssl_key="$(cat key | base64)". Setting this value (and ssl_key) will enable reverse proxying, point Swift's entry in the Keystone catalog to use https, and override any certificate and key issued by Keystone (if it is configured to do so).
ssl_key
(string) Base64 encoded SSL key to use with certificate specified as ssl_cert.
static-large-object-segments
(int) Enable Static Large Objects (SLO) support. This allows the user to upload several object segments concurrently, after which a manifest is uploaded that describes how to concatenate them, enabling a single large object to be downloaded. This option sets the maximum number of object segments allowed per large object, allowing control over the maximum large object size. The default minimum segment size is 1MB, while the maximum segment size corresponds to the largest object Swift is configured to support (5GB by default). For example, setting this to 1000 would allow up to 1000 5GB object segments to be uploaded, for a maximum large object size of 5TB.
statsd-host
(string) Enable statsd metrics to be sent to the specified host. If this value is empty, statsd logging will be disabled.
statsd-port
(int) Destination port on the provided statsd host to send samples to. Only takes effect if statsd-host is set.
3125
statsd-sample-rate
(float) Sample rate determines what percentage of the metric points a client should send to the server. Only takes effect if statsd-host is set.
1
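For example, to emit metrics to a statsd collector (the host and port here are illustrative):

juju config swift-proxy statsd-host=10.0.0.200 statsd-port=8125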
swauth-admin-key
(string) The secret key to use to authenticate as a swauth admin. Note that swauth is not supported for OpenStack Train and later.
swift-hash
(string) Hash to use across all swift-proxy servers - don't lose this value.
use-policyd-override
(boolean) If True then use the resource file named 'policyd-override' to install override YAML files in the service's policy.d directory. The resource file should be a ZIP file containing at least one yaml file with a .yaml or .yml extension. If False then remove the overrides.
vip
(string) Virtual IP(s) to use to front API services in HA configuration. . If multiple networks are being used, a VIP should be provided for each network, separated by spaces.
workers
(int) Number of TCP workers to launch (0 for the number of system cores).
write-affinity
(string) This setting lets you trade data distribution for throughput. It makes the proxy server prefer local back-end servers for object PUT requests over non-local ones. Note that only object PUT requests are affected by the write_affinity setting; POST, GET, HEAD, DELETE, OPTIONS, and account/container PUT requests are not affected. The format is r<N> for region N. If this is set, then when handling an object PUT request, some number (see the write_affinity_node_count setting) of local backend servers will be tried before any non-local ones. For example, to try to write to regions 1 and 2 before writing to any other nodes: write_affinity = r1, r2. NOTE: use only when 'enable-multi-region=True'.
write-affinity-node-count
(string) This setting is only useful in conjunction with write_affinity; it governs how many local object servers will be tried before falling back to non-local ones. For example, assuming 3 replicas and 'write-affinity: r1', then 'write-affinity-node-count: 2 * replicas' will make object PUTs try storing the object's replicas on up to 6 disks. NOTE: use only when 'enable-multi-region=True'.
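For example, combining the affinity options above (a sketch):

juju config swift-proxy enable-multi-region=true write-affinity='r1, r2' write-affinity-node-count='2 * replicas'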
zone-assignment
(string) Which policy to use when assigning new storage nodes to zones. manual - Allow swift-storage services to request zone membership. auto - Assign new swift-storage units to zones automatically. The configured replica minimum must be met by an equal number of storage zones before the storage ring will be initially balanced. Deployment requirements differ based on the zone-assignment policy configured; see this charm's README for details.
manual