This is a purpose-oriented charm that provides a caching proxy mirror of the
Ubuntu archive on cloud platforms.
This charm provides a partial caching proxy mirror of the
Ubuntu Software Repository. This is intended for deployment in cloud
environments to provide a cloud-local repository. Metadata will be updated
every two hours.
This is a hybrid mirror / cache. Repository metadata (the data under the
ubuntu/dists/ directory) is copied from an upstream Ubuntu mirror and
checked to ensure that it is consistent. Requests for package files in
/ubuntu/pool are forwarded internally to squid-deb-proxy, which keeps a
local cache of .deb files as they are requested from the upstream mirror.
This approach minimizes load on the upstream archive server, improves
performance, and requires less disk space than a static archive mirror.
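The split described above can be sketched as a simple path check. This is
illustrative only; the real routing is done by the charm's Apache and
squid-deb-proxy configuration, and the function and file names below are
hypothetical:

```shell
#!/bin/bash
# Sketch of the hybrid routing described above (not the charm's actual code):
# requests under /ubuntu/pool go to squid-deb-proxy; everything else,
# notably /ubuntu/dists metadata, is served from the local rsync copy.
route() {
  case "$1" in
    /ubuntu/pool/*) echo "squid-deb-proxy" ;;  # .deb files, cached on demand
    *)              echo "local-mirror"    ;;  # metadata, synced every 2 hours
  esac
}

route /ubuntu/pool/main/h/hello/hello_2.10-1_amd64.deb   # -> squid-deb-proxy
route /ubuntu/dists/trusty/Release                       # -> local-mirror
```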
Deploy the charm with these example commands:
    # Create cache units
    juju deploy -n 3 ubuntu-repository-cache --constraints "mem=8G root-disk=80G"
    juju set-constraints ubuntu-repository-cache mem=8G root-disk=80G

    # Provide an haproxy front-end for the service
    juju deploy -n 2 haproxy
    juju add-relation haproxy:reverseproxy ubuntu-repository-cache:website

    # Expose haproxy on the public network
    juju expose haproxy
The ubuntu-repository-cache charm's disk-size constraint is optional; the
intention is to allocate sufficient space for the metadata mirror
(approximately 4 GB) plus as much space as can be afforded for squid to
cache package files.
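As a rough worked example, the arithmetic behind the earlier root-disk=80G
constraint (the 4 GB metadata figure is the approximation given above):

```shell
# Rough cache sizing: root disk minus the metadata mirror leaves squid cache space
ROOT_DISK_GB=80   # from the root-disk=80G constraint in the deploy example
METADATA_GB=4     # approximate size of the metadata mirror
CACHE_GB=$((ROOT_DISK_GB - METADATA_GB))
echo "$CACHE_GB"  # GB left for the squid package cache -> 76
```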
Ideally this charm should be deployed with this disk space allocated
as fast storage (which may require ephemeral storage depending on the
provider); use of the manual provider may be necessary to achieve this.
Alternatively, provider-specific constraints can be used to select an
instance type which provides ephemeral storage. The ephemeral storage
device(s) would then need to be specified in the 'ephemeral-devices' charm
configuration option.
Once the initial mirror sync has completed, you can browse to
http://ip-address to view the repository. Alternatively, you may add
a proxy in front of this service.
Disabling OS updates/upgrades
When deploying new units of this charm, Juju may update the operating
system. This can fail if the charm is being deployed as the service
providing the package repository configured in the cloud image.
If that is the case, the Juju environment configuration should be edited
to disable OS update/upgrade. The charm will change the apt source to
point at the archive specified in the 'sync-host' charm configuration
and will perform an update during charm installation.
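For example, with Juju 1.x the relevant environment settings could look like
this in environments.yaml (the environment name is a placeholder, and the
option names assume a Juju release that supports them; verify against your
Juju version's documentation):

```yaml
# environments.yaml fragment: skip OS refresh/upgrade on new machines so the
# charm itself can point apt at sync-host and update during installation
my-cloud:
  type: ec2
  enable-os-refresh-update: false
  enable-os-upgrade: false
```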
This charm provides the local-monitors relation to enable more detailed monitoring of the metadata service with Nagios through NRPE.
    # Example
    juju deploy nagios
    juju deploy ubuntu-repository-cache
    juju deploy nrpe
    juju add-relation nagios:monitors nrpe:monitors
    juju add-relation ubuntu-repository-cache:local-monitors nrpe:local-monitors
    juju expose nagios
Scale out usage
When additional units are added, the content they serve will be
synchronized from the lead unit. As the service is scaled, placing
haproxy in front of the mirror may be desirable to distribute load.
Cache charm performance is sensitive to network throughput, system memory, and disk space.
Suggested minimum hardware per cache unit:
- 2 processors
- 24 GB RAM
- 200 GB storage (preferably fast, ephemeral storage)
Juju deployment constraints can be used to match these needs if the manual provider is not used. If the cloud provider supports the use of constraints to specify exact instance types, they should be used for consistent, repeatable deployment; an example is shown below. Exact instance types can be specified for EC2, which gives known network performance characteristics (networking cannot be specified by generic constraints).
The basic pattern for the repository cache deployment puts multiple units of the
ubuntu-repository-cache charm behind haproxy. The relationship between
haproxy and ubuntu-repository-cache ensures that only active cache units are contacted by clients.
Testing configuration with explicit AWS instance types:
Cache units (x2) -- c3.8xlarge instance type:
- High network performance
- 60 GB RAM
- 320 GB SSD ephemeral storage
HAProxy unit (x1) -- m3.xlarge instance type:
- High network performance
- 15 GB RAM
Example - Deployment with constraints
This example uses the ephemeral-devices configuration option of the ubuntu-repository-cache charm to provide access to a large, fast storage device. The value in this example is particular to the device name of ephemeral storage on an EC2 c3.8xlarge instance.
    # Create a configuration file for the service
    $ cat > urc-config.yaml << EOF
    ubuntu-repository-cache:
      sync-host: archive.ubuntu.com
      sync-on-start: false
      ephemeral-devices: /dev/xvdb
    EOF

    # Set instance type constraints for each charm
    $ juju set-constraints --service haproxy instance-type=m3.xlarge
    $ juju set-constraints --service ubuntu-repository-cache instance-type=c3.8xlarge

    # Deploy the charms
    $ juju deploy --num-units 2 ubuntu-repository-cache --config=urc-config.yaml
    $ juju deploy haproxy

    # Add relationship between haproxy and the cache
    $ juju add-relation haproxy:reverseproxy ubuntu-repository-cache:website

    # Expose haproxy on the public network
    $ juju expose haproxy
Example - Manual deployment
If the cloud has no Juju provider, or sufficient control of constraints is not possible, it may be necessary to use Juju's manual provider. In this case, instances would be provisioned with Ubuntu per the manual provisioning documentation, and deployment would then target specific machines.
- Machine #1 in juju is sized for haproxy
- Machine #2 - #4 are sized for ubuntu-repository-cache with ephemeral storage device /dev/sdb
    # Create a configuration file for the service
    $ cat > urc-config.yaml << EOF
    ubuntu-repository-cache:
      sync-host: archive.ubuntu.com
      sync-on-start: false
      ephemeral-devices: /dev/sdb
    EOF

    # Deploy the charms
    $ juju deploy --to 1 haproxy
    $ juju deploy --to 2 ubuntu-repository-cache --config=urc-config.yaml
    $ juju add-unit --to 3 ubuntu-repository-cache
    $ juju add-unit --to 4 ubuntu-repository-cache

    # Add relationship between haproxy and the cache
    $ juju add-relation haproxy:reverseproxy ubuntu-repository-cache:website

    # Expose haproxy on the public network
    $ juju expose haproxy
Known Limitations and Issues
- Find existing bugs or report new ones in Launchpad at https://bugs.launchpad.net/ubuntu-repository-cache
sync-host - The host name or IP address of the archive which will
be used to keep this mirror updated. The mirror must support access via 'rsync'.
sync-on-start - Pull data from the sync-host during initial charm
deployment. This should be true if deploying a single unit and false
if deploying multiple units, to reduce initial startup time. When multiple
units are deployed, they will choose a leader and pull data from the
lead unit.
ephemeral-devices - A comma-separated list of storage devices to use for
metadata and squid cache storage. Leave this empty if only
the root disk will be used. The device(s) will be formatted
and mounted during charm installation. This option must be set
at initial charm deployment; changes after deployment will not
affect running units, only newly added units. An example would be
'/dev/xvdb,/dev/xvdc' to specify two ephemeral disks for cache
storage.
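Illustratively, a comma-separated value splits like this. This is a sketch of
the option's format only, not the charm's actual installation code, and the
device names are the examples from above:

```shell
#!/bin/bash
# Sketch: splitting the comma-separated ephemeral-devices value.
# The charm formats and mounts each listed device at install time.
EPHEMERAL_DEVICES="/dev/xvdb,/dev/xvdc"
IFS=',' read -r -a devices <<< "$EPHEMERAL_DEVICES"
for dev in "${devices[@]}"; do
  echo "format and mount: $dev"   # placeholder for mkfs/mount steps
done
```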
apache2_* - Apache2 configuration options for tuning of security and
performance.
Questions and comments can be posted to the ubuntu-cloud mailing list; see
https://lists.ubuntu.com/mailman/listinfo/ubuntu-cloud to subscribe to this
list.
Bugs can be viewed or reported at https://bugs.launchpad.net/ubuntu-repository-cache
- (int) Maximum number of requests a server process serves
- (int) Maximum number of simultaneous client connections
- (int) Maximum number of worker threads which are kept spare
- (int) Minimum number of worker threads which are kept spare
- (int) Upper limit on configurable number of processes
- (int) Initial number of server processes to start
- (int) Sets the upper limit on the configurable number of threads per child process
- (int) Constant number of worker threads in each server process
- (string) Select the worker or prefork multi-processing module
- (string) Security setting. Set to one of On Off EMail
- (string) Security setting. Set to one of Full OS Minimal Minor Major Prod
- (string) Security setting. Set to one of On Off extended
- (string) Provide a comma-separated list of storage devices to use for metadata and squid cache storage. Leave this empty if only the root disk will be used. The device(s) will be formatted and mounted during charm installation. This option must be set at initial charm deployment; changes after deployment will not affect running units, only newly added units. An example would be '/dev/xvdb,/dev/xvdc' to specify two ephemeral disks for cache storage.
- (int) The number of days we want to retain logs for
- (boolean) Use a daily extension like YYYYMMDD instead of simply adding a number
- (string) daily, weekly, monthly, or yearly?
- (string) A space-separated list of Ubuntu series whose metadata should be mirrored. An empty or blank string will mirror everything.
- (string) A string that will be prepended to the instance name to set the host name in Nagios. For instance, the host name would be something like: juju-myservice-0. If you are running multiple environments with the same services in them, this allows you to differentiate between them.
- (string) A comma-separated list of nagios servicegroups. If left empty, the nagios_context will be used as the servicegroup
- (boolean) Enable SNMP for Squid (bound on localhost:3401, community "public")
- (int) Age (in seconds) of CRITICAL level in Nagios check for cache sync.
- (int) Age (in seconds) of WARNING level in Nagios check for cache sync.
- (string) The DNS or IP of the site you want to mirror. Default is archive.ubuntu.com.
- (boolean) Pull data from the sync-host during initial charm deployment. This should be true if deploying a single unit and false if deploying multiple units, to reduce initial startup time.