Kubernetes is an open-source platform for deploying, scaling, and operating
application containers across a cluster of hosts. Kubernetes is portable
in that it works with public, private, and hybrid clouds. It is extensible
through a pluggable infrastructure, and self-healing in that it will
automatically restart and reschedule containers on healthy nodes if a node
ever goes away.
This charm is an encapsulation of the
Running Kubernetes locally via Docker
document. The released hyperkube image
is currently pulled from a Google-owned container
repository. For this charm to
work it will need access to that repository to
docker pull the images.
This charm was built from other charm layers using the reactive framework.
The layer:docker layer is the base layer. For more information please read
Getting Started Developing charms.
The kubernetes charms require a relation to a distributed key-value store
(etcd), which Kubernetes uses for persistent storage of all of its REST API objects.
juju deploy trusty/etcd
juju deploy kubernetes-master
juju deploy kubernetes-node
juju add-relation kubernetes-master etcd
juju add-relation kubernetes-node etcd
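Once the relations are in place, the worker side of the cluster can be grown with standard Juju commands. A minimal sketch, using the application names from the deploy commands above:

```shell
# Add two more worker nodes to the cluster.
juju add-unit kubernetes-node -n 2

# Watch the deployment converge.
juju status
```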
For your convenience this charm supports some configuration options to set up
a Kubernetes cluster that works in your environment:
Set the version of the Kubernetes containers to deploy. The version string must
be in the format "v#.#.#", where the numbers match the
release labels of the kubernetes GitHub project.
Changing the version causes all of the Kubernetes containers to be restarted.
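For example, the version could be changed at runtime with Juju's configuration command. The option name `version` and the application name are assumptions here; list the real names with `juju config <application>` or check the charm's config.yaml:

```shell
# Hypothetical example: switch the cluster to a specific release.
# "version" is an assumed option name -- confirm before running.
juju config kubernetes version=v1.2.0
```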
Set the IP range for the Kubernetes cluster, e.g.: 10.1.0.0/16
The domain name for the Kubernetes cluster, used by the skydns service.
Kubernetes DNS is handled via the kubedns addon. More information about
this service can be obtained in the
Kubernetes DNS admin guide
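These options can be set the same way. A hedged sketch, assuming the options are named `cidr` and `dns_domain` (verify against the charm's config.yaml; `cluster.local` is only an illustrative domain):

```shell
# Assumed option names -- confirm with `juju config kubernetes`.
juju config kubernetes cidr=10.1.0.0/16
juju config kubernetes dns_domain=cluster.local
```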
The kubernetes charm is built to handle multiple storage devices if the cloud
provider works with Juju storage.
The 16.04 (xenial) release introduced ZFS
to Ubuntu. The xenial charm can use ZFS with a raidz pool. A raidz pool
distributes parity along with the data (similar to a raid5 array) and can suffer
the loss of one drive while still retaining data. The raidz pool requires a
minimum of 3 disks, but will accept more if they are provided.
You can add storage to the kubernetes charm in increments of 3 or greater:
juju add-storage kubernetes/0 disk-pool=ebs,3,1G
Note: Due to a limitation of raidz you cannot add individual disks to an
existing pool. Should you need to expand the storage of the raidz pool, the
additional add-storage commands must use the same number of disks as the original
command. At that point the charm will have two raidz pools added together, both
of which could handle the loss of one disk each.
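For instance, if the pool was created from 3 disks, an expansion must also add 3 disks at once. A sketch following the add-storage command above:

```shell
# Initial pool: 3 disks of 1G each.
juju add-storage kubernetes/0 disk-pool=ebs,3,1G

# A later expansion must match the original disk count (3),
# creating a second raidz pool alongside the first.
juju add-storage kubernetes/0 disk-pool=ebs,3,1G
```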
The storage code handles the addition of devices to the charm; when it
receives three disks it creates a raidz pool that is mounted at the /srv/kubernetes
directory by default. If you need the storage in another location you must
change the mount-point value in layer.yaml before the charm is deployed.
To avoid data loss you must attach the storage before making the connection to
the etcd cluster.
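The safe ordering can be sketched as follows, combining the commands shown earlier (deploy, attach storage, and only then relate to etcd); unit and application names follow the earlier examples, so adjust them to your deployment:

```shell
# Deploy the charms (no etcd relation yet).
juju deploy trusty/etcd
juju deploy kubernetes-master

# Attach the raidz storage first, as in the add-storage example above.
juju add-storage kubernetes/0 disk-pool=ebs,3,1G

# Only then relate to etcd, so cluster data lands on the raidz pool.
juju add-relation kubernetes-master etcd
```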
While this charm is meant to be a top layer, it can be used to build other
solutions. This charm sets or removes states in the reactive framework to
which other layers can react appropriately. The states that other layers would
be interested in are as follows:
kubelet.available - The hyperkube container has been run with the kubelet
service and configuration that started the apiserver, controller-manager and
scheduler.
proxy.available - The hyperkube container has been run with the proxy service and configuration that handles Kubernetes networking.
kubectl.package.created - Indicates the availability of the kubectl
application along with the configuration needed to contact the cluster
securely. You will need to download the kubectl package
from the kubernetes leader unit to your machine so you can control the cluster.
skydns.available - Indicates when the Domain Name System (DNS) for the cluster is operational.
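As an example of acting on the kubectl.package.created state from the operator's side, the package can be copied off the unit with juju scp. The archive name below is an assumption, so inspect the unit's home directory for the actual file first:

```shell
# List the unit's home directory to find the real package name.
juju run --unit kubernetes/0 'ls /home/ubuntu'

# Copy the kubectl package (assumed file name) to the local machine.
juju scp kubernetes/0:kubectl_package.tar.gz .
```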
- Charm Author: Matthew Bruzek <Matthew.Bruzek@canonical.com>
- Charm Contributor: Charles Butler <Charles.Butler@canonical.com>