kubernetes #10

Supports: xenial


Kubernetes is an open-source platform for deploying, scaling, and operating application containers across a cluster of hosts. Kubernetes is portable in that it works with public, private, and hybrid clouds. It is extensible through a pluggable infrastructure, and self-healing in that it will automatically restart containers and place them on healthy nodes if a node ever goes away.


Kubernetes is an open source system for managing application containers across multiple hosts. This version of Kubernetes uses Docker to package, instantiate and run containerized applications.

This charm is an encapsulation of the Running Kubernetes locally via Docker document. The released hyperkube image (gcr.io/google_containers/hyperkube) is currently pulled from a Google-owned container repository. For this charm to work, it will need access to that repository so it can docker pull the images.

This charm was built from other charm layers using the reactive framework, with layer:docker as the base layer. For more information, please read Getting Started Developing charms.


The kubernetes charm requires a relation to a distributed key-value store (etcd), which Kubernetes uses for persistent storage of all of its REST API objects.

juju deploy etcd
juju deploy kubernetes
juju add-relation kubernetes etcd


For your convenience, this charm supports some configuration options to set up a Kubernetes cluster that works in your environment:

version: Set the version of the Kubernetes containers to deploy. The version string must be in the format "v#.#.#", where the numbers match the release labels of the kubernetes GitHub project. Changing the version causes all of the Kubernetes containers to be restarted.

cidr: Set the IP range for the Kubernetes cluster.

dns_domain: Set the DNS domain for the Kubernetes cluster.
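These options can be supplied at deploy time or changed later. A sketch, assuming a Juju 2.x client (on Juju 1.x, `juju set` replaces `juju config`) and illustrative option values:

```shell
# Deploy with explicit options (values shown are examples, not required defaults)
juju deploy kubernetes --config version=v1.2.3 --config dns_domain=cluster.local

# Change an option on a running deployment; changing version restarts the
# Kubernetes containers, as described above
juju config kubernetes version=v1.2.4
```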


The kubernetes charm is built to handle multiple storage devices if the cloud provider works with Juju storage.

The 16.04 (xenial) release introduced ZFS to Ubuntu. The xenial charm can use ZFS with a raidz pool. A raidz pool distributes parity along with the data (similar to a raid5 array) and can suffer the loss of one drive while still retaining data. A raidz pool requires a minimum of 3 disks, but will accept more if they are provided.

You can add storage to the kubernetes charm in increments of 3 or greater:

juju add-storage kubernetes/0 disk-pool=ebs,3,1G

Note: Due to a limitation of raidz, you cannot add individual disks to an existing pool. Should you need to expand the storage of the raidz pool, each additional add-storage command must request the same number of disks as the original command. The charm will then have two raidz pools striped together, each of which can tolerate the loss of one disk.

The storage code handles the addition of devices to the charm; when it receives three disks, it creates a raidz pool that is mounted at the /srv/kubernetes directory by default. If you need the storage in another location, you must change the mount-point value in layer.yaml before the charm is deployed.

To avoid data loss you must attach the storage before making the connection to the etcd cluster.
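Putting the storage guidance together, a sketch of a deployment that attaches storage before relating to etcd (the `ebs` pool name and 1G disk size are illustrative and depend on your cloud provider):

```shell
juju deploy etcd
juju deploy kubernetes

# Attach the disks first so the raidz pool is created before any cluster data exists
juju add-storage kubernetes/0 disk-pool=ebs,3,1G

# Only after storage is attached, connect kubernetes to etcd
juju add-relation kubernetes etcd
```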

Operational Actions

Microbot - Deploys mini containers that serve static web pages and identify the container ID serving the request. Useful for quickly deploying a faux workload for visualizations, or for testing a reverse proxy that does not depend on session affinity.

Pause - Cordons the unit by marking it unschedulable. It also drains the workloads from the unit, making it feasible to perform maintenance tasks without disrupting the end-user experience.

Resume - Uncordons the unit. No workload rebalancing is done at this time; the Kubernetes scheduler will begin filling the unit back up with workloads depending on unit pressure, which is based on resource allocation/utilization.
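Assuming the action names are the lowercase forms of the names above (check `juju actions kubernetes` on your deployment to confirm), the actions can be invoked with run-action:

```shell
# Cordon and drain the unit before maintenance
juju run-action kubernetes/0 pause

# ... perform maintenance on the unit ...

# Uncordon the unit so the scheduler can place workloads on it again
juju run-action kubernetes/0 resume

# Deploy the microbot demo workload
juju run-action kubernetes/0 microbot
```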

State Events

While this charm is meant to be a top layer, it can be used to build other solutions. This charm sets or removes reactive-framework states that other layers can react to. The states that other layers may be interested in are as follows:

kubelet.available - The hyperkube container has been run with the kubelet service and configuration that started the apiserver, controller-manager and scheduler containers.

proxy.available - The hyperkube container has been run with the proxy service and configuration that handles Kubernetes networking.

kubectl.package.created - Indicates the availability of the kubectl application along with the configuration needed to contact the cluster securely. You will need to download the /home/ubuntu/kubectl_package.tar.gz from the kubernetes leader unit to your machine so you can control the cluster.
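For example, assuming kubernetes/0 is the leader unit, the package can be fetched and unpacked like this (the exact contents of the archive depend on the charm release):

```shell
# Copy the kubectl package from the leader unit to the local machine
juju scp kubernetes/0:/home/ubuntu/kubectl_package.tar.gz .

# Unpack the kubectl binary and the cluster configuration
tar xzf kubectl_package.tar.gz
```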

kubedns.available - Indicates when the Domain Name System (DNS) for the cluster is operational.

Kubernetes information




(string) Network CIDR to assign to Kubernetes service groups. This must not overlap with any IP ranges assigned to nodes for pods.
(string) The domain name to use for the Kubernetes cluster by the dns service.
(boolean) Enable GRUB cgroup overrides cgroup_enable=memory swapaccount=1. WARNING: changing this option will reboot the host; use with caution on production services.
(string) The image to pull for the 'etcdctl' command. Recommended to mirror the default on a private registry if connectivity is a concern.
(string) Space separated list of extra deb packages to install.
(string) The image to pull for running 'flannel'. Recommended to mirror the default on a private registry if connectivity is a concern.
(string) URL to use for HTTP_PROXY to be used by Docker. Only useful in closed environments where a proxy is the only option for routing to the registry to pull images.
(string) URL to use for HTTPS_PROXY to be used by Docker. Only useful in closed environments where a proxy is the only option for routing to the registry to pull images.
(string) The interface to bind flannel overlay networking. The default value is the result of running the following command: `route | grep default | head -n 1 | awk {'print $8'}`.
(string) List of signing keys for install_sources package sources, per charmhelpers standard format (a yaml list of strings encoded as a string). The keys should be the full ASCII armoured GPG public keys. While GPG key ids are also supported and looked up on a keyserver, operators should be aware that this mechanism is insecure. null can be used if a standard package signing key is used that will already be installed on the machine, and for PPA sources where the package signing key is securely retrieved from Launchpad.
(string) List of extra apt sources, per charm-helpers standard format (a yaml list of strings encoded as a string). Each source may be either a line that can be added directly to sources.list(5), or in the form ppa:<user>/<ppa-name> for adding Personal Package Archives, or a distribution component to enable.
(string) The status of service-affecting packages will be set to this value in the dpkg database. Valid values are "install" and "hold".
(string) The root certificate to use for this grouping of charms. If empty (the default), the leader will generate a self-signed Certificate Authority (CA). All certificates will be based on the root certificate.
(string) The version of Kubernetes to use in this charm. The version is inserted in the configuration files that specify the hyperkube container to use when starting a Kubernetes cluster. Changing this value will restart the Kubernetes cluster.