kubernetes worker #590

Supports: xenial bionic

Description

Kubernetes is an open-source platform for deploying, scaling, and operating
application containers across a cluster of hosts. Kubernetes is portable in
that it works with public, private, and hybrid clouds. It is extensible
through a pluggable infrastructure, and self-healing in that it will
automatically restart and reschedule containers on healthy nodes if a node
ever goes away.


Kubernetes Worker

Usage

This charm deploys a container runtime, and additionally stands up the
Kubernetes worker applications: kubelet and kube-proxy.

In order for this charm to be useful, it should be deployed with its companion
charm kubernetes-master and related to an SDN plugin and a container runtime
such as containerd.
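
For illustration, an abbreviated manual deployment might look like the sketch
below. It assumes the charm names used by the Charmed Kubernetes bundle; a
complete cluster additionally requires etcd and a certificate authority such
as easyrsa, and relation endpoint names can vary between charm revisions:

## abbreviated sketch; a complete cluster also needs etcd, easyrsa, etc.
juju deploy kubernetes-master
juju deploy kubernetes-worker
juju deploy containerd
juju deploy flannel

juju relate kubernetes-master:kube-control kubernetes-worker:kube-control
juju relate containerd kubernetes-worker
juju relate flannel kubernetes-worker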

This charm has also been bundled up for your convenience so you can skip the
above steps, and deploy it with a single command:

juju deploy charmed-kubernetes

For more information about Charmed Kubernetes
consult the bundle README.md file.

Scale out

To add additional compute capacity to your Kubernetes workers, you may scale
the application with juju add-unit. New units will automatically join any
related kubernetes-master and enlist themselves as ready once the deployment
is complete.
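
For example, to add two more worker units:

juju add-unit kubernetes-worker -n 2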

Snap Configuration

The kubernetes resources used by this charm are snap packages. When not
specified during deployment, these resources come from the public store. By
default, the snapd daemon will refresh all snaps installed from the store
four (4) times per day. A charm configuration option is provided for operators
to control this refresh frequency.

NOTE: this is a global configuration option and will affect the refresh
time for all snaps installed on a system.

Examples:

## refresh kubernetes-worker snaps every tuesday
juju config kubernetes-worker snapd_refresh="tue"

## refresh snaps at 11pm on the last (5th) friday of the month
juju config kubernetes-worker snapd_refresh="fri5,23:00"

## delay the refresh as long as possible
juju config kubernetes-worker snapd_refresh="max"

## use the system default refresh timer
juju config kubernetes-worker snapd_refresh=""

For more information on the possible values for snapd_refresh, see the
refresh.timer section in the system options documentation.

Operational actions

The kubernetes-worker charm supports the following operational actions:

Pause

Pausing the workload enables administrators to both drain and cordon
a unit for maintenance.

Resume

Resuming the workload will uncordon a paused unit. Workloads will automatically migrate unless otherwise directed via their application declaration.
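
For example, to pause a unit before maintenance and resume it afterwards
(Juju 2.x syntax; the unit name kubernetes-worker/0 is illustrative):

## drain and cordon the node behind unit 0
juju run-action kubernetes-worker/0 pause --wait

## ...perform maintenance...

## uncordon the node
juju run-action kubernetes-worker/0 resume --wait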

Private registry

This charm supports the docker-registry interface, which can automatically
configure docker on the kubernetes-worker to communicate with a deployed
docker-registry charm.

Example usage

Deploy and relate docker-registry to kubernetes-worker, with optional basic auth and TLS enabled:

juju deploy ~containers/docker-registry
juju config docker-registry auth-basic-user=YOUR_USER auth-basic-password=YOUR_PASSWORD

juju relate docker-registry easyrsa
juju relate kubernetes-worker:docker-registry docker-registry:docker-registry

Configure kubernetes-worker to use images pushed to the docker-registry charm:

juju config kubernetes-worker default-backend-image=YOUR_REGISTRY/defaultbackend-amd64:1.5

Learn more about docker-registry capabilities in the docker-registry charm documentation.

Known Limitations

Kubernetes workers will try to spread load across any presented IP addresses, but
a single worker will only ever try to connect to a single IP. If HA is desired, a
solution such as HACluster is recommended. HACluster
can be related to the kubeapi-load-balancer
or directly to the kubernetes-master if no
load balancing is necessary. If you have your own external virtual IP or load
balancer, set the IP address in the configuration parameter named
loadbalancer-ips on the master charm.
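
For example, with a hypothetical virtual IP of 192.168.1.100:

## example only: substitute your own VIP or load balancer address
juju config kubernetes-master loadbalancer-ips="192.168.1.100"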

External access to pods can be provided through a Kubernetes Ingress
Resource.

When using NodePort type networking, there is no automation for exposing the
ports selected by kubernetes or chosen by the user; they need to be opened
manually, and this can be done across an entire worker pool.

If the NodePort selected for your service is 30510, you can open it across all
members of a worker pool named kubernetes-worker like so:

juju run --application kubernetes-worker open-port 30510/tcp

Don't forget to expose the kubernetes-worker application if it's not already
exposed; otherwise the port will be open but the service will remain
unreachable, which can cause confusion.
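
If needed, expose it with:

juju expose kubernetes-worker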

Note: When debugging connection issues with NodePort services, it's important
to first check the kube-proxy service on the worker units. If kube-proxy is
not running, the associated port mapping will not be configured in the
iptables rule chains.
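
One quick check (assuming the snap-based service layout used by this charm,
where kube-proxy runs as the systemd unit snap.kube-proxy.daemon):

juju run --unit kubernetes-worker/0 'systemctl status snap.kube-proxy.daemon'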

If you need to close the NodePort once a workload has been terminated, you can
do so with the matching close-port command:

juju run --application kubernetes-worker close-port 30510

Configuration

allow-privileged
(string) This option is now deprecated and has no effect.
Default: true

channel
(string) Snap channel to install Kubernetes worker services from.
Default: 1.16/stable

default-backend-image
(string) Docker image to use for the default backend. Auto will select an image based on architecture.
Default: auto

ingress
(boolean) Deploy the default http backend and ingress controller to handle ingress requests.
Default: true

ingress-ssl-chain-completion
(boolean) Enable chain completion for TLS certificates used by the nginx ingress controller. Set this to true if you would like the ingress controller to attempt auto-retrieval of intermediate certificates. The default (false) is recommended for all production kubernetes installations, and any environment which does not have outbound Internet access.
Default: false

ingress-ssl-passthrough
(boolean) Enable ssl passthrough on ingress server. This allows passing the ssl connection through to the workloads and not terminating it at the ingress controller.

kubelet-extra-args
(string) Space-separated list of flags and key=value pairs that will be passed as arguments to kubelet. For example, a value like this: runtime-config=batch/v2alpha1=true profiling=true will result in kubelet being run with the following options: --runtime-config=batch/v2alpha1=true --profiling=true Note: As of Kubernetes 1.10.x, many of kubelet's args have been deprecated, and can be set with kubelet-extra-config instead.

kubelet-extra-config
(string) Extra configuration to be passed to kubelet. Any values specified in this config will be merged into a KubeletConfiguration file that is passed to the kubelet service via the --config flag. This can be used to override values provided by the charm. Requires Kubernetes 1.10+. The value for this config must be a YAML mapping that can be safely merged with a KubeletConfiguration file. For example: {evictionHard: {memory.available: 200Mi}} For more information about KubeletConfiguration, see upstream docs: https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ See also the worked example after this table.
Default: {}

labels
(string) Labels can be used to organize and to select subsets of nodes in the cluster. Declare node labels in key=value format, separated by spaces.

nagios_context
(string) Used by the nrpe subordinate charms. A string that will be prepended to the instance name to set the host name in nagios, e.g. juju-myservice-0. If you're running multiple environments with the same services in them, this allows you to differentiate between them.
Default: juju

nagios_servicegroups
(string) A comma-separated list of nagios servicegroups. If left empty, the nagios_context will be used as the servicegroup.

nginx-image
(string) Docker image to use for the nginx ingress controller. Auto will select an image based on architecture.
Default: auto
proxy-extra-args
(string) Space-separated list of flags and key=value pairs that will be passed as arguments to kube-proxy. For example, a value like this: runtime-config=batch/v2alpha1=true profiling=true will result in kube-proxy being run with the following options: --runtime-config=batch/v2alpha1=true --profiling=true
require-manual-upgrade
(boolean) When true, worker services will not be upgraded until the user triggers it manually by running the upgrade action.
Default: true

snap_proxy
(string) DEPRECATED. Use the snap-http-proxy and snap-https-proxy model configuration settings instead. HTTP/HTTPS web proxy for Snappy to use when accessing the snap store.

snap_proxy_url
(string) DEPRECATED. Use the snap-store-proxy model configuration setting instead. The address of a Snap Store Proxy to use for snaps, e.g. http://snap-proxy.example.com

snapd_refresh
(string) How often snapd handles updates for installed snaps. Setting an empty string will check 4x per day. Set to "max" to delay the refresh as long as possible. You may also set a custom string as described in the 'refresh.timer' section here: https://forum.snapcraft.io/t/system-options/87
Default: max

sysctl
(string) YAML-formatted associative array of sysctl values, e.g. '{kernel.pid_max: 4194303}'. Note that kube-proxy handles the conntrack settings; the proper way to alter them is to use the proxy-extra-args config, e.g.:
juju config kubernetes-master proxy-extra-args="conntrack-min=1000000 conntrack-max-per-core=250000"
juju config kubernetes-worker proxy-extra-args="conntrack-min=1000000 conntrack-max-per-core=250000"
The proxy-extra-args conntrack-min and conntrack-max-per-core can be set to 0 to ignore kube-proxy's settings and use the sysctl settings instead. Note the fundamental difference between the setting of conntrack-max-per-core vs nf_conntrack_max.
Default:
{ net.ipv4.conf.all.forwarding: 1, net.ipv4.neigh.default.gc_thresh1: 128, net.ipv4.neigh.default.gc_thresh2: 28672, net.ipv4.neigh.default.gc_thresh3: 32768, net.ipv6.neigh.default.gc_thresh1: 128, net.ipv6.neigh.default.gc_thresh2: 28672, net.ipv6.neigh.default.gc_thresh3: 32768, fs.inotify.max_user_instances: 8192, fs.inotify.max_user_watches: 1048576 }
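
As referenced in the kubelet-extra-config entry above, a value can be set
from the CLI like so (the 200Mi eviction threshold is illustrative only, not
a recommendation):

## example only: 200Mi is an illustrative threshold
juju config kubernetes-worker kubelet-extra-config="{evictionHard: {memory.available: 200Mi}}"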