Installing Tiller On Your Kubernetes Cluster

Now we need to install Tiller on our cluster.

We have done almost all the hard work. What remains is to set up a shell script to achieve the following:

  • Create the ServiceAccount in the kube-system namespace.
  • Create the ClusterRoleBinding to give the tiller account access to the cluster.
  • Finally, use helm to install the tiller service.

Three example commands are provided in the docs:

kubectl -n kube-system create serviceaccount tiller

kubectl create clusterrolebinding tiller \
  --clusterrole cluster-admin \
  --serviceaccount=kube-system:tiller

helm init --service-account tiller

From the previous video, we have kubectl available through Docker. We will need a similar solution for helm.

This is a fairly straightforward one. We will make use of linkyard/docker-helm, adding a short new entry to the Makefile:

helm:
    @docker run --rm \
        -v $(CURDIR)/kube_config_rancher-cluster.yml:/crv-helm/kube_config_rancher-cluster.yml \
        linkyard/docker-helm \
            --kubeconfig=/crv-helm/kube_config_rancher-cluster.yml \
            $(cmd)

Much like for kubectl, we need to ensure we provide the kube_config_rancher-cluster.yml as the active --kubeconfig, and that means mounting the file into the resulting container.

Let's translate the three commands into a shell script.

touch bin/
chmod +x bin/

Into which I will add:
make kubectl cmd="-n kube-system create serviceaccount tiller"

make kubectl cmd="create clusterrolebinding tiller \
  --clusterrole cluster-admin \
  --serviceaccount=kube-system:tiller"

make helm cmd="init --service-account tiller"

This script isn't idempotent. You can run it multiple times, but each command will error if it has already completed.

Ahh well, it gets us further forwards.
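If we wanted the script to be safely re-runnable, one approach (a sketch only; run_idempotent is a made-up helper name, not part of the real setup) is to treat "already exists" errors from kubectl as success:

```shell
#!/bin/sh
# Sketch: treat "already exists" failures as success so the bootstrap
# script can be re-run. run_idempotent is a made-up name for illustration.
run_idempotent() {
    out=$("$@" 2>&1) && { printf '%s\n' "$out"; return 0; }
    case "$out" in
        *AlreadyExists*|*"already exists"*)
            return 0 ;;   # resource was created on an earlier run: fine
        *)
            printf '%s\n' "$out" >&2; return 1 ;;   # a real error
    esac
}

# simulate a kubectl "already exists" failure to show the behaviour
run_idempotent sh -c \
    'echo "Error from server (AlreadyExists): serviceaccount tiller already exists" >&2; exit 1' \
    && echo "treated as success"
# prints: treated as success
```

Each line of the script would then become e.g. run_idempotent make kubectl cmd="-n kube-system create serviceaccount tiller".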

We can check up on the deployment of tiller:

make kubectl cmd="-n kube-system rollout status deploy/tiller-deploy"

deployment "tiller-deploy" successfully rolled out

A Little Gotcha

When we run make helm cmd="init --service-account tiller", the install log tells us a bunch of stuff. It's all important.

For us, specifically, we need to pay attention to the fact that some new files have been created.

Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: 
Adding local repo with URL: 
$HELM_HOME has been configured at /root/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
make[1]: Leaving directory '/home/chris/Development/docker/rancher-2'

The important bit for us is the Creating ... lines.

We need those files for every subsequent time we run helm.

It's important that we retain these files after the container is removed. Because we run docker with --rm, the container is cleaned up immediately after the command has executed, taking these files with it.

We need to explicitly tell Docker we want to keep these around:

helm:
    @docker run --rm \
        -v $(CURDIR)/kube_config_rancher-cluster.yml:/crv-helm/kube_config_rancher-cluster.yml \
        -v $(CURDIR)/helm:/root/.helm \
        linkyard/docker-helm \
            --kubeconfig=/crv-helm/kube_config_rancher-cluster.yml \
            $(cmd)

Which means, after re-running the helm init command, the files persist on the host:

tree helm

├── cache
│   └── archive
├── plugins
├── repository
│   ├── cache
│   │   ├── local-index.yaml -> /root/.helm/repository/local/index.yaml
│   │   └── stable-index.yaml
│   ├── local
│   │   └── index.yaml
│   └── repositories.yaml
└── starters

7 directories, 4 files
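As a quick sanity check that the mount is doing its job, we can assert that the files helm init creates are present on the host. This is a sketch only; check_helm_home is a made-up helper name, and it checks just a few of the paths from the tree output above:

```shell
#!/bin/sh
# Sketch: verify a helm home directory contains the files that
# `helm init` creates. check_helm_home is a made-up helper name.
check_helm_home() {
    [ -f "$1/repository/repositories.yaml" ] &&
    [ -d "$1/repository/cache" ] &&
    [ -d "$1/plugins" ]
}

# after running the helm init target, from the directory containing
# the mounted helm/ folder:
#   check_helm_home helm && echo "helm home persisted"
```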


Onwards, to installing Rancher!