Installing Tiller On Your Kubernetes Cluster
Now we need to install Tiller on our cluster.
We have done almost all the hard work. What remains is to set up a shell script to achieve the following:
- Create the ServiceAccount in the `kube-system` namespace.
- Create the `ClusterRoleBinding` to give the `tiller` account access to the cluster.
- Finally, use `helm` to install the `tiller` service.
Three example commands are provided in the docs:
```sh
kubectl -n kube-system create serviceaccount tiller

kubectl create clusterrolebinding tiller \
  --clusterrole cluster-admin \
  --serviceaccount=kube-system:tiller

helm init --service-account tiller
```
From the previous video, we have `kubectl` available through Docker. We will need a similar solution for `helm`.

This is a fairly straightforward one. We will make use of `linkyard/docker-helm`, adding a short new entry to the `Makefile`:
```makefile
helm:
	@docker run --rm \
		-v $(CURDIR)/kube_config_rancher-cluster.yml:/crv-helm/kube_config_rancher-cluster.yml \
		linkyard/docker-helm \
		--kubeconfig=/crv-helm/kube_config_rancher-cluster.yml \
		$(cmd)
```
Much like for `kubectl`, we need to ensure we provide the `kube_config_rancher-cluster.yml` as the active `--kubeconfig`, and that means mounting the file into the resulting container.
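For reference, the `kubectl` target from the previous video follows the same shape. This is only a sketch: the `bitnami/kubectl` image and the `/crv` mount path are assumptions, and your existing target may well differ:

```makefile
# Sketch only: the image name and mount path here are assumptions,
# not taken from the earlier video.
kubectl:
	@docker run --rm \
		-v $(CURDIR)/kube_config_rancher-cluster.yml:/crv/kube_config_rancher-cluster.yml \
		bitnami/kubectl \
		--kubeconfig=/crv/kube_config_rancher-cluster.yml \
		$(cmd)
```

The point is the shared pattern: mount the kube config read from the host, pass it as `--kubeconfig`, and forward whatever is in `$(cmd)` to the binary inside the container.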
Let's translate the three commands into a shell script.
```sh
touch bin/install_tiller_on_the_cluster.sh
chmod +x bin/install_tiller_on_the_cluster.sh
```
Into which I will add:
```sh
#!/bin/sh
make kubectl cmd="-n kube-system create serviceaccount tiller"

make kubectl cmd="create clusterrolebinding tiller \
  --clusterrole cluster-admin \
  --serviceaccount=kube-system:tiller"

make helm cmd="init --service-account tiller"
```
This script isn't idempotent. You can run it multiple times, but each command will error if it has already completed.

Ah well, it gets us further forwards.
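If you'd prefer a script that can be safely re-run, one approach is to tolerate the "already exists" errors on the two `kubectl` commands, and let `helm init --upgrade` (the flag the install log itself suggests) handle re-runs. A sketch:

```sh
#!/bin/sh
# Sketch of a re-runnable variant: "|| true" swallows the
# "already exists" errors from kubectl on subsequent runs.
make kubectl cmd="-n kube-system create serviceaccount tiller" || true

make kubectl cmd="create clusterrolebinding tiller \
  --clusterrole cluster-admin \
  --serviceaccount=kube-system:tiller" || true

# helm init can be repeated; --upgrade brings Tiller up to the
# client's version if it is already installed.
make helm cmd="init --service-account tiller --upgrade"
```

The trade-off is that `|| true` also hides genuine failures, so for anything beyond a local setup script you'd want to inspect the error output rather than discard it.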
We can check up on the deployment of `tiller`:
```sh
make kubectl cmd="-n kube-system rollout status deploy/tiller-deploy"
deployment "tiller-deploy" successfully rolled out
```
A Little Gotcha
When we run `make helm cmd="init --service-account tiller"`, the install log tells us a bunch of stuff, and it's all important. For us, specifically, we need to pay attention to the fact that some new files have been created:
```
name: tiller
type: ClusterIP
status:
loadBalancer: {}
...
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
make[1]: Leaving directory '/home/chris/Development/docker/rancher-2'
```
The important bit for us is the `Creating ...` lines.

We need those files for every subsequent time we run `helm`, so it's important that we retain them after the container is removed. As things stand, because we pass `--rm`, the container is cleaned up immediately after the command has executed, taking these files with it.
We need to explicitly tell Docker we want to keep these around:
```makefile
helm:
	@docker run --rm \
		-v $(CURDIR)/kube_config_rancher-cluster.yml:/crv-helm/kube_config_rancher-cluster.yml \
		-v $(CURDIR)/helm:/root/.helm \
		linkyard/docker-helm \
		--kubeconfig=/crv-helm/kube_config_rancher-cluster.yml \
		$(cmd)
```
Which means after running:
```sh
tree helm
helm
├── cache
│   └── archive
├── plugins
├── repository
│   ├── cache
│   │   ├── local-index.yaml -> /root/.helm/repository/local/index.yaml
│   │   └── stable-index.yaml
│   ├── local
│   │   └── index.yaml
│   └── repositories.yaml
└── starters

7 directories, 4 files
```
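As a final sanity check on the persisted configuration, Helm v2's `helm version` reports both the client and Tiller versions, so running it through our new target confirms both the mounted `.helm` directory and the cluster connection are working:

```sh
make helm cmd="version"
```

If Tiller is reachable, this prints a `Client:` line and a `Server:` line; if only the client version appears, the `--kubeconfig` mount is worth double-checking.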
Cool.
Onwards, to installing Rancher!