Creating a Dynamic rancher-cluster.yml File


Following along with the instructions for installing Kubernetes with Rancher 2.x's RKE, there are three steps that we now need to take care of:

  • Create a rancher-cluster.yml file
  • Dynamically customise this file for our new k8s cluster
  • Run the rke up... command through Docker

We're going to cover the first two of these points here, and then bring up the cluster in the next video.

As it's not immediately obvious, RKE stands for Rancher Kubernetes Engine.

Here's the example rancher-cluster.yml starting point from the linked docs:

nodes:
  - address: 165.227.114.63
    internal_address: 172.16.22.12
    user: ubuntu
    role: [controlplane,worker,etcd]
  - address: 165.227.116.167
    internal_address: 172.16.32.37
    user: ubuntu
    role: [controlplane,worker,etcd]
  - address: 165.227.127.226
    internal_address: 172.16.42.73
    user: ubuntu
    role: [controlplane,worker,etcd]

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h

At this stage we could either hardcode the config into a rancher-cluster.yml file, or create the file dynamically using all the config we've built up in this tutorial series so far.

My preference is to dynamically customise the rancher-cluster.yml file with both internal and external IP addresses. We will also be switching to a different user from the one listed. The rest of the settings are fine.

Before getting clever, let's keep things simple.

Here's what we want to achieve as the outcome of our rancher-cluster.yml file:

nodes:
  - address: 6.10.118.220
    user: rancherk8s
    role: [controlplane]
  - address: 6.10.118.221
    user: rancherk8s
    role: [controlplane]

  - address: 7.11.119.230
    user: rancherk8s
    role: [etcd]
  - address: 7.11.119.231
    user: rancherk8s
    role: [etcd]

  - address: 8.12.120.240
    user: rancherk8s
    role: [worker]
  - address: 8.12.120.241
    user: rancherk8s
    role: [worker]

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h

Note that the IP address info is totally made up for the purposes of demonstration. We've already covered how we would get this info from Terraform.

This file has nothing to do with Ansible.

An immediate question may be: What's the best location for this file?

I will ultimately leave that up to you. For simplicity, I am going to start by creating this file right inside our existing Ansible project. In this tutorial series you have gained all the knowledge required to place this file anywhere you like.

Creating Local Files With Ansible

Everything we have done so far has involved running Ansible inside Docker on our local machine, but targeting remote machines.

Now we will cover how to use our Ansible / Docker setup to target our local machine.

For this we will need a Jinja2 template.

I'm going to create a new role for this purpose:

make create_role role=codereviewvideos.rke_cluster

# may need to fix your new directory ownership
sudo chown -R $(whoami):$(whoami) roles

I'm also going to create a new playbook:

touch rke-cluster.yml

Adding in the following contents:

---
- name: Create the Rancher 2 Cluster Config for RKE
  hosts: 127.0.0.1
  connection: local
  roles:
    - codereviewvideos.rke_cluster

This playbook looks slightly different to the previous two we have created.

We have the hosts key specifically targeting localhost / 127.0.0.1.

We also specify the connection type of local.

This is directly from the Ansible documentation on local playbooks.

Then we specify our new role as the only role to run during this playbook.
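Incidentally, Ansible's implicit localhost entry matches either 127.0.0.1 or the name localhost, so the following is an equivalent way to write the same play — some people find the named form more readable:

```yaml
---
- name: Create the Rancher 2 Cluster Config for RKE
  hosts: localhost
  connection: local
  roles:
    - codereviewvideos.rke_cluster
```

Either spelling works; the important parts are that the play targets the local machine and that the connection type is local.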

Dynamic RKE Cluster Template

We have learned all the techniques required to create a template that will look at our configuration, and pull out the appropriate variables.

Start by creating the new template:

touch roles/codereviewvideos.rke_cluster/templates/rancher-cluster.yml.j2

I've kept the full, expected outcome filename (rancher-cluster.yml) as part of my template, but that isn't necessary.

Into this new Jinja2 template file, I will add the following contents:

#{{ ansible_managed }}

nodes:
  {% for host in groups['rancher-2-kubernetes-nodes'] %}
- address: "{{ hostvars[host]['ansible_host'] }}"
    user: "{{ hostvars[host].users[0].username }}"
    role: [controlplane,worker,etcd]
  {% endfor %}

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h

This looks very similar to the approach we took with group_vars/rancher-2-kubernetes-nodes.yml in a previous video.

The formatting of this template is important. Although the spacing looks odd, it is necessary to output valid YAML: Ansible's template module enables Jinja2's trim_blocks option by default, which removes the newline after each {% ... %} tag, so the two spaces before the {% for %} and {% endfor %} tags end up supplying the indentation for each list item.
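To make the spacing concrete, here is roughly what the template renders to for a two-node group (the IPs and username are invented for illustration). Note that this template assigns all three roles to every host in the group — producing the split-role layout shown earlier would need separate loops per group:

```yaml
#Ansible managed

nodes:
  - address: "6.10.118.220"
    user: "rancherk8s"
    role: [controlplane,worker,etcd]
  - address: "6.10.118.221"
    user: "rancherk8s"
    role: [controlplane,worker,etcd]

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
```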

I've added #{{ ansible_managed }}, which will be replaced by #Ansible managed when the real file is created. This isn't necessary, but helps anyone who opens the file to immediately see that it is generated rather than hand-written. You can read more about ansible_managed in the Ansible documentation if at all curious.

Note that this implicitly depends on the first user in our users array being the one that Kubernetes uses for SSH connections. This is fine for me, as I have no other use for this server. I can't imagine you'd be using a K8s node for anything other than this purpose, so I am happy with this. But as ever, use your own best judgment.
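For reference, the hostvars[host].users[0].username lookup assumes a users structure along these lines in our group_vars — this is a sketch; the exact shape comes from the group_vars/rancher-2-kubernetes-nodes.yml file we built earlier in the series:

```yaml
users:
  # the first entry is the one the template picks for SSH
  - username: rancherk8s
    # any other per-user keys (SSH public keys, shell, etc.)
    # defined earlier in the series would sit here
```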

Creating rancher-cluster.yml

With our template in place, we need to tell our new role that it should transform this template into an output file.

This, again, requires new syntax.

# roles/codereviewvideos.rke_cluster/tasks/main.yml

---
- name: Create the rancher-cluster.yml file
  local_action:
    module: template
    src: rancher-cluster.yml.j2
    dest: /crv-ansible/rancher-cluster.yml

local_action is a shorthand syntax which specifies that this particular task should be run against our local computer.

There are various ways to write this command.

My preference is for the more verbose format as above.

Another way would be:

---
- name: Create the rancher-cluster.yml file
  local_action: template src=rancher-cluster.yml.j2 dest=/crv-ansible/rancher-cluster.yml
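For completeness, a third equivalent spelling uses delegate_to, which runs a standard task on a host of your choosing — here, localhost:

```yaml
---
- name: Create the rancher-cluster.yml file
  template:
    src: rancher-cluster.yml.j2
    dest: /crv-ansible/rancher-cluster.yml
  delegate_to: localhost
```

local_action is simply shorthand for delegate_to: localhost, so all three forms produce the same result.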

The fact you can write the same configuration in multiple ways is a blessing and a curse, in my opinion.

This should be all that we need to do in order to create the desired rancher-cluster.yml file.

Remember that /crv-ansible is mapped into our Docker container as the project root. As such, when we run the playbook in a moment, we should expect rancher-cluster.yml to be created there.

For this to happen, we need to add a new entry into site.yml:

---
- import_playbook: rancher-2-kubernetes-host.yml
- import_playbook: rancher-2-load-balancer.yml
+- import_playbook: rke-cluster.yml

And then:

make run_playbook

After the playbook run completes, a new rancher-cluster.yml file should have been created in your current directory containing the dynamically populated cluster setup.

It should be exactly what we set out to achieve earlier. Only now, it's dynamic.

Things are already starting to get a little messy with the way Ansible is creating configuration related specifically to Rancher and placing those files in the current directory.

I'd like to move them, so we shall do just that in the very next video.
