Installing software on your new VPS with Docker and Ansible
In this tutorial series we are using Terraform and Ansible to provision multiple host servers for use with a high availability Rancher 2 Kubernetes cluster, in as automated a manner as possible.
By the end of this video you will have:
- Created a basic Ansible playbook directory structure
- Used Docker to run Ansible commands
- Completed a basic server provisioning using your Dockerised Ansible setup
In order to complete this tutorial series you will need:
- One or more sufficiently powered servers
- An SSH key pair
- Your SSH key on your remote server
- Docker installed on your local machine
- Ansible - we will be using Docker to run Ansible
- Patience and / or persistence :)
I will be using Ubuntu as my target server.
At present, Rancher 2 requires Docker 17.03.2 which isn't straightforward to install on Ubuntu 18.04 LTS. Therefore, I will be using Ubuntu 16.04 LTS for simplicity. This requirement may have changed by the time you follow this tutorial.
I will be using Digital Ocean for this tutorial. Digital Ocean, like many other cloud providers, offers its own Kubernetes service. If you're happy with that then you don't need Rancher.
I'm suggesting you use Digital Ocean as they offer $100 in account credit when you first sign up. This is more than enough to run multiple sufficiently sized droplets (think: 4GB+) whilst completing this tutorial. Digital Ocean make it super easy to add your public SSH key on to your remote / cloud server. Full disclosure: this is an affiliate link. You are free to use any cloud provider, or any other server provider that you like.
In the real world I use Hetzner for my servers. For the same price as a cloud provider VPS you will get a vastly more powerful server, but you will need to manage everything yourself. As ever, consider the trade-offs and use your own judgement.
Lastly, before starting, be aware that I consider this an "intermediate" tutorial (i.e. it is not aimed at beginners) and a working knowledge of Docker is assumed. We will not be covering the ins-and-outs of every command.
Are we all ready?
Then let's goooooooo!
Ansible in Docker
We will be using Docker to run Ansible. This has the advantage that you do not need to install and maintain Ansible and its dependencies on your local machine.
However, if you use SSH keys that require a pass-phrase, this process will not work. At least, it will not work easily and I do not have a solution. Therefore if your SSH key requires a pass-phrase, you will need to install Ansible on a machine and run the commands outside of Docker.
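That said, if you do want to experiment, one commonly used pattern (a sketch only, not something verified in this series) is to forward your local ssh-agent socket into the container, so the agent supplies your pass-phrase-protected key:

```shell
# Load your key into the agent first; you'll be asked for the pass-phrase once
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa

# Mount the agent socket into the container and tell SSH where to find it
docker run --rm \
    -v $SSH_AUTH_SOCK:/ssh-agent \
    -e SSH_AUTH_SOCK=/ssh-agent \
    williamyeh/ansible:alpine3 \
    ansible --version
```

This tends to work on Linux hosts; on macOS the Docker VM cannot mount the host's agent socket directly, so your mileage may vary.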
If you have been following along with CodeReviewVideos for any length of time, you'll know I'm all about making use of existing solutions rather than reinventing the wheel. In other words, I will be making use of an existing Ansible Docker image, rather than rolling my own.
With this in mind, we will be using [William-Yeh/docker-ansible][2] as our starting point. This is clearly not an official image from Ansible themselves. Unfortunately, at the time of writing, Ansible have deprecated their own Docker image. I'm unsure why.
A quick test to ensure this 'works':
docker run --rm williamyeh/ansible:alpine3
Unable to find image 'williamyeh/ansible:alpine3' locally
alpine3: Pulling from williamyeh/ansible
4fe2ade4980c: Pull complete
7ed25876fe73: Pull complete
Digest: sha256:8072eb5536523728d4e4adc5e75af314c5dc3989e3160ec4f347fc0155175ddf
Status: Downloaded newer image for williamyeh/ansible:alpine3
ansible-playbook 2.7.2
config file = None
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.15 (default, Aug 16 2018, 14:17:09) [GCC 6.4.0]
Seems good.
Now would be a great time to ensure you can connect to your installation target - the server you want to provision / manage with Ansible:
➜ ~ ssh root@6.10.118.222
The authenticity of host '6.10.118.222 (6.10.118.222)' can't be established.
ECDSA key fingerprint is SHA256:XcfbNEw6zhqSZYDdaeTGfIQFgyP09Cp2Q8+SW+gnn1U.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '6.10.118.222' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.15.0-38-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
root@Ubuntu-1604-xenial-64-minimal ~ #
Notice that I didn't need to provide a password, nor an SSH pass-phrase. This is because we provisioned our servers in the previous two videos, and added our local user's SSH key to the authorized_keys file of the VPS's root user.
Whilst you're on your remote server, check that you don't already have vim installed. We'll be installing it using Ansible as we go through this tutorial. If you find that you do have vim installed already, substitute something else in its place - htop might be a good alternative.
For now, we can move on to setting up the Ansible playbook.
Rancher 2 Kubernetes Host Ansible Playbook
There are some 'common' tasks we will need to do, and some specific things that would only happen when provisioning a Rancher 2 Kubernetes host machine.
The common tasks I would typically run on any server I provision would be:
- Install SSH
- Setup User accounts
- Enable unattended upgrades
- Set the server's time zone
- Install vim, htop, or other software
- Install and configure a firewall
These are my chosen common tasks. Yours may be different. That's fine, update and amend accordingly.
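To make the list above a little more concrete, here is a rough sketch of how a couple of those common tasks might look as Ansible tasks. The module names (timezone, package) are real, but the values are placeholders you would adapt to your own needs:

```yaml
# Illustrative sketch only - adapt values to taste
---
- name: set the server's time zone
  timezone:
    name: Europe/London

- name: install common packages
  package:
    name: "{{ item }}"
    state: present
  with_items:
    - vim
    - htop
```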
The specific tasks are anything that only a Rancher 2 / Kubernetes host would need. Examples are:
- Enable specific firewall rules
- Install Docker, helm, Kubernetes, and Rancher itself
We will start with the common role.
Ansible Common Role
Create your Ansible directory structure wherever you see fit. I'm following the Ansible 2.7 best practices:
mkdir ansible && cd "$_"
touch site.yml rancher-2-kubernetes-node.yml
The 'master playbook' is called site.yml. It's a bit unintuitive in my opinion, but that's the way the Ansible docs tell me it should be done, and who am I to argue with the almighty Red Hat?
Inside site.yml:
---
- import_playbook: rancher-2-kubernetes-node.yml
This probably feels overkill, but again, this is following the Ansible Best Practices. Stick with me.
Inside rancher-2-kubernetes-node.yml:
---
- name: Rancher 2 Kubernetes Nodes
  hosts: rancher-2-kubernetes-nodes
  roles:
    - codereviewvideos.common
We're going to have a group set up for our rancher-2-kubernetes-nodes. We will add any of our target Rancher 2 host servers into that group. When we run our playbook, every server in that group will receive the configuration instructions. At the moment this would just be anything from our as-yet-undefined codereviewvideos.common role.
As a heads up, I have used the codereviewvideos. prefix to namespace my role. The reasoning for this will become more evident as we pull in additional roles from other people. Use your own prefix, or use codereviewvideos. whilst completing the tutorials.
Now we need to define our codereviewvideos.common role.
mkdir roles
At this point we need to make a decision.
When installing Ansible onto our machine, one of the nice things we get access to is the ansible-galaxy init {our_role_name_here} command. Essentially this creates a full directory structure for our forthcoming role with barely any effort on our part.
But we're using Docker to run Ansible. So how can we achieve this?
docker run --rm \
-v $(pwd):/etc/ansible/roles \
williamyeh/ansible:alpine3 \
ansible-galaxy init codereviewvideos.common --init-path=/etc/ansible/roles
It's long-winded and quite impractical for day-to-day usage, but it works.
The ansible-galaxy init command will create the requested role name (codereviewvideos.common in our case) in the same directory that you run the command from. As we run the command with Docker, the current working directory will be the root dir (/) unless we tell Docker otherwise.
In this case we tell Docker to mount a volume mapping our local current working directory ($(pwd)) to the /etc/ansible/roles path in the resulting container.
Then we use --init-path=/etc/ansible/roles to force ansible-galaxy to use that path instead of the default.
There are ways to make this process a touch more palatable. One such way is to use a Makefile:
# from your project root dir
touch Makefile
vim Makefile
Adding the contents:
create_role:
	@docker run --rm \
		-v $(CURDIR)/roles:/etc/ansible/roles \
		williamyeh/ansible:alpine3 \
		ansible-galaxy init $(role) --init-path=/etc/ansible/roles
As ever with Makefiles, be careful to use tabs, not spaces.
This should then allow you to run:
make create_role role=test
- test was created successfully
tree roles
roles
├── codereviewvideos.common
│ ├── README.md
│ ├── defaults
│ │ └── main.yml
│ ├── files
│ ├── handlers
│ │ └── main.yml
│ ├── meta
│ │ └── main.yml
│ ├── tasks
│ │ └── main.yml
│ ├── templates
│ ├── tests
│ │ ├── inventory
│ │ └── test.yml
│ └── vars
│ └── main.yml
└── test
├── README.md
├── defaults
│ └── main.yml
├── files
├── handlers
│ └── main.yml
├── meta
│ └── main.yml
├── tasks
│ └── main.yml
├── templates
├── tests
│ ├── inventory
│ └── test.yml
└── vars
└── main.yml
18 directories, 16 files
rm -rf ./roles/test
Swell.
Now you might not need all the gubbins that this command creates. However, it's a good structure to work from, and very little real effort on our part (I assume you copy / pasted my commands rather than typed them out).
Towards Running Our First Ansible Playbook
To finish up this first section, we're going to add a task to our 'common' role which installs vim onto our server. We will then run the playbook to get that software on to the server and, more generally, to ensure Ansible is working as expected.
You may use vim on a day-to-day basis. If you do, you likely have a bunch of special commands that are unique to your own workflow. I use vim for ad-hoc editing, and very little else. For my day-to-day work I use an IDE (typically a JetBrains one, or currently VS Code). Therefore all I care about here is ensuring I have vim on my server, not some highly customised version of it. The basic Ansible package management commands will suffice in my situation.
Inside roles/codereviewvideos.common/tasks/main.yml:
---
- name: install vim
  package:
    name: vim
    state: present
In other words, we are going to start with a server that doesn't have vim installed. Then we will run our Ansible playbook using Docker. After the playbook run completes, our target server should have vim installed. Super.
We need to tell Ansible about our 'inventory' of servers. In our case we have just one server for the moment. As per the Ansible best practices directory structure, we are going to create a production inventory file in our local Ansible root dir:
# from your project root dir
touch production
And in to that file:
[rancher-2-kubernetes-nodes]
rk8s-node-1 ansible_host=6.10.118.222
It would be nicer to use valid DNS names - www.example.com or whatever - but I'm assuming we don't have that info just yet, so I've just made up a name. You could just go with the IP address, but this makes certain tasks a little harder later on.
When using Ubuntu 16.04, we hit a problem: Python 3 is installed, but by default Ansible will try to use Python 2 when running commands on the remote system. There are various ways to solve this, including host_vars and / or group_vars entries. I prefer an approach that I consider simpler at this stage, which is to add an extra bit of config inside the inventory file:
[rancher-2-kubernetes-nodes]
rk8s-node-1 ansible_host=6.10.118.222 ansible_python_interpreter=/usr/bin/python3
If for any reason you think you will have your host in multiple groups, an alternative form may be:
rk8s-node-1 ansible_host=6.10.118.222 ansible_python_interpreter=/usr/bin/python3
[rancher-2-kubernetes-nodes]
rk8s-node-1
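For completeness, the group_vars approach mentioned above would look something like the following - a file named after the group, sitting in a group_vars directory alongside your inventory file (illustrative sketch, path assumed):

```yaml
# group_vars/rancher-2-kubernetes-nodes.yml
---
ansible_python_interpreter: /usr/bin/python3
```

Every host in the rancher-2-kubernetes-nodes group would then pick up this variable automatically.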
We're almost there.
However, if we run the ansible-playbook command just now, we will still hit two problems:
- Ansible will likely struggle to connect via SSH
- Even if it can connect, it won't run our commands as sudo
Fixing both of these can be achieved using a new file:
touch ansible.cfg
And into that file:
[defaults]
host_key_checking = false
[privilege_escalation]
become=True
become_method=sudo
become_user=root
host_key_checking ensures we don't hit the first-time SSH connection prompt:
The authenticity of host '6.10.118.222 (6.10.118.222)' can't be established.
ECDSA key fingerprint is SHA256:XcfdNEw6zjsSZXbdaeTGfIQFgyP09Cp2Q7+SW+gnn8U.
Are you sure you want to continue connecting (yes/no)?
As we wouldn't see this prompt, we couldn't accept it, and the whole process would fail.
Note that this may be too insecure for you and your production needs. Use with appropriate levels of caution.
The privilege_escalation section concerns which user our remote commands will run as. We want to install software as root. Again, this may be insecure for your production needs, so adapt accordingly.
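If setting privilege escalation globally feels too broad for your needs, the same settings can instead be applied per play. A sketch, using our playbook from earlier:

```yaml
---
- name: Rancher 2 Kubernetes Nodes
  hosts: rancher-2-kubernetes-nodes
  become: true
  become_user: root
  roles:
    - codereviewvideos.common
```

Any play without these keys would then run as the connecting user, unescalated.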
This should be enough to run the Ansible playbook process.
Here's the command we need to run:
docker run --rm \
-v $(pwd):/crv-ansible \
-v ~/.ssh/id_rsa:/root/.ssh/id_rsa \
-v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub \
-w /crv-ansible \
williamyeh/ansible:alpine3 \
ansible-playbook -i production site.yml -vvv
This mounts the current working directory inside the resulting Docker container as the /crv-ansible directory.
We mount both our public and private SSH keys, which are essential for allowing remote connectivity to our target server(s).
We set the working directory (-w) to /crv-ansible, which we know from above contains all our sub dirs, like the roles dir, and any files such as our inventory and playbook configs.
We're telling the ansible-playbook command to use the production inventory, and the site.yml master playbook.
I've added -vvv for super massive extra crazy verbosity levels on our terminal output. Why? Well, in my experience, Ansible has a habit of going wrong quite frequently. It is just easier to have all the debug output right there without having to re-run the command with the -vvv flag. Feel free to remove it.
Anyway, I would add this command to my Makefile:
run_playbook:
	@docker run --rm \
		-v $(CURDIR):/crv-ansible \
		-v ~/.ssh/id_rsa:/root/.ssh/id_rsa \
		-v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub \
		-w /crv-ansible \
		williamyeh/ansible:alpine3 \
		ansible-playbook -i production site.yml -vvv
So I can run make run_playbook and achieve the same thing.
With Ansible you can (read: should be able to) run, and re-run the same command over and over. You should aim for idempotency in all of your commands.
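As a quick illustration of what idempotency means in practice: the package module we used above is idempotent out of the box, whereas a raw command is not - Ansible will report it as "changed" on every single run:

```yaml
# Idempotent: reports "changed" only on the first run,
# "ok" on every run after that
- name: install vim
  package:
    name: vim
    state: present

# Not idempotent: reports "changed" on every run,
# as Ansible cannot tell whether anything actually changed
- name: install vim the hard way
  command: apt-get install -y vim
```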
I'm not going to paste the full output, but after running you should see:
PLAY RECAP *********************************************************************
6.10.118.222 : ok=2 changed=0 unreachable=0 failed=0
And checking your remote server, vim (or whatever software you opted for) should now be installed.
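You don't even need to log in interactively to check - something like the following (substituting your own IP address) should do it:

```shell
# Prints the first line of vim's version output if it is installed
ssh root@6.10.118.222 'vim --version | head -n 1'
```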
Bonza, we're well on our way with Dockerised Ansible.