Install and Configure iptables Firewall with Ansible
In this tutorial we will cover using Ansible to install and configure
iptables on our remote Rancher 2 / Kubernetes host.
By the end of this tutorial we will have:

- Added a new third party role
- Created a custom firewall config applied only to our Rancher 2 / Kubernetes nodes
To get the port requirements for Rancher 2 Nodes, Kubernetes Cluster Nodes, and some other suggested ports, we will need to refer to the installation reference.
This may change between when I'm writing / recording this, and when you're following along. So don't follow this blindly.
Don't Make Life Hard On Yourself
We could create our own role to install and manage our system's firewall. But the whole point of a tool like Ansible is to leverage the power of the community, and the expertise within.
I will start by pulling in geerlingguy.firewall. If you stick around in the world of Ansible for any length of time, you will undoubtedly come across one of its biggest proponents, Jeff Geerling. Jeff is an awesome dude, and we have a lot to thank him for.
```
make install_role role=geerlingguy.firewall

- downloading role 'firewall', owned by geerlingguy
- downloading role from https://github.com/geerlingguy/ansible-role-firewall/archive/2.4.1.tar.gz
- extracting geerlingguy.firewall to /crv-ansible/roles/geerlingguy.firewall
- geerlingguy.firewall (2.4.1) was installed successfully
```
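The `make install_role` target is specific to this project's Makefile. If you are not following along with the same repository, a standard alternative is to pin the role in a `requirements.yml` file (the file name and version pin here are assumptions based on the output above):

```yaml
# requirements.yml - pinning the role to the version shown in the install output
roles:
  - name: geerlingguy.firewall
    version: 2.4.1
```

You can then install it with `ansible-galaxy install -r requirements.yml --roles-path ./roles`.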
This role takes care of installation and basic configuration, providing some sane defaults. But we will want and need to add our own rules.
Here's the Rancher Nodes config we will use:
```yaml
# group_vars/rancher-2-kubernetes-nodes
firewall_allowed_tcp_ports:
  - "22"
  - "80"
  - "443"
  - "2376"
  - "2379"
  - "2380"
  - "6443"
  - "9099"
  - "10250"
  - "10254"

firewall_allowed_udp_ports:
  - "8472"

firewall_additional_rules:
  - "iptables -A INPUT -p tcp --match multiport --dports 30000:32767 -j ACCEPT"
  - "iptables -A INPUT -p udp --match multiport --dports 30000:32767 -j ACCEPT"
```
Don't forget to add the role to your Rancher 2 / Kubernetes playbook:
```yaml
---
- name: Rancher 2 Kubernetes Nodes
  hosts: rancher-2-kubernetes-nodes
  roles:
    - codereviewvideos.common
    - geerlingguy.firewall
    - geerlingguy.docker
    - singleplatform-eng.users
```
Important: it's really, really important to run the geerlingguy.firewall role before the geerlingguy.docker role. The ordering matters: when the firewall role applies its configuration it flushes the existing iptables rules, so running it after Docker would wipe out the rules Docker adds for container networking.
What this means in our case is that we need to destroy, and completely re-provision our server. Normally this would be an absolute nightmare. But thanks to Ansible, this is but a minor inconvenience.
Save off, and run your main site.yml playbook. We have a Makefile shortcut command for this:
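The exact Makefile target isn't shown here, but it presumably wraps a standard ansible-playbook invocation along these lines (the target name and inventory path are assumptions, not the project's actual Makefile):

```makefile
# Hypothetical sketch of the Makefile shortcut; the real target name,
# inventory path, and flags may differ in your setup.
run_playbook:
	ansible-playbook -i hosts site.yml
```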
And after a few seconds, you should see:
```
PLAY RECAP *********************************************************************
22.214.171.124             : ok=12   changed=2    unreachable=0    failed=0
```
Jump onto the remote box via ssh, and validate your config:
```
root@Ubuntu-1604-xenial-64-minimal ~ # iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:http
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:https
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:2376
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:2379
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:2380
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:6443
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:9099
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:10250
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:10254
ACCEPT     udp  --  anywhere             anywhere             udp dpt:8472
ACCEPT     icmp --  anywhere             anywhere
ACCEPT     udp  --  anywhere             anywhere             udp spt:ntp
ACCEPT     tcp  --  anywhere             anywhere             multiport dports 30000:32767
ACCEPT     udp  --  anywhere             anywhere             multiport dports 30000:32767
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
LOG        all  --  anywhere             anywhere             limit: avg 15/min burst 5 LOG level debug prefix "Dropped by firewall: "
DROP       all  --  anywhere             anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     udp  --  anywhere             anywhere             udp dpt:ntp
```
That's our firewall configuration at a level that I believe is OK for production. Any corrections most welcome.
Next we will take everything we've covered so far, and use it to provision a different VPS instance - our Load Balancer.