I was tired of SSHing into each of my Ubuntu virtual machines to run updates. It was a tedious and time-consuming process, so I decided to try using Ansible to automate it.

Before we dive into the specifics of the setup, I should note that I am running Ansible inside a Proxmox LXC container. A Proxmox CT is a lightweight virtualization technology that allows you to run multiple isolated Linux systems on a single host. The difference between a traditional virtual machine and a Proxmox CT is that CTs share the kernel with the host system, making them faster and more resource efficient. You could choose to run this inside a full VM or on bare metal; however, an LXC makes the most sense for my environment.
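I created my CT through the web UI as described next, but for reference, the equivalent from the Proxmox host shell looks roughly like this. It is only a sketch; the container ID, template name, storage, and resource sizes are placeholders to adjust for your own host.

# Run on the Proxmox host; ID, template, storage and sizes are examples only
pct create 200 local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst \
  --hostname ansible --cores 2 --memory 1024 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 200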

First, I need an LXC container to run Ansible. That’s as simple as clicking Create CT in Proxmox, selecting a template, following the prompts, and allocating resources. Then I needed to set up Ansible inside my Proxmox CT. I started by creating a new user for SSH access, as I didn’t want to run Ansible commands from the console. I followed these steps to create a new user called “ansible” (the commands are collected in the snippet after these steps):

  1. Console into your Proxmox CT as root.
  2. Run the following command to create a new user: adduser ansible.
  3. Follow the prompts to set the user’s password and other information.
  4. Add the new user to the sudo group so it can run admin commands: usermod -aG sudo ansible.
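Put together, the console session as root looks roughly like this; the groups check at the end is just a quick way to confirm the sudo membership took effect:

adduser ansible
usermod -aG sudo ansible
groups ansible    # should include sudo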

Once the new user was created, I could SSH into the LXC from my workstation as that user and install Ansible with the following commands:

sudo apt update
sudo apt install ansible
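A quick check confirms the install worked and shows which version the Ubuntu repositories provided:

ansible --version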

Now that Ansible was installed, I needed to create an inventory file to tell Ansible which hosts to manage. I created a file called “hosts.ini” in the same directory as my playbook with the following contents:

[web]
192.168.1.101
192.168.1.102
192.168.1.103
192.168.1.104

This inventory file specifies that Ansible should manage the hosts with IP addresses 192.168.1.101, 192.168.1.102, 192.168.1.103, and 192.168.1.104.
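If the account on the VMs is different from the user on the Ansible CT, the inventory can carry that too, with a group variable. This is just an illustration; “user” here is a placeholder for whatever account actually exists on the VMs:

[web:vars]
ansible_user=user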

Next, I needed to generate an SSH key pair on the Ansible CT using the command:

ssh-keygen

And then copy the public key from the Ansible CT to each of the VMs using the command:

ssh-copy-id user@192.168.1.101
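Rather than running ssh-copy-id four times by hand, a small shell loop covers every host in the inventory, and an ad-hoc ping afterwards confirms Ansible can actually reach them over SSH (again, “user” is whatever account exists on the VMs):

for ip in 192.168.1.101 192.168.1.102 192.168.1.103 192.168.1.104; do
  ssh-copy-id user@$ip
done
ansible -i hosts.ini web -m ping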

Next, I created a playbook called “update.yml” in the same directory as my inventory file with the following contents:

---
- name: Update packages
  hosts: web
  become: true
  vars:
    ansible_become_pass: "your_sudo_password_here"
  tasks:
    - name: Update packages
      apt:
        update_cache: yes
        upgrade: yes

This playbook tells Ansible to target the hosts specified in the “web” group in the inventory file. The become: true setting escalates to root on each host, using the sudo password supplied in ansible_become_pass. The “tasks” section of the playbook contains a single task that uses the “apt” module to refresh the package cache and upgrade all packages on the remote hosts.
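One caveat: ansible_become_pass keeps the sudo password in plain text inside the playbook. If you’d rather not do that, drop the vars block and have Ansible prompt for the password at run time instead (the variable could also be stored encrypted with ansible-vault):

ansible-playbook -i hosts.ini update.yml --ask-become-pass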

Now that my inventory file and playbook were set up, I ran the following command to update my Ubuntu VMs:

ansible-playbook -i hosts.ini update.yml

This command tells Ansible to use the “hosts.ini” file as the inventory and the “update.yml” playbook to update the specified hosts.
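When I’m unsure what a run will actually change, I can do a dry run first; check mode reports what would be updated without modifying the hosts:

ansible-playbook -i hosts.ini update.yml --check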

I can run this command manually, or I can set up a cron job to execute it on a schedule. Open the crontab file by running crontab -e and add the following example:

0 3 * * * ansible-playbook -i /path/to/hosts.ini /path/to/update.yml

In this example, the 0 3 * * * part specifies the schedule for the cron job, which is every day at 3am. The ansible-playbook command is followed by the -i option, which specifies the inventory file location, and the path to the update.yml playbook.
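Since cron runs with a minimal PATH and no terminal, in practice I give it the full path to ansible-playbook (which ansible-playbook will show where apt put it, typically /usr/bin/ansible-playbook) and send the output to a log file I can review later; the paths below are placeholders:

0 3 * * * /usr/bin/ansible-playbook -i /path/to/hosts.ini /path/to/update.yml >> ~/ansible-update.log 2>&1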

Now I can be confident all the Linux VMs in my homelab are up to date and better protected!

This is only the tip of the iceberg for using Ansible. It is a hugely powerful tool, and I am eager to expand its use in my homelab.
