Managing Servers in Python Using Fabric

A tool for managing infrequent but repetitive actions

The days of running a website on a single computer have long passed and there are many tools for managing your fleet of servers, such as Puppet, Chef, and Ansible. These allow you to orchestrate your servers as you see fit, though they are not without their complexities.

If you like a more hands-on approach to managing your servers, there are tools such as PSSH or MultiSSH that allow you to run manual commands on multiple servers in parallel. However, these are not easily scriptable and can be overwhelming when managing anything above a handful of machines.

Between those two extremes sits Fabric. It allows you to command groups of servers using a combination of Python and shell commands and can be the perfect tool for managing infrequent but repetitive actions.

The Setup

To get the most out of Fabric, create an account on the remote servers specifically for Fabric to use.

You'll want to set up public-key authentication for this user, or some other form of password-less login; Fabric supports several authentication methods. You should be doing this for all of your server logins anyhow, but in case you don't feel like looking it up, the following steps should suffice.

On the computer that will run Fabric:

ssh-keygen -t ecdsa -b 521

This will generate a key file at the location you specify. Copy the public key to the user account you created on each server. This can be done easily with:

ssh-copy-id -i /path/to/public/key fabric_user@hostname

Once the file has been copied, verify that you can log in to each of the servers with that identity file.

Then, if you plan to run privileged commands on those servers, you'll need to give that account sudo privileges. It's tempting to give blanket permissions with NOPASSWD:ALL, but if you know in advance which commands you plan to run, it is much safer to explicitly specify those paths in the sudoers file.

fabric_user ALL=NOPASSWD: /path/of/executable, /different/path -with arguments

With these in place, you are prepared to trigger server actions on multiple servers at once.

Using Fabric

To install Fabric on the computer you'll use to trigger the commands, run pip install fabric. Then create a file called fabfile.py to house your Fabric commands.

First, you'll need to set up your server "groups". Fabric provides two types of groups: SerialGroup and ThreadingGroup. As the names suggest, serial groups run their commands on one server at a time, while threading groups run in parallel. Other than that, they have the same semantics. Choose whichever best suits your requirements.

Here is a simple group that will run commands in sequence:

from fabric import SerialGroup

server_group = SerialGroup(
    'hostname-or-ip1',
    'hostname-or-ip2',
    'hostname-or-ip3',
    'hostname-or-ip4',
    user='fabric_user',
    connect_kwargs={'key_filename': '/path/to/private/key'}
)

With that in place, it's time to orchestrate our first task across these servers. As an example, let's create a simple sanity check that verifies we can connect to all of our servers.

To do that, we will log in to each server and get its hostname. That could be done inside fabfile.py:

from fabric import task

@task
def check_hosts(c):
    server_group.run('hostname')

If you include a server group like the one created above, you can run the command fab check-hosts and it will output the names of every host it reaches.

Of course, we can get more complex. In this (admittedly contrived) example, we'll check our servers for free space and compress a log file if we're running low. There are better solutions to this (see logrotate), but this demonstrates a lot of the power available in Fabric, which you can adapt to your own situation.

@task
def compress_logs(c):
    for connection in server_group:
        # df's fifth column is Use%; strip the trailing '%' character.
        used_pct = connection.run(
            "df / | tail -n1 | awk '{print substr($5, 1, length($5)-1)}'",
            hide=True
        )
        # Skip any server that still has more than 25% free space.
        if int(used_pct.stdout.strip()) < 75:
            continue

        with connection.cd('/var/log'):
            connection.run('sudo mv big-log-file big-log-file-tmp')
            connection.run('sudo tar cjf big-log-file-$(date +"%Y%m%d%H%M%S").bz2 big-log-file-tmp')
            connection.run('sudo rm big-log-file-tmp')

You can loop over the servers in a group, and get an individual connection for each one. Note that if you do this with a ThreadingGroup, it effectively becomes serial.

Fabric passes your command string to the remote shell, so you may use pipes and variable interpolation exactly as you would interactively. In these examples, I assume the Fabric user's shell is `bash`, which is Fabric's default.

Normally Fabric will print the output of any commands to the screen, as we saw in the hostname example. If you wish to prevent this, the `hide` argument to `run()` allows that.

The Result object returned from running your command includes stdout, stderr, information about the exit status of the command, and information about the environment. You can use these in Python to make decisions about what to run next.
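As a sketch of that pattern, a hypothetical helper that branches on those fields might look like the following (the `report_disk` name and the `df` check are illustrative, not part of Fabric):

```python
def report_disk(connection):
    """Print root-filesystem usage for one host, or why the check failed."""
    # hide=True suppresses echoing the output; warn=True returns a Result
    # on failure instead of raising an exception.
    result = connection.run('df -h /', hide=True, warn=True)
    if result.ok:                      # exit status was zero
        print(result.stdout.strip())
    else:
        print(f'df failed with exit code {result.exited}')
        print(result.stderr.strip())
```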

Fabric also provides a context manager for changing directories: connection.cd. If you pass it a path, all commands inside that context will execute in that directory.

If you set up passwordless sudo for your Fabric user, you can simply include sudo in the command strings you pass to connection.run. Fabric also provides a connection.sudo method, but that is specifically for when sudo will prompt for a password, which Fabric can then supply on your behalf.

If you put this task in fabfile.py, it can be run from the command-line using fab compress-logs.

Conclusion

We have only scratched the surface of what Fabric provides. You can learn much more in the Fabric documentation, but hopefully this has given you a feel for what is possible. It is a handy tool when you need to quickly automate repetitive server tasks. With its ease of installation and host of available tools, it can make managing your fleet of servers much easier.