Simple Bash Scripts for Lazy People | Part 5: When to Choose Bash

This is part 5 of a five-part series:

  • Part One has examples for common daily tasks in Git.
  • Part Two, similar examples for Rails.
  • Part Three, miscellaneous cases.
  • Part Four dissects an example of a failed attempt at a useful script.
  • Part Five concludes with a brief discussion of when to use Bash as opposed to some other scripting language.

TL;DR for Part 5:

  • If your task involves only filesystem and OS-level operations, Bash is probably a good fit
  • Math! ((foo += 1)) or ((foo = foo+100))
  • shellcheck.net is your friend

When To Use Bash

A common response I get when talking about Bash scripts with workmates is “why not use $language?”, where $language is often Ruby or Perl. Here’s my general thinking on that:

Perl is a great fit for text processing. I have a use case on one project where I need to parse a listing of files on a file system and update an SQLite database. Perl’s a natural there.

I’d use Ruby (or Python, or Perl, or PHP, or R, etc. — whatever was the dominant language on the project) where the problem required other functionality of a high-level language, like connecting to a third-party service or manipulating data.

Bash’s natural fit is where everything you’re doing is related to the OS & file system.

If the task is “get x from foo and put it on bar”, that’s scp, and there’s no reason to reach for a language that just wraps scp as a native function. In the use case I mentioned for Perl, that script is actually part of a process that works like this:

  1. cron on the source host runs a Bash script that executes a find and stores a gzip of the output at a canonical location
  2. cron on the destination host runs a Bash script to scp and unzip the file, then runs the Perl script to parse the file and update the database
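The source-host half of that pipeline might look something like the following sketch. All the paths here are invented stand-ins (the real locations aren’t shown), and it builds its own throwaway file tree so it runs standalone:

```shell
#!/usr/bin/env bash
# Sketch of the source-host cron job: capture a file listing and store a
# gzip of it at a canonical location for the destination host to fetch.
# Every path here is a made-up stand-in for the real one.
set -euo pipefail

source_tree=$(mktemp -d)                   # stand-in for the real file tree
touch "$source_tree/a.pdf" "$source_tree/b.pdf"

out=/tmp/file-listing.gz                   # the "canonical location"
find "$source_tree" -type f -name '*.pdf' | gzip > "$out"

# The destination host's cron job would now scp $out, gunzip it,
# and hand the listing to the Perl script.
gzip -dc "$out" | wc -l                    # number of captured paths
```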

Here’s another use case for Bash where a lot of folks would use something else:

A web application has a feature where users can request a zip archive of a large number of files. These archives can run to hundreds of megs, so it’s not realistic to generate them synchronously with page requests.

The web app enqueues the request. cron runs a Bash script that gets a lock (since it has no way of knowing how long it will run), pulls the queue from the database, and for each item makes the corresponding zip file, stores it at a user-accessible location, and emails the user telling them that their archive is ready.
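The lock-taking step can be done with flock(1) from util-linux; this is a sketch with an invented lock path, not the script from the post:

```shell
#!/usr/bin/env bash
# Sketch of the "get a lock first" step using flock(1) (util-linux).
# The lock path is invented; the queue/zip/email work is elided.
set -euo pipefail

lockfile=/tmp/archive-queue.lock

exec 9>"$lockfile"            # open the lock file on file descriptor 9
if ! flock -n 9; then         # non-blocking: bail out if a run is in progress
    echo "previous run still active; exiting"
    exit 0
fi

echo "lock acquired; processing queue"
# ... pull the queue, build zips, send mail ...
# The lock is released automatically when the script exits and fd 9 closes.
```

Holding the lock on a file descriptor means there is no stale-lockfile cleanup to worry about: if the script dies, the kernel drops the lock.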

This script has 14 functions for things like outputting log entries consistently, normalizing or escaping IDs that could be inconsistent between data sources, preparing the email content, sending the email, generating the archive, and so on.

For a lot of people a scripting language would be the natural first choice. But every single thing it’s doing is an OS-level action. Even the database interaction is just a function of passing text to a DB client, which is all the scripting language’s wrappers would be doing.

Here’s an example (where client-specific things are redacted with foo-bar-baz stuff).
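A minimal sketch of that shape follows; every function name and ID here is an invented stand-in for the redacted originals, and the database read is faked so the sketch runs standalone:

```shell
#!/usr/bin/env bash
# Hedged sketch of the archive script's main loop. queued_requests and
# build_archive are invented stand-ins: the real script pulls the queue
# from the database and zips real files.
set -euo pipefail

queued_requests() {           # stand-in: the real one queries the DB client
    printf '%s\n' 1001 1002 1003
}

build_archive() {             # stand-in: the real one zips the user's files
    echo "built archive for request $1"
}

processed=0
while read -r request_id; do
    build_archive "$request_id"
    ((processed += 1))        # Bash arithmetic, no expr/bc subprocess
done < <(queued_requests)     # process substitution keeps $processed in scope

echo "processed $processed requests"
```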

Also note the math using the (( var_name math_expression )) syntax, which is crazy useful.
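A quick illustration of that arithmetic syntax:

```shell
# Bash arithmetic with (( )): variables need no $ inside, and no
# external expr/bc process is spawned
foo=1
((foo += 1))          # foo is now 2
((foo = foo + 100))   # foo is now 102
echo "$foo"           # prints 102
```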

The database interaction happens in the transcript_files function:
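The original function is redacted, so here is a hypothetical sketch of its shape. The table name, column names, and data are all invented, and it builds a throwaway SQLite database so it runs standalone:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of transcript_files. Schema and data are invented;
# the real script queries the production database.
set -euo pipefail

db=$(mktemp)
sqlite3 "$db" "CREATE TABLE transcript_files (path TEXT, transcript_key TEXT);
               INSERT INTO transcript_files VALUES
                 ('/data/ABC Transcript/one.pdf', 'ABC'),
                 ('/data/ABC Transcript/two.pdf', 'ABC');"

transcript_files() {
    local transcript_key=$1
    # Single-column SELECT, so stdout is just a newline-separated list of
    # paths. The real script validates/escapes the key before this point.
    sqlite3 "$db" "SELECT path FROM transcript_files
                   WHERE transcript_key = '$transcript_key';"
}

transcript_files ABC
```

That newline-separated output is what makes the function composable with the rest of the script: it can be piped straight into a while-read loop.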

Obviously with another language you could use a prepared statement or an ORM, but you’d still have to test for and log bad data, so I don’t think it would be less code. The files we care about are guaranteed to live in a directory called "$transcript_key Transcript", and we have the full, absolute path to those files in the database.

transcript_files ultimately invokes the sqlite3 client with a query that returns a single column, so the output is a list of file names. That is exactly what fetch_files needs in order to run the scp commands that move the files from the remote host to the host where the archive needs to be available to web users.

Hopefully you’re wondering what the point of the SQL is when the query returns the same thing as find "$transcript_dir" -type f -name '*.pdf'. The database exists because the web app needs to give users info about files that live on a separate file server, and we’d rather cache that state and refresh it every few hours than make a request to the file system at page load time, which would be much slower.

Executing a find on the remote host would mean we’d be doing this by SSH:

ssh user@host -- "find \"$transcript_dir\" -type f -name '*.pdf'"

I’m nervous about the mix of quoting needed: single quotes to make sure '*.pdf' reaches find as a pattern rather than being expanded as a glob against the working directory, and double quotes around $transcript_dir. Getting the quotes right, both in constructing the ssh command and in how the find is interpreted on the remote host, usually winds up being a rabbit hole, so I’m happy already knowing what those files are from my database.
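For what it’s worth, one escape hatch out of that rabbit hole (my suggestion, not part of the original setup) is to let Bash’s printf %q do the escaping when building the remote command:

```shell
#!/usr/bin/env bash
# printf %q quotes a string so it survives exactly one round of shell
# parsing, which is what the remote shell on the other end of ssh applies.
# The directory path here is an invented example.
set -euo pipefail

transcript_dir='/data/ABC Transcript'   # note the embedded space

# Every argument gets safely quoted; no hand-balanced nested quotes
remote_cmd=$(printf 'find %q -type f -name %q' "$transcript_dir" '*.pdf')
echo "$remote_cmd"
# ssh user@host -- "$remote_cmd"        # would run it on the remote host
```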

I’m dubious that using a higher-level language buys you anything other than some syntax sugar. Unless you’re relying on a feature of the language to do something you can’t easily do on the command line, there’s just no point.

Final Thoughts

I’ll give the last word here to the last word in Bash scripting, shellcheck.net. Run every single script you write through it, and you will learn heaps and heaps every time!
