My Shell Script Best Practices

Nearly every software engineer writes a shell script at least once in their career. Here are some 'best practices' that I've settled on at work after a lot of trial and error.

Start with the correct shebang

Most of the time, the shebang below suffices.

#!/bin/bash

However, it does not guarantee full portability, as some systems place bash somewhere other than /bin. In that case, the alternative below works well on most modern systems, since it resolves bash through your PATH.

#!/usr/bin/env bash

Use set built-in options

Nothing is more discouraging than finding out that a script you wrote has failed big time because of a tiny, avoidable mistake. Consider the following script...

rm -rf ${dir_name}/*

... which, if executed while dir_name is undefined, turns into the command below. Coupled with sudo, this will leave a permanent scar on your heart. (Spoiler alert: it deletes everything under your root directory.)

rm -rf /*

To prevent this, I always like to use set built-in options to mitigate such risks early.

set -o errexit
set -o nounset
set -o xtrace

errexit makes the script exit as soon as a command returns a non-zero status. nounset treats unset variables and parameters as errors. Lastly, xtrace prints every executed command, which helps during debugging. I usually comment out xtrace once I've made sure everything works.

By setting these options, the above rm -rf disaster can be prevented since bash will complain that dir_name is not defined before executing the command, then exit immediately.

test.sh: line 7: dir_name: unbound variable
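These long-form options also have single-letter equivalents, which you will often see in the wild:

set -eux    # shorthand for errexit (-e), nounset (-u), and xtrace (-x)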

You can read more in the bash manual.

Check the executed command's exit code

When executing a command, sometimes you want to check its exit code to decide what to do on success or failure. You can use $? to get the exit code of the last executed command, but I prefer to skip $? and test the command directly.

if YOUR_COMMAND_HERE; then
    : # Do something on success
else
    : # Do something on failure
fi
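For example, with a real command in place of the placeholder (a hypothetical check using grep, whose exit status reflects whether a match was found):

if grep -q "^backup:" /etc/passwd; then
    echo "backup user exists"
else
    echo "backup user is missing"
fi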

There are several reasons to do this:

  1. You don't pollute the code with a temporary variable.
  2. It reads more simply, handling the success path first.
  3. You avoid the common mistake of checking the wrong command's exit code, as in the example below.
YOUR_COMMAND_HERE
echo "Command is executed"
if [[ "$?" == "0" ]]; then
    : # This branch always runs: $? now holds the exit code of echo, not YOUR_COMMAND_HERE
else
    : # This branch never runs, for the same reason
fi

Note: I prefer to use [[ ... ]] instead of [ ... ] as explained here.
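One practical difference, as a quick illustration: [[ ... ]] does not perform word splitting on unquoted variables, so an empty variable cannot break the test the way it can with [ ... ]:

name=""
# [ $name = "admin" ]   # would break: $name expands to nothing, leaving [ with a malformed expression
[[ $name = "admin" ]]   # fine: simply evaluates to false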

Use case instead of if

If you have a lot of conditionals, oftentimes using case is better than repeated if/elif branches. This is especially true when you're dealing with plenty of possibilities, such as the various exit codes returned by your shell command.

YOUR_COMMAND_HERE
exitCode=$?
case $exitCode in
0)
    # Do something on success 
    ;;
1)
    # Do something on failure (exit code 1)
    ;;
*)
    # Default case
    ;;
esac

Note: For variable names or function names I prefer camelCase over snake_case. There appears to be no strict naming convention for shell scripts, so sticking with one style consistently is good enough.

Of course, if you write a script of your own, you know which exit codes to expect when you execute it, and you can prioritize dealing with the error cases first.

YOUR_COMMAND_HERE
exitCode=$?
case $exitCode in
99)
    # Prioritize failure case first
    ;;
0)
    # Do something on success 
    ;;
*)
    # Default case
    ;;
esac
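As a concrete illustration, grep documents its exit codes: 0 when a match is found, 1 when no match is found, and greater than 1 on error. (The || assignment keeps errexit from aborting the script on a non-zero status.)

exitCode=0
grep -q "^root:" /etc/passwd || exitCode=$?
case $exitCode in
0)
    echo "Pattern found"
    ;;
1)
    echo "Pattern not found"
    ;;
*)
    echo "grep itself failed"
    ;;
esac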

Define your error exit code

Traditionally, a Unix command returns 0 for success and a non-zero value from 1 to 255 for failure. You can leverage this to define your own error codes and make the script more readable.

#!/bin/bash

SUCCESS_CODE=0
READ_FILE_FAILED=245
SCP_SEND_FAILED=250

if readFileFunction; then
    echo "Read file success"
else
    echo "Read file failure"
    exit "$READ_FILE_FAILED"
fi

if sendFileViaSCPFunction; then
    echo "Send file via SCP success"
    exit "$SCP_SEND_FAILED"
fi

exit "$SUCCESS_CODE"
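The payoff is that a calling script can react to those documented codes. A minimal sketch, assuming the script above is saved as transfer.sh:

./transfer.sh
exitCode=$?
if [[ "$exitCode" -eq 250 ]]; then    # SCP_SEND_FAILED
    echo "Sending failed, retrying..."
fi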

Use a lock file

Most of the time you only want a single instance of a script running at any given time, e.g. a script that fetches data from an API and populates a data file. You don't want another instance to run concurrently and mess up the data file. You can implement a lock file to prevent this.

The idea of a lock file is simple:

  1. At the very beginning, let your script check for the existence of a file, something like /tmp/PROGRAM_NAME/.lock, and gracefully stop execution when that file exists.
  2. If there is no such file, create it, and delete it once execution finishes (see the sketch below).
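Here is a minimal sketch of that idea, using the /tmp/PROGRAM_NAME/.lock path as a placeholder. Note that a separate check-then-create leaves a small race window between steps 1 and 2; if you need a hard guarantee, mkdir (whose directory creation is atomic) or flock is more robust.

#!/bin/bash

LOCK_FILE="/tmp/PROGRAM_NAME/.lock"

# Gracefully stop when another instance holds the lock
if [[ -e "$LOCK_FILE" ]]; then
    echo "Another instance is already running, exiting"
    exit 0
fi

# Create the lock, do the work, then release the lock
mkdir -p "$(dirname "$LOCK_FILE")"
touch "$LOCK_FILE"

# ... fetch the API and populate the data file here ...

rm -f "$LOCK_FILE"

Deleting the lock file on every exit path, not just the happy one, is exactly what the trap in the next section is for.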

Use trap as a cleanup function

Shell scripts have a handy built-in called trap, which allows you to execute commands when your script receives a signal.

trap commands signals

signals is a list of signals to look out for, and commands is the list of commands to execute when one of those signals is received by the script. It gets really ugly if you want to run many commands on one line, so I usually create a function to act as a 'cleanup'.
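For instance, an inline version that removes the lock file from the previous section on Ctrl-C or termination would look like this, and it only gets worse as commands pile up:

trap 'echo "Interrupted"; rm -f /tmp/PROGRAM_NAME/.lock; exit 1' INT TERM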

#!/bin/bash

trap 'cleanup' EXIT

cleanup() {
    # We can add cleanup code here, for example lock file deletion
    # rm -f "/tmp/PROGRAM_NAME/.lock"
    echo "Cleanup function is called"
}

echo "Start the script"

When using trap, be mindful of how you structure your functions, or your trap may not work as expected. There's an excellent write-up here about the topic.

Hi, I'm Jerfareza Daviano 👋🏼

I'm a Full Stack Developer from Indonesia, currently based in Japan.

Passionate about software development, I write my thoughts and experiments in this personal blog.