Bash: The Linux Command Line

If you are like many Linux users, your first view of Linux was the graphical user interface (GUI). If you're adventurous or curious and have tried to do some advanced things with Linux, you may have encountered the Linux command line. Like all operating systems, Linux has a low-level interface that opens up all the power of the operating system. The Linux command line is so powerful that many advanced users rely on it for so many tasks that it becomes indispensable. Those who use it find it easier, faster and more powerful than the graphical tools.

Graphical tools have the advantage of being more intuitive, so new users can become effective very quickly. However, mouse clicks and hunting through menus and icons are slow for many tasks, making the command line faster for lots of operations.

Command line environments are not as intuitive as GUIs. This set of articles is designed to help users get accustomed to Bash.

Bash Basics - For New Users

If you're new to Linux and Unixes in general, then you are new to Bash. Bash is the command line shell that is the default for most Linux distros.

What is Bash

You probably already know that the command line is the most basic way of interacting with the operating system. Bash is the default command line shell on Linux and is very powerful; in fact, the operating system boots using mostly Bash commands. If it can boot an operating system, imagine what you can do with it. It can automate complex jobs through scripting; once learned, it is quicker and more effective than the GUI for many tasks; and it gives you access to many top-quality tools that have no GUI interface.

Getting into Bash

You can get to the command line in many ways. You can choose the "Terminal" or "Console" item from the GUI menu; this launches an X terminal emulator with a Bash shell running inside. You can also use one of the virtual text consoles: pressing Ctrl-Alt and a function key switches to another virtual console. Most GUIs leave Ctrl-Alt-F2 as a text console; to return to the GUI, try Ctrl-Alt-F1 through Ctrl-Alt-F8 until you find the graphical console. Finally, you can obtain a shell prompt over the network using SSH. With SSH you log into your computer using the same user name and password as your GUI and are presented with a Bash shell.

Core ideas of Bash

Using the command line can be quite simple. The term "command line" is apt: you type a command, press Enter and Bash executes it. Most commands take additional arguments. Consider the cd command, which changes the current directory, i.e. the directory that subsequent commands will work in. Think of it like opening a folder in a GUI environment. The cd command can work without any arguments: entering cd on the command line and pressing Enter changes the current directory to your user's home directory. That's useful, but you first had to be somewhere else. To get somewhere else, type a directory name after the cd command, e.g. cd /tmp. This changes the current directory to /tmp, a system-wide directory for holding all sorts of temporary files.
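A quick session shows the idea. Here pwd (print working directory) is used to confirm where we are after each cd:

```shell
cd /tmp
pwd        # shows the new current directory: /tmp
cd         # no argument: back to your home directory
pwd        # shows your home directory, e.g. /home/john
```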

You'll notice something about the cd command we just used. The command comes first on the line and the argument after it. You'll also notice that a space separates the command from the argument; this is required.

Let's take a look at another command. The cp command copies files and takes two or more arguments: one or more files to copy and a destination. The destination can be either a new file name, in which case the copy gets the new name, or a destination directory to which the file(s) are copied, in which case the file names don't change.

cp my_report.txt MyReport.txt
cp baked_beans.txt corn_bread.txt documents/recipes

What really are commands?

Commands come in two general varieties. There are built-in commands that are programmed into the Bash shell itself. These are often simple commands, or ones designed to affect how Bash works. The cd command we used above is a built-in. When Bash sees that you've typed a built-in command, it interprets the command itself.

The other type of command is an executable program. This can be a compiled executable, a binary machine-language program, or it can be a script or any other type of interpreted program. Scripts are text files that are interpreted by another program; some are Bash scripts, written to be interpreted by the Bash shell.

All non-built-in commands are files.
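You can ask Bash which variety a given command is with the type builtin (type is itself a built-in, so Bash answers directly):

```shell
type cd       # reports that cd is a shell builtin
type cp       # reports the file that would run, e.g. /usr/bin/cp
type -t cd    # prints just the kind: builtin
type -t cp    # prints: file
```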


We have seen that commands can take arguments; they can also take options. What is the difference between an option and an argument? Technically there is no difference: options are a specific kind of argument, by custom. Options are switches that change how the command functions, whereas arguments are thought of as the objects the command works on. This isn't an absolute truth, since it's up to the programmer of the command to determine how the arguments are used.

Options are arguments that begin with one or more hyphens (-). They change how the command operates. Take the -i option of the mv command, which moves files: the -i switch makes mv prompt before overwriting files.

mv -i baked_beans.txt corn_bread.txt documents/recipes

Some options themselves require an additional argument. Take the mail command, which can be used to send email. Its arguments are a list of email addresses to send the message to. Optionally, subject line text can be specified with the -s option:

mail -s "Buy Viagra Now" john@example.com

In the above case the -s option requires the subject line text to follow it. The text has to be in quotes because it includes spaces; without them, Bash would think the subject line was "Buy" and treat the other words as email addresses.

More recently programmers have started using longer option names, either because it makes them more memorable, or because they ran out of single letter options. These new-style options customarily begin with two hyphens (--).

We can see this in the ls command. This command lists files; it is one of the oldest commands and has a huge list of possible options. We will use the --all option. Normally ls does not list hidden files; the --all option causes ls to display hidden files as well.

ls --all 

More Commands

There are a lot of commands available in Linux. On my laptop, which doesn't have everything installed, there are over 4,000. You can see a long list of commands by looking in the /bin and /usr/bin directories.

ls /bin /usr/bin

These are the main directories for user programs. If you want to know more about a program, use the man command, which displays the manual page for a specific command. To see the man page for ls, type man ls.

You can find a list of the built-in commands by looking at the man page for Bash (man bash) and searching for the SHELL BUILTIN heading (hint: while viewing the page type /^SHELL BUILTIN and press Enter.)
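You can also make Bash list its built-ins directly, using the compgen builtin:

```shell
compgen -b            # prints one builtin name per line (cd, echo, export, ...)
compgen -b | wc -l    # count how many there are
```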

Bash Environment Variables

The shell is configured in a few different ways, but one of the main ways is through what are called environment variables. These variables are not only used by the shell, but can be accessed by all Linux programs.

To explain environment variables I'll start with one of the most basic, the PATH variable. It defines which directories are searched to find commands. When you type a command like vi, the Bash shell looks at the contents of PATH and searches each of those directories for a file called vi; the first one found is executed. Let's look at the contents of PATH by typing this at a shell prompt:

echo $PATH

You might see something like this:

/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/john/bin

We used the echo command to print something to the screen and if we put a dollar sign in front of an environment variable name the shell actually substitutes the contents of the variable before running the command line.

The above is my PATH variable; it shows a list of directories separated by colons.
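The command builtin can show which file a PATH search actually finds for a given name:

```shell
command -v ls    # prints the full path to the ls that would run, e.g. /usr/bin/ls
```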

Another way to look at environment variables is the set command. Run set and you'll see a huge list of all the variables that exist.

To set or change a variable you can use the following syntax:

VARIABLE=value

Commonly we might add something to PATH. For example, you may create scripts in a bin directory in your home directory and want to run them without typing a full path for each command. This is a special case because we want to add to the value, not replace it. To do this we can use the PATH variable while setting it. So to add /home/john/bin to the PATH we would run:

PATH=$PATH:/home/john/bin

In the above example, Bash replaces $PATH with the contents of the PATH variable before assigning the new value to PATH. This way the new directory is added to the end.
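Here is a sketch of the whole idea, using a scratch directory from mktemp in place of /home/john/bin so it is safe to try anywhere (the greet script name is made up for the example):

```shell
mydir=$(mktemp -d)                  # stand-in for /home/john/bin
cat > "$mydir/greet" <<'EOF'
#!/bin/sh
echo Hello from my own command
EOF
chmod +x "$mydir/greet"             # scripts must be executable to run
PATH=$PATH:$mydir                   # append the directory to the search path
greet                               # now found via the new PATH entry
```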

Environment variables have one more important property. We often want processes that we run from a shell to inherit the variables we set. Defining a variable as above only affects this shell, not child programs. To allow subsequent programs to inherit the variable we need to export it:

export PATH
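A quick demonstration with a made-up variable: a child bash only sees it once it has been exported:

```shell
GREETING=hello
bash -c 'echo "child sees: $GREETING"'    # prints "child sees:" -- not exported yet
export GREETING
bash -c 'echo "child sees: $GREETING"'    # prints "child sees: hello"
```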

Shell Prompt

One other common use for environment variables is to redefine what the shell prompt looks like. The prompt is held in the PS1 and PS2 variables. Setting PS1='$ ' sets the command prompt to a very simple dollar sign. If you want to get elaborate, there are many escape sequences that you can put into the prompt so that it displays things like the host name, current directory, date and other changing facts. Let's change the prompt to show the host name and current directory using the \h and \w escapes. In order for the backslash characters to be left alone we need to make sure that we use single quotes, not double quotes, when we set the variable.

PS1='\h@[\w]$ '
export PS1

To find other variables to use in prompts view the man page for bash (i.e. run man bash) and search for the PROMPTING section.

Clearing variables

One might think that setting a variable to an empty string clears it. In fact it does not. There is a difference between an unset variable and a variable that is set to an empty string.

To delete a variable use the unset command:

unset PS1

This will make your prompt empty. To restore it, repeat the PS1 setting from the Shell Prompt section above.
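You can see the difference between unset and empty with the ${var+x} expansion, which produces x only when the variable is set, even if it is set to an empty string:

```shell
MYVAR=''
echo "${MYVAR+x}"    # prints: x  (empty, but still set)
unset MYVAR
echo "${MYVAR+x}"    # prints nothing: the variable is gone
```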

Making permanent changes

You will find that any changes that you make in the shell only work for that shell instance and any shell or program launched from it. As soon as you exit the shell the changes are lost.

If you want to configure a variable permanently, edit the .bashrc file in your home directory and add the lines above to the end of the file. You may also find environment variables set in your .profile file, although changes to this file require logging out of the GUI and back in to take effect.

System-Wide Changes

There are other places where environment variables are set. You can set them for all users in the /etc/profile and /etc/bashrc files.

How Other Programs Use Environment Variables

Programs other than Bash can be configured using environment variables too. One that comes to mind is rsync. This program synchronises files between two directories, even if they are on different servers. It uses a few different variables, but the one I use most is RSYNC_RSH, which tells rsync to use ssh to connect to remote servers.

The easiest way to see if a program uses environment variables is to look at its man page. Search the page for the word ENV.

Other ways to set Variables

When executing another program you can specify environment variables before the name of the command. So to make rsync use ssh we need to set the RSYNC_RSH variable. We could set it as we did above and export it, or we can specify it on the command line before the command:

RSYNC_RSH=ssh rsync /source/dir /dest/dir

The variable is passed to the command but is never set in the current shell. We can demonstrate this by running bash and giving an echo command as an argument:

H=Hello W=World bash -c 'echo sub shell: $H $W'
echo "this shell: $H $W"

The first command prints "sub shell: Hello World". The second prints only "this shell:" because the variables were never set in the current shell.

NOTE: The use of single quotes in the first command is important. Dollar signs within single quotes are not interpreted as variable names, so the bash sub shell actually sees them. If we used double quotes there, the current shell would replace the variables before the sub shell was executed.

Bash Pattern Matching

Pattern matching in Bash is also called globbing. It sounds all bloaty and gooey, but it's really quite plain, not sticky at all, and very useful. Glob is actually the name of the glibc function that does the real work.

File pattern matching is usually about selecting groups of files, but it can be useful in avoiding typing long file names. Rather than type out a full file name, just type a pattern that contains a unique part and you've matched the file.

How it Works

When you issue a command line that contains a pattern, the shell first expands the pattern into one or more file names and then runs the command. If the pattern matches, it is replaced by the matching file names, and the command never sees the pattern, only the matches. If the pattern fails to match then, by default, the command is given the literal pattern.
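Both cases are easy to watch with echo in a scratch directory:

```shell
cd "$(mktemp -d)"   # a fresh, empty directory
touch a.txt b.txt
echo *.txt          # matches: a.txt b.txt
echo *.xyz          # no match: echo receives the literal pattern *.xyz
```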

By default the shell doesn't match hidden file names, i.e. file and directory names that begin with a dot (.) won't get matched. This behaviour can be changed (see Configuring Pattern Matching below.)

Case Sensitive

By default, file patterns are also case sensitive, meaning that the files "upper" and "UPPER" are different.
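For example:

```shell
cd "$(mktemp -d)"
touch upper UPPER
echo u*    # prints: upper   (the capitalised file is not matched)
echo U*    # prints: UPPER
```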


Asterisk

The most used pattern is the asterisk (*). It matches zero or more of any character. It can be used at the beginning, middle or end of a pattern.

Some examples are:

Pattern Matches
*.pdf Anything.pdf or report.pdf (but not the hidden file .pdf, since hidden files aren't matched by default)
img*jpg img0001.jpg or imgjpg
index.* index.html or index.php or index.html.bak
* anything or really or Anything
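Trying a couple of these in a scratch directory:

```shell
cd "$(mktemp -d)"
touch img0001.jpg imgjpg index.html index.php notes.txt
echo img*jpg    # img0001.jpg imgjpg
echo index.*    # index.html index.php
```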

Question Mark

A lesser used but still useful pattern is the question mark (?). This indicates any one character.

Pattern Matches
?ndex.html Index.html or index.html
file.?? file.01 or file.js

More complicated

Rather than matching any character, you can specify a list of characters, a range, or a class of characters using square brackets ([ and ]). Although the bracket expression takes up more than one character in the pattern, it matches exactly one character.

Pattern Matches
messages.[123] messages.1, messages.2 or messages.3
page[a-z].txt pagea.txt, pageb.txt ... pagez.txt
page[-a-z].txt As above, but also matches page-.txt
page[^m-z].txt or page[!m-z].txt Doesn't match files pagem.txt through pagez.txt but matches all others (i.e. page?.txt)

Classes can also be used; these are short forms for full ranges of characters. The following classes are available: alnum, alpha, ascii, blank, cntrl, digit, graph, lower, print, punct, space, upper, word, xdigit. A class is used inside a bracket expression, so the brackets end up doubled:

ls img[[:digit:]].jpg

That matches img0.jpg through img9.jpg. It seems useless until you consider that this works independently of language. The real use of these classes is that if the locale (i.e. the language) changes, the characters that match also change. So this will match English or Arabic numerals.
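A quick check with echo:

```shell
cd "$(mktemp -d)"
touch img0.jpg img5.jpg imgX.jpg
echo img[[:digit:]].jpg    # img0.jpg img5.jpg -- imgX.jpg is left out
```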

Extended Patterns

Extended patterns can be handy in niche situations, but you'll probably find they are disabled by default in your bash. To enable them, run:

shopt -s extglob

And like all the settings I've mentioned, if you want it turned on every time you start a shell, add it to your .bashrc file or the system-wide /etc/bashrc file.

With extended patterns you can create more complex patterns that repeat or combine sub-patterns, matching more than a single character without matching everything.

?(pattern-list) Matches zero or one occurrence of the given patterns
*(pattern-list) Matches zero or more occurrences of the given patterns
+(pattern-list) Matches one or more occurrences of the given patterns
@(pattern-list) Matches one of the given patterns
!(pattern-list) Matches anything except one of the given patterns

The pattern-lists can be any of the regular patterns, so +([[:digit:]]) matches one or more digits.
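A short demonstration (extglob must be enabled first):

```shell
shopt -s extglob
cd "$(mktemp -d)"
touch file file1 file22 fileX
echo file+([[:digit:]])    # file1 file22 -- one or more digits
echo file?([[:digit:]])    # file file1   -- zero or one digit
```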

Configuring Pattern Matching

There are shell options that allow control over how the shell matches patterns and how it reacts to failed patterns.

dotglob If set, bash includes filenames beginning with a '.' in the results of pathname expansion.
extglob If set, the extended pattern matching features described above are enabled.
failglob If set, patterns which fail to match filenames during pathname expansion result in an expansion error.
globstar If set, the pattern ** used in a filename expansion context will match all files and zero or more directories and subdirectories. If the pattern is followed by a /, only directories and subdirectories match.

To see if these options are set run:

shopt dotglob

To set it:

shopt -s dotglob

To unset it:

shopt -u dotglob

As you have come to expect, to make these changes permanent you need to add the shopt commands to your .bashrc file or the system-wide /etc/bashrc file.

Seeing File Expansion Work

The easy way to see file expansion working is to use the echo command. Just give it a pattern and it prints the results of the expansion.

echo *.jpg

Another way is to position your cursor at the end of the pattern and press Tab twice; the shell lists the matching files below. You can also expand the pattern in place by positioning the cursor at the end of the pattern and pressing Ctrl-X then *; the matching files then appear on the command line.

Bash Aliases and Functions

Aliases and functions are interesting ways to customize Bash. You can create handy short forms and simplify complex commands.


Aliases

Within Bash one can create different names for commands. Aside from simply creating a different name, the command can include options and other arguments. This is called an alias in Bash.

Creating aliases is easy. Let's create a simple alias for the ls command that does a long listing, i.e. a listing with permissions, file size and date. I type ls -l very often and it would be handy not to have to type the whole thing out every time. I know, it's only 4 characters (there's a space), but those keys add up over time and carpal tunnel syndrome is a serious career risk for people like me.

So let's shorten it to a single simple l using an alias with this command:

alias l='ls -l'

Once set we can now use the l command and it works just like typing the whole thing out.

We can see what aliases are set by running the alias command all by itself:

alias

After we have set the above alias the list should include:

alias l='ls -l'

We can use aliases like real commands, providing options and arguments. Any additional options or arguments are appended to the end of the aliased command. Let's say we want to add the -h option (make file sizes human readable) when we run l, and we also want to list a specific file:

l -h .bashrc

From the results it's as if we typed this command: ls -l -h .bashrc.

We can delete an alias using the unalias command. Its -a option removes all aliases:

unalias l

Alias Scope

Aliases have limited scope: they are only available in the shell in which they were defined. Subshells do not see them and cannot inherit them.

To make an alias available to all shells define the alias in your .bashrc file or in the system-wide /etc/bashrc file.

Alias Limitations

Aliases are limited in that any arguments are added to the end of the aliased command. Even if we make an alias of a complex command, the arguments always go at the end. Let's say we want to add paging to our l command by piping the result through less; we might try this:

alias l='ls -l | less'

When we use the l command without arguments it works as expected, but as soon as we add a file name, say the same .bashrc, we see the contents of the .bashrc file paged to the screen instead of a listing. That's because the actual command executed is ls -l | less .bashrc, and less, given a file name, ignores its input and pages the file.

Because of this limitation we can't do many other neat things either. This is where functions come in.


Functions

Functions are more powerful than aliases. Not only can we re-arrange arguments, we can actually create simple programs using them.

The topic of functions quickly becomes a programming lesson, so I'll only go through a few simple examples that solve the limitations of aliases.

Let's say we want to achieve what the failed example above attempted: paging the output of ls -l. We would write a function like this:

unalias l
l() {
    ls -l "$@" | less
}

I've written this function on separate lines, but it can all be put on a single line if that's more convenient. Proper programming style is to keep it on separate lines.

When you type this in you'll notice that you get a different prompt after the first line. This is the PS2 prompt that we talked about in the Bash Environment Variables page. This prompt means that the command is not complete. Once you type the closing brace (}) you will be back to the normal prompt.

Notice that I've first deleted the l alias. This is because aliases are expanded before functions are looked up, so the alias would hide the function.

With the l function created you can now use it like a command. Feel free to create more complex multi-line functions that do very complex things if you like.
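For instance, an alias could never swap its arguments around, but a function can. A trivial sketch (the swap name is made up for the example):

```shell
# swap prints its two arguments in reverse order
swap() {
    echo "$2 $1"
}
swap hello world    # prints: world hello
```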

To see the functions you've created, run the set command. The functions will be listed after the environment variables.

Functions can be deleted using the unset command:

unset l

Function Scope

Functions are also local to the instance of bash in which they are created. To make a function available in every shell, define it in your .bashrc file or the system-wide /etc/bashrc file.

Bash Command Line Quoting

If you've been following these articles or using the shell, you know that several characters have special meaning to the shell. For example, you may have seen that the dollar sign $ signifies the use of a variable. You may also have seen the asterisk * used to specify a group of files, so-called file globbing. Even the space is a special character: it separates a command and its arguments. These are only a few of the special characters; there are many more.

Special characters are interpreted by the shell before the arguments are passed to the command. This relieves each of the commands from the burden and allows the shell to provide consistency in the use of special characters.

There are times when special characters appear in file names, or we want commands to see the special characters rather than have the shell interpret or remove them. Fortunately the shell provides a way to do this, called quoting, and it uses three more special characters.

Double quotes

Double quotes disable only some of the special characters: they remove the special meaning of everything except the $, `, \ and ! characters. One basic way to look at this is that double quotes tell the shell to ignore file globbing and spaces. e.g.:

rm "file name"

Single quotes

Single quotes are the most powerful of quotes, removing the special meaning of all characters except the single quote itself. e.g.:

vi 'us$ account.txt'

Backslash

A backslash quotes the single character that follows it. We normally think of quotes as surrounding a string, but since the backslash quotes only one character, it is only needed before that character. The backslash can quote any character, as long as the backslash itself isn't quoted by another backslash or by single quotes. The only exception is the newline character: a backslash before a newline causes the newline to be ignored and not passed to the command. e.g.

cp us\$account.txt "us dollar account.txt"

Another method of quoting actually adds special characters. The $'' quote allows specification of many non-printable characters, like newline, bell, tab and others.

echo $'bing \a'

For additional escape sequences to use in $'' quotes, view the man page for bash (i.e. run man bash) and search for the QUOTING section.

Finally there is a $"" quoting method. This behaves like ordinary double quotes, except that the string is translated according to the current locale, to suit other languages.


Hyphens at the Start of File Names

Have you ever seen a file name that begins with a hyphen? These are often created by mistake, but once you have one it can be hard to get rid of. When you try to use rm (e.g. rm -file) the command thinks that -file is a set of options and gives an error message. This happens with all sorts of other commands too.

The fix is easy and fairly consistent: use -- to separate options from arguments. Supply all your options, then --, and then the file name, e.g.

rm -r -- -file

Unprintable Characters

These can be another problem. Somehow, usually by pressing a function key at the wrong time, a file is created whose name begins with or contains a non-printable character. If the name contains only unprintable characters, or doesn't include anything unique that a file glob can match, then you have to resort to this trick.

First let's create a file that has the name of an escape character:

date > $'\033'

I used one of the quotes I mentioned above to create a file named with a char 27, the ASCII code for Esc. In octal 27 is 33.

Now if you use ls you'll see a file with an odd name. Here is how to identify it and delete it:

List the directory using the -b option, which shows non-printable characters as backslash codes:

ls -lb

You'll see a file apparently called \033; this is your "Esc" file. To delete it we use the same quoting:

rm $'\033'

If the file name merely begins with the escape character, you can use a shell glob to match the rest:

rm $'\033'*

Bash Command Line Editing and History

In Bash the text on the command line can be edited in place using the cursor keys, Backspace and Delete. Bash also retains a history of entered commands so you can easily reuse or edit previous commands and use all or part of previous command lines in new commands. The simplest way of accessing history is with the up and down arrow keys, then using left, right, Backspace and Delete to edit the line.

This normally works well if you are using a decent terminal emulation. That is true of the Linux console and X terminal emulators, but you may have problems with a Windows terminal emulation program or a clunky old serial terminal. I'm not going to go into these at this time: not many clunky old terminals were spared from landfill, and if you're using Windows you can just download PuTTY and get a decent emulator.

Editing Keys

The keys for editing are simple. If everything is set up correctly you can use your cursor keys: Up goes to the previously issued command, Down to the next. Left and Right move between characters in the displayed command so that you can selectively edit it. Delete and Backspace should work as expected, as should Home and End.

In fact it's so simple that mentioning this could be insulting, so I'm sorry.

But if your cursor keys aren't working, you can use Ctrl-P for the previously issued command and Ctrl-N for the next. Ctrl-F and Ctrl-B stand in for cursor right and left. Ctrl-H should work as backspace.

If you don't like these you can map your own keys using the bind built-in command.

Displaying History

You can display the entire history of command lines you've issued using the history built-in command:

history

This will show a numbered list of all the previous commands.

You can also selectively list history using the fc command:

fc -l -20

The above shows the last 20 commands.

Searching history

There are several ways to search through history to find certain commands. The easiest is the reverse-search-history function. Press Ctrl-R and you will be prompted for a search string; enter part of a command or an argument, and the most recent matching entry appears as you type. If the most recent match isn't the one you want, press Ctrl-R again to select the next most recent. When you see the command you want, edit it with the cursor keys if you like, or press Enter to execute it. But what if you go too far back and want to move forward through the matching entries? Well, that's complicated. There is a keystroke for it, Ctrl-S, but if you try it, it won't work. We can either find out why it doesn't work, or bind the function to another keystroke. That's right, it's all configurable.

Ctrl-S Problem

Ctrl-S is a hold-over from the days of low-speed serial communications. It was a special code telling the remote end of the serial line to stop sending characters, a way to prevent lost characters. It is implemented in the tty serial driver, so it's essentially the kernel that "eats" the code. Today we have hardware consoles and network connections, so we can do away with this character. But if you ever access your system over an old serial cable or telephone modem (and I don't mean a DSL or cable modem), you might want to reconsider this change.

To prevent Ctrl-S from being eaten by the tty driver run this command:

stty stop ''

If you want this to be a permanent change you will have to add this to your .bashrc file.

After that change the tty driver will pass Ctrl-S through to the process and the forward-search-history function will work.

Another way to deal with this is to bind a new key to the function. We use the bind command along with the function's official name, forward-search-history. In this example we'll bind Ctrl-B (normally cursor left) to the function, since the cursor keys can also move the cursor.

bind '"\C-b": forward-search-history'

If we want to make this permanent we need to add it to our .bashrc file.

Now you can use Ctrl-B to search forward through history.

History Expansion

This next section is pretty intense, so if you think you have all you need to use Bash history, skip it. If you want to become a command line king, keep reading: there are some real time savers here if you're willing to commit some of these keystrokes to memory.

Another way to use history is through expansion. This means using special codes to insert previous commands, or parts of them, into the current line, and optionally changing them during substitution.

The simplest form of history expansion is an Event Designator. It starts with an exclamation mark (!) and has many forms:

Event Designator Example Description
!! !! Substitute the last command line in full
!n !123 Substitute command line 123 from history. The number comes from the listing given by either the history or fc command.
!-n !-3 Substitute the third last command line from history.
!string !vi Substitute the last command that begins with vi.
!?string? !?recipe? Substitute the last command line which contains the word recipe.
!# !# Substitute the current command line so far.
^string1^string2^ ^needle^haystack^ Substitute the previous command, replacing needle with haystack

We can use one of these substitutions anywhere in a command line. Typically they are used to re-issue commands, often on an otherwise blank line:

!!
This would simply re-run the previous command. That doesn't sound easier than pressing Up and Enter, but consider that you want to re-issue a line from a long time ago, and you know it was the last time that command was used. It can be really handy to re-issue the last instance of a specific command:

!man
That would run the last man command. Be careful though: the match is against the beginning of the line, so it would also match a command called "mangle". It's usually pretty safe, since you would have had to issue a "mangle" command after the last man command.

If we want to echo the previous line it would be simple:

echo !!

The result of the above would be that the previous command would be displayed on the screen.

This also shows one way to test history expansion, by echoing the history you are trying to substitute. Another way is to press Esc ^. This key sequence will expand the history on the command line without executing it.

A more useful example of expansion is:

logger "Finished: !!"
sudo tail /var/log/messages

The above would log the previous line to the system logs. The sudo tail command runs tail as root and lists the last few lines of the /var/log/messages file. You should see your last command listed in the log file.

Advanced Substitution

We can also grab just a portion of a previous command line. This syntax is called Word Designators and allows you to extract words from a previous command and substitute them into the current one. This is really useful if you are working with large arguments, whether that is a command name, a file name, or a list of complex arguments.

Word designators always follow an Event Designator (see above) but are separated from them with a colon (:). They apply to the command line selected by the Event Designator. So if we wanted to pull words from the last line we would start with !!: and add one of the following word designators:

Word Designator Example Description
0 !!:0 Substitute word 0 from the last command line (i.e. the command from the previous line).
n !!:2 Substitute word 2 from the last command line.
$ !!:$ Substitute the last argument from the last command line.
% !?hel?:% Substitute the most recent word found in history that contains the text hel. This really works by substituting the last word matched by a !?string? search. The search doesn't need to be on the same line; once you've searched with !?string? you can use !% over and over and it will give the same result.
n-m !!:2-4 Substitute words n through m. You can abbreviate 0-n to -n.
* !!:* Substitute arguments 1 and on.
n* Substitute arguments n and on.
n- Substitute arguments n to the second last.

There are some interesting uses of this. Let's say you used a very long file name that contains a unique word, recipe. To verify, you could search for this name without executing the command by putting the substitution after an echo command:

# Sometime before you ran
#     vi /home/john/documents/recipes/southwest/quacamole_dip.txt
echo !?recipe?

Once found, !% will work until the next search. That means you can easily use the argument in another command:

lp !%

That command would expand to lp /home/john/documents/recipes/southwest/quacamole_dip.txt and would print the file, only you typed a lot less.

You can also re-issue the same command arguments over and over. Let's say you just looked into a file and decide it's in the wrong directory:

vi /home/john/documents/recipes/southwest/poutine.txt
mv !$ /home/john/documents/recipes/canadian

As you can see, !$ is a short form for !!:$, and !% is a short form of the % designator applied to the last !?string? search.


We can also apply a modifier to the selected history. This is handy for making slight alterations to the substituted text.

Modifier Example Description
h !:1:h Substitute only the head of the file path, i.e. remove the trailing element of the path. E.g. /home/john/.bashrc becomes /home/john.
t !:1:t Substitute only the tail of the file path, i.e. remove all but the last element of the path. E.g. /home/john/.bashrc becomes .bashrc.
r !:1:r Remove the trailing file suffix, i.e. remove everything after the last dot in the file name. E.g. /home/john/resume.txt becomes /home/john/resume.
e !:1:e Leave only the trailing file suffix. E.g. /home/john/resume.txt becomes .txt.
p !:p Substitute but don't execute, only print.
q !*:q Quote the substituted words with single quotes to avoid expansion. To see this in effect try:
echo $PATH $PATH
echo !*:q
x !*:x Quote the substituted words individually at word breaks (spaces, newlines). To see this in effect try:
echo $PATH $PATH
echo !*:x
s/old/new/ !?poutine?:s/southwest/canadian/ Substitute new for the first occurrence of old in the event line. Any delimiter can be used in place of /. The final delimiter is optional if it is the last character of the event line. The delimiter may be quoted in old and new with a single backslash. If & appears in new, it is replaced by old; a single backslash will quote the &. If old is null, it is set to the last old substituted, or, if no previous history substitutions took place, the last string in a !?string[?] search.
& !-5:& Repeat the last substitution (presumably on another line).
g !?images?:gs/.jpeg/.jpg/ Cause changes to be applied over the entire event line. This is used in conjunction with :s (e.g. :gs/old/new/) or :&. If used with :s, any delimiter can be used in place of /, and the final delimiter is optional if it is the last character of the event line. An a may be used as a synonym for g.
G !?images?:Gs/mexico/Mexico/ Apply the following s modifier once to each word in the event line.

Bash Parameter Expansion

We've seen environment variables; these are really just an arbitrary group of variables that have special meaning to shells or commands. We can create variables for our own personal use too. Typically variables are used in shell scripts or in complex shell commands, and they can be substituted into a command line by prefixing the variable name with a dollar sign ($). This is also called Parameter Expansion.

There are interesting ways to assign variables as well as substituting them. You've seen the typical assignment and substitution:

filename="firecracker coleslaw recipe.txt"
echo "$filename"

Beyond simple substitution, we can use syntax that modifies the variable's value during substitution, and optionally modifies the variable at the same time. Additionally, it's possible to cause a shell command or script to fail if a variable is not set.

For my first example I'll be using the for command: we will rename files en masse by changing the file extension from .TXT to .txt.

First I'll show you a simple loop that uses a file glob to select files and uses the for command to assign each file name to a variable, one at a time, looping once per file name. Then we'll print the name just to show it works:

for filename in *.TXT
do
    echo "$filename"
done

If all goes well, and you have .TXT files in the current directory, you'll see a list of file names. Now we want to use the mv command to rename the files, but to do so we need to give mv the destination name: a file name with a .txt extension. This is Linux, so there are several ways to do this. We could use sed to do a text replace, but it's far easier to use a special Bash substitution syntax that does the replace for us: ${varname/%.TXT/.txt}. This does a pattern match on the contents of varname and, because of the /%, it matches trailing text, so if the file name ends with .TXT that ending is replaced by .txt.
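A quick way to see this substitution on its own, before wiring it into the loop:

```shell
# The /% form anchors the pattern to the end of the value.
filename="notes.TXT"
echo "${filename/%.TXT/.txt}"    # prints notes.txt
```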

Here is the combined result:

for filename in *.TXT
do
    mv "$filename" "${filename/%.TXT/.txt}"
done

The above substitution changes only the substituted text, not the variable's contents, so if we were to look at $filename after the mv command it would still show the .TXT extension.

There are many types of pattern substitution, some allowing replacement text and others simply removing the matched text. Check the man page for a full listing of substitution syntaxes.

The pattern is the same as shell glob pattern matching and you can use variables as part of the pattern or the replacement text. Here is a more complex example:

#!/bin/bash
# Take the old and new extensions from the command line arguments
old=$1
new=$2

# Now do the replace
for filename in *"$old"
do
    mv "$filename" "${filename/%$old/$new}"
done

For the above commands to work they should be copied into a file (let's say rename.ext), then the file should be run using bash rename.ext .from .to. The variables $1 and $2 are set to command line arguments 1 and 2, so to change .TXT to .txt use bash rename.ext .TXT .txt. If you omit the first line (#!/bin/bash) you can make a shell function from this too.
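As a sketch of that last point, here is the same loop wrapped in a shell function (the name rename_ext is my own invention, not part of the script above):

```shell
# Hypothetical function form of the rename script.
rename_ext() {
    local old=$1 new=$2 filename
    for filename in *"$old"
    do
        mv "$filename" "${filename/%$old/$new}"
    done
}

# Usage: rename_ext .TXT .txt
```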

There are many special substitution syntaxes that change how the variable is substituted, and others that change the variable itself. Consider these two similar substitutions: they substitute the same value, but they differ in that one sets var and the other doesn't. These particular substitutions take place only when the variable is unset or empty.

# Make sure var is unset; this expands to a value, but doesn't set var.
unset var
echo ${var:-not set}
echo "the value of var=$var"

# Try again using a different expansion; this sets the value of var too.
echo ${var:=not set}
echo "the value of var=$var"

The above expansions are very useful for setting variables to default values inside scripts when they are not already set: var="${var:-default}".
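The earlier point about making a script fail when a variable is not set uses a related form, ${var:?message}. A minimal sketch (INPUT is a hypothetical variable name of my choosing):

```shell
# ${var:?message} aborts a non-interactive shell with an error on STDERR
# when var is unset or empty. Here INPUT is set, so the line succeeds.
INPUT="data.txt"
echo "processing ${INPUT:?no input file given}"    # prints processing data.txt
```

If INPUT were unset, the expansion would instead print the message to STDERR and exit the script with a non-zero status.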

Or perhaps you want the length of the variable's contents:

var="hello world"
echo ${#var}

Or perhaps a substring of a variable:

# Print the 5 characters starting at character 6.
var="hello world"
echo ${var:6:5}

# Print the 5 characters starting at character 0.
echo ${var:0:5}
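One wrinkle worth knowing (my addition, not covered above): a negative offset counts from the end of the value, but it needs a space or parentheses so Bash doesn't mistake it for the ${var:-default} form:

```shell
# The space before the minus keeps Bash from reading this
# as the ${var:-default} expansion.
var="hello world"
echo ${var: -5}      # prints world
echo ${var:(-5):3}   # prints wor
```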

There are many more variable substitution syntaxes to use. Some care whether the variable is set, others just whether it is empty (or unset).

Of the pattern matching syntaxes, some work on the beginning of a string, some on the end. Some match the most text, others the least. Some substitute all occurrences of a pattern while others only substitute the first occurrence. I won't list them all here; check the man page (man bash and use the / command to search for Parameter Expansion).
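As a quick sketch of those variations: the # and % families remove matched text from the beginning and end respectively, a single symbol matches the least text, a doubled one the most, and // replaces every occurrence:

```shell
path="/home/john/documents/resume.txt"
echo ${path#*/}     # home/john/documents/resume.txt  (shortest match from the front)
echo ${path##*/}    # resume.txt                      (longest match from the front)
echo ${path%.*}     # /home/john/documents/resume     (shortest match from the end)
echo ${path//o/0}   # /h0me/j0hn/d0cuments/resume.txt (every occurrence replaced)
```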

Bash Redirection and Paging

One of the most powerful features of Bash (and Linux/Unix shells in general) is the ability to pass data to and from files and between commands. When passing data from files to commands or vice-versa it's called redirection. When passing data between commands it's called pipelining.

These features by themselves don't seem so interesting but because all the commands are designed to work with input and output, commands tend to be simple and single purpose. This may sound backwards, but consider that these simple commands can be strung together to create a complex result. This is the power of Bash, all commands are building blocks.

Standard Input, Output and Error

To fully understand redirection and pipelining you have to understand the standard input and output channels that all programs have by default. Inside the program each channel looks and feels like a file. When a program starts it has an input channel, and without redirection or pipelining this channel is your keyboard. The output and error channels are both connected to your terminal emulator (i.e. your console or your xterm window).

These are often called STDIN, STDOUT and STDERR and in the shell are assigned the numeric values 0, 1 and 2 respectively. Remember these short forms, because I'll use them and remember these numbers because I will reference them later.

Well designed programs can take their input data from STDIN and send the modified data to STDOUT and they report any warnings or errors to STDERR.

The reason for a STDERR is that it is handy to have errors not appear in the output data. If they did the data could become corrupt. Consider a program that converts JPEG images to BMP format. If error text appeared in the stream the BMP file would be corrupt.

Redirection and pipelining change where the STDIN, STDOUT and STDERR go. With redirection rather than STDIN or STDOUT (or both) coming from your keyboard or going to your terminal (respectively) they go to a file. With pipelining they are directed from or to another program.


Redirection uses the greater than (>) and less than (<) symbols to change where data goes to or comes from. Let's look at an example.

Let's use image conversion tools in these examples. Here is a simple one that converts a JPEG image file to a generic PNM image file:

jpegtopnm image.jpg > image.pnm

The jpegtopnm command takes a file and converts it to PNM format and sends it to STDOUT (i.e. sends it to the screen). If we didn't redirect the output we would see binary data on the screen and it would likely put the terminal emulator in a weird state. In the above command we see the > symbol and it "points" to the image.pnm file. This command creates or overwrites the image.pnm file.

If an error were to occur it would display on the screen. Note that the simple > only redirects STDOUT to a file, not STDERR.

Rather than replace a file, we can append to it using the (>>) symbol.

ls -l >> list.txt

The above creates the list.txt file if it doesn't exist. If the file does exist the output of ls -l is appended to the file.
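A tiny demonstration of the difference between > and >> (the file name list.txt is just a throwaway):

```shell
echo "first"  > list.txt    # > creates or truncates the file
echo "second" >> list.txt   # >> appends; the file now has two lines
echo "third"  > list.txt    # truncated again: only "third" remains
cat list.txt                # prints third
```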

Most commands take file names as input so there isn't as much reason to use the < redirection. But we will show how it works using the same command. This time we will redirect input and output:

jpegtopnm < image.jpg > image.pnm

The order isn't important; we could just as easily have written jpegtopnm > image.pnm < image.jpg. Both commands take input from image.jpg and put the result in image.pnm.

We can also redirect error messages. To do this we use a modified > symbol: we prefix it with the number of the STDERR channel which, if you recall from above, is 2. So the symbol we use is 2>.

jpegtopnm image.jpg > image.pnm 2> errors.txt

This way you can store the errors in a file so that you can reference it later. Perhaps you're Googling to find out why the error is happening.
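A self-contained way to watch the two channels separate, using ls with a deliberately missing file name (nosuchfile is made up):

```shell
# STDOUT goes to one file, STDERR to another. The || true absorbs
# ls's non-zero exit status caused by the missing file.
ls /etc/hosts nosuchfile > out.txt 2> err.txt || true
cat out.txt    # lists /etc/hosts
cat err.txt    # contains the complaint about nosuchfile
```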

Perhaps you want both STDERR and STDOUT to go to the same place. We can do that too. Sometimes we don't want to see any output at all; this is often done in scripts to hide odd looking errors from users.

wget -O - http://example.com/ > /dev/null 2>&1

Above we are using the wget command to poll a server, but we don't want to create a file or show any output. The -O - option sends the downloaded page to STDOUT, and the > directs STDOUT to /dev/null; anything redirected to /dev/null is discarded. The symbol 2> redirects STDERR and the &1 indicates to redirect to STDOUT. The &1 needs to follow the > without a space.

The order of these is important. Redirections are processed sequentially from left to right. So first STDOUT is redirected to /dev/null, then STDERR is directed to the same place as STDOUT, which has already been redirected to /dev/null. So both go to /dev/null.

If we had specified this in reverse order (i.e. 2>&1 > /dev/null) then STDERR would be redirected to the same as STDOUT (the screen) and then STDOUT is redirected to /dev/null. The result is that STDERR goes to the screen.
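You can verify the ordering rule with echo (the braces simply group two commands so one writes to each channel):

```shell
# STDOUT is silenced first; STDERR then joins the silenced STDOUT:
# nothing at all is printed.
{ echo out; echo err >&2; } > /dev/null 2>&1

# Reversed order: STDERR is pointed at the terminal (where STDOUT was),
# then STDOUT alone is silenced, so "err" is printed.
{ echo out; echo err >&2; } 2>&1 > /dev/null
```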

Another way to use redirection is to write errors to STDERR. This is common if you write shell scripts. To send output to the screen we commonly use echo:

echo "Danger Will Robinson, Danger!" >&2

That echo sends the error message to STDERR. This way the users of the script can expect its output to be consistent with standard Linux commands.

Special Files

When you use certain file names, Bash works differently than expected. There are several that are special, but the most interesting ones are /dev/tcp/host/port and /dev/udp/host/port. These files don't actually exist; Bash creates sockets to the host using the specified port.

To really make network sockets work they need to be bidirectional: you send a request and receive a response. That's not simple to do with a single command; it's not how most Linux commands work. To make this work bidirectionally we use an interesting trick: the exec command.

exec 3<> /dev/tcp/host/port
echo -e "GET / HTTP/1.1\n\n" >&3
cat <&3


Pipelining

Passing the output of one command into the input of another is called pipelining. This is a way to string commands together to achieve a more complex result. It allows commands to be good at performing simple functions, unencumbered by having to offer lots of unrelated features. It also allows the user the flexibility of choosing between different programs.

Consider paging, that is, displaying output one page at a time. This is one example of the user choice that pipelining allows: users can choose between more and less, two paging commands.

ls -l | less

In the above example the output from ls -l is "connected" to the input of less which displays a page at a time.

We can connect more than two commands in this way. Let's use grep, which filters lines based on regular expressions (i.e. patterns). If a line matches the pattern it is output; if not, it is filtered out. So let's list processes using ps, looking for a specific user's processes (john's in this case), and display the output one page at a time using less:

ps -efw | grep john | less

In the above example the output of ps is passed to grep and its output is passed to less. Internally this is done by connecting the STDOUT file in one program to the STDIN on the next.
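Here is a small pipeline you can try that doesn't need a second user on the system (the names are made up):

```shell
# printf emits one name per line; grep keeps the lines containing "john";
# wc -l counts the survivors.
printf '%s\n' alice john bob johnson | grep john | wc -l    # prints 2
```

Note that grep matched both john and johnson, just as !man matched mangle earlier.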

Standard error (STDERR) can be piped to another command using the |& symbol. This can be handy when the error output is long and needs to be paged or grepped. Compiling a C program might qualify as lots of error output:

gcc myprogram.c |& less
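A quick way to see |& pass STDERR along, using echo in place of a real compiler (the message text is made up):

```shell
# |& is shorthand for 2>&1 |, so the error line reaches wc
# instead of the terminal.
{ echo "warning: something odd" >&2; } |& wc -l    # prints 1
```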

Combining Redirection and Pipelining

Redirection and pipelining can be combined. Redirection is applied to every command individually, so each command can have its output or input redirected. In some ways this can defeat pipelining. Here is an example:

ps -efw | grep john > johns_procs.txt

That was a simple example, where the output of the last command in the pipeline is saved to a file. Now let's look at a stupid example: we will redirect the input of the last command:

ls -l | less < /etc/hosts

What you will see is the contents of /etc/hosts. The output of ls -l is lost because the input of less comes from the redirection.

Now back to a normal example. This is equivalent to the |& symbol: the 2>&1 combines the STDOUT and STDERR output so they are paged together.

gcc myprogram.c 2>&1 | less