Note: I'm looking for a new (and potentially paid) lead author to help expand this into a more comprehensive guide. While it's very popular, it could be both deeper and more helpful. If you like to write, are close to being an expert on this material, and would consider helping, please drop me a note at josh (0x40) holloway.com. –jlevy, Holloway
Fluency on the command line is a skill often neglected or considered arcane, but it improves your flexibility and productivity as an engineer in both obvious and subtle ways. This is a selection of notes and tips on using the command-line that we've found useful when working on Linux. Some tips are elementary, and some are fairly specific, sophisticated, or obscure. This page is not long, but if you can use and recall all the items here, you know a lot.
This work is the result of many authors and translators. Some of this originally appeared on Quora, but it has since moved to GitHub, where people more talented than the original author have made numerous improvements. If you have a question related to the command line, please submit it. Please contribute if you see an error or something that could be better!
Scope:
This guide is for both beginners and experienced users. The goals are breadth (everything important), specificity (concrete examples of the most common cases), and brevity (avoiding anything inessential or easily looked up elsewhere). Every tip is essential in some situation or significantly saves time over alternatives.
It is written for Linux, with the exception of the "macOS only" and "Windows only" sections. Many other items apply to, or can be installed on, other Unices or macOS.
The focus is on interactive Bash, though many tips apply to other shells and to general Bash scripting.
Notes:
To keep this to one page, content is implicitly included by reference; once you know the idea or command, you can look up more detail elsewhere. Instructions that install software assume an appropriate package manager, such as apt, yum, dnf, pacman, pip, or brew.
Learn basic Bash. Actually, type man bash and at least skim the whole thing; it's pretty easy to follow and not that long. Alternate shells can be nice, but Bash is powerful and always available (learning only zsh, fish, etc., while tempting on your own laptop, restricts you in many situations, such as using existing servers).
Learn at least one text-based editor well. The nano editor is one of the simplest for basic editing (opening, editing, saving, searching). However, for the power user in a text terminal, there is no substitute for Vim (vi), the hard-to-learn but venerable, fast, and full-featured editor. Many people also use the classic Emacs, particularly for larger editing tasks. (Of course, any modern software developer working on an extensive project is unlikely to use only a pure text-based editor and should also be familiar with modern graphical IDEs and tools.)
Finding documentation:
Know man to read documentation (and that man man lists the manual's section numbers; e.g. section 1 covers commands, 5 file formats, and 8 system administration). Find man pages by keyword with apropos.
Know that some commands are not executables but shell builtins, and that you can get help on them with help and help -d. You can find out whether a command is an executable, shell builtin, or alias by using type command.
curl cheat.sh/command will give a brief "cheat sheet" with common examples of how to use a shell command.
Learn about redirection of output and input using > and < and pipes using |. Know > overwrites the output file and >> appends. Learn about stdout and stderr.
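A quick sketch of these operators in action (the file paths are arbitrary scratch files):

```shell
# > overwrites the target file; >> appends to it:
echo "first" > /tmp/redir-demo.txt
echo "second" >> /tmp/redir-demo.txt
cat /tmp/redir-demo.txt                # prints: first, then second

# | feeds stdout of one command into stdin of the next:
cat /tmp/redir-demo.txt | grep -c ""   # counts lines: 2

# 2> redirects stderr, which is a separate stream from stdout:
ls /no/such/path 2> /tmp/redir-err.txt || true
```

Note that > truncates the target before the command runs, which is why some-command < file > file clobbers its own input.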
Learn about file glob expansion with * (and perhaps ? and [...]) and quoting and the difference between double " and single ' quotes. (See more on variable expansion below.)
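A minimal illustration of the quoting rules (the variable name is arbitrary):

```shell
name=world
echo "hello $name"   # double quotes allow expansion: hello world
echo 'hello $name'   # single quotes are literal: hello $name
echo '*.txt'         # quoting also suppresses globbing: *.txt
```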
Be familiar with Bash job management: &, ctrl-z, ctrl-c, jobs, fg, bg, kill, etc.
Know ssh, and the basics of passwordless authentication, via ssh-agent, ssh-add, etc.
Basic file management: ls and ls -l (in particular, learn what every column in ls -l means), less, head, tail and tail -f (or even better, less +F), ln and ln -s (learn the differences and advantages of hard versus soft links), chown, chmod, du (for a quick summary of disk usage: du -hs *). For filesystem management, df, mount, fdisk, mkfs, lsblk. Learn what an inode is (ls -i or df -i).
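One way to see the hard- versus soft-link difference directly (in a throwaway scratch directory):

```shell
cd "$(mktemp -d)"                 # scratch directory
echo data > original.txt
ln original.txt hard.txt          # hard link: a second name for the same inode
ln -s original.txt soft.txt       # soft link: a pointer to the *name*
ls -li                            # note original.txt and hard.txt share an inode number
rm original.txt
cat hard.txt                      # still prints "data": the inode lives on
cat soft.txt || echo "dangling"   # the symlink now points at nothing
```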
Basic network management: ip or ifconfig, dig, traceroute, route.
Learn and use a version control management system, such as git.
Know regular expressions well, and the various flags to grep/egrep. The -i, -o, -v, -A, -B, and -C options are worth knowing.
Learn to use apt-get, yum, dnf or pacman (depending on distro) to find and install packages. And make sure you have pip to install Python-based command-line tools (a few below are easiest to install via pip).
In Bash, use Tab to complete arguments or list all available commands and ctrl-r to search through command history (after pressing, type to search, press ctrl-r repeatedly to cycle through more matches, press Enter to execute the found command, or hit the right arrow to put the result in the current line to allow editing).
In Bash, use ctrl-w to delete the last word, and ctrl-u to delete the content from the current cursor position back to the start of the line. Use alt-b and alt-f to move by word, ctrl-a to move the cursor to the beginning of the line, ctrl-e to move it to the end, ctrl-k to kill to the end of the line, and ctrl-l to clear the screen. See man readline for all the default keybindings in Bash. There are a lot. For example, alt-. cycles through previous arguments, and alt-* expands a glob.
Alternatively, if you love vi-style key-bindings, use set -o vi (and set -o emacs to put it back).
For editing long commands, after setting your editor (for example export EDITOR=vim), ctrl-x ctrl-e will open the current command in an editor for multi-line editing. Or in vi style, escape-v.
To see recent commands, use history. Follow with !n (where n is the command number) to execute again. There are also many abbreviations you can use, the most useful probably being !$ for last argument and !! for last command (see "HISTORY EXPANSION" in the man page). However, these are often easily replaced with ctrl-r and alt-..
Go to your home directory with cd. Access files relative to your home directory with the ~ prefix (e.g. ~/.bashrc). In sh scripts refer to the home directory as $HOME.
To go back to the previous working directory: cd -.
If you are halfway through typing a command but change your mind, hit alt-# to add a # at the beginning and enter it as a comment (or use ctrl-a, #, enter). You can then return to it later via command history.
Use xargs (or parallel). It's very powerful. Note you can control how many items execute per line (-L) as well as parallelism (-P). If you're not sure if it'll do the right thing, use xargs echo first. Also, -I{} is handy. Examples:
```
find . -name '*.py' | xargs grep some_function
cat hosts | xargs -I{} ssh root@{} hostname
```
pstree -p is a helpful display of the process tree.
Use pgrep and pkill to find or signal processes by name (-f is helpful).
Know the various signals you can send processes. For example, to suspend a process, use kill -STOP [pid]. For the full list, see man 7 signal.
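For example, suspending and resuming a background sleep with signals (a throwaway process just for illustration):

```shell
sleep 60 &
pid=$!
kill -STOP "$pid"    # suspend; STOP cannot be caught or ignored
kill -CONT "$pid"    # resume
kill -TERM "$pid"    # polite termination request (kill's default signal)
wait "$pid" 2>/dev/null || true
```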
Use nohup or disown if you want a background process to keep running forever.
Check what processes are listening via netstat -lntp or ss -plat (for TCP; add -u for UDP) or lsof -iTCP -sTCP:LISTEN -P -n (which also works on macOS).
See also lsof and fuser for open sockets and files.
See uptime or w to know how long the system has been running.
Use alias to create shortcuts for commonly used commands. For example, alias ll='ls -latr' creates a new alias ll.
Save aliases, shell settings, and functions you commonly use in ~/.bashrc, and arrange for login shells to source it. This will make your setup available in all your shell sessions.
Put the settings of environment variables as well as commands that should be executed when you login in ~/.bash_profile. Separate configuration will be needed for shells you launch from graphical environment logins and cron jobs.
Synchronize your configuration files (e.g. .bashrc and .bash_profile) among various computers with Git.
Understand that care is needed when variables and filenames include whitespace. Surround your Bash variables with quotes, e.g. "$FOO". Prefer the -0 or -print0 options to enable null characters to delimit filenames, e.g. locate -0 pattern | xargs -0 ls -al or find / -type d -print0 | xargs -0 ls -al. To iterate over filenames containing whitespace in a for loop, set your IFS to be a newline only using IFS=$'\n'.
"$FOO"
-0
-print0
locate -0 pattern | xargs -0 ls -al
find / -print0 -type d | xargs -0 ls -al
IFS=$'\n'
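A small demonstration of why the null delimiter matters, using a scratch file whose name deliberately contains spaces:

```shell
dir=$(mktemp -d)
touch "$dir/file with spaces.txt"
# Naive word splitting would break this name into three arguments;
# null-delimited output passes it through intact:
find "$dir" -type f -print0 | xargs -0 ls -l
```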
In Bash scripts, use set -x (or the variant set -v, which logs raw input, including unexpanded variables and comments) for debugging output. Use strict modes unless you have a good reason not to: Use set -e to abort on errors (nonzero exit code). Use set -u to detect unset variable usages. Consider set -o pipefail too, to abort on errors within pipes (though read up on it more if you do, as this topic is a bit subtle). For more involved scripts, also use trap on EXIT or ERR. A useful habit is to start a script like this, which will make it detect and abort on common errors and print a message:
```
set -euo pipefail
trap "echo 'error: Script failed: see failed command above'" ERR
```
In Bash scripts, subshells (written with parentheses) are convenient ways to group commands. A common example is to temporarily move to a different working directory, e.g.
```
# do something in current dir
(cd /some/other/dir && other-command)
# continue in original dir
```
In Bash, note there are lots of kinds of variable expansion. Checking a variable exists: ${name:?error message}. For example, if a Bash script requires a single argument, just write input_file=${1:?usage: $0 input_file}. Using a default value if a variable is empty: ${name:-default}. If you want to have an additional (optional) parameter added to the previous example, you can use something like output_file=${2:-logfile}. If $2 is omitted and thus empty, output_file will be set to logfile. Arithmetic expansion: i=$(( (i + 1) % 5 )). Sequences: {1..10}. Trimming of strings: ${var%suffix} and ${var#prefix}. For example if var=foo.pdf, then echo ${var%.pdf}.txt prints foo.txt.
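These expansions are easy to verify interactively (the variable names are arbitrary):

```shell
var=foo.pdf
echo "${var%.pdf}.txt"    # suffix trimmed: foo.txt
echo "${var#foo}"         # prefix trimmed: .pdf
name=""
echo "${name:-default}"   # empty, so the fallback is used: default
i=3
i=$(( (i + 1) % 5 ))
echo "$i"                 # arithmetic expansion: 4
```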
Brace expansion using {...} can reduce having to re-type similar text and automate combinations of items. This is helpful in examples like mv foo.{txt,pdf} some-dir (which moves both files), cp somefile{,.bak} (which expands to cp somefile somefile.bak) or mkdir -p test-{a,b,c}/subtest-{1,2,3} (which expands all possible combinations and creates a directory tree). Brace expansion is performed before any other expansion.
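Passing a brace expression to echo shows the expansion without side effects:

```shell
echo test-{a,b}/subtest-{1,2}
# test-a/subtest-1 test-a/subtest-2 test-b/subtest-1 test-b/subtest-2
echo somefile{,.bak}
# somefile somefile.bak
```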
The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and filename expansion. (For example, a range like {1..20} cannot be expressed with variables using {$a..$b}. Use seq or a for loop instead, e.g., seq $a $b or for((i=a; i<=b; i++)); do ... ; done.)
The output of a command can be treated like a file via <(some command) (known as process substitution). For example, compare local /etc/hosts with a remote one:
diff /etc/hosts <(ssh somehost cat /etc/hosts)
When writing scripts, you may want to put all of your code in curly braces. If the closing brace is missing, the script cannot execute at all due to the syntax error. This makes sense when the script is going to be downloaded from the web, since it prevents partially downloaded scripts from executing:
```
{
    # Your code here
}
```
A "here document" allows redirection of multiple lines of input as if from a file:
```
cat <<EOF
input
on multiple lines
EOF
```
In Bash, redirect both standard output and standard error via: some-command >logfile 2>&1 or some-command &>logfile. Often, to ensure a command does not leave an open file handle to standard input, tying it to the terminal you are in, it is also good practice to add </dev/null.
Use man ascii for a good ASCII table, with hex and decimal values. For general encoding info, man unicode, man utf-8, and man latin1 are helpful.
Use screen or tmux to multiplex the screen, especially useful on remote ssh sessions and to detach and re-attach to a session. byobu can enhance screen or tmux by providing more information and easier management. A more minimal alternative for session persistence only is dtach.
In ssh, knowing how to port tunnel with -L or -D (and occasionally -R) is useful, e.g. to access web sites from a remote server.
It can be useful to make a few optimizations to your ssh configuration; for example, this ~/.ssh/config contains settings to avoid dropped connections in certain network environments, uses compression (which is helpful with scp over low-bandwidth connections), and multiplex channels to the same server with a local control file:
```
TCPKeepAlive=yes
ServerAliveInterval=15
ServerAliveCountMax=6
Compression=yes
ControlMaster auto
ControlPath /tmp/%r@%h:%p
ControlPersist yes
```
A few other options relevant to ssh are security sensitive and should be enabled with care, e.g. per subnet or host or in trusted networks: StrictHostKeyChecking=no, ForwardAgent=yes
Consider mosh an alternative to ssh that uses UDP, avoiding dropped connections and adding convenience on the road (requires server-side setup).
To get the permissions on a file in octal form, which is useful for system configuration but not available in ls and easy to bungle, use something like
stat -c '%A %a %n' /etc/timezone
For interactive selection of values from the output of another command, use percol or fzf.
For interaction with files based on the output of another command (like git), use fpp (PathPicker).
For a simple web server for all files in the current directory (and subdirs), available to anyone on your network, use: python -m SimpleHTTPServer 7777 (for port 7777 and Python 2) and python -m http.server 7777 (for port 7777 and Python 3).
For running a command as another user, use sudo. Defaults to running as root; use -u to specify another user. Use -i to login as that user (you will be asked for your password).
For switching the shell to another user, use su username or su - username. The latter with "-" gets an environment as if another user just logged in. Omitting the username defaults to root. You will be asked for the password of the user you are switching to.
Know about the 128K limit on command lines. This "Argument list too long" error is common when wildcard matching large numbers of files. (When this happens alternatives like find and xargs may help.)
For a basic calculator (and of course access to Python in general), use the python interpreter. For example,
```
>>> 2+3
5
```
To locate a file by name in the current directory, find . -iname '*something*' (or similar). To find a file anywhere by name, use locate something (but bear in mind updatedb may not have indexed recently created files).
For general searching through source or data files, there are several options more advanced or faster than grep -r, including (in rough order from older to newer) ack, ag ("the silver searcher"), and rg (ripgrep).
To convert HTML to text: lynx -dump -stdin
For Markdown, HTML, and all kinds of document conversion, try pandoc. For example, to convert a Markdown document to Word format: pandoc README.md --from markdown --to docx -o temp.docx
If you must handle XML, xmlstarlet is old but good.
For JSON, use jq. For interactive use, also see jid and jiq.
For YAML, use shyaml.
For Excel or CSV files, csvkit provides in2csv, csvcut, csvjoin, csvgrep, etc.
For Amazon S3, s3cmd is convenient and s4cmd is faster. Amazon's aws and the improved saws are essential for other AWS-related tasks.
Know about sort and uniq, including uniq's -u and -d options -- see one-liners below. See also comm.
Know about cut, paste, and join to manipulate text files. Many people use cut but forget about join.
Know about wc to count newlines (-l), characters (-m), words (-w) and bytes (-c).
Know about tee to copy from stdin to a file and also to stdout, as in ls -al | tee file.txt.
For more complex calculations, including grouping, reversing fields, and statistical calculations, consider datamash.
Know that locale affects a lot of command line tools in subtle ways, including sorting order (collation) and performance. Most Linux installations will set LANG or other locale variables to a local setting like US English. But be aware sorting will change if you change locale. And know i18n routines can make sort or other commands run many times slower. In some situations (such as the set operations or uniqueness operations below) you can safely ignore slow i18n routines entirely and use traditional byte-based sort order, using export LC_ALL=C.
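The collation difference is easy to observe with inline data; in byte order, all uppercase ASCII letters sort before all lowercase ones:

```shell
printf 'a\nB\n' | sort             # locale-aware order, typically: a then B
printf 'a\nB\n' | LC_ALL=C sort    # byte order: B (0x42) before a (0x61)
```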
You can set a specific command's environment by prefixing its invocation with the environment variable settings, as in TZ=Pacific/Fiji date.
Know basic awk and sed for simple data munging. See One-liners for examples.
To replace all occurrences of a string in place, in one or more files:
perl -pi.bak -e 's/old-string/new-string/g' my-files-*.txt
To rename multiple files and/or search and replace within files, try repren. (In some cases the rename command also allows multiple renames, but be careful as its functionality is not the same on all Linux distributions.)
```
# Full rename of filenames, directories, and contents foo -> bar:
repren --full --preserve-case --from foo --to bar .
# Recover backup files whatever.bak -> whatever:
repren --renames --from '(.*)\.bak' --to '\1' *.bak
# Same as above, using rename, if available:
rename 's/\.bak$//' *.bak
```
As the man page says, rsync really is a fast and extraordinarily versatile file copying tool. It's known for synchronizing between machines but is equally useful locally. When security restrictions allow, using rsync instead of scp allows recovery of a transfer without restarting from scratch. It also is among the fastest ways to delete large numbers of files:
mkdir empty && rsync -r --delete empty/ some-dir && rmdir some-dir
For monitoring progress when processing files, use pv, pycp, pmonitor, progress, rsync --progress, or, for block-level copying, dd status=progress.
Use shuf to shuffle or select random lines from a file.
Know sort's options. For numbers, use -n, or -h for handling human-readable numbers (e.g. from du -h). Know how keys work (-t and -k). In particular, watch out that you need to write -k1,1 to sort by only the first field; -k1 means sort according to the whole line. Stable sort (sort -s) can be useful. For example, to sort first by field 2, then secondarily by field 1, you can use sort -k1,1 | sort -s -k2,2.
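A quick check of key behavior with inline data:

```shell
printf '2 c\n10 a\n1 b\n' | sort -n -k1,1
# 1 b
# 2 c
# 10 a
# (numeric sort by field 1; a plain lexicographic sort would order 1, 10, 2)
```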
If you ever need to write a tab literal in a command line in Bash (e.g. for the -t argument to sort), press ctrl-v [Tab] or write $'\t' (the latter is better as you can copy/paste it).
The standard tools for patching source code are diff and patch. See also diffstat for summary statistics of a diff and sdiff for a side-by-side diff. Note diff -r works for entire directories. Use diff -r tree1 tree2 | diffstat for a summary of changes. Use vimdiff to compare and edit files.
For binary files, use hd, hexdump or xxd for simple hex dumps and bvi, hexedit or biew for binary editing.
Also for binary files, strings (plus grep, etc.) lets you find bits of text.
For binary diffs (delta compression), use xdelta3.
To convert text encodings, try iconv. Or uconv for more advanced use; it supports some advanced Unicode things. For example:
```
# Displays hex codes or actual names of characters (useful for debugging):
uconv -f utf-8 -t utf-8 -x '::Any-Hex;' < input.txt
uconv -f utf-8 -t utf-8 -x '::Any-Name;' < input.txt
# Lowercase and removes all accents (by expanding and dropping them):
uconv -f utf-8 -t utf-8 -x '::Any-Lower; ::Any-NFD; [:Nonspacing Mark:] >; ::Any-NFC;' < input.txt > output.txt
```
To split files into pieces, see split (to split by size) and csplit (to split by a pattern).
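For example, splitting ten lines into four-line chunks in a scratch directory (split's default output names use the suffixes aa, ab, ...):

```shell
cd "$(mktemp -d)"
seq 1 10 > nums.txt
split -l 4 nums.txt part_     # four lines per piece
ls part_*                     # part_aa part_ab part_ac
wc -l < part_ac               # the last piece holds the remaining 2 lines
```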
Date and time: To get the current date and time in the helpful ISO 8601 format, use date -u +"%Y-%m-%dT%H:%M:%SZ" (other options are problematic). To manipulate date and time expressions, use dateadd, datediff, strptime etc. from dateutils.
Use zless, zmore, zcat, and zgrep to operate on compressed files.
File attributes are settable via chattr and offer a lower-level alternative to file permissions. For example, to protect against accidental file deletion, set the immutable flag: sudo chattr +i /critical/directory/or/file
Use getfacl and setfacl to save and restore file permissions. For example:
```
getfacl -R /some/path > permissions.txt
setfacl --restore=permissions.txt
```
To create empty files quickly, use truncate (creates sparse file), fallocate (ext4, xfs, btrfs and ocfs2 filesystems), xfs_mkfile (almost any filesystems, comes in xfsprogs package), mkfile (for Unix-like systems like Solaris, Mac OS).
For web debugging, curl and curl -I are handy, or their wget equivalents, or the more modern httpie.
To know current cpu/disk status, the classic tools are top (or the better htop), iostat, and iotop. Use iostat -mxz 15 for basic CPU and detailed per-partition disk stats and performance insight.
For network connection details, use netstat and ss.
For a quick overview of what's happening on a system, dstat is especially useful. For broadest overview with details, use glances.
To know memory status, run and understand the output of free and vmstat. In particular, be aware the "cached" value is memory held by the Linux kernel as file cache, so effectively counts toward the "free" value.
Java system debugging is a different kettle of fish, but a simple trick on Oracle's and some other JVMs is that you can run kill -3 <pid> and a full stack trace and heap summary (including generational garbage collection details, which can be highly informative) will be dumped to stderr/logs. The JDK's jps, jstat, jstack, jmap are useful. SJK tools are more advanced.
Use mtr as a better traceroute, to identify network issues.
For looking at why a disk is full, ncdu saves time over the usual commands like du -sh *.
To find which socket or process is using bandwidth, try iftop or nethogs.
The ab tool (comes with Apache) is helpful for quick-and-dirty checking of web server performance. For more complex load testing, try siege.
For more serious network debugging, wireshark, tshark, or ngrep.
Know about strace and ltrace. These can be helpful if a program is failing, hanging, or crashing, and you don't know why, or if you want to get a general idea of performance. Note the profiling option (-c), and the ability to attach to a running process (-p). Use trace child option (-f) to avoid missing important calls.
Know about ldd to check shared libraries etc — but never run it on untrusted files.
Know how to connect to a running process with gdb and get its stack traces.
Use /proc. It's amazingly helpful sometimes when debugging live problems. Examples: /proc/cpuinfo, /proc/meminfo, /proc/cmdline, /proc/xxx/cwd, /proc/xxx/exe, /proc/xxx/fd/, /proc/xxx/smaps (where xxx is the process id or pid).
When debugging why something went wrong in the past, sar can be very helpful. It shows historic statistics on CPU, memory, network, etc.
For deeper systems and performance analyses, look at stap (SystemTap), perf, and sysdig.
Check what OS you're on with uname or uname -a (general Unix/kernel info) or lsb_release -a (Linux distro info).
Use dmesg whenever something's acting really funny (it could be hardware or driver issues).
If you delete a file and it doesn't free up expected disk space as reported by du, check whether the file is in use by a process: lsof | grep deleted | grep "filename-of-my-big-file"
A few examples of piecing together commands:
It is remarkably helpful sometimes that you can do set intersection, union, and difference of text files via sort/uniq. Suppose a and b are text files that are already uniqued. This is fast, and works on files of arbitrary size, up to many gigabytes. (Sort is not limited by memory, though you may need to use the -T option if /tmp is on a small root partition.) See also the note about LC_ALL above and sort's -u option (left out for clarity below).
```
sort a b | uniq > c        # c is a union b
sort a b | uniq -d > c     # c is a intersect b
sort a b b | uniq -u > c   # c is set difference a - b
```
Pretty-print two JSON files, normalizing their syntax, then coloring and paginating the result:
diff <(jq --sort-keys . < file1.json) <(jq --sort-keys . < file2.json) | colordiff | less -R
Use grep . * to quickly examine the contents of all files in a directory (so each line is paired with the filename), or head -100 * (so each file has a heading). This can be useful for directories filled with config settings like those in /sys, /proc, /etc.
Summing all numbers in the third column of a text file (this is probably 3X faster and 3X less code than equivalent Python):
awk '{ x += $3 } END { print x }' myfile
To see sizes/dates on a tree of files, this is like a recursive ls -l but is easier to read than ls -lR:
find . -type f -ls
Say you have a text file, like a web server log, and a certain value that appears on some lines, such as an acct_id parameter that is present in the URL. If you want a tally of how many requests for each acct_id:
egrep -o 'acct_id=[0-9]+' access.log | cut -d= -f2 | sort | uniq -c | sort -rn
To continuously monitor changes, use watch, e.g. check changes to files in a directory with watch -d -n 2 'ls -rtlh | tail' or to network settings while troubleshooting your wifi settings with watch -d -n 2 ifconfig.
Run this function to get a random tip from this document (parses Markdown and extracts an item):
```
function taocl() {
  curl -s https://raw.githubusercontent.com/jlevy/the-art-of-command-line/master/README.md |
    sed '/cowsay[.]png/d' |
    pandoc -f markdown -t html |
    xmlstarlet fo --html --dropdtd |
    xmlstarlet sel -t -v \
      "(html/body/ul/li[count(p)>0])[$RANDOM mod last()+1]" |
    xmlstarlet unesc | fmt -80 | iconv -t US
}
```
expr: perform arithmetic or boolean operations or evaluate regular expressions
m4: simple macro processor
yes: print a string a lot
cal: nice calendar
env: run a command (useful in scripts)
printenv: print out environment variables (useful in debugging and scripts)
look: find English words (or lines in a file) beginning with a string
cut, paste and join: data manipulation
fmt: format text paragraphs
pr: format text into pages/columns
fold: wrap lines of text
column: format text fields into aligned, fixed-width columns or tables
expand and unexpand: convert between tabs and spaces
nl: add line numbers
seq: print numbers
bc: calculator
factor: factor integers
gpg: encrypt and sign files
toe: table of terminfo entries
nc: network debugging and data transfer
socat: socket relay and tcp port forwarder (similar to netcat)
slurm: network traffic visualization
dd: moving data between files or devices
file: identify type of a file
tree: display directories and subdirectories as a nesting tree; like ls but recursive
stat: file info
time: execute and time a command
timeout: run a command with a time limit, killing the process if it runs longer
lockfile: create a semaphore file that can only be removed by rm -f
logrotate: rotate, compress and mail logs
watch: run a command repeatedly, showing results and/or highlighting changes
when-changed: run any command you specify whenever it sees a file change; see inotifywait and entr as well
tac: print files in reverse
comm: compare sorted files line by line
strings: extract text from binary files
tr: character translation or manipulation
iconv or uconv: conversion for text encodings
split and csplit: splitting files
sponge: read all input before writing it, useful for reading from then writing to the same file, e.g. grep -v something some-file | sponge some-file
units: unit conversions and calculations; converts furlongs per fortnight to twips per blink (see also /usr/share/units/definitions.units)
apg: generates random passwords
xz: high-ratio file compression
ldd: dynamic library info
nm: symbols from object files
ab or wrk: benchmarking web servers
strace: system call debugging
mtr: better traceroute for network debugging
cssh: visual concurrent shell
rsync: sync files and folders over SSH or in the local file system
wireshark and tshark: packet capture and network debugging
ngrep: grep for the network layer
host and dig: DNS lookups
lsof: process file descriptor and socket info
dstat: useful system stats
glances: high level, multi-subsystem overview
iostat: disk usage stats
mpstat: CPU usage stats
vmstat: memory usage stats
htop: improved version of top
last: login history
w: who's logged on
id: user/group identity info
sar: historic system stats
iftop or nethogs: network utilization by socket or process
ss: socket statistics
dmesg: boot and system error messages
sysctl: view and configure Linux kernel parameters at run time
hdparm: SATA/ATA disk manipulation/performance
lsblk: list block devices: a tree view of your disks and disk partitions
lshw, lscpu, lspci, lsusb, dmidecode: hardware information, including CPU, BIOS, RAID, graphics, devices, etc.
lsmod and modinfo: list and show details of kernel modules
fortune, ddate, and sl: um, well, it depends on whether you consider steam locomotives and Zippy quotations "useful"
These are items relevant only on macOS.
Package management with brew (Homebrew) and/or port (MacPorts). These can be used to install on macOS many of the above commands.
Copy output of any command to a desktop app with pbcopy and paste input from one with pbpaste.
To enable the Option key in macOS Terminal as an alt key (such as used in the commands above like alt-b, alt-f, etc.), open Preferences -> Profiles -> Keyboard and select "Use Option as Meta key".
To open a file with a desktop app, use open or open -a /Applications/Whatever.app.
Spotlight: Search files with mdfind and list metadata (such as photo EXIF info) with mdls.
Be aware macOS is based on BSD Unix, and many commands (for example ps, ls, tail, awk, sed) have many subtle variations from Linux, which is largely influenced by System V-style Unix and GNU tools. You can often tell the difference by noting a man page has the heading "BSD General Commands Manual." In some cases GNU versions can be installed, too (such as gawk and gsed for GNU awk and sed). If writing cross-platform Bash scripts, avoid such commands (for example, consider Python or perl) or test carefully.
To get macOS release information, use sw_vers.
These items are relevant only on Windows.
Access the power of the Unix shell under Microsoft Windows by installing Cygwin. Most of the things described in this document will work out of the box.
On Windows 10, you can use Windows Subsystem for Linux (WSL), which provides a familiar Bash environment with Unix command line utilities.
If you mainly want to use GNU developer tools (such as GCC) on Windows, consider MinGW and its MSYS package, which provides utilities such as bash, gawk, make and grep. MSYS doesn't have all of Cygwin's features. MinGW is particularly useful for creating native Windows ports of Unix tools.
Another option to get Unix look and feel under Windows is Cash. Note that only very few Unix commands and command-line options are available in this environment.
You can perform and script most Windows system administration tasks from the command line by learning and using wmic.
Native command-line Windows networking tools you may find useful include ping, ipconfig, tracert, and netstat.
You can perform many useful Windows tasks by invoking the Rundll32 command.
Install additional Unix programs with the Cygwin's package manager.
Use mintty as your command-line window.
Access the Windows clipboard through /dev/clipboard.
Run cygstart to open an arbitrary file through its registered application.
Access the Windows registry with regtool.
Note that a C:\ Windows drive path becomes /cygdrive/c under Cygwin, and that Cygwin's / appears under C:\cygwin on Windows. Convert between Cygwin and Windows-style file paths with cygpath. This is most useful in scripts that invoke Windows programs.
With the exception of very small tasks, code is written so others can read it. With power comes responsibility. The fact you can do something in Bash doesn't necessarily mean you should! ;)
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.