Linux CLI: The Commands I Use Every Day
A collection of bash commands and shortcuts I reach for constantly, organized by what I'm trying to do rather than by category.

A text file on my desktop called commands.txt has been accumulating for about four years. Every time I look something up, solve a problem, or pick up a trick in the terminal, the command goes in there with a note about what it does. File's gotten long. A lot of it is redundant or outdated. But going through it recently, I pulled out the stuff that actually gets regular use and figured it was worth organizing here.
Not a systematic Linux introduction. Just what I type, arranged by the kinds of problems I'm solving when a terminal is open.
Moving Around Faster
Spent my first year using arrow keys to navigate commands and backspace to fix typos. Then someone showed me the keyboard shortcuts.
| Keystroke | What it does |
| :--- | :--- |
| Ctrl + R | Reverse search through command history. Start typing and it shows matching previous commands. |
| Ctrl + A | Jump to the beginning of the line. |
| Ctrl + E | Jump to the end of the line. |
| Alt + B | Move back one word. |
| Alt + F | Move forward one word. |
| Ctrl + U | Delete everything before the cursor. |
| Ctrl + K | Delete everything after the cursor. |
| Ctrl + W | Delete the previous word. |
| Ctrl + L | Clear the screen. Same as typing clear but faster. |
Ctrl + R is the one I'd give up last. Probably use it 40-50 times a day. Long docker commands, ssh addresses I can't remember, complex find invocations from three weeks ago: instead of retyping, Ctrl + R and type the first few characters.
Word-jumping shortcuts (Alt + B, Alt + F) are the next most valuable. Long command, need to change something in the middle: jumping word by word beats holding the arrow key.
These shortcuts might not work in every terminal emulator. macOS Terminal requires "Use Option as Meta Key" for the Alt shortcuts. iTerm2 has a similar setting. Windows Terminal needed some configuration too; don't remember the specifics.
When Something Is Wrong on a Server
There's a pattern to how I diagnose problems. It usually goes: what's using resources, then what's listening on ports, then what does the log say.
Processes eating CPU or memory:
ps aux --sort=-%mem | head -15
Sorts all processes by memory usage descending, shows the top 15. Replace -%mem with -%cpu for CPU. Caught runaway Node.js processes and leaking Python scripts this way. One caveat: in containers, the percentage columns are relative to the host, not the container's limits. Watch out for that.
What's on a port:
ss -tulnp | grep 3000
ss is the modern replacement for netstat. Shows which process is bound to port 3000. Used constantly for "address already in use" errors; usually a zombie process from a crashed dev server. lsof -i :3000 gives similar info in a different format. Switch between them depending on what's installed on the machine.
Watching a command update in real time:
watch -n 2 'docker stats --no-stream'
Re-runs the command every N seconds and shows output. Good for catching resource spikes in the act. --no-stream on docker stats is necessary because without it, docker stats runs its own loop and watch can't capture it.
Finding Files
find is powerful and the syntax is hostile. Still look up the flags regularly.
Modified in the last 24 hours:
find . -mtime -1 -type f
Delete all Python cache files:
find . -name "*.pyc" -delete
Careful with -delete. No confirmation prompt. Typed this wrong once: put -delete before -name and it tried to delete everything. Flag order matters with find and mistakes can be destructive. Always dry run first without -delete to check what matches.
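A throwaway-directory sketch of that dry-run habit; the /tmp paths are just for illustration:

```shell
# Scratch directory so the dry run is safe to try
mkdir -p /tmp/findtest/pkg
touch /tmp/findtest/a.pyc /tmp/findtest/pkg/b.pyc /tmp/findtest/keep.py

# Dry run: list what the expression matches, delete nothing
find /tmp/findtest -name "*.pyc"

# Same expression with -delete appended LAST, so only the matches go
find /tmp/findtest -name "*.pyc" -delete
```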
Biggest directories:
du -sh /* 2>/dev/null | sort -rh | head -10
2>/dev/null suppresses permission-denied errors for directories you can't read. Without it, useful output is buried in error messages.
Mostly switched to fd for file finding (Rust rewrite of find with saner syntax) and ncdu for disk usage analysis (interactive, lets you drill into directories). Both noticeably faster on large filesystems. More detail about these modern replacements and the rest of my terminal setup in my terminal productivity tools post. But find and du exist on every system, which matters when you're SSH'd into a minimal server without your nice tools.
Text Processing
The pipe-things-together approach. Sounds academic until you need it, then it's surprisingly practical.
Pull email addresses from a file:
grep -oP '[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}' dump.txt
-o outputs only the matching part, -P enables Perl-compatible regex. Used variations to extract URLs, IP addresses, phone numbers from log files.
Top 10 IPs hitting your web server:
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -10
Extract the first column (IP), sort, count duplicates, sort by count descending, show top 10. When a server is being hammered and you need to identify who's responsible quickly, this probably runs faster than opening any GUI tool.
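The same count-and-sort pattern works on any column. A self-contained sketch counting status codes instead of IPs; the sample lines below stand in for a real access log in combined format, where the status code is field 9:

```shell
# Three fake log lines standing in for /var/log/nginx/access.log
cat > /tmp/access.sample <<'EOF'
10.0.0.1 - - [01/Jan/2025:00:00:01 +0000] "GET / HTTP/1.1" 200 512
10.0.0.2 - - [01/Jan/2025:00:00:02 +0000] "GET /missing HTTP/1.1" 404 128
10.0.0.1 - - [01/Jan/2025:00:00:03 +0000] "GET / HTTP/1.1" 200 512
EOF

# Field 9 is the status code in combined log format
awk '{print $9}' /tmp/access.sample | sort | uniq -c | sort -rn
```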
Find and replace across all JavaScript files:
find . -name "*.js" -exec sed -i 's/oldFunction/newFunction/g' {} +
Not very comfortable with sed beyond simple substitutions. The in-place flag (-i) always makes me nervous. On macOS, sed -i requires an empty string argument (sed -i '' 's/...') which differs from GNU sed on Linux. That inconsistency has tripped me up more than once. For anything complex, I'll use a proper text editor or sd.
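One way around the GNU/BSD split is a tiny wrapper; sedi is a made-up name here, not a standard tool. GNU sed supports --version and BSD sed doesn't, which makes a cheap feature test:

```shell
# Portable in-place sed: GNU takes bare -i, BSD/macOS needs -i ''
sedi() {
  if sed --version >/dev/null 2>&1; then
    sed -i "$@"      # GNU sed (Linux)
  else
    sed -i '' "$@"   # BSD sed (macOS)
  fi
}

# Quick check against a scratch file
printf 'oldFunction()\n' > /tmp/sedi-check.js
sedi 's/oldFunction/newFunction/g' /tmp/sedi-check.js
```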
Shell Scripts That Don't Fail Silently
Every bash script starts with the same three flags. Don't even think about it anymore.
#!/bin/bash
set -euo pipefail
-e: exit immediately if any command returns non-zero. Without this, the script continues after failures and the resulting state is unpredictable.
-u: treat undefined variables as errors. Without this, referencing an unset variable evaluates to an empty string, which causes bizarre behavior. Once had a cleanup script with rm -rf "$DEPLOY_DIR/" where $DEPLOY_DIR was unset. Without -u, that becomes rm -rf /. Caught it in testing. With -u, the script would have stopped and reported the undefined variable.
-o pipefail: if any command in a pipe chain fails, the whole pipeline fails. Without it, curl url | process_data returns the exit code of process_data even if curl failed. The script thinks everything worked when the input was actually empty.
These flags have prevented subtle failures enough times that I consider them mandatory. If you're writing more involved automation, I shared some practical Python automation examples that handle the same defensive approach in a different language. Some people add set -x too, which prints each command before execution. Good for debugging, too noisy for regular use.
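A minimal demonstration of what pipefail changes, using bash -c subshells so each case is isolated:

```shell
# Without pipefail, only the LAST command's exit code counts:
# the failure of 'false' is masked by 'true'
bash -c 'false | true; echo "without pipefail: $?"'   # prints 0

# With pipefail, any failing stage fails the whole pipeline
bash -c 'set -o pipefail; false | true; echo "with pipefail: $?"'   # prints 1
```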
The Archive Extractor
Tired of looking up tar flags. tar xzf? tar xjf? tar xJf? Can't remember which goes with which compression format.
extract() {
case "$1" in
*.tar.bz2) tar xjf "$1" ;;
*.tar.gz) tar xzf "$1" ;;
*.tar.xz) tar xJf "$1" ;;
*.zip) unzip "$1" ;;
*.gz) gunzip "$1" ;;
*.tar) tar xf "$1" ;;
*.7z) 7z x "$1" ;;
*) echo "Don't know how to extract: $1" ;;
esac
}
extract whatever.tar.gz figures out the right command. Small convenience. Used often enough to earn its place in my shell config.
Miscellaneous One-Liners
Things I used to Google every time and eventually memorized:
Which shell:
echo $SHELL
Public IP:
curl -s ifconfig.me
Random password:
openssl rand -base64 24
Kill a process by name:
pkill -f "node server.js"
-f matches the full command line, not just the process name. Without it, pkill node kills ALL Node processes. With -f, you target a specific one.
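pgrep accepts the same -f flag, which makes a safe preview step before pkill. A sketch using a background sleep as a stand-in for a real process:

```shell
sleep 300 &   # stand-in for a long-running process

# Preview first: -a prints the full command line next to each matching PID
pgrep -af "sleep 300"

# Happy with the list? Same pattern, swap pgrep for pkill
pkill -f "sleep 300"
```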
Command history with timestamps:
HISTTIMEFORMAT="%F %T " history | tail -20
Requires HISTTIMEFORMAT to be set; it isn't by default on most systems. Added it to .bashrc so history always includes timestamps. Helped reconstruct timelines during incident investigations more than once: "when exactly did that deploy command run?"
SSH Config and Remote Work
Spend enough time SSH'ing into servers and the config file starts mattering. Instead of typing ssh -i ~/.ssh/key.pem ubuntu@10.0.3.47 -p 2222 every time, define a host in ~/.ssh/config:
Host staging
HostName 10.0.3.47
User ubuntu
Port 2222
IdentityFile ~/.ssh/key.pem
ServerAliveInterval 60
Now it's just ssh staging. The ServerAliveInterval sends a keepalive packet every 60 seconds, which prevents the connection from dropping when you step away for five minutes. Without it, idle SSH sessions die and you have to reconnect.
Can also use this for jump hosts โ accessing a server behind a bastion:
Host production
HostName 10.0.5.12
User deploy
ProxyJump bastion
Host bastion
HostName bastion.example.com
User ubuntu
ssh production automatically tunnels through the bastion. No manual two-step process. Set this up once and forget about it.
For transferring files, scp works but rsync is better for anything beyond a single file:
rsync -avz --progress ./build/ staging:/var/www/app/
The -a flag preserves permissions and timestamps. -z compresses during transfer. --progress shows per-file transfer status. And rsync only transfers the differences: if you've already synced once and only a few files changed, the second sync is near-instant instead of re-uploading everything.
Process Management
Beyond finding and killing processes, a few more things come up regularly.
Run something in the background that survives logout:
nohup node server.js > app.log 2>&1 &
nohup prevents the process from receiving SIGHUP when you close the terminal. Output goes to app.log. The & at the end backgrounds it. This is the quick-and-dirty way to keep something running on a server. For anything real, you'd use systemd or a process manager, but for a quick "start this and come back to it later," nohup works.
Check what a process is doing:
strace -p 12345 -e trace=network
strace attaches to a running process and shows its system calls. The -e trace=network filter shows only network-related calls. Useful when a process is hanging and you want to see if it's stuck waiting on a DNS lookup or a socket connection. Replaced many rounds of "I have no idea what this process is doing" with "oh, it's blocking on a DNS resolution that's timing out."
Quick HTTP server for a directory:
python3 -m http.server 8080
Serves the current directory on port 8080. Useful for sharing files between machines, testing static HTML, or serving a build output to check it in a browser. One line, no setup, no configuration. Kill it when you're done.
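A quick way to confirm it's actually serving, from the same box or (with the machine's IP) another one. The --directory flag needs Python 3.7+, and port 8123 here is arbitrary:

```shell
# Serve /tmp on port 8123 in the background
python3 -m http.server 8123 --directory /tmp >/dev/null 2>&1 &
srv=$!
sleep 1

# 200 means the directory listing is up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8123/

kill "$srv"
```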
The Tools That Survived
Better tools exist for most of these tasks. ripgrep has replaced grep for me: noticeably faster on large codebases, and the default settings are saner. fd has mostly replaced find. But for the rest, the old GNU tools are what I know, they're on every server, and they work. When you SSH into a minimal Alpine container or an ancient Ubuntu server, ripgrep isn't there but grep is. eza isn't there but ls is. The old tools are the lingua franca.
Environment Management
A few things related to keeping your shell environment sane across machines and sessions.
Dotfile management. I keep .bashrc, .zshrc, .gitconfig, and a few other config files in a git repository. On a new machine, clone the repo and symlink the files to their expected locations. Some people use tools like stow or chezmoi for this. I just use a shell script with ln -sf commands. Low-tech. Works.
Environment variables. For development secrets and project-specific config, I use direnv. Drop a .envrc file in a project directory, and direnv automatically loads those environment variables when you cd into the directory and unloads them when you leave. No more "forgot to source my .env file" or "accidentally used production credentials in development."
# .envrc
export DATABASE_URL="postgres://dev:password@localhost:5432/myapp"
export REDIS_URL="redis://localhost:6379"
export API_KEY="dev-key-12345"
cd into the project directory, variables are available. cd out, they're gone. (One setup step: direnv refuses to load a new or edited .envrc until you approve it with direnv allow in that directory.) Each project has its own isolated environment without polluting the global shell.
History across sessions. By default, bash history is per-session. Open three terminal tabs, commands from tab 1 aren't available in tab 2's history search. Adding these to .bashrc fixes that:
shopt -s histappend
PROMPT_COMMAND="history -a; history -n; $PROMPT_COMMAND"
histappend appends to the history file instead of overwriting. The PROMPT_COMMAND writes the current session's commands to the file after each command and reads new entries from other sessions. Now Ctrl+R searches across all open terminals. Small config change. Noticeable quality-of-life improvement.
Aliases That Save Typing
After years of repetitive commands, my .bashrc has accumulated a collection of aliases. Most are short. A few save real time over the course of a day.
alias gs='git status'
alias gd='git diff'
alias gl='git log --oneline -20'
alias dc='docker compose'
alias k='kubectl'
alias ll='ls -alh'
alias ports='ss -tulnp'
The git ones get typed dozens of times daily. gs instead of git status sounds trivial, but multiply that by fifty uses a day and it adds up. The dc alias for Docker Compose is a newer addition: docker compose up, docker compose logs, docker compose down become dc up, dc logs, dc down. Shorter to type, easier to remember. Some people go further with functions that combine multiple steps, but plain aliases cover 90% of what I need.
Cron Jobs (The Scheduler You Already Have)
Cron runs scheduled commands on Linux. If you need something to happen every day at 3 AM, or every 15 minutes, or every Monday at noon: cron handles it. Edit your crontab with crontab -e and add a line:
# Run backup script at 2 AM daily
0 2 * * * /home/anurag/scripts/backup.sh >> /home/anurag/logs/backup.log 2>&1
# Clean temp files every Sunday at midnight
0 0 * * 0 find /tmp -mtime +7 -delete
# Check website every 30 minutes
*/30 * * * * python3 /home/anurag/scripts/web_check.py
The five fields are: minute, hour, day-of-month, month, day-of-week. */30 means "every 30." 0 2 * * * means "minute 0, hour 2, every day." The syntax is cryptic. I use crontab.guru to double-check every time.
The >> logfile 2>&1 redirect is important. Without it, cron's output goes nowhere; if your script fails, you have no way to find out. Redirect stdout and stderr to a log file so failures leave a trail.
One gotcha: cron runs with a minimal environment. Your PATH is probably different from your interactive shell's PATH. Full paths to executables are safer than relying on python3 being in cron's PATH. Same goes for any environment variables your script depends on: they're not set in cron's environment unless you set them explicitly.
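One way to blunt the environment gotcha is to set variables at the top of the crontab itself; they apply to every job below them. The values here are illustrative, not universal:

```
# Environment lines at the top of the crontab apply to all jobs
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin

# Now python3 resolves even in cron's minimal environment
*/30 * * * * python3 /home/anurag/scripts/web_check.py >> /home/anurag/logs/web_check.log 2>&1
```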
Maybe I'll modernize the rest of my workflow eventually. Or maybe I'll still be typing ps aux | grep five years from now. Probably the second one, not sure.
Written by
Anurag Sinha
Full-stack developer specializing in React, Next.js, cloud infrastructure, and AI. Writing about web development, DevOps, and the tools I actually use in production.