SSH Tunneling: The Networking Swiss Army Knife Nobody Taught Me
Local forwarding, remote forwarding, dynamic SOCKS proxies, jump hosts, and the SSH config shortcuts that replaced half my VPN usage.

Spent the first two years of my career using SSH exclusively for remote shell access. ssh user@server, run commands, exit. That's it. Didn't know SSH could do anything else. Then one day a senior engineer on my team typed a command with -L flags and suddenly his local browser was accessing a database dashboard that lived on a private network he wasn't directly connected to.
"How did you do that?"
"SSH tunnel."
That was the start of a rabbit hole that completely changed how I think about network access. SSH tunneling is one of those tools that seems niche until you understand it, and then you start seeing uses for it everywhere. Port forwarding, jump hosts, SOCKS proxies: it's all built into a tool that's already installed on every server you'll ever touch.
Local Port Forwarding: The One You'll Use Most
The scenario: there's a service running on a remote machine that you can't access directly. Maybe it's a database listening on localhost only. Maybe it's an internal web dashboard behind a firewall. You have SSH access to the machine. That's enough.
ssh -L 5432:localhost:5432 user@database-server
This creates a tunnel. Here's what happens:
- SSH opens port 5432 on your local machine
- Any traffic hitting local port 5432 gets encrypted and forwarded through the SSH connection
- The remote server receives the traffic and sends it to localhost:5432 on its end
- The database (listening on localhost on the remote machine) sees a local connection
From your perspective, localhost:5432 on your laptop now behaves as if the remote database were running locally. Point your database client at localhost:5432 with the remote database's credentials, and you're connected.
The syntax is -L local_port:destination_host:destination_port. The destination is resolved from the remote server's perspective. This is the part that confused me initially: the localhost in the command refers to localhost as seen from the remote server, not your local machine.
This means you can reach things the remote server can reach, even if you can't reach them directly:
ssh -L 6379:redis-internal.vpc:6379 user@bastion
Here, your local port 6379 tunnels through the bastion host to redis-internal.vpc:6379, a Redis instance on a private VPC that your laptop has no direct route to. The bastion can reach it. You can reach the bastion. SSH handles the rest.
Practical Example: Accessing a Production Database Safely
This is my most common use case. Production databases should never be exposed to the internet. They sit in a private subnet, accessible only from application servers and a bastion host. When I need to run a query or check data, I don't open a security group rule. I tunnel through the bastion:
ssh -L 5433:prod-db.internal:5432 -N -f user@bastion.example.com
Two new flags here. -N means "don't execute a remote command": just set up the tunnel, no interactive shell needed. -f sends SSH to the background after authentication. The tunnel stays open, the terminal is free for other work.
Now I connect with psql:
psql -h localhost -p 5433 -U app_user -d production
Local port 5433 (I use a different port to avoid conflicting with any local Postgres) tunnels to the production database. Connection is encrypted end-to-end through SSH. No database port exposed to the internet. No VPN required. Audit trail in the bastion's SSH logs shows exactly who connected and when.
When I'm done:
# Kill whatever holds local port 5433 (the background tunnel,
# plus any client still attached to it)
kill $(lsof -ti:5433)
Or just close the terminal. The tunnel dies with the SSH process.
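Before reaching for psql, it can be worth a quick sanity check that the tunnel is actually listening. This is a sketch assuming the local port 5433 from the example above and the ss tool from iproute2:

```shell
# Check whether anything is listening on the tunnel's local port (5433 here).
# ss -tln lists listening TCP sockets; grep looks for the port in the
# local-address column.
if ss -tln | grep -q ':5433 '; then
  TUNNEL_STATUS="up"
else
  TUNNEL_STATUS="down"
fi
echo "tunnel is $TUNNEL_STATUS"
```

If it reports down, the background ssh process either never started (check for an authentication error) or has since died.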
Remote Port Forwarding: Exposing Local Services
Local forwarding brings remote services to your machine. Remote forwarding does the opposite: it makes a local service accessible from the remote machine. Less commonly needed, but invaluable when you need it.
ssh -R 8080:localhost:3000 user@remote-server
This tells the remote server to listen on port 8080 and forward any incoming traffic back through the SSH tunnel to port 3000 on your local machine. If someone on the remote server hits localhost:8080, they're actually reaching your laptop's port 3000.
When Would You Use This?
Webhook development. Working on a service that receives webhooks from Stripe, GitHub, or any external service. Those services need to reach your development server. Your laptop isn't publicly accessible. Instead of deploying to a staging server every time you change a line of code, tunnel your local server through a remote host:
ssh -R 9000:localhost:3000 user@public-server.example.com
Configure the webhook provider to send events to http://public-server.example.com:9000. Traffic flows: webhook provider hits the public server on port 9000, SSH forwards it to your laptop's port 3000. You see the webhook locally, debug it, iterate quickly.
There's one catch: by default, remote forwarding only binds to localhost on the remote side. External traffic can't reach it. To make it accessible from outside the remote server, you need GatewayPorts yes in the remote server's sshd_config. Without that setting, only processes on the remote server itself can use the forwarded port.
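For reference, the server-side knob looks like this. A fragment of the remote host's sshd_config; sshd needs a reload for the change to take effect:

```
# /etc/ssh/sshd_config on the remote server

# Allow remote-forwarded ports to bind to all interfaces instead of
# loopback only, so external clients can reach them.
GatewayPorts yes

# Alternative: let the connecting client choose the bind address
# per-tunnel with -R bind_address:port:host:hostport
# GatewayPorts clientspecified
```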
Sharing a local dev server with a teammate. Running a prototype on localhost:3000. Teammate needs to see it. Remote forward through a shared server. Quick and temporary โ no deployment, no tunneling service, no configuration beyond the SSH command.
I should note that tools like ngrok and Cloudflare Tunnel have, in my experience, largely replaced this use case for webhook development. They're more polished, handle HTTPS automatically, and don't require a public server. But if you already have a server with SSH access and don't want another tool in your stack, remote forwarding works fine.
Dynamic Port Forwarding: Your Own SOCKS Proxy
This is the one that surprised me the most. SSH can act as a SOCKS proxy, effectively routing all your traffic through the remote server.
ssh -D 1080 user@remote-server
This opens a SOCKS5 proxy on local port 1080. Configure your browser or application to use localhost:1080 as a SOCKS proxy, and all traffic goes through the SSH connection, exits from the remote server, and comes back the same way.
It's like a lightweight VPN for specific applications.
Why This Is Useful
Accessing region-locked services. Server in a different region? Route your browser through it. Not for circumventing Netflix geo-restrictions, but for legitimate scenarios like testing how your website behaves from different geographic locations, or accessing internal tools that are IP-restricted to a specific network.
Browsing securely on untrusted networks. Coffee shop WiFi. Hotel networks. Conference center connections where who knows what's between you and the internet. SOCKS proxy through your own server encrypts everything between your laptop and the exit point. Not as thorough as a full VPN (only proxied applications are protected), but requires zero additional software.
Accessing an entire internal network. With local port forwarding, you tunnel to one specific service. With a SOCKS proxy through a bastion host, you can access any service on the internal network: just configure your application to use the SOCKS proxy. Database here, web dashboard there, API endpoint over there, all through a single SSH connection.
In Firefox, you can set a SOCKS proxy under Settings > Network Settings. For command-line tools, many support the ALL_PROXY environment variable:
export ALL_PROXY=socks5h://localhost:1080
curl http://internal-service.vpc:8080/health
The socks5h:// scheme matters here: it tells curl to hand the hostname to the proxy, so internal-service.vpc is resolved by the remote server's DNS. With plain socks5://, curl would try (and fail) to resolve the internal name locally before the request ever reached the proxy. The request goes through the SOCKS proxy, through the SSH tunnel, and the result comes back the same way; your local machine never needs to know the internal network's IP ranges.
Jump Hosts: Chaining SSH Connections
Production environments often sit behind a bastion (jump) host. You SSH to the bastion, then from there SSH to the actual server. Doing this manually means two SSH commands and two authentication steps.
The -J flag (available since OpenSSH 7.3) handles this in one command:
ssh -J user@bastion.example.com user@internal-server.vpc
SSH connects to the bastion, then transparently creates a second connection from the bastion to the internal server. One command. One authentication flow (if you have key-based auth set up). The bastion acts as a relay: the inner session is encrypted end-to-end between your client and the internal server, so the bastion never sees your plaintext traffic.
Multiple jumps work too:
ssh -J user@bastion1,user@bastion2 user@final-destination
Chains through two jump hosts. Haven't needed more than two in practice, but as far as I can tell, the capability exists.
Combining Tunnels with Jump Hosts
The real power, from what I have seen, is combining these techniques. Tunnel to a database through a jump host:
ssh -J user@bastion -L 5432:db.internal:5432 -N user@app-server
Connects through the bastion to the app server, then sets up a local tunnel from the app server to the database. Your laptop accesses localhost:5432, traffic flows through the bastion, through the app server, to the database. Three hops. One command.
SSH Config: Stop Typing Long Commands
Every long SSH command you type regularly should live in ~/.ssh/config. This file transforms multi-flag commands into simple hostnames.
Host bastion
    HostName bastion.example.com
    User ubuntu
    IdentityFile ~/.ssh/bastion-key.pem
    ServerAliveInterval 60
    ServerAliveCountMax 3

Host app-server
    HostName 10.0.5.12
    User deploy
    ProxyJump bastion
    IdentityFile ~/.ssh/app-key.pem

Host db-tunnel
    HostName 10.0.5.12
    User deploy
    ProxyJump bastion
    LocalForward 5433 prod-db.internal:5432
    IdentityFile ~/.ssh/app-key.pem
    RequestTTY no
Now:
ssh bastion # Direct connection to bastion
ssh app-server # Jumps through bastion automatically
ssh -N db-tunnel # Sets up the database tunnel, no shell
The db-tunnel entry is my favorite pattern. LocalForward in the config does the same thing as -L on the command line. RequestTTY no skips TTY allocation; combined with -N (no remote command), that gives you just the tunnel. I have entries like this for every service I tunnel to regularly. Muscle memory is ssh -N db-tunnel and then I'm connected.
ServerAliveInterval 60 sends a keepalive packet every 60 seconds. Without it, idle connections get killed by firewalls or NAT tables that expire inactive sessions. Spent too long being confused by "broken pipe" errors on connections that had been idle for a few minutes before adding this.
ServerAliveCountMax 3 means if three consecutive keepalives get no response, the client terminates the connection. Without this, a dead connection can hang indefinitely: your terminal looks like it's connected, but nothing you type goes through. The combination of these two settings means: check every 60 seconds, give up after 3 misses (180 seconds of no response).
Multiplexing: Reuse Connections
If you SSH to the same server frequently, connection multiplexing speeds things up. Add to your config:
Host *
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h-%p
    ControlPersist 600
Create the sockets directory:
mkdir -p ~/.ssh/sockets
First SSH connection to a host creates a persistent socket. Subsequent connections to the same host reuse that socket: no new TCP handshake, no new authentication. The second ssh command feels instant because it's piggybacking on an existing connection.
ControlPersist 600 keeps the master connection alive for 10 minutes after the last session closes. So if you disconnect and reconnect within 10 minutes, the reconnection is instant.
This also makes SCP and rsync faster for repeated transfers: they use the existing multiplexed connection instead of establishing a new one each time.
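OpenSSH's -O flag talks to a running master connection, which is handy for managing these sockets. The commands below assume a host alias like the bastion entry from earlier; they need a live master connection to do anything:

```shell
# Ask the master connection whether it's alive
# (prints "Master running" plus the master's PID if so)
ssh -O check bastion

# Cleanly shut down the master and all multiplexed sessions riding on it
ssh -O exit bastion
```

Useful when a stale control socket is causing odd connection errors: exit the master and let the next ssh command create a fresh one.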
Real Scenarios From My Workflow
A few concrete examples of how these techniques combine in my day-to-day work.
Debugging a Production Issue
Application is returning 500 errors. Need to check the database and the application logs. Instead of SSHing to multiple machines separately:
# Terminal 1: Database tunnel
ssh -N db-tunnel
# Terminal 2: Log tail through jump host
ssh app-server 'tail -f /var/log/app/error.log'
# Terminal 3: Connect to database
psql -h localhost -p 5433 -U app_user -d production
Three terminal panes. Full visibility into the production environment. All connections go through the bastion. All encrypted. All logged.
Port Scanning an Internal Network
Security audit. Need to check what ports are open on internal hosts. Only access is through a bastion.
ssh -D 9050 user@bastion
Then use proxychains with nmap. Local port 9050 was no accident: the stock proxychains config typically ships with a SOCKS entry pointing at 127.0.0.1:9050, so no config edit is needed.
proxychains nmap -sT -Pn 10.0.5.0/24 -p 22,80,443,3306,5432
All scan traffic routes through the SOCKS proxy, through the bastion, to the internal network. The bastion's logs show the SSH connection. The internal hosts see traffic coming from the bastion's IP.
Multiple Tunnels at Once
Sometimes you need several tunnels simultaneously: database, cache, monitoring dashboard. You can stack multiple -L flags:
ssh -L 5433:db.internal:5432 \
    -L 6380:redis.internal:6379 \
    -L 9090:grafana.internal:3000 \
    -N user@bastion
One SSH connection, three tunnels. Local ports 5433, 6380, and 9090 each point to different internal services. I have an SSH config entry called all-tunnels for this exact setup when I need full development access to the internal environment.
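When a setup like this grows, the repeated -L flags can be generated. Here's a small hypothetical helper (not from the article) that assembles them from port:host:port specs:

```shell
# Hypothetical helper: turn "local:host:remote" specs into repeated -L flags.
build_forwards() {
  args=""
  for spec in "$@"; do
    args="$args -L $spec"
  done
  echo "${args# }"   # strip the leading space
}

FORWARDS=$(build_forwards \
  5433:db.internal:5432 \
  6380:redis.internal:6379 \
  9090:grafana.internal:3000)

echo "$FORWARDS"
# prints: -L 5433:db.internal:5432 -L 6380:redis.internal:6379 -L 9090:grafana.internal:3000

# Unquoted on purpose so each flag lands as a separate argument:
# ssh $FORWARDS -N user@bastion
```

Leaving $FORWARDS unquoted in the final ssh command is deliberate; it only works because the specs contain no spaces.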
Security Considerations
SSH tunneling is powerful and that power comes with responsibility. A few things to keep in mind.
Tunneling bypasses network controls. Firewalls block direct access to a database for a reason. When you tunnel through a bastion, you're probably circumventing that control. Make sure your organization's security policy allows it. Some compliance frameworks (PCI-DSS, SOC2) have specific requirements about how production data is accessed.
The bastion host sees the tunnel metadata. It doesn't see the encrypted payload, but it knows that you established a tunnel to a specific internal host and port. SSH logs on the bastion are your audit trail. Make sure they're collected and monitored.
Key management matters. If your SSH private key is compromised, every tunnel you can create with it is compromised. Use strong passphrases on keys. Use ssh-agent to avoid typing the passphrase repeatedly without storing it in plaintext.
# Start the agent
eval $(ssh-agent)
# Add your key (prompts for passphrase once)
ssh-add ~/.ssh/my_key
# All subsequent SSH commands use the loaded key
ssh bastion # No passphrase prompt
Don't leave tunnels running indefinitely. Background tunnels with -f are easy to forget. A tunnel to a production database that stays open on your laptop while you carry it to a coffee shop is an unnecessary exposure. Kill tunnels when you're done. Write a cleanup alias if you need to:
alias kill-tunnels='pkill -f "ssh.*-[NL]"'
Troubleshooting Common Issues
"Address already in use." The local port you're trying to forward is already taken. Either another tunnel is running, or a local service is using that port. Use ss -tulnp | grep PORT to find what's holding it. Either kill that process or choose a different local port.
"channel 0: open failed: administratively prohibited." The SSH server is configured to deny port forwarding. Check AllowTcpForwarding in the server's sshd_config. If you don't control the server, you're stuck โ the admin has explicitly disabled this feature.
Tunnel connects but traffic doesn't flow. Usually a firewall on the destination host. The SSH tunnel gets you to the remote network, but the target service might have its own firewall rules. The tunnel works at the SSH level; the service at the other end needs to accept the connection independently.
Connection drops after idle period. Add ServerAliveInterval to your SSH config as shown above. Intermediate firewalls and NAT devices drop inactive TCP connections. Keepalives prevent this.
SSH tunneling replaced, from what I've seen, about 80% of my VPN usage for development and operations work. It's more targeted: you expose exactly the ports you need, nothing more. It's simpler: no VPN client, no split tunneling configuration, no routing table conflicts. It works everywhere SSH works, which is everywhere.
The remaining 20% where I still use a VPN: when I need full network-level access for tools that don't support SOCKS proxies, or when the tunnel count would get unwieldy. For most daily operations work, though, I think SSH tunnels are the faster, simpler, and more secure option. Already installed. Already configured. Just needs the right flags.
Keep Reading
- Linux Server Hardening: After the First SSH In. SSH gets you in, but locking down the server after that first connection is just as important.
- Cybersecurity Survival: A Practical Scenario. Understanding tunneling is part of a broader security skill set; this walks through real incident response.
Written by
Anurag Sinha
Full-stack developer specializing in React, Next.js, cloud infrastructure, and AI. Writing about web development, DevOps, and the tools I actually use in production.