Linux Server Hardening: After the First SSH In
The steps I take on every new Linux server before it faces the internet: SSH lockdown, firewall rules, fail2ban, automatic updates, and the security that actually matters.

Spun up a fresh VPS once, installed Nginx, deployed a small project, went to bed. Woke up to 47,000 failed SSH login attempts in auth.log. The server had been online for about eight hours. Nobody knew it existed; I hadn't pointed a domain at it yet. Didn't matter. Automated scanners found it within minutes of the IP going live.
That was the morning I learned that a server connected to the internet is under attack by default. Not targeted attacks from sophisticated hackers, just automated bots scanning every IP address on the internet, trying default credentials, looking for known vulnerabilities. The overwhelming majority of "attacks" are noise, from what I've seen. But noise with root access to your server is still catastrophic.
Every new server I set up now follows the same hardening checklist. Takes about 30 minutes. Prevents the most common ways things go wrong. None of this is paranoia; it's the baseline.
Step One: Create a Non-Root User
Cloud providers usually give you root access on a new VPS. First thing: stop using root for daily operations. Create a regular user with sudo privileges.
adduser anurag
usermod -aG sudo anurag
On RHEL-based systems (CentOS, Rocky, Alma), replace sudo with wheel:
useradd anurag
passwd anurag
usermod -aG wheel anurag
Test the new user before proceeding. This is the step most people skip. Open a new terminal, SSH in as the new user, and verify sudo works. Don't lock yourself out by disabling root before confirming the new user can elevate privileges. I've done that. It's a bad day when your only option is the cloud provider's rescue console.
ssh anurag@your-server-ip
sudo whoami # Should output: root
Once confirmed, everything from here is done as the non-root user with sudo where needed.
Step Two: SSH Key Authentication
Password authentication over SSH is the single biggest vulnerability on a fresh server. Those 47,000 login attempts I mentioned? All brute-forcing passwords. Even a strong password is vulnerable to persistent automated attacks. SSH keys are effectively unbrute-forceable.
If you don't already have an SSH key pair on your local machine:
ssh-keygen -t ed25519 -C "anurag@workstation"
Ed25519 is the modern choice: shorter keys, strong security, and better performance than RSA. Copy the public key to the server:
ssh-copy-id anurag@your-server-ip
Or manually:
mkdir -p ~/.ssh
chmod 700 ~/.ssh
echo "your-public-key-here" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
The permissions matter more than you might think. With the default StrictModes yes, sshd ignores authorized_keys if the file or the .ssh directory is too permissive: 700 on the .ssh directory, 600 on authorized_keys. Get these wrong and SSH silently falls back to password authentication, which defeats the purpose.
Test key-based login from a new terminal before disabling passwords. I keep stressing this test-before-locking pattern because the alternative is getting locked out of your own server.
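A way to make that test unambiguous is to force the client to skip password authentication entirely. Both flags below are standard OpenSSH client options; if this login fails, your key setup isn't working and it's not safe to disable passwords yet:

```shell
# Refuse to fall back to a password prompt; succeeds only via the key
ssh -o PasswordAuthentication=no -o PreferredAuthentications=publickey anurag@your-server-ip
```
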
Step Three: Lock Down SSH
Now that key authentication works, configure SSH to reject everything else. Edit /etc/ssh/sshd_config:
sudo nano /etc/ssh/sshd_config
The settings that matter:
# Disable root login entirely
PermitRootLogin no
# Disable password authentication
PasswordAuthentication no
# Disable empty passwords (obviously)
PermitEmptyPasswords no
# Only allow specific users
AllowUsers anurag
# Disable X11 forwarding (reduces attack surface)
X11Forwarding no
# Set a login grace period
LoginGraceTime 30
# Limit authentication attempts per connection
MaxAuthTries 3
# Disable SSH protocol 1 (ancient and broken; OpenSSH 7.4+ removed it
# entirely and ignores this directive, so it only matters on old servers)
Protocol 2
AllowUsers anurag is a whitelist: only the listed user can SSH in. Even if someone creates another user on the system, they can't use SSH unless they're on this list. Additional users are separated by spaces: AllowUsers anurag deploy.
After editing, validate the config before restarting SSH. A syntax error in sshd_config will prevent SSH from starting, locking you out:
sudo sshd -t
If no errors are printed, restart:
sudo systemctl restart sshd
Keep your current SSH session open. Open a new terminal and test. If the new connection works, you're good. If it doesn't, you still have the original session to fix whatever went wrong.
Some people also change the SSH port from 22 to something else. I do this, not because it's real security (anyone scanning your server will find the alternative port quickly), but because it dramatically reduces the noise in auth.log, from what I've seen. Moving to port 2222 or similar cuts automated login attempts by 95%+ because most bots only try port 22.
Port 2222
If you change the port, update your SSH config and remember to open the new port in your firewall before closing port 22. I covered managing SSH configs and using the ~/.ssh/config file in my Linux CLI post; it saves typing the port number on every connection.
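For reference, a client-side entry looks like this (the alias, IP, and key path are placeholders for your own values):

```
Host myvps
    HostName your-server-ip
    User anurag
    Port 2222
    IdentityFile ~/.ssh/id_ed25519
```

After that, ssh myvps connects with the right user, port, and key in one short command.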
Step Four: Firewall with UFW
UFW (Uncomplicated Firewall) is a frontend for iptables that does what the name promises: it makes firewall configuration uncomplicated. Default deny incoming, allow outgoing, open only what you need.
# Set default policies
sudo ufw default deny incoming
sudo ufw default allow outgoing
# Allow SSH (use your custom port if changed)
sudo ufw allow 2222/tcp
# Allow HTTP and HTTPS
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
# Enable the firewall
sudo ufw enable
Check the rules:
sudo ufw status verbose
Order matters slightly less with UFW than with raw iptables, but the principle is the same: default deny, explicit allow. Anything not explicitly allowed is blocked.
If you're running additional services (a Node.js app on port 3000, a database on port 5432), don't open those ports to the internet. Use a reverse proxy (Nginx) for web traffic and SSH tunnels or a VPN for database access. The database should never be reachable from the public internet. I found an exposed PostgreSQL instance at a previous job: no password, open to the world. The data was already being scraped when we discovered it.
For rate limiting SSH connections specifically:
sudo ufw limit 2222/tcp
limit allows up to six connections in 30 seconds from a single IP, then blocks further attempts. Light protection against brute force on top of key-only authentication.
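If you open a port by mistake, UFW can list rules with numbers and delete by number, which is easier than retyping the full rule:

```shell
# Show rules with index numbers, then delete a specific one
sudo ufw status numbered
sudo ufw delete 3
```
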
Step Five: Fail2Ban
Even with SSH keys and a firewall, the noise continues. Bots hitting port 22 (or whatever port you're using), failed authentication attempts filling up logs. Fail2ban watches log files for patterns that indicate abuse and temporarily bans the offending IP addresses.
sudo apt install fail2ban
Create a local config file (don't edit the default; it gets overwritten on package updates):
sudo nano /etc/fail2ban/jail.local
[DEFAULT]
bantime = 3600
findtime = 600
maxretry = 3
banaction = ufw
[sshd]
enabled = true
port = 2222
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
[nginx-http-auth]
enabled = true
filter = nginx-http-auth
logpath = /var/log/nginx/error.log
maxretry = 5
[nginx-botsearch]
enabled = true
filter = nginx-botsearch
logpath = /var/log/nginx/access.log
maxretry = 2
Three failed SSH login attempts within 10 minutes (findtime = 600) triggers a one-hour ban (bantime = 3600). The banaction = ufw setting tells fail2ban to add blocks through UFW instead of raw iptables, keeping your firewall rules consistent.
The nginx jails catch bots probing for common exploits: wp-login.php on a non-WordPress site, phpmyadmin paths, common vulnerability scanners. They won't stop a determined attacker, but they cut down the noise dramatically.
Start and enable:
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
Check what's been banned:
sudo fail2ban-client status sshd
After a few days, you'll see dozens or hundreds of banned IPs. All automated bots that tried default credentials and got blocked after three attempts. Without fail2ban, those bots keep trying indefinitely, filling your logs and wasting server resources.
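fail2ban-client also handles the inevitable false positive, like banning your own office IP after fumbling a key passphrase three times (the address below is an example):

```shell
# Overview of all active jails and their ban counts
sudo fail2ban-client status
# Lift a ban from a specific jail without waiting for bantime to expire
sudo fail2ban-client set sshd unbanip 203.0.113.5
```
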
Step Six: Automatic Security Updates
The worst security incidents I've seen in practice weren't from sophisticated attacks. They were from known vulnerabilities that had patches available that nobody applied. The Equifax breach exploited a known Apache Struts vulnerability with a patch that had been available for months. Applying security updates automatically prevents this class of failure.
On Ubuntu/Debian:
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
Verify it's enabled by checking /etc/apt/apt.conf.d/20auto-upgrades:
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
The default configuration only applies security updates, not all updates. This is probably the right default. Security patches are small, well-tested, and critical. Upgrading major package versions automatically can break things.
Configure email notifications so you know when updates are applied. In /etc/apt/apt.conf.d/50unattended-upgrades:
Unattended-Upgrade::Mail "your-email@example.com";
Unattended-Upgrade::MailReport "on-change";
For automatic reboots when a kernel update requires it:
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "04:00";
Controversial setting. Automatic reboots at 4 AM mean brief downtime. For a solo project, that's fine. For a production service, you'd want a more controlled process: maybe a notification that a reboot is needed, then a human scheduling it during a maintenance window. I use automatic reboots on personal servers and staging environments, manual reboots on production.
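To confirm the whole setup works without waiting for the nightly run, unattended-upgrades ships a dry-run mode that walks through a full cycle without installing anything:

```shell
# Simulate an upgrade run with verbose output; nothing is installed
sudo unattended-upgrade --dry-run --debug
```
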
Step Seven: Secure Shared Memory
A quick setting that prevents certain classes of attacks. Shared memory (/dev/shm) can be used by attackers to execute malicious programs. Mounting it as non-executable limits this:
Add to /etc/fstab:
tmpfs /dev/shm tmpfs defaults,noexec,nosuid,nodev 0 0
Apply without rebooting:
sudo mount -o remount /dev/shm
This can break some applications that legitimately use shared memory for execution (certain database engines, Chrome/Chromium in headless mode). Test your applications after applying. For a web server running Nginx and Node.js, it's fine.
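You can verify the remount took effect by reading the active mount options back:

```shell
# Print the current mount options for /dev/shm; noexec and nosuid
# should appear in the list after the remount
findmnt -no OPTIONS /dev/shm
```
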
Step Eight: Log Monitoring
All the security measures above generate logs. Fail2ban bans get logged. SSH attempts get logged. Firewall blocks get logged. But logs are only useful if someone reads them.
For solo projects, I use a simple approach: a daily summary email:
sudo apt install logwatch
Configure /etc/logwatch/conf/logwatch.conf:
Output = mail
MailTo = your-email@example.com
Detail = Med
Range = yesterday
Logwatch parses system logs and sends a daily digest. A 30-second scan of the email tells me whether anything unusual happened: new banned IPs, failed service starts, disk space warnings.
For more serious monitoring, centralized logging with the ELK stack or Grafana Loki gives you searchable, alertable logs across multiple servers. Overkill for a single VPS. Necessary once you're managing more than a few machines. I covered the broader approach to monitoring and observability in my system design post if you're scaling beyond a single server.
What I Don't Bother With
Security hardening can become an infinite rabbit hole. Some measures add meaningful protection. Others add complexity without proportional benefit.
SELinux/AppArmor tuning. Both are enabled by default on most distributions and the default policies are reasonable. Custom policies are worth the effort for high-value targets. For a blog, a side project, or a small business site, the defaults are fine.
Port knocking. Neat concept: the SSH port stays closed until you knock on a sequence of other ports in the right order. But it adds obscurity without real security. If an attacker is capturing your network traffic, they can see the knock sequence. If they're not, fail2ban and key-only authentication already protect you.
Full disk encryption at rest. Important for laptops. Less relevant for cloud VPS instances where the provider already manages physical security. If your threat model includes the hosting provider being compromised, you have bigger problems than disk encryption.
Elaborate intrusion detection systems. OSSEC, Wazuh, and similar tools are powerful but complex to configure and maintain. For a team with dedicated security operations, absolutely. For a developer running a few servers, the maintenance burden often exceeds the security benefit. Fail2ban covers the most common attack vectors with minimal overhead.
The Checklist
Every new server, in order:
- Create a non-root user with sudo
- Set up SSH key authentication
- Disable root login and password authentication
- Configure UFW firewall (deny all, allow specific ports)
- Install and configure fail2ban
- Enable unattended security upgrades
- Secure shared memory
- Set up basic log monitoring
Takes 30 minutes. Prevents the vast majority of common attacks. The servers running my personal projects and the blog you're reading right now follow this exact checklist.
After the Basics
Once the baseline is in place, ongoing maintenance matters more than adding more security layers.
Review logs weekly. Even with logwatch emails, spending five minutes reading auth.log and checking fail2ban status once a week catches things that automated alerts miss.
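My weekly review usually starts with a one-liner that counts failed SSH logins per source IP. The sample lines below stand in for /var/log/auth.log so the pipeline is easy to try; on a real server you'd point it at the actual log:

```shell
# Sample data mimicking sshd "Failed password" lines in auth.log
cat > /tmp/auth-sample.log <<'EOF'
Jan 10 03:12:01 vps sshd[911]: Failed password for root from 203.0.113.5 port 40022 ssh2
Jan 10 03:12:04 vps sshd[911]: Failed password for root from 203.0.113.5 port 40023 ssh2
Jan 10 03:13:10 vps sshd[912]: Failed password for invalid user admin from 198.51.100.7 port 52110 ssh2
EOF
# Count failed attempts per source IP, busiest first
grep 'Failed password' /tmp/auth-sample.log \
  | grep -oE 'from [0-9.]+' | awk '{print $2}' \
  | sort | uniq -c | sort -rn
```

The same pipeline against the real auth.log shows at a glance whether one address is hammering you or the noise is spread thin.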
Keep software updated. Unattended upgrades handle security patches. But the applications you deploy (Node.js dependencies, Python packages, Docker images) need updating too. npm audit, pip-audit, and Trivy for container scanning should be part of your deployment pipeline. I covered automated security auditing as part of CI/CD workflows in my GitHub Actions post.
Back up your server. Security hardening won't help if the server's disk fails or if ransomware encrypts everything. Automate backups to an offsite location (different provider, different region), and test restoring from backup periodically. The backup you never tested is the backup that doesn't work.
Principle of least privilege. Each service runs as its own user with only the permissions it needs. Nginx runs as www-data, not root. Your Node.js app runs as a dedicated deploy user, not root. Database access is restricted to the specific databases and operations each application needs. If one service is compromised, the blast radius is limited to what that service's user can access.
# Create a dedicated user for your application
sudo useradd -r -s /bin/false appuser
# The -r flag creates a system user
# The -s /bin/false prevents interactive login
The -s /bin/false part means even if someone compromises the application, there's no interactive shell to log in with as appuser. A compromised process still runs code as that user, but the attack surface is reduced.
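Wiring that user into the service is usually done in the systemd unit. A sketch, with a hypothetical unit name and paths; User= and Group= are the lines that enforce least privilege:

```
# /etc/systemd/system/myapp.service (name and paths are illustrative)
[Unit]
Description=My Node.js app
After=network.target

[Service]
User=appuser
Group=appuser
ExecStart=/usr/bin/node /opt/myapp/server.js
Restart=on-failure
# Extra containment: mostly read-only filesystem, no privilege escalation
ProtectSystem=full
NoNewPrivileges=true

[Install]
WantedBy=multi-user.target
```

Even if the app is compromised, the process can't write outside its allowed paths or gain new privileges through setuid binaries.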
The Reality Check
No server is perfectly secure. The goal isn't perfect security โ it's making your server harder to compromise than the millions of other servers on the internet that haven't been hardened at all. Automated bots go for the easiest targets. A server with SSH keys, a firewall, and fail2ban is not an easy target. The bot moves on.
For targeted attacks by skilled humans, this baseline isn't enough. But if you're running a personal project, a small business, or a staging environment, you're not facing targeted attacks. You're facing automated noise. The steps above handle the noise, and that's what matters for 99% of servers.
A Note on Docker and Containers
If you're running applications in Docker containers, the host OS still needs hardening. Containers provide isolation but they share the host kernel. A container escape vulnerability combined with an unhardened host is game over. Everything above applies to Docker hosts too. And don't run containers as root; I covered this in my Docker post with the specific Dockerfile changes needed.
Docker also introduces its own attack surface. The Docker daemon runs as root. Anyone with access to the Docker socket can effectively become root on the host. Limit who can run Docker commands. Don't expose the Docker API over the network without TLS authentication. And keep your base images updated โ a container built on a six-month-old Ubuntu image has six months of unpatched vulnerabilities baked in.
Thirty minutes of setup. Prevents years of potential headaches. Do it on every server, every time, no exceptions.
Further Resources
- CIS Benchmarks: industry-standard security configuration guides for Linux distributions, covering hardening steps validated by security professionals worldwide.
- NIST SP 800-123 (Guide to General Server Security): NIST's guidelines for securing server operating systems, including patching, access control, and logging.
- Linux Audit: practical guides on Linux security hardening, vulnerability assessment, and compliance checking with tools like Lynis.
Written by
Anurag Sinha
Full-stack developer specializing in React, Next.js, cloud infrastructure, and AI. Writing about web development, DevOps, and the tools I actually use in production.