
Nginx Security Hardening: The Pragmatic Agency Guide

A no-fluff walkthrough of the Nginx hardening I apply on every server we manage. Covers TLS, security headers, rate limiting, and what to skip. Written for agencies, not enterprises.


Most Nginx hardening guides on the internet were written for enterprises with a dedicated security team. You don’t have that. You have a Tuesday afternoon between a client call and a deploy, and you want a server that doesn’t get probed into the dirt overnight.

This is the configuration I actually run on the agency-grade servers we manage at Webnestify. It’s the floor, not the ceiling. Deliberately conservative. Nothing here breaks WordPress or WooCommerce, and nothing forces you to argue with a SaaS vendor about exotic header values.

If you’re hardening your first server today, copy the snippets, test in staging first, and read the Verification section at the bottom before you sleep on it.


The “default Nginx” trap

A fresh Nginx install ships configured for maximum compatibility, not security. That’s a sane default for a project that has to run on every Linux distro, every cloud, every reverse-proxy scenario in existence. It is not a sane default for a server you’re putting on the public internet.

Three problems with default Nginx:

  1. It announces its version in every response (Server: nginx/1.24.0), a free shopping list for vulnerability scanners.
  2. It accepts every TLS protocol it knows about, including ones browsers stopped trusting years ago.
  3. It has no rate limits. The first script kiddie that finds your /wp-login.php will hammer it 200 times per second until something gives.

You don’t fix this with a 400-line config. You fix it with five changes.

The five changes that actually matter

Here’s the short list, ranked by impact-per-line:

| Change | What it stops | Effort |
| --- | --- | --- |
| TLS protocol & cipher hardening | Downgrade attacks, BEAST/POODLE-class issues | 10 min |
| Security response headers | XSS, clickjacking, MIME-sniffing | 15 min |
| Rate limiting on auth endpoints | Credential stuffing, login brute-force | 10 min |
| Hide Nginx version + disable autoindex | Reconnaissance, accidental directory listing | 2 min |
| Disable unused HTTP methods | Cheap WAF-bypass and protocol-abuse attempts | 5 min |

Other patterns (Content-Security-Policy, mTLS, ModSecurity rule tuning) are next steps, not starting lines.

[Image: Hardened Nginx server configuration architecture diagram]

The baseline above lives mostly in the http {} block of nginx.conf, so it applies to every virtual host on the server; the per-endpoint rules go in the relevant server blocks.

TLS configuration that doesn’t break clients

The goal here is modern, but not aggressive. Cut TLS 1.0 and 1.1 (dropped by the major browsers in 2020 and formally deprecated by RFC 8996). Keep TLS 1.2 because some legacy POS terminals and corporate proxies still need it. Add TLS 1.3 because there’s no reason not to.

Drop this in the http {} block of /etc/nginx/nginx.conf:

# /etc/nginx/nginx.conf — http block
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305';
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1h;
ssl_session_tickets off;

# OCSP stapling — faster TLS handshakes for visitors
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 1.0.0.1 valid=300s;
resolver_timeout 5s;

The cipher list above is the Mozilla “intermediate” profile, which is the right choice for ~99% of agency sites. The “modern” profile is stricter but locks out anything older than ~2018, which will break a surprising number of B2B integrations.
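The http-block settings above set policy globally; each site’s server block then only needs its own certificate paths. A minimal sketch, with Let’s Encrypt-style placeholder paths you’d swap for your own:

```nginx
# /etc/nginx/sites-available/example.com (placeholder paths)
server {
    listen 443 ssl http2;   # on Nginx >= 1.25.1, prefer "listen 443 ssl;" plus "http2 on;"
    server_name example.com;

    # Per-site certificates; protocol and cipher policy is inherited from http {}
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}
```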

What you should skip

I see these in copy-pasted gists. Skip them unless you have a specific reason:

  • ssl_dhparam with a custom 4096-bit Diffie-Hellman group. The cipher list above is all-ECDHE. You don’t need DH params at all.
  • HSTS preload submission on a brand-new domain. Submit HSTS preload only after the site has been HTTPS-only for at least 6 months. It’s a one-way door.
  • ssl_early_data on; for TLS 1.3 0-RTT. Convenient but introduces replay-attack risk on POST requests. Not worth it for most sites.

Security headers — the real anti-XSS layer

[Image: Layered HTTP security headers protecting a web application from XSS and clickjacking]

Each header is a distinct defense layer. The browser is your enforcement point, not the server.

This is where most of the hardening value lives, and where most copy-pasted configs fail because they paste headers without understanding what they do.

# /etc/nginx/nginx.conf — http block
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "camera=(), microphone=(), geolocation=(), interest-cohort=()" always;

# Content-Security-Policy in REPORT-ONLY mode first.
# Without a report-uri/report-to directive, violations surface only in the
# browser's DevTools console; add one if you want reports collected server-side.
# Move to enforcement only after a week of clean reports.
add_header Content-Security-Policy-Report-Only "default-src 'self' https:; script-src 'self' 'unsafe-inline' https:; style-src 'self' 'unsafe-inline' https:; img-src 'self' data: https:; font-src 'self' https: data:" always;

The always flag matters. Without it, Nginx skips these headers on error responses (404s, 500s). Those are exactly when you want them most.
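One inheritance gotcha worth knowing: add_header directives are inherited from the http block only if the current level defines none of its own. A location that adds even a single header silently drops everything inherited from above, so re-declare the security headers there. A shared include keeps that manageable (the snippet filename below is just an example, not a file Nginx ships):

```nginx
# server/location block: any local add_header discards the inherited ones
location /downloads/ {
    add_header X-Robots-Tag "noindex" always;
    # Re-declare the security headers, e.g. via a shared include file
    # containing the add_header lines from the http block:
    include snippets/security-headers.conf;  # hypothetical include file
}
```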

Why Content-Security-Policy-Report-Only first?

Because every CSP I’ve ever shipped to production broke something subtle on day one. A WordPress plugin that inlines a Google Fonts URL. An analytics snippet loaded from a CDN you forgot existed. A theme that base64-embeds a small SVG with data: URIs.

Ship CSP in report-only mode for at least 5–7 days. Read the violation reports. Adjust. Then flip it to Content-Security-Policy.
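Flipping the switch is just a header rename; the policy value stays identical:

```nginx
# After a clean report-only week, rename the header. Value unchanged.
add_header Content-Security-Policy "default-src 'self' https:; script-src 'self' 'unsafe-inline' https:; style-src 'self' 'unsafe-inline' https:; img-src 'self' data: https:; font-src 'self' https: data:" always;
```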

Rate limiting without breaking legitimate traffic

The mistake everyone makes: setting one global rate limit and watching real visitors get throttled. You don’t want a global limit. You want scoped limits on the endpoints attackers care about.

# http block — define the zones
limit_req_zone $binary_remote_addr zone=login_zone:10m rate=5r/m;
limit_req_zone $binary_remote_addr zone=api_zone:10m rate=60r/m;
# server block — apply per-endpoint
location = /wp-login.php {
    limit_req zone=login_zone burst=3 nodelay;
    # ...your normal location config
}

location = /xmlrpc.php {
    deny all;  # Just block it. You almost certainly don't need XML-RPC.
}

location /api/ {
    limit_req zone=api_zone burst=10 nodelay;
    proxy_pass http://upstream;
}

Why 5 requests per minute for login? A human who mistypes their password gets 5 tries before the throttle kicks in. A bot running a credential-stuffing list also gets 5 tries per minute per source IP, which turns a 10,000-password run into a ~33-hour slog that’s economically uninteresting. (A distributed botnet can spread the work across many IPs, so treat this as a throttle, not a complete defense.)
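One optional refinement: by default Nginx rejects over-limit requests with 503, which is easy to mistake for a backend outage when you’re reading logs. Two directives make throttling unambiguous:

```nginx
# http block: make rate-limit rejections identifiable
limit_req_status 429;      # 429 Too Many Requests instead of the default 503
limit_req_log_level warn;  # log rejections at warn instead of error
```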

The line that breaks more sites than any attacker

client_max_body_size 8M;

Set too low (the default 1M), and your client can’t upload a 4MB hero image to WordPress. They’ll see a generic “413 Request Entity Too Large” and call you on Saturday.

Set too high (512M), and a single attacker filling your /tmp with garbage requests can DoS the box.

The sweet spot for most agency sites is 8M for content sites, 64M when there’s image-heavy media uploads, 256M only if you’re explicitly hosting video uploads.
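client_max_body_size is valid in the http, server, and location contexts, so you can keep the conservative global value and raise it only where large uploads actually happen. A sketch using the WordPress media-upload endpoint as the example:

```nginx
# http block: conservative default for everything
client_max_body_size 8M;

# server block: raise only where large uploads are expected
location = /wp-admin/async-upload.php {
    client_max_body_size 64M;
    # ...your normal PHP handling
}
```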

Hide what doesn’t need to be public

# http block
server_tokens off;        # Stop announcing the Nginx version
autoindex off;            # No "Index of /" pages
# server block — block sensitive files
location ~ /\.(?!well-known) {
    deny all;
    return 404;
}

location ~* \.(env|log|sql|bak|old|backup|swp)$ {
    deny all;
    return 404;
}

Returning 404 (not 403) is intentional. It tells the scanner “there’s nothing here,” not “there’s something here that I’m forbidding you from seeing.” The latter is a beacon.

Disable unused HTTP methods

# server block
if ($request_method !~ ^(GET|HEAD|POST|PUT|PATCH|DELETE|OPTIONS)$) {
    return 405;
}

Yes, if in Nginx is evil in some contexts, but the upstream “If is Evil” guidance calls out return inside if as one of the safe uses, and that’s all we’re doing here. This stops the TRACE/TRACK-method probes that show up in every scanner I’ve ever seen.
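If you’d rather avoid if entirely, limit_except does the same job, with two caveats: it’s location-scoped rather than server-wide, and it rejects with 403 instead of 405. A sketch:

```nginx
# location-scoped alternative to the if-based filter
location /api/ {
    limit_except GET POST {  # allowing GET also allows HEAD
        deny all;            # anything else gets 403
    }
    proxy_pass http://upstream;
}
```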

Verification: did it actually work?

[Image: Server monitoring dashboard showing Nginx access logs, TLS handshakes, and rate-limit enforcement metrics]

Without active log review for the first week, you don’t know if you hardened the server or just made it harder to reach for legitimate visitors.

If you can’t measure the change, you didn’t actually make it. Run all three of these after deploying:

  1. Headers and TLS, locally. Note that curl's --tlsv1.1 flag only sets the minimum version, so cap it with --tls-max to actually test the rejection:
    curl -I https://your-domain.com
    curl --tlsv1.2 -I https://your-domain.com                # should succeed
    curl --tlsv1.1 --tls-max 1.1 -I https://your-domain.com  # should fail
  2. TLS configuration, externally: SSL Labs Server Test. Aim for an A grade. A+ is achievable but not always worth the tradeoffs.
  3. Headers, externally: securityheaders.com. Aim for A with the report-only CSP. You’ll get A+ once CSP is enforcing.

Then, the part nobody does:

Tail access.log and error.log for at least 5 days. The most expensive bug isn’t an attack you blocked. It’s a legitimate workflow you silently broke.

sudo tail -f /var/log/nginx/access.log /var/log/nginx/error.log

Look for sustained 403/405/413 patterns from the same client IPs that are not attackers (your office IP, a known integration partner, a CDN edge). Adjust whitelist rules as needed.
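For the rate-limit zones, the documented way to exempt known-good IPs is the geo + map pattern: requests whose zone key is an empty string are never rate-limited. This variant replaces the login_zone definition from earlier (the office IP is a placeholder):

```nginx
# http block: exempt known-good IPs from the login limit
geo $rl_exempt {
    default      0;
    203.0.113.10 1;        # placeholder: your office IP
}
map $rl_exempt $login_key {
    0 $binary_remote_addr;
    1 "";                  # empty key = request is never rate-limited
}
limit_req_zone $login_key zone=login_zone:10m rate=5r/m;
```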

What’s not in this guide (and why)

I get asked about these by clients every month. They have their place, but they’re a step beyond the “every server, day one” baseline:

  • mTLS for admin endpoints. Excellent for B2B APIs. Overkill for /wp-admin/.
  • ModSecurity + OWASP CRS. Adds real protection but adds real complexity. Worth it on commerce sites; usually not on brochure sites.
  • fail2ban as an Nginx companion. I run it on every box, but the rate-limit zones above already do most of the work it would do.
  • GeoIP blocking. Tempting. Almost always backfires when a real customer travels.

If your business specifically needs one of these, that’s a separate conversation. Happy to have it on a discovery call.

Closing the loop

This config has been running unchanged on the agency tier of Webnestify hosting for the past 18 months. We’ve shipped maybe two updates to it: one when TLS 1.3 cipher recommendations shifted, and one when Permissions-Policy replaced the older Feature-Policy header.

The point isn’t that this config is perfect. The point is that it’s stable, measurable, and scoped to the threats agency-sized businesses actually face. That’s the difference between hardening that holds up over a year and hardening that becomes someone else’s problem in three months.


Want this handled, not just understood?

Reading the playbook is one thing. Running it on production at 2am is another. If you'd rather have me run it for you, the door is open.

Apply for Access