
Uptime Kuma: My Self-Hosted Monitoring Setup

How I deploy Uptime Kuma for client environments: the Docker stack, the proxy in front, and the notification traps I keep watching agencies fall into.


I run Uptime Kuma on every client environment that doesn’t already have a monitoring contract. It’s the open-source self-hosted monitoring tool that finally made me cancel my UptimeRobot Pro subscription, and after eighteen months across roughly thirty client servers, I have opinions about how to deploy it correctly.

This is the actual stack I ship: Uptime Kuma, Nginx Proxy Manager in front, Watchtower handling image updates, all on a small VPS that does nothing else. Three containers, one purpose. About forty minutes from a fresh server to a working dashboard with TLS and a status page on a custom domain.

If you’ve never deployed it, copy the compose files below verbatim, change the volume paths to match your server layout, and read the notification-trap section before you ship anything to a real client.

Why self-hosted monitoring is worth the hour it takes

Hosted monitoring services are great until they aren’t. UptimeRobot’s free tier checks every five minutes, which is forever in incident time. Their Pro plan starts at $7/month per 50 monitors, which adds up fast if you’re an agency with 30 clients. And whichever plan you pick, the vendor owns your historical uptime data, which becomes its own problem the day you migrate.

Uptime Kuma replaces them for around 95% of what an agency needs: HTTP/HTTPS checks down to 20-second intervals, keyword and JSON-path response matching, TCP port checks, certificate expiry warnings, push monitors for cron jobs, and a public status page you can put on a customer-facing subdomain. The thing it doesn’t have is a global probe network. Pingdom checks your site from 100 cities; Kuma checks it from the one location where you host it. For most agency clients that’s a fine tradeoff, but it’s the one worth being honest about.

The other thing self-hosting gives you is the data. Two years of uptime history stays in your SQLite file: queryable, exportable, yours.
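For a sense of what “yours” means in practice, the history is one SQLite file you can query directly. A minimal sketch, assuming the volume path from the compose file later in this post and the heartbeat/monitor tables current releases use (status 1 = up); check the schema with .schema first if your version differs:

sqlite3 /home/user/docker/uptime-kuma/data/kuma.db \
  "SELECT m.name,
          ROUND(100.0 * SUM(h.status = 1) / COUNT(*), 2) AS uptime_pct
     FROM heartbeat h
     JOIN monitor  m ON m.id = h.monitor_id
    GROUP BY m.name
    ORDER BY uptime_pct;"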

The Uptime Kuma deployment stack

Here’s what I run on every monitoring box. The whole stack fits in one docker-compose.yml if you want, but I keep them in separate files so I can update one service without touching the others.

Prerequisites

You need a VPS with at least 2GB of RAM. A 1GB box technically runs Uptime Kuma, but leaves no headroom for the proxy in front. Pay the extra euro per month and skip the resource-fight.

You also need DNS access for the domain you’ll use, ideally a provider that supports API-driven Let’s Encrypt challenges (Cloudflare is the obvious choice). And you need a properly hardened server before you put anything on it. If you haven’t done that yet, start with the security guides under /insights/cybersecurity-hardening/ and come back. Deploying on an unsecured server is a category of mistake I won’t help you debug.

Docker engine

Install Docker from the official Docker repository. Follow the official Docker installation guide for Ubuntu. Don’t install from apt’s default repo or from a third-party “convenient” script you found on someone’s blog. The official repository is the only correct source.
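The short version, condensed from the official Ubuntu instructions at the time of writing (cross-check the upstream guide before running, since the exact steps do change between releases):

sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin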

Nginx Proxy Manager

Nginx Proxy Manager (NPM) is the GUI in front of every Docker stack I deploy. It handles TLS termination, automatic Let’s Encrypt renewal, and HTTP-to-HTTPS redirects without me touching a config file. The web UI runs on port 81. Public traffic goes through 80 and 443.

version: "3.3"
services:
  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - /home/user/docker/npm/data:/data
      - /home/user/docker/npm/ssl:/etc/letsencrypt
    environment:
      DISABLE_IPV6: 'true'
    labels:
      - "com.centurylinklabs.watchtower.enable=true"

Change /home/user/docker/npm/ to whatever path layout your server uses. I keep everything Docker-related under /srv/docker/<service>/ on production boxes, but pick a convention and stick to it across every host you operate.

Warning: Port 81 is the NPM admin UI. Do not leave it exposed to the public internet. Whitelist your office IP at the firewall level, or put it behind a self-hosted VPN. I cover the VPN option in the Mistborn self-hosted VPN platform post. That’s the path I take for clients where a static office IP isn’t reliable.
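One way to do the whitelisting, assuming ufw and a static office IP (203.0.113.10 below is a placeholder). Note that Docker publishes ports through its own iptables chain and can bypass ufw, so verify from an outside host that 81 really is closed; binding the admin port to loopback in the compose file and reaching it over an SSH tunnel or VPN is the more reliable lock.

# ufw approach: verify from outside, since Docker-published ports can bypass ufw
sudo ufw allow from 203.0.113.10 to any port 81 proto tcp
sudo ufw deny 81/tcp

# More robust: in the NPM compose file, bind the admin UI to loopback only
#   ports:
#     - '127.0.0.1:81:81'
# then reach it with an SSH tunnel: ssh -L 8181:localhost:81 user@server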

Uptime Kuma itself

The Uptime Kuma container is a single image with one volume for its SQLite database. Default port is 3001. You’ll proxy this through NPM in a moment.

version: '3.3'

services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime-kuma
    volumes:
      - /home/user/docker/uptime-kuma/data:/app/data
    ports:
      - 3001:3001
    restart: always
    labels:
      - "com.centurylinklabs.watchtower.enable=true"

Bring it up:

docker compose -f /path/to/uptime-kuma.yml up -d

In the NPM admin, add a Proxy Host pointing status.yourdomain.com to http://uptime-kuma:3001, request a Let’s Encrypt certificate, and tick “Force SSL” and “HTTP/2 Support”. Two minutes of clicking, and you have a TLS-secured monitoring dashboard.
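One wiring detail worth calling out: because the two compose files above are separate, Docker puts each stack on its own default network, so the hostname uptime-kuma may not resolve from inside the NPM container. Either point NPM at the host’s IP and the published port 3001, or join both services to a shared network. A minimal sketch of the shared-network approach:

# One-time, on the host:
docker network create proxy

# Then in each compose file (shown here for Uptime Kuma; do the same for NPM):
services:
  uptime-kuma:
    # ...existing settings...
    networks:
      - proxy

networks:
  proxy:
    external: true

Once both containers sit on the shared network, the forward hostname uptime-kuma resolves from NPM, and you can drop the published 3001 port entirely if the only way in is through the proxy.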

Watchtower for image updates

Watchtower handles the boring part: pulling new image tags and restarting containers when an upstream release ships. I label every container I want auto-updated with com.centurylinklabs.watchtower.enable=true, then let Watchtower decide what to do.

version: '3'

services:
  watchtower:
    image: containrrr/watchtower:latest
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/timezone:/etc/timezone:ro
    environment:
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_INCLUDE_RESTARTING=true
      - WATCHTOWER_INCLUDE_STOPPED=true
      - WATCHTOWER_REVIVE_STOPPED=true
      - WATCHTOWER_LABEL_ENABLE=true
      - WATCHTOWER_NOTIFICATIONS=email
      - WATCHTOWER_NOTIFICATION_EMAIL_FROM=
      - WATCHTOWER_NOTIFICATION_EMAIL_TO=
      - WATCHTOWER_NOTIFICATION_EMAIL_SERVER=smtp.postmarkapp.com
      - WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT=587
      - WATCHTOWER_NOTIFICATION_EMAIL_SERVER_USER=
      - WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PASSWORD=
    command: --interval 86400
    labels:
      - "com.centurylinklabs.watchtower.enable=true"

The --interval 86400 flag runs Watchtower once every 24 hours, measured from when the container starts, so I bring the container up at 04:00 server time to keep update restarts away from business-hours traffic.
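If you would rather pin the run time explicitly than rely on the container’s start time, Watchtower also accepts a cron-style schedule (six fields, seconds first). A variant of the environment block, swapping the schedule in for the interval flag:

    environment:
      # ...notification settings as above...
      - WATCHTOWER_SCHEDULE=0 0 4 * * *   # every day at 04:00 server time
    # remove the command: --interval 86400 line; the two options are mutually exclusive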

WATCHTOWER_CLEANUP=true matters more than people realize. Without it, every release leaves the old image tag on disk, and after six months of updates you’ve got 40GB of dangling image layers eating your VPS storage.
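If you are retrofitting cleanup onto a box that has been updating for a while, a one-off prune reclaims the space already lost; both commands below are standard Docker CLI:

docker system df          # see how much disk images are currently eating
docker image prune -f     # remove dangling layers left behind by old updates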

The notification trap that burns everyone

Here’s the part the install guide doesn’t warn you about: Uptime Kuma’s notification setup is where most agency deployments quietly stop working.

The default email transport assumes you have a working SMTP relay. If you point it at your domain’s own mail server, half your alerts will land in spam, the other half will get rate-limited, and you’ll find out at 6am when a customer’s site has been down for two hours and nothing alerted.

Use a transactional mail provider. Postmark, Mailgun, and Amazon SES all work. Set up SPF, DKIM, and DMARC on the sending domain before you configure the Uptime Kuma notification. Send a test alert. Verify it arrives in Inbox, not Promotions or Spam.
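The DNS side is three ordinary TXT records. The exact values come from your mail provider’s dashboard, so treat everything in angle brackets below as a placeholder rather than something to copy verbatim:

; SPF -- authorize only the transactional provider to send for the domain
yourdomain.com.        TXT  "v=spf1 include:<provider-spf-host> -all"

; DKIM -- selector and public key are generated by the provider
<selector>._domainkey  TXT  "v=DKIM1; k=rsa; p=<public-key-from-provider>"

; DMARC -- start with p=none plus reports, tighten once alerts flow cleanly
_dmarc                 TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@yourdomain.com"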

Better yet, skip email for incident alerts entirely. The Telegram bot integration takes about three minutes to set up. Create a bot via @BotFather, drop the token into Kuma, send a test message. Telegram alerts arrive within a second on every device I have it installed on. Slack and Discord webhooks work the same way. Email becomes the secondary channel for daily digest reports, not the primary channel for “the site is on fire right now.”
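A quick command-line sanity check before you wire the bot into Kuma, assuming the token from @BotFather is exported as TG_TOKEN and you have already sent the bot one message so getUpdates has something to return:

# Find the chat ID: look for "chat":{"id": ...} in the response
curl -s "https://api.telegram.org/bot${TG_TOKEN}/getUpdates"

# Send a manual test message to that chat ID (123456789 is a placeholder)
curl -s "https://api.telegram.org/bot${TG_TOKEN}/sendMessage" \
  -d chat_id=123456789 -d text="Uptime Kuma alert path test"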

Sizing and intervals: what to actually pick

The default check interval is 60 seconds. For most agency sites, that’s the right number. Dropping to 20-second intervals triples the requests you’re firing into your client’s server, which can distort their analytics and trigger fail2ban if you haven’t whitelisted the monitoring IP.
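Whitelisting the monitor in fail2ban is a one-line change on the client’s server; 203.0.113.50 below is a placeholder for your monitoring VPS:

# /etc/fail2ban/jail.local
[DEFAULT]
# existing entries, plus the monitoring host so health checks are never banned
ignoreip = 127.0.0.1/8 ::1 203.0.113.50

# apply it:  sudo fail2ban-client reload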

A 2GB VPS handles roughly 100 monitors at 60-second intervals with CPU usage well under 10%. I’ve benchmarked one of my production boxes at 47 active monitors averaging 3-4% CPU and around 280MB of RAM. You’ll outgrow that plan only if you’re running monitoring as a service for your customers.

For status pages, the trick is creating a dedicated status group in Uptime Kuma per client, then publishing a page that only shows that client’s monitors. Each client gets their own URL like status.yourdomain.com/status/clientname. No cross-client data leakage, one dashboard to operate.

Verification: did the monitor actually fire?

The mistake I see in every greenfield deployment: people install Uptime Kuma, set up six monitors, and never test the alerting path. Then a real outage happens and the alert silently doesn’t go anywhere because the SMTP password was wrong or the Telegram bot was muted.

Run this test on day one:

  1. Add a monitor pointing to a service you control (a throwaway container works; see the sketch after this list).
  2. Stop the service and wait for the configured “Retries” to be exhausted; with the default 60-second “Heartbeat Interval” and a couple of retries, that is a few minutes.
  3. Confirm the alert arrives on every notification channel you set up.
  4. Restart the service. Confirm the recovery alert arrives.
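A disposable target for that drill, assuming Docker on a box you already monitor (container name and port are arbitrary):

docker run -d --name kuma-test -p 8088:80 nginx:alpine   # temporary target
# add an HTTP monitor in Kuma for http://<that-host>:8088, then:
docker stop kuma-test     # should raise a down alert once retries are exhausted
docker start kuma-test    # should raise the recovery alert
docker rm -f kuma-test    # clean up afterwards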

After that, set yourself a calendar reminder for +30 days to repeat the test. Notification paths rot. SMTP tokens expire. Telegram bots get suspended. Catching it during a planned test beats catching it during a real incident.

What I don’t bother with

A few features in the broader monitoring ecosystem that I’ve consciously left out of this stack:

  • Distributed probe networks. Worth it if you sell uptime SLAs to enterprise customers. Overkill for agency hosting.
  • Synthetic transaction monitoring. Tools like Checkly are great for testing user journeys (login, add-to-cart, checkout). Uptime Kuma doesn’t do this; if you need it, run Checkly alongside, not instead of.
  • Full APM integration. Datadog, New Relic, and Grafana Cloud all have their place, but they’re a different category of tool. Uptime Kuma answers “is it up?”. APM answers “why is it slow?”. Don’t conflate them.

If a client specifically needs synthetic monitoring or APM-grade tracing, that’s separate tooling. The “every server, day one” baseline is what’s in this post.

Closing the loop

This three-container stack of Uptime Kuma, Nginx Proxy Manager, and Watchtower has been the default monitoring layer on every Webnestify-managed environment for the past eighteen months. We’ve replaced it twice: once when a client outgrew it and went to Datadog, and once when another client wanted Pingdom-style multi-region probes. Everywhere else, it just runs.

The point isn’t that this is the most powerful monitoring setup possible. It’s that it’s small enough to operate, cheap enough to deploy on every client environment without a budget conversation, and reliable enough that I sleep better since I rolled it out.

Video walkthrough

Prefer the screen-recording version of this guide? The full walkthrough is on YouTube.


Want this handled, not just understood?

Reading the playbook is one thing. Running it on production at 2am is another. If you'd rather have me run it for you, the door is open.

Apply for Access