MeshCentral Self-Hosted Remote Management for Agency Fleets

How I deploy MeshCentral self-hosted to replace TeamViewer for agency client SLAs: the Docker stack, the proxy, and the agent install rules I never break.


I run MeshCentral self-hosted on every agency operations stack where the alternative would be a TeamViewer site licence and a spreadsheet of installer keys. After about three years of using it across a couple dozen client environments, my opinion is settled: for in-house remote management of a fleet you actually operate, MeshCentral is one of the better deals in open source.

This is the deployment I ship: MeshCentral behind Nginx Proxy Manager, on a small Debian VPS that does nothing else, with the agent install token scoped per device group. About thirty minutes from a fresh server to a working dashboard with TLS and the first agent connected. The rest of this post is the actual Compose file, the configuration switches that matter, and the agent-scoping rules I won’t budge on.

If you’re new to it, copy the Compose block below verbatim, change the volume paths and hostname to match your server, and read the security section before you expose port 443.

Why self-hosted RMM beats a TeamViewer subscription for agencies

I cancelled my TeamViewer subscription the year I crossed twelve clients. The arithmetic was simple: per-seat pricing on a tool I used daily across a fleet I already operated didn’t make sense when the open-source alternative ran on a 2GB VPS for less than five euro a month. Agency operators sit in a specific place on the RMM market. We manage the same fleet for years, we know every endpoint by name, and we don’t need a global relay network because our endpoints are mostly servers we already have other paths into.

MeshCentral covers what I actually use day-to-day: agent-based remote desktop on Windows, terminal sessions on Linux servers, file transfer in both directions, wake-on-LAN, group-based access control, and a working audit log of who connected to what and when. The audit log alone earns its keep when a client asks who touched a server two weeks ago.

What MeshCentral doesn’t do as well as the proprietary alternatives: consumer-grade unattended access, mobile-client polish, and global relay infrastructure. If your business is helpdesk support to one-off consumer endpoints across the planet, TeamViewer’s network is doing real work for you. If your business is operating a known fleet, MeshCentral’s lack of relay is a non-issue because your endpoints reach the server directly.

What you need before you start

A fresh Debian 12 VPS with at least 2GB of RAM. MeshCentral itself runs comfortably in 512MB, but you need headroom for Nginx Proxy Manager and a buffer for active sessions. A 1GB box works for one or two simultaneous remote desktop sessions; the moment three operators are connected at once, you’ll feel it.

You also need DNS access for the hostname you’ll use (something like mesh.yourdomain.com), and a properly hardened server. If you haven’t done the baseline yet, start with Linux server security fundamentals and come back. Putting an RMM panel on an unhardened server is the kind of mistake that ends up in someone else’s incident write-up.

Installing the Docker engine

Install Docker from the official Docker repository. The official Docker installation guide for Ubuntu/Debian is the canonical reference. Don’t install from the distro’s default apt repository (the version is usually two majors behind), and don’t pipe a random get.docker.com script into a shell unless you’ve read it.
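For orientation, the repo entry that guide has you create ends up looking like this (assuming Debian 12 "bookworm" on amd64; the keyring path matches the current official instructions — treat it as a sketch and follow the linked guide for the actual key-download step):

```
# /etc/apt/sources.list.d/docker.list
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian bookworm stable
```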

Quick sanity check after install:

docker --version
docker compose version
systemctl is-enabled docker

The MeshCentral self-hosted Compose stack

Here’s the Compose file I deploy. It runs MeshCentral on the same Docker network as Nginx Proxy Manager, which terminates TLS in front. The MeshCentral container listens on port 443 internally (mapped to host port 8086 in case you ever need direct access); NPM forwards mesh.example.com:443 straight to the container over the Docker network.

version: '3'
services:
  meshcentral:
    restart: always
    container_name: meshcentral
    image: typhonragewind/meshcentral:latest
    ports:
      - 127.0.0.1:8086:443
    environment:
      - HOSTNAME=mesh.example.com
      - REVERSE_PROXY=127.0.0.1
      - REVERSE_PROXY_TLS_PORT=443
      - IFRAME=false
      - ALLOW_NEW_ACCOUNTS=false
      - WEBRTC=false
      - TZ=Europe/Bratislava
    volumes:
      - /home/user/docker/meshcentral/data:/opt/meshcentral/meshcentral-data
      - /home/user/docker/meshcentral/user_files:/opt/meshcentral/meshcentral-files
    labels:
      - "com.centurylinklabs.watchtower.enable=true"

  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'
      - '443:443'
      - '81:81'
    environment:
      DISABLE_IPV6: 'true'
      PUID: 1000
      PGID: 1000
    volumes:
      - /home/user/docker/npm/data:/data
      - /home/user/docker/npm/ssl:/etc/letsencrypt
    labels:
      - "com.centurylinklabs.watchtower.enable=true"

A few things worth flagging before you docker compose up:

  • Change every /home/user/docker/... path to wherever you actually keep service data. I use /srv/docker/<service>/ on production boxes; pick a convention and stick to it.
  • HOSTNAME=mesh.example.com is the public hostname MeshCentral will use to construct agent installer URLs. Set it to the FQDN you’ll point at this server in DNS, not localhost or the VPS IP.
  • ALLOW_NEW_ACCOUNTS=false is the single most important environment variable in this file. The first user to register becomes admin; after that, registration is closed. If you set this to true and forget it, anyone who finds the URL can register an account and start poking around.
  • WEBRTC=false is the safer default. WebRTC support exists in MeshCentral but isn’t officially released and changes behaviour around peer-to-peer session handling. Turn it on later if you have a specific reason; don’t ship with it on by default.
  • IFRAME=false blocks the panel from being embedded in another site. There’s no legitimate agency reason to embed the RMM in an iframe, and a few illegitimate ones.
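For the curious: inside the container, these environment variables end up as settings in MeshCentral’s own config.json. The sketch below shows the rough equivalent using MeshCentral’s documented keys (the exact layout depends on the image’s startup script, so treat it as orientation, not something to paste):

```json
{
  "settings": {
    "cert": "mesh.example.com",
    "port": 443,
    "redirPort": 80
  },
  "domains": {
    "": {
      "newAccounts": false,
      "allowFraming": false
    }
  }
}
```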

Warning: Port 81 is the NPM admin UI. Do not leave it exposed to the public internet. Whitelist your office IP at the firewall level, or put the whole stack behind a self-hosted VPN. I cover the VPN side in the Mistborn self-hosted VPN platform and Wireguard Easy posts. For agency setups where a static office IP isn’t reliable, the VPN path is the right answer.

Bring the stack up:

docker compose -f /path/to/meshcentral.yml up -d
docker compose -f /path/to/meshcentral.yml logs -f meshcentral

The first start takes a minute as MeshCentral generates its self-signed cert and database. Watch the logs until you see the “MeshCentral HTTP server running” line.

Wiring up the proxy and TLS

In the NPM admin UI on port 81:

  1. Add a Proxy Host. Domain: mesh.example.com. Forward Hostname/IP: meshcentral. Forward Port: 443. Scheme: https.
  2. Tick “Block Common Exploits” and “Websockets Support”. Websockets are non-negotiable here. MeshCentral relies on them for the agent control channel and for the in-browser remote desktop.
  3. SSL tab: request a Let’s Encrypt cert for the same hostname. Tick “Force SSL”, “HTTP/2 Support”, and “HSTS Enabled”.
  4. Save and test. The MeshCentral login page should load over HTTPS at https://mesh.example.com.
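If you ever bypass NPM and hand-write the proxy config, the websocket toggle corresponds to the standard Nginx upgrade headers, roughly:

```nginx
# Inside the location block proxying to MeshCentral —
# without these, agents and remote desktop sessions will not connect.
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
```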

Register the admin account immediately. Set a long password. Enable 2FA on the account from the user menu. MeshCentral supports TOTP out of the box, and there’s no excuse for not turning it on for the admin.

If you run a separate identity provider for your agency, MeshCentral can authenticate against Authentik via SAML or LDAP. For a small agency, local accounts with TOTP are fine. Once the operator team grows past four or five people, route everything through SSO so revocation is one place instead of per-tool.
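As a pointer for the SSO route: MeshCentral’s LDAP support is configured in config.json via ldapOptions, which is passed through to the underlying LDAP auth library. A hedged sketch — the hostname, DNs, and filter here are placeholders, and you should check the MeshCentral config schema for your version:

```json
{
  "domains": {
    "": {
      "auth": "ldap",
      "ldapOptions": {
        "url": "ldaps://authentik.example.com:636",
        "bindDN": "cn=meshcentral,ou=service,dc=example,dc=com",
        "bindCredentials": "changeme",
        "searchBase": "ou=users,dc=example,dc=com",
        "searchFilter": "(mail={{username}})"
      }
    }
  }
}
```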

Scoping agent installs the right way

This is the part of MeshCentral I see deployed badly more often than not. Every endpoint that connects to MeshCentral does so via an agent installed with a specific device group token. The token determines which group the device joins, which determines who can connect to it, what permissions they have, and what the agent is even allowed to do on the host.

The temptation, when you’re setting it up the first time, is to create one device group called “Clients” and install every agent into it. Don’t. Once a device is in a group, moving it to a different one means reinstalling the agent on the endpoint; on a remote server you can’t easily reach, that’s exactly the kind of friction that makes you not bother.

My standing rule: one device group per client environment, per platform tier. A small client gets two groups (acme-servers, acme-laptops). A bigger one with multiple environments gets four (acme-prod-servers, acme-staging-servers, acme-laptops-staff, acme-laptops-contractors). Each group has its own install token, its own access control list, and its own audit trail.
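The convention is mechanical enough to script. A hypothetical helper — not part of MeshCentral, just the naming rule above expressed as a shell function — that I’d reach for when generating group names for a new client:

```shell
# group_name CLIENT ENV PLATFORM — ENV may be empty for single-environment clients.
# Hypothetical helper illustrating the naming convention; MeshCentral itself
# doesn't care what you call a group, only that the token matches it.
group_name() {
  local client="$1" env="$2" platform="$3"
  if [ -n "$env" ]; then
    printf '%s-%s-%s\n' "$client" "$env" "$platform"
  else
    printf '%s-%s\n' "$client" "$platform"
  fi
}

group_name acme "" servers        # -> acme-servers
group_name acme prod servers      # -> acme-prod-servers
group_name acme staging servers   # -> acme-staging-servers
```

Create each group in the panel first, then generate its install token; the name is your audit-trail key, so consistency matters more than the exact scheme.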

To set this up, go to “My Devices” → device group → “Edit Device Group” → “Features and Consent” in MeshCentral. Disable “Desktop View”, “Desktop Control”, and “Desktop Limited Input” for the server group; leave them on for the laptop group. The agent enforces these settings on the endpoint itself, so even an admin user can’t escalate beyond what the group allows without explicitly changing the group configuration first.

Operating MeshCentral day to day

Two things I do every Monday morning that have caught problems before they became incidents. First, review the audit log. MeshCentral logs every connection, file transfer, and command run via the agent. Skim the last seven days. Anything you don’t recognise is worth investigating before it becomes a story. Second, check the agent connection state. A device that’s been red for more than a week is either decommissioned (remove it from the group) or broken (fix the agent). Stale dead agents in the panel make real outages harder to spot.

For the “is the server up?” question, I keep monitoring separate. MeshCentral is for operator access. Uptime Kuma is for outage detection. Don’t conflate them.

What I don’t use MeshCentral for

MeshCentral is not the right tool for customer-facing helpdesk on consumer endpoints. The agent-less mode exists, but TeamViewer and ScreenConnect have a better UX for that specific job. It’s also not a patch-management platform: I run patch automation via n8n workflows that hit endpoints over SSH for Linux and PowerShell remoting for Windows. MeshCentral is the operator path; n8n is the automation path. And it has no MDM features, so iPhones and Androids stay in whatever MDM the client already uses.

Closing the loop

This MeshCentral self-hosted stack has been the operator surface on every Webnestify-managed environment for around three years. The total cost is one small VPS, the total operational overhead is a Monday-morning audit-log review, and the visibility it gives me into a fleet I’m responsible for is worth more than any per-seat tool I’ve trialled since.

If you’re an agency operator paying TeamViewer per seat, run the numbers on a 2GB Hetzner box and the time it takes you to register a Let’s Encrypt cert. The break-even is somewhere around five managed endpoints. After that, MeshCentral pays for itself every month.


Video walkthrough

Prefer the screen-recording version of this guide? Watch it on YouTube.
