I run 2FAuth on the same private VPS that hosts my password manager, and it’s the self-hosted 2FA manager I trust with roughly 80 TOTP seeds across personal and client accounts. After a year and a half of using it as my primary authenticator, the deployment pattern has settled into something I’m willing to recommend, and I have opinions about the parts the install guide doesn’t cover.
This post is the actual stack I ship: the 2FAuth container, Nginx Proxy Manager in front for TLS, all sitting behind a Wireguard VPN so the admin URL isn’t on the public internet. About thirty minutes from a fresh server to a working vault with a custom domain and WebAuthn-protected login.
If you’ve never deployed 2FAuth, copy the compose file below verbatim, change the placeholder values, and read the backup section before you import a single seed. The backup discipline is where most self-hosted authenticator deployments quietly fail.
Why a self-hosted 2FA manager is worth running
Most people end up at one of three places for TOTP codes. Google Authenticator on a single phone, Authy across devices, or a password manager with built-in authenticator support. Each has trade-offs, and each has a failure mode that a self-hosted vault sidesteps.
Google Authenticator’s single-phone model was famously the cause of countless lost-account incidents until Google added cloud sync, and the cloud sync introduced privacy questions of its own. Authy’s desktop apps have been discontinued, and the service has had data exposure incidents that affected millions of accounts. Bitwarden’s authenticator is fine, but it puts your password and your second factor in the same vault, which a security-minded reader will recognize as a category collapse.
2FAuth lives on a server you control. The SQLite database is encrypted with your APP_KEY. You can back it up, restore it on a new VPS in 10 minutes, and you’re not waiting for a vendor to decide what features your codes get. For roughly the cost of a coffee per month on a small VPS, you own the vault end to end.
The thing 2FAuth doesn’t give you is a mobile app. It’s a web app that works in mobile browsers, but if you want a native iOS or Android experience, you’ll either bookmark the PWA or pair 2FAuth with a mobile authenticator that you sync from. I keep Aegis on my phone as a read-only mirror for the seeds I use most often, and 2FAuth as the canonical store.
The 2FAuth deployment stack
Here’s what runs on my private VPS. The whole thing fits in two compose files: one for the proxy, one for 2FAuth itself.
Prerequisites
You need a VPS with at least 2GB of RAM. A 1GB box runs 2FAuth fine on its own, but leaves no headroom for the Nginx Proxy Manager that should sit in front. For a personal-scale deployment, a 2GB instance (Hetzner CX-class or similar) is more than enough.
You also need a hardened server before anything else lands on it. SSH keys, no root login, UFW with deny-by-default, the lot. If you haven’t done that yet, my Linux server security fundamentals post is the baseline I run on every fresh box. A 2FA vault on an unhardened server is the worst kind of false confidence.
You also need DNS access for the domain you’ll use, ideally with a provider that supports API-driven Let’s Encrypt DNS challenges. Cloudflare is the pragmatic default.
And before you even open the proxy to the internet, decide whether you actually need a public URL at all. For my own vault I don’t. The 2FAuth instance is reachable only over my Wireguard tunnel, which I cover in the Wireguard Easy and Mistborn posts. The login screen never sees public scanner traffic, and that’s the whole point.
Docker engine
Install Docker from the official Docker repository. Follow the official Docker installation guide for Ubuntu. Don’t install from apt’s default repo or from a one-line curl | sh script you found in someone’s blog post. The official repository is the only correct source.
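For Ubuntu, the official-repository route condenses to the steps below, taken from the Docker docs at the time of writing. If anything here disagrees with the current official guide, trust the guide.

```shell
# Add Docker's official GPG key and apt repository, then install the engine
# and the compose plugin. Ubuntu-specific; other distros differ slightly.
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
  | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
```

Verify with `docker compose version` before moving on.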
Nginx Proxy Manager
Nginx Proxy Manager (NPM) is the GUI in front of every Docker stack I deploy. It handles TLS termination, automatic Let’s Encrypt renewal, and HTTP-to-HTTPS redirects without me touching a config file. The web UI runs on port 81. Public traffic goes through 80 and 443.
```yaml
version: "3.3"
services:
  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - /home/user/docker/npm/data:/data
      - /home/user/docker/npm/ssl:/etc/letsencrypt
    environment:
      DISABLE_IPV6: 'true'
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
```
Change /home/user/docker/npm/ to whatever path layout your server uses. I keep everything Docker-related under /srv/docker/<service>/ on production boxes; pick a convention and stick to it.
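As a sketch of that convention (the paths are illustrative, adjust to taste):

```shell
# One directory per service, with data and certs kept apart — illustrative layout.
BASE=/srv/docker
sudo mkdir -p "$BASE"/npm/data "$BASE"/npm/ssl "$BASE"/2fauth/data
```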
Warning: Port 81 is the NPM admin UI. Do not leave it exposed to the public internet. Whitelist your office IP at the firewall level, or put it behind a self-hosted VPN. For 2FAuth specifically, the same logic applies to ports 80 and 443. If the vault is for personal use only, lock the whole stack behind the VPN and skip the public-DNS step entirely.
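With UFW, the allow-listing looks something like this. The 203.0.113.10 address and the 10.8.0.0/24 Wireguard subnet are placeholders for your own values:

```shell
# Deny by default, then open the NPM admin UI only to one trusted address
# and the web ports only to the VPN subnet. Addresses are placeholders.
sudo ufw default deny incoming
sudo ufw allow from 203.0.113.10 to any port 81 proto tcp
sudo ufw allow from 10.8.0.0/24 to any port 80 proto tcp
sudo ufw allow from 10.8.0.0/24 to any port 443 proto tcp
```

If you do publish the vault on a public domain, drop the subnet restriction on 80/443 but keep port 81 locked down.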
2FAuth itself
The 2FAuth container is a single image with one volume for its SQLite database. Default port is 8000. You’ll proxy this through NPM in a moment. The compose file below is the trimmed version of the upstream example, with the variables I actually change called out.
```yaml
version: "3"
services:
  2fauth:
    image: 2fauth/2fauth
    container_name: 2fauth
    volumes:
      - /home/user/docker/2fauth/data:/2fauth
    ports:
      - 8000:8000/tcp
    restart: unless-stopped
    environment:
      - APP_NAME=2FAuth
      - APP_ENV=local
      - APP_DEBUG=false
      - SITE_OWNER=you@yourdomain.com
      - APP_KEY=ReplaceThisWithA32CharRandomString
      - APP_URL=https://2fa.yourdomain.com
      - IS_DEMO_APP=false
      - LOG_CHANNEL=daily
      - LOG_LEVEL=notice
      - DB_DATABASE=/srv/database/database.sqlite
      - CACHE_DRIVER=file
      - SESSION_DRIVER=file
      - MAIL_DRIVER=log
      - MAIL_HOST=smtp.example.com
      - MAIL_PORT=587
      - MAIL_FROM=2fauth@yourdomain.com
      - MAIL_USERNAME=null
      - MAIL_PASSWORD=null
      - MAIL_ENCRYPTION=tls
      - MAIL_FROM_NAME="2FAuth"
      - MAIL_FROM_ADDRESS=2fauth@yourdomain.com
      - AUTHENTICATION_GUARD=web-guard
      - WEBAUTHN_NAME=2FAuth
      - WEBAUTHN_USER_VERIFICATION=preferred
      - TRUSTED_PROXIES=*
      - BROADCAST_DRIVER=log
      - QUEUE_DRIVER=sync
      - SESSION_LIFETIME=120
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
```
The two values that absolutely must change before you bring this up: APP_KEY and SITE_OWNER. The APP_KEY is the encryption key that protects every TOTP seed in the database. Generate a random 32-character string. Lose it and every code in the vault becomes unrecoverable, even from a database backup.
```shell
openssl rand -base64 24 | head -c 32
```
Drop the output into APP_KEY, and store a copy of it in your offline password manager next to the database backup. These two artifacts together are what restore a vault.
Bring it up:
```shell
docker compose -f /path/to/2fauth.yml up -d
```
In the NPM admin, add a Proxy Host pointing 2fa.yourdomain.com to http://2fauth:8000 (the container name only resolves if NPM and 2FAuth share a Docker network; otherwise, proxy to the host’s IP and port 8000), request a Let’s Encrypt certificate, and tick “Force SSL” plus “HTTP/2 Support”. On the Advanced tab, I add the standard security headers (HSTS, X-Frame-Options DENY, Content-Security-Policy with a strict source list). My Nginx security hardening post covers the exact set I use.
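For orientation, a header set for the Advanced tab looks roughly like the following. Treat it as a starting point rather than my exact production set; a Content-Security-Policy in particular has to be built per site, so it’s left out here:

```nginx
# Illustrative hardening headers for the NPM Advanced tab — adjust before use.
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;
add_header X-Frame-Options "DENY" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer" always;
```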
First-login setup
Browse to the URL, register the admin account, and set a long unique password. Then immediately go into Settings → Security and enable WebAuthn. A hardware key like a YubiKey or the WebAuthn capability on your laptop / phone gives you phishing-resistant login. With WebAuthn enforced, even a leaked password isn’t enough.
Optional but worth doing: enable the auto-lock so the vault locks itself after 5 minutes of inactivity and the OTP obfuscation that hides codes until you click them. Both are in Settings → Security.
The backup pattern that actually works
Here’s the part the install guide skims over, and where most self-hosted authenticator deployments quietly fail: backups.
The 2FAuth SQLite database lives in the volume mounted at /2fauth. Back up that whole directory daily. I use restic with a remote repository on Backblaze B2, but borg, rclone with a daily cron, or your VPS provider’s native snapshots all work. The non-negotiable is off-site and encrypted at rest. A backup that lives only on the same VPS as the original is not a backup, it’s a copy.
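My daily job is a small wrapper along these lines. The bucket name, credential values, paths, and retention policy below are placeholders, not my production settings:

```shell
#!/usr/bin/env bash
# Daily encrypted off-site backup of the 2FAuth volume with restic.
# Repository, credentials, and paths are placeholders — fill in your own.
set -euo pipefail

export RESTIC_REPOSITORY="b2:your-bucket:2fauth"
export RESTIC_PASSWORD_FILE="/root/.restic-pass"   # repo passphrase, chmod 600
export B2_ACCOUNT_ID="your-b2-key-id"
export B2_ACCOUNT_KEY="your-b2-application-key"

restic backup /home/user/docker/2fauth/data
restic forget --keep-daily 14 --keep-weekly 8 --keep-monthly 12 --prune
```

Run it from root’s crontab (`0 3 * * *` or similar) and check `restic snapshots` occasionally; a backup job that fails silently is the same as no backup job.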
The second backup, and this is the one people forget: keep an export of the seeds themselves. 2FAuth supports exporting the entire vault to an encrypted JSON file. Do this the day you finish populating it, and again every quarter. Store the encrypted export somewhere completely independent of the VPS backup chain. A USB drive in a desk drawer, a printed QR-code book, your password manager’s secure notes section. The reason for two backup channels: if you lose the APP_KEY, the database backup is unrecoverable. The seed export is independently encrypted with a passphrase you set at export time, so it survives APP_KEY loss.
The third backup, and this is the one people really forget: the per-service backup codes. Every account you protect with 2FA hands you a list of one-time recovery codes at setup. Save them. Print them. Put them in a safe. The day your 2FAuth vault is unreachable for whatever reason (server down, DNS broken, Wireguard config corrupted), the per-service backup codes are how you log in. I keep mine in a labelled paper folder, low-tech and reliable.
Sizing, intervals, and the threat model
Resource usage on this stack is trivial. My production 2FAuth box runs at roughly 1.5% CPU and 180MB of RAM with about 80 active TOTP seeds and the proxy in front. A 1GB VPS would be enough; I run 2GB only because I share the box with my password manager.
The threat model worth thinking about is not the 2FAuth code itself. The maintainer Bubka is responsive, the codebase is Laravel, and the attack surface is well-trodden web-app territory. The realistic risks are:
- Public exposure of the admin URL. A login form on a public domain gets credential-stuffed. Even with a strong password and WebAuthn, you’re paying a tax in scanner traffic and adding a category of risk for zero practical benefit. Put it behind a VPN.
- Weak APP_KEY. Generated once, often weakly, and never rotated. Use 32 random characters from openssl, store the value, never commit it to a public git repo.
- Backup gaps. Covered above. The recovery path matters more than the active path.
- Compromised admin account. WebAuthn closes most of this. Without WebAuthn, the entire vault sits behind one password.
For a personal vault, you don’t need CrowdSec or fail2ban on the proxy if the vault itself is behind a VPN. If you do choose to expose it publicly, my CrowdSec installation post is the right next step, plus IP allow-listing in NPM if you have a reasonably static address.
Importing from your current authenticator
Migration is straightforward, and it’s the part that surprised me with how well it works.
From Google Authenticator, open the app, go to Settings → Transfer accounts → Export accounts, select all, and you get a QR code containing every seed. In 2FAuth, click the “+” icon to add a new account, choose “Import from another app”, and scan the QR code on the phone’s screen with the camera of the device running 2FAuth; the seeds populate within a couple of seconds. I’ve migrated about 60 accounts this way without a hiccup.
From Aegis, export to JSON (encrypted or plaintext) and import the file directly through 2FAuth’s import dialog. From Authy, the migration is messier because Authy doesn’t expose an export of the underlying seeds; you’ll have to disable 2FA on each service and re-enable through 2FAuth, which is the only path that genuinely produces a clean migration.
After migration, do not delete the original authenticator until you’ve verified that 2FAuth produces working codes for every service for at least a week. Run both in parallel during the transition, and only retire the old app once you’ve confirmed every account.
What I don’t bother with
A few things in the broader 2FA-vault ecosystem that I’ve consciously left out of this stack:
- Reverse-proxy authentication guard. 2FAuth supports a `reverse-proxy-guard` mode where Authentik or similar handles auth in front. Powerful, but for a personal vault the built-in WebAuthn is enough. Worth doing for team deployments, where centralizing auth in an identity provider is the right move.
- Multi-user support. 2FAuth supports multiple user accounts on one instance. I don’t recommend this. A 2FA vault is one of the few applications where I’d run separate instances per user rather than a shared one. The blast radius of an admin compromise on a shared vault is too high.
- Redis backend. The compose example above uses file-based caching and sessions, which is fine for personal scale. Redis is overkill for a vault one person uses.
If I were running 2FAuth for a small team rather than for myself, I’d put it behind Authentik with SSO, enforce hardware-key WebAuthn at the identity provider, and run a separate instance per environment (personal, work, client-shared). For one person, the simple stack is the right scale.
Verifying the deployment before you trust it
Run this sanity check before you import a single real seed:
1. Add one test account in 2FAuth using a service you can afford to lock yourself out of (a throwaway email, a test GitHub account).
2. Verify the TOTP code 2FAuth shows matches what the canonical authenticator on your phone shows. Both apps should produce the same 6 digits at the same moment.
3. Log out of 2FAuth completely. Log back in using WebAuthn. Verify the WebAuthn challenge works on at least two devices you own (laptop + phone, or laptop + hardware key).
4. Stop the 2FAuth container, restart it, and verify the data persists (`docker compose down && docker compose up -d` in the 2fauth directory).
5. Take a backup of the volume. Restore it to a different temporary directory. Spin up a second 2FAuth container pointing at the restored volume, with the same APP_KEY. Verify it shows the same accounts. This is your disaster-recovery rehearsal.
Step 5 is the one people skip. Do it once, before you have 80 seeds in there. Better to discover a backup-restore problem on day one with one test seed than during a real incident.
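The rehearsal doesn’t need anything elaborate. A sketch, assuming a restic repository like the one in the backup section; the scratch path, port, and APP_KEY value are placeholders:

```shell
# Restore the latest snapshot to a scratch directory and point a throwaway
# 2FAuth container at it on a different port. It must get the same APP_KEY
# as production — without it the restored database is unreadable.
restic restore latest --target /tmp/2fauth-restore

docker run --rm -d --name 2fauth-restore-test \
  -p 8001:8000 \
  -v /tmp/2fauth-restore/home/user/docker/2fauth/data:/2fauth \
  -e APP_KEY='TheSameKeyAsProduction1234567890' \
  2fauth/2fauth

# Browse to http://<server-ip>:8001, confirm the test account shows a valid
# TOTP code, then tear it down:
docker stop 2fauth-restore-test
```

Note that restic recreates the full original path under the `--target` directory, hence the nested volume path above.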
Closing the loop
This 2FAuth stack has been my primary authenticator for the past 18 months, and it’s replaced Authy on every device I own. The setup time is a couple of hours including the security hardening, the recurring cost is a 4€/month VPS, and the operational burden is one daily backup job and a quarterly seed export.
The thing self-hosting a 2FA vault has actually given me, beyond the privacy and the data ownership, is the discipline of taking 2FA seriously. When the recovery path is your problem rather than a vendor’s, you stop treating backup codes as throwaway and you stop assuming the magic cloud will fix it. That mindset shift, more than any specific feature, is the real benefit. The companion read for that side of it is the human element in cybersecurity defense post: most security incidents are people problems, and a 2FA vault you actually understand is one fewer place for a person problem to hide.