I run Authentik as the self-hosted identity provider in front of every internal app on my private cloud. After about two years using it for myself plus a handful of agency deployments, I have a clear view of when it earns its keep and when it does not. The short version: a self-hosted Authentik IdP replaces a pile of separate login screens with one front door, but only past a certain scale does the operational cost pay off.
This post is the actual stack I ship: Postgres, Redis, the Authentik server, and the Authentik worker, all behind Nginx Proxy Manager for TLS. It takes one evening to go from a clean VPS to a working IdP with the first SSO connection wired up.
If you have never deployed Authentik, copy the Compose file below, change the placeholder values, and read the sizing section before you commit.
What Authentik actually does
Authentik is an open-source identity provider. Your apps redirect users to Authentik to log in; Authentik checks credentials, enforces MFA, and sends a signed token back. The app trusts the token instead of running its own login screen. One login covers everything for the user; one place to disable a leaving employee covers everything for the admin.
Out of the box it speaks the four protocols that matter: OAuth2 / OIDC for modern apps, SAML for enterprise SaaS, LDAP for legacy apps that want a directory, and a proxy outpost for apps with no auth integration at all. The proxy outpost is the dark-horse feature, the one that lets you put a login wall in front of a tool that has zero native auth. I use it most often for internal dashboards that assume “the network is the perimeter”.
The piece that takes a beat to get used to is the flow model. Authentik does not hard-code “username + password + MFA”. You build a flow as a sequence of stages: identification, password, TOTP, WebAuthn, consent, redirect. You can swap the order, add conditional branches, or skip stages entirely for particular groups. The first time you see the flow editor it looks over-engineered. The fifth time it looks like the only sane way to express real auth.
The Authentik deployment stack
Here is what runs on my private VPS. The whole thing fits in one Compose file with four services and a small env file.
Prerequisites
A VPS with at least 2GB of RAM. The official docs say 2GB is the minimum and I would not run a production instance on less. For an agency-scale deployment with a dozen connected apps and 20 to 30 users, 4GB is comfortable.
You also need a hardened server before anything else lands on it. SSH keys, no root login, UFW with deny-by-default. If you have not done that yet, my Linux server security fundamentals post is the baseline I run on every fresh box. An identity provider on an unhardened server is the worst possible outcome: one compromise and every connected app is wide open.
DNS access for the domain you will use, ideally with a provider that supports API-driven Let’s Encrypt challenges. Cloudflare is the pragmatic default.
Docker engine
Install Docker from the official Docker repository. Follow the official Docker installation guide for Ubuntu. Do not install from apt’s default repo or from a one-line curl | sh script you found on someone’s blog. The official repository is the only correct source.
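For reference, the repository setup condensed from the official guide at the time of writing; if this drifts out of date, the live docs win:

```shell
# Add Docker's official GPG key and apt repository (Ubuntu), then install.
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```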
Docker Compose file
This is the production reference design with Postgres, Redis, the server, and the worker. I keep each volume under a per-service directory so backups are easy to scope.
```yaml
version: "3.4"

services:
  postgresql:
    image: docker.io/library/postgres:12-alpine
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 5s
    volumes:
      - ./database:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${PG_PASS:?database password required}
      POSTGRES_USER: ${PG_USER:-authentik}
      POSTGRES_DB: ${PG_DB:-authentik}
    env_file:
      - .env

  redis:
    image: docker.io/library/redis:alpine
    command: --save 60 1 --loglevel warning
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping | grep PONG"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 3s
    volumes:
      - ./redis:/data

  server:
    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-latest}
    restart: unless-stopped
    command: server
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: postgresql
      AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
      AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
      AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
    volumes:
      - ./media:/media
      - ./custom-templates:/templates
    env_file:
      - .env
    ports:
      - "${COMPOSE_PORT_HTTP:-9000}:9000"
      - "${COMPOSE_PORT_HTTPS:-9443}:9443"
    depends_on:
      - postgresql
      - redis

  worker:
    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-latest}
    restart: unless-stopped
    command: worker
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: postgresql
      AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
      AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
      AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
    user: root
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./media:/media
      - ./certs:/certs
      - ./custom-templates:/templates
    env_file:
      - .env
    depends_on:
      - postgresql
      - redis
```

The stock upstream file also declares top-level named volumes for database and redis; with the bind mounts above they are never referenced, so I drop that block here.
The companion .env file. The two lines that absolutely must change before you bring this up are PG_PASS and AUTHENTIK_SECRET_KEY.
```env
PG_USER=authentik
PG_PASS=
AUTHENTIK_SECRET_KEY=
AUTHENTIK_ERROR_REPORTING__ENABLED=true
AUTHENTIK_EMAIL__HOST=smtp.postmarkapp.com
AUTHENTIK_EMAIL__PORT=587
AUTHENTIK_EMAIL__USERNAME=
AUTHENTIK_EMAIL__PASSWORD=
AUTHENTIK_EMAIL__USE_TLS=true
AUTHENTIK_EMAIL__USE_SSL=false
AUTHENTIK_EMAIL__TIMEOUT=10
AUTHENTIK_EMAIL__FROM=
# NPM terminates TLS on 80/443 on this box, so Authentik stays on its
# internal ports; only set these to 80/443 if Authentik is the edge.
COMPOSE_PORT_HTTP=9000
COMPOSE_PORT_HTTPS=9443
AUTHENTIK_TAG=latest
```
Generate the two secrets and append them to .env in one shot:
```shell
echo "PG_PASS=$(openssl rand -base64 36 | tr -d '\n')" >> .env
echo "AUTHENTIK_SECRET_KEY=$(openssl rand -base64 60 | tr -d '\n')" >> .env
```
The AUTHENTIK_SECRET_KEY is what signs every token Authentik issues. Treat it the same way you treat the master key on a password manager: store a copy somewhere off the server, never commit it to a public git repo, and do not regenerate it casually because every existing session and signed artifact becomes invalid the moment you do.
The worker service runs as root with the docker socket mounted: that is the upstream-recommended setup for the docker integration on outpost deployments. If you do not plan to use the docker integration, drop user: root and the socket mount. For most agency deployments the default is fine.
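Before first boot, I sanity-check the env file so the stack never comes up with an empty secret or database password. A minimal sketch (the function name and variable list are my own; extend it to whatever your .env requires):

```shell
# check_env FILE -- fail if any required variable is missing or empty.
check_env() {
  env_file="$1"
  for var in PG_PASS AUTHENTIK_SECRET_KEY; do
    # require the line to exist AND to have a non-empty value after '='
    if ! grep -Eq "^${var}=.+" "$env_file"; then
      echo "missing or empty: ${var}" >&2
      return 1
    fi
  done
  echo "env ok"
}
```

I run it as a gate: `check_env .env && docker compose up -d`.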
Bring it up with docker compose up -d. First boot takes a minute or two while Postgres initialises and migrations run.
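Rather than watching the logs by hand, a small poll loop tells you when the server is actually ready. A sketch; to my knowledge `/-/health/ready/` is the endpoint Authentik's own healthchecks use, but verify against your version's docs:

```shell
# wait_for TRIES CMD... -- retry a command until it succeeds or give up.
wait_for() {
  tries="$1"; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$tries" ] && return 1
    sleep 2
  done
}

# usage (assumes the default 9000 port mapping):
# wait_for 60 curl -fsS -o /dev/null http://localhost:9000/-/health/ready/
```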
TLS in front of Authentik
I terminate TLS at Nginx Proxy Manager and proxy auth.yourdomain.com to http://authentik-server:9000, with a Let's Encrypt cert and "Force SSL" and "HTTP/2" enabled. On the Advanced tab I add the standard security headers (HSTS, X-Frame-Options, a strict CSP). My Nginx security hardening post covers the header set I use.
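The Advanced-tab fragment I start from (a sketch; I deliberately leave the CSP out here because a strict policy needs tuning against Authentik's UI before you enforce it):

```nginx
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
```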
If you put Authentik behind any reverse proxy other than NPM, set AUTHENTIK_LISTEN__TRUSTED_PROXY_CIDRS to the proxy’s network so Authentik trusts the X-Forwarded-For headers. A misconfigured proxy header is the single most common source of “the IP policy is broken” tickets in the Authentik forum.
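In the env file that is one line; the CIDR below is an example for a default Docker bridge network, not a value to copy blindly:

```env
# trust X-Forwarded-For only from the reverse proxy's subnet (example value)
AUTHENTIK_LISTEN__TRUSTED_PROXY_CIDRS=172.18.0.0/16
```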
First-login setup
Browse to https://auth.yourdomain.com/if/flow/initial-setup/ and create the admin account. Set a long unique password. Then immediately go to Directory > Users, open the admin user, and enrol a hardware key under Authenticator. Hardware-key MFA on the admin account is non-negotiable: the realistic compromise vector for a self-hosted IdP is the admin account, so close that door first.
While you are there, set up the email stage so password resets work. Authentik will not send any email until the SMTP env vars are set, which means your password-reset flow silently does nothing. Postmark, Mailgun, or any transactional SMTP provider works.
Wiring the first app
Pick one app to onboard first; do not try to migrate everything at once. For a typical OIDC-capable app (Grafana, Vaultwarden, Nextcloud, Portainer):
- In Authentik, go to Applications > Providers, create an OAuth2 / OpenID Provider, set the redirect URI, and save the client ID and client secret.
- Create an Application that points at the new provider. Bind a group policy if you want only certain users to see it.
- In the target app's auth settings, paste the issuer URL (https://auth.yourdomain.com/application/o/<slug>/), the client ID, and the client secret, then enable OIDC login.
- Test from a private window with a test account, not the admin account. If something is wrong, it is almost always a redirect URI mismatch or the issuer URL's trailing slash.
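Most of those mismatches are quicker to diagnose from the OIDC discovery document, which per the OIDC Discovery spec lives under the issuer at /.well-known/openid-configuration. A tiny helper to build the URL without a double slash (the grafana slug is a hypothetical example):

```shell
# discovery_url ISSUER -- print the OIDC discovery document URL,
# tolerating an issuer given with or without a trailing slash.
discovery_url() {
  issuer="${1%/}"   # strip one trailing slash so the path does not get '//'
  echo "${issuer}/.well-known/openid-configuration"
}

# curl -fsS "$(discovery_url https://auth.yourdomain.com/application/o/grafana/)"
```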
For SAML apps, the Authentik docs have copy-paste configs for the most common SaaS targets. For apps with no native auth at all, the proxy outpost is the answer; that is a longer post in itself.
Sizing and the threat model
My production Authentik box runs at roughly 4% CPU and 800MB of RAM with about 30 users and 12 connected apps. 2GB is tight; 4GB has room to spare. Each connected app adds almost no marginal cost; user count and flow complexity move the dial.
The threat model worth thinking about is not the Authentik code itself. The realistic risks are:
- Compromised admin account. WebAuthn closes most of this. Without it, the entire identity layer sits behind one password.
- Database loss with no backup. Authentik has no useful state without Postgres. Lose the volume with no backup and you rebuild every flow, provider, and application from scratch.
- Secret key loss. Lose the AUTHENTIK_SECRET_KEY and every OAuth session and signed token breaks. You will not lose data, but every connected app needs its provider re-issued.
- Misconfigured trusted proxies. If the proxy CIDR settings are wrong, Authentik sees every request as coming from the proxy IP. IP policies break, geolocation logging breaks, brute-force detection breaks.
For an agency deployment, I run CrowdSec on the same box to catch credential-stuffing attempts. The login screen is on a public domain by design, so it gets the usual scanner traffic.
When SSO is overkill
I will say the thing the average IdP vendor will not.
For a 3-person agency with Vaultwarden, a single WordPress admin, and a shared Trello board, Authentik is overkill. The operational cost (one VPS, one evening of setup, one quarterly update job, plus a learning curve for whoever inherits it) outweighs the benefit of one login. Three people remembering three passwords in a password manager is fine. Bitwarden plus 2FAuth for the second factor solves the same problem at a tenth of the complexity.
The break-even point I have seen in practice sits around 6 connected apps and 5+ users, especially when those users are not all technical. Past that point, the calculus flips: every new app costs zero marginal effort to onboard with SSO, every leaving employee gets cut off in one click, and every audit conversation gets shorter because there is one source of truth for “who has access to what”.
The backup pattern that actually works
Here is the part the install guide skims over, and where most self-hosted IdP deployments quietly fail: backups.
The Postgres volume is the entire state of Authentik. Users, groups, providers, applications, flows, policies, sessions. Back up that directory daily, off-site, encrypted at rest. I use restic against a Backblaze B2 repository; borg or rclone work too. The non-negotiable is off-site and encrypted. A backup on the same VPS as the original is not a backup, it is a copy.
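The shape of that job as a cron fragment. Paths, repository, and schedule are assumptions; note also that a file-level copy of a live Postgres data directory can capture an inconsistent state, so I take a logical dump first and back that up:

```shell
# /etc/cron.d/authentik-backup (sketch; assumes restic is installed and the
# repo/password are exported from /root/.restic.env)
15 3 * * * root docker compose -f /opt/authentik/docker-compose.yml exec -T postgresql pg_dump -U authentik -d authentik > /opt/authentik/backups/authentik-$(date +\%F).sql
30 3 * * * root . /root/.restic.env && restic backup /opt/authentik/backups --tag authentik
45 3 * * * root . /root/.restic.env && restic forget --tag authentik --keep-daily 14 --keep-weekly 8 --prune
```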
The second backup, the one people forget: an export of the Authentik configuration via the built-in blueprint export. Run it weekly, store the output in version control. Blueprints are YAML files describing the entire config declaratively, which means a clean restore from blueprints is faster than a Postgres restore and gives you a diff-able audit trail.
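As a cron fragment. The `ak export_blueprint` management command is what I run inside the worker container; verify the exact command name against the docs for your Authentik version before relying on it:

```shell
# /etc/cron.d/authentik-blueprints (sketch; assumes /opt/authentik/blueprints
# is an initialised git repo)
0 4 * * 1 root cd /opt/authentik && docker compose exec -T worker ak export_blueprint > blueprints/export-$(date +\%F).yaml && git -C blueprints add -A && git -C blueprints commit -q -m "weekly blueprint export"
```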
The third backup, the one people really forget: store the AUTHENTIK_SECRET_KEY somewhere completely independent of the VPS backup chain. A password manager you do not host on the same box, or a printed copy in a safe. Lose the secret key and every connected app needs its provider re-issued, every user needs to re-enrol their authenticators.
What I leave out of the default deployment
A few features I deliberately leave out of my default agency stack:
- Geo-IP policies based on MaxMind GeoLite. Useful for “block logins from countries we do not operate in”, but adds a moving part for marginal security gain. Worth it for compliance-heavy clients, skippable for the rest.
- SCIM provisioning. Great if you have 50+ users churning monthly; overkill if you onboard one contractor per quarter. Manual user creation in the admin UI is fine at small scale.
- Outpost-based proxy auth for every internal app. I run the proxy outpost for the 2 or 3 internal dashboards that need it; everything else uses native OIDC. Default to OIDC, fall back to SAML, fall back to the proxy outpost only when neither is available.
Verifying the deployment before you trust it
Run this sanity check before you migrate a single user in:
1. Create one test user in Authentik. Add them to a test-users group.
2. Wire one test application (a throwaway Grafana or a fresh self-hosted app you can break safely).
3. Bind a group policy on the application that requires membership in test-users. Verify the test user can log in. Verify your admin account, which is not in test-users, gets a "permission denied" page rather than a successful login.
4. Stop the entire stack with docker compose down, restart it, and verify the test user can still log in. This catches the "I forgot to mount a volume" class of bug.
5. Back up the Postgres volume. Restore it to a different directory. Spin up a second Authentik stack against the restored volume with the same secret key. Verify it shows the same users and apps. This is your disaster-recovery rehearsal.
Step 5 is the one people skip. Do it once, before you have 30 users and 12 apps in there. Better to discover a backup-restore problem on day one with one test user than during a real incident.
Closing the loop
This Authentik stack has been the identity layer for my own infrastructure for about two years, and for several agency deployments for somewhat less. The setup time is one evening including the security hardening, the recurring cost is a 4 to 8 EUR / month VPS, and the operational burden is one daily backup job and a quarterly update window.
The thing self-hosting an identity provider has actually given me, beyond the SSO and the audit trail, is the discipline of thinking about access as a layer rather than a per-app afterthought. When auth lives in one place, you stop tolerating “we’ll remove their account next week” and start treating offboarding as a single atomic action. The companion read for that side of it is the human element in cybersecurity defense post: most incidents are people problems, and an identity provider you understand is one fewer place for a person problem to hide.