I have been running Immich self-hosted photo backup for my own family library and a couple of clients since the project hit beta in 2023, and it is the first piece of self-hosted software I genuinely trust with photos I cannot afford to lose. This post is the stack I actually deploy: the official Compose file, Caddy as the reverse proxy, the storage and backup layout, and the parts the upstream install guide skips over.
Self-hosted photo apps are a graveyard of half-finished projects. PhotoPrism is fine but the mobile apps lag the web UI by a year. Piwigo feels like 2008. Immich is the first one where my partner can pick up the iPhone, take a photo, and trust that it will land on a server I control before the day is out. That is the entire bar, and most projects fail it.
Roughly 45 minutes from a hardened VPS to a working Immich with TLS and mobile uploads. The slow part is the first ML scan of an existing library, which for my 80,000-photo archive ran 18 hours on a CX32.
Why self-host photos instead of paying for Google
Most people should not. Google Photos at 2 EUR per month for 100GB or 10 EUR per month for 2TB gives you a battle-tested upload pipeline, ML that works on the first try, and someone else’s pager rotation when a disk fails in Iowa. If your only requirement is “phone photos back up automatically”, pay the bill and walk away.
Self-hosting Immich earns its keep when one of these is true:
- Privacy actually matters to you in a concrete way. Photos of your kids are training data for whoever holds them. If you have an opinion about that, Immich is the only self-hosted photo app whose UX does not punish you for it.
- Per-seat economics break. A family of four on Google One Family at the 2TB tier is 100 EUR per year. The math flips once you cross the 5TB Google tier or share the server with extended family.
- You are already running a self-hosted stack. If you have Nextcloud AIO for files and Authentik for SSO, Immich slots in as the photo layer with shared identity and the same backup target.
- You hit Google’s free-tier compression. Anything Google considers “high quality” is recompressed. For a family archive that is fine; for a wedding photographer’s portfolio it is not.
If none of those apply, pay Google and skip the rest of this post.
Prerequisites for the Immich self-hosted installation
Non-negotiables before any of this lands on a real server:
- A hardened Linux host. SSH keys only, no root login, UFW deny-by-default. My Linux server security fundamentals post is the baseline I run on every fresh box.
- A real domain with DNS access. The Immich mobile app refuses to work over plain HTTP for upload and push, so you need a hostname with valid TLS. Cloudflare DNS is the pragmatic default.
- Server sizing. 4GB RAM is the floor for a single user under 200GB; 8GB if you want ML to run without swap thrashing or you are storing more than 500GB. CPU matters most during the initial ML scan.
- Storage planning. Two phones on auto-upload will add 50 to 100GB a year before videos. Plan at 3 to 5 times the current library size on a dedicated block volume.
I run my production Immich on a Hetzner CX32 (4 vCPU, 8GB RAM) with a 500GB attached volume mounted at /srv/docker/immich/library. That handles a family of three, around 80,000 items, with face recognition and CLIP search enabled.
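If you are attaching a block volume the same way, the mount is a five-minute job. A minimal sketch of my layout, assuming the volume shows up as /dev/sdb (check lsblk first; on Hetzner the stable /dev/disk/by-id/ path is the safer thing to put in fstab):
# Format the attached volume and mount it where Immich will store originals.
# /dev/sdb is a placeholder -- confirm the device name with lsblk before formatting.
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /srv/docker/immich/library
echo '/dev/sdb /srv/docker/immich/library ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
sudo mount -a
df -h /srv/docker/immich/library   # should show the new 500GB volume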
The Immich Docker Compose stack
Immich ships an official Compose file that is the canonical way to install. The upstream version is at https://immich.app/docs/install/docker-compose and you should pull from there at install time rather than copy-paste a static snippet from a blog post. The structure looks like this:
name: immich

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:release
    extends:
      file: hwaccel.transcoding.yml
      service: cpu
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    ports:
      - 2283:2283
    depends_on:
      - redis
      - database
    restart: always
    healthcheck:
      disable: false

  immich-machine-learning:
    container_name: immich_machine_learning
    image: ghcr.io/immich-app/immich-machine-learning:release
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: always
    healthcheck:
      disable: false

  redis:
    container_name: immich_redis
    image: docker.io/valkey/valkey:8-bookworm
    healthcheck:
      test: redis-cli ping || exit 1
    restart: always

  database:
    container_name: immich_postgres
    image: ghcr.io/immich-app/postgres:14-vectorchord0.3.0-pgvectors0.2.0
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_INITDB_ARGS: '--data-checksums'
    volumes:
      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
    restart: always

volumes:
  model-cache:
The companion .env file holds the secrets and paths:
UPLOAD_LOCATION=/srv/docker/immich/library
DB_DATA_LOCATION=/srv/docker/immich/postgres
TZ=Europe/Bratislava
IMMICH_VERSION=release
DB_PASSWORD=replace_with_a_long_random_string
DB_USERNAME=postgres
DB_DATABASE_NAME=immich
The values that matter:
- UPLOAD_LOCATION is where every original photo and video lands. This is the directory you back up. Put it on the dedicated volume, not the OS disk.
- DB_DATA_LOCATION is the Postgres data directory. Same disk as the upload location is fine; just make sure it is on the volume that gets snapshotted.
- DB_PASSWORD needs to be long and random. Generate it with openssl rand -base64 32 and never reuse a password from another service.
- IMMICH_VERSION=release pins to the latest stable. I run release in production and only switch to v1.x.y pinning when an upgrade has burned me.
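For completeness, here is the bootstrap I run before the first start, pulling the upstream files rather than trusting a blog snippet. The release-asset URLs are the ones the official guide points at as of this writing; verify them against the docs when you install:
sudo mkdir -p /srv/docker/immich/library /srv/docker/immich/postgres
cd /srv/docker/immich
# Fetch the current upstream Compose file and env template
curl -fsSL -O https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
curl -fsSL -o .env https://github.com/immich-app/immich/releases/latest/download/example.env
# Point the paths at the volume and set a long random DB password
sed -i 's|^UPLOAD_LOCATION=.*|UPLOAD_LOCATION=/srv/docker/immich/library|' .env
sed -i 's|^DB_DATA_LOCATION=.*|DB_DATA_LOCATION=/srv/docker/immich/postgres|' .env
sed -i "s|^DB_PASSWORD=.*|DB_PASSWORD=$(openssl rand -base64 32 | tr -d '/+=')|" .env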
Bring it up:
docker compose -f /srv/docker/immich/docker-compose.yml up -d
The first start pulls about 4GB of images and takes a few minutes on a typical VPS connection. Once it is up, Immich listens on port 2283. Open http://server-ip:2283 in a browser, create the admin account, and you are ready to put a proxy in front of it.
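Before wiring up the proxy, a quick sanity pass: every container healthy, the web UI answering locally.
cd /srv/docker/immich
docker compose ps                                                  # all four services should be Up (healthy)
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:2283     # expect 200
docker compose logs --tail=50 immich-server                        # watch for migration errors on first boot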
Reverse proxy with Caddy
Immich works fine behind any modern proxy (Nginx Proxy Manager, Traefik, plain Nginx). I run Caddy specifically for the photo box because the config is one paragraph and the auto-TLS is bulletproof. If you already run NPM for the rest of your stack, by all means use it; the proxy host config is a one-liner there too.
Install Caddy:
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update && sudo apt install caddy
Edit /etc/caddy/Caddyfile:
photos.example.org {
    reverse_proxy localhost:2283

    # Mobile uploads can be large. Bump the body limit.
    request_body {
        max_size 50000MB
    }
}
Reload:
sudo systemctl reload caddy
That max_size line matters. A 4K video from a phone can easily clear 1GB, and a body limit that is too low rejects it with a 413. I once spent an afternoon debugging “uploads work for photos but fail for videos” before tracing the 413 back to the body limit.
Open the firewall:
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
After Caddy provisions a Let’s Encrypt cert (usually within 30 seconds), https://photos.example.org should show the Immich login.
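Worth confirming the certificate actually landed before moving on to the phone:
curl -sI https://photos.example.org | head -n 1                        # expect HTTP/2 200
sudo journalctl -u caddy --since "15 min ago" | grep -i certificate    # issuance lines from the ACME flow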
Warning: Port 2283 is the Immich server port and should never be exposed directly to the public internet, and the same goes for Postgres on 5432 (which the stock Compose file sensibly does not publish at all). A ufw deny 2283/tcp rule alone is not enough, because Docker publishes ports with its own iptables rules that sit in front of UFW; bind the port to loopback instead, as sketched below, so traffic only reaches Immich through Caddy.
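The change is one line in the Compose file plus a container recreate. This is my approach, not an upstream-mandated setting:
# In docker-compose.yml, publish the server port on loopback only so Caddy
# (on the same host) can reach it while the public interface cannot:
#
#     ports:
#       - 127.0.0.1:2283:2283
#
# Recreate the container and confirm the bind:
cd /srv/docker/immich
docker compose up -d immich-server
ss -tlnp | grep 2283    # should show 127.0.0.1:2283, not 0.0.0.0:2283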
Mobile app and the first ML pass
Install the Immich mobile app from the App Store or Play Store. On first launch, point it at https://photos.example.org (no trailing slash, no port). Sign in with the admin account, then enable backup for the Camera and Screenshots albums. The first sync of an existing library will take hours; let the phone charge overnight on Wi-Fi.
Once photos land, the machine-learning container kicks in: Smart Search (CLIP) builds embeddings for natural-language queries like “snow on mountains”, and face detection clusters faces for you to name.
For my 80,000-item library, the initial pass took about 18 hours on a CX32 with the ML container pinned to two vCPUs. CPU sat at near 100% the whole time. Schedule the import for a weekend. After the first pass, ongoing ML work for new uploads is invisible: a few seconds per photo, processed in the background.
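The pinning itself is a small Compose override. A sketch of how I cap the ML container using the Compose-spec cpus attribute; adjust the number to whatever your box can spare:
# docker-compose.override.yml is picked up automatically when you run
# `docker compose` from the stack directory without -f flags.
cat > /srv/docker/immich/docker-compose.override.yml <<'EOF'
services:
  immich-machine-learning:
    cpus: 2.0
EOF
cd /srv/docker/immich
docker compose up -d immich-machine-learning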
Backups, the part the install guide skims over
Immich’s data is two things: the upload directory (UPLOAD_LOCATION) holding the originals, and the Postgres database (DB_DATA_LOCATION) holding metadata, face clusters, embeddings, and albums. You need both to restore. Backing up only one is worse than backing up neither because it gives you false confidence.
What I run for my own family library:
- Nightly Borg snapshots of the library and Postgres directories to a Hetzner Storage Box. Encrypted, deduplicated, retention of 7 daily, 4 weekly, 12 monthly.
- Weekly pull to a Synology at home over WireGuard. The off-site copy that survives “Hetzner Falkenstein takes a lightning strike”. I use Wireguard Easy for the tunnel.
- Postgres dump before upgrades. A pg_dump before a major version bump has saved me twice. Five seconds, known-good rollback target.
The non-negotiable: rehearse the restore on a second VPS before you trust the backup. Spin up a fresh Immich, point it at a restored snapshot, log in, verify photos render. Do this once before you have 80,000 family photos on the line.
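The rehearsal itself is short. A rough sketch, with the Borg repo URL and archive name as placeholders for whatever your setup actually uses:
# On a scratch VPS with Docker and Borg installed. Repo URL and archive
# name are placeholders; the compose file and .env must also be in place.
cd /
borg extract ssh://u123456@u123456.your-storagebox.de:23/./immich::nightly-2025-01-06 \
    srv/docker/immich/library srv/docker/immich/postgres
cd /srv/docker/immich && docker compose up -d
# Then log in, open a few old albums, play a video, run a search.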
For the Postgres dump:
docker exec -t immich_postgres pg_dumpall -c -U postgres > immich-pg-$(date +%F).sql
Throw that file in your Borg repository alongside the data directory.
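Tied together, the nightly job is a short script under cron or a systemd timer. A sketch of the shape, with the repo URL, passphrase handling, and retention numbers standing in for whatever you actually run:
#!/usr/bin/env bash
# Nightly Immich backup: dump Postgres, then snapshot everything with Borg.
set -euo pipefail

export BORG_REPO='ssh://u123456@u123456.your-storagebox.de:23/./immich'   # placeholder
export BORG_PASSPHRASE="$(cat /root/.borg-passphrase)"                    # placeholder

# Known-good database dump next to the data it describes
docker exec -t immich_postgres pg_dumpall -c -U postgres \
    > /srv/docker/immich/immich-pg-$(date +%F).sql

borg create --stats --compression zstd \
    ::nightly-{now:%Y-%m-%d} \
    /srv/docker/immich/library \
    /srv/docker/immich/postgres \
    /srv/docker/immich/immich-pg-$(date +%F).sql

# Retention matching the 7/4/12 scheme above
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 12
rm -f /srv/docker/immich/immich-pg-$(date +%F).sql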
Sizing reality and security baseline
Numbers from my production setup so you can calibrate:
- 80,000 items, 350GB on disk, family of three: roughly 2.5GB RAM steady, 4GB during the daily ML window, 6GB during a major import. The 8GB CX32 has plenty of headroom.
- First ML scan: the two vCPUs allotted to the ML container pinned at 95 to 100% for 18 hours. Near-idle after that.
- Storage growth: 350GB at start of year one, 480GB by end of it, with two phones on auto-upload and one occasional camera dump.
The security model is mostly about access control:
- Strong admin password and a second factor. Immich leans on OIDC for 2FA rather than shipping its own TOTP, so if you want a second factor on the admin account, put it behind an identity provider like Authentik.
- Disable public registration in the admin settings. The default is disabled, but verify.
- Watch the API surface. Immich’s API is meant for your devices, not the public. Caddy handles TLS; if you are paranoid, wrap the instance in a VPN like Mistborn and only let the mobile app reach it through the tunnel.
- Keep Postgres off the host network. The Compose file does this correctly; just do not expose 5432 “for debugging” and forget to undo it.
I also wire the proxy host into Uptime Kuma with a 5-minute HTTP check so I find out about an outage before my partner does.
When to walk away
If after reading this you are not sure self-hosting Immich is right for you, the honest answer is probably no. Google Photos is excellent. iCloud Photos is excellent. The teams behind both have engineering budgets that exceed the entire self-hosted ecosystem combined.
The people who get value out of self-hosted Immich already run other self-hosted services, have a specific privacy reason that survives the operational tax, or hit Google’s storage tiers hard enough that the math flips. For everyone else, pay the 2 EUR a month and put the engineering brain somewhere else.
For those who decided the trade is worth it, the stack above is the working version. Pair it with Authentik for SSO if you have multiple users, Uptime Kuma watching the proxy, and a real off-site backup, and you have the spine of a private photo cloud that holds up.
Recurring cost on my setup is roughly 12 EUR per month for the VPS, the volume, and the Storage Box. The monthly operational burden is checking the backup log on the first Monday of the month and applying the Immich update when the release notes look clean. A few minutes a month, in exchange for a photo library that does not phone home.