this post was submitted on 28 Jun 2025
79 points (98.8% liked)

submitted 23 hours ago* (last edited 23 hours ago) by njordomir@lemmy.world to c/selfhosted@lemmy.world

Hello Self-Hosters,

What is the best practice for backing up data from Docker as a self-hoster looking for ease of maintenance and foolproof backups? (pick only one :D )

Assume directories with user data are mapped to a NAS share via NFS and backups are handled separately.

My bigger concern is how you handle all the other stuff that is stored locally on the server: caches, databases, and so on. The backup target will eventually be the NAS, and from there everything will be backed up again to external drives.
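For databases in particular, I assume a plain file copy can catch them mid-write, so something like a dump-before-copy step would be needed first. A sketch of what I mean (the service name `db`, the user, and the paths are just examples from my head):

```bash
# Hypothetical Postgres service named "db" in a compose stack; adjust names and paths.
# Dump to a plain SQL file first so the file-level backup gets a consistent copy.
docker-compose exec -T db pg_dump -U myuser mydb > /mnt/nas/backups/mydb-$(date +%F).sql
```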

  1. Is it better to run `cp -a /var/lib/docker/volumes/* /backupLocation` every once in a while, or is it preferable to define bind mounts for everything under /home/user/Containers and then use a script to sync them to wherever you keep backups (see the sketch after this list)? What pros and cons have you seen or experienced with these approaches?

  2. How do you test your backups? I'm thinking about digging up an old PC to use for restore tests. I assume I can just edit the IP addresses in the Docker Compose files, mount my NFS directories, and fail over to see if everything runs.

  3. I started documenting my system in my notes and making a checklist of what I need to back up and where it's stored. I'm currently trying to figure out whether I want to move some directories for consistency. Can I just run `docker-compose down`, edit the mount points in `docker-compose.yml`, and run `docker-compose up` to get a working system? (Rough sequence sketched below.)
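For reference, here's the kind of sync script I have in mind for the bind-mount approach in question 1. It's only a sketch with assumed paths (/home/user/Containers holding one stack per subdirectory, /mnt/nas/backups as the NFS-mounted target), not something I've tested:

```bash
#!/usr/bin/env bash
# Sketch: stop each compose stack, rsync its bind-mounted data to the NAS, restart.
# STACKS and DEST are assumptions; adjust for your own layout.
set -euo pipefail

STACKS=/home/user/Containers     # one stack per subdirectory, each with a docker-compose.yml
DEST=/mnt/nas/backups/docker     # NFS mount from the NAS

for stack in "$STACKS"/*/; do
    name=$(basename "$stack")
    # Stop the stack so databases and caches are quiescent on disk.
    docker-compose --project-directory "$stack" down
    # -a preserves ownership and permissions; --delete mirrors removals to the target.
    rsync -a --delete "$stack" "$DEST/$name/"
    docker-compose --project-directory "$stack" up -d
done
```

The trade-off as I understand it: stopping containers costs a little downtime per stack, but copying live database files without stopping them risks an inconsistent backup.

And for question 3, the sequence I'm picturing (paths are made up):

```bash
docker-compose down
# Move the data first so it matches the new mount point...
mv /home/user/old-data /home/user/Containers/myapp/data
# ...then edit the volumes: entry in docker-compose.yml to the new path, and bring it back up.
docker-compose up -d
```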

[–] Sinirlan@lemmy.world 9 points 22 hours ago (4 children)

I just took the line of least effort: all my Docker containers are hosted on a dedicated VM in Proxmox, so I just back up the entire VM to my NAS on a weekly basis. I already had to restore it once when I was replacing the SSD in the Proxmox host, and it worked like a charm.
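If it helps, the CLI equivalent of the scheduled job is roughly this; just a sketch, with "nas-backup" standing in for whatever you named the NAS storage in Proxmox and 100 for the VM ID:

```bash
# Snapshot mode backs up the running VM without downtime.
vzdump 100 --storage nas-backup --mode snapshot --compress zstd
```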

[–] njordomir@lemmy.world 2 points 14 hours ago

I miss this from cloud hosting. It's helpful to be able to save, clone, or do whatever with the current machine state and easily jump back to where you were if you mess something up. It might be too much to set up for my current homelab, though. My server does have Btrfs snapshots of everything, accessible directly from GRUB, which has let me roll back a few big screwups here and there.
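For anyone curious, the manual version of one of those snapshots is roughly this, assuming / is a Btrfs subvolume and /.snapshots already exists:

```bash
# Create a read-only snapshot of the root subvolume, named by date.
btrfs subvolume snapshot -r / "/.snapshots/root-$(date +%F)"
```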
