Peggle Deluxe and Peggle Nights are awesome mouse-only (except for typing your name, I think) casual games!
conrad82
I do the same!
I have a provider that is not supported by caddy, but I can still use it via duckdns delegation!
https://github.com/caddy-dns/duckdns?tab=readme-ov-file#challenge-delegation
Challenge delegation
To obtain a certificate using ACME DNS challenges, you'd use this module as described above. But if you have a different domain (say, my.example.com) CNAME'd to your Duck DNS domain, you have two options:
- Not use this module: use a module matching the DNS provider for my.example.com.
- Delegate the challenge to Duck DNS.
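Per the linked README, the delegation option looks roughly like this in a Caddyfile (the domain, Duck DNS subdomain, and token env var are placeholders; `override_domain` is the relevant option as I understand it):

```Caddyfile
my.example.com {
	tls {
		dns duckdns {env.DUCKDNS_API_TOKEN}
		# answer the DNS challenge on the CNAME target hosted at Duck DNS
		override_domain myproject.duckdns.org
	}
}
```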
I run Proxmox, with Proxmox Backup Server in a VM. The PBS backup is encrypted locally, and I upload it to Backblaze B2 using rclone in a cron job. I store the decryption key elsewhere
It has worked OK for me. I also upload a heartbeat file: just an empty file with today's date (touch heartbeat), so that I can easily check when the last upload happened
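The cron job could look something like this (paths, remote name, and bucket are hypothetical, just a sketch of the idea):

```cron
# nightly: refresh the heartbeat file's date, then upload the encrypted PBS datastore to B2
0 3 * * * touch /mnt/pbs-backup/heartbeat && rclone sync /mnt/pbs-backup b2:my-bucket/pbs
```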
Me too. I use uptime kuma to send the API request; then I also get uptime status 🙂
Yes, it is correct. TL;DR: threads run code one at a time but can access the same data; processes are like running Python many times and can run code simultaneously, but sharing data is cumbersome.
If you use multiple threads, they all run in the same Python instance and can share memory (i.e. objects/variables can be shared). Because of the GIL (explained in another comment), the threads cannot run at the same time. This is OK if you are IO bound, but not CPU bound
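A minimal sketch of that shared memory (the names here are just illustrative):

```python
import threading

# Threads share the interpreter's memory, so they can all append
# to the same list. The GIL means only one thread runs Python
# bytecode at a time, so this helps IO-bound work, not CPU-bound work.
results = []
lock = threading.Lock()

def worker(n):
    with lock:  # the lock protects the shared list
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # → [0, 1, 4, 9]
```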
If you use multiprocessing, it is like running Python (from the terminal) multiple times. There is no shared memory, and there is a large overhead since you have to start up Python many times. But if you have large calculations that take a long time and can be done in parallel, it will be much faster than threads, as it can use all CPU cores.
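A minimal multiprocessing sketch (the function and pool size are just examples):

```python
import multiprocessing

def square(n):
    return n * n

if __name__ == "__main__":
    # Each worker is a separate Python process with its own memory,
    # so CPU-bound work can actually run in parallel on multiple cores.
    with multiprocessing.Pool(processes=2) as pool:
        print(pool.map(square, range(5)))  # → [0, 1, 4, 9, 16]
```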
If these processes need to share data, it is more complicated. You need special mechanisms to share data, like queues and pipes. If you need to share many MB of data, this takes a lot of time in my experience (tens of milliseconds).
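A small sketch of passing data between processes with a queue (names are illustrative; note the data gets pickled and copied, which is where the overhead for large payloads comes from):

```python
import multiprocessing

def producer(q):
    # The list is pickled and sent through the queue to the parent process.
    q.put([i * 2 for i in range(5)])

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=producer, args=(q,))
    p.start()
    print(q.get())  # → [0, 2, 4, 6, 8]
    p.join()
```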
If you need to do large calculations, numpy functions or numba may be faster than multiple processes, thanks to good optimizations. But if you need to crunch a lot of data, multiprocessing is usually the way to go
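For comparison, a tiny vectorized numpy sketch (assuming numpy is installed; the array size is arbitrary):

```python
import numpy as np

# A vectorized numpy call runs the loop in optimized C code,
# which often beats both threads and a process pool for numeric work.
a = np.arange(1000, dtype=np.float64)
total = float(np.sum(a * a))  # sum of squares, no Python-level loop
print(total)  # → 332833500.0
```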
If I remember correctly, I just replaced gitea with forgejo in the image: line of my docker-compose, and it just worked
That was a couple of versions back, so I don't know if that still works
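A rough sketch of the swap in docker-compose (volumes, ports, and tags here are hypothetical; the image: line was the only change):

```yaml
services:
  server:
    # before: image: gitea/gitea:1.20
    image: codeberg.org/forgejo/forgejo:1.20  # tag is only an example
    volumes:
      - ./data:/data
    ports:
      - "3000:3000"
```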
I'm using leng in a dedicated LXC container in Proxmox
https://github.com/cottand/leng
I'm using defaults + some local dns lookups. Works fine for my use, and lighter than pihole. No web ui
Which apps are you testing?
I set up minio s3 for testing myself, but found that most of my docker services don't really support it. So I went back to good old folders
I use nforwardauth. It is simple, but only supports username/password
Firefox because I like the UI and I think chrome has gotten too dominant.
Brave if I need to chromecast something
I don't use multiple users or ldap, but miniflux supports many users. And based on this pull request it seems to have the necessary interface for ldap?
https://github.com/miniflux/v2/pull/570
I enjoy and recommend miniflux for RSS reading. I have used it for a long time now together with the Flux News Android app. I also use the save integration with wallabag sometimes.
I use homebox and it has been good for my home use case. I have put QR codes on boxes to easily check contents from my phone
https://github.com/sysadminsmedia/homebox