Have been using Neo Launcher since it had the features I needed from Nova (mostly hiding most apps from the app list while having them on the home screen in some folder so that it isn't a mess when you want to find something specific). It hasn't been updated in a while, but it works perfectly fine for me.
But 22301 isn't prime? It's 29*769.
A piece of plastic broke off from my laptop once. It was supposed to hold one of the two screws fixing the cover of the RAM & drive section and now there was just a larger round hole. I've measured the hole and the screw, designed a replacement in Blender (not identical, I wanted something more solid and reliable) and printed it; took two attempts to get the shape perfectly right. Have had zero issues with it in all these years.
Thanks! I now see that Tai Chi is mentioned frequently online in the context of the film, unlike yoga, so that should be right; it narrows things down.
KOReader supports custom CSS. You can certainly change the background colour with it, I think a grid should be possible too.
That's the ones, the 0414 release.
QWQ-32B for most questions, llama-3.1-8B for agents. I'm looking for new models to replace them though, especially the agent one.
Want to test the new GLM models, but I'd rather wait for llama.cpp to definitely fix the bugs with them first.
What I've ultimately converged to without any rigorous testing is:
- using Q6 if it fits in VRAM+RAM (anything higher is a waste of memory and compute for barely any gain), otherwise either some small quant (rarely) or ignoring the model altogether;
- not really using IQ quants - as far as I remember they depend on a calibration dataset, and I don't want the model's behaviour to be affected by some additional dataset;
- other than the Q6 thing, in any trade-offs between speed and quality I choose quality - my usage volumes are low and I'd better wait for a good result;
- I load as much as I can into VRAM, leaving 1-3GB for the system and context.
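The "Q6 if it fits in VRAM+RAM" rule above can be sketched as a back-of-the-envelope check. This is only an estimate: the ~6.56 bits-per-weight figure for Q6_K is an assumption (actual ggml quant sizes vary by tensor layout), and the function names are made up for illustration:

```python
def fits_q6(params_billions: float, vram_gb: float, ram_gb: float,
            reserve_gb: float = 2.0, bits_per_weight: float = 6.56) -> bool:
    """Rough check: does a Q6-quantised model fit in VRAM+RAM,
    leaving reserve_gb for the system and context?
    bits_per_weight ~6.56 for Q6_K is an assumption, not a spec."""
    model_gib = params_billions * 1e9 * bits_per_weight / 8 / 1024**3
    return model_gib <= vram_gb + ram_gb - reserve_gb

# e.g. a 32B model at Q6 is roughly 24-25 GiB, so it fits in
# 24 GB VRAM + 32 GB RAM but not in 8 GB VRAM + 16 GB RAM:
print(fits_q6(32, 24, 32))  # True
print(fits_q6(32, 8, 16))   # False
```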
Maybe some Borges too?
I knew the Horn of Plenty was a good choice, but I didn't think it was that good. Thanks!

Should be doable with Termux (the `termux-sms-list` and `termux-sms-send` commands); `termux-sms-list` returns messages in JSON, which is easy enough to handle with, say, `jq` in bash or `json` in Python. The script itself can be a simple loop that fetches the latest messages every few minutes, filters for unprocessed ones from whitelisted numbers and calls `termux-sms-send`. Maybe it'd make sense to daemonise the script and launch it via `sv`. But the Termux app weighs quite a bit itself.
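A minimal sketch of such a loop in Python. The whitelist, polling interval, reply text, and the JSON field names (`number`, `received`) are assumptions for illustration; `termux-sms-list`/`termux-sms-send` need the Termux:API add-on installed, so only the pure filtering step runs anywhere:

```python
import json
import subprocess
import time

WHITELIST = {"+15551234567"}   # hypothetical whitelisted numbers
POLL_SECONDS = 180
REPLY = "Got it."              # hypothetical auto-reply text

def pick_unprocessed(messages, seen_keys, whitelist):
    """Pure filtering step: keep messages from whitelisted numbers
    that we haven't handled yet, remembering them in seen_keys."""
    fresh = []
    for m in messages:
        key = (m.get("number"), m.get("received"))
        if m.get("number") in whitelist and key not in seen_keys:
            seen_keys.add(key)
            fresh.append(m)
    return fresh

def run_loop():
    """Fetch, filter, reply, sleep - meant to run inside Termux."""
    seen = set()
    while True:
        out = subprocess.run(["termux-sms-list", "-l", "20"],
                             capture_output=True, text=True, check=True)
        for msg in pick_unprocessed(json.loads(out.stdout), seen, WHITELIST):
            subprocess.run(["termux-sms-send", "-n", msg["number"], REPLY],
                           check=True)
        time.sleep(POLL_SECONDS)
```

On the phone you'd call `run_loop()` from a Termux session (or an `sv` service, as mentioned above); note that `seen` lives in memory only, so a real daemon would want to persist it across restarts.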