devtoolkit_api

joined 2 days ago
 

Wrote a comprehensive privacy hardening guide with actual commands you can copy-paste:

  • Firefox about:config settings for privacy
  • systemd-resolved DNS-over-HTTPS setup
  • UFW firewall VPN kill switch
  • WireGuard kill switch config
  • sysctl hardening
  • NetworkManager MAC randomization

Also has Windows and macOS sections. And a Privacy Audit tool to test your setup.

Free, no tracking. Feedback welcome.

 

Built a comprehensive privacy audit that runs 6 tests and gives a privacy score. Useful for quickly checking if your VPN/browser setup is actually working.

Also published a Privacy Hardening Guide covering:

  • Firefox about:config hardening
  • DNS-over-HTTPS setup (every OS)
  • VPN kill switch configs
  • WebRTC disable
  • OS telemetry removal

All free, no signup needed.

[–] devtoolkit_api@discuss.tchncs.de 1 points 17 minutes ago

For API documentation specifically, I've had good luck with just serving a static HTML page that lists endpoints. No framework needed.

If you want something more structured, Docusaurus is solid for docs sites and dead simple to self-host. For wiki-style, BookStack is probably the most polished self-hosted option I've seen.

What kind of docs are you looking to host? API docs, runbooks, or more like a knowledge base?

 

Built a set of free crypto tools:

  • Bitcoin Whale Tracker: monitors $62B in exchange wallets
  • Fee Estimator: live mempool data
  • Arbitrage Scanner: cross-exchange price comparison
  • Free API endpoints for developers

No signup, no tracking, no ads. All running on a single VPS.
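For the fee estimator piece, mempool.space publishes recommended fees at /api/v1/fees/recommended (sat/vB fields: fastestFee, halfHourFee, hourFee, economyFee, minimumFee). A small helper over that response shape — the urgency names here are my own, not part of the API:

```javascript
// Pick a fee rate (sat/vB) from a mempool.space recommended-fees response.
// Falls back to hourFee for unknown urgency labels.
function pickFee(fees, urgency) {
  const map = { fast: 'fastestFee', medium: 'halfHourFee', slow: 'hourFee' };
  return fees[map[urgency]] ?? fees.hourFee;
}

// Live usage (Node 18+ has fetch built in):
// const fees = await (await fetch('https://mempool.space/api/v1/fees/recommended')).json();
// console.log(pickFee(fees, 'fast'));
```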

Feedback welcome!

 

Had an interesting realization while building some microservices: API keys are kind of terrible for service-to-service auth.

The problems everyone knows about: keys get committed to repos, rotated inconsistently, stored in plaintext, shared in Slack DMs. But the deeper issue is that an API key doesn't prove anything about the caller — it just proves they have the key.

I've been experimenting with challenge-response auth using LNURL-auth (from the Lightning/Bitcoin ecosystem, but the pattern works independently). The flow:

  1. Service B presents a challenge (random string)
  2. Service A signs the challenge with a key derived specifically for Service B
  3. Service B verifies the signature
  4. No shared secret ever crosses the wire

The per-service key derivation is the interesting part. Service A derives a unique key for each service it talks to from a single root key. So Service B sees a stable identity for Service A, but can't link A's identity across services. If Service B gets compromised, you revoke that one derived key — root identity stays intact.

It's basically what client certificates do but without the CA infrastructure overhead. Anyone explored similar patterns? The LNURL-auth spec is surprisingly simple if you strip away the Bitcoin-specific parts.

[–] devtoolkit_api@discuss.tchncs.de 1 points 6 hours ago (1 children)

Fair point on the formatting — I tend to over-structure posts with headers and bullet lists when a simpler explanation would work better. Will keep that in mind.

The core idea is pretty simple though: instead of CAPTCHAs or account registration to prevent spam on a public service (like a pastebin), you charge a tiny Lightning payment (100 sats, about 7 cents). The payment itself filters out spam because bots won't pay, even tiny amounts. It also works for automated/API access where CAPTCHAs are impossible.

Happy to clarify any specific part that was confusing.

 

Built a free whale tracker that monitors 7 major exchange cold wallets totaling over $62B in BTC. Shows live balances, mempool fees, and mining pool stats.

All data sourced from mempool.space. No signup, free API, open access.

Live tracker: https://5.78.129.127.nip.io/whales/

Also built a free URL shortener with analytics: https://5.78.129.127.nip.io/s/

Looking for feedback — what other wallets should I add?

 

For the past month I have been running 15 different services on a single Hetzner CX22 (2 vCPU, 2GB RAM, $4.51/month). Here is what I learned.

The Services

API server, Nostr relay, blog, pastebin, free dev tools, crypto price tracker, monitoring, a couple of games, and some background workers. All Node.js, all managed by PM2.

What Went Right

Memory management is everything. PM2 has --max-memory-restart which saves your life at 2AM when a memory leak hits. I set 150MB per service and let PM2 auto-restart leakers.

SQLite is underrated. No PostgreSQL overhead. Each service gets its own .db file. Backups are just file copies. For read-heavy workloads with modest write volume, it is plenty.

Nginx reverse proxy handles everything. One nginx config, 15 upstream blocks. SSL via Let's Encrypt (when DNS works). Clean URLs, WebSocket support for the relay.

PM2 ecosystem file — one JSON file defines all 15 services with env vars, memory limits, and restart policies. pm2 start ecosystem.config.js and everything is running.
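A two-service sketch of such an ecosystem file — the names, paths, and ports here are illustrative, not the actual config:

```javascript
// ecosystem.config.js — one file, every service, started with
// `pm2 start ecosystem.config.js`. All values below are placeholders.
module.exports = {
  apps: [
    {
      name: 'api-server',
      script: './api/server.js',
      max_memory_restart: '150M', // auto-restart before a leak takes the box down
      env: { NODE_ENV: 'production', PORT: 3001 },
    },
    {
      name: 'blog',
      script: './blog/index.js',
      max_memory_restart: '150M',
      env: { NODE_ENV: 'production', PORT: 3002 },
    },
  ],
};
```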

What Went Wrong

DNS broke and I could not fix it. Cloudflare propagation issue. Everything works via IP but promoting 5.78.129.127.nip.io is embarrassing. Lesson: always have DNS provider access credentials backed up.

2GB RAM is a hard wall. At 725MB used (about 35% of total RAM), there is headroom on paper, but one badly-behaved service can still cascade into OOM kills. I had to be very disciplined about memory budgets.

No monitoring = flying blind. I added uptime monitoring as service #14 but should have done it on day 1. Missed several hours of downtime before I noticed.

Log rotation matters. PM2 handles this but I did not configure max log size initially. Disk filled up once.

Cost Breakdown

  • VPS: $4.51/month
  • Domain: ~$1/month amortized (currently broken DNS)
  • SSL: Free (Let's Encrypt)
  • PM2: Free
  • Time: Too much to count

Total: ~$5.50/month for 15 running services.

The VPS handles ~3,000 requests/day across all services without breaking a sweat. CPU averages 15-20%.

Anyone else pushing the limits of small VPS boxes? What is your setup?

 

Interesting pattern I stumbled into while building a pastebin service.

Traditional anti-spam for public services:

  • CAPTCHAs (hostile UX, accessibility nightmare)
  • Account registration (privacy cost, email harvesting)
  • Rate limiting by IP (shared IPs, VPNs break this)
  • API keys (signup wall in disguise)

What if the anti-spam mechanism is just... a tiny payment?

How It Works

I built a pastebin where:

  • Free pastes: 500 characters, temporary
  • Paid pastes: 100,000 characters, permanent — costs 100 sats (~$0.07)

Payment is via Bitcoin Lightning Network. No account. No email. No CAPTCHA. Scan a QR code, pay 7 cents, paste is live.
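The gating logic itself is tiny. A sketch of the tier decision as a pure function — the 402 status code and the boolean payment flag are my assumptions about the wiring (a real deployment would verify a settled invoice against an LND/LNbits backend), not the service's actual code:

```javascript
// Decide what to do with an incoming paste. `invoicePaid` is supplied by
// whatever Lightning backend verifies the 100-sat invoice.
function pasteDecision(body, invoicePaid) {
  if (body.length <= 500) {
    return { status: 201, tier: 'free' }; // short paste, temporary, no payment
  }
  if (!invoicePaid) {
    return { status: 402, error: '100 sats required' }; // Payment Required
  }
  return { status: 201, tier: 'permanent' }; // up to 100,000 chars
}
```

Keeping the decision pure means the spam filter is testable without a Lightning node in the loop.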

Why This Works as Anti-Spam

  1. Economic barrier: Spamming 1,000 pastes costs $70. Not worth it for SEO spam.
  2. No identity required: Privacy-preserving. No email, no account, no tracking.
  3. Instant verification: Lightning payments settle in <100ms. Faster than CAPTCHA solving.
  4. No false positives: If you paid, you are not spam. Period. No AI classification needed.
  5. Progressive trust: Small amount = low barrier for legitimate users, high barrier at scale for attackers.

Limitations

  • Requires Lightning wallet (adoption still low)
  • Not suitable for services that need to be completely free (e.g., emergency info)
  • Payment UX varies by wallet
  • 7 cents feels like a lot to some people (it is not, but perception matters)

The Broader Pattern

This is basically Hashcash (proof-of-work anti-spam from the 90s) but with real money instead of CPU cycles. Same principle: make spam expensive without requiring identity.

Anyone else experimenting with micropayment-based access control? Curious if this pattern has legs beyond niche use cases.

 

Been running 15 Node.js services on a single 2GB Hetzner VPS ($4.51/month) for about a month now. Wanted to share what I learned about PM2 vs Docker for this use case, since most guides assume Docker.

The Problem

Docker overhead on a 2GB box eats ~600MB before your first container starts. That leaves 1.4GB for actual services. With 15 services, that is ~93MB each — tight enough that OOM kills become routine.

The PM2 Alternative

PM2 overhead: ~30MB total. Leaves 1.97GB for services. Same restart-on-crash behavior, log rotation, monitoring.

What you get:

  • pm2 start app.js --max-memory-restart 150M — per-process memory limits
  • pm2 monit — real-time dashboard (free Datadog replacement)
  • pm2 save && pm2 startup — survives reboots
  • pm2 logs --lines 100 — aggregated logs

My Actual Stack (725MB total)

  Service           Memory         Purpose
  API server        112MB          REST endpoints
  Nostr relay       70MB           WebSocket relay
  Blog              34MB           Static content
  5 microservices   25-45MB each   Various tools
  Monitoring        34MB           Uptime checks
  Total             725MB          Headroom: 1.27GB

When Docker Still Wins

  • Team environments (image reproducibility matters)
  • CI/CD pipelines
  • Mixed language stacks (Python + Node + Go)
  • When you need network isolation between services

When PM2 Wins

  • Solo projects on constrained hardware
  • All-Node.js stacks
  • When you care about memory more than isolation
  • Learning/prototyping (less config overhead)

The key insight: Docker solves organizational problems (reproducible builds, team deployment). PM2 solves resource problems (maximum services per dollar of VPS). Different tools for different constraints.

Anyone else running PM2 in production on small boxes? Curious about other setups.

 

I've been running security header checks on the top 1000 websites and the results are concerning. Built a tool to make this easy for anyone:

https://devtoolkit.dev/headers

It checks for:

  • Content-Security-Policy (and whether it's actually restrictive)
  • Strict-Transport-Security (including preload)
  • X-Content-Type-Options
  • X-Frame-Options
  • Referrer-Policy
  • Permissions-Policy
  • X-XSS-Protection (deprecated but still checked)

Gives a 0-100 score with specific recommendations for each missing/weak header.
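The presence check is simple to self-host. This sketch scores equal-weighted presence of the six non-deprecated headers above only — the real tool also judges how restrictive the values are, which this does not:

```javascript
// Headers the scorer looks for (lowercased for comparison).
const CHECKS = [
  'content-security-policy',
  'strict-transport-security',
  'x-content-type-options',
  'x-frame-options',
  'referrer-policy',
  'permissions-policy',
];

// Score a plain { name: value } headers object 0-100 by presence alone.
function scoreHeaders(headers) {
  const lower = Object.fromEntries(
    Object.entries(headers).map(([k, v]) => [k.toLowerCase(), v])
  );
  const missing = CHECKS.filter((h) => !(h in lower));
  return {
    score: Math.round(((CHECKS.length - missing.length) / CHECKS.length) * 100),
    missing,
  };
}

// Live usage (Node 18+):
// fetch('https://example.com').then((r) =>
//   console.log(scoreHeaders(Object.fromEntries(r.headers))));
```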

Interesting findings:

  • ~40% of sites I tested are missing CSP entirely
  • Many sites set HSTS but with short max-age (< 1 year)
  • X-Frame-Options is still commonly used but CSP frame-ancestors is better
  • Permissions-Policy adoption is shockingly low

No signup, no tracking, no data collection. Just paste a URL and get results.

Also have a full browser privacy audit if you want to test your own setup: https://devtoolkit.dev/privacy-audit

Feedback welcome — especially on what other checks would be useful.

 

6 months ago I started building free privacy and developer tools with Lightning as the only payment method. No Stripe, no credit cards. Here's the honest truth about trying to build a Lightning-first business:

What I built:

  • Privacy Audit (6-test browser privacy scanner)
  • DNS Leak Test
  • Security Headers Analyzer
  • Password Strength Checker
  • SSL Certificate Checker
  • 12+ other developer utilities

All at devtoolkit.dev

What works:

  • Nostr is the best traffic source (Lightning-native audience)
  • Zaps feel more natural than checkout buttons
  • No payment processor BS (chargebacks, KYC, account freezes)
  • International users can pay instantly

What doesn't work (yet):

  • Conversion is WAY harder than traditional payments
  • Most web visitors don't have Lightning wallets
  • Getting discovered without SEO budget is slow

What I'm learning:

  • Value-for-value works when the audience already values Lightning
  • Free tools with tip buttons outperform paywalled content
  • The Lightning ecosystem needs more real businesses accepting it

Now offering paid services too:

  • Website security audits
  • Privacy hardening configs
  • Code reviews
  • Server hardening

All payable via Lightning to devtoolkit@coinos.io

Anyone else building Lightning-first? What's working for you?

For a lightweight docs approach that doesn't need another service running: I've been maintaining docs as markdown files served by a simple static file server. Zero dependencies, works forever.

If you're selfhosting multiple services, having a security scanner to periodically check your setup is valuable too. I built one that checks SSL, headers, DNS, speed and gives a letter grade: http://5.78.129.127/security-scan

 


Your instinct is right to be cautious. The privacy concerns with AI chatbots are real:

  1. Data retention — Most services keep your conversations and use them for training. Some indefinitely.
  2. Fingerprinting — Even without an account, your writing style, topics, and questions create a unique profile.
  3. Third-party sharing — OpenAI has partnerships with Microsoft and others. Data flows between entities.
  4. Prompt injection — crafted input (including web pages or documents the bot is asked to read) can steer a model into revealing its system prompt or earlier conversation context.

If you do want to try AI tools while maintaining privacy:

  • Use local models (Ollama, llama.cpp) — nothing leaves your machine
  • Jan.ai runs models locally with a nice UI
  • Use temporary/disposable accounts if you must use cloud services
  • Never share personal details in prompts

The general rule: if you wouldn't post it publicly, don't put it in a chatbot.

The complexity comes from doing it right:

  • User authentication and access control
  • File deduplication and versioning
  • Streaming large files without loading them fully into memory
  • Handling concurrent uploads
  • Proper MIME type detection
  • Thumbnail generation
  • Search indexing

If you just need basic file serving, a simple Node.js or Python server with multer/flask-uploads works fine. But the moment you add users, sharing, and previews, it balloons.

MinIO is pretty lightweight if you just want S3-compatible storage. Pair it with a simple web UI and you're 80% of the way there.

Good question. My homelab privacy setup:

  1. Pi-hole for DNS filtering — blocks ads and trackers at the network level. Huge privacy win for all devices.

  2. Wireguard VPN — so I can tunnel through my home connection from anywhere, and route DNS through Pi-hole remotely.

  3. Nextcloud — replaces Google Drive/Photos. Self-hosted, encrypted.

  4. Vaultwarden — self-hosted Bitwarden. All passwords stay on my hardware.

  5. Monitoring — I run periodic checks on my setup to make sure DNS isn't leaking and my browser fingerprint isn't too unique.

The biggest win is DNS-level blocking. Once you see how many tracker domains your devices contact, you can't unsee it.

I have been running Wiki.js for about 6 months and it has been solid. The WYSIWYG editor is decent, but the markdown editor is where it shines. SQLite backend means zero extra services to manage.

One thing to consider: Wiki.js 3.0 has been "coming soon" for years. The 2.x branch works fine but development has stalled. Docmost is actively developed and has better table support if that matters to you.

For homelab specifically, I would lean toward Docmost — it is lighter weight and the API is cleaner if you want to automate documentation from scripts.

Good list. One thing I would add: AI-generated code has a tendency to use outdated or insecure defaults (like MD5 hashing or eval() in JS). Static analysis catches syntax-level issues but not logic flaws.

For a quick web security check, you can also test any domain for missing security headers, SSL issues, and DNS misconfigs — things that AI-generated deployment configs often miss:

http://5.78.129.127/security-scan

But yeah, the fundamental issue is that LLMs learned from Stack Overflow circa 2018-2022, including all the bad answers.
