The same morons scrape Wikipedia instead of downloading the archive dumps, which can trivially be rendered as web pages locally
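(A minimal sketch of that "download the dump once" idea, assuming the standard dumps.wikimedia.org layout and the `requests` library; the exact file name is illustrative, not prescribed by the thread.)

```python
import requests

# Illustrative: Wikipedia publishes full database dumps that can be fetched
# once and rendered/served locally instead of being scraped page by page.
# URL pattern assumed from the dumps.wikimedia.org "latest" directory.
DUMP_URL = "https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2"

def download_dump(url: str, dest: str) -> None:
    """Stream a dump file to disk in chunks so it never has to fit in memory."""
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as fh:
            for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
                fh.write(chunk)

if __name__ == "__main__":
    download_dump(DUMP_URL, "enwiki-latest-pages-articles.xml.bz2")
```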
Natanael
Like a public service CAPTCHA / BOINC hybrid
Needs to be IPv6, including support for subnets to message multiple devices
It's very cool how these devices find their location, though. When you first boot the system up, it spends about 5 minutes measuring the rotation of the Earth. For this reason, you can't reset it while in motion. Based on what it senses, it can determine your exact location on the surface of the Earth.
That gets you latitude but not longitude, right?
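(For context, a rough sketch of why that's the case, mine rather than from the thread: the tilt of the measured rotation vector relative to local gravity encodes latitude, while longitude never appears in the measurement. The function name and the leveled north/up inputs are assumptions.)

```python
import math

# Earth's sidereal rotation rate, rad/s (well-known constant).
EARTH_RATE = 7.2921159e-5

def latitude_from_rotation(omega_north: float, omega_up: float) -> float:
    """
    Hypothetical gyrocompassing step: given the measured Earth-rotation
    components in a local level frame (horizontal-north and vertical-up,
    both in rad/s), latitude is the angle of the rotation vector above
    the local horizontal plane.
    """
    return math.degrees(math.atan2(omega_up, omega_north))

# Example: at 45° N the rotation vector splits evenly between the axes.
phi = 45.0
omega_n = EARTH_RATE * math.cos(math.radians(phi))
omega_u = EARTH_RATE * math.sin(math.radians(phi))
print(latitude_from_rotation(omega_n, omega_u))  # ~45.0
```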
https://www.science.org/doi/10.1126/sciadv.add3854
https://www.nature.com/articles/s41467-025-58381-6
I'm not gonna do the math, but it seems those fiber ones have a longer path, relatively speaking, so higher latency
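(Doing the rough math anyway: one-way propagation delay is path length times group index over c, so a longer route only loses if the extra distance outweighs any index advantage. The route lengths and the air-guiding index below are illustrative assumptions, not numbers from the linked papers.)

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_latency_ms(path_km: float, group_index: float) -> float:
    """Propagation delay only: length * n / c, ignoring equipment hops."""
    return path_km * group_index / C_KM_PER_S * 1000.0

# Illustrative comparison (made-up route lengths):
# 1000 km in standard silica fiber (n ~ 1.46) vs a 1200 km detour
# in an air-guiding fiber (n ~ 1.0).
print(one_way_latency_ms(1000, 1.46))  # ~4.87 ms
print(one_way_latency_ms(1200, 1.00))  # ~4.00 ms
```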
Nuh-uh, I never visit the same site twice
What was it you said
Even the dogs are FBI agents
That's olidligt ("unbearable", as they said in their Swedish marketing campaign, lol)
You don't want your trailer to bounce like a hiphopper's car?
If you buy potato chips as shock absorbers you'll come home with potato dust
Still tasty though
If they had the slightest bit of survival instinct they'd share an archive.org / Google-ish scraper and web cache infrastructure, and pull from those caches, so everything would be scraped just once and re-scraped only occasionally.
Instead they're building maximally dumb (as in literally counterproductive and self-harming) scrapers that don't know what they're interacting with.
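(A minimal sketch of that shared-cache idea, assuming a plain filesystem cache and the `requests` library; the cache location and TTL are made up for illustration.)

```python
import hashlib
import json
import time
from pathlib import Path

import requests

CACHE_DIR = Path("./shared_scrape_cache")  # assumed shared location, e.g. a network mount
TTL_SECONDS = 7 * 24 * 3600                # re-scrape at most once a week (arbitrary)

def fetch_once(url: str) -> str:
    """Return the page body, hitting the origin server only if the shared cache is stale."""
    CACHE_DIR.mkdir(exist_ok=True)
    key = hashlib.sha256(url.encode()).hexdigest()
    entry = CACHE_DIR / f"{key}.json"

    if entry.exists():
        cached = json.loads(entry.read_text())
        if time.time() - cached["fetched_at"] < TTL_SECONDS:
            return cached["body"]          # cache hit: no request leaves the building

    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    entry.write_text(json.dumps({"fetched_at": time.time(), "body": resp.text}))
    return resp.text
```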
At what point will people start to track down and sabotage AI datacenters IRL?