this post was submitted on 14 May 2025
92 points (100.0% liked)

TechTakes

top 21 comments
[–] db0@lemmy.dbzer0.com 18 points 1 week ago (2 children)

It's a constant cat and mouse atm. Every week or so, we get another flood of scraping bots, which force us to triangulate which fucking DC IP range we need to start blocking now. If they ever start using residential proxies, we're fucked.
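The blocking itself is the easy part, for what it's worth; once we've worked out the range, it's just something like this in nginx (203.0.113.0/24 is only the documentation example range, substitute whichever DC range is currently flooding you):

        # drop one abusive datacenter range at the edge
        # (203.0.113.0/24 is the TEST-NET-3 documentation range, not a real offender)
        deny 203.0.113.0/24;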

[–] Irelephant@lemm.ee 13 points 1 week ago (1 children)

I have a tiny neocities website which gets thousands of views a day. There is no way anyone is viewing it often enough for that to be organic.

[–] db0@lemmy.dbzer0.com 13 points 1 week ago (1 children)

quickly, add some ad revenue :P

[–] 01189998819991197253 9 points 1 week ago

From ai vendors. Let them pay you for scraping you lol

[–] self@awful.systems 10 points 1 week ago (1 children)

at least OpenAI and probably others do currently use commercial residential proxying services, though reputedly only if you make it obvious you’re blocking their scrapers, presumably as an attempt on their end to limit operating costs

[–] db0@lemmy.dbzer0.com 6 points 1 week ago (2 children)

Oh, never heard of that. I have blocked their scrapers via user agents, but I haven't felt residential proxy pain.

[–] pikesley@mastodon.me.uk 15 points 1 week ago

@db0 @self Residential Proxy Pain are playing at the Dublin Castle in Camden this Friday, £4 advance, £5 on the door

[–] self@awful.systems 10 points 1 week ago (3 children)
[–] zogwarg@awful.systems 2 points 3 days ago

Infinite-garbage-maze does seem more appealing than "proof-of-work" (the crypto parentage is yuckish enough ^^) as a countermeasure, though I would understand if some would not feel comfortable with direct sabotage, say for example a UN organization.

[–] db0@lemmy.dbzer0.com 8 points 1 week ago

Daym, I should set me up some iocane as well I think

[–] db0@lemmy.dbzer0.com 5 points 1 week ago (1 children)

PS: Looks like that sync issue between our instances is resolved now?

[–] self@awful.systems 2 points 6 days ago

yep, it seems so! I haven’t put the permanent fix for the nodeinfo bug into place yet but it’ll be live as soon as I’m able to give it an appropriate level of testing.

[–] dgerard@awful.systems 10 points 1 week ago* (last edited 1 week ago)

jwz gave the game away, so i'll reveal:

the One Weird Trick for this week is that the bots pretend to be an old version of Chrome, so you can block on the user agent

so I blocked old Chrome from hitting the expensive mediawiki call on rationalwiki and took our load average from 35 (unusable) to 0.8 (schweeet)

caution! this also blocks the archive sites, which likewise pretend to be old Chrome. I refined it to only block the expensive query on mediawiki; adjust as appropriate for your own setup.

nginx code:

        # block some bot UAs for complex requests
        # nginx doesn't do nested if, so we set a test variable instead:
        # if $BOT ends up both Complex and Old, block as a bot
        set $BOT "";

        # "C" = a complex/expensive request
        if ($uri ~* (/w/index.php)) {
            set $BOT "C";
        }

        # "O" = an old browser user agent
        # (the \. is needed so Chrome/10-12 doesn't also match modern Chrome/100-129)
        if ($http_user_agent ~* (Chrome/[2-9])) {
            set $BOT "${BOT}O";
        }
        if ($http_user_agent ~* (Chrome/1[012]\.)) {
            set $BOT "${BOT}O";
        }
        if ($http_user_agent ~* (Firefox/3)) {
            set $BOT "${BOT}O";
        }
        if ($http_user_agent ~* (MSIE)) {
            set $BOT "${BOT}O";
        }

        if ($BOT = "CO") {
            return 503;
        }

you always return 503, not 403, because a 403 says "fuck off, you've been blocked" while a 503 just looks like yet another server they've flattened, so they don't adapt.

I give this trick at least another week.

[–] Hirom@beehaw.org 6 points 1 week ago* (last edited 5 days ago) (1 children)

In my experience with bots, a portion of them obey robots.txt, but it's tricky to find the user agent string that some bots react to.

So I recommend having a robots.txt that not only targets specific bots, but also tells all bots to avoid specific paths/queries.

Example for dokuwiki

User-agent: *
Noindex: /lib/
Disallow: /_export/
Disallow: /user/
Disallow: /*?do=
Disallow: /*&do=
Disallow: /*?rev=
Disallow: /*&rev=
[–] Irelephant@lemm.ee 4 points 1 week ago (1 children)

Would it be possible to detect gptbot (or similar) from their user agent, and serve them different data?

Can they detect that?

[–] froztbyte@awful.systems 10 points 1 week ago* (last edited 1 week ago) (2 children)

yes, you can match on user agent and then conditionally serve them other stuff (most webservers are fine with this; rough sketch below). nepenthes and iocaine are the current preferred/recommended tools for serving them bot mazes

the thing is that the crawlers will also lie (openai definitely doesn't publish all its own source IPs, I've verified this myself), and will attempt a number of workarounds (like using residential proxies too)
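the sketch (the UA list is nowhere near complete, and the upstream address is invented; point it at wherever your iocaine/nepenthes instance actually listens):

        # flag self-identified AI crawlers by user agent
        map $http_user_agent $is_ai_bot {
            default        0;
            ~*GPTBot       1;
            ~*ClaudeBot    1;
            ~*CCBot        1;
            ~*Bytespider   1;
        }

        server {
            listen 80;
            server_name example.org;

            location / {
                # flagged crawlers get proxied into the maze
                # (assumes a local iocaine/nepenthes instance on 127.0.0.1:42069)
                if ($is_ai_bot) {
                    proxy_pass http://127.0.0.1:42069;
                }
                # everyone else gets the real site
                root /var/www/example.org;
            }
        }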

[–] Hirom@beehaw.org 4 points 1 week ago* (last edited 1 week ago)

Generating plausible-looking gibberish requires resources. Giving any kind of response to these bots is a waste of resources, even if it's gibberish.

My current approach is to have a robots.txt for the bots that honor it, and to drop all traffic for 24 hours from IPs used by bots that ignore robots.txt or otherwise misbehave.
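One way to catch the ones that ignore robots.txt is a trap path: list a path in robots.txt that no human or honest bot will ever request, and log whoever asks for it anyway. A rough nginx sketch (the path and log names are made up, and the actual 24-hour drop is left to whatever ban tool reads that log, e.g. fail2ban or a cron script):

        # /robots-trap/ is listed as "Disallow: /robots-trap/" in robots.txt,
        # so any client requesting it is ignoring robots.txt by definition
        log_format bottrap '$remote_addr $time_iso8601 "$request" "$http_user_agent"';

        server {
            listen 80;
            server_name example.org;

            location /robots-trap/ {
                access_log /var/log/nginx/bottrap.log bottrap;
                return 403;
            }
        }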

[–] Irelephant@lemm.ee 4 points 1 week ago

Can they detect that they're being served different content though?

[–] Soyweiser@awful.systems 3 points 1 week ago (1 children)

Re the blocking of fake user agents, what people could try is to see if there are things older user agents do (or do wrong) which these bots do not. I heard of some companies doing that. (Long ago I also heard of somebody using that to catch MMO bots in a specific game: there was a packet that, if the server sent it to a legit client, crashed the client, while a bot carried on fine.) I'd assume the specifics are treated as secret just because you don't want the scrapers to find out.

[–] YourNetworkIsHaunted@awful.systems 2 points 2 days ago (1 children)

You could probably do something by getting into the weeds of browser updates, at least for web traffic. Like, if they're showing themselves as an older version of Chrome, send a badly formatted cookie to crash it? Redirect to /%%30%30?
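For the redirect variant, a hedged nginx sketch (untested against real crawlers, and only meaningful if the claimed version is old enough to predate the %%30%30 fix):

        # if the client claims to be a genuinely ancient Chrome, bounce it to the
        # old %%30%30 crash URL: a real browser that old chokes on it, while a
        # scraper just burns a request on a page that doesn't exist
        if ($http_user_agent ~* "Chrome/([2-9]|[1-3][0-9])\.") {
            return 302 /%%30%30;
        }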

[–] Soyweiser@awful.systems 1 points 2 days ago

Yes, I heard there is some javascript that various older versions of chrome/firefox don't execute properly, for example. So you can use that to determine which version they really are (as long as nobody shares that javascript with the public; it might not even be javascript, I honestly know nothing about it, just heard of it).