this post was submitted on 13 Jan 2026
281 points (97.6% liked)
Technology
Doesn't work, but if it makes people feel better, I suppose they can waste their resources doing this.
Modern LLMs aren't trained on just whatever raw data can be scraped off the web any more. They're trained with synthetic data that's prepared by other LLMs and carefully crafted and curated. Folks are still thinking ChatGPT 3 is state of the art here.
Let's say I believe you. If that's the case, why are AI companies still scraping everything?
Raw materials to inform the LLMs constructing the synthetic data, most likely. If you want it to be up to date on the news, you need to give it that news.
The point is not that the scraping doesn't happen, it's that the data is already being highly processed and filtered before it gets to the LLM training step. There's a ton of "poison" in that data naturally already. Early LLMs like GPT-3 just swallowed the poison and muddled on, but researchers have learned how much better LLMs can be when trained on cleaner data and so they already take steps to clean it up.
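To make that concrete, the filtering step might look something like this toy sketch; the heuristics and thresholds here are made-up illustrations, not any lab's actual pipeline:

```python
# Toy sketch of pre-training data cleanup: cheap heuristics plus exact
# deduplication. Real pipelines are far more elaborate, but a lot of junk
# (and a lot of "poison") already falls out at this stage.
import hashlib

def looks_usable(text: str) -> bool:
    """Cheap quality heuristics; thresholds are illustrative assumptions."""
    if not (100 <= len(text) <= 20_000):            # too short or too long
        return False
    letters = sum(c.isalpha() for c in text)
    if letters / len(text) < 0.6:                   # mostly markup, symbols, noise
        return False
    words = text.split()
    if len(set(words)) / max(len(words), 1) < 0.3:  # heavy repetition
        return False
    return True

def dedupe_and_filter(documents):
    """Drop junky and exactly duplicated documents before they reach training."""
    seen = set()
    for doc in documents:
        digest = hashlib.sha256(doc.strip().lower().encode()).hexdigest()
        if digest in seen or not looks_usable(doc):
            continue
        seen.add(digest)
        yield doc

if __name__ == "__main__":
    good = ("Web-scale corpora are usually filtered for length, language, "
            "duplication and obvious spam long before any of the text reaches "
            "an actual training run, which quietly removes a lot of junk.")
    raw = ["tiny", "asdf!!!###" * 50, good, good]   # short, junk, real, duplicate
    print(len(list(dedupe_and_filter(raw))))        # -> 1
```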
AI devalues datasets when it refines them; a lot of resources are aimed at solving the degradation that occurs when AI trains on AI. Gradients become poor and quality follows.
You're thinking of "model decay", I take it? That's not really a thing in practice.
It is not only a theory; the models really suck at accuracy when they've only rephrased something that was erroneous to begin with. It's very understandable to me.
From what I've heard, the influx of AI data is one of the reasons actual human data is becoming increasingly sought after. AI training AI has the potential to become a sort of digital inbreeding that suffers in areas like originality and other ineffable human qualities that AI still hasn't quite mastered.
I've also heard that this particular approach to poisoning AI is newer and thought to be quite effective, though I can't personally speak to its efficacy.
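For what it's worth, the "digital inbreeding" effect is easy to illustrate with a toy model: fit something to data, sample from the fit, then fit the next generation on those samples. The "model" below is just a word-frequency table, nothing like a real LLM, but it shows how diversity only ever shrinks once generations train on each other's output:

```python
# Toy illustration of recursive training on model output. Once a rare word
# fails to be sampled in some generation it can never come back, so the
# number of distinct words is non-increasing. Purely illustrative -- not a
# claim about how any real LLM behaves.
import random
from collections import Counter

random.seed(42)
vocab = [f"word{i}" for i in range(200)]
corpus = [random.choice(vocab) for _ in range(1000)]   # generation 0: "human" data

for generation in range(1, 11):
    freqs = Counter(corpus)                            # "train" on the current corpus
    words, weights = zip(*freqs.items())
    corpus = random.choices(words, weights=weights, k=1000)  # model output becomes the next dataset
    print(f"gen {generation}: {len(set(corpus))} distinct words survive")
```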
Faults in replication? That can become cancer for humans. AI as well I guess.
Do you have any basis for this assumption, FaceDeer?
Based on your pro-AI-leaning comments in this thread, I don't think people should accept defeatist rhetoric at face value.
A basic Google search for "synthetic data llm training" will give you lots of hits describing how the process goes these days.
Take this as "defeatist" if you wish, as I said it doesn't really matter. In the early days of LLMs when ChatGPT first came out the strategy for training these things was to just dump as much raw data onto them as possible and hope quantity allowed the LLM to figure something out from it, but since then it's been learned that quality is better than quantity and so training data is far more carefully curated these days. Not because there's "poison" in it, just because it results in better LLMs. Filtering out poison will happen as a side effect.
It's like trying to contaminate a city's water supply by peeing in the river upstream of the water treatment plant drawing from it. The water treatment plant is already dealing with all sorts of contaminants anyway.
That may be an argument if only large companies existed and they only trained foundation models.
Scraped data is most often used for fine-tuning models for specific tasks. For example, mimicking people on social media to push an ad/political agenda. Using a foundational model that speaks like it was trained on a textbook doesn't work for synthesizing social media comments.
In order to sound like a Lemmy user, you need to train on data that contains the idioms, memes and conversational styles used in the Lemmy community. That can't be created from the output of other models, it has to come from scraping.
Poisoning the data going to the scrapers will either kill the model during training or force everyone to pre-process their data, which increases the costs and expertise required to attempt such things.
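As a purely hypothetical illustration of that kind of poisoning (actual tools use more sophisticated methods; the substitution scheme here is just an assumption for the example), the idea is to keep the text superficially plausible so cheap filters pass it, while corrupting the content so it teaches the wrong things:

```python
# Hypothetical text-poisoning transform: swap some longer words for decoys so
# the text still "looks" like normal prose to simple quality filters but no
# longer means what it said. Not the method of any particular project.
import random

DECOYS = ["quantum", "Belgium", "sourdough", "firmware", "jazz", "kelp"]

def poison(text: str, rate: float = 0.15, seed=None) -> str:
    """Swap a fraction of the longer words for decoys, keeping punctuation and shape."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        core = word.strip(".,!?")
        if len(core) > 5 and rng.random() < rate:
            word = word.replace(core, rng.choice(DECOYS))
        out.append(word)
    return " ".join(out)

if __name__ == "__main__":
    comment = "Poisoning the data going to the scrapers raises their costs considerably."
    print(poison(comment, rate=0.5, seed=1))
```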
Are you proposing flooding the Fediverse with fake bot comments in order to prevent the Fediverse from being flooded with fake bot comments? Or are you thinking more along the lines of that guy who keeps using "Þ" in place of "th"? Making the Fediverse too annoying to use for bot and human alike would be a fairly Pyrrhic victory, I would think.
I am proposing neither of those things.
The way to effectively use this is to detect scraping through established means and, instead of banning them, altering the output to feed the target poisoned data instead of/in addition to the real content.
Banning a target gives them information about when they were detected and allows them to alter their profile to avoid that. If they're never banned then they lose that information and also they now have to deploy additional resources to attempt to detect and remove poisoned data.
Either way, it causes the adversary to spend a lot of resources at very little cost to you.
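A rough sketch of what "feed them poison instead of banning them" could look like on the serving side, assuming a hypothetical Flask app; the detection heuristics are the hard part and are only stubbed out here:

```python
# The same route serves real content to ordinary visitors and a poisoned
# variant to clients that have tripped whatever scraper detection is in place.
# SUSPECT_IPS and the user-agent check are placeholder assumptions.
from flask import Flask, request

app = Flask(__name__)

SUSPECT_IPS = {"203.0.113.7"}          # placeholder: filled in by your detector
REAL_COMMENT = "Here is the actual comment text."

def is_suspected_scraper(req) -> bool:
    """Placeholder heuristic -- real detection (rates, fingerprints, honeypot hits)
    is a separate problem."""
    ua = (req.user_agent.string or "").lower()
    return req.remote_addr in SUSPECT_IPS or "scrapy" in ua

def poisoned(text: str) -> str:
    # Any corruption scheme works; the point is the client is never told it was
    # detected, it just silently ingests bad data.
    return text.replace("actual", "entirely fabricated")

@app.route("/comments/<int:comment_id>")
def comment(comment_id: int):
    text = REAL_COMMENT                # in a real app: load comment_id from the database
    return poisoned(text) if is_suspected_scraper(request) else text
```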
I have no idea what "established means" would be. In the particular case of the Fediverse it seems impossible, you can just set up your own instance specifically intended for harvesting comments and use that. The Fediverse is designed specifically to publish its data for others to use in an open manner.
Sure, and if the AI companies want to configure their crawlers to actually use APIs and ActivityPub to efficiently scrape that data, great. The problem is that there have been crawlers that do things very inefficiently (whether by malice, ignorance, or misconfiguration) and scrape the HTML of sites repeatedly, driving up hosting costs and effectively DoSing some of the sites.
If you put honeypot URLs in the mix, keep out polite bots with robots.txt, and keep out humans by hiding those links, you can serve poisoned responses only on the URLs that nobody should be visiting and not worry too much about collateral damage to legitimate visitors.
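Sketched out, that honeypot setup might look like the following; the paths, markup, and junk content are illustrative assumptions, again using Flask only for the sake of the example:

```python
# Polite crawlers are told to stay out of /trap/ via robots.txt, humans never
# see the hidden link, so anything that still requests /trap/... has ignored
# both and can be flagged and served junk with little collateral damage.
from flask import Flask, Response, request

app = Flask(__name__)
flagged_ips = set()

@app.route("/robots.txt")
def robots():
    # Well-behaved bots read this and never touch the trap paths.
    return Response("User-agent: *\nDisallow: /trap/\n", mimetype="text/plain")

@app.route("/")
def index():
    # The trap link is invisible to humans but sits in the HTML for naive scrapers.
    return ('<p>Normal page content.</p>'
            '<a href="/trap/page-1" style="display:none" aria-hidden="true">.</a>')

@app.route("/trap/<path:anything>")
def trap(anything):
    flagged_ips.add(request.remote_addr)   # remember this client for poisoning elsewhere
    # Serve plausible-looking junk so the crawler keeps crawling and keeps ingesting it.
    return "<p>" + "kelp firmware sourdough jazz " * 100 + "</p>"
```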
I have a sneaking suspicion that the vast majority of the people raging about AIs scraping their data are not raging about it being done inefficiently.
Maybe not, but at least in part because they don't understand what the previous poster said. If the scrapers harvested data more efficiently, by using API calls instead of crawling the whole domain, it would be much less burdensome on the target's server resources, and one would think people would be less annoyed by that than by the same scraping done in a way that hammers their servers.
Their grievances with LLMs and their owners may not be limited to that, but they are certainly likely to include it.