andallthat

joined 2 years ago
[–] andallthat@lemmy.world 13 points 3 days ago* (last edited 3 days ago) (1 children)

I agree. I almost skipped it because of the title, but the article is nuanced and has some very good reflections on topics other than AI. Every technological advance is a tradeoff. The article mentions driving to the grocery store, and how there are advantages to walking that we give up when we always take the car. Are cars in general a stupid and useless technology? No, but we need to be aware of where the tradeoffs are. And ultimately most of these tradeoffs are economic in nature.

By industrializing the production of carpets we might have lost some of our collective ability to produce those hand-made masterpieces of old, but we get to buy ok-looking carpets for cheap.

By industrializing the production of text content, we've let our mastery of language decline, but in exchange we get to read a lot of not-very-good content for free. This pre-dates AI, btw, as standardized test results in schools everywhere show.

The new thing about GenAI, though, is that it upends the promise that technology would do the grueling, boring work for us and free up time for the creative things that give us joy. I feel the roles have reversed: even when I have to write an email or a piece of code, AI does the creative part and I'm the glorified proofreader and corrector.

[–] andallthat@lemmy.world 2 points 3 days ago

Cover letters, meeting notes, some process documentation: the stuff that for some reason "needs" to be done, usually written by people who don't want to write it for people who don't want to read it. That's all perfect for GenAI.

[–] andallthat@lemmy.world 63 points 5 days ago (1 children)

"he's no longer the sensitive man and caring lover that I used to know"

[–] andallthat@lemmy.world 3 points 6 days ago

That would be... horrifyingly effective. Just the thought makes me want to bleach my brain

[–] andallthat@lemmy.world 10 points 1 week ago

In other news: AI is a better human than Duolingo CEO

[–] andallthat@lemmy.world 1 point 1 week ago

You are right. Bunch of incel 19-year-olds... This is probably more about hiding their browser history from their moms

[–] andallthat@lemmy.world 16 points 1 week ago* (last edited 1 week ago)

" Under the mighty gaze of our Beloved Supreme Leader, steel folded and the Great Warship itself bowed. Cower and tremble, enemies of our Powerful State!"

[–] andallthat@lemmy.world 10 points 1 week ago

Easy, we just give AI access to all our files and personal information and it will know our age!

[–] andallthat@lemmy.world 5 points 2 weeks ago (1 children)

Look up stuff where? Some things are verifiable more or less directly: the Moon is not 80% made of cheese, adding glue to pizza is not healthy, the average human hand does not have seven fingers. A "reasoning" model might do better with those than current LLMs.

But for a lot of our knowledge, verifying means "I say X because here are two reputable sources that say X". For that kind of verification, having AI-generated text creep into everything (including peer-reviewed scientific papers, which tend to be considered reputable) blurs the line between truth and "hallucination" for both LLMs and humans.

[–] andallthat@lemmy.world 35 points 2 weeks ago* (last edited 2 weeks ago) (9 children)

Basically, model collapse happens when the training data no longer matches real-world data

I'm more concerned about LLMs collapsing the whole idea of a "real world".

I'm not a machine learning expert, but I do get the basic concept of training a model and then evaluating its output against real data. But the whole thing rests on the idea that you have a model trained on relatively small samples of the real world and a big, clearly distinct "real world" to check the model's performance against.

If LLMs have already ingested basically all the information in the "real world", and their output is so pervasive that you can't easily tell what's true and what's AI-generated slop, then "how do we train our models now" is not my main concern.

As an example, take the judges who found made-up cases in filings because lawyers had used an LLM. What happens if those made-up cases get referenced in several other places, including legal textbooks used in law schools? Don't they become part of the "real world"?

[–] andallthat@lemmy.world 26 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I tried reading the paper. There is a free preprint version on arXiv. This page (from the article linked by OP) also links to the code they used and the data they ended up compressing.

While most of the theory is above my head, the basic intuition is that compression improves if you have some level of "understanding" or higher-level context of the data you are compressing. And LLMs are generally better at doing that than numeric algorithms.

As an example, if you recognize a sequence of letters as the first chapter of the book Moby-Dick, you'll probably transmit that information more efficiently than a compression algorithm. "The first chapter of Moby-Dick"; there, I just did it.

[–] andallthat@lemmy.world 2 points 2 weeks ago* (last edited 2 weeks ago)

I was not blaming your country at all, you're more than doing your part. It's just frustrating.

Thinking of the families of the victims, I hope that knowing they are not forgotten and people are trying to uncover the truth about what happened will at least provide some closure.

 

Most of our financial decisions are already algorithmically driven.

Now, with this vision of a near future where e-commerce runs on AI-generated content, in apps built by AI developers, with AI agents (soon?) doing the buying on their own, money no longer needs a human in the middle.

 

I posted this on Reddit (askeconomics) a while back but got no good replies. I'm copying it here because I don't want to send traffic to Reddit.

What do you think?

I see a big push to bring employees back to the office. I personally don't mind working remotely or in the office, but I think big companies tend to think rationally in terms of cost/benefit, and I haven't yet seen a convincing explanation of why they are so keen to have everyone back.

If remote work were just as productive as in-person work, a remote-only company could use that to be more efficient than its work-in-office competitors, so I assume there's no conclusive evidence that this is the case. But I haven't seen conclusive evidence of the contrary either, and I think employers would have good reason to trumpet any such findings, at least internally to their employees ("we've seen KPI so-and-so drop with everyone working from home" or "project X was severely delayed by lack of in-person coordination" wouldn't make everyone happy to return, but at least it would give a manager a solid argument to bring to their team).

Instead, all I keep hearing is inspirational wish-wash like "we value the power of working together". Which is fine, but why are we valuing it more than the cost of office space?

On the side of employees, I often see arguments like "these companies made a big investment in offices and now they don't want to look stupid by leaving them empty". But all these large companies have spent billions to acquire smaller companies/products and dropped them without a second thought. I can't believe the same companies would now be so sentimentally attached to office buildings if it made any economic sense to close them.
