FaceDeer

joined 1 year ago
[–] FaceDeer@fedia.io 4 points 1 hour ago

I was cutting a cardboard box up with a box cutter, holding the box steady with my off hand while pushing the blade downward through the cardboard. I realized that my hand was below the blade and therefore there was a risk I'd cut myself if the blade suddenly moved more quickly through the cardboard than anticipated. Safety first! So I stopped cutting, leaving the blade in the cardboard, and lifted my hand to grip the cardboard above where I was cutting instead.

Slammed my thumb right into the blade as I moved my hand, peeling a nasty slice of skin off. Took a lot of stitches to tack it back in place, still have a scar from that.

[–] FaceDeer@fedia.io 3 points 1 hour ago (1 children)

He's too big a chicken for that.

[–] FaceDeer@fedia.io 1 points 3 hours ago (1 children)

I just won't do you the favor of posting any of them

Why comment in the first place if you're unwilling to back it up?

This is a public forum, you're not just answering me here.

[–] FaceDeer@fedia.io 1 points 3 hours ago (3 children)

...which you can't or won't do, apparently.

[–] FaceDeer@fedia.io 2 points 3 hours ago

However, a human would also need to verify that the generated solution actually solves a problem.

That's already an issue with human-generated answers to problems. :)

"Verification" could be done by an AI agent too, though, as I described above. Depends on the sort of problem. A programming solution can be tested in a simple sandbox, a medical solution would require a bit more effort to validate (whether by human or by AI).
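To illustrate what I mean by testing a programming solution in a simple sandbox, here's a minimal sketch. Nothing here is a real sandboxing product; running the candidate in a subprocess with a timeout is just the simplest stand-in (a real setup would also restrict network and filesystem access):

```python
import os
import subprocess
import sys
import tempfile

def verify_candidate(solution_code: str, test_code: str, timeout: int = 5) -> bool:
    """Run a generated solution plus its tests in a separate process."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(solution_code + "\n" + test_code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=timeout
        )
        return result.returncode == 0  # tests passed iff the process exited cleanly
    except subprocess.TimeoutExpired:
        return False  # a hang or infinite loop counts as a failure
    finally:
        os.unlink(path)

# A toy "LLM-generated" answer and a check for it:
solution = "def add(a, b):\n    return a + b"
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"
print(verify_candidate(solution, tests))  # prints True
```

The point is only that for some problem domains the verification step is itself cheap to automate; a medical claim obviously has no equivalent of "run the tests and check the exit code."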

I just don’t think current LLMs are quite smart enough yet.

Certainly, we're both speculating about future developments here.

[–] FaceDeer@fedia.io 1 points 3 hours ago (5 children)

So I take it you're not going to post those numbers, then.

[–] FaceDeer@fedia.io 1 points 4 hours ago (7 children)

Using anecdotal evidence is a cheap trick and I believe you know it. It's not evidence at all. Numbers show that I'm right and you're wrong in this case.

So... got any?

"Think of the children" is used as a thought stopper by the political right to push their laws against humanity through.

I refer you back to your earlier comment analogizing LLMs to "csam".

[–] FaceDeer@fedia.io 2 points 4 hours ago (2 children)

I did suggest a possible solution to this - the AI search agent itself could post a question in a forum somewhere if it has been unable to find an answer.

This isn't a feature of mainstream AI search agents yet, but I've been following development and this sort of thing is already being done by hobbyists. Agentic AI workflows can be a lot more sophisticated than a simple "do a search, summarize results." An AI agent could even try to solve the problem itself - reading source code, running tests in a sandbox, and so forth. If it figures out a solution that it didn't find online, maybe it could even post answers to some of those unanswered forum questions. Assuming the forum doesn't ban AI, of course.
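To sketch the kind of workflow I'm describing - search, verify candidates in a sandbox, and fall back to asking humans - here's a toy loop. Every helper passed in (`search`, `try_in_sandbox`, `post_to_forum`) is a hypothetical placeholder, not a real API; they stand in for whatever integrations a given agent framework provides:

```python
def answer_question(question, search, try_in_sandbox, post_to_forum):
    """Return a verified answer if one can be found, else ask a forum.

    All three callables are hypothetical stand-ins:
      search(question)            -> list of candidate answers
      try_in_sandbox(q, answer)   -> True if the answer checks out
      post_to_forum(question)     -> hand the question back to humans
    """
    for candidate in search(question):
        if try_in_sandbox(question, candidate):  # verify before trusting it
            return candidate
    # Nothing online checked out: post the question for humans to answer.
    post_to_forum(question)
    return None

# Toy usage with stub implementations:
posted = []
answer = answer_question(
    "how do I frobnicate?",
    search=lambda q: ["bad answer", "good answer"],
    try_in_sandbox=lambda q, c: c == "good answer",
    post_to_forum=posted.append,
)
print(answer)  # prints "good answer"; nothing posted, since verification succeeded
```

The structure is the interesting part: the agent only escalates to a forum when its own search-and-verify loop comes up empty, so human question-askers don't disappear, they just get the harder questions.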

Basically, I think this is a case of extrapolating problems without also extrapolating the possibilities of solutions. Like the old Malthusian scenario, where Malthus projected population growth without accounting for the fact that as demand for food rose, new technologies would also be developed to make food production more efficient. We won't get to a situation where most people are using LLMs for answers without LLMs being good at giving answers.

[–] FaceDeer@fedia.io 1 points 4 hours ago (9 children)

Thanks for showing that you have no actual arguments.

You did it first by jumping to "think of the children!" and analogizing running a program to cannibalism.

They have no real benefit.

No need to ban them, then. Nobody will use them if this is true.

They have insane energy requirements, insane hardware requirements.

I run them locally on my computer, I know this is factually incorrect through direct experience.

Personal experience aside, if running an LLM query really required "insane" energy and hardware expenditures then why are companies like Google so eager to do it for free? These are public companies whose mandates are to generate a profit. Whatever they're getting out of running those LLM queries must be worth the cost of running them.

We are working on saving our planet

I see you've switched from "think of the children!" to "think of the environment!"

[–] FaceDeer@fedia.io 1 points 4 hours ago

Depends which 90%.

It's ironic that this thread is on the Fediverse, which I'm sure has much less than 10% of the population of Reddit or Facebook or such. Is the Fediverse "dead"?

This is one of the biggest problems with AI. If it becomes the easiest way to get good answers for most things

If it's the easiest way to get good answers for most things, that doesn't seem like a problem to me. If it isn't the easiest way to get good answers, then why are people switching to it en masse anyway in this scenario?

[–] FaceDeer@fedia.io 3 points 4 hours ago (6 children)

People will use whatever method of finding answers that works best for them.

Stuck, you contact tech support, wait weeks for a reply, and the cycle continues

Why didn't you post a question on a public forum in that scenario? Or, in the future, why wouldn't the AI search agent itself post a question? If questions need to be asked then there's nothing stopping them from still being asked.
