this post was submitted on 25 Jun 2024
111 points (100.0% liked)

technology

On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020


The Enshittification continues. Slightly better than Google because you can at least turn it off, but still on by default! Turn that shit off if you use DDG!

top 38 comments
[–] WIIHAPPYFEW@hexbear.net 59 points 1 year ago (3 children)

NO ONE FUCKING ASKED FOR ANY OF THIS SHIT

[–] sovietknuckles@hexbear.net 34 points 1 year ago

It was never for you, the user. Like any advertising company, DuckDuckGo wants to reduce the cost of predicting what products its users will buy, which, at this point, means harvesting your thoughts and asking an AI model what you will buy, even if you've already left.

[–] hexaflexagonbear@hexbear.net 21 points 1 year ago

TAKE MY SHIT JOB IF YOU WANT JUST DON'T MAKE EVERYTHING ELSE WORSE AS WELL

[–] Frank@hexbear.net 12 points 1 year ago

That rate of profit do be declining. I forget what pod was talking about it, but they were saying that tech execs think this will be what returns them to the good old days of high profits.

[–] Xx_Aru_xX@hexbear.net 46 points 1 year ago

Yeah, it's already inaccurate

[–] Barabas@hexbear.net 36 points 1 year ago

It is very cool that we are not only using a lot more energy to fuel this shit, but also overloading entire energy networks to do it.

Look forward to rolling brownouts in order to keep faulty search results and terrible images coming.

[–] Thordros@hexbear.net 23 points 1 year ago (1 children)

Unironically, I've been getting better search results from Yandex since this AI dogshit started picking up steam. I'm contemplating switching over full time.

[–] What_Religion_R_They@hexbear.net 14 points 1 year ago (1 children)

It's good for finding shit that does not appear on Google at all, but it also finds some super weird shit. To be fair, Google also shows me weird religious websites and State Department propaganda journal articles seemingly at random on unrelated results.

[–] AssortedBiscuits@hexbear.net 9 points 1 year ago (1 children)

I see a lot more crank shit with Yandex while DDG is almost entirely Western MSM.

[–] jackmarxist@hexbear.net 10 points 1 year ago

DDG is censored lol. The owner literally said that he'll censor Russian "Disinformation" and promote western shit.

[–] radiofreeval@hexbear.net 23 points 1 year ago (1 children)

It's great too

I switched to searx and it's actually perfect

[–] EcoMaowist@hexbear.net 4 points 1 year ago (1 children)

How do you make sure that results actually appear? I'll be using a public instance with certain engines, and then those engines break and I either have to switch instances or engines.

[–] radiofreeval@hexbear.net 2 points 1 year ago

Searx.work works on my machine. I wish I could help you, but I've never had that issue. You could self-host and change default instances if you really want to be sure.
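
A minimal sketch of that "switch instances when one breaks" idea, assuming Python and the requests library; the instance URLs below are only examples (check searx.space for currently live public instances), and the "working" check is deliberately crude:

```python
# Try a list of public SearXNG instances in order and use the first one
# whose /search endpoint still answers. Instance URLs are illustrative.
import requests

INSTANCES = [
    "https://searx.work",  # mentioned upthread
    "https://searx.be",    # example; swap in instances you actually use
]

def first_working_instance(query: str, timeout: float = 5.0) -> str | None:
    """Return the first instance that answers a search query, or None."""
    for base in INSTANCES:
        try:
            resp = requests.get(
                f"{base}/search",
                params={"q": query},
                timeout=timeout,
                headers={"User-Agent": "instance-health-check"},
            )
            # Crude check: a 200 with a non-trivial body counts as working.
            if resp.status_code == 200 and len(resp.text) > 1000:
                return base
        except requests.RequestException:
            continue  # instance down or unreachable; try the next one
    return None

if __name__ == "__main__":
    working = first_working_instance("soy sauce vs tamari")
    print(working or "no instance responded")
```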

[–] RiotDoll@hexbear.net 18 points 1 year ago (2 children)

What kind of violence is theoretically possible against ai infrastructure right now, asking for myself

[–] ProfessorOwl_PhD@hexbear.net 12 points 1 year ago (2 children)

Find a random data centre, break in, and start hitting servers with a hammer. Sooner or later you're bound to knock out an ai one.

[–] drinkinglakewater@hexbear.net 11 points 1 year ago* (last edited 1 year ago)

Randomly hitting servers is too inefficient. The easiest way is to find the loudest server and go ham on that, because apparently the Nvidia AI pod things are unbelievably loud.

this is the most realistic program i've yet seen on this site

[–] UmbraVivi@hexbear.net 7 points 1 year ago

Another useless fucking chatbot

I use Startpage. It's been pretty nice; it feels like old Google.

[–] supafuzz@hexbear.net 5 points 1 year ago (1 children)

I'm probably going to wind up paying for Kagi soon. One more monthly tax to make the internet halfway usable again

I did some digging, and it seems that Kagi does have AI features; however, their policy is that AI should always be opt-in every single time, only activated when you choose. If you want to read the techbro blog, it's explained a bit more here: https://blog.kagi.com/kagi-ai-search

Probably best to pay monthly instead of yearly in case they do something you don't like :yea:

[–] JCreazy@midwest.social 4 points 1 year ago (1 children)

The AI chat is helpful though

[–] dat_math@hexbear.net 16 points 1 year ago (1 children)

please demonstrate a case where this has been useful to you

[–] JCreazy@midwest.social 5 points 1 year ago* (last edited 1 year ago) (5 children)

I asked it the difference between soy sauce and tamari and it told me.

[–] Black_Mald_Futures@hexbear.net 17 points 1 year ago (1 children)

literally just google "wikipedia tamari"

[–] blobjim@hexbear.net 11 points 1 year ago

and essentially all it's doing is plagiarizing a dozen other answers from various websites.

[–] dat_math@hexbear.net 17 points 1 year ago* (last edited 1 year ago)

oooh tamari

so like, could you have answered that question without spinning up a 200 W GPU somewhere to do the LLM "inference"?
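
For scale, a rough back-of-envelope on that 200 W figure (the per-answer inference time is an assumption for illustration, not a measurement):

```python
# Back-of-envelope energy cost of one LLM answer, using the 200 W GPU
# figure above. The 3-second inference time is an assumed value.
gpu_power_watts = 200    # from the comment above
inference_seconds = 3    # assumed duration of generating one answer
energy_joules = gpu_power_watts * inference_seconds   # 600 J
energy_watt_hours = energy_joules / 3600              # ~0.17 Wh
print(f"{energy_joules} J per answer (~{energy_watt_hours:.2f} Wh)")
```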

[–] dat_math@hexbear.net 10 points 1 year ago (1 children)

tamarind the fruit or tamarin the genus?

[–] Findom_DeLuise@hexbear.net 9 points 1 year ago

horror soypoint-2

is-this Are these the same?

[–] booty@hexbear.net 2 points 1 year ago (1 children)

and what part of that required an AI?

[–] JCreazy@midwest.social 1 points 1 year ago (1 children)

I never said it did. It was just faster than searching through multiple search results and reading through multiple paragraphs.

[–] dat_math@hexbear.net 1 points 1 year ago (1 children)

How did you know the answer was correct?

[–] JCreazy@midwest.social 1 points 1 year ago* (last edited 1 year ago) (1 children)

How do you know any information is correct?

[–] dat_math@hexbear.net 1 points 1 year ago (1 children)

Do you really not see why I asked my rhetorical question or do you just want to bicker?

[–] JCreazy@midwest.social 0 points 1 year ago (1 children)

I wasn't bickering. You're the one trying to argue. It sounds like you're implying that information from AI is inherently incorrect, which simply isn't true.

[–] dat_math@hexbear.net 1 points 1 year ago* (last edited 1 year ago)

First, at the risk of being a pedant, bickering and arguing are distinct activities. Second, I didn't imply that LLMs' results are inherently incorrect. However, it is undeniable that they sometimes make shit up. Thus, without other information from a more trustworthy source, an LLM's outputs can't be trusted.