fullsquare

joined 3 months ago
[–] fullsquare@awful.systems 14 points 1 week ago (1 children)

but that's not disruptive and works and makes altman zero money

[–] fullsquare@awful.systems 9 points 1 week ago (3 children)

I think that bozo is implying the bowling place also runs a chatbot of its own

[–] fullsquare@awful.systems 3 points 1 week ago

why doesn't anthropic, the bigger startup, simply eat anysphere?

[–] fullsquare@awful.systems 13 points 1 week ago

the ml in lemmy.ml stands for marxism-leninism

[–] fullsquare@awful.systems 3 points 2 weeks ago

whyyyyy it's a real site

[–] fullsquare@awful.systems 25 points 2 weeks ago

there shouldn't be billion dollar startups

[–] fullsquare@awful.systems 2 points 2 weeks ago (1 children)

if someone is so bad at a subject that chatgpt offers actual help, then maybe that person shouldn't write an article on that subject in the first place. the only language chatgpt speaks is bland nonconfrontational corporate sludge, i'm not sure how it helps

[–] fullsquare@awful.systems 4 points 2 weeks ago (3 children)

in one of these preprints there were traces of the prompt used for writing the paper itself too

[–] fullsquare@awful.systems 7 points 2 weeks ago (5 children)

maybe it's to get through llm pre-screening and allow the paper to be seen by human eyeballs

[–] fullsquare@awful.systems 4 points 2 weeks ago* (last edited 2 weeks ago)

maybe there's just enough text written in that psychopathic techbro style with similar disregard for normal ethics that llms latched onto it. this is like what i guess happened with that "explain step by step" trick - instead of learning from pairs of questions and answers like on quora, the lying box learns from sets of question -> steps -> answer like on chegg or stack or somewhere else where you can expect answers to be more correct

it'd be more of a case of getting awful output from awful input

[–] fullsquare@awful.systems 4 points 2 weeks ago* (last edited 2 weeks ago)

nah, what happened is that they were non-psychotic before contact with the chatbot and usually weren't even considered at risk. a chatbot trained on the entire internet will also ingest all the schizo content, the timecubes and dr bronner shampoo labels of the world. it learned to respond in the same style, so when a human starts talking conspiratorial nonsense it'll throw more in, being a useless sycophant all the way. some people trust these lying idiot boxes; the net result is somebody caught in a seamless infobubble containing only one person and increasing amounts of spiritualist, conspiratorial, or whatever content the person prefers. this sounds awfully like qanon made for an audience of one, and by now it's known that the original was able to maul seemingly normal people pretty badly, except this time they can get there almost by accident; getting hooked into qanon accidentally would be much harder.

[–] fullsquare@awful.systems 3 points 2 weeks ago* (last edited 2 weeks ago)

No. Barrels of API (active pharmaceutical ingredient) are mostly shipped from India or China, then formulated into pills or whatever; this goes especially for generic medicines. There is some American manufacture of APIs, but these tend to be on the more expensive side (biologics or small-molecule drugs under patent). Inputs for these APIs also tend to be made in India or China.
