pglpm

joined 2 years ago
[–] pglpm@lemmy.ca 13 points 2 years ago (1 children)

Here?: https://ungoogled-software.github.io/about/

Looks like a good project; I didn't know it existed.

[–] pglpm@lemmy.ca 1 points 2 years ago

Thank you. So many people are talking about Fennec, and I had never heard of it!

[–] pglpm@lemmy.ca 4 points 2 years ago (2 children)

Yes, the purpose isn't sabotage.

[–] pglpm@lemmy.ca 64 points 2 years ago* (last edited 2 years ago) (3 children)

Title:

ChatGPT broke the Turing test

Content:

Other researchers agree that GPT-4 and other LLMs would probably now pass the popular conception of the Turing test. [...]

researchers [...] reported that more than 1.5 million people had played their online game based on the Turing test. Players were assigned to chat for two minutes, either to another player or to an LLM-powered bot that the researchers had prompted to behave like a person. The players correctly identified bots just 60% of the time

A complete contradiction. Trash Nature – it's become nothing but an extremely expensive science-gossip magazine.

PS: The Turing test involves comparing a bot with a human (not knowing which is which). So if more and more bots pass the test, this can be the result either of an increase in the bots' Artificial Intelligence, or of an increase in humans' Natural Stupidity.

[–] pglpm@lemmy.ca 95 points 2 years ago* (last edited 2 years ago) (9 children)

There's an ongoing protest against this on GitHub, symbolically modifying the code that would implement it in Chromium. See this Lemmy post by the person who had the idea, and this GitHub commit. Feel free to "Review changes" --> "Approve". Around 300 people have joined so far.

[–] pglpm@lemmy.ca 23 points 2 years ago

Yeah, that's bullsh*t from the author of the article.

[–] pglpm@lemmy.ca 5 points 2 years ago* (last edited 2 years ago) (1 children)

This is so cool! Not just the font but the whole process and study. Please feel free to cross-post to Typography & fonts.

[–] pglpm@lemmy.ca 2 points 2 years ago

Thank you for the great help – I hope it'll be useful to others too :)

[–] pglpm@lemmy.ca 3 points 2 years ago

Thank you for the info! As I'm completely new to Matrix, I was indeed wondering. The spam problem will probably increase as it becomes more popular...

[–] pglpm@lemmy.ca 7 points 2 years ago (2 children)

Thank you! I checked it out. From what I understand, I should use a link like https://matrix.to/#/@[user]:[server.zzz]. From there, people are redirected to their own Matrix app, if they have one.
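For instance, with a purely hypothetical user and homeserver, the link would look like `https://matrix.to/#/@alice:example.org`.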

[–] pglpm@lemmy.ca 1 points 2 years ago (3 children)

Thank you, and thanks also to @recreationalplacebos@midwest.social! I had no idea about this possibility or these Firefox forks. It looks a little complicated, but I'll try it. From what I gather, Firefox plans to bring back full extension support in the future?

[–] pglpm@lemmy.ca 4 points 2 years ago* (last edited 2 years ago)

I'd like to add one more layer to this great explanation.

In principle, this kind of prediction should be made in two steps:

  1. calculate the conditional probability of the next word (given the data), for all possible candidate words;

  2. choose one word among these candidates.

The choice in step 2 should be determined, in principle, by two factors: (a) the probability of a candidate, and (b) a cost or gain for making the wrong or right choice if that candidate is chosen. There's a trade-off between these two factors. For example, a candidate might have low probability but also be a safe choice, in the sense that no big problems arise if it turns out to be wrong – so it's the best choice. Or a candidate might have high probability but terrible consequences if it were the wrong choice – so it's better to discard it in favour of something less likely but also less risky.

This is all common sense! But it's also the foundation of the theory behind these algorithms (Decision Theory).
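To make this concrete, here's a minimal sketch of steps 1 and 2 done "properly" (in Python, with invented words, probabilities, and gain figures, purely for illustration): for each candidate we weigh its probability against the gains and costs of choosing it, and pick the candidate with the highest expected gain – which need not be the most probable one.

```python
# Toy illustration of decision-theoretic word choice (all numbers invented).

# Step 1: conditional probabilities of each candidate next word, given the data.
probabilities = {"surgery": 0.50, "treatment": 0.35, "rest": 0.15}

# Step 2: gains for choosing each candidate if it's right, costs if it's wrong.
# "surgery" is risky: choosing it wrongly is very costly.
gain_if_right = {"surgery": 1.0, "treatment": 1.0, "rest": 1.0}
gain_if_wrong = {"surgery": -10.0, "treatment": -0.5, "rest": -0.1}

def expected_gain(word):
    p = probabilities[word]
    return p * gain_if_right[word] + (1 - p) * gain_if_wrong[word]

# Choose the candidate with the highest expected gain, not the highest probability.
best = max(probabilities, key=expected_gain)
print({w: round(expected_gain(w), 3) for w in probabilities})
# {'surgery': -4.5, 'treatment': 0.025, 'rest': 0.065}
print("chosen word:", best)  # "rest": the least probable, but safest overall
```

Here the least probable word wins, because the more probable ones are too risky – exactly the trade-off described above.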

The proper calculation of steps 1 and 2 together, according to the fundamental rules (probability calculus & decision theory), would be enormously expensive. So expensive that something like ChatGPT would be impossible: we'd have to wait for centuries (just a guess: it could be decades or millennia) to train it, and then to get an answer. This is why Large Language Models make some drastic approximations, which obviously can have serious drawbacks:

  • they use extremely simplified cost/gain figures – in fact, from what I gather, the researchers don't have any clear idea of what they are;

  • they directly combine the simplified cost/gain figures with probabilities;

  • they search for the candidate with the highest gain+probability combination, but stop as soon as they find a relatively high one – at the risk of missing the actual maximum.
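To contrast with the sketch above, here's a rough caricature of what these shortcuts amount to (again with invented numbers – this is not any actual decoder implementation): fold a crude score directly into the probabilities and take the first candidate that looks good enough, instead of computing full expected gains.

```python
# Rough caricature of the shortcuts above (invented numbers, not a real decoder).
probabilities = {"surgery": 0.50, "treatment": 0.35, "rest": 0.15}
crude_score = {"surgery": 0.9, "treatment": 1.0, "rest": 1.0}  # stand-in for cost/gain

def shortcut_pick(threshold=0.4):
    # Combine probability and crude score directly, and stop at the first
    # candidate that clears the threshold -- possibly missing the true best one.
    for word in sorted(probabilities, key=probabilities.get, reverse=True):
        if probabilities[word] * crude_score[word] >= threshold:
            return word
    # Fallback: the highest combined value, if nothing clears the threshold.
    return max(probabilities, key=lambda w: probabilities[w] * crude_score[w])

print("chosen word:", shortcut_pick())  # "surgery"
```

The shortcut picks the risky word that the full decision-theoretic calculation above would have discarded, simply because it stops at the first "good enough" candidate.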

 

(Sorry if this comment has a lecturing tone – it's not meant to. But I think the theory behind these algorithms can be explained in very common-sense terms, without too much technobabble, as @TheChurn's comment showed.)
