@AutoTLDR
BTW Satan is a very cool guy, follow him on Twitter: @s8n
And people are seriously considering federating with Threads if it implements ActivityPub. Things have been so crazy recently that I think if Satan existed and started a Lemmy instance, there would probably still be people arguing in good faith for federating with him.
Lol that’s like saying there’s too much porn on /r/gonewild
Yes, their actual argument is excellent, but this remark gives me instant /r/iamverysmart vibes
“Timeo Danaos et dona ferentes.” (“I fear the Greeks, even when they bear gifts.”)
Companies like Meta poison everything they touch. They are a deeply evil, psychopathic organization. They are responsible for causing extremely harmful runaway effects in human society that I’m not even sure are possible to fix. The very reason for Lemmy's recent popularity is that people are fed up with the "if something is free, you aren't the user, you are the product" situation and its consequences (see Reddit vs. /u/spez).
Their intent to federate is a blatantly obvious attempt at an "embrace, extend, extinguish" strategy - I'm surprised anyone seriously considers federating with them. They need users to solve the "chicken and egg" problem, and joining the fediverse would be an easy way for them to populate their service with content. Their motivations are obviously and transparently malicious and self-serving. They don't care about the goals and values of the fediverse at all; all they see is an easy way to gain initial users and content. The moment federation becomes more inconvenient than useful to them, once they have sucked out all the profit they can, they will drop the whole thing like a hot potato, and we will be left in the dust.
I personally like this instance very much, and I've been putting hours and hours of work into building the AUAI community since the day I joined. But I wouldn't hesitate for a second before deleting my account and never looking back if the community here decided to federate with Meta.
EDIT: another explanation of why they want to join the fediverse
The biggest aha-moment with Copilot for me was when I wanted to implement tools for my GPT-based personal assistant. Function calling wasn't yet available in the OpenAI API, and I found that GPT-3.5 was really bad at using tools consistently in a long chat conversation. So I decided to implement a classifier DAG, with either a simple LLM prompt or a regular function in each of its nodes. Something like this:
what is this? (reminder | todo | other)
reminder -> what kind of reminder? (one-time | recurring)
one-time -> return the ISO timestamp and the reminder text in a JSON object like this
recurring -> return the cron expression and the reminder text in a JSON object like this
todo -> what kind of todo operation (add | delete | ...)
...
other -> just respond normally
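For illustration, the JSON objects the two reminder branches return might look something like this (the field names here are my own guesses, not from the original setup):

```javascript
// Hypothetical output shapes for the two reminder branches.
// Field names are assumptions for illustration only.
const oneTimeReminder = {
  timestamp: "2024-07-01T09:00:00Z", // ISO 8601 timestamp
  text: "Call the dentist",
};
const recurringReminder = {
  cron: "0 9 * * 1", // cron expression: every Monday at 09:00
  text: "Weekly review",
};
```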
I wrote an example of using this classifier graph in code, something like this (it's missing a lot of important details):
const decisionTree = new Decision(
  userIntentClassifier, {
    "REMINDER": new Decision(
      reminderClassifier, {
        "ONE_TIME": new Sequence(
          parseNaturalLanguageTime,
          createOneTimeReminder,
          explainAction
        ),
        "RECURRING": new Sequence(
          createRecurringReminder,
          explainAction
        ),
      }
    ),
    "TASK": new Decision(
      taskClassifier, {
        ...
      }
    ),
    "NONE": answerInChat,
  }
);
decisionTree.call(context);
And then I started writing `class Decision`, `class Sequence`, etc., and it implemented the classes perfectly!
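For context, a minimal sketch of what those two classes could look like (this is my reconstruction under assumptions, not the Copilot-generated code; I assume leaf handlers like `answerInChat` are plain async functions and inner nodes expose a `call()` method):

```javascript
// A node is either a plain async function (leaf) or an object
// with a call() method (Decision / Sequence).
function runNode(node, context) {
  return typeof node === "function" ? node(context) : node.call(context);
}

// Decision: run a classifier on the context, then dispatch to the
// branch keyed by the label the classifier returned.
class Decision {
  constructor(classify, branches) {
    this.classify = classify; // async (context) => label string
    this.branches = branches; // { LABEL: node }
  }
  async call(context) {
    const label = await this.classify(context);
    const branch = this.branches[label];
    if (!branch) throw new Error(`No branch for label: ${label}`);
    return runNode(branch, context);
  }
}

// Sequence: run child nodes in order against the shared context,
// returning the result of the last step.
class Sequence {
  constructor(...steps) {
    this.steps = steps;
  }
  async call(context) {
    let result;
    for (const step of this.steps) {
      result = await runNode(step, context);
    }
    return result;
  }
}
```

In the real assistant, each classifier would be a small LLM prompt constrained to return one of the allowed labels, which is what makes the tree so much more reliable than a single free-form prompt.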
If you are interested in AI safety - whether you agree with the recent emphasis on it or not - I recommend watching at least a couple of videos by Robert Miles:
https://www.youtube.com/@RobertMilesAI
His videos are very enjoyable and interesting, and he presents a compelling argument for taking AI safety seriously.
Unfortunately, I haven't found an equally high-quality source presenting arguments for the opposing view. If anyone knows of one, please share it.
LLMs can do a surprisingly good job even if the text extracted from the PDF isn't in the right reading order.
Another thing I've noticed is that, most of the time, figures are explained thoroughly in the surrounding text, so the model doesn't need to see them to generate a good summary. Human communication is highly redundant, and we rarely realize it.
Oh finally. Sorry everyone for this train wreck of a thread.
Ok, this is an uncharacteristically bad summary, AutoTLDR. Bad bot!