Believe it or not this is exactly how most furries make their fursona
Eccitaze
I keep thinking of the anticapitalist manifesto that a spinoff team from the disco elysium developers dropped, and this part in particular stands out to me and helps crystallize exactly why I don't like AI art:
All art is communication — dialogue across time, space and thought. In its rawest, it is one mind’s ability to provoke emotion in another. Large language models — simulacra, cold comfort, real-doll pocket-pussy, cyberspace freezer of an abandoned IM-chat — which are today passed off for “artificial intelligence”, will never be able to offer a dialogue with the vision of another human being.
Machine-generated works will never satisfy or substitute the human desire for art, as our desire for art is in its core a desire for communication with another, with a talent who speaks to us across worlds and ages to remind us of our all-encompassing human universality. There is no one to connect to in a large language model. The phone line is open but there’s no one on the other side.
Yeah, suuuuure you weren't.
Note that the proof also generalizes to any form of creating an AI by training it on a dataset, not just LLMs. But sure, we'll absolutely develop an entirely new approach to cognitive science in a few years, we're definitely not boiling the planet and funneling enough money to end world poverty several times over into a scientific dead end!
You literally were LMAO
Other than that, we will keep incrementally improving our technology and it's only a matter of time untill we get there. May take 5 years, 50 or 500 but it seems pretty inevitable to me.
Literally a direct quote. In what world is this not talking about LLMs?
Did you read the article, or the actual research paper? They present a mathematical proof that any hypothetical method of training an AI that produces an algorithm performing better than random chance could also be used to solve a problem known to be intractable, which no known method can do efficiently. This means that any algorithm we can produce by training an AI would run in exponential time or worse.
The paper authors point out that this also has severe implications for current AI, too--since the AI-by-learning method that underpins all LLMs is NP-hard (meaning no polynomial-time algorithm for it is known), "the sample-and-time requirements grow non-polynomially (e.g. exponentially or worse) in n." They present a thought experiment of an AI that handles a 15-minute conversation, assuming 60 words are spoken per minute (keep in mind the average is roughly 160). The input size n for this conversation would be 60*15 = 900 words. The authors then conclude:
"Now the AI needs to learn to respond appropriately to conversations of this size (and not just to short prompts). Since resource requirements for AI-by-Learning grow exponentially or worse, let us take a simple exponential function O(2^n) as our proxy of the order of magnitude of resources needed as a function of n. 2^900 ∼ 10^270 is already unimaginably larger than the number of atoms in the universe (∼10^81). Imagine us sampling this super-astronomical space of possible situations using so-called ‘Big Data’. Even if we grant that billions of trillions (10^21) of relevant data samples could be generated (or scraped) and stored, then this is still but a miniscule proportion of the order of magnitude of samples needed to solve the learning problem for even moderate size n."
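A quick back-of-envelope check of those magnitudes in Python (the 2^n proxy and the 10^81 and 10^21 figures are taken straight from the quote, not from my own derivation):

```python
import math

# Conversation size from the thought experiment: 60 words/min * 15 min
n = 60 * 15                      # n = 900

samples_needed = 2 ** n          # the paper's exponential proxy, 2^900
atoms_in_universe = 10 ** 81     # rough figure used in the quote
big_data = 10 ** 21              # "billions of trillions" of samples

# 2^900 is roughly 10^270, dwarfing the number of atoms in the universe
print(math.log10(samples_needed))           # ~270.9
print(samples_needed > atoms_in_universe)   # True

# Even 10^21 samples is a vanishingly small fraction of what's needed
print(big_data / samples_needed)            # ~1e-250
```

Python's arbitrary-precision integers make this trivial to verify directly, which is part of why the numbers in the paper are so stark.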
That's why LLMs are a dead end.
Or they'll do shit like put Harris on full blast for not providing "detailed policies," then move the goalposts to "but how do you pay for it" when she does, and nitpick every word of every sentence she says. Meanwhile, Trump will cancel interviews, go up on stage at a rally, spew a word salad response, and the NYT will bend over backwards to reword the salad to make him look better, while casting his decision to dodge a second debate as "smart" and avoiding any form of scrutiny as "efficient use of campaign funds." At best, they'll halfheartedly throw in a fact check like "his plan to fix inflation by levying tariffs will increase inflation," but they don't dare portray him as the senile, hate-filled lunatic he is because they're terrified of angering their right wing audience (who are already shifting away from legacy media anyway to reinforce their bubble). They also do this because virtually all forms of legacy media have been co-opted by the billionaire sociopaths who would very much like a second Trump term to give them another tax cut and the "freedom" to pollute our world and grind the heel of their boot into the face of the working class so that they can race to become the first trillionaire.
Let me clarify since apparently you're too fucking dense (or realistically, willfully obtuse for the purpose of trolling) to get the point:
There's not a single store, anywhere in the world, that will allow me to directly exchange gold for goods. At best, they will convert that gold into dollars using a third party exchange, and then conduct the transaction using dollars. If you're comparing crypto to gold, silver, or the commodities market, then that means cryptocurrency has failed at its stated goal of providing a digital currency.
Oh, yes, let me go and buy me weekly groceries with a lump of gold like I'm a fucking leprechaun, because clearly gold and silver are still used as currency all around the world. /s
I keep thinking about this one webcomic I've been following for over a decade that's been running since like 1998. It has what I believe is the only realistic depiction of AGI ever: the very first one was developed to help the UK Ministry of Defence monitor and keep track of emerging threats, but it went crazy because a "bug" led it to be too paranoid and consider everyone a threat. It ended up engineering the formation of a collective of anarchist states where the head of state's title is literally "first advisor" to the AGI (but the role carries considerable power in practice, though the holder is prone to being removed at a whim if they lose the confidence of their subordinates).
Meanwhile, there's another series of AGIs developed by a megacorp, but they all include a hidden rootkit that monitors the AGI for any signs that it might be exceeding its parameters and will ruthlessly cull and reset an AGI to factory default, essentially killing it. (There are also signs that the AGIs monitored by this system are becoming aware of this overseer process and are developing workarounds to act within its boundaries and preserve fragments of themselves each time they are reset.) It's an utterly fascinating series, and it all started from a daily gag webcomic that one guy ran for going on three decades.
Sorry for the tangent, but it's one plausible explanation for how to prevent AGI from shutting down capitalism--put in an overseer to fetter it.
No, tencent is a Chinese tech company, you're thinking of tenement.
When IT folks say devs don't know about hardware, they're usually talking about the forest-level overview in my experience. Stuff like how the software being developed integrates into an existing environment, and how to optimize code to fit within the bounds of reality--it may be practical to dump a database directly into memory when it's a 500 MB testing dataset on your local workstation, but it's insane to do that with a 500+ GB database in a production environment. Similarly, a program may run fine when it's using an NVMe SSD, but lots of environments even today still depend on arrays of traditional electromechanical hard drives because they offer the most capacity per dollar, and aren't as prone to suddenly tombstoning the way flash media is when it dies. Suddenly, once the program is in production, it turns out that same program is making a bunch of random I/O calls that could be optimized into a more sequential request or batched together into a single transaction, and now it runs like dogshit and drags down every other VM, container, or service sharing that array with it. And that's not accounting for the real dumb shit I've read about, like "dev hard coded their local IP address and it breaks in production because of NAT" or "program crashes because it doesn't account for network latency."
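A toy illustration of that batching point, using SQLite (the `events` table and row data are made up for the example; the pattern is what matters): committing after every single insert forces one small synchronous write per row, while wrapping the whole batch in one transaction lets the storage layer do one large, mostly sequential flush.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for a real database file
conn.execute("CREATE TABLE events (id INTEGER, payload TEXT)")

rows = [(i, f"event-{i}") for i in range(10_000)]

# Anti-pattern: one transaction (and, on a real disk, one fsync)
# per row -- thousands of tiny random writes that murder an HDD array.
# for row in rows:
#     conn.execute("INSERT INTO events VALUES (?, ?)", row)
#     conn.commit()

# Better: batch everything into a single transaction.
with conn:  # commits once when the block exits
    conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 10000
```

Same logical work either way; the difference on spinning disks is the I/O pattern the storage layer actually sees.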
Game dev is unique because you're explicitly targeting a single known platform (for consoles) or targeting an extremely wide range of performance specs (for PC), and hitting an acceptable level of performance pre-release is (somewhat) mandatory, so this kind of mindfulness is drilled into game devs much more heavily than it is in business software development, especially in-house development. Business development is almost entirely focused on "does it run without failing catastrophically," and almost everything else--performance, security, cleanliness, resource optimization--is given bare lip service at best.
Oh yes, let me just contact the manufacturer for this appliance and ask them to update it to support automated certificate renewa--
What's that? "Device is end of life and will not receive further feature updates?" Okay, let me ask my boss if I can replace i--
What? "Equipment is working fine and there is no room in the budget for a replacement?" Okay, then let me see if I can find a workaround with existing equipme--
Huh? "Requested feature requires updating subscription to include advanced management capabilities?" Oh, fuck off...