ExLisper

joined 2 years ago
[–] ExLisper@linux.community 0 points 2 years ago

Exactly, most of them would already 100% support the dictator. That was my point: US democracy is already really fucking close to oligarchy, and it would be easy to push it over the edge.

[–] ExLisper@linux.community 3 points 2 years ago (1 children)

Sorry, I think the last Windows I had installed anywhere was XP. I was like "which windows logo is that? should I look it up?" but then I was like "nah, fuck it".

[–] ExLisper@linux.community 4 points 2 years ago (1 children)

Is that the one about the rings?

[–] ExLisper@linux.community 14 points 2 years ago (4 children)

Could be. Been a long time since I've read Dune.

[–] ExLisper@linux.community 5 points 2 years ago (6 children)

I'm pretty sure he could see colour. Wasn't that what he had those glasses for?

[–] ExLisper@linux.community 1 points 2 years ago (1 children)

Yes, and what I'm saying is that the US is not really that far from this system. It doesn't have strong constitutional defences (the Supreme Court is extremely politicised, corrupt, and can change its interpretation of the constitution on a whim), the two-party system is pretty much one capitalist party switching power between its more extreme and less extreme wings every couple of years, and it's all run by 70+ year old millionaires. The US could very easily keep the thin layer of "democracy" and turn into a dictatorship beneath it.

[–] ExLisper@linux.community 5 points 2 years ago (1 children)

Should I make it into a 'Yo Dawg' meme?

[–] ExLisper@linux.community 4 points 2 years ago (1 children)

But you would need something like multiple preprocessing steps, expanding the search depth at each step, and it would have to work both ways: when recalling memories and when changing them. Like if I say:

  • Remember when I told you I saw Interstellar last year?
  • AI: Yes, you said it made you vomit.
  • I lied. It was great.

So you process the first input and find the relevant info in the 'memory', but for the second one you have to recognize that it refers to the same memory, understand that memory, and alter it/append to it. It would get complicated really fast. We would need some AI memory management system to manage the memory for the AI. I'm sure it's technically possible but I think it will take another breakthrough, and we won't see it soon.
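A rough sketch of that two-pass idea in Python, just to make it concrete. Everything here is made up for illustration: the bag-of-words `embed()`, the similarity threshold, the way corrections get glued on. A real system would need an actual embedding model and something far smarter for the "understand and alter the memory" step.

```python
# Two-pass memory handling, as described above: match the new message against
# stored memories first, and revise the matched memory instead of blindly
# appending a new one. embed() is a toy bag-of-words stand-in.
import math
import re
from collections import Counter

def embed(text):
    return Counter(re.findall(r"[a-z']+", text.lower()))

def similarity(a, b):
    shared = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return shared / norm if norm else 0.0

memories = []  # each entry: {"summary": str, "vector": Counter}

def process(message, threshold=0.2):
    vec = embed(message)
    # Pass 1: is this message about something we already remember?
    best = max(memories, key=lambda m: similarity(vec, m["vector"]), default=None)
    if best and similarity(vec, best["vector"]) >= threshold:
        # Pass 2: alter/append to the existing memory. A real system would have
        # to actually understand the correction, not just concatenate text.
        best["summary"] += " | correction: " + message
        best["vector"] += vec
    else:
        memories.append({"summary": message, "vector": vec})

process("I saw Interstellar last year, it made me vomit")
process("I lied about Interstellar, it was great")
print(memories[0]["summary"])
```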

[–] ExLisper@linux.community 2 points 2 years ago (2 children)

So imagine a convo:

  • Let's see a movie.
  • AI: What movie would you like to see?
  • Interstellar.
  • AI: Ok.

A year later:

  • Do you remember the movie Interstellar?

Now the AI can find the message that said 'Interstellar' in the history, but without any context. To know you were talking about the movie, it would have to analyze the entire conversation again. And the emotional charge of a message can also change instantly:

  • My whole family died in a plane crash.
  • AI: OMG!
  • Just kidding, April fools!

What would the AI 'remember'? It would require a higher level of understanding of the conversation, and the 'memories' would have to be updated all the time. It's just not possible to replicate with a simple log.
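To make the retrieval problem concrete, here's a toy version in Python. Both the raw-log lookup and the "summary written at the time" alternative are assumptions for illustration, not how any current chatbot actually works.

```python
# Toy illustration of the retrieval problem above: searching the raw log a year
# later only surfaces the bare turn "Interstellar." with none of the surrounding
# context, so the assistant can't tell it was a movie plan without re-reading
# the whole conversation.
raw_log = [
    ("user", "Let's see a movie."),
    ("ai", "What movie would you like to see?"),
    ("user", "Interstellar."),
    ("ai", "Ok."),
]

def lookup(log, keyword):
    return [turn for role, turn in log if keyword.lower() in turn.lower()]

print(lookup(raw_log, "interstellar"))   # ['Interstellar.'] -- no context at all

# A hypothetical alternative: write a contextualized summary at the time of the
# exchange and search that instead of the raw turns.
summaries = ["user decided to watch the movie Interstellar"]
print([s for s in summaries if "interstellar" in s.lower()])
```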

[–] ExLisper@linux.community 5 points 2 years ago (7 children)

That's the thing, I don't think a database can work as long-term memory here. How would it work? Let's say you tell your AI girlfriend that the movie Interstellar was so bad it made you vomit. What would it store in the DB? When would it look that info up? It would be even worse with specific events. Should it remember the exact date of each event perfectly, like a DB does? It would be unnatural. To actually simulate memory it should alter the model somehow, and the scale of the change should be proportional to the emotional charge of the message. I think this is on a completely different level than current models.
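For what it's worth, the closest DB-style approximation of the 'emotional charge' part I can picture looks something like the sketch below: each record carries a salience weight plus a time decay. The numbers and the decay formula are completely made up, and it still doesn't touch the "alter the model" part, which is my actual point.

```python
# Sketch of "emotional charge" as a plain DB-style record: charged memories keep
# a high recall score even as they age, mundane ones fade quickly. Salience
# values and the half-life are arbitrary choices for illustration.
import math
import time

memories = [
    {"text": "Interstellar made him vomit", "salience": 0.9, "t": time.time() - 180 * 86400},
    {"text": "He bought oat milk on Tuesday", "salience": 0.1, "t": time.time() - 2 * 86400},
]

def recall_score(mem, half_life_days=365):
    age_days = (time.time() - mem["t"]) / 86400
    decay = 0.5 ** (age_days / half_life_days)   # older memories fade...
    return mem["salience"] * decay               # ...unless they were emotionally charged

for mem in sorted(memories, key=recall_score, reverse=True):
    print(round(recall_score(mem), 3), mem["text"])
```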

[–] ExLisper@linux.community 8 points 2 years ago (13 children)

Is it even feasible with this technology? You can't have infinite prompts, so you would have to adjust the weights dynamically, right? But would that produce the effect of memory? I don't think so. I think it will take another major breakthrough before we have personal models with memory.
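The "you can't have infinite prompts" part is easy to see with a toy sketch: with a fixed context window you eventually have to drop or summarize old turns, and whatever falls out of the window is simply gone. Word counts stand in for a real tokenizer here, and the numbers are arbitrary.

```python
# Crude illustration of the fixed-context-window constraint: once the history
# no longer fits, old turns have to be dropped (or summarized), so the model
# "forgets" them entirely.
CONTEXT_LIMIT = 4096  # pretend token budget

def fits(history):
    return sum(len(turn.split()) for turn in history) <= CONTEXT_LIMIT

history = []
for day in range(365):
    history.append(f"day {day}: " + "conversation " * 50)
    while not fits(history):
        history.pop(0)  # the oldest turns silently disappear

print(f"{len(history)} of 365 turns still fit in the window")
```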
