FaceDeer

joined 1 year ago
[–] FaceDeer@fedia.io 1 points 1 hour ago (1 children)

so it doesn't appear one was trying to force it off or anything.

I don't think it indicates that given how the flight was already doomed at that point. The damage was done.

[–] FaceDeer@fedia.io 1 points 1 hour ago (1 children)

Ooh, I used to mod Luanti a lot. Wonder if any of my old work is on this server. Is it just straight Mineclonia?

[–] FaceDeer@fedia.io 3 points 7 hours ago (1 children)

You asked:

Everyone likes to believe they’re thinking independently. That they’ve arrived at their beliefs through logic, self-honesty, and some kind of epistemic discipline. But here’s the problem - that belief itself is suspiciously comforting. So how can you tell it’s true? [...] I’m asking: what’s your actual evidence that you think the way you think you do? Not in terms of the content of your beliefs, but the process behind them. What makes you confident you’re reasoning - not just rationalizing?

And I'm answering that. You literally asked for "actual evidence," and I gave links to the specific research I'm referencing.

I'm not here to argue with you over the meaning of the word "consciousness" when you didn't even ask about that in your question in the first place. If you think I'm talking about something other than consciousness, go ahead and tell me what other word suits you.

[–] FaceDeer@fedia.io 3 points 8 hours ago (3 children)

Loosely, the awareness of our own actions and the reasons why we do them. The introspective stuff that the research I linked to is about.

The specific word doesn't really matter to me much. Substitute a different one if you prefer. Semantic quibbling is the sort of thing I leave to the philosophers.

[–] FaceDeer@fedia.io 4 points 9 hours ago (5 children)

You might be referring to the split-brain experiments, where researchers studied patients who had their brain hemispheres separated by cutting the corpus callosum – the “bridge” between the two sides.

Nope, I would have described the split-brain experiments if that's what I was referring to. I dug around a bit to find a direct reference and I think it was Movement Intention After Parietal Cortex Stimulation in Humans by Desmurget et al. In particular:

the fact that patients experienced a conscious desire to move indicates that stimulation did not merely evoke a mental image of a movement but also the intention to produce a movement, an internal state that resembles what Searle called “intention in action”

I did misremember one detail: they only felt the intention to move; they didn't actually move their limbs when those brain regions were stimulated.

A related bit of research I dug up on this reference hunt that I'd forgotten about but is also neat: Libet's experiments in the 1980s, which used the timing of brain activity to measure when a person formed an intention to do something versus when they became consciously aware that they had formed it. There was a significant delay between those two events, with the intention coming first and the conscious mind only later "catching up" and deciding it was going to do the thing the brain was already in the process of doing.

As for consciousness, I think you might be using the term a bit differently from how it's typically used in philosophical discussions.

Probably; I'm less interested in philosophy than I am in actual measurable neurology. The whole point of all this is that human introspection appears to be flawed, and a lot of philosophy relies heavily on introspection. So I'd rather read about people measuring brain activity than about people merely thinking about brain activity.

This, I (and many others) would argue, is the only thing in the entire universe that cannot be an illusion.

You can argue it all you like, but in the end science requires evidence to back it up.

[–] FaceDeer@fedia.io 11 points 10 hours ago (7 children)

It's funny. I've seen research about LLMs "reasoning" and "introspecting" showing that when you ask them why they answered a question in a certain way, they make up stories that don't match how their neurons actually fired, and a common response in the comments is to triumphantly crow about how this shows they're not "self-aware" or "actually thinking" or whatever.

But it may be the same with humans. There have been fun experiments where people had neurons in their brains artificially stimulated, causing them to take some action, such as reaching out with their hand, and when asked why they did that they'd say - and believe - that they did it for some made-up reason, like they were just stretching or they wanted to pick something up. Even knowing full well that they were in an experiment that was going to use artificial stimulation to make them do that.

I suspect that much of what we call "consciousness" is just made up after-the-fact to explain to ourselves why we do the things that we do. Maybe even all of it, for all we currently know. It's a fun shower thought to ponder, if nothing else. And perhaps now that we've got AI to experiment with in addition to just our messy organic brains we'll be able to figure it all out with more rigor. Interesting times ahead.

I'm not terribly concerned about it, though. If it turns out that this is how we've been operating all along, well, it's how we've been operating all along. I've liked being me so far, why should that change when the curtain's pulled back and I can see the hamster in the wheel that's been making me work like that all along? It doesn't really change anything, and I'd like to know.

[–] FaceDeer@fedia.io 8 points 12 hours ago (1 children)

Surely someone committing suicide and taking hundreds of people with him in the process wouldn't lie about it.

[–] FaceDeer@fedia.io 15 points 13 hours ago (4 children)

I watched a very comprehensive and professional video by Captain Steeeve on this subject earlier today. He didn't outright literally say that one of the pilots deliberately downed the plane, but it was very clear that he thought that was the only explanation that really made sense here. Why do you say it sounds like they "did not mean to do so"? The switches are designed not to be movable without considerable deliberation and intent; you can't just bump them with your knee and switch them off. And both pilots were plenty experienced enough to know that you don't turn those switches off at that point in the flight.

[–] FaceDeer@fedia.io 11 points 13 hours ago (1 children)

Yes, though not a "traditional" one. I've got a voice recorder; I use it when I'm walking my dog to ramble on about whatever's on my mind. The day's events, my personal thoughts, to-do lists and notes, whatever. When I get home I dump the recording into a folder where some scripts I've written process the audio to produce a transcript (using the Whisper model from OpenAI) and then run an LLM (currently Qwen3) to create summaries and subject tags and so forth from it, entirely local on my computer. I've got an index for searching through the entries based on those AI-generated tags and summaries, so I can more easily find old stuff when I need it or am curious for whatever reason.

I use entirely local AI because I am completely open and honest in there. Probably a bunch of blackmail material to be found if you dug deeply enough. I'm very careful with data security, none of this ever leaves my local systems.

I've been doing this for over ten years now, almost daily. I've always had a vague plan that someday I'd feed it all into an AI; it's only in the past two years that that's actually started to become a reality. This weekend I'm going to experiment with upgrading my transcription AI to WhisperX, and if it does a significantly better job I may have to rerun the whole dang archive through it. Could take weeks, maybe months. I'm almost hoping it doesn't work. :)
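For anyone curious what the search side of a setup like this can look like: here's a minimal sketch of a tag-based index in Python. This isn't my actual script; it assumes each transcribed entry has already been reduced (by the LLM step) to a dict with hypothetical "date", "tags", and "summary" fields, and it just builds an inverted index from tags to entries.

```python
from collections import defaultdict

def build_tag_index(entries):
    """Map each lowercased AI-generated tag to the list of entries carrying it."""
    index = defaultdict(list)
    for entry in entries:
        for tag in entry["tags"]:
            index[tag.lower()].append(entry)
    return index

def search(index, tag):
    """Return all journal entries tagged with `tag` (case-insensitive)."""
    return index.get(tag.lower(), [])

# Example entries, as the LLM tagging step might produce them:
entries = [
    {"date": "2025-07-01", "tags": ["dog", "to-do"], "summary": "Walk notes and errands."},
    {"date": "2025-07-02", "tags": ["dog", "work"], "summary": "Project ideas from the walk."},
]

index = build_tag_index(entries)
print([e["date"] for e in search(index, "Dog")])  # → ['2025-07-01', '2025-07-02']
```

The real thing also searches the summaries, but the principle is the same: the heavy AI work happens once at ingest time, and lookups afterward are just cheap dictionary hits.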

[–] FaceDeer@fedia.io -1 points 2 days ago

An equally-true headline: "At last, a promising use for AI agents: debugging smart contract code." The availability of this tool should make smart contracts more secure in the future, and cryptocurrency more reliable as a result.

[–] FaceDeer@fedia.io 28 points 2 days ago (1 children)

Indeed. This sort of thing goes way back - the term "barbarian" literally comes from the ancient Greeks making fun of how foreign languages sounded to them (they used the onomatopoeia "bar bar" to represent foreign speech as meaningless noise). Dismiss their language as meaningless gibberish and you dismiss their thoughts as meaningless too.

[–] FaceDeer@fedia.io 23 points 2 days ago (1 children)

Yeah, this looks just plain awesome. Grocery stores are so bland and samey, it'd be nice if they had more creative decor like this.
