this post was submitted on 11 Mar 2026
645 points (96.1% liked)

Linux Gaming
[–] CoyoteFacts@piefed.ca 267 points 4 days ago (4 children)

Whether or not I use Claude is not going to change society

This gives me shopping cart theory vibes. I don't usually base my moral compass on whether my action will have some kind of measurable impact, but on whether I believe it's the right thing to do. After the intense doubling down in that discussion thread I'm definitely steering clear of Lutris. It costs me very little effort to avoid projects that do icky things I don't want to encourage (even though it may not have a measurable impact).

[–] rtxn@lemmy.world 135 points 4 days ago (1 children)

I can't fix the problem, therefore I'll be part of the problem.

[–] Korhaka@sopuli.xyz 21 points 4 days ago (3 children)

At my job we have been told that we have to start using AI more. I can't really see any point. The only tasks AI can help me with are pointless tasks from HR that shouldn't exist in the first place. Monthly forms with questions like "how are you feeling emotionally" used to take me ages to answer in corpo-friendly bullshit, but locally hosted DeepSeek does it in seconds.

[–] toynbee@piefed.social 21 points 4 days ago* (last edited 4 days ago)

When my work enabled Gemini, I asked it how to disable it. It said it couldn't help me and asked if I had another question. I didn't.

That's the only interaction I've willingly had with it.

[–] Kanda@reddthat.com 2 points 3 days ago (1 children)

The HR department will see that it's not quality human HR-slop and the thought police will be with you shortly

[–] Korhaka@sopuli.xyz 4 points 3 days ago (1 children)

Oh LLMs are great at writing HR slop

[–] Kanda@reddthat.com 1 points 2 days ago

But then there's no suffering

[–] Pika@rekabu.ru 0 points 4 days ago (1 children)

In my experience, AI models are fairly good at contextual search. That's the only thing I use them for.

[–] Korhaka@sopuli.xyz 4 points 4 days ago

Yes, if we had documentation, then I suspect AI tools could be good for finding information in it.

[–] Joelk111@lemmy.world 34 points 4 days ago (1 children)

Lutris has always been a bit hit-or-miss for me; I avoided it unless it was the only option, since it only worked about half the time. I don't mean to suggest it shouldn't exist (anything that makes Linux easier to use is great), but I don't use it at all in my current workflows.

[–] CoyoteFacts@piefed.ca 5 points 4 days ago (1 children)

I guess I've just been behind the times, but I've never had an incentive to switch. I just installed faugus and transferred everything over, and it seems very slick. It seems to be missing one or two things, like per-game environment variables, but all the other important stuff is there. I know what I'm doing with prefixes, so having all the knobs to turn is great, but honestly Linux gaming doesn't need most of those knobs nowadays.

[–] oxideseven@lemmy.ca 1 points 3 days ago (1 children)

How does transferring work?

I only have 2 or 3 things in lutris.

[–] CoyoteFacts@piefed.ca 1 points 3 days ago (1 children)

I just did it manually, pointing faugus at the old prefixes and setting the launch options the same

[–] oxideseven@lemmy.ca 1 points 3 days ago

Sick. Thanks. I'll do the same.

[–] blackbrook@mander.xyz 27 points 4 days ago (1 children)

Also, it is one thing to decide that something is not an ethical issue of concern; it is another thing to act with disrespect toward everyone with a different opinion.

[–] FauxLiving@lemmy.world -4 points 4 days ago

it is another thing to act with disrespect to everyone with a different opinion.

Unless that opinion is 'I like using AI', then they deserved the disrespect.

[–] logos@sh.itjust.works 1 points 3 days ago (1 children)

virtue ethics > utilitarianism

[–] MolochAlter@lemmy.world 8 points 3 days ago (1 children)

Utilitarianism really falls at the first hurdle of any kind of evaluation of a moral system.

It has no real prescriptive power because it demands that you correctly foresee the outcomes of your actions. That problem is literally addressed by "the road to hell is paved with good intentions", an adage at least 400 years old, and yet people still gravitate toward utilitarianism as if society had not been explicitly cautioning us about that mindset forever.

At this point I can't help but look down on those who genuinely identify as utilitarian as either too young, too stupid, or actively malevolent and trying to find a way to justify their bad behaviours as errors rather than malice or negligence.

[–] ns1@feddit.uk 2 points 2 days ago (1 children)

I'd offer you a counterpoint (ignoring the issue with Lutris and AI for a minute):

If you choose not to judge your own actions by the expected consequences of those actions for everyone involved, then how exactly are you supposed to judge them? If you're following some rule that disagrees with the utilitarian view, then by definition it's a rule that in your own opinion leads to a worse outcome for everyone.

It's of course completely fine to not be utilitarian, but trying to claim that all utilitarians are either stupid or evil is just incorrect.

[–] MolochAlter@lemmy.world 2 points 1 day ago* (last edited 1 day ago)

ignoring the issue with Lutris and AI for a minute

Please by all means, I ignored it in the first place, I find this way more interesting.

If you choose not to judge your own actions by the expected consequences of those actions for everyone involved, then how exactly are you supposed to judge them?

Well, this is only half the problem. It's a bad system because it demands the impossible of you (i.e. accurately predicting the future), but it also takes a really narrow interest in the dimensions of human morality.

To directly answer the question however: you judge them by a set of principles, whichever you deem right, that you apply consistently across choices.

When it comes to interpersonal choices, the vast majority of questions can easily be answered by asking yourself "am I betraying some explicit or implicit bond of trust with someone (who has not broken it themselves) by doing or saying this?", and if you are, you just stop.

And to be clear, I don't claim to follow this principle 100% of the time, I am not a saint, but that to me is the guiding principle when there are stakes to my behaviour, and it has not failed me yet.

If you’re following some rule that disagrees with the utilitarian view, then by definition it’s a rule that in your own opinion leads to a worse outcome for everyone.

(Emphasis added)

At its core, the idea of utilitarian morality is to "maximise utility", that is, to do whatever does the most "good" for the greatest number of people.

This is, IMO, a terrible metric, and as a deontologist I am perfectly happy reaching a "worse" outcome by it.

It is not particularly hard to see how, by applying this metric, you can justify any kind of scapegoating, abuse, and/or undue leniency on people that would deserve harsh punishment in any deontological or virtue based system, as soon as enough "good" is produced through it.

There is a very dark, but apt, joke about this kind of approach to morality: that 9/10 people involved in it endorse gang rape.

To me, morality is a qualitative assessment, not a quantitative one.

It does not matter how many perpetrators' lives will be ruined if they have earned their punishment, and it does not matter whether their happiness at getting away with the crime would outweigh the victim's suffering.

To do anything else would be to relinquish morality to the whims of the masses, because it implies that there is a threshold past which the abuse of the few becomes negligible due to the benefits it brings to the many.

trying to claim that all utilitarians are either stupid or evil is just incorrect.

To be fair I also stated they can be naïve; I was one too in my youth, until I learned and understood better.