apehardware

joined 6 months ago
[–] apehardware@lemmy.world 2 points 1 month ago

I guess more on the "tech" side, we mostly just solve requests from banks, most of them are repetitive tasks or fixing some weird incident. I call it "customer support without seeing the customer".

 

So, for some context, I work at an accounting company that provides tech services to banks. Really tiring shit but it was the only thing where they didn't ask you for 100 years of previous experience to enter an entry-level position.

A couple of days ago the CEO came for an annual visit and gave a conference, which was 2 hours of generic corporate crap in the world's most uncomfortable chairs (the only one that probably benefited from that meeting was the local chiropractor). I'm pretty sure the word "growth" was said at least 20 times, and I'm starting to genuinely hate that word.

The interesting bit is that, at the end, the guy answered some questions that had been sent in beforehand by the employees and other bosses. One of them was whether he saw any benefits in using AI.

He fucking admitted he did not see those benefits, but then said immediately afterwards that the company had to use it because the competitors were using it.

Thankfully the project I work on is such a mess that they don't have the time to add AI to the workflow.

[–] apehardware@lemmy.world 7 points 6 months ago* (last edited 6 months ago)

Already doing my part then. The only times I used it were at the beginning, when they were straight up tracking usage of the damn thing but not the inputs, so me and a few coworkers asked it random shit to have a laugh. That was before we found out about the environmental impacts and model collapse. Right now that "AI adoption team" has been very quiet for several months, so we just do work as normal (it sucks, but at least we get paid).

 

Title explains it all. For people in tech jobs who are being ~~indoctrinated~~ encouraged to use AI, what could be some methods of malicious compliance?

The only one I managed to come up with would be asking the chatbot to write the question you wanted to ask, then prompting it with its own reply to speed up that sweet model collapse.

 

[–] apehardware@lemmy.world 2 points 6 months ago

I keep making weird (human-made) art because I fucking want to.

I avoid AI as much as I can (not easy since I have a tech job; I have managed to dodge using it, but holy shit the AI cultists do not SHUT. THE. FUCK. UP).

I try to donate to good causes (nuke OpenAI headquarters kickstarter when?).

I try to find something to nerd out about. Last time it was SCP; now it seems like it's going to be Sakha mythology (give these people independence from the Muscobitches now).

[–] apehardware@lemmy.world 2 points 6 months ago

Cyberpunk fiction did not prepare us for the tech dystopia being utterly embarrassing.

 

I could give a summary of the game and shit on AI. Surely people will like that.

But no. I'm going to go much deeper than just that.

My distaste for AI isn't just because of the misinformation it generates, or its displacement of real talent, or its environmental impact...

...it's more personal.

I was at a really low point when the AI chatbot craze began, and I ended up addicted for a few months. I managed to crawl out of that hell and decided to make this as a warning to the 3 people who will actually play it.

Bullet hell boss rush where you draw runes instead of shooting.

Collage art aesthetics and bizarre lore inspired by analog horror, Neon Genesis Evangelion and the folklore of Costa Rica and Galicia.

A merciless satire of GenAI, showing what would happen if it stole and shat out mockeries of real objects instead of just data.

A look at the lies tech CEOs vomit onto the masses to rot their brains with their bullshit generators.

This is Sayonara Sigil Sentry.

Oh and it's free.