Allstate is now going to use Copilot, which means Microsoft AI gets access to all Allstate customers' information.
Enterprise agreements covering Copilot have "your data doesn't leave our cloud, and we won't train on your data" clauses. Even if you are just an M365 customer using the free tier of Copilot, you have enterprise data protection.
If you don't believe them on that clause, they were probably already storing that data with one of the cloud providers anyway, so Microsoft would already have had full access to it right at the database, rather than just some snippets of prompted context.
I get the strong feeling we work for the same company. Quick question: did it previously have a strong focus on metaverse, blockchain, and cloud when each of those things was in fashion?
Yep. Big healthcare, same thing.
It's like they don't realize that AI, at best, is as good as having a mailroom full of overeager interns. Who in their right mind would want to put that in front of clients?! Or worse, have it run critical business systems.
AI at best is as good as having a mailroom full of overeager interns.
That is the best description of AI that I've seen.
Even better, it's overeager interns that don't learn.
It blows my mind how many of the dress-up-for-meetings crew can't see how bad and unprofessional AI-generated content looks.
More than 95% of management is utterly incompetent. All they do is follow the newest grift, because they are not capable of, and don't want to, actually evaluate something based on necessity or need. Examples of this are Agile, cloud (AWS, Azure, etc.), AI, and many more.
Old man still yelling at the cloud 🤣
My company made the announcement during a meeting that we're going full AI. Our website is going to get sloppified, our software is going to get sloppified, and it's going to let our clients sloppify even harder. They're expecting everybody to start using AI in our workflows somehow, not just engineers. They're going to look at the numbers and ask questions if anybody isn't using Claude enough.
I'm protected for now because my environment is a bit more locked down, so they can't expect me to use AI, yet.
They can fire me when that day comes. I refuse to use generative AI.
"Let them eat slop" - Queen Marie-Antoinette
(Yes I know she didn't actually say it but let me have it)
I work for a financial company where our largest clients look to us for fiscally conservative actions. Thankfully, that trickles down to create an IT division that is aware of current trends, but usually doesn't chase the bleeding edge.
Let the risk takers tank the challenges, and we'll come in after and benefit from what's already been figured out.
We just opened Copilot chat to our users late last year. We discovered, and squashed, a whole slew of people trying to shadow IT their way into "supercharging their workflows with AI". A few were fired for shoving private info into public and unapproved models, but not nearly enough.
Been seeing "AI-native" too, and I work at a university. I forgive them for following the money, but the other techs and I have to constantly remind leadership that the average person is not excited about LLMs, and that going all-in could do reputational and credibility damage to the institution we currently work at.
I am not sure even your "non-average" person is excited about LLMs or AI.
Like, I run things locally some and use Claude Code, but if all AI disappeared tomorrow I would just shrug and be irritated that I have to use Stack Overflow again because SO sucks fucking ass.
At a standup meeting, the CEO asked whether we use AI, and I was the only one who said that I don't use it at all, or very rarely, which started a little discussion. Overall, their position is somewhat moderate. They do fall for the hype a lot (especially with the recent Claude stuff), but it did not seem that using it was a requirement for us. But they were curious why I don't embrace it, and I said that I can feel myself getting more and more stupid when using these tools, due to the mental offloading. This seemed to resonate a bit with the others; at least I could feel that my coworkers in the room got my point, despite remaining silent.
A coworker recently came to my desk and jokingly asked why I was typing out code by hand when I could ask Claude to generate it for me, but there was also a bit of seriousness to it, so I cringed a lot.
I said that I can feel myself getting more and more stupid when using these tools, due to the mental offloading.
It takes a lot of self-reflection to notice how well you understood a thing before genAI use, and how that understanding starts to disappear quickly once you use genAI a lot more. I, too, noticed a similar reduction in ability and understanding of topics the more I used genAI. Now, I only use it as a last resort, or when searching online would return too many unrelated results because the description of the problem is too generic.
A senior web dev is all about it at my work and I can't really stand him nor genAI so I'm glad I don't work directly with him or his team!
I really noticed it when, instead of thinking about how to solve X, my mind started phrasing a prompt to ask how to solve X instead. Does that make sense? I found this to be a dangerous, almost evil thing, and I'm sure it's the same with my coworkers, or they just don't like to admit it. I still do this sometimes but am doing my best to unlearn it. And the crazy thing is, I did not even use it that much, only very occasionally, similar to what you mentioned you do. I do not wish to know how cooked the brains of "vibe coders" are by now...
Well, some have started succumbing to a form of psychosis, so probably not doing great!
I think somewhere in my organisation there's a very strong bulwark against AI encroachment, and I suspect it's our IT security because data protection and privacy are pretty much their top priority (we take it seriously in general). I don't think I've read anything published or sent internally that reads like AI slop.
I'm cleaning up AI slop but have been tasked with using more AI to do it faster and deliver more features. Doesn't matter to me. I'm getting paid 🤷 It'll be somebody else's problem soon enough, if the company survives.
My eye will be on the job market soon and I do wonder what it will be like. It seems like AI is eroding the minds of many.
It's terrible: every single job posting requires using AI. That's how you can tell it's all a house of cards. Imagine reading a job post that said you'd be required to use JetBrains IDEs; you'd be thinking, "what the hell, why is this in a job post??"
Some teams have plugins and workflows around particular IDEs. Though I don't think any team has a workflow around AI that can't be done with human intelligence
Run before the cash burn finishes and it explodes in an AI supernova.
I'm lower level (sysadmin), but our devs use "AI", though it's more algorithmic pattern matching over specific data sets we have.
We don't have a use case for it primarily.
I use it to get a rough outline for bash, python, and ansible scripts for automation. Still lots of refining and fitting things into our environment, but it's helped me tremendously get a jump start with a draft. It's like I get writer's block lol.
Yes. Also big tech
We've been commanded to rework all workflows to be agentic, whatever the hell that means.
It means prompts are triggered by events like crons or webhooks, and using text files to keep models from losing "context" 🙄
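For what it's worth, that pattern (prompts fired by crons or webhooks, with a text file standing in for memory) can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual workflow; `call_llm` is a stand-in stub, not a real model API.

```python
import json
from pathlib import Path

CONTEXT_FILE = Path("agent_context.json")

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; just echoes the prompt here.
    return f"handled: {prompt}"

def load_context() -> list:
    # Prior exchanges live in a text file so the "agent" doesn't
    # lose context between triggered runs.
    if CONTEXT_FILE.exists():
        return json.loads(CONTEXT_FILE.read_text())
    return []

def handle_event(event: dict) -> str:
    # A cron job or webhook endpoint would invoke this on each trigger.
    history = load_context()
    prompt = f"Event: {event['type']} ({len(history)} prior runs)"
    reply = call_llm(prompt)
    history.append({"event": event, "reply": reply})
    CONTEXT_FILE.write_text(json.dumps(history))
    return reply
```

Strip away the buzzword and it's an event handler that prepends a log file to each prompt, which is the commenter's point.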
Any sort of tech company has always been a headless chicken. It's just that when software developers were the hot thing on the horizon, we were oblivious to the stupidity that thrives in corporate.
If you look at a business that maximizes profit and sample opinions from the business side about the business domain, you would be forgiven for assuming the replies are nothing short of genius. However, the facade of brilliance falls apart the moment you sample anything adjacent to the business or unrelated to it. You realize that having experience in, and making a lot of money from, business does not translate well to other endeavors where making money is not the main concern.
Now that we have been milked dry like aphids in an ant farm and all but kicked out of the money-making complex, we are starting to realize we were just cattle all along, and that the praise for our stature in the industry as the eponymous "genius" of the entire operation was just flattery to assuage any inconvenience we perceived through corporate bullshit. They succeeded in keeping our ire away from the incredibly narrow-sighted and hollow endeavor that is making the line go up exponentially.
Now that facade has fallen face-first at the dawn of the LLM era. The arrogance that hid from software engineers and developers, but was all too apparent to blue-collar workers and "lesser" corporate functions, has shown its face to us, the enlightened morons.
Thanks for coming to (hehe) my TED talk
I’ve been in the software industry for 30 years. There has always been an active disdain for programmers, since we have been relatively scarce and moody and mysterious to the suits. Every so often, another product comes along which purports to finally free the organization of these pesky programmers and let the normies do the work faster and better. Management falls hard for the grift, and things end up worse off as a result. We’re in another one of those right now.
All that “genius” bullshit has always been a thin veneer, said through gritted teeth. They’ll come crawling back soon enough. But it does suck in the meantime.
Nope, we've definitely taken the position that LLM-based "AI" is actively harmful, at best.
That being said... It is a casino. So it makes sense they are generally pretty risk-averse.
I work for a small ~30-person company with various customers, including some very big names. We're very deliberate about where tools like those could help us, where it's worth the exploration and investment. We want to be innovative and have the expertise, but at the same time, be reasonable and sound. We're also very conscious of data sharing and safeguards, in part out of necessity, because we can't just share our customers' code or data with third parties.
Excitement, commitment, use, and hopes of using AI tools differ between colleagues. What we can use and how differs between projects.
So yes, there are definitely other kinds of companies and environments out there.
Microsoft?
I'm lucky I'm in a company that hasn't gone too far down the AI rabbit hole. They have set up Claude Code for us and encourage us to use it if we want, but that's about it.