Similar thought... If it were really so revolutionary and innovative, I wouldn't have access to it. The AI companies would be keeping it to themselves. From a software perspective, they'd be releasing their own operating systems, browsers, and whatnot.
It's a form of engagement hacking.
I think the order of Java and Python makes perfect sense. The OOP C++ -> Java pipeline was massive in the early 2000s, when Python wasn't really on the radar. The world has been slowly moving away from that, and Python is one of the most popular languages right now.
Odin mentioned!
gcc main.c
- unity build gang
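For anyone who hasn't seen one: a unity build just #includes the other .c files so the whole program is a single translation unit and one compiler invocation. A minimal sketch (file and function names made up):

    /* util.c */
    int add(int a, int b) { return a + b; }

    /* main.c -- the only file you hand to the compiler */
    #include <stdio.h>
    #include "util.c"   /* pull the other source in directly instead of linking it later */

    int main(void) {
        printf("%d\n", add(2, 3));
        return 0;
    }

Build it with nothing but gcc main.c. No build system, no separate objects to link, and the compiler sees the whole program at once.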
Maybe try convincing him in terms he would understand. If it were really that good, it wouldn't be public. They'd just use it internally to replace every proprietary piece of software in existence. They'd be shitting out their own browser, office suite, CAD, OS, etc. Microsoft would be screwing themselves by making ChatGPT public. Microsoft could replace all the Adobe products and drive them out of business tomorrow.
Edit: that was fast
Also depends how hard the AI runs them. A good chunk of the graphics cards that were used for mining came out on life support, if not completely toasted. Games generally don't run the piss out of them like that 24/7, and many games are still CPU-bound.
I don't think you would get much traction on C developers' existing projects. C gives you the option to do everything your way. If the developer's paradigm doesn't agree with the borrow checker, it could become a rewrite anyway.
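For example (made-up snippet, just to show the kind of pattern I mean), it's completely normal C to hold two live pointers into the same buffer with one of them writing; that's exactly the aliasing the borrow checker refuses, so a mechanical port turns into a restructuring exercise:

    #include <stdio.h>
    #include <string.h>

    /* Two pointers into the same buffer, one of them writing.
       Idiomatic C, but it's the aliasing Rust's borrow checker rejects,
       so you'd be reworking the data flow, not just translating it. */
    int main(void) {
        char line[] = "key=value";
        char *key   = line;               /* first view of the buffer  */
        char *value = strchr(line, '=');  /* second view, same buffer  */
        if (value) {
            *value = '\0';                /* write through one pointer */
            value++;                      /* while the other stays live */
        }
        printf("%s -> %s\n", key, value ? value : "");
        return 0;
    }

In Rust you'd end up reaching for indices or split_at_mut and reorganizing who owns what, which is the "rewrite anyway" part.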
Most projects don't use the newer C standards. The language just doesn't change much, and C devs like that. This might get a better response from the modern C++ crowd, but then you're missing a large chunk of the world.
They are also dev friendly too,
Not saying you're wrong because I don't use it, but from the outside, they appear actively hostile toward developers.
We are eventually going to stop writing code and focus more on writing specifications.
I don't think this will happen in my lifetime.
100%. In my opinion, the whole "build your program around your model of the world" mantra has caused more harm than good. Lots of "best practices" seem to be accepted without any quantitative measurement to prove they're actually better. I want to think it's just the growing pains of a young field.
AI hype in a nutshell