This is obvious to people who understand the basics of LLMs. However, people are fooled by how intelligent these LLMs sound, so they mistake that for actual intelligence. So, even if this is an open door, I still think it's good that someone is kicking it in to make it clear that LLMs are not generally intelligent.
That's why it felt very early to have used it before it was the default. I mean, before 2016 felt too early for me... But it was way before Covid, so I'd say around 2017.
I know I have used it since Fedora made it the default in 2016. I think I actually used it a while before that, but I don't have anything to help me pin down the exact time.
Since I only use an Intel built-in GPU, everything has worked pretty well. The few times I needed to share my screen, I had to log out and log in to an X session. However, that was solved a couple of years ago. Now I'm just waiting for Java to get proper Wayland support, so I can fully ditch X for my daily use and take advantage of the multi-DPI capabilities of Wayland.
That is the boring part when projects get more mature...
It was a joke
No, that is not the idea at all. You might have that idea, but it is not a basic idea at all. To keep something open (as in open source), you must put restrictions in place that prevent it from being closed.
A government is not more free just because it lacks restrictions against becoming a dictatorship. It is just less restricted at this point in time. To ensure a free society, there need to be restrictions in place that ensure it stays free. The same applies to software.
Many seem to believe that fewer restrictions mean more free or open; that is not true. It just means less restricted.
No, I think you misunderstand... A joke is supposed to be funny.
I actually asked ChatGPT about a specific issue I had solved a while back. It was one of those issues where it looks like a simple, naive solution would be sufficient, but due to various conditions where that fails, you have to go with a more complex solution. So, I asked about this to see what it would answer. It went with the simpler solution, but with some adjustments. The code also didn't compile. But it looked interesting enough for me to question myself: maybe it was just me who had failed with the simpler solution. So I actually tried to fix the compile errors to see if I could get it working. But the more I tried to fix its code, the more obvious it got that it didn't have a clue what it was doing. However, due to its confidence and ability to make things look plausible, it sent me on a wild goose chase.

And this is why I am not using LLMs for programming. They are basically overconfident junior devs who like mansplaining.