...so might as well say that "agent" is simply the next buzzword, since people aren't so excited about the concept of artificial intelligence any more.
This is exactly the reason for the emphasis on it.
The reality is that LLMs are impressive and nice to play with. But investors want to know where the big money will come from, and for companies, LLMs aren’t that useful in their current state. I think one of the biggest uses for them is extracting information from documents with lots of text.
So “agents” are supposed to be LLMs executing actions instead of just outputting text (such as calling APIs). Which doesn’t seem like the best idea, considering they’re not at all great at making decisions, despite these companies trying to paint them as capable of such.
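The pattern being described is essentially a tool-calling loop: the model picks an action, a harness executes it, and the result goes back into the context. A minimal sketch (every name here is made up, and the "LLM" is stubbed out with a fake function):

```python
def fake_llm(messages):
    """Stand-in for a real LLM call (hypothetical). It 'decides' to call
    a tool once, then finishes."""
    if not any(m["role"] == "tool" for m in messages):
        return {"action": "get_weather", "args": {"city": "Lisbon"}}
    return {"action": "finish", "args": {"answer": "It is sunny in Lisbon."}}

def get_weather(city):
    # Toy tool; a real agent would hit an actual API here.
    return f"sunny in {city}"

TOOLS = {"get_weather": get_weather}

def run_agent(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = fake_llm(messages)
        if decision["action"] == "finish":
            return decision["args"]["answer"]
        # This is the risky part: the model's decision is executed
        # directly, so a bad decision becomes a bad action.
        result = TOOLS[decision["action"]](**decision["args"])
        messages.append({"role": "tool", "content": result})
    return "gave up"

print(run_agent("What's the weather in Lisbon?"))
```

The loop itself is trivial; the whole bet is on the quality of the decisions coming out of the model, which is exactly the weak point.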
Even if it were, there’s no way to know; people can just lie. It’s not like it will be obvious. Some people might have a feeling it is (based on their experience playing with LLMs) but won’t be able to point to exactly why.