My understanding is that for GAN-style image generation, a detector has to develop alongside the generator: the discriminator learns to tell real images from generated ones, and fooling it is how the generator improves in the first place. So no matter how good the image generator gets, you necessarily get a detector as a byproduct of training it.
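A minimal sketch of that dynamic, with made-up toy numbers: real "images" are just numbers near 5.0, the generator is a single parameter, and the discriminator is a logistic classifier. The point is structural, not realistic — real GANs use neural nets — but notice that a working detector falls out of the training loop whether or not you wanted one.

```python
# Toy 1-D "GAN": real data clusters around 5.0, the generator is a single
# parameter g, the discriminator is sigmoid(w*x + b). All values hypothetical.
import math
import random

random.seed(0)
REAL_MEAN = 5.0

def sigmoid(z):
    # numerically stable logistic function
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

w, b = 0.0, 0.0   # discriminator parameters
g = 0.0           # generator parameter (starts far from the real data)
lr_d, lr_g = 0.05, 0.05

def d_step(x_real, x_fake):
    # one gradient step increasing log D(real) + log(1 - D(fake))
    global w, b
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr_d * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr_d * ((1 - d_real) - d_fake)

# Phase 1: with the generator frozen, the discriminator quickly becomes a
# working fake-vs-real detector -- the "detector as a byproduct".
for _ in range(100):
    d_step(REAL_MEAN + random.gauss(0, 0.5), g + random.gauss(0, 0.5))

detects_real = sigmoid(w * REAL_MEAN + b)   # should be close to 1
detects_fake = sigmoid(w * g + b)           # should be close to 0

# Phase 2: adversarial training -- the generator climbs the discriminator's
# gradient, dragging its output toward the real data.
for _ in range(300):
    x_fake = g + random.gauss(0, 0.5)
    d_step(REAL_MEAN + random.gauss(0, 0.5), x_fake)
    d_fake = sigmoid(w * x_fake + b)
    g += lr_g * (1 - d_fake) * w   # one step increasing log D(fake)
```

After phase 1 the discriminator cleanly separates real from fake, and after phase 2 the generator has moved toward the real distribution — both halves improve together, which is the whole mechanism.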
LLMs don’t work that way. There is no adversary in the training loop, and nothing says it is even theoretically possible to build a detector for an LLM of a given sophistication. The best you can really do is have a system score whether its own LLM could have generated the text in question… but if the LLM is good enough, it will assign high likelihood to most human-written text as well, so… that test is pretty useless.
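To make the "could the model have generated this" idea concrete, here is a toy sketch where a character-bigram model stands in for the LLM and average log-likelihood is the detection score. The corpus and test strings are invented for the example; the failure mode is the point — anything the model finds plausible scores high, whether a human or the model wrote it.

```python
# Likelihood-based "detection": score text by how probable it is under the
# model itself. A character-bigram model is a stand-in for an LLM here.
import math
from collections import Counter, defaultdict

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def train_bigrams(corpus):
    # count how often each character follows each other character
    counts = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        if a in ALPHABET and b in ALPHABET:
            counts[a][b] += 1
    return counts

def avg_logprob(text, counts):
    # mean log P(next char | previous char), with add-one smoothing
    total, n = 0.0, 0
    for a, b in zip(text, text[1:]):
        if a not in ALPHABET or b not in ALPHABET:
            continue
        row = counts[a]
        total += math.log((row[b] + 1) / (sum(row.values()) + len(ALPHABET)))
        n += 1
    return total / n

corpus = ("the model is trained on ordinary english text and it learns "
          "which letter pairs are common in that text so that common "
          "sequences of letters end up with high probability")
counts = train_bigrams(corpus)

# Ordinary text the model plausibly "could have generated" scores much
# higher than text it could not have -- but so does human-written text.
likely   = avg_logprob("the model learns common text", counts)
unlikely = avg_logprob("zq xv kj qz wv jx qk", counts)
```

The score separates gibberish from fluent text just fine — but that is all it separates. A human writing ordinary prose lands on the "could have generated it" side of the threshold too, which is exactly why this style of detector tells you so little once the model is good.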