Bad joke
Because if you don't know how to tell when the AI succeeded, you can't use it.
To know when it succeeded, you must know the topic.
A calculator is predictable and verifiable. An LLM is not.
The standards are royalty-free, so I'm not sure what that has to do with anything.
Well-trained humans are still more consistent, more predictable, and easier to teach.
There's no guarantee an LLM will reliably get better at everything. It still makes some of the same mistakes today that it did when introduced, and nobody knows how to fix that yet.
Then you want them to advertise the NIST PQ standards.
... which is also not necessary for single-user password databases anyway.
Even then, the summarizer often fails or brings up the wrong thing 🤷
You'll still have trouble when it needs to compare changes across multiple versions, especially parsing changelogs and matching them against specific version numbers.
You'll never be able to capture every source of questions humans might have in an LLM's training data.
LLMs are all shit at dealing with version changes. They don't understand versioning as a concept, despite all their training data.
Trust but verify, as a concept, is irrelevant to the majority of people. It specifically refers to how intelligence organizations' staff should handle their long-term sources of information. It applies specifically when the source already has a high degree of trustworthiness, but you still need to be a bit more sure than that.
If that's not your situation, you have no use for it.
You wouldn't take tips from an off-road rally driver while in city traffic, would you?
That only makes sense with a limited supply or a very limited budget; it doesn't make sense if you want to keep the whole population healthy (especially since it has positive ROI through preventative effects).
That says more about you.
There are a lot of cases where you cannot know if it worked unless you have expertise.