Make sure to hit the Ballmer Peak
Natanael
Getting humans to do their work reliably is a whole science, and lots of fields can achieve it.
That says more about you.
There are a lot of cases where you cannot know if it worked unless you have expertise.
Because if you don't know how to tell when the AI succeeded, you can't use it.
To know when it succeeded, you must know the topic.
The calculator is predictable and verifiable. An LLM is not.
The standards are royalty-free, so I'm not sure what that has to do with anything.
Well-trained humans are still more consistent, more predictable, and easier to teach.
There's no guarantee LLMs will get reliably better at everything. They still make some of the same mistakes today that they made when introduced, and nobody knows how to fix that yet.
Then you want them to advertise the NIST PQ standards.
... Which is also not necessary for single-user password databases anyway.
Even then, the summarizer often fails or brings up the wrong thing 🤷
You'll still have trouble getting it to compare changes when it needs to look at multiple versions, especially parsing changelogs and matching them against specific version numbers.
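For contrast, here's a minimal sketch of the kind of version comparison that ordinary deterministic tooling handles trivially, and that an LLM can't be relied on to reproduce by pattern matching. It assumes Python's packaging library; the changelog entries are made up purely for illustration.

```python
# Minimal sketch: deterministic version comparison against a changelog.
# Assumes the "packaging" library (pip install packaging).
# The changelog dict below is a made-up example, not real release notes.
from packaging.version import Version

changelog = {
    "1.2.0": "Added export feature",
    "1.10.0": "Breaking change: new config format",
    "1.9.3": "Bug fixes only",
}

installed = Version("1.9.3")
target = Version("1.10.0")

# Keep every entry strictly newer than the installed version, up to the target.
relevant = [
    (v, note)
    for v, note in changelog.items()
    if installed < Version(v) <= target
]

for version, note in sorted(relevant, key=lambda item: Version(item[0])):
    print(version, "-", note)
# Prints "1.10.0 - Breaking change: new config format".
# Note that naive string comparison would wrongly rank "1.10.0" below "1.9.3".
```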
You'll never be able to capture every source of questions that humans might have in LLM training data.
Every LLM is shit at dealing with version changes. They don't understand versioning as a concept, despite all their training data.
Weird idealist types.