Natanael

joined 8 months ago
[–] Natanael 4 points 4 months ago

Weird idealist types.

[–] Natanael 2 points 4 months ago

Make sure to hit the Ballmer Peak.

[–] Natanael 2 points 4 months ago

Getting humans to do their work reliably is a whole science in itself, and plenty of fields manage it.

[–] Natanael 3 points 4 months ago (1 children)

That says more about you.

There are a lot of cases where you cannot know whether it worked unless you have expertise.

[–] Natanael 22 points 4 months ago* (last edited 4 months ago) (4 children)

Because if you don't know how to tell when the AI succeeded, you can't use it.

To know when it succeeded, you must know the topic.

A calculator is predictable and verifiable; an LLM is not.

[–] Natanael -4 points 4 months ago (3 children)

The standards are royalty-free, so I'm not sure what that has to do with anything.

[–] Natanael 2 points 4 months ago (2 children)

Well-trained humans are still more consistent, more predictable, and easier to teach.

There's no guarantee an LLM will get reliably better at everything. It still makes some of the same mistakes today that it made when it was introduced, and nobody knows how to fix that yet.

[–] Natanael 37 points 4 months ago* (last edited 4 months ago) (5 children)

Then you want them to advertise NIST PQ standards

... which also aren't necessary for single-user password databases anyway.

[–] Natanael 3 points 4 months ago* (last edited 4 months ago) (5 children)

Even then the summarizer often fails or brings up the wrong thing 🤷

You'll still have trouble if it needs to look at multiple versions, etc. Parsing changelogs and matching them against specific version numbers is especially error-prone.
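For contrast, deterministic tooling handles this kind of check trivially. A minimal sketch, assuming Python's `packaging` library (`pip install packaging`); any proper version-parsing library would do:

```python
# Deterministic version comparison -- verifiable every time,
# unlike an LLM free-text reading of a changelog.
from packaging.version import Version

# Naive string comparison gets ordering wrong: "1.10.0" sorts
# before "1.9.2" lexically, even though it's the newer release.
assert "1.10.0" < "1.9.2"

# Parsed comparison gets it right.
assert Version("1.10.0") > Version("1.9.2")

# Matching releases against a specific version is a mechanical
# check here, not a guess.
releases = [Version(v) for v in ["1.8.0", "1.9.2", "1.10.0"]]
fixed_in = Version("1.9.2")
affected = [v for v in releases if v < fixed_in]
print(affected)  # [<Version('1.8.0')>]
```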

[–] Natanael 3 points 4 months ago (7 children)

You'll never be able to capture in LLM training data every source of questions that humans might have.

[–] Natanael 11 points 4 months ago

Every LLM is shit at dealing with version changes. They don't understand versioning as a concept, despite all their training data.
