Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
One interesting research field is "discrete AI" (it probably goes by a few other names too), which basically means models built on integers instead of floating-point numbers. It also has implications for making the models more mathematically clean, but that's a long paragraph if I get into it.
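To make the "integers instead of floats" part concrete, here's a toy sketch (my own illustration, not from any particular framework) of the same linear scoring done the usual float way and then with int8-style quantized values and a pure integer accumulator:

```python
import numpy as np

# Toy illustration: the same linear "model" scored with floats vs. with
# integers only. Weights are made up; no real framework is implied.

rng = np.random.default_rng(0)
x_float = rng.uniform(-1.0, 1.0, size=8)   # input features
w_float = rng.uniform(-1.0, 1.0, size=8)   # float-style weights

# Conventional float inference
score_float = float(np.dot(x_float, w_float))

# "Discrete" inference: quantize inputs and weights to small integers and
# accumulate exactly in integer arithmetic.
scale = 127.0
x_int = np.round(x_float * scale).astype(np.int32)
w_int = np.round(w_float * scale).astype(np.int32)
score_int = int(np.dot(x_int, w_int))      # exact, reproducible integer math

# Recover an approximate float score from the integer accumulator
score_from_int = score_int / (scale * scale)

print(f"float score:         {score_float:.4f}")
print(f"integer accumulator: {score_int}")
print(f"dequantized score:   {score_from_int:.4f}")
```

The integer path is exact and reproducible bit-for-bit on cheap hardware, which is a big part of the "mathematically clean" appeal.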
The expectation is AI that isn't built on absurd computing resources and black boxes, but instead gets the same benefits from low-power, low-cost hardware, with outputs that can realistically be queried to explain why the result came out the way it did.
E.g. if AI is used to decide when to feed fish and it feeds slightly too much, you'd want to be able to ask "why" and get a useful answer instead of today's "yeah idunno, magic computer said so, i guess training data lol".
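As a toy illustration of that kind of "why" query (the rule names and thresholds below are entirely made up, not from any real feeder), a model built from explicit rules can answer it by simply listing which rules fired:

```python
# Hypothetical interpretable fish-feeder: a hand-rolled rule list, not a
# real library. Each rule is (name, predicate, weight); the decision is a
# weighted vote, and the answer to "why" is just the rules that fired.

RULES = [
    ("hours_since_last_feed > 6", lambda s: s["hours_since_last_feed"] > 6, +2),
    ("water_temp_c > 24",         lambda s: s["water_temp_c"] > 24,         +1),
    ("fish_activity_low",         lambda s: s["activity"] < 0.3,            -2),
]

def decide(sensors):
    fired = [(name, weight) for name, pred, weight in RULES if pred(sensors)]
    score = sum(weight for _, weight in fired)
    return score > 0, fired

sensors = {"hours_since_last_feed": 7, "water_temp_c": 26, "activity": 0.8}
feed, fired = decide(sensors)
print("feed now?", feed)                          # feed now? True
print("because:", [name for name, _ in fired])    # the "why" answer
```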
Kinda sounds like you're also talking about Explainable AI. Very interesting set of fields, but I'm pretty sure they're all having funding problems too.
Yeah, the funding is kinda not there. I assumed the question was ignoring that, but I may have been mistaken.
Tsetlin machines are the ones I found most interesting. Strict yes/no logic stuff in the actual decision model, while the deeper complexity is in the training.
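For anyone curious, here's a minimal inference-only sketch of what that looks like, assuming the clauses have already been learned (the clauses below are made up for illustration; the training machinery, which is where the real complexity lives, is left out entirely):

```python
# Inference-only sketch of a Tsetlin-machine-style classifier for one class.
# A clause is a conjunction (AND) of literals; a literal is a boolean feature
# x[i] or its negation. Clauses vote +1 or -1, and the class fires if the
# vote sum is positive. Training (Tsetlin automata deciding which literals
# each clause includes) is omitted -- that's where the complexity is.

# Each clause: (polarity, [(feature_index, expect_true), ...]) -- hypothetical,
# as if already produced by training.
CLAUSES = [
    (+1, [(0, True), (1, True)]),   # vote +1 if x0 AND x1
    (+1, [(2, False)]),             # vote +1 if NOT x2
    (-1, [(0, True), (2, True)]),   # vote -1 if x0 AND x2
]

def clause_fires(literals, x):
    # Strict yes/no logic: every literal in the clause must hold.
    return all(x[i] == expected for i, expected in literals)

def class_score(clauses, x):
    return sum(pol for pol, lits in clauses if clause_fires(lits, x))

x = [True, True, False]                   # boolean input features
print("score:", class_score(CLAUSES, x))  # (+1) + (+1) + 0 = 2
print("predict class?", class_score(CLAUSES, x) > 0)
```

Because the decision side is just which clauses fired, you get the same kind of "why" answer as in the fish-feeder example above.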
Sounds interesting. Glad those topics are still being investigated. It's so important to remember that even neural methods labored in the shadows for decades before they finally found the answers they needed.