... Statistical engines are older than personal computers; the first statistical package was developed in 1957. And AI professionals would have called them trained models. The interpreter is code; the weights are not. We have had terms for these things for ages.
Because over-hyped nonsense is what the stock market craves... That's how this works. That's how all of this works.
My career is AI. It is over-hyped, and what the tech bros say is nonsense. AI models are not source; they are artifacts, which can be used by other source code to run inference, but they themselves are not source, and anyone who says they are doesn't know what code is.
I guess X-Ray Vision? Yeah. It's a stretch.
As someone who has professionally done legal reverse engineering: no. No it isn't.
The security you get from vetting your code is invaluable. Closing things off makes it more likely that flaws won't be caught by good actors, and thus not fixed, and instead taken advantage of by bad actors.
And obscurity does nothing to stop bad actors if there's money to be had. It will temporarily stop script kiddies, though. Until the exploit finds its way into their suite of exploits that no one's fixed yet.
ML bubble? You mean the one in the 1960s? I prefer to call this the GenAI bubble, since other forms of AI are still everywhere and have improved a lot of things invisibly for decades. (So, yes. What you said.)
AI winter is a recurring theme in my field, mostly from people not understanding what AI is. Artificial Narrow Intelligences have beaten humans at various forms of reasoning for ages.
AGI still seems like a couple AI winters out from having a basic implementation, but we have really useful AI that can tell you if you have cancer more reliably, and years earlier, than humans can (based on current long-term cancer datasets). These systems can get better with time, and the ability to learn from them is still active research, but it's getting better. Heck, with decent patching, a good ANI can give you updates through ChatGPT for stuff like scene understanding to help blind people. There's no money in that, but it's still neat to people who actually care about AI instead of cash.
This is correct, but I prefer damnbidextrous, because I can't do a damn thing with either hand.
That too. There are so many reasons for homing the homeless first.
It's probably the cheapest and most effective first step. There's so much more that will need to follow it. There's a lot going on. But home the homeless first.
Aside from the fact that having a safe place to live helps with both mental illness and substance abuse in most individuals, a major cause of homelessness is domestic abuse and being disowned. Having a safe place to live would absolutely help the over a third of domestic abuse victims who become homeless, and would help those who cannot get away from their abusers for lack of a safe haven.
Home the homeless, then we can start working on the harder parts.
Oh. I found it. It was Florida rail. I'll update the numbers with more accurately sourced ones. It should be 260,000 km according to Wikipedia, although Statista lists 149,000 km (still the largest in the world, but significantly less). I wonder if the Wikipedia number is from before a bunch of rails were destroyed. Basically, that would be our high score, but high-speed rail should really be the goal, and of that we have basically none.
Statista also lists the European Union (so not all of Europe) at 220,000 km in 1990 (and declining since then, but who isn't). Dunno where Florida rail got their numbers, but I should know better than to trust anything coming out of that state.
I've been dating my boyfriend since before it was legal. Thank you for your input, but no. Just no.
Actually no. As someone who prefers academic work, I very heavily prefer Deepseek to OpenAI. But neither is open. They have open weights and open-source interpreters, but datasets need to be documented. If it's not reproducible, it's not open source, at least in my eyes. And without the training data, or details on how to collect it, it isn't reproducible.
You're right. I don't like big tech. I want to do research without being accused of trying to destroy the world again.
And how is Deepseek over-hyped? It's an LLM. LLMs cannot reason, but they're very good at producing statistically likely language, which can sound enough like their training data to gaslight, but not to actually develop anything. They're great tools, but the application is wrong. Multi-domain systems that use expert systems with LLM front ends to provide easy-to-interpret results are a much better way to do things, and Deepseek may help people creating expert systems (whether AI or not) make better front ends. That is in fact huge. But it's not the silver bullet tech bros and popsci mags think it is.