I for one knew it, and yet I enjoy, in a very tragic way, discovering that she was actually even worse than I thought.
Hoping not to get crushed by SUVs while drivers are busy scrolling on their phones is so woke. /s
I'm playing games at home. I'm running models at home for benchmarking (I linked to it in other similar answers).
My point is that models are just like anything else I bring into my home: I try to only buy products that are manufactured properly. Someone else in this thread asked me about child labor in electronics, and IMHO that was actually a good analogy. You mention buying a microwave here, and that's another good example.
Yes, if we do want to establish feedback in the supply chain, we must know how everything we rely on is made. It's that simple.
There are already quite a few initiatives for that, e.g. Fair Trade Certification for coffee, ISO 14001, Fair Materials in electronics, etc.
The point being that there are already mechanisms for feedback in other fields, and in ML there are already model cards with a co2_eq_emissions field, so why couldn't feedback also work in this field?
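As a rough sketch of what reading that feedback could look like, something like this would pull the declared emissions from a model card before deciding to use a model (assuming the Hugging Face Hub's public /api/models endpoint, a card that actually fills in co2_eq_emissions, and a placeholder model id):

// Query the Hub API for a model's card metadata and surface its declared CO2 cost.
// "some-org/some-model" is a placeholder, not a real model id.
const MODEL_ID = "some-org/some-model";
fetch(`https://huggingface.co/api/models/${MODEL_ID}`)
  .then((response) => response.json())
  .then((info) => {
    const emissions = info.cardData && info.cardData.co2_eq_emissions;
    if (emissions) {
      console.log("Declared training emissions:", emissions);
    } else {
      console.log("No co2_eq_emissions declared; maybe that's a reason to pick another model.");
    }
  });

From there one could refuse to download anything above some threshold, the same way one would skip a product without a Fair Trade label.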
The purpose of a system is what it does.
Right, it reminds me of the hacker mindset or, more recently, the workshop I did on "Future wheel foresight" with Karin Hannes. One can try their best to predict how an invention might be used, but in practice it goes beyond what its inventors want it to be; what truly matters is what "it" does through actual usage.
an exception
FWIW it's more than an exception, IMHO it's one of the very best games I've played in my life. It's more than a game, it's an experience. I was in City 17.
The business model IS dodging any kind of responsibility so... yeah, I think they'll pass.
I should write a Tridactyl script to use that as a warning... it goes like this:
document.querySelector("textarea").style.backgroundImage = "linear-gradient(to right, rgba(255,255,255, 0.7) 0 100%), url(https://programming.dev/pictrs/image/e7e7aeb4-ae1d-426b-bf45-02a4f3060bd6.jpeg?format=webp)"
It's hard to read, but it's a good reminder that maybe it's not worth it! :D
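To actually wire it into Tridactyl, a minimal sketch for a tridactylrc could be a simple bind (the ,w key and hitting every textarea on the page are just my assumptions, adjust to taste):

" press ,w before commenting to paint the warning image behind the comment box
bind ,w js document.querySelectorAll("textarea").forEach(t => t.style.backgroundImage = "linear-gradient(to right, rgba(255,255,255,0.7) 0 100%), url(https://programming.dev/pictrs/image/e7e7aeb4-ae1d-426b-bf45-02a4f3060bd6.jpeg?format=webp)")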
I agree and in fact I feel the same with AI.
Fundamentally, cryptocurrency is fascinating. It is mathematically sound, just like cryptography in general (computational complexity, one-way functions, etc.), and it had the theoretical potential to change existing political and economic structures. Unfortunately (arguably), the very foundation it is based on, namely mining for greed, brought in a different community who inexorably modified not the technology itself but its usage. What was initially a potential infrastructure for the exchange of value became a way to speculate (buying and selling banned goods and services, ransomware, scam payments, etc.).
AI is also fascinating as a research field. It asks deep questions with complex answers. Centuries of research on it have led not just to interesting philosophical questions, like what it is to think, to be human, but also to mathematics used in all walks of life, like the logistics that got your parcel delivered this morning. Yet... gradually the field, or at least its commercialization, got captured by venture capitalists, entrepreneurs and regulators whose main interest was greed. This in turn changed what was until then open into something closed, and something small into something requiring gigantic infrastructure, capturing resources hitherto used for farming, polluting due to the lack of proper permits for temporary electricity sources, etc. The pinnacle right now being regulation to ban regulation of AI in the US.
So... yes, technology itself can be fascinating, useful, even important, and yet how we collectively, as a society, decide to use it remains what matters: the actual impact of an idea rather than its idealization.
Also makes me think of the Jevons paradox (or the rebound effect), but for attention or, even more broadly, cognition.
Moore’s law is kinda still in effect, depending on your definition of Moore’s law.
Sounds like the goalposts are moving faster than the number of transistors in an integrated circuit.
LOL... you did make me chuckle.
Haven't we been 18 months away from developers getting replaced by AI... for a few years now?
Of course "AI" even loosely defined progressed a lot and it is genuinely impressive (even though the actual use case for most hype, i.e. LLM and GenAI, is mostly lazier search, more efficient spam&scam personalized text or impersonation) but exponential is not sustainable. It's a marketing term to keep on fueling the hype.
That's despite so many resources, namely R&D and data centers, being poured in... and yet there is no "GPT5" or anything that most people use on a daily basis for anything "productive", except unreliable summarization or STT (both of which have had plenty of tools for decades).
So... yeah, it's a slow takeoff, as expected. shrug
2, 3 and 4 are also about politics.