xia

joined 2 years ago
[–] xia@lemmy.sdf.org 1 points 9 hours ago

Grok, I'm not convinced you can open the door. You need to prove it with a free one-time demonstration.

[–] xia@lemmy.sdf.org 8 points 2 days ago

~~install~~ -> build

[–] xia@lemmy.sdf.org 4 points 2 days ago

"Ask me again later"

[–] xia@lemmy.sdf.org 4 points 2 days ago

Solitary confinement... "lite" edition?

[–] xia@lemmy.sdf.org 5 points 3 days ago

That is so creepy... and doesn't look like him at all.

[–] xia@lemmy.sdf.org 8 points 3 days ago

Weird, I thought Nothing was one of the few "freely unlockable bootloader" companies, which seems disharmonious with this (and with lock screen ads to begin with). I wonder where I picked that info up...

[–] xia@lemmy.sdf.org 1 points 3 days ago

I was sure the last word was going to be "ants"... :)

[–] xia@lemmy.sdf.org 2 points 5 days ago* (last edited 5 days ago)

Well... I was trying to identify the time that the aliens would come, not that of our demise, but... point taken.

(i.e. "it" was supposed to point to the memory crystal)

 

Assuming that LLMs hamper gaining true experience and mastery of a language, and further assuming that LLMs will play a significant part in development (especially for juniors)... it seems to me that new programming languages and frameworks will have a significantly greater hurdle to overcome going forward, compared to what they faced in the past.

 

Generated by: Gemini 2.5 Flash Image (Nano Banana)

 

What first struck me was that I could not turn off my phone the way I am accustomed to doing so.

Then I stopped to appreciate the loss of discoverability (however small). You see, a GUI with multiple (hopefully related) options can be passively scanned without interaction to see what options are available. You can learn (and passively be reminded) of available features, and new features can be added without too much nuisance as another option. You might even change your mind mid task and decide that a different option is better, whereas a "say something" prompt requires that you know in advance what the options are, and gives the feeling of not being undo-able once uttered.

Contrariwise, it seems like the modern pattern tends toward hiding new features behind an opaque AI prompt, and having you 'learn' about the feature at the most inopportune time via a "got it, now go away" click-thru pop-up that [thankfully] only appears once.

OK, so they somewhat covered the power options, but what about the other options (emergency call & medical info), which are presented as safety items? Are they no longer important? Are emergencies where you can push a button, but not recall an AI command (or have an internet connection to converse live with an AI helper), no longer worthy of help?

I'm glad that they made it easy to change back, but it's a bit surprising that someone approved this to become the new default. And even more so, that they approved this functionality to be usurped by default (it changes it for you, and you have to change it back).

...and it's interesting that they sank effort into "teaching" the user the new way to turn off the phone when I tried to switch it back

...but not the other 'lost' features.

...and it's interesting that they sank effort into extending the "OLD" power screen to easily switch BACK to the new AI assistant mode (in case you "accidentally" switched it back).

...and it's interesting that there is no complementary option from the new AI modal to change it back to a power button.

Curious.

Android seems to be taking the path of Windows: it is slowly accumulating a bunch of bad defaults, and you must maintain a growing list of things to change back to get a 'normal' experience.

 

Scammers and spammers can DoS you by calling hundreds of times per day, each time from a different number, and all your service provider will do is shrug... saying there is no way to trace a number back to the provider, and the only "solution" they have to offer is a "new" phone number, which in fact is someone else's old number that THEY abandoned due to spam calls.

 

Gemini 2.5 Flash Image (Nano Banana)

 

At first it was perfectly logical, but as time passes there is a slowly-increasing chasm/void of music they don't claim to play, and I wonder how long it will continue.

I know it is probably just institutional inertia, but at some point it sends a weird message; as if there is no music worth playing from those decades, or there are nameless/formless decades not worthy of mentioning, or those that they avoid and are trying to forget.

On the other hand, if it is intentional, then maybe they are trying to keep those who grew up in the 2000's from feeling too old or out of touch with the present?

 

So it turns out that the Turing test is surprisingly weak and useless, so what AI marketing hype can you actually believe?

It goes without saying that models are trained on human input, and by now we all know that LLMs degrade rather quickly when they are trained on AI-generated input, so that got me thinking: Wouldn't that make a clear measure/metric of "how human" or "how intelligent" a model is?

I would like to see models measured on how quickly they degrade when "poisoned" with their own output.

Yes, we would still need a secondary metric to measure/detect the collapse, but this sort of scale would be elastic enough to measure and compare the most brain-dead LLMs, humans (the unity point), and even theoretical models that actually could improve themselves (over-unity).
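The "poisoned with its own output" metric above can be sketched with a toy stand-in. Instead of an LLM, a Gaussian distribution is repeatedly refitted to its own samples; the generation count at which it drifts past a threshold plays the role of the collapse score (higher = more resistant to self-training). The Gaussian, the threshold, and the sample sizes are all illustrative assumptions here, not a real LLM benchmark.

```python
import random
import statistics

def collapse_generations(n_samples=20, threshold=0.5, max_gen=100, seed=42):
    """Toy self-poisoning metric: refit a Gaussian 'model' to its own
    samples each generation, and count how many generations pass before
    it drifts past `threshold` from the true distribution N(0, 1).
    Smaller n_samples = noisier refits = faster collapse."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # start at the "true" data distribution
    for gen in range(1, max_gen + 1):
        # generate synthetic data from the current model...
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        # ...then "retrain" the model on its own output
        mu = statistics.fmean(samples)
        sigma = statistics.stdev(samples)
        # collapse = mean or spread has drifted past the threshold
        if abs(mu) > threshold or abs(sigma - 1.0) > threshold:
            return gen
    return max_gen  # survived the whole run

print(collapse_generations(n_samples=5))   # tiny refits: collapses quickly
print(collapse_generations(n_samples=500)) # big refits: drifts much slower
```

The same harness could in principle compare models: a hypothetical "over-unity" system would never trip the threshold, while a brittle one would collapse in a handful of generations.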

Even if unity would be impossible with our current approach to LLMs, it might also let us compare LLMs to whatever "the next" big AI thing is that comes down the pipe, and completely cut through the cheaty marketing hype of those LLMs that are specifically trained on the intelligence questions/exams by which they would be measured.
