lambdabeta

joined 2 years ago
[–] lambdabeta@lemmy.ca 21 points 6 days ago

That's exactly what I so often find myself saying when people show off some neat thing that a code bot "wrote" for them in x minutes after only y minutes of "prompt engineering". I'll say: yeah, I could also do that in y minutes of (bash scripting/vim macroing/system architecting/whatever), but the difference is that afterwards I have a reusable solution that I understand, that's automated and robust, and that didn't consume a ton of resources. And as a bonus I got marginally better as a developer.

It's funny that if you stick them in an RPG and give them an ability to "kill any level 1-x enemy instantly, but don't gain any XP for it", they'd all see it as the trap it is, but they can't see how that's what AI so often is.

[–] lambdabeta@lemmy.ca 3 points 1 week ago (1 children)

My guess for ferries would be that most ferry trips are very short. That means less total distance per trip, so the same per-trip risk works out to a much higher risk per distance travelled.
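Rough illustration with completely made-up numbers, just to show how equal per-trip risk skews the per-distance stat:

```python
# Made-up numbers, purely to show the per-trip vs per-km effect.
risk_per_trip = 1e-6              # assume the same incident risk for every trip
ferry_km, long_trip_km = 5, 1000  # a short crossing vs a long journey

print(risk_per_trip / ferry_km)      # 2e-07 risk per km
print(risk_per_trip / long_trip_km)  # 1e-09 risk per km
```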

[–] lambdabeta@lemmy.ca 24 points 1 month ago

I think this is the real answer. HDR is a thing now, and the baseline expectation for dynamic range is higher than both what older displays can produce and what older eyes can perceive.

[–] lambdabeta@lemmy.ca 5 points 1 month ago (1 children)

You should check out Language Jones' suggestion on how to use Anki most effectively. It did wonders for me. (YT link: https://youtu.be/QVpu66njzdE)

[–] lambdabeta@lemmy.ca 10 points 2 months ago (3 children)

Now I want all 26 done this way... D as in Django would probably be the best though.

[–] lambdabeta@lemmy.ca 42 points 3 months ago* (last edited 3 months ago) (13 children)

My favourite part has to be the fact that a box of poptarts contains 8 poptarts...

[–] lambdabeta@lemmy.ca 15 points 3 months ago (3 children)

I feel like everyone is underestimating 5. It's any toaster. Make a killer robot that happens to have the ability to make toast and you've got a remote control death bot.

I'd make a whole set of devices that happen to also be toasters. Why not add some heating elements and springs to an elevator, a car, a plane?

[–] lambdabeta@lemmy.ca 2 points 3 months ago (3 children)

I went to sign it and the sign button takes me to an empty page. Anyone else seeing that? (seems to be every petition that's open too :/)

[–] lambdabeta@lemmy.ca 5 points 3 months ago

Full on 5. Maybe a 4 if I try really hard. But I still have a "big imagination", I just can't see it in my mind.

[–] lambdabeta@lemmy.ca 2 points 3 months ago

I'm not sure if this is the best place to put this, but I wanted to share my thoughts.

For background, I got an undergraduate degree in software engineering. I took a few electives on AI, but it was more primitive back then (the state of the art was GPT-2 and early Stable Diffusion models; "Attention Is All You Need" was still the hot new thing). Since then I've tried out the new stuff, made some generic profile pictures with "AI", and messed around with a few prompts on GPT-4, but I haven't really used any "AI" for anything productive. I haven't found a single valid use case at my work for anything past MLPs (which admittedly work great).

My first reaction was: "wow, that's actually pretty impressive -- at least relative to what AI/ML could do a decade ago". I then noticed the size of the models, leading to my first question: how poor are the returns from increasing model size?! I know that MLPs hit serious diminishing returns well before the billions of parameters, and I understand that transformers eat up a bunch of additional parameters. Nonetheless, I almost expected more from statements like "480 billion parameters". IIRC that's significantly more parameters than there are neurons in the human brain (though far fewer nodes, since parameters count connections and artificial networks are densely connected).
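As a rough illustration of where counts like that come from, here's a back-of-envelope parameter estimate for a generic transformer (the dimensions are made up, not taken from any specific model):

```python
# Back-of-envelope transformer parameter count (illustrative dimensions only).
def transformer_params(d_model, n_layers, vocab_size, d_ff_mult=4):
    attn = 4 * d_model * d_model                # Q, K, V and output projections
    mlp = 2 * d_model * (d_ff_mult * d_model)   # the two feed-forward matrices
    per_layer = attn + mlp                      # ignoring norms, biases, etc.
    embeddings = vocab_size * d_model
    return n_layers * per_layer + embeddings

# e.g. d_model=8192, 80 layers, 128k vocab -> roughly 65 billion parameters
print(f"{transformer_params(8192, 80, 128_000):.2e}")
```

The quadratic d_model term is a big part of why the counts balloon so quickly.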

The next thing I realized was that there are a lot more models out there than I thought. I had of course heard of the GPTs and played around with LLaMA, and the news has made me aware of Gemini, Claude, and Grok, but I didn't realize just how many models there were. That leads to my next question: given the cost (in both dollars and environmental impact) of training a model at this scale, why are so many companies training their own from scratch instead of somehow sharing training effort on a common model? Surely the training sets can't be that different at that size. A handful of different models makes sense, but this seems like a massive waste of resources to me.

Finally, I realized that I really am a bit out of date on AI usage (though not overly out of date on AI research). I haven't tried interrogating the weights of any modern model, and I didn't even know you could reasonably ask one to dump its output probabilities like that (rough sketch below), though of course it makes sense. So my final question: are there any FOSS models using state-of-the-art techniques (e.g. mixture-of-experts routing and LoRAs)?
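For anyone else who hadn't seen it, here's a minimal sketch of dumping next-token probabilities from a local FOSS model using the Hugging Face transformers library (gpt2 is just a small stand-in; any local causal LM would work):

```python
# Minimal sketch: print the top next-token probabilities from a local model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # logits for the next token only
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode(idx.item())!r}: {p.item():.3f}")
```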

[–] lambdabeta@lemmy.ca 1 points 4 months ago

I'm curious: what makes bagged vacuums better? What's the main difference?

[–] lambdabeta@lemmy.ca 1 points 4 months ago

I was thinking the same thing. Although the graph does have some taper to it and maybe "most" schools end around the 6th?

 

Maybe here we can draw the leaf without corrupting it. For reference, I scaled the Wikipedia picture to 100 pixels wide and anchored the 8 corners of the side borders. Hopefully we can make it look good!

 

I've sunk many hours into the "which species is more closely related to X, A or B?" game.

 

Hopefully this post works; it's my first time posting on Jerboa.

 

She's our brown miniature poodle, about a year and a half old.
