circle

joined 2 years ago
[–] circle@lemmy.world 5 points 2 years ago

I too used to believe there was good demand, but sadly it's a very small minority.

[–] circle@lemmy.world 10 points 2 years ago

Oh yes, and to top it off I have small hands - I can barely reach the opposite edge without using two hands. Sigh.

[–] circle@lemmy.world 1 points 2 years ago

Thanks, I'll check that out.

[–] circle@lemmy.world 1 points 2 years ago (3 children)

Agreed. YouTube ReVanced works well too. But are there alternatives for iOS?

[–] circle@lemmy.world 4 points 2 years ago (1 children)

Love the clock!

 

intuition: two texts are similar if cat-ing one onto the other barely increases the gzip size

no training, no tuning, no params — this is the entire algorithm

https://aclanthology.org/2023.findings-acl.426/
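A minimal sketch of that intuition using normalized compression distance with nearest-neighbor lookup (the helper names here are mine, not the paper's code):

```python
import gzip

def clen(s: str) -> int:
    # compressed size of a string in bytes
    return len(gzip.compress(s.encode("utf-8")))

def ncd(a: str, b: str) -> float:
    # normalized compression distance: near 0 when concatenating
    # one text onto the other barely increases the gzip size
    ca, cb, cab = clen(a), clen(b), clen(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)

def classify(query: str, labeled: list[tuple[str, str]]) -> str:
    # 1-nearest-neighbor: return the label of the training text
    # with the smallest compression distance to the query
    return min(labeled, key=lambda pair: ncd(query, pair[0]))[1]
```

No model, no training loop: the compressor itself acts as the similarity measure.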

[–] circle@lemmy.world 2 points 2 years ago

sure, thank you!

[–] circle@lemmy.world 2 points 2 years ago (2 children)

Thanks. Does this run compute benchmarks too? It looks more focused on model accuracy (if I'm not wrong).

 

As the title suggests, I have a few LLM models and wanted to see how they perform on different hardware (CPU-only instances; GPUs: T4, V100, A100). Ideally it's to get an idea of the performance and the overall price (VM hourly rate / efficiency).

Currently I've written a script to calculate ms per token, RAM usage (via memory profiler), and total time taken.
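A rough sketch of that kind of measurement, using only the standard library. The `generate` callable and its return type (a list of generated tokens) are assumptions standing in for the actual inference call; note that `tracemalloc` only sees Python-level allocations, not native or GPU memory:

```python
import time
import tracemalloc

def benchmark(generate, prompt: str, runs: int = 3) -> list[dict]:
    """Time a text-generation callable over several runs.

    `generate(prompt)` is a hypothetical stand-in that is assumed
    to return the list of generated tokens.
    """
    results = []
    for _ in range(runs):
        tracemalloc.start()
        t0 = time.perf_counter()
        tokens = generate(prompt)
        elapsed = time.perf_counter() - t0
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        results.append({
            "ms_per_token": 1000 * elapsed / max(len(tokens), 1),
            "peak_python_mb": peak / 1e6,
            "total_s": elapsed,
        })
    return results
```

Dividing the per-run cost by the VM's hourly rate then gives a rough price-per-token comparison across instance types.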

Wanted to check if there are better methods or tools. Thanks!

[–] circle@lemmy.world 3 points 2 years ago (2 children)

I already miss my muscle-memory operations from Sync :/

[–] circle@lemmy.world 2 points 2 years ago

Can't wait!