DarkKnyt
[–] DarkKnyt@alien.top 1 points 2 years ago

If you're going to use this for a hypervisor, then definitely the six cores. If both had at least six cores, I would say go with the higher single-core performance, especially if you do stuff like compiling.

[–] DarkKnyt@alien.top 1 points 2 years ago

I use Easy Diffusion on Windows, and it works acceptably. I tried a 7B llama-based LLM on Windows as well, and it was terrible compared to cloud-hosted GPT-3/4. I have a 1660 to work with, so I'm both VRAM limited and GPU-speed limited. I have Jupyter with PyTorch, but I haven't had a need to use it.
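To see why a 1660-class card struggles with a 7B model, here's a back-of-the-envelope VRAM estimate (my own sketch, not from the comment; weights only, ignoring KV cache and activations):

```python
# Rough VRAM needed just to hold a model's weights at different precisions.
# Ballpark numbers only: real usage adds KV cache, activations, and overhead.
def model_vram_gib(n_params_billion: float, bytes_per_param: float) -> float:
    """Convert parameter count and bytes-per-parameter to GiB."""
    return n_params_billion * 1e9 * bytes_per_param / (1024 ** 3)

for label, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"7B @ {label}: ~{model_vram_gib(7, bpp):.1f} GiB")
```

At fp16 a 7B model needs roughly 13 GiB for weights alone, so on a 6 GB card you're stuck with aggressive quantization or CPU offload, which is where the speed goes.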

At work, I've trained models with YOLO and have recently started using GPT-4 for writing and as a starting point for coding.

The best way to start is to program some useful utilities in CUDA; you don't need a honking big dGPU for that, but you do need tensor cores.

[–] DarkKnyt@alien.top 1 points 2 years ago

OpenVPN is based on older encryption that is compute heavy. Where OpenVPN gives you 10 Mb/s, you'll get 30x the performance in WireGuard, at least in my experience on the same Opal. That's when it's done in software only, without a dedicated encryption chip.

There is a bug in some GL.iNet firmware where AllowedIPs doesn't get parsed correctly. I believe it is fixed in the newest version (I'm doing split-tunnel VPN now).
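If you want to sanity-check an AllowedIPs value yourself before blaming the firmware, the entries are just comma-separated CIDR blocks, so Python's stdlib `ipaddress` module can validate them. A minimal sketch (the example CIDRs are mine, not from the comment):

```python
import ipaddress

def parse_allowed_ips(allowed_ips: str):
    """Split a WireGuard AllowedIPs value and validate each CIDR entry.

    Raises ValueError on any malformed entry, which is exactly the kind
    of thing a buggy firmware parser might silently mishandle.
    """
    return [ipaddress.ip_network(entry.strip(), strict=False)
            for entry in allowed_ips.split(",")]

# Full tunnel: route everything through the peer.
full = parse_allowed_ips("0.0.0.0/0, ::/0")

# Split tunnel: only route these subnets through the peer.
split = parse_allowed_ips("10.0.0.0/24, 192.168.50.0/24")

print(ipaddress.ip_address("10.0.0.5") in split[0])
```

Membership tests like the last line tell you whether a given destination should be going over the tunnel under that config.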

[–] DarkKnyt@alien.top 1 points 2 years ago

What's the Cloudflare server for? InterPlanetary File System or something else?

[–] DarkKnyt@alien.top 0 points 2 years ago (2 children)

Sounds like you are more about the experimentation, so not having huge storage makes sense.

That Cisco better be pulling its weight for all its electron nom noms.
