I use Easy Diffusion on Windows and it works acceptably. I tried a 7B LLM based on LLaMA on Windows as well, and it was terrible compared to cloud-hosted GPT-3/4. I have a 1660 to work with, so I'm VRAM-limited and short on GPU speed. I have Jupyter with PyTorch but haven't had a need to use it yet.
At work, I've trained models with YOLO and have recently started using GPT-4 for writing and coding starting points.
The best way to start is to program some useful utilities in CUDA. You don't need a honking big dGPU for that, but you do want tensor cores if you plan to go beyond the basics into mixed-precision matrix work.
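For what it's worth, the classic first CUDA utility is an element-wise vector add. Here's a minimal sketch in plain CUDA C++ (compiled with `nvcc`, no tensor cores required for this one); the unified-memory allocation is just a choice to keep the example short:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread handles one element; the grid covers the whole array.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory avoids explicit host/device copies -- fine for learning.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;  // round up to cover all n
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

A 1660-class card runs this fine; tensor cores only start to matter once you move on to WMMA or cuBLAS-style matrix kernels.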
If you're going to use this as a hypervisor host, then definitely go with the six cores. If you already had at least six cores, I'd say go with the higher single-core performance instead, especially if you do stuff like compiling.