LocalLLaMA
Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.
Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
Rules:
No harassment or personal character attacks of community members, i.e., no name-calling, no generalizing entire groups of people that make up our community, no baseless personal insults.
No comparing artificial intelligence/machine learning models to cryptocurrency, i.e., no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.
No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e., no statements such as "LLMs are basically just simple text prediction like what your phone keyboard's autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."
No implying that models are devoid of purpose or potential for enriching people's lives.
That's enormous, very much not local, heh.
Here's the actual article translation (which seems right compared to other translations):
Translation
DeepSeek R2: Unit Cost Drops 97.3%, Imminent Release + Core Specifications
Author: Chasing Trends Observer
Veteran Crypto Investor Watching from Afar
2025-04-25 12:06:16 Sichuan
Three Core Technological Breakthroughs of DeepSeek R2:
Adopts proprietary Hybrid MoE 3.0 architecture, achieving 1.2 trillion parameters with dynamic activation (actual computational consumption: 78 billion parameters).
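For anyone unfamiliar with how an MoE model can have 1.2T parameters but only "spend" ~78B per token: the router picks a few experts per token, so only that fraction of the weights is touched. Here's a toy sketch of top-k MoE routing (purely illustrative; the expert count, sizes, and the "Hybrid MoE 3.0" details are my assumptions, nothing about the actual architecture is public):

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts = 16   # hypothetical expert count
top_k = 2        # experts activated per token
d_model = 8      # toy hidden size

# Each "expert" is a toy feed-forward weight matrix; the router scores them.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    """Route token x to its top-k experts and mix their outputs."""
    logits = x @ gate_w
    top = np.argsort(logits)[-top_k:]                 # indices of the chosen experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over top-k
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.standard_normal(d_model)
y = moe_forward(x)

# Only top_k / n_experts of the expert parameters ran for this token.
active_fraction = top_k / n_experts  # 0.125 here; ~78B/1.2T ≈ 0.065 in the rumor
```

The total parameter count grows with `n_experts`, but per-token compute only grows with `top_k`, which is how the headline number and the "actual consumption" number can differ so much.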
Validated by Alibaba Cloud tests:
(Data source: IDC Computing Power Economic Model)
Data Engineering
Constructed 5.2PB high-quality corpus covering finance, law, patents, and vertical domains.
Multi-stage semantic distillation boosts instruction compliance accuracy to 89.7%
(Benchmark: C-Eval 2.0 test set)
Hardware Optimization
Proprietary distributed training framework achieves:
(Validated by Huawei Labs)
Application Layer Advancements - Three Multimodal Breakthroughs:
ViT-Transformer hybrid architecture achieves:
Industrial Inspection
Adaptive feature fusion algorithm reduces false detection rate to 7.2E-6 in photovoltaic EL defect detection
(Field data from LONGi Green Energy production lines)
Medical Diagnostics
Knowledge graph-enhanced chest X-ray multi-disease recognition:
(Blind test results from Peking Union Medical College Hospital)
Key Highlight:
8-bit quantization compression achieves:
(Enables edge device deployment - Technical White Paper Chapter 4.2)
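Since the edge-deployment claim hinges on that quantization step, here's a minimal sketch of plain symmetric int8 quantization, which is the textbook version of the idea (the rumored white paper's actual scheme is unknown, and this is not DeepSeek's method):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)   # toy fp32 weight tensor

# Symmetric quantization: map the largest magnitude onto the int8 range.
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# Dequantize to see how close the int8 copy stays to the original.
w_hat = q.astype(np.float32) * scale

compression = w.nbytes / q.nbytes       # int8 is 4x smaller than fp32
max_err = np.abs(w - w_hat).max()       # bounded by the rounding step
```

The 4x size reduction over fp32 (or 2x over fp16) is what makes weights fit in edge-device memory; "sub-8-bit" schemes push the same trade further at the cost of more reconstruction error.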
Others translate it as 'sub-8-bit' quantization, which is interesting too.
I'm sad to see how many mentions of "proprietary" there are in there. I didn't think that was DeepSeek's way of doing things.
The rumor is probably total BS, heh.
That being said, it's not surprising if DeepSeek goes more commercial, since it's basically China's ChatGPT now.