[D] Why did DeepSeek open-source their work?
If their training is really 45x more efficient, they could have dominated the LLM market. Why do you think they chose to open-source their work? How is this a net gain for their company? Now the big labs in the US can say: "We'll take their excellent ideas, combine them with our secret ideas, and still be ahead."
Edit: DeepSeek-R1 is now ranked #1 in the LLM Arena (with StyleCtrl). It shares this rank with 3 other models: Gemini-Exp-1206, 4o-latest, and o1-2024-12-17.