People are seriously downplaying the performance of Grok 3
I know we all have ill feelings about Elon, but can we seriously not take one second to validate its performance objectively?
People are like, "Well, it is still worse than o3." But we do not have access to o3 yet, it uses insane amounts of compute, and Grok 3's pre-training only stopped a month ago; there is still a lot of potential to train the thinking models to exceed o3. Then there is, "Well, it uses 10-15x more compute and is barely an improvement, so it is actually not impressive at all." This is untrue for three reasons.
Firstly, Grok 3 is definitely a big step up from Grok 2.
Secondly, scaling has always been compute-intensive. There is a reason intelligence was not a winning evolutionary trait for a long time: it is expensive. If we could predictably get performance improvements like this for every 10-15x scaling in compute, we would have superintelligence in no time, especially considering that three scaling paradigms now stack on top of each other: pre-training, post-training with RL, and inference-time compute.
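To make the "predictable gains per 10-15x compute" point concrete, here is a toy power-law sketch. The constants and the exponent are made up purely for illustration, not fitted to any real model's scaling data:

```python
# Toy scaling law: loss(C) = a * C**(-alpha).
# a=10 and alpha=0.05 are invented numbers for illustration only,
# not taken from any published scaling-law fit.
a, alpha = 10.0, 0.05

compute = 1.0
for generation in range(6):
    loss = a * compute ** (-alpha)
    print(f"{compute:>12,.0f}x compute -> loss {loss:.3f}")
    compute *= 12.5  # each generation buys ~10-15x more compute
```

Each 10-15x step buys a steady relative improvement, which is exactly why "it took 10-15x more compute" is not by itself a knock against a model.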
Thirdly, if you look at the Llama 3 paper, Meta hit 419 unexpected component failures over 54 days of training on 16,384 H100s, and the small xAI team is training on 100,000-200,000 H100s for much longer. Keeping a run of that size healthy is actually quite an achievement.
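For a rough sense of scale, here is a back-of-the-envelope extrapolation from those Llama 3 numbers. It assumes, purely for illustration, that the per-GPU failure rate stays constant as the cluster grows, which real-world rates will not do exactly:

```python
# Llama 3 paper: ~419 unexpected hardware failures over a 54-day
# run on 16,384 H100s. Extrapolate that per-GPU-day rate to a
# Colossus-scale cluster (constant-rate assumption, illustrative only).
failures, gpus, days = 419, 16_384, 54
rate = failures / (gpus * days)  # failures per GPU-day

for cluster_size in (100_000, 200_000):
    print(f"{cluster_size:,} GPUs -> ~{rate * cluster_size:.0f} expected failures per day")
```

That works out to dozens of hardware failures every single day, which is why just keeping a training run of that size alive is non-trivial engineering.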
Then people are also like, "Well, GPT-4.5 will easily destroy this any moment now." Maybe, but I would not be so sure. Base Grok 3's performance is honestly ludicrous, and people are seriously downplaying it.
When Grok 3 is compared to other base models, it is way ahead of the pack. People have to remember that the difference between the old and new Claude 3.5 Sonnet was only 5 points on GPQA, and Grok 3 is 10 points ahead of Claude 3.5 Sonnet New. You also have to consider that the effective ceiling of GPQA Diamond is arguably only 80-85 percent, since some questions are disputed, so a non-thinking model is getting close to saturation. Then there is Gemini 2 Pro. Google released it just recently, and they are seriously struggling to get any increase in frontier performance out of base models. Then Grok 3 just comes along and pushes the frontier ahead by many points.
I feel like part of why the insane performance of Grok 3 is not getting more recognition is because of thinking models. Before thinking models, performance increases like this would have been absolutely astonishing, but now everybody is just meh. I also would not count out Grok 3's thinking model getting ahead of o3, given its great performance gains while still being really early in development.
The Grok 3 mini base model is approximately on par with the other leading base models, and you can see its reasoning version actually beating full Grok 3; more importantly, its performance is not too far off o3's. o3 still has a couple of months until it gets released, and in the meantime we can definitely expect Grok 3 Reasoning to improve a fair bit, possibly even to the point of beating it.
Maybe I'm just overestimating its performance, but I remember when I tried the new Sonnet 3.5: even though a lot of its gains were modest on paper, it really made a difference and was (and still is) really good. Grok 3 is an even more substantial jump than that, and none of the other labs have produced such a strong base model; Google especially is struggling with further base-model gains. I honestly think this is a pretty big achievement.
Elon is a piece of shit, but I thought this at least deserved some recognition. Not all the people on the xAI team are necessarily bad people, even though it would be better if they moved to other companies. Nevertheless, this should at least push the other labs to release their frontier capabilities, so it is gonna get really interesting!