Google DeepMind makes AI history with gold medal win at world’s toughest math competition

Google DeepMind announced on Monday that its advanced Gemini artificial intelligence model has achieved gold medal-level performance at the International Mathematical Olympiad. It successfully solved five out of six highly challenging problems, marking a historic moment as the first AI system to receive official gold-level recognition from the competition organizers.

This accomplishment advances AI reasoning and positions Google as a leader in the competitive race among tech giants to develop next-generation artificial intelligence. More significantly, it demonstrates that AI can now tackle intricate mathematical problems in plain natural language, without first translating them into specialized programming languages.

“The official results are in — Gemini reached gold-medal status at the International Mathematical Olympiad!” Demis Hassabis, CEO of Google DeepMind, shared on social media platform X on Monday morning. “An advanced version managed to solve 5 out of 6 problems. Remarkable progress.”

The International Mathematical Olympiad, held annually since 1959, is esteemed as the world’s foremost mathematics competition for pre-university students. Each participating nation sends six top young mathematicians to tackle six extremely difficult problems covering algebra, combinatorics, geometry, and number theory. Typically, only about 8% of human contestants earn gold medals.


How Google DeepMind’s Gemini Deep Think cracked math’s toughest problems

Google’s latest success far surpasses its 2024 achievement, when the company’s combined AlphaProof and AlphaGeometry systems achieved silver medal status by solving four out of six problems. That earlier system required human experts to first translate natural language problems into domain-specific programming languages and then interpret the AI’s mathematical output.

This year’s breakthrough emerged with Gemini Deep Think, an enhanced reasoning system that employs what researchers call “parallel thinking.” Unlike traditional AI models that follow a single chain of reasoning, Deep Think explores multiple potential solutions simultaneously before arriving at a final answer.
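Google has not published implementation details for Deep Think's parallel reasoning. In public research, the closest analogues are techniques such as best-of-N sampling and self-consistency: draw several independent reasoning paths, then select among them. The Python sketch below is purely illustrative under that assumption; generate_candidate and score_candidate are hypothetical stand-ins for a model's sampling and verification steps, not DeepMind APIs.

```python
# Illustrative sketch only: "parallel thinking" approximated as best-of-N
# candidate generation plus a selection step. Nothing here reflects
# DeepMind's actual implementation.
from concurrent.futures import ThreadPoolExecutor
import random


def generate_candidate(problem: str, seed: int) -> str:
    """Stand-in for sampling one independent chain of reasoning from a model."""
    rng = random.Random(seed)
    return f"candidate proof #{seed} for {problem!r} (quality={rng.random():.2f})"


def score_candidate(candidate: str) -> float:
    """Stand-in for a verifier or self-evaluation step that rates a candidate."""
    return float(candidate.split("quality=")[1].rstrip(")"))


def parallel_think(problem: str, n_paths: int = 8) -> str:
    """Explore several reasoning paths concurrently, then keep the
    best-scoring one, rather than committing to a single chain."""
    with ThreadPoolExecutor(max_workers=n_paths) as pool:
        candidates = list(
            pool.map(lambda s: generate_candidate(problem, s), range(n_paths))
        )
    return max(candidates, key=score_candidate)


if __name__ == "__main__":
    print(parallel_think("a toy stand-in problem"))
```

The hard engineering lives in the selection step: published approaches range from simple majority voting over final answers to training a separate verifier model, and it is not publicly known which, if any, Deep Think uses.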

“Our model operated end-to-end in natural language, producing rigorous mathematical proofs directly from the official problem descriptions,” Hassabis explained in a follow-up post on X, emphasizing that the system completed its work within the competition’s standard 4.5-hour time limit.

The model scored 35 out of a possible 42 points: each of the six problems is worth seven points, so five complete solutions were enough to reach the gold medal threshold. According to IMO President Prof. Dr. Gregor Dolinar, the solutions were “astonishing in many respects” and deemed “clear, precise and most of them easy to follow” by competition graders.

OpenAI faces backlash for bypassing official competition rules

The announcement comes amid growing tension in the AI industry over competitive practices and transparency. Google DeepMind’s measured approach to releasing its results has drawn praise from the AI community, especially when compared to rival OpenAI’s handling of similar achievements.

“We didn’t announce on Friday because we respected the IMO Board’s original request that all AI labs share their results only after the official results had been verified by independent experts & the students had rightly received the acclamation they deserved,” Hassabis wrote, appearing to reference OpenAI’s earlier announcement of its own olympiad performance.
