Google DeepMind has used chatbot models to come up with solutions to major problems in mathematics and computer science.
The system, called AlphaEvolve, combines the creativity of a large language model (LLM) with algorithms that can improve solutions.
It was described in a white paper released by the company on 14 May.
AlphaEvolve has helped to improve the design of the company’s next generation of tensor processing units — computing chips developed specifically for AI.
It has also found a way to use Google’s worldwide computing capacity more efficiently, saving 0.7% of the firm’s total computing resources.
AlphaEvolve is general-purpose, tapping the abilities of LLMs to generate code to solve problems in a wide range of domains.
DeepMind describes AlphaEvolve as an ‘agent’ because it works by orchestrating several interacting AI models.
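The pairing of an LLM with solution-improving algorithms amounts to an evolutionary search: candidate programs are scored, the best are mutated, and only the fittest survive. The sketch below is a generic illustration of that loop, not DeepMind’s implementation — in AlphaEvolve, the `mutate` step is an LLM proposing code edits and `score` is an automated evaluator; the function names here are hypothetical.

```python
import random

def evolve(seed, mutate, score, generations=100, pop_size=10):
    """Generic evolutionary-search loop: keep a population of candidates,
    mutate the current best, and retain only the highest-scoring ones.
    (Illustrative sketch only; in AlphaEvolve the mutation step is an LLM
    proposing code changes and scoring is done by automated evaluators.)"""
    population = [seed]
    for _ in range(generations):
        parent = max(population, key=score)   # pick the fittest candidate
        child = mutate(parent)                # propose a variation of it
        population.append(child)
        population.sort(key=score, reverse=True)
        population = population[:pop_size]    # cull to the fittest few
    return max(population, key=score)
```

As a toy usage, evolving an integer toward a target with random ±1 mutations and a closeness score converges after enough generations; the same skeleton applies when the candidates are programs and the score is a benchmark result.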
DeepMind says that AlphaEvolve has found a way to perform matrix multiplication that, in some cases, is faster than the fastest previously known method, developed by the German mathematician Volker Strassen in 1969.
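For context, Strassen’s 1969 scheme multiplies two 2×2 matrices with seven scalar multiplications instead of the naive eight, and applying it recursively to larger matrices yields the asymptotic speed-up; it is improvements of this kind that AlphaEvolve searches for. A minimal sketch of the base case:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's seven products
    (instead of the eight used by the schoolbook method)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Recombine the seven products into the four entries of A @ B.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]
```

Saving even one multiplication matters because, when the trick is applied recursively to blocks of large matrices, the saving compounds at every level of the recursion.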
In mathematics, AlphaEvolve seems to allow significant speed-ups in tackling some problems, says Simon Frieder, a mathematician and AI researcher at the University of Oxford, UK.
Although AlphaEvolve requires less computing power to run than AlphaTensor, it is still too resource-intensive to be made freely available on DeepMind’s servers, says Kohli.