DeepMind, the artificial intelligence (AI) company owned by Google's parent company, has just used an evolution of its AlphaZero program for more than defeating expert chess, shogi, and Go players.
The researchers' most recent achievement, described in an article published in Nature, has been to discover a faster way to perform a mathematical operation that underpins thousands of everyday computing tasks.
Using a game-playing algorithm to beat a record
So what mathematical operation are we talking about? Specifically, matrix multiplication. You may have heard of it before, as it is one of the simplest operations and among the most widely taught in schools.
We will not explain how to multiply matrices here, but we will point out that this operation is key for, among other things, our smartphones to process the images we see and for voice assistants to identify almost everything we ask of them.
The application of this simple operation also reaches more complex scenarios, such as simulations in the field of meteorology to predict the weather, data compression for transmission over the Internet and much more.
As you might imagine, all of these tasks require computing power. So any improvement in the matrix multiplication algorithm translates into greater efficiency from existing hardware resources.
It should be noted, though, that mathematicians' interest in finding an improved algorithm for matrix multiplication has origins far more remote than the computer age. But that is another story.
The truth is that, despite the effort and interest in finding a more efficient algorithm, there had been no progress of this kind in 50 years. The last breakthrough was that of the German mathematician Volker Strassen in the late 1960s.
Since matrix multiplication is, roughly speaking, multiplying the rows of one matrix with the columns of another, DeepMind thought of translating the problem into a kind of three-dimensional board game.
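To make that "rows times columns" idea concrete, here is a minimal sketch of the textbook method in plain Python (for illustration only, with no external libraries; the function name is our own):

```python
def matmul(a, b):
    """Multiply matrix a (n x m) by matrix b (m x p), the textbook way:
    each row of a is paired with each column of b."""
    n, m, p = len(a), len(b), len(b[0])
    result = [[0] * p for _ in range(n)]
    for i in range(n):          # each row of a...
        for j in range(p):      # ...meets each column of b
            for k in range(m):  # sum of element-wise products
                result[i][j] += a[i][k] * b[k][j]
    return result

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# → [[19, 22], [43, 50]]
```

Note the three nested loops: for two n×n matrices the textbook method needs n³ scalar multiplications, which is precisely the count that better algorithms try to reduce.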
Building on AlphaZero, the researchers trained a new version of this artificial intelligence, called AlphaTensor, which, instead of playing chess, learned the best steps for multiplying matrices. Those steps, taken together, are what we call an algorithm.
Now we have a new method that can perform this operation in 47 steps. And that is just one matrix size as an example: AlphaTensor achieved algorithmic improvements for 70 different matrix sizes. In other words, a remarkable advance.
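To give an idea of what "fewer steps" means, here is a brief Python sketch of Strassen's classic trick, which multiplies two 2×2 matrices with 7 scalar multiplications instead of the 8 the textbook method needs (the function name is our own; applied recursively to matrix blocks, this kind of saving is what lowers the overall cost, and it is the same game AlphaTensor plays at larger sizes):

```python
def strassen_2x2(a, b):
    """Multiply two 2x2 matrices with 7 multiplications (Strassen, 1969)
    instead of the textbook method's 8."""
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    # The 7 products, each a single scalar multiplication
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # The result is recovered with only additions and subtractions
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# → [[19, 22], [43, 50]]
```

The trade-off is more additions in exchange for fewer multiplications, which pays off when the trick is applied recursively to large matrices split into blocks.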
Since matrix multiplication is fundamental to so many computing tasks, this advance by AlphaTensor could make systems more efficient, consuming less power and reducing the number of round-off errors.
Pictures | Unsplash | DeepMind
In Xataka | We have been to the world’s largest event on artificial intelligence. It’s like going back to the beginnings of digitization