
Optimising computer systems with more generalised AI tools

How MuZero, AlphaZero, and AlphaDev are helping optimise the entire computing ecosystem that powers our world of devices

Artificial intelligence (AI) algorithms are becoming more sophisticated every day, each typically designed to solve a specific problem as well as possible. As part of our efforts to build increasingly capable and general AI systems, we’re working to create AI tools with a broad understanding of the world, so useful knowledge can be transferred between many different types of tasks.

Based on reinforcement learning, our AI models AlphaZero and MuZero have achieved superhuman performance at playing games. Now they’re expanding their capabilities to help design better computer chips and to optimise data centres and video compression – and most recently, our specialised version of AlphaZero, called AlphaDev, discovered new algorithms that are already accelerating the software at the foundations of our digital society.

These tools are already creating leaps in efficiency across the computing ecosystem, and the early results show the transformative potential of more general-purpose AI tools. Here we explain how these advances are shaping the future of computing and already helping billions of people and the planet.

Designing better computer chips

Specialised hardware is essential to making sure today’s AI systems are resource-efficient for users at scale, yet designing and producing new computer chips can take years of work. Now our researchers have developed an AI-based approach to designing more powerful and efficient circuits by treating a circuit like a neural network – accelerating chip design and taking performance to new heights.

Neural networks are often designed to take user inputs and generate outputs, like images, text, or video. Inside the neural network, edges connect nodes in a graph-like structure. To create a circuit design, our team proposed ‘circuit neural networks’, a new type of neural network which turns edges into wires and nodes into logic gates, and learns how to connect them together.
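
To make that mapping concrete, here is a minimal sketch of a circuit represented as a graph whose nodes are logic gates and whose edges are wires. The gate set, struct layout, and evaluation loop are illustrative choices for this sketch, not the published circuit neural network representation.

```cpp
#include <cstdint>
#include <vector>

// A toy "circuit as a graph": nodes are gates, edges are wires.
// Gate types and the evaluation order are illustrative only.
enum class Gate { INPUT, AND, OR, NOT };

struct Node {
  Gate type;
  std::vector<int> inputs;  // indices of upstream nodes (the incoming wires)
};

// Evaluate a circuit on boolean values for its INPUT nodes,
// assuming the nodes are stored in topological order.
std::vector<bool> evaluate(const std::vector<Node>& circuit,
                           const std::vector<bool>& input_values) {
  std::vector<bool> value(circuit.size(), false);
  int next_input = 0;
  for (std::size_t i = 0; i < circuit.size(); ++i) {
    const Node& n = circuit[i];
    switch (n.type) {
      case Gate::INPUT: value[i] = input_values[next_input++]; break;
      case Gate::AND:   value[i] = value[n.inputs[0]] && value[n.inputs[1]]; break;
      case Gate::OR:    value[i] = value[n.inputs[0]] || value[n.inputs[1]]; break;
      case Gate::NOT:   value[i] = !value[n.inputs[0]]; break;
    }
  }
  return value;
}
```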

Then we optimised the learned circuit for computational speed, energy efficiency, and size, while maintaining its functionality. We used ‘simulated annealing’, a classical search technique that looks one step into the future, testing different configurations in search of the best one. Using this technique, we took part in the IWLS 2023 Programming Contest – and won – producing the best solution on 82% of the circuit design problems in the competition.
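
Simulated annealing itself is a standard technique, and the sketch below shows its generic shape: propose a small change, keep it if it improves the score, and occasionally keep a worse one early on so the search doesn’t get stuck. The `random_neighbour` and `cost` callbacks (for example, a weighted mix of gate count, delay, and energy) are placeholders, not the contest implementation.

```cpp
#include <cmath>
#include <random>

// Generic simulated annealing loop: repeatedly try a one-step change
// ("looking one step into the future") and keep it if it helps, or
// occasionally even if it hurts, with a probability that shrinks as
// the temperature cools. Candidate, random_neighbour, and cost are
// placeholders for a circuit representation and its size/speed/energy score.
template <typename Candidate, typename Neighbour, typename Cost>
Candidate anneal(Candidate current, Neighbour random_neighbour, Cost cost,
                 double temperature = 1.0, double cooling = 0.999,
                 int steps = 100000) {
  std::mt19937 rng{42};
  std::uniform_real_distribution<double> uniform(0.0, 1.0);
  double current_cost = cost(current);
  for (int i = 0; i < steps; ++i) {
    Candidate next = random_neighbour(current);
    double next_cost = cost(next);
    // Always accept improvements; accept regressions with probability
    // exp(-delta / temperature), which falls as the search cools.
    if (next_cost < current_cost ||
        uniform(rng) < std::exp((current_cost - next_cost) / temperature)) {
      current = next;
      current_cost = next_cost;
    }
    temperature *= cooling;
  }
  return current;
}
```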

Our team also started applying AlphaZero, which can look many steps into the future, improving the circuit design by treating the optimisation challenge like a game to solve. And so far, our research combining circuit neural networks with the reward function of reinforcement learning is showing very promising results for building a future of even more advanced computer chips.
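
One way to read “treating the optimisation challenge like a game” is that each move edits the circuit, and the reward trades functionality against speed, energy, and size. The reward below is a made-up weighting for illustration only; it is not the reward function used in our research.

```cpp
// Hypothetical reward for one "move" in the circuit-optimisation game:
// heavily penalise any loss of functionality, otherwise prefer circuits
// that are smaller, faster, and more energy-efficient. The weights are
// illustrative, not published values.
struct CircuitStats {
  bool functionally_correct;
  double gate_count;
  double critical_path_delay;
  double estimated_energy;
};

double reward(const CircuitStats& s) {
  if (!s.functionally_correct) return -1e9;  // never trade away correctness
  return -(1.0 * s.gate_count +
           5.0 * s.critical_path_delay +
           2.0 * s.estimated_energy);
}
```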

Optimising data centre resources

Data centres manage everything from delivering search results to processing datasets. Borg, Google’s cluster management system, schedules billions of tasks across Google, and assigning these workloads is like a game of multi-dimensional Tetris. The system helps optimise tasks for internal infrastructure services and for user-facing products such as Google Workspace and Search, and it manages batch processing too.

Borg uses manually-coded rules to schedule tasks and manage this workload. At Google’s scale, these manually-coded rules cannot account for the variety of ever-changing workload distributions, so they are designed as “one size fits all”. This is where machine learning technologies like AlphaZero are especially helpful: these algorithms can automatically create rules that are individually tailored to – and more efficient for – the various workload distributions.
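
For a sense of the contrast, a hand-coded scheduling rule looks something like the best-fit heuristic below, which scores every machine with one fixed formula regardless of the workload mix. The types and scoring are illustrative and are not Borg’s actual rules; a learned policy would instead tailor the placement decision to the patterns it observes.

```cpp
#include <vector>

// Each task asks for CPU and RAM; each machine has some free capacity.
// A "one-size-fits-all" hand-coded rule: pick the machine with the least
// leftover capacity after placement (best fit). Types and the scoring
// formula are illustrative, not Borg's real scheduling logic.
struct Task { double cpu, ram; };
struct Machine { double free_cpu, free_ram; };

int best_fit(const Task& task, const std::vector<Machine>& machines) {
  int best = -1;
  double best_leftover = 1e18;
  for (int i = 0; i < static_cast<int>(machines.size()); ++i) {
    double cpu_left = machines[i].free_cpu - task.cpu;
    double ram_left = machines[i].free_ram - task.ram;
    if (cpu_left < 0 || ram_left < 0) continue;  // task doesn't fit here
    double leftover = cpu_left + ram_left;       // fixed, hand-tuned score
    if (leftover < best_leftover) { best_leftover = leftover; best = i; }
  }
  return best;  // -1 if no machine can host the task
}
```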

During training, AlphaZero learned to recognise patterns in tasks coming into the data centres and also learned to predict the best ways to manage capacity and make decisions with the best long-term outcomes.

When we applied AlphaZero to Borg, experimental trials in production showed that this approach could reduce the amount of underused hardware by up to 19%, optimising the resource utilisation of Google’s data centres.

Next steps for video compression

Video streaming makes up the majority of internet traffic, consuming large amounts of data. So finding efficiencies in this process, however big or small, will have a huge impact on the millions of people watching videos every day. 

Last year, we worked with YouTube to apply MuZero’s problem-solving abilities to compressing and transmitting videos. By reducing the bitrate by 4% without compromising on visual quality, MuZero enhanced the overall YouTube experience.

We initially applied MuZero to optimise the compression of each individual frame within a video. Now, we’ve expanded this work to make decisions on how frames are grouped and referenced during encoding, leading to more bitrate savings.
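
Frame grouping is one of the choices an encoder normally makes with fixed heuristics, for example how long each group of pictures is and which frames others reference. The sketch below shows a heavily simplified version of that decision, with a placeholder cost standing in for a real rate-distortion estimate; nothing in it is specific to MuZero or YouTube’s encoder.

```cpp
#include <functional>
#include <vector>

// Choose a group-of-pictures (GOP) length for a video segment by trying
// a few candidates and keeping the cheapest one. The cost callback stands
// in for a real rate-distortion estimate (bits spent plus a penalty for
// quality loss); everything here is illustrative. candidate_lengths is
// assumed to be non-empty.
int choose_gop_length(const std::vector<int>& candidate_lengths,
                      const std::function<double(int)>& estimated_cost) {
  int best_length = candidate_lengths.front();
  double best_cost = estimated_cost(best_length);
  for (int length : candidate_lengths) {
    double c = estimated_cost(length);
    if (c < best_cost) { best_cost = c; best_length = length; }
  }
  return best_length;
}
```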

Early results from these first two steps point to MuZero’s potential to become a more generalised tool, helping find optimal solutions across the entire video compression process.

Discovering faster algorithms

Most recently, AlphaDev, a version of AlphaZero, made a breakthrough in computer science, discovering faster sorting and hashing algorithms – two fundamental processes used trillions of times a day to sort, store, and retrieve data.

Sorting algorithms affect how all digital devices process and display information, from ranking online search results and social posts to generating user recommendations. AlphaDev discovered an algorithm that increases efficiency for sorting short sequences of elements by 70%, and by about 1.7% for sequences of more than 250,000 elements, compared to the algorithms in the C++ library. So when a user submits a search query, AlphaDev’s algorithm can help sort the results faster. When used at scale, it saves huge amounts of time and energy.
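
The routines in question are tiny, fixed-length sorts that the library dispatches to for short inputs. The textbook three-element version below illustrates the kind of code involved; it is not AlphaDev’s discovered sequence, which was found by searching over low-level instructions for shorter, faster implementations of routines like this.

```cpp
#include <utility>

// Sorting three values with a fixed sequence of compare-exchange steps,
// the style of small fixed-size routine used for short inputs. This is
// the textbook network, shown only to illustrate the kind of code
// AlphaDev optimised at the instruction level.
template <typename T>
void sort3(T& a, T& b, T& c) {
  if (b < a) std::swap(a, b);
  if (c < b) std::swap(b, c);
  if (b < a) std::swap(a, b);  // after three exchanges, a <= b <= c
}
```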

AlphaDev also discovered a faster algorithm for hashing information, which is often used for data storage and retrieval, like in a customer database. Hashing algorithms typically use a key (e.g. user name “Jane Doe”) to generate a unique hash, which corresponds to the data values that need retrieving (e.g. “order number 164335-87”). 

Like a librarian who uses a classification system to quickly find a specific book, with a hashing system the computer already knows what it’s looking for and where to find it. When applied to hashing functions in data centres for inputs in the 9-16 byte range, AlphaDev’s algorithm improved their efficiency by 30%.
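
To place that in context, the sketch below shows ordinary use of abseil’s hash-based container: the key is hashed to decide where its value is stored, so lookups avoid scanning every entry, and a faster hash for short keys speeds up exactly that step. The key and value reuse the example above; the snippet shows standard library usage, not AlphaDev’s low-level hashing routine.

```cpp
#include <iostream>
#include <string>

#include "absl/container/flat_hash_map.h"

int main() {
  // The container hashes the key ("Jane Doe") to decide where the value
  // lives, so lookups don't have to scan every entry; faster hashing of
  // short keys speeds up exactly this step.
  absl::flat_hash_map<std::string, std::string> orders;
  orders["Jane Doe"] = "order number 164335-87";

  auto it = orders.find("Jane Doe");
  if (it != orders.end()) std::cout << it->second << "\n";
  return 0;
}
```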

Since releasing the sorting algorithms in the LLVM standard C++ library – replacing sub-routines that have been used for over a decade with RL-generated ones – and the hashing algorithms in the abseil library, millions of developers and companies are now using these algorithms across industries, such as cloud computing, online shopping, and supply chain management.

General-purpose tools to power our digital future

From playing games to solving complex engineering problems at the heart of every device, our AI tools are saving billions of people time and energy. And this is just the start. 

We envision a future where more general-purpose AI tools can help optimise the entire computing ecosystem that powers our digital world. But to support these tools, we’ll need faster, more efficient, and more sustainable digital infrastructure.

Many more theoretical and technological breakthroughs are needed to achieve fully generalised AI tools. When applied to diverse challenges across technology, science, and medicine, these types of general-purpose tools have the potential for being truly transformative. We’re excited about what’s on the horizon.
