Faster sorting algorithms discovered using deep reinforcement learning

From a couple of days ago, in Nature...

Abstract
Fundamental algorithms such as sorting or hashing are used trillions of times on any given day. As demand for computation grows, it has become critical for these algorithms to be as performant as possible. Whereas remarkable progress has been achieved in the past, making further improvements on the efficiency of these routines has proved challenging for both human scientists and computational approaches. Here we show how artificial intelligence can go beyond the current state of the art by discovering hitherto unknown routines. To realize this, we formulated the task of finding a better sorting routine as a single-player game. We then trained a new deep reinforcement learning agent, AlphaDev, to play this game. AlphaDev discovered small sorting algorithms from scratch that outperformed previously known human benchmarks. These algorithms have been integrated into the LLVM standard C++ sort library. This change to this part of the sort library represents the replacement of a component with an algorithm that has been automatically discovered using reinforcement learning. We also present results in extra domains, showcasing the generality of the approach.

 
Was it generated by ChatGPT?

An article with > 50 authors doesn't feel right.
Sorting is old hat; can we have something more exciting, please?
 
Nope, by Google's DeepMind.

"AlphaDev uncovered new sorting algorithms that led to improvements in the LLVM libc++ sorting library that were up to 70% faster for shorter sequences and about 1.7% faster for sequences exceeding 250,000 elements. "

70% faster sorting for short sequences!

"We applied AlphaDev to one of the most commonly used algorithms for hashing in data structures to try and discover a faster algorithm. And when we applied it to the 9-16 bytes range of the hashing function, the algorithm that AlphaDev discovered was 30% faster. "

30% faster hashing!

Those are not small numbers...
 
Have these numbers been independently verified?

 
Yes, Nature is peer-reviewed. Its editorial criteria and processes are solid... it may be the most reputable scientific journal there is.
Where's the meat? i.e. details of how the algorithms work?

How can I check it is correct?

It is a survey article, like many AI articles.

And no references to the CS giants, like Donald Knuth, Tony Hoare et al.

Too much gravy and hype for my taste, unfortunately...
 
Here’s a detailed explanation from Justine Tunney, a C library author.

She explains the beauty of it using Assembly code, then using C:


Enjoy!
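To give a rough idea of what she walks through (this is just an illustrative sketch in plain C, not her code and not the exact sequence AlphaDev produced; the helper names are made up): the sort3 kernel is a tiny fixed sorting network of compare-and-swap steps, which compilers lower to conditional moves rather than branches.

[code]
#include <stdint.h>

/* Branchless compare-and-swap of two values; the ternaries typically
   compile down to conditional moves (cmov) rather than branches.
   (Name and signature are mine, purely for illustration.) */
static inline void cswap(int32_t *a, int32_t *b)
{
    int32_t lo = (*a < *b) ? *a : *b;
    int32_t hi = (*a < *b) ? *b : *a;
    *a = lo;
    *b = hi;
}

/* Sort exactly three elements with a fixed three-comparator network:
   compare (0,1), then (1,2), then (0,1) again. */
static inline void sort3(int32_t v[3])
{
    cswap(&v[0], &v[1]);
    cswap(&v[1], &v[2]);
    cswap(&v[0], &v[1]);
}
[/code]

Roughly, AlphaDev's win at this size amounts to shaving an instruction off the hand-written assembly for networks like this one: a constant-factor saving on a kernel that gets called enormously often.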
It doesn't change the fundamental fact that comparison-based sorting algorithms need [imath]\Omega(n \log n)[/imath] comparisons in the worst case (if you have no knowledge about the input array).
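
For completeness, the standard decision-tree argument (textbook material, not something from the paper): any comparison sort can be drawn as a binary decision tree whose leaves are the [imath]n![/imath] possible orderings of the input. A binary tree of height [imath]h[/imath] has at most [imath]2^h[/imath] leaves, so [imath]2^h \ge n![/imath], which gives [imath]h \ge \log_2 n! = \Theta(n \log n)[/imath] comparisons in the worst case. AlphaDev's gains are in the tiny fixed-length sorting routines (3 to 5 elements), i.e. in the constant factors, not in the asymptotics, so there is no contradiction.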
 
It doesn't change the fundamental fact that comparison-based sorting algorithms need [imath]\Omega(n \log n)[/imath] comparisons in the worst case (if you have no knowledge about the input array).
Good point. Maybe AI defies the laws of gravity.
My claim, rather, is that this new algorithm needs to be better explained.

Traditionally, superoptimization is performed by exhaustive brute-force search over the space of valid instruction sequences. That is costly, and thus impractical for a general-purpose compiler, but it has been shown to be useful for optimizing performance-critical inner loops.
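
To make that concrete, here is a toy brute-force superoptimizer. Everything in it (the two-register machine, the four made-up ops, the max(a, b) target) is invented for illustration; a real superoptimizer enumerates actual machine instructions and proves equivalence rather than just testing a few inputs.

[code]
#include <stdio.h>

/* Tiny made-up "ISA": each op reads registers r0, r1 and writes r0 (or swaps). */
typedef enum { OP_ADD, OP_SUB, OP_SWAP, OP_CMOVG, OP_COUNT } Op;
static const char *names[OP_COUNT] = { "add", "sub", "swap", "cmovg" };

/* Execute one op on the two-register machine. */
static void exec(Op op, long *r0, long *r1)
{
    long t;
    switch (op) {
    case OP_ADD:   *r0 = *r0 + *r1; break;
    case OP_SUB:   *r0 = *r0 - *r1; break;
    case OP_SWAP:  t = *r0; *r0 = *r1; *r1 = t; break;
    case OP_CMOVG: if (*r1 > *r0) *r0 = *r1; break;  /* conditional move */
    default: break;
    }
}

/* Target spec we want the search to rediscover: max(a, b). */
static long spec(long a, long b) { return a > b ? a : b; }

/* Check a candidate program against a handful of test inputs.
   (A real superoptimizer would verify equivalence, not just test.) */
static int matches(const Op *prog, int len)
{
    static const long tests[][2] = { {0,0}, {1,2}, {2,1}, {-5,3}, {7,-7}, {100,100} };
    for (unsigned i = 0; i < sizeof tests / sizeof tests[0]; i++) {
        long r0 = tests[i][0], r1 = tests[i][1];
        for (int k = 0; k < len; k++) exec(prog[k], &r0, &r1);
        if (r0 != spec(tests[i][0], tests[i][1])) return 0;
    }
    return 1;
}

int main(void)
{
    Op prog[3];
    /* Exhaustively enumerate all programs of length 1..3 and test each one. */
    for (int len = 1; len <= 3; len++) {
        long total = 1;
        for (int k = 0; k < len; k++) total *= OP_COUNT;
        for (long code = 0; code < total; code++) {
            long c = code;
            for (int k = 0; k < len; k++) { prog[k] = (Op)(c % OP_COUNT); c /= OP_COUNT; }
            if (matches(prog, len)) {
                printf("found length-%d program:", len);
                for (int k = 0; k < len; k++) printf(" %s", names[prog[k]]);
                printf("\n");
                return 0;
            }
        }
    }
    printf("no program found up to length 3\n");
    return 0;
}
[/code]

Roughly speaking, AlphaDev keeps the same "search over instruction sequences" framing but replaces the exhaustive enumeration with a learned search (an AlphaZero-style agent with Monte Carlo tree search), which is what lets it reach sequences that brute force can't.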
 