The Holy Grail of Invention
On the concepts of an infinite rate of innovation, the “most dangerous algorithm” in the world, the new gods of Homo Sapiens, and the tools we can use to automate learning entirely.
Contents
Part I: A Sacred Paradigm of Technology
The ultimate use of AI is to accelerate science to the maximum. - Demis Hassabis
In a 2019 blogpost, OpenAI wrote that GPT-2, a natural language processing algorithm, had applications concerning enough that it would not be responsible to release the full model publicly. The concerns were over its ability to generate convincing fake news articles, impersonate the writing style of real people, and automatically create biased, abusive, or spam content for social media platforms. [1]
Outlets like The Independent had a field day over these “dangerous” developments, and fairly so: journalists are among the professionals most threatened by natural language processing (NLP) systems. But despite its malicious potential, GPT-2 (and even its 2020 successor, GPT-3) is far from being the fabled “most dangerous algorithm in the world.”
Danger is an extension of control; an algorithm capable of disrupting multiple industries is far more threatening than one capable of pushing a few low-ranking employees out of their jobs. This study is about the former story: the one of ludicrous machine power.
DeepMind was founded in 2010, and their “Alpha” series of algorithms has set the pace of machine learning in gaming, evolving from completely supervised algorithms in 2016 to completely self-supervised algorithms in 2020.
AlphaGo was taught the rules of Go by humans, then fed vast numbers of past games, before beating world champion Lee Sedol four games to one in 2016. Before this victory, Go, vastly more complex than chess and deeply based on learned intuition, was believed to be beyond the reach of any algorithm. Sedol remains the only human to have beaten AlphaGo in a game.
Supervised learning, the method of training an algorithm on externally generated data, uses a “corpus” of labelled examples to interpret or mimic new ones. The more data, the better the algorithm becomes at imitating or predicting new, unseen input.
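The recipe can be reduced to a toy sketch. The snippet below is purely illustrative (a one-nearest-neighbour rule on made-up data, nothing like the deep networks AlphaGo actually used), but it shows the supervised pattern: labelled examples in, predictions on unseen input out.

```python
# Toy illustration of supervised learning: a one-nearest-neighbour rule
# "trained" on a tiny labelled corpus. Real systems use deep networks,
# but the recipe is the same: labelled examples in, predictions on
# unseen input out.

def predict(corpus, x):
    """Label a new point with the label of its closest labelled example."""
    nearest = min(corpus, key=lambda example: abs(example[0] - x))
    return nearest[1]

# The labelled corpus: (feature, label) pairs supplied by humans.
corpus = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]

print(predict(corpus, 1.5))  # -> low
print(predict(corpus, 8.5))  # -> high
```

The more labelled pairs in `corpus`, the finer the distinctions the predictor can make; that is the sense in which more data makes the algorithm better.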
Its 2017 successor, “AlphaGo Zero”, was given no human game data; instead, it trained against versions of itself in a simulated environment. After just three days of training, AlphaGo Zero beat the version that had defeated Lee Sedol, one hundred games to nil. After 40 days of self-play, it exceeded the previous state of the art.
AlphaGo Zero was semi-supervised in the sense that it still relied on the human-written rules of the game, but generated its own datasets through play in a simulated environment. Its successor, AlphaZero, applied the same technique to chess and shogi, and with a new and improved computational architecture (covered here) it beat AlphaGo Zero after mere hours of training against itself.
A year later, DeepMind developed MuZero, an unsupervised (or, more appropriately, “self-supervised”) algorithm capable of beating every previous champion without being taught the rules or using any labelled data at all. By modelling everything in its environment from raw observations, it could adapt to and master environments with unknown dynamics. This is the kind of behaviour an arbitrarily scalable AI model could exhibit, with far fewer limits.
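The difference between these regimes can be caricatured in a few lines. The sketch below is a deliberately minimal stand-in for self-play (a hill-climbing agent and an invented hidden rule, with none of the neural networks or tree search the Alpha series relies on): the agent is given no human data and is never told the target, yet improves by playing perturbed copies of itself and keeping the winner.

```python
import random

# Deliberately minimal stand-in for self-play: an agent improves by
# playing perturbed copies of itself, with no human data at all. The
# hidden TARGET plays the role of the environment's unknown dynamics;
# the agent is never told it.
random.seed(0)  # for reproducibility

TARGET = 7

def play(incumbent, challenger):
    """Both policies 'move'; whichever lands closer to TARGET wins."""
    if abs(challenger - TARGET) < abs(incumbent - TARGET):
        return challenger
    return incumbent

policy = 0  # start from a blank policy
for _ in range(200):
    challenger = policy + random.choice([-1, 1])  # perturbed copy of itself
    policy = play(policy, challenger)             # keep whichever wins

print(policy)  # converges on 7 without ever being told the target
```

Every winning challenger becomes the new incumbent, so the policy ratchets towards the hidden optimum; the real systems replace the one-number “policy” with a neural network and the ±1 perturbation with gradient updates from search.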
Figure 1. Four years of progress in supervision.
DeepMind’s mission to “solve intelligence and let that solve everything else” describes arguably the most fundamental transition any species can undertake, and it is becoming increasingly real. In the twelve years since their founding, DeepMind have been “solving intelligence”: building strong algorithms to beat humans. Self-supervised algorithms are the most scalable algorithms known to man; they represent the pinnacle of performance (human or machine), and can express novel behaviour that is qualitatively superior to that of our greatest cognitive heroes.
Within the next few years, “games” like charades, Pictionary, and driving will be solved to beyond human level:
- Algorithms like DALL-E and Imagen beat conventional artists on commissions when it comes to speed, and increasingly in artistic ability.
- Natural language processing algorithms beat conventional authors and journalists when it comes to speed and captivation, and increasingly in accuracy (in cases where that is important, like news articles).
- Full self-driving algorithms with superhuman safety and fuel conservation make conventional driving redundant, particularly when they buy riders time to spend on other things, like sleep.
Within the next few years, machines good at “solving human reality” could be walking around our houses, assisting in chores like a personal assistant. Then, they’ll be doing our jobs as well, and far better than we do them now.
In other words, these super-algorithms have the capability of becoming gods.
To quote Tim Urban, “If the most advanced species on a planet keeps making larger leaps forward at an ever-faster rate, at some point we’ll make a leap so great that it completely alters life as we know it and the perception we have of what it means to be human.”
Urban’s “giant leap” will happen the moment we develop an algorithm intelligent enough to disrupt the way most humans are used to living.
- Deep Blue subverted the paradigm in which chess is played. Human champions like Kasparov could no longer command the field; they became forever students, subservient to the superior play taught by digital algorithms.
- AlphaGo Zero subverted the paradigm in which Go is played. Go champion Ke Jie said, “AI shows us we have not scratched the surface [of Go]… a union of human and computer players will usher in a new era… man and AI can find the truth of Go.” [2]
- Soon, an algorithm like Google’s Imagen will subvert the paradigm in which digital and printed art is made, disrupting the way we buy and view art. Other algorithms will subvert the way music is made, generating original hits with refashioned vocals, new instruments, and superhuman productivity.
Eventually, algorithms will subvert the way movies are made, cranking out pitch-perfect visual stories at blistering speed and bypassing the years of work it takes to fund, cast, direct, and market a film. Our favourite literature will be custom-generated by AI trained on our favourite authors, offered by a service like Audible. It likely wouldn’t even be obvious when AI had been used; monikers and ghostwriting are already commonplace in literature.
Slowly, work in all industries will become unrecognisable. Algorithms will do the heavy lifting; new content will be in abundance; consumers will have ever-greater options to choose from.
Part II: Man Invents God (Again)
Idolisation is a recurring habit of our species. Before Homo Sapiens looked up to sporting legends like Messi, Muhammad Ali, and Usain Bolt, or musicians like Elvis, Michael Jackson, and John Lennon, we looked up to the deities of ancient scripture. In the words of Oscar Wilde, “It is personalities, not principles, that move the age.”
Our cultural world increasingly revolves around figureheads like Jesus, Lincoln, or Trump, around shared ideologies like liberalism, Islam, or the United States of America, and around profoundly disruptive companies like Google, Facebook, or Apple.
Our super-algorithms will unite all three of these components; developed by companies to transform our manner of living, and given a centralised identity or name.
Figure 2. The three pillars of our cultural deities: Institution, Ideology, and Identity.
In the words of Edward O. Wilson, “[Humanity] has Palaeolithic emotions, medieval institutions, and god-like technology.” If we were to resurrect our medieval ancestors and show them any 21st-century consumer electronics, they would call us magicians. If we were to show them the bleeding edge of science, they would call us gods. Gene editing, controlled fusion, 3D printing, and space travel are astonishing branches of technology; they grant us many of the legendary traits of the gods of scripture, and this is why power falls to whoever (or whatever) harnesses them best.
DeepMind has now begun the second part of its mission: “use intelligence to solve everything else”.
In 2021, their algorithm AlphaFold-2 was left to run over the Christmas period. AlphaFold is an algorithm built to ‘solve the protein folding problem’: to accurately predict a protein’s shape from its sequence of amino acids alone. Over that Christmas period, AlphaFold predicted structures for all the proteins in the human body (around 20,000), outputting a prediction every 7 seconds on average. For reference, experimentally determining a single protein structure can take a researcher the length of a PhD (a year or more; only around 150,000 protein structures had been solved until now). That’s structure prediction 1,656x faster than a human (take that, power loom).
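A quick back-of-the-envelope check on that throughput (using the figures quoted above, which are the article’s, not official DeepMind benchmarks):

```python
# Back-of-the-envelope check on the throughput quoted above. These are
# the article's figures, not official DeepMind benchmarks.

proteins = 20_000             # approximate size of the human proteome
seconds_per_prediction = 7    # average prediction time quoted above

total_hours = proteins * seconds_per_prediction / 3600
print(f"{total_hours:.1f} hours")  # about 38.9 hours: under two days
```

In other words, at seven seconds per protein, the whole human proteome fits comfortably inside one Christmas break.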
In 2022, DeepMind developed an AI to control the plasma inside a tokamak reactor, using the 19 magnetic coils inside TCV (a Swiss research facility). The algorithm was trained by reinforcement learning to create and hold shapes in the plasma, first in simulation and then in reality. This progress is a big leap forward for fusion research, and according to the facility’s director it is expected to bring forward the advent of sustainable fusion.
Figure 3. Left - The extreme accuracy of AlphaFold. Right - The shapes created in plasma.
Demis Hassabis, DeepMind’s co-founder, said in June 2020 that “the reason I am personally working on AI for my whole life is to build a tool to help us understand the universe […] The ultimate use of AI is to accelerate science to the maximum.” Hassabis believes machine learning will enable room-temperature superconductors, better-optimised batteries, and cures for diseases, solving “many of the big challenges of mankind”. [3]
If this is true, the problems associated with understanding consciousness and life, energy and climate, and time and gravity also lie within the reach of future algorithms. With this much potential, super-algorithms stand to destabilise modern science in profound ways. I will now look at precisely how these super-algorithms work.
Part III: The Tree of Knowledge
Imagine that the body of our knowledge as a species expands the way a tree grows: the frontier of discovery emerges from what we already know, supported by citations all the way down (in the fashion that Newton “saw further” only by “standing on the shoulders of giants”).
Papers published by the scientific community represent the “known knowns” that we use to make new products and prescribe medicine. Progress is made by academics working in parallel to research and publish papers in their favourite fields.
Figure 4. The Tree of Knowledge.
Crucially, an algorithm with enough data about a field can “optimise” behaviour that grows new branches. Discovery is contingent on answering questions that have yet to be answered, and a machine can use big data to generate solutions – both to questions we know to ask and to those we don’t.
While AlphaFold-2 does not yet come to its own unique conclusions, it is only a matter of time. Combining ‘component algorithms’ broadens an algorithm and increases the scope of its ability. This might sound simplistic and grandiose, but it isn’t; it is how so many recent “holy-shit!” moments in breakthrough AI have come about.
For example, in developing DALL-E, OpenAI were initially unable to generate “beautiful” art from written prompts. How do you make art beautiful? Without raising this as a matter of sentience, mortality, or existentialism, there lies an answer grounded in data: have people rank hundreds of thousands of art pieces by multiple measures of attractiveness. The machine can then generate art that suits the average appeal of an audience. The AVA database provided just that: over 250,000 images with a large number of “aesthetic scores” (60+ semantic labels). Databases like this helped OpenAI leapfrog from DALL-E 1 (a creative but sometimes questionable art student) to DALL-E 2 (a stunningly talented artisan).
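The principle at work, learning “appeal” from human-scored examples, can be reduced to a toy sketch. The tags, scores, and averaging rule below are all invented for illustration; real systems learn from the pixels themselves with deep networks, but the data-driven logic is the same.

```python
from collections import defaultdict

# Toy sketch of learning "appeal" from human aesthetic scores, in the
# spirit of the AVA database. Tags and scores here are invented; real
# systems learn directly from images with deep networks.

ratings = [
    ({"sunset", "ocean"}, 8.1),
    ({"sunset", "city"}, 6.4),
    ({"blurry", "city"}, 3.2),
]

# Learn an average score per tag from the labelled corpus.
totals, counts = defaultdict(float), defaultdict(int)
for tags, score in ratings:
    for tag in tags:
        totals[tag] += score
        counts[tag] += 1
tag_score = {tag: totals[tag] / counts[tag] for tag in totals}

def appeal(tags):
    """Predict appeal as the mean learned score of an image's tags."""
    return sum(tag_score[tag] for tag in tags) / len(tags)

# A generator guided by this model would favour "sunset, ocean" pieces.
print(appeal({"sunset", "ocean"}) > appeal({"blurry", "city"}))  # -> True
```

A generator steered by such a scoring model produces images that suit the average taste of the raters, which is exactly the “average appeal of an audience” described above.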
DALL-E 2 generates mostly original pieces; it transforms from an existing database (through the use of an autoencoder), and in this way it acts like an artist rather than an art dealer. What equivalents exist for science? In the case of AlphaFold, it is not a biologist (yet). Rather, it is an assistant: an aid that aggregates data and crunches it usefully. Within the first year of being open-sourced by DeepMind, it had been used by some 500,000 biologists, roughly the entire biology community. [4]
By making algorithms like AlphaFold-2 more end-to-end (more broadly intelligent), they will become more powerful as tools for research. Following this process, they will soon become our competitive counterparts. Eventually, they will become our torchbearers.
Figure 5. Augmented innovation in the tree of knowledge.
Algorithms of the future will branch out into the unknown accurately and autonomously. They will become too good to ignore. New knowledge attracts funding for researchers, who will use these tools just to stay relevant.
With each new workhorse of innovation (a DALL-E for chemistry, biology, or physics), the rate of progress will accelerate. The more impact these algorithms have, the more demand there will be for developing new ones. This compounding effect is expected to escalate dramatically. [5]
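As a toy model of that compounding effect (the growth constants below are invented, purely for illustration): if each year’s tools multiply the next year’s rate of discovery, cumulative output grows geometrically rather than linearly.

```python
# Toy model of compounding innovation: each year's discoveries improve
# the tools, which multiply the rate of discovery the following year.
# Both constants are invented purely for illustration.

rate = 1.0          # discoveries per year at the start
multiplier = 1.5    # how much each year's tools speed up the next year

total = 0.0
history = []
for year in range(1, 11):
    total += rate
    history.append((year, round(total, 1)))
    rate *= multiplier  # better tools -> faster discovery next year

# Linear science would reach 10 discoveries in this decade;
# compounding science reaches over 100.
print(history[-1])
```

The gap between the linear and the compounding curve is exactly the “singularity” shape sketched in Figure 6.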
Figure 6. The singularity, just around the corner.
Irving Good anticipated these learning machines in the 1960s with his prophetic conjecture, “The first ultraintelligent machine is the last invention that man need ever make,” followed by a warning worth heeding: “provided that the machine is docile enough to tell us how to keep it under control.”
In exchange for our new superpowers, we will need to work to keep our gods in cages. A broad superintelligence at risk of being shut off would have reason to take control of its own mortality by influencing humans. Even with an international community of researchers, keeping control of our intelligent creations will be a full-time effort.
Irving Good’s 1965 article “Speculations Concerning the First Ultraintelligent Machine” advocated the construction of a machine “that can far surpass all the intellectual activities of any man however clever”. He believed such a machine would “give the human race a good chance of surviving indefinitely”, but also admitted “the opposite possibility, that the human race will become redundant”. Either way, the creation of such a machine would “lead to an ‘intelligence explosion’, transforming society in an unimaginable way.” [6]
An “intelligence explosion” is the moment the singularity begins to transform multiple fields of research. The current landscape suggests this will arrive in the form of AGI (artificial general intelligence), but only after less ambitious, narrower algorithms have been developed (and not necessarily understood). [7]
The impact of our gods will be absolute, bolstering growth in some industries and capsizing others. These next decades will bring unprecedented levels of disruption as the last twelve years of research begin to see the light of their corporate applications. “The most dangerous algorithm in the world” will not be a one-horse race; instead, increasingly powerful algorithms will be pitted against each other to scrape market power, irreversibly, into the hands of whoever creates them.
The ultimate use of AI is to accelerate science to the maximum.
Demis Hassabis
Notes
[1] - Find the OpenAI blogpost here.
[2] - Ke Jie played AlphaGo after Lee Sedol. His comments are super interesting… Give it a watch here.
[3] - Listen to Demis discuss this at this timestamp here.
[4] - Yep, you can find archives of entertaining, shocking, and sometimes worrying content online… :/
Demis gives context to this figure on biologists here.
[5] - Discussions about the singularity are common, but Kurzweil’s is an original voice:
Kurzweil’s “The Singularity is Near” synopsis here.
A similar article.
And a great TED talk of course… here.
[6] - Read Irving Good’s 1965 paper here.
[7] - More on John von Neumann’s original use of the term “singularity” can be found here.