Pervasive Machines: Three Stages of Superintelligence
This is the second part of my Pervasive Machines series.
The Route Forward
Motivations to develop algorithms that “solve intelligence to advance science and benefit humanity” (DeepMind) seem spiritually well-intended, but such algorithms will unlock immense profit for whoever can monopolise them. In the same spirit that refrigeration was truly capitalised on by Coca-Cola, and not by the inventor of the fridge, it may be a more ruthless generation of capitalists who truly unlock the value of AI systems.
OpenAI seemed positioned to become an API-selling corporation like Stripe, until ChatGPT became the fastest-growing consumer product in history. Now, they are positioned to monopolise their own creation with plug-ins.
Nevertheless, I would argue that AGI will come in three distinct stages.
Stage One: AGI Tools (Broad AI)
The first stage of AGI development is likely to be driven by scientific innovation and financial opportunity. This stage will witness the rise of AGI tools, which will consist of composite algorithms using large language models (LLMs), computer vision systems, and other machine learning techniques at their core.
Component algorithms will make these generally usable for a new wave of edtech, agency work, and contracting. Instead of merely asking LLMs questions, we will have them call upon third-party services and databases to execute commands: project management, client communication, and the work of virtual assistants, all in real time.
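As a sketch of how such a tool-using system might be wired together, here is a minimal agent loop in Python. Everything in it is a hypothetical stand-in: call_llm mocks a model API, and the two tools are toy placeholders for real project-management and email services.

```python
import json

# Hypothetical third-party tools; real deployments would wrap actual
# project-management, CRM, or email APIs here.
def create_task(title: str, due: str) -> str:
    return f"Task '{title}' scheduled for {due}."

def email_client(to: str, body: str) -> str:
    return f"Email to {to} queued."

TOOLS = {"create_task": create_task, "email_client": email_client}

def call_llm(prompt: str) -> str:
    """Stand-in for a real model API. A production system would send
    `prompt` to an LLM, which replies either with a final answer or
    with a JSON tool request like the one hard-coded below."""
    if "Observation:" in prompt:
        return "Done: the Q3 report task is on your board for Friday."
    return json.dumps({"tool": "create_task",
                       "args": {"title": "Draft Q3 report", "due": "Friday"}})

def run_agent(user_request: str) -> str:
    """The ask -> act -> observe loop that turns a chat model into
    a virtual assistant."""
    prompt = user_request
    while True:
        reply = call_llm(prompt)
        try:
            request = json.loads(reply)   # model wants a tool
        except json.JSONDecodeError:
            return reply                  # model answered directly
        result = TOOLS[request["tool"]](**request["args"])
        prompt = f"{user_request}\nObservation: {result}"

print(run_agent("Plan my week and set up the Q3 report."))
```

The key design point is the loop: the model's output is parsed for tool requests, the tool's result is fed back as an observation, and the cycle repeats until the model produces a plain answer.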
As with all ground-breaking technologies, the adoption of AI will significantly enhance quality of life for many while damaging that of others, particularly in less developed countries where call centres employ a larger share of the workforce. In leading economies, this shift will likely exacerbate job polarisation, as discussed in this essay (Figure 6).
Figure 6: Job polarisation over time.
Although these tools may not conform to a universally accepted definition of “AGI”, they will display broad intelligence by leveraging vast amounts of data and training. This stage will amount to the wave of innovations under the “Generative Pre-trained Transformer” (GPT) umbrella and lay the groundwork for the next stages of AGI development.
Stage Two: How “True AGI” Arises
Truly building AGI necessitates breakthroughs in machine learning that allow algorithms to learn through discovery and adapt to new situations with flexibility and generalisability.
According to Altman, AGI refers to a system capable of driving the cutting edge of technological advancement. In his conversation with Lex Fridman, Altman suggested that while LLMs may play a role in AGI’s development, they will not exhibit general intelligence on their own. Noam Chomsky also argues that pattern recognition systems like GPT-4, while adept at language, lack the flexibility and true creativity found in the full spectrum of human cognition [13].
At some point, research will enable simultaneous training and inference, perhaps as a composite of advanced reinforcement learning techniques (see Geoff Hinton’s Forward-Forward algorithm). The most sophisticated AGI tools will conduct studies requiring human-like creativity and problem-solving abilities when instructed. Shortly after, we can expect the singularity.
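For a flavour of what such learning rules look like in code, here is a minimal NumPy sketch of Hinton’s Forward-Forward idea: each layer is trained on a local “goodness” objective (high for real data, low for negative data), with no backpropagation between layers. The toy data, layer sizes, threshold, and learning rate are illustrative choices of mine, not the configuration from Hinton’s paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    """One layer trained with a local Forward-Forward objective: push
    'goodness' (mean squared activation) above a threshold for positive
    data and below it for negative data. No gradients flow between
    layers, unlike backpropagation."""

    def __init__(self, n_in, n_out, lr=0.05, threshold=2.0):
        self.W = rng.normal(0.0, 0.1, (n_in, n_out))
        self.lr, self.threshold = lr, threshold

    @staticmethod
    def _normalise(x):
        # Length-normalise inputs so a layer cannot simply inherit the
        # goodness of the layer below (as in Hinton's paper).
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

    def forward(self, x):
        return np.maximum(self._normalise(x) @ self.W, 0.0)  # ReLU

    def train_step(self, x_pos, x_neg):
        for x, sign in ((x_pos, 1.0), (x_neg, -1.0)):
            xn = self._normalise(x)
            h = np.maximum(xn @ self.W, 0.0)
            goodness = (h ** 2).mean(axis=1, keepdims=True)
            # Logistic probability of landing on the correct side of
            # the threshold; ascend its log-likelihood.
            z = np.clip(sign * (goodness - self.threshold), -30, 30)
            p = 1.0 / (1.0 + np.exp(-z))
            grad = xn.T @ ((sign * (1.0 - p)) * 2.0 * h / h.shape[1])
            self.W += self.lr * grad / x.shape[0]

# Toy data: "positive" samples are structured, "negative" are noise.
x_pos = rng.normal(1.0, 0.3, (256, 16))
x_neg = rng.normal(0.0, 1.0, (256, 16))

layers = [FFLayer(16, 32), FFLayer(32, 32)]
for _ in range(500):
    h_pos, h_neg = x_pos, x_neg
    for layer in layers:            # each layer learns locally
        layer.train_step(h_pos, h_neg)
        h_pos, h_neg = layer.forward(h_pos), layer.forward(h_neg)

def goodness(x):
    for layer in layers:
        x = layer.forward(x)
    return float((x ** 2).mean())

print("positive goodness:", goodness(x_pos))  # should end up high
print("negative goodness:", goodness(x_neg))  # should end up low
```

Because each layer’s objective is purely local, training and inference are the same forward computation, which is part of why this family of methods is interesting for the always-learning systems described above.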
In an ideal scenario, akin to Irving Good’s prophecy, this is the point where humans can use AGI to solve the world’s problems without constant supervision. It is also the point at which AI will be most disruptive to humans, and will almost certainly cause enfeeblement.
Those above the interface can reasonably expect to drive a new political system of resource abundance and social hierarchy known as “Rentism” [14], one that will require constant government regulation to avoid a fast take-off and runaway superintelligence.
Stage Three: Later Generations of AGI
In later generations of AI, the limitations of the digital computing paradigm will be pushed outward, allowing for astonishing levels of cognition and self-agency. Given quality superintelligence, it is feasible to assume there will be a severance between humans and machines.
Assuming there are better, more efficient paradigms of intelligence architecture to be built, those projects will be undertaken by our most sophisticated algorithms in a manner that far surpasses human capability. While humans may be able to grasp the next paradigm, we may also fall short of being able to construct it ourselves. In theory, we would expect a trajectory that converges on Bremermann’s maximum rate of computation, of around 10⁵⁰ bits per second per kilogram.
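As a quick sanity check on that figure: Bremermann’s bound follows from E = mc² and the quantum limit on distinguishable state transitions, giving roughly mc²/h operations per second per kilogram. A one-line computation, using only standard physical constants:

```python
# Sanity check on Bremermann's limit: the maximum computation rate of
# a self-contained system is roughly m * c^2 / h bits per second.
c = 2.998e8      # speed of light, m/s
h = 6.626e-34    # Planck's constant, J*s
m = 1.0          # mass of the computer, kg

print(f"Bremermann bound: ~{m * c**2 / h:.2e} bits/s per kg")  # ~1.36e+50
```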
Figure 7. The trajectory of Generational AGI towards omniscience.
Evolution: Humans vs 1st Gen AGI
To forecast the long-term advantages of AGI over human cognition, we can study the hyperparameters of biological intelligence and draw parallels.
Human intelligence can be attributed entirely to our biological evolution. Various forms of intelligence are displayed throughout the Darwinian evolutionary tree, with humans being one of the organic intelligences that use neurons as their core computational elements.
But outside of the evolutionary path, inorganic agents may induce intelligence using transistors (silicon-based), qubits (quantum computing), or even strands of DNA (molecular computing). It is possible that anything capable of forming logic gates can be utilized for computation, and thus give rise to intelligence [15].
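To make the substrate-independence point concrete: any medium that can realise a single NAND gate, whether transistors, qubits, or DNA strand displacement, can in principle realise all of Boolean logic, and with it general computation. A toy demonstration:

```python
# Substrate independence in miniature: everything below is built from
# one primitive, however that primitive happens to be physically made.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

def xor_(a: bool, b: bool) -> bool:
    x = nand(a, b)
    return nand(nand(a, x), nand(b, x))

# Truth table: AND, OR, XOR all recovered from NAND alone.
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", and_(a, b), or_(a, b), xor_(a, b))
```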
Figure 8. The Path to Composite Intelligence.
High-end silicon-based supercomputers have long surpassed the computational capacity of the human brain, and at current rates we can expect desktop computers to house similar power by 2042 [16]. However, neurons are just one of many hyperparameters holding us back in the race for cognition. Others include the following; a back-of-envelope comparison follows the list:
I. Signal speed
Within the brain, axons carry action potentials at up to 120 m/s, while electronic signals travel at an appreciable fraction of the speed of light. For a fixed round-trip latency, this limits a biological brain to about 0.11 m³ if it is to remain a single integrated entity; an electronic system under the same constraint could occupy around 6.1×10¹⁷ m³, or around the size of Pluto.
II. Speed of computational elements
Neurons fire at a peak rate of around 200 Hz, roughly seven orders of magnitude slower than a modern microprocessor at 3 GHz. The brain compensates by parallelising operations across a vast number of neurons. Unfortunately, this is a poor fit for large-scale computations that demand sequential processing.
III. Reliability, lifespan, memory, sensory input
Some estimates suggest that the adult human brain stores around 1 billion bits, about a quarter of the storage capacity of an Amazon Alexa. Brains also become fatigued after a few hours of work and decay permanently after a few decades. In terms of sensory input, we process around 11 million bits per second, roughly 90% of them visual. Since a single digital camera can exceed that bandwidth, digital computers can take in far more input bits per second, across a far wider variety of modalities.
Furthermore, the natural intelligence of the brain (the human “G-factor”) is roughly fixed from around the age of 11 until it begins to decay irreversibly (around 65). In comparison, digital hardware can be swapped out for better, updated circuitry the moment it becomes available.
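Here is the back-of-envelope comparison promised above, putting rough numbers on points I-III. All figures are the estimates quoted in the text, except the effective electronic signal speed, which I take to be about two-thirds of c, typical of real interconnects:

```python
# Rough numbers for the three hyperparameters above. All constants are
# the text's estimates; the 2e8 m/s signal speed (~2/3 c) is my
# assumption for realistic interconnects.
AXON_SPEED  = 120.0    # m/s, fast myelinated axons
WIRE_SPEED  = 2.0e8    # m/s, effective electronic signal speed
NEURON_RATE = 200.0    # Hz, peak firing rate
CPU_RATE    = 3.0e9    # Hz, modern microprocessor core

# I. Signal speed: for a fixed round-trip latency budget, feasible
# linear size scales with signal speed, so volume scales with its cube.
brain_volume = 0.11    # m^3, from the text
machine_volume = brain_volume * (WIRE_SPEED / AXON_SPEED) ** 3
print(f"latency-limited machine volume: ~{machine_volume:.1e} m^3")
# -> ~5.1e+17 m^3, the same order as the 6.1e17 figure quoted above.

# II. Element speed: the serial gap between one neuron and one core.
print(f"serial speed gap: ~{CPU_RATE / NEURON_RATE:.1e}x")  # ~1.5e+07

# III. Sensory bandwidth: ~11 Mbit/s for a human versus one camera
# streaming raw 24-bit 1080p video at 30 fps.
human_bps  = 11e6
camera_bps = 1920 * 1080 * 24 * 30
print(f"camera vs human input bandwidth: ~{camera_bps / human_bps:.0f}x")
```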
Figure 9. The Connectome: a map of the neural connections in the human brain (thanks, Emilija).
Trans-Humanism (Staying Alive)
So, the question arises: Can we remain relevant by upgrading our self-agency without sacrificing our sense of identity?
To keep pace with more efficient workflows, the human brain could benefit from support for fast sequential processing. This discussion is about blurring the line between being human and incorporating the technology that makes AI so transcendent: we can either embed it within ourselves or transfer our biological wetware into digital form.
Figure 10. The “jump-off” point for humanity to survive.
Brain-Computer Interfaces (BCIs) and neural implants are essential to bridging the gap between human cognition and AGI systems, and are the strongest bet for cognitive advancement.
Companies pursuing invasive approaches include Neuralink, which aims to develop high-bandwidth, minimally invasive interfaces that enable seamless communication between humans and machines and enhance cognitive capabilities. One of Neuralink’s notable demonstrations was implanting a chip in a monkey’s brain, allowing it to play video games using its thoughts [17]. BrainGate focuses on creating neural interfaces for individuals with paralysis, enabling them to control external devices with their thoughts.
Full brain scans involve mapping and digitizing the human brain, preserving its neural structure and functions. This process would enable a complete replication of an individual’s cognitive abilities, memories, and personality, creating a digital copy of their mind [17].
The development of AGI has the potential to bring the world into its final chapter. To minimise the coarseness of this filter, humans must take responsible measures against the technology before we deploy something that causes serious harm. The call for a moratorium serves as a good wake-up call and should be supported. In the long run, I remain hopeful that there exists technology that can push out the hyperparameters of the brain, allowing us to keep up to speed with our own creation.
References
[13] Noam Chomsky on AI and other things: https://www.youtube.com/watch?v=7uHGlfeCBbE.
[14] The second of the Four Futures: https://sites.evergreen.edu/politicalshakespeares/wp-content/uploads/sites/226/2015/12/Frase-Rentism.pdf.
[15] Simulation hypothesis and substrate-independence of mental states: https://www.lesswrong.com/posts/yuzDFq5CoeMaRZuF2/simulation-hypothesis-and-substrate-independence-of-mental.
[16] A brief history of technological history: https://www.oscarmoxon.com/post/a-brief-history-of-technological-history.
[17] Researchers like Dr. Kenneth Hayworth at the Howard Hughes Medical Institute have been working on brain preservation techniques such as plastination, which could potentially allow for high-resolution scanning and digitization of neural tissue (source: https://www.brainpreservation.org/team/dr-kenneth-hayworth/).