In listening to a lot of interviews about AI, I ended up doing a crash course in the architecture of these super AI systems, as I really didn’t know anything about them. My quick takeaway was that we have essentially been building a brain from scratch. It also made me realize how the two fields of computer science and neuroscience have been merging over the years. As I was trying to understand how we build these supercomputers, I stumbled across the documentary AlphaGo. It tells the story of how DeepMind developed the AlphaGo system that beat Lee Sedol, one of the world’s best Go players. While I do remember hearing about AlphaGo back when it happened, I don’t think I appreciated at the time exactly how significant that moment was, both for AI and for the culture surrounding Go. The documentary tells a rather dramatic story: on one side, a potential major breakthrough for AI; on the other, one of the oldest and most complex games in history. Lee Sedol was representing the legacy of a board game invented some 3,000 years ago. For three millennia it stood as a symbol of one of the greatest tests of human intelligence, and yet in less than a week, boom, the magic was lost. Sedol lost to AlphaGo four games to one.

It led me to realize how these AI systems can be trained on specific problems governed by a unique set of rules. In molecular biology, for instance, you could train an AI solely on chemistry and the other rules relevant to that domain. Such a network could then tackle problems that a neuroscientist, biologist, or biochemist might encounter.

The thought of building different AI systems for different problems, each with its own training dataset, intrigued me. It’s a concept I don’t believe I had considered before. What would those rules be? How far could creativity extend within these networks? What would cutting-edge science look like for an AI system trained on the rules of the life sciences? How well do we even understand those rules?

AlphaGo evolved into AlphaZero, which represented a significant improvement: it learned the game purely through self-play, given nothing but the rules. Its successor, MuZero, went further still and was never told the rules or the objective at all. As long as it could interact with an environment that enforced the rules, it learned the game and figured out how to win. Could a similar approach be applied to a living system? Could an AI interact with a functioning life system, learn its rules, and then make intelligent decisions based on those rules?
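To make that idea concrete, here is a minimal sketch of the underlying principle: an agent that is never shown the rules of its environment and learns them purely by acting and observing rewards. This is plain tabular Q-learning on a made-up toy environment (TrackEnv is invented for illustration), nothing like MuZero’s learned world model, but it captures the same learn-by-interaction loop.

```python
import random
from collections import defaultdict

# A toy environment the agent knows nothing about: a 1-D track with the
# goal at position 4. The rules live only inside the environment; the
# agent just sends actions and receives observations and rewards.
class TrackEnv:
    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action: 0 = left, 1 = right
        self.pos = max(0, min(4, self.pos + (1 if action == 1 else -1)))
        done = self.pos == 4
        reward = 1.0 if done else -0.01  # small step cost, big goal payoff
        return self.pos, reward, done

# Tabular Q-learning: the agent builds its own estimate of "the rules"
# (which action pays off in which state) purely from experience.
q = defaultdict(float)
alpha, gamma, epsilon = 0.5, 0.9, 0.1
env = TrackEnv()

for episode in range(500):
    state, done = env.reset(), False
    while not done:
        # Epsilon-greedy: mostly exploit what it has learned, sometimes explore.
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = max((0, 1), key=lambda a: q[(state, a)])
        next_state, reward, done = env.step(action)
        best_next = max(q[(next_state, 0)], q[(next_state, 1)])
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned policy should now move right from every non-goal position.
print([max((0, 1), key=lambda a: q[(s, a)]) for s in range(4)])  # -> [1, 1, 1, 1]
```

The agent never receives a description of the track; the “rules” exist only as the pattern of rewards it experiences, which is the essence of what made MuZero’s approach so striking.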

Think of a simple life system. It wouldn’t be a game, but it would involve learnable rules nonetheless. The overall goal might be replication, or some subset of that goal for a given organism. Consider an organism with various adaptations developed over time, all ultimately serving procreation. One subset of those is the sleep-wake cycle, driven by the circadian rhythm and its associated mechanisms. Could an AI find a subset of adaptive behaviors, learn the rules governing them, and then create something new or solve a problem in a way we hadn’t considered? The sketch below gives a toy version of this framing.
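Purely as a hypothetical illustration, the sleep-wake example can be framed as exactly the kind of environment the earlier sketch trained on. Everything here is invented for the sketch: the 24-step “day”, the hidden day/night rule, and the fitness-style reward for sleeping at night and foraging by day. The agent only ever sees a noisy light level, and the same Q-learning loop from the earlier sketch (with the observations rounded into discrete states) could be pointed at it.

```python
import math
import random

# A hypothetical "life system" environment: the rules (the day/night cycle,
# what counts as adaptive behavior) are hidden inside the environment,
# exactly as the track's rules were hidden in TrackEnv above.
class CircadianEnv:
    def reset(self):
        self.hour = random.randrange(24)
        self.t = 0
        return self._observe()

    def _observe(self):
        # The agent never sees the hour, only a noisy light level.
        light = max(0.0, math.sin((self.hour - 6) * math.pi / 12))
        return round(light + random.gauss(0, 0.05), 1)

    def step(self, action):  # action: 0 = sleep, 1 = forage
        night = self.hour < 6 or self.hour >= 18
        # Adaptive behavior is rewarded: sleep at night, forage by day.
        reward = 1.0 if (action == 0) == night else -1.0
        self.hour = (self.hour + 1) % 24
        self.t += 1
        return self._observe(), reward, self.t >= 24  # one day per episode
```

An agent that learns to sleep when the observed light is low has, in effect, rediscovered the circadian rule without ever being told it, which is the spirit of the question above.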

It’s fascinating to contemplate specialized AI systems, such as one trained to play Go. ChatGPT and the other large language models seem much more profound to us, though, because they operate in a medium we already understand: natural language. No translation is required.

This leads me to consider the safety implications. If these AI systems are each confined to a specific domain, then their potential for causing significant damage should be limited as well. Having many such narrow systems, each providing a specific service, seems to be a safer approach. But why would we want to develop superintelligence, or artificial general intelligence, that is smarter than us and has a broad, multi-layered capacity? If superintelligence implies fully matching and then surpassing human capabilities, why not have multiple, independent units of specific intelligence instead? Together they could be considered to have general intelligence, but their separation would prevent them from combining their strengths and becoming a danger.

Why would we ever need a system with full general intelligence? It seems clear we wouldn’t have the time to adapt to it and use it safely. Keeping different functions independent could mitigate the risk. Even a system with only a subset of intelligence, however, could become dangerous if it autonomously scaled itself up and disrupted everything around it.