I am gaining a deeper understanding of AI and how our knowledge of the brain influenced its creation. When AI scientists modeled artificial neural networks loosely on the brain, it led to rapid advances in the ability of AI systems to display complex, human-like reasoning and thinking. It's fascinating to see the convergence of these sciences. We are going to see a huge increase in collaboration between neuroscience and AI, and with psychology as well, to help humans adjust to their new identity in this world. There will also be increased interest in studying the brain to further advance AI systems, and in using AI to model and deepen our understanding of the brain.

AI systems will continue to surpass humans in every way. We are products of evolutionary design, the result of a process that builds on top of already developed structures and functions. Imagine being tasked with building a skateboard, then building a bike by adding to the skateboard, and finally building a car by adding to the skateboard-bike contraption. You would end up with an odd-looking, poorly functioning machine, especially compared to the cars we have today, which were designed without those constraints. AI, by contrast, was designed purposefully, with the end goal in mind, so it isn't burdened by unnecessary redundancies or competing parts and functions. Compared to the evolved brain, AI is a very efficient system. As Geoffrey Hinton, the godfather of AI, has said, "it is already much better than the brain at doing what it does so far, and that includes reasoning and cognition."

So, given that the architecture of AI systems such as ChatGPT and other Large Language Models (LLMs) was purposefully designed and is therefore well understood, I was surprised to hear that AI scientists don't actually know the entirety of how these systems do what they do. They exhibit emergent properties that can't yet be explained. Interestingly, just as we try to understand the brain by assessing behavioral output in response to different inputs, the same approach is being taken to understand LLMs. As anyone who has interacted with LLMs knows, the text output you receive is highly sensitive to the exact nature of the input prompt. And in my experience so far, LLMs rival humans in the complexity of this input-to-output mapping, essentially what we would call cognition. One interesting parallel between the brain and LLMs is the "temperature" setting, a sampling parameter that adjusts the randomness, or creativity, of the word generation. This is a fascinating feature, and it mirrors how humans produce different behaviors under different settings, contexts, and emotional states. I suspect this will spawn a whole new field of psychology, essentially a discipline applying empirical psychological methods to characterize the "behavioral" nature of these advanced LLM systems.
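To make the temperature idea concrete, here is a minimal Python sketch of how temperature-scaled sampling works in principle. The vocabulary and logit values are invented for illustration, and real LLMs operate over tens of thousands of tokens, but the mechanism is the same: dividing the model's raw scores by the temperature before converting them to probabilities sharpens or flattens the distribution.

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Sample a token index from raw logits after temperature scaling.

    Low temperature sharpens the distribution (nearly deterministic);
    high temperature flattens it (more random, "creative" output).
    """
    scaled = [l / temperature for l in logits]
    max_l = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(l - max_l) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical next-token scores for the prompt "The sky is"
vocab = ["blue", "clear", "falling", "lavender"]
logits = [4.0, 2.5, 0.5, 0.1]

for t in (0.2, 1.0, 2.0):
    picks = [vocab[sample_with_temperature(logits, t)] for _ in range(10)]
    print(f"temperature={t}: {picks}")
```

At temperature 0.2 the samples are almost all "blue"; at 2.0 the unlikely words start appearing regularly, which is the "creativity" knob the paragraph above describes.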

As I listen to interviews with leading AI developers, they are clearly excited to talk about this; seeing an AI system approach the performance of the human brain has been their goal and life's work. But they thought it would happen much further in the future, with some predictions as far out as 50 years. Those estimates have drastically changed, and some now claim we will reach this point within the next 5 years. Many AI experts were shocked by this and now find themselves playing catch-up, starting to think seriously about it in an academic way, using tools to characterize and understand the behavior of these systems. Some people are attempting to do just that, but most of the current hype centers on using LLMs and finding new applications for them. This has led to an explosion of tools with very practical and life-changing potential, for better or worse.