
Recently, I tuned into the Lex Fridman podcast featuring Stephen Wolfram, the computer scientist behind the Wolfram Alpha answer engine. Although his manner of speaking can be somewhat annoying at times, everything he discussed was fascinating. He covered the intersection of neuroscience and AI, exploring subjects ranging from job-market implications to how ChatGPT and other LLMs could rewrite theories of linguistics. He talked a lot about how AI will change academia, focusing mainly on computer science departments. Wolfram predicts that computational literacy will soon be essential education, and he is currently working on an introductory textbook on computational principles. Another intriguing prediction was that we may soon be speaking a hybrid language: a blend of English (or any other spoken language) and computational language. He even shared an anecdote about visiting a school where a group of kids, fans of Wolfram Alpha, actually spoke to him in the Wolfram Alpha language. He said it was pretty wild, but he couldn't understand it.
Another topic that came up reminded me of a professor I had in graduate school. He was an electrophysiologist who would half-jokingly dismiss synaptic transmission as "leaky neurons." His point was that all the important information is coded in the action potentials, so synaptic events mattered less. Wolfram similarly dismissed the brain's synaptic complexities, for the same reason but from a different perspective. He argued that AI models are, in effect, simpler and more efficient representations of the brain's functions. Indeed, LLMs are doing quite well at matching (or exceeding) some of the brain's cognitive abilities. But while they can hold conversations, write essays, generate code, or solve mathematical problems, these activities and others like them don't encompass the entirety of our brain's capabilities. At least not yet.
Wolfram pointed out that the brain was shaped by the constraints of evolution, implying that it isn't as efficiently designed as it could be. Conversely, most AI systems are built with full knowledge of, and intent toward, their final purpose and function, and are therefore far more efficient at what they do. We also know in complete detail the "anatomical" connections within an AI system, yet our knowledge of the brain's roughly 100 trillion synaptic connections is still incomplete.
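To make that contrast concrete, here's a minimal sketch in Python with PyTorch (the toy network is my own illustration, not anything from the podcast) showing that every "connection" in an artificial network can be enumerated exactly, weight by weight:

```python
import torch.nn as nn

# A toy feedforward network standing in for "an AI system."
net = nn.Sequential(
    nn.Linear(10, 5),  # 10 inputs -> 5 hidden units: 50 weights + 5 biases
    nn.ReLU(),
    nn.Linear(5, 2),   # 5 hidden -> 2 outputs: 10 weights + 2 biases
)

# Every "synapse" here is a parameter we can list and read out exactly,
# with no anatomical uncertainty, unlike in a real brain.
total = 0
for name, param in net.named_parameters():
    print(f"{name}: shape {tuple(param.shape)}")
    total += param.numel()
print(f"total connections (parameters): {total}")  # prints 67
```

There's no connectome project needed for this network; its complete wiring diagram is just a list of tensors.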
All this leads to a critical question: is the knowledge that neuroscience is striving to uncover, the complex synaptic events for example, really necessary? If the end goal is to understand patterns of neural activity, does it matter how an action potential is triggered? If the frequency and location of action potentials are what truly matter in the end, are some efforts in neuroscience a waste of time? More likely, both disciplines will continue to thrive as they have, but the emergence of these AI-based neural networks has certainly opened new opportunities for experimentation. It's an exciting time; this AI species is evolving fast.
