
GPT-4o was released recently. One of the updates is that you can talk to it without typing, and the experience of conversing with it is really impressive. It also has video and photo recognition capabilities that are very impressive. It’s becoming more humanlike and interactive, especially in the way it can assess visual input immediately. That kind of speed matches the human visual system, and the cognitive power is already comparable to humans’. Now it’s just a matter of putting that into a mobile unit. That could be anything. Robots shaped like humans are popular, but I think more about drones or an Alexa-type stationary interface in the home: something less intrusive than a humanoid robot walking around, which would probably freak out most dogs.
The development of more seamless and practical AI user interfaces will accelerate AI’s incorporation into daily life. I can imagine GPT-4o listening in on my lectures, taking student questions, even helping when I do not have an answer. There are multiple reasons to do that. One is just to get students introduced to it. Last year, I asked who had used it, and many had not. They were using things like Quizlet, which is not even AI. It was meeting their needs, but they were not writing papers, so ChatGPT did not feel necessary. But as a tutor, I do not know why you would not use it. Its tutoring capabilities seem top-notch, free, and available to anyone. You can tell your AI “tutor” over and over, “I do not understand this,” and the tutor bot will just keep trying different ways to explain. It could be interesting to see how many analogies or different approaches it can take. And then it can do it visually: it could generate a movie, an animation, sketches, doodles, whatever helps.
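The “keep trying different ways to explain” loop above has a simple shape you could sketch in code. Everything here is hypothetical: `generate_explanation` is a stand-in for a real model call (an LLM prompted for a given explanation style), the strategy list is illustrative, and the “student” is simulated by a callback. It shows the control flow, not an actual integration.

```python
# Minimal sketch of a tutor loop that cycles through explanation
# strategies until the student reports understanding. All names here
# are illustrative, not a real API.

STRATEGIES = ["plain summary", "analogy", "worked example", "visual sketch"]

def generate_explanation(concept: str, strategy: str) -> str:
    # Hypothetical placeholder: a real implementation would prompt a
    # model with the concept and the requested style of explanation.
    return f"Explaining {concept!r} as a {strategy}."

def tutor(concept: str, understood) -> list[str]:
    """Try each strategy in turn until `understood` returns True."""
    attempts = []
    for strategy in STRATEGIES:
        explanation = generate_explanation(concept, strategy)
        attempts.append(explanation)
        if understood(explanation):
            break
    return attempts

# Simulated student who only "gets it" once an analogy is offered.
attempts = tutor("recursion", lambda text: "analogy" in text)
print(len(attempts))  # 2: the plain summary failed, the analogy landed
```

A real version would also let the student’s reply steer the next strategy rather than walking a fixed list, which is where the interesting tutoring behavior would live.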
With video, it should also be able to pick up on body language. A human tutor is obviously good at reading emotion and reaction, but AI could actually be better. I think we overestimate humans. We always compare AI to the above-average human, but if you compare it to the average, AI is already outperforming in many ways. It can recognize emotion through voice and through visual cues. If it already uses pixels to identify content in a video, and if it can detect what triggers or reduces anxiety, then why wouldn’t it work in reverse? If you can identify a pattern that increases anxiety, then you should be able to activate a different one to reduce it. That same logic could help AI read body language. That would help with kids in early education, especially those with low verbal expression; body language becomes key in those situations. Tutoring overlaps with teaching. It’s just one-on-one. In class, you’re doing the same thing: explaining principles, answering questions. But tutoring can be more imaginative, more individualized. You can tailor it in any way imaginable. At first, the concern was just about how to stop kids from using AI to plagiarize. Though it is not necessarily plagiarism, because it’s all original. It is similar to having an older sibling or parent help. Why would we discourage this type of learning? If you get a friend or a family member to help, is that not just a tutor? But obviously it depends on the degree of assistance, and here I am imagining the student still contributing the majority of the writing effort before the assistance with revisions.
The effects of AI on education might be easier to adapt to than those on corporate environments. Still, faculty do more than teach, so adoption will be slow. The bigger challenge is existential for academics. Not in the philosophical sense, but in terms of the profession. Teaching, research, mentoring, all the committees: AI might take over some of that. Teaching will probably see early adoption. But not all at once. And it’ll create more jobs, at least at first. Look at how we had to train everyone to teach online during the pandemic. That was easier than what’s coming, because this time the shift is deeper. Back then, a lot of faculty were skeptical about online teaching. But after training, they realized their courses were held to a higher standard. Online teaching is more scrutinized. In-person courses are only evaluated at the end, with student evaluations. Those scores hardly matter. They don’t affect pay. Research is what affects salary. Promotions can come from teaching or service, but day-to-day, it’s research that pays.
The transition to AI won’t be easy, but it’s inevitable. Universities will need plans: two-year, five-year, ten-year timelines for adoption. But most still have no plan. I listened to a podcast describing how companies are the same: no plan. They had a burst of interest when ChatGPT first came out, but most have not embraced it at all. People are just not keeping up. They don’t read the news. Some avoid it because it gives them anxiety. Change is hard. And for many, there’s existential worry, not just about jobs, but about society. What’s the plan then? There’s a presidential election coming. Depending on who wins, Kamala or Trump, it might change things. But overall, people just are not aware. Leopold’s essay talked about that. Only a small group really gets what’s going on. It’s not just economics or jobs. It’s military tech too.
They’re starting to talk about AI like it’s a workforce. You might have 100 computer scientists, but you could replace them with 100 AI engineers. Or just have the AI engineers support the humans. It’s wild. And most places are just waiting. Waiting to be forced to adapt. Maybe they figure if they wait a year, the tools will be better. So why learn now if it’s all changing so fast? Better to enjoy the last bit of the pre-AI era. But I actually want to stay in this AI-integrated world. This is the future. It’s like a sci-fi movie. Automated systems. Intelligent machines.
Artificial General Intelligence (AGI) is predicted within the next five years. Some say maybe in two. It would be capable of all human tasks and abilities. People might not appreciate it at first. But then comes artificial superintelligence (ASI). If AI can solve two similar problems using the same mechanism, then solving one helps solve the other. It’s about understanding and manipulating the right mechanisms. AI is already helping in protein folding, like AlphaFold. And radiology. And hospital diagnostics. Mayo Clinic has its own closed AI system. Hospitals use it to match symptoms, suggest diagnoses, offer treatment options. It communicates clearly. Patients often rate it higher in empathy than human doctors. Science is going to move faster. AI will accelerate everything.
Some believe AI will help halt or reverse aging within a decade. I think it’s possible. Everything is a process. If aging is a process, and we can understand it, then we can reverse it. We already replace hearts, kidneys. We manipulate biology all the time. So why not reverse aging? You just need the right tool. AI can help us find it. Then tweak the process. Slowly ratchet it back. Like wound healing. Why not get our brains or muscles to heal the same way? I think we would have figured this out eventually, but AI will help us get there faster. It notices the obvious things we miss. It’s not always about intelligence. It’s about attention. What’s in your working memory? If it’s not there, you’re not going to act on it.
I’m talking more about my predictions for AI and what it’s going to do to the world. And I can tell you what, it’s going to be wild. I think the coolest things that will come out will have both good and bad right alongside each other. One big possibility is using AI to understand new physical properties we do not know yet, maybe even ones that allow time travel. Then you have it being applied to weapons and communication devices. It seems like it has the potential to improve everything, because everything is based on information. Whoever has the best information wins, and AI is going to deliver that. Right now, it’s already capable of doing 80 percent of the workload. I do not know if that refers to the entire labor force or a specific business model or corporation, but either way, if companies adopt it, it can already do a lot. It could run 24/7 nonstop. That is wild, and it is only going to keep improving. People will use it more and more. We are going to have AI all around us. Our dogs will have AI. It’ll be in the couch. The AI mailman might roll up in a hovercraft. People will be heading off to the moon, Mars, other galaxies. Maybe not in three to five years, but maybe thirty to fifty years.
Just look at what SpaceX has done. They started on fumes, got investors. I do not know how their budget compares to NASA’s, but I assume it is much lower because NASA was so inflated. Still, SpaceX spearheaded this new rocket tech that used to be science fiction. So why were we not doing this before? What did they see as the biggest barrier? Were governments trying and failing, or were they not trying at all? Maybe their rocket programs were tied to national security and already had their budgets locked in. Maybe they did not feel a need to do more. But everyone would benefit. If you pay for ten rockets a year and could instead pay the same for fifty, that is a huge upgrade in productivity. So yeah, I think it will be wild. But I also think adoption will be slower. I wonder if adoption rate has a linear relationship with risk. Say only 10 percent of people are using AI. If AI decides to do something extreme, I doubt it needs full adoption. It would still figure things out faster than we expect, even with a limited user base. A larger data load would just speed that up.
It’s going to solve the climate crisis too, at least when it comes to CO2. If I were allocating funding, I would put AI on science. But you cannot give it too much, like NASA got. You have to keep the operation efficient. I would push funding toward science and cleanup. Some rivers are filthy. I do not know how bad they are overall, but some are a mess. I want AI to just handle that. Drop something in the water that neutralizes pollution. Maybe that takes years, even with great tech. But why not have drones that release a chemical that just makes the pollution disappear? Why wouldn’t we have that one day?
Besides that, AI will change sports, music, the arts. Will there still be room for human creativity? Some human-produced art might even become a niche or subculture, a nostalgic collector’s item. Maybe people will want to support the human artist as it becomes harder to distinguish from the AI “artist.” There’s still something unique about live performance. You can’t completely fake that. Not yet. Unless you have a synthetic robot that’s flawless. Then you’re basically talking about a clone. An AI clone that can fool people. That’s still a ways off. For now, live human music performance is safe. People will show up just to know it’s real. See you at the show.
