The next frontier of generative AI has arrived. OpenAI has officially launched GPT-5, a significant upgrade to a series of models that ignited the global AI boom. Rolling out to free and paid ChatGPT users this week, the new model brings sharper reasoning, faster performance, and more natural interactions. It also opens fresh possibilities for teaching, research, and creative development.
For leaders in education, policy, and creative fields, GPT-5 signals a shift from a tool we use to a collaborative partner. That was already happening with OpenAI’s previous versions, but the new model is a world apart.
“It really feels like talking to an expert”
That is how OpenAI CEO Sam Altman framed GPT-5 in a recent conversation. It’s not just that the new model has more data or provides faster output. GPT-5 can reason through more complex questions, solve more challenging problems, explain difficult concepts with clarity, and generate high-quality responses across a broader range of domains. It acts and responds like an expert.
In a conversation with Theo Von on the This Past Weekend podcast (Ep. 599, posted July 24, 2025), Sam Altman said:
This morning I was testing our new model and I got a question.
I got emailed a question that I didn’t quite understand. And I put it in the model, this GPT-5, and it answered it perfectly.
And I really kind of sat back in my chair and I was just like, oh man, here it is moment…
I felt like useless relative to the AI in this thing that I felt like I should have been able to do and I couldn’t.
It was really hard. But the AI just did it like that. It was a weird feeling.
Let that sink in for a moment: a feeling of being useless next to an AI model. It’s an amazing, surreal, and perhaps frightening moment. For educators and, honestly, everyone, the question is immediate. What changes when a technology begins to supersede you? What is its impact on our emotions, careers, and the structure of society as a whole? More on that in the future, but first, let’s see what OpenAI GPT-5 can do.
Custom personas, familiar interfaces
OpenAI is introducing preset personalities for ChatGPT, including Cynic, Robot, Listener, and Nerd. These options shape tone and style, not just answers. They hint at AI as a social interface, where voice and temperament influence trust, motivation, and learning. Many of us have already been doing this through the way we prompt ChatGPT in our work. But the new personalities will make it easier and lessen the need for users to work on their prompting skills. As we move toward future models, it’s clear that the whole “prompt engineering” thing will become a laughable artifact of the early days of generative AI.
For faculty, instructional designers, and students undertaking their own work, this will raise new questions. Can a dry “Robot” tone reduce distractions in study sessions? Could a “Listener” persona improve student support chats? Tone now matters as much as accuracy. And we’re sure that these limited preset personalities are just the beginning. There will be more in the future until generative AI becomes so advanced that it intuitively understands your needs.
If that sounds like something out of science fiction, it’s rapidly becoming our reality.
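As noted above, many of us already shape ChatGPT’s tone through prompting. A minimal sketch of how the preset personas might be approximated today with a system prompt; the persona names come from OpenAI’s announcement, but the prompt wording and the `build_messages` helper are illustrative assumptions, not OpenAI’s implementation:

```python
# Hypothetical persona presets expressed as system prompts.
# Persona names are from OpenAI's announcement; the prompt text
# is a guess at how tone-shaping might work, not OpenAI's wording.
PERSONAS = {
    "Cynic": "Be dryly skeptical. Question the assumptions in the prompt.",
    "Robot": "Answer tersely and factually. No small talk or filler.",
    "Listener": "Respond warmly. Acknowledge the user's concern before advising.",
    "Nerd": "Be enthusiastic and detail-rich. Add relevant context and trivia.",
}

def build_messages(persona: str, user_text: str) -> list[dict]:
    """Assemble a chat payload whose system message sets the persona's tone."""
    return [
        {"role": "system", "content": PERSONAS[persona]},
        {"role": "user", "content": user_text},
    ]

# The same question, framed two ways: only the system message differs.
print(build_messages("Robot", "Explain photosynthesis.")[0]["content"])
print(build_messages("Listener", "Explain photosynthesis.")[0]["content"])
```

The design point is that a persona is orthogonal to the question itself: the user message stays the same while the system message steers tone, which is exactly why tone can matter as much as accuracy in study or support settings.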
Coding on demand and the implications for learning
At the core of GPT-5 is a leap in code generation. In a live demo, the model produced a multilingual language-learning app in minutes, with quizzes, flashcards, and a simple game. No code from the user. I don’t think programmers need to file for retirement just yet. But if you work in programming, or in many other fields, you should be thinking about your future.
Altman calls this “software on demand.” For educators, a faculty member with minimal technical background can generate working tools from a prompt. AI-native instructional design moves from idea to practice. This feature isn’t perfect yet, but it will get there. Remember how GPT-3.5 had trouble with basic coding and even spelling? We’re way beyond that point, and the weaknesses we can still point to in OpenAI GPT-5 will soon be corrected.
This also affects IT in the education and corporate sectors. GPT-5 lowers the threshold for software creation across campus, for pedagogy, research workflows, and administrative tasks. It will also profoundly challenge the work of IT departments as scams and phishing exploits become that much easier to create. As always, but especially in today’s world, user skills are not keeping pace with the technology. And perhaps they never will, given how rapidly GenAI is advancing.
Scaling access but not yet AGI
OpenAI reports nearly 700 million weekly users and 5 million business customers. It is also extending access to the U.S. government with its recent offer to provide federal agencies access for the nominal cost of $1 a year. The General Services Administration’s decision to approve OpenAI, along with Alphabet Inc.’s Google and Anthropic, as vendors is a sign that generative AI is becoming embedded infrastructure.
But if OpenAI GPT-5 is more accessible and significantly better, it’s still not close to AGI. As Sam Altman noted in a call with reporters the day before its release:
This is clearly a model that is generally intelligent, although I think in the way that most of us define AGI, we’re still missing something quite important, or many things quite important.
One big one is, you know, this is not a model that continuously learns as it’s deployed from the new things it finds, which is something that to me feels like AGI. But the level of intelligence here, the level of capability, it feels like a huge improvement.
So we’re getting closer, but this is not AI with a mind of its own, though at times it may feel as if it’s learning autonomously. We still have a little reprieve from the ethical challenges ahead, but don’t get too comfortable. GPT-5 may be the newest model, but it’s most definitely not the last.
Finally, the various models are gone
For those of us who are heavy generative AI users, one of our frustrations has been the variety of models OpenAI has released, along with their incredibly confusing naming conventions. There was GPT-4o, which most of us used for day-to-day work, GPT-4.1 (and its mini and nano versions), and the o3 and o4-mini series. The “reasoning” models took longer to produce an answer, but generally offered better results for specific tasks. With OpenAI GPT-5, all of that is gone, the confusion replaced by a single interface that selects the appropriate mode as needed. As Ethan Mollick points out,
A surprising number of people have never seen what AI can actually do because they’re stuck on GPT-4o, and don’t know which of the confusingly-named models are better.
GPT-5 does away with this by selecting models for you, automatically. GPT-5 is not one model as much as it is a switch that selects among multiple GPT-5 models of various sizes and abilities. When you ask GPT-5 for something, the AI decides which model to use and how much effort to put into “thinking.” It just does it for you.
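The routing idea can be sketched in a few lines. This is a hypothetical illustration only: OpenAI has not published GPT-5’s routing logic, and the model-tier names and the difficulty heuristic below are invented for the example.

```python
# A toy router that picks a model tier based on estimated task difficulty.
# Tier names and heuristics are illustrative assumptions, not OpenAI's.

def estimate_effort(prompt: str) -> str:
    """Crude stand-in for whatever difficulty classifier the real router uses."""
    hard_markers = ("prove", "debug", "step by step", "analyze")
    if len(prompt) > 500 or any(m in prompt.lower() for m in hard_markers):
        return "high"
    return "low"

def route(prompt: str) -> str:
    """Return the (hypothetical) model tier the router would hand the prompt to."""
    if estimate_effort(prompt) == "high":
        return "gpt-5-thinking"   # slower, more deliberate reasoning
    return "gpt-5-main"           # fast default for everyday questions

print(route("What's the capital of France?"))            # → gpt-5-main
print(route("Debug this recursion and prove it halts"))  # → gpt-5-thinking
```

The user never sees this decision, which is precisely the trade-off the article describes: easier for most people, opaque for power users who want to choose the tier themselves.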
For the majority of users, that will make life easier as GPT-5 will take what it thinks is the best approach. But as Mollick notes, it will become an issue for the rest of us. There’s no way to specify which model to use, and indeed, GPT-5 doesn’t reveal the process by which it selects an alternative model. For now, at least, we’re left in the dark on this.
I have a feeling our new follow-up prompt will be: Can you take some more time to think about this?
A tightening race and rapid adoption
The release of GPT-5 comes amid a broader acceleration in generative AI. Google, Anthropic, xAI, and Meta continue to roll out updates to their own models, adding multi-modal capabilities and AI agents. Many are already embedded in campus and corporate life, sometimes through approved pilots, but more often via personal accounts that bypass review entirely. I can’t begin to count the number of employees who have told me they’d never let their supervisors know how much they use generative AI.
Models are improving in reasoning, memory, and coding capabilities, while costs continue to drop. Of course, this opens up access for students but raises new challenges for education and other organizations. The old days of carefully vetting and introducing emerging technologies are over, giving way to a world where students, faculty, and employees will adopt whatever they feel is most useful.
There was a time when IT departments and tech leaders were the lead horse in the race, often dragging the rest of the pack (sometimes reluctantly) with them. As generative AI becomes more widely used, they may be moving toward the back of the pack, trying desperately to keep everyone running on the same race course.
The challenge is no longer choosing a model; it’s managing an evolving ecosystem of them. And OpenAI’s GPT-5 is not the last of the challenges we’ll face.
What OpenAI GPT-5 means for institutions, educators, and policymakers
While GPT-5 may not be what many expected it to be (something close to AGI), it is still a pivot point. And you’ll quickly experience that when you use it. GPT-5 is faster, more fluent, and more autonomous than earlier versions. It builds software, teaches concepts, simulates dialogue, and adapts its behavior, all from natural language prompts.
For educators, the question is not whether students will use these systems. They already do. The question is how we respond, with curriculum redesign, AI literacy, new assessment models, and strategies that move beyond detection toward integration. We keep thinking the push to use AI detection software will come to an end, but it hasn’t. Maybe OpenAI’s GPT-5 will be the final nail in the coffin.
For policymakers and academic leadership, GPT-5 heightens the need for innovative governance. Accessibility, equity, privacy, institutional alignment, and what it means to be human now sit at the center of the AI conversation.
The age of generative presence has begun. OpenAI GPT-5 is its strongest expression so far.
Emory Craig is a writer, speaker, and consultant specializing in virtual reality (VR) and generative AI. With a rich background in art, new media, and higher education, he is a sought-after speaker at international conferences. Emory shares unique insights on innovation and collaborates with universities, nonprofits, businesses, and international organizations to develop transformative initiatives in XR, GenAI, and digital ethics. Passionate about harnessing the potential of cutting-edge technologies, he explores the ethical ramifications of blending the real with the virtual, sparking meaningful conversations about the future of human experience in an increasingly interconnected world.