The next frontier of generative AI has arrived. OpenAI has officially launched GPT-5 and it’s already dividing opinion. The new model is faster, sharper, and more adaptable than its predecessors, introducing built-in personas and the ability to create software on demand. Advocates see it as a breakthrough for education, research, and creative work. Critics argue it feels sterile, hides too much of its process, and falls short of the leap toward AGI that many expected. Whatever field you’re in and whichever side you’re on, GPT-5 is set to reshape how we teach, govern, and imagine the future.
Let’s take a look at what GPT-5 brings to the table. There’s much to cover, including:
- How it feels to talk with it
- The newly built-in personas
- The retirement of the prior version’s multiple models
- The implications for learning
- The rapid pace of adoption
- How close we’re getting to AGI (hint: not very)
- Criticisms of GPT-5
- What it means for educators and policymakers
How it feels to talk to it
It really feels like talking to an expert.
That’s how OpenAI CEO Sam Altman framed one of his interactions with GPT-5 in a recent conversation. It’s not just faster output or more data that this LLM provides. It can reason through layered, complex questions, solve problems that stump most people, and explain difficult concepts with uncanny clarity. It doesn’t just answer questions. It engages like a seasoned specialist.
Altman offered a telling moment from his own experience in a conversation with Theo Von on the This Past Weekend podcast, Ep. 599 (posted July 24, 2025). He said,
This morning I was testing our new model and I got a question.
I got emailed a question that I didn’t quite understand. And I put it in the model, this GPT-5, and it answered it perfectly.
And I really kind of sat back in my chair and I was just like, oh man, here it is moment…
I felt like useless relative to the AI in this thing that I felt like I should have been able to do and I couldn’t.
It was really hard. But the AI just did it like that. It was a weird feeling.
Let that sink in for a moment – a feeling of being useless next to an AI model. That flash of anxiety, even for the CEO of OpenAI, is both startling and unsettling.
For educators and, honestly, everyone, that raises a host of questions. What happens when technology becomes a peer, or even our superior? What is its impact on our emotions? Careers? The structure of society as a whole? These are not speculative questions for some distant future; they are already here as generative AI progresses.
More on that later. First, let’s see what OpenAI GPT-5 can do, and why the rollout has been somewhat rocky.
Custom personas, familiar interfaces
OpenAI is introducing preset personalities for ChatGPT: Cynic, Robot, Listener, and Nerd. These shape tone and style, not just answers, hinting at AI as a social interface where voice and temperament influence trust, motivation, and learning. Many of us have already been doing this informally through careful prompting, but the new presets make it easier, lowering the need for prompt-engineering skills. It’s a sign of where things are heading: in a few years, “prompt engineering” may feel like a quaint relic of the early generative AI era.
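If you want to see what that informal approach looks like in code, here’s a minimal sketch using OpenAI’s Python SDK to impose a persona through a system prompt. The model identifier and the persona wording are my own assumptions for illustration, not OpenAI’s built-in presets.

```python
# A minimal sketch of the informal approach: shaping tone with a system
# prompt rather than a built-in preset. Requires the openai package and
# an OPENAI_API_KEY in your environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5",  # assumed identifier; use whatever model your account exposes
    messages=[
        {
            "role": "system",
            "content": (
                "You are 'Robot': terse, literal, and emotion-free. "
                "Answer in short declarative sentences with no filler."
            ),
        },
        {"role": "user", "content": "Explain spaced repetition for my study group."},
    ],
)
print(response.choices[0].message.content)
```

The built-in presets presumably do something similar behind the scenes, tuned and tested at a scale individual prompters can’t match.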
For faculty, instructional designers, and students, these personalities will raise fresh questions. Can a dry “Robot” tone cut distractions during study sessions? Could a “Listener” persona improve student support chats? Tone now matters as much as accuracy. And these four are only the start. Over time, AI will anticipate our needs without us choosing a persona at all.
If that sounds like science fiction, it’s not — it’s the direction we’re moving, and GPT-5 makes the first step feel deceptively casual.
Finally, the end of multiple models
For those of us who are heavy generative AI users, one long-running frustration has been the tangle of models OpenAI has released (along with their confusing naming conventions). There was GPT-4o for most daily work, GPT-4.1 (plus its mini and nano versions), and the o3 and o4-mini “reasoning” models that traded speed for depth. Picking the right one often felt like navigating a menu designed to confuse you.
With OpenAI GPT-5, all of that is gone, the confusion replaced by a single entry point that selects the right mode when it needs to. As Ethan Mollick points out,
A surprising number of people have never seen what AI can actually do because they’re stuck on GPT-4o, and don’t know which of the confusingly-named models are better.
GPT-5 does away with this by selecting models for you, automatically. It is not one model so much as a switch that routes among multiple GPT-5 models of various sizes and abilities. When you ask it for something, the system decides which model to use and how much effort to put into “thinking.” It just does it for you.
GPT-5 now feels more like an invisible portal, routing your request to whichever engine it deems best. For the majority of users, that will make life easier: they’ll perform like expert users without ever having to learn the capabilities of the various models.
But for others, it will seem more like a lock on the door to the wizard’s room.
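To make the routing idea concrete, here’s a toy dispatcher in Python. This is not OpenAI’s actual mechanism – the tier names and heuristics are assumptions for illustration – but it captures the pattern: inspect the request, then pick a model and an effort level on the user’s behalf.

```python
# A toy illustration of the routing pattern, not OpenAI's real router:
# inspect the prompt, then pick a (hypothetical) model tier.
def route(prompt: str) -> str:
    """Return an assumed model tier based on rough effort cues."""
    heavy_cues = ("prove", "step by step", "analyze", "debug")
    needs_depth = len(prompt) > 500 or any(cue in prompt.lower() for cue in heavy_cues)
    return "gpt-5-thinking" if needs_depth else "gpt-5-fast"

print(route("What's the capital of France?"))                   # gpt-5-fast
print(route("Prove that the sum of two odd numbers is even."))  # gpt-5-thinking
```

The real system presumably weighs far richer signals than length or keywords, which is exactly why power users miss the manual switch: the heuristics are invisible and not theirs to tune.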
Coding on demand and the implications for learning
At the core of GPT-5 is a leap in code generation. In a live demo, the model produced a multilingual language-learning app in minutes, with quizzes, flashcards, and a simple game. All without a single line of code from the user.
I don’t think programmers need to file for retirement just yet. But if you work in the field, you should be thinking hard about your future.
Altman calls this “software on demand.” In education, it means a faculty member with minimal technical background can generate working tools from a prompt. AI-native instructional design moves from idea to practice. This feature isn’t perfect yet, but it will get there. Remember how GPT-3.5 struggled with basic coding and even spelling? We’re way beyond that point, and the models will continue to improve.
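To give a flavor of what that means in practice, here’s the kind of tiny, self-contained tool a single prompt can now produce. This flashcard quiz is an illustrative sketch of the genre, not the app from OpenAI’s demo.

```python
# Illustrative only: a minimal flashcard quiz of the sort a one-sentence
# prompt can now generate end to end.
import random

CARDS = {"hola": "hello", "gracias": "thank you", "libro": "book", "perro": "dog"}

def quiz(rounds: int = 3) -> None:
    """Ask a few random Spanish-to-English cards and keep score."""
    total = min(rounds, len(CARDS))
    score = 0
    for word in random.sample(list(CARDS), k=total):
        answer = input(f"Translate '{word}': ").strip().lower()
        if answer == CARDS[word]:
            print("Correct!")
            score += 1
        else:
            print(f"Not quite – it's '{CARDS[word]}'.")
    print(f"Score: {score}/{total}")

if __name__ == "__main__":
    quiz()
```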
This also affects IT in the education and corporate sectors. GPT-5 lowers the threshold for software creation across campus, for pedagogy, research workflows, and administrative tasks. It will also profoundly challenge the work of IT departments by creating new security risks – scams, phishing exploits, and malicious code all become easier to produce.
The uncomfortable truth here is that user skills are not keeping pace with the technology. And perhaps they never will, given how rapidly GenAI is advancing.
The rapid pace of adoption continues
The release of GPT-5 lands in the middle of an AI acceleration wave. Google, Anthropic, xAI, and Meta are all rolling out updated models with multimodal capabilities and AI agents. Many are already embedded in campus and corporate life — sometimes through approved pilots, more often through personal accounts that bypass oversight entirely. I’ve lost count of the number of employees I’ve met at workshops and conferences who admit they’d never tell their supervisors how much they rely on generative AI.
Models keep improving in reasoning, memory, and coding while costs drop, opening access for students and independent creators. But that rapid pace of adoption creates new headaches for institutions. The old process of carefully reviewing and introducing technology is over. Students, faculty, and staff will adopt whatever works best for them, whether it’s sanctioned or not.
In the past, IT departments and tech leaders set the pace, often pulling the rest of the organization (sometimes reluctantly) forward. Now they’re struggling to keep up, trying to make sure everyone stays on the same track while the race is already in full sprint. The challenge is no longer which model to choose — it’s managing an ecosystem that changes under your feet. GPT-5 may be the newest hurdle, but it won’t be the last.
Scaling access but nowhere near AGI
As adoption accelerates, OpenAI is scaling GPT-5’s reach. The company reports nearly 700 million weekly users and 5 million business customers. It has even offered the U.S. government access for just $1 a year, following the General Services Administration’s approval of OpenAI, Google, and Anthropic as official vendors. Generative AI is no longer just a consumer tool — it’s becoming part of the public sector’s infrastructure.
But while OpenAI GPT-5 is more accessible and significantly better, it’s still not close to AGI. As Sam Altman noted in a call with reporters the day before its release,
This is clearly a model that is generally intelligent, although I think in the way that most of us define AGI, we’re still missing something quite important, or many things quite important.
One big one is, you know, this is not a model that continuously learns as it’s deployed from the new things it finds, which is something that to me feels like AGI. But the level of intelligence here, the level of capability, it feels like a huge improvement.
So we’re getting closer, but this is not AI with a mind of its own – even if, in use, there are moments when it feels like it’s learning autonomously. We still have a little reprieve from the ethical challenges ahead, but don’t get too comfortable. GPT-5 may be the newest model, but it’s most definitely not the last.
Criticism of OpenAI GPT-5
Every new tech release brings its share of critics, and OpenAI GPT-5 is no exception. It’s quickly become a target of mounting criticism on social media, especially on Reddit. As Mashable notes,
A quick glimpse of the ChatGPT subreddit (which is not affiliated with OpenAI) shows scathing reviews of GPT-5. Since the model began rolling out, the subreddit has filled with posts calling GPT-5 a ‘disaster,’ ‘horrible,’ and the ‘biggest piece of garbage even as a paid user.’
Most complaints center on three issues:
- Loss of control. Users can no longer choose which model to run. GPT-5 decides in the background, showing only a vague “thinking” message. For those with workflows built around specific models, this is infuriating. Ironically, OpenAI had also been criticized in the past for offering too many models.
- Loss of personality. GPT-5 feels more sterile, less human than GPT-4o. The older model, when customized, sometimes veered a little too close to the movie Her — but it had a spark GPT-5 seems to lack.
- Perceived drop in intelligence. Some users report that GPT-5 takes multiple iterations to deliver answers that GPT-4 would have nailed on the first try.
Sam Altman responded to some of the criticism during his Reddit AMA. He said OpenAI will make it clearer which model is answering when the system switches in the background. On the quality of GPT-5’s responses, he explained that an “autoswitcher” wasn’t working as planned and that the issue had been resolved. As for the personality question, it remains to be seen whether we return to what GPT-4o offered. We suspect there will be several tweaks – and more debates over its responses – in the coming weeks.
What OpenAI GPT-5 means for institutions, educators, and policymakers
While GPT-5 may not be what many expected it to be (something close to AGI), it is still a pivot point. And you’ll quickly feel it when you use it. GPT-5 is faster, more fluent, and more autonomous than earlier versions. It builds software, teaches concepts, simulates dialogue, and adapts its behavior, all from natural language prompts.
For educators, the question is not whether students will use these systems. They already do. The question is how we respond: redesigning curricula, developing AI literacy skills, building new assessment models, and adopting strategies that move beyond detection toward integration. For the past two years, we’ve been expecting the push to use AI detection software to come to an end, but it hasn’t. Maybe OpenAI’s GPT-5 will be the final nail in that coffin.
For policymakers and academic leadership, GPT-5 heightens the need for innovative governance. Accessibility, equity, privacy, institutional alignment – and perhaps most importantly – what it means to be human now sit at the center of the AI conversation.
The era of generative AI continues to push forward. Despite a somewhat rocky introduction, OpenAI GPT-5 is its strongest expression so far. It will force leaders in education, policy, and the creative fields to decide whether they will adapt with it – or be outpaced by it.
Emory Craig is a writer, speaker, and consultant specializing in virtual reality (VR) and generative AI. With a rich background in art, new media, and higher education, he is a sought-after speaker at international conferences. Emory shares unique insights on innovation and collaborates with universities, nonprofits, businesses, and international organizations to develop transformative initiatives in XR, GenAI, and digital ethics. Passionate about harnessing the potential of cutting-edge technologies, he explores the ethical ramifications of blending the real with the virtual, sparking meaningful conversations about the future of human experience in an increasingly interconnected world.