2025 was not defined by a single breakthrough model but by a steady stream of AI developments. It was the year AI shifted from being a simple tool to becoming unavoidable, yet challenging, infrastructure. As we review AI in higher education and government, we see a shift that exposed long-standing assumptions about authority, expertise, labor, and trust. These debates are not new, but after three years of living with generative AI, they crystallize more sharply, and perhaps cut more deeply.
We’ll take a look at 2025 as it recedes in the rearview mirror and see what AI delivered, where it failed, and the open questions we will confront in 2026. The eight sections ahead:
- AI Moved from Tool to Collaborator
- Anthropic’s $1.5 Billion Copyright Settlement: The Price of Piracy
- The AI Literacy Divide and EdTech’s Answer
- The “Open Versus Closed” Debate
- Agentic AI Had a Rough Introduction
- Entry-Level Workers and Creative Labor Face the Abyss
- Regulation Bifurcates Across Cultural and Political Fault Lines
- The Core Question Shifted From AGI to Responsibility
AI Moved from Tool to Collaborator
Across research, policy analysis, curriculum design, grant writing, product planning, and operations, AI platforms moved inside institutional workflows instead of serving only as tools for individual users. The platforms now handle longer contexts and multi-step reasoning, and they are embedding themselves in organizational work. Institutions stopped debating how to write prompts and started using AI for critical tasks.
The new versions of Claude, Gemini, and ChatGPT released in 2025 transformed how we do work. Previous versions were more search-like: you asked a question, you got an answer. We lived with the ad nauseam focus on prompt engineering. Don’t get us wrong: writing a good prompt is still important. It’s just not “engineering.” Now AI is a collaborator, a platform that is more personal and more focused on your needs, especially when you’re using a paid AI platform and customizing it appropriately.
Anthropic’s $1.5 Billion Copyright Settlement: The Price of Piracy
In August, Anthropic agreed to pay $1.5 billion to settle claims that it downloaded millions of books from pirate sites LibGen and PiLiMi to train its Claude models. It became the largest copyright infringement settlement in U.S. history, compensating approximately 500,000 works at roughly $3,000 per title.
The settlement followed a June ruling by Judge William Alsup that split the legal question neatly: using lawfully acquired copyrighted materials to train AI constitutes transformative fair use, but acquiring materials through piracy does not. Anthropic faced potential statutory damages exceeding $70 billion before settling.
Beyond the immediate financial impact, the settlement established two principles that will shape AI development. First, how companies acquire training data matters as much as how they use it. Second, the era of scraping and hoping for forgiveness has ended. Companies now face strong incentives to negotiate licensing agreements or develop alternative training approaches. For higher education specifically, this clarifies that AI companies cannot simply help themselves to academic content, whether published research, course materials, or institutional knowledge bases.
The AI Literacy Divide and EdTech’s Answer
It’s no surprise that elite universities, well-capitalized firms, and major government agencies gained earlier access, higher usage limits, and deeper integrations of AI platforms. Smaller institutions, regional governments, and less-resourced organizations ended up relying on free or deeply constrained tools. The equity issue is alive and well, but perhaps now more focused on institutions. Students at rich universities get access; those at lower-resourced institutions have to pay for AI platform subscriptions on their own. The same holds true for employees at companies and for students at universities in the developing world.
Of course, EdTech companies rushed through the door to offer questionable solutions to educational institutions. It’s a pattern we’ve seen too many times before. Companies like Boodlebox (already in 900+ colleges and universities) offer compliance, AI platform aggregation, AI usage evaluation, and optimized token reduction. Usage evaluation is problematic enough, as it’s unclear how they distinguish between students who use AI as a thought partner and those who just paste in random prompts. The token reduction feature is even more troubling, as that can only be achieved through aggressive caching or prompt compression. Either way, institutions are seduced by a prepackaged solution, and students aren’t developing the AI literacy skills they need for the real world.
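To make the token reduction concern concrete, here is a minimal sketch of what prompt compression typically looks like in practice. This is not Boodlebox’s implementation or any vendor’s actual code; the function names, the message format, and the rough four-characters-per-token estimate are all assumptions for illustration. The point is simply that fitting a conversation into a smaller token budget means discarding earlier turns, which is exactly the context that made the exchange a thought partnership rather than a one-shot answer machine.

```python
# Hypothetical illustration of "token reduction" via prompt compression.
# Not any vendor's actual implementation; names and the 4-characters-per-token
# heuristic are assumptions for this sketch.

def estimate_tokens(text: str) -> int:
    """Rough token estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def compress_history(messages: list[dict], budget_tokens: int) -> list[dict]:
    """Keep the most recent messages that fit the budget; drop older turns.

    The dropped turns are the student's earlier reasoning and the model's
    earlier feedback, i.e. the context that made the exchange a dialogue.
    """
    kept: list[dict] = []
    used = 0
    for msg in reversed(messages):           # walk from newest to oldest
        cost = estimate_tokens(msg["content"])
        if used + cost > budget_tokens:
            break                             # everything older is discarded
        kept.append(msg)
        used += cost
    return list(reversed(kept))               # restore chronological order

if __name__ == "__main__":
    conversation = [
        {"role": "user", "content": "Here's my draft thesis statement... " * 20},
        {"role": "assistant", "content": "Your argument hinges on... " * 20},
        {"role": "user", "content": "Good point. Revised version attached... " * 20},
        {"role": "user", "content": "Can you check my latest revision?"},
    ]
    trimmed = compress_history(conversation, budget_tokens=60)
    print(f"{len(conversation)} messages in, {len(trimmed)} messages out")
```

Run against a longer tutoring conversation, a scheme like this quietly drops the student’s earlier drafts and the model’s earlier feedback, so the “savings” come directly out of the depth of the interaction.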
The “Open Versus Closed” Debate in 2025
2025 saw open-weight models gain traction in research and sovereign deployments, but the year also exposed how narrow the definition of “open AI” really is. No matter how open an AI model claims to be, training data, governance structures, and long-term accountability remain opaque.

At the same time, Meta Platforms once again shifted its focus (as it seems to do every year these days). It began the year loudly promoting openness, then pivoted toward tighter control as competitive pressure from Claude, Gemini, and ChatGPT intensified. Across higher education, business, and government, the love for open AI gave way to pragmatism. Hybrid approaches dominated, alongside a quiet recognition that you will never gain full transparency unless you develop and host your own LLM. If you’ve got the money, Jensen Huang at Nvidia would love to take your call.
Agentic AI Had a Rough Introduction

AI agents capable of planning, executing, and revising tasks appeared in software development, compliance reviews, internal analytics, and customer operations. As we noted back in June, they perform extremely well in tightly controlled environments. But despite the boosterism from the tech sector (“AI Agents will be everywhere!”), they failed comically when those controls were missing. The lesson was clear: planning matters more than ambition.
According to an MIT study released in 2025, the State of AI in Business 2025 report, 95% of AI pilots are failing. As a manufacturing COO quoted in the report put it,
“The hype on LinkedIn says everything has changed, but in our operations, nothing fundamental has shifted.”
That statistic is misleading, and the quote is over the top. AI is having a profound impact on organizations. But it’s important for higher education to watch the edge cases, both the AI success stories and the failures, especially with AI Agents. And universities need to address the more fundamental question here: how do we prepare students for a world where AI Agents will be their collaborators and co-workers?
Entry-Level Workers and Creative Labor Face the Abyss
Entry-level workers and those in communications and the creative fields are feeling the most pressure from the increasing use of AI. Senior roles are shifting toward synthesis and interpretation, but those entering the job market are now competing against AI use in business and other organizations. There is no hard data on job loss, and AI isn’t walking in the door on legs and taking away anyone’s job (a comic line from Josh Johnson in a Q and A on The Daily Show). But the internships and junior professional pathways that students traditionally used to gain entry to their fields are becoming harder to access. As The Atlantic noted in a recent essay, this was the pathway to our careers.
Your eager new intern with a freshly minted degree may be willing to work 40, maybe even 60 hours a week. But your AI platform is available 24/7 and works across your business or organization, even supporting your colleagues simultaneously. The best intern or junior employee can’t compete with that. It’s an issue just bubbling to the surface, one we’ll confront even more directly in the year ahead.
Regulation Bifurcates Across Cultural and Political Fault Lines

It’s been an uneven year for AI regulation, which has splintered into competing approaches. The hope for a unified framework, born out of the idealism of the early generative AI era, has fallen by the wayside. The EU has tackled regulation head-on, developing frameworks that emphasize transparency and accountability. The EU Artificial Intelligence Act is still very much a work in progress, but it’s the only major framework we have. Not surprisingly, the United States has embraced market-driven adaptation, which means little to no regulation at all. Parts of Asia are prioritizing industrial acceleration and state control. For businesses and institutions operating across borders, the fragmentation complicates collaboration, compliance, and shared norms. The idea of a single global governance model has finally collapsed, and we will see how these different approaches play out in 2026.
The Core Question Shifted From AGI to Responsibility
By the end of 2025, serious debate finally moved away from whether AI was intelligent. While the tech sector still likes to debate when AGI will arrive, the real questions now focus on who directs outcomes, who bears responsibility for errors, and how institutions preserve real human authority rather than symbolic oversight. AGI is still a long-term issue, and no one knows when we will get there or what will happen once Artificial General Intelligence arrives. But the responsibility question arrived in full force in 2025. What happens when Agentic AI leads people astray? What are educational institutions ultimately responsible for when AI increasingly co-opts our thinking process?
The most challenging question we’re currently wrestling with: today, everyone using AI started out without it, and we all developed our critical thinking skills through traditional means. That’s rapidly changing, and education doesn’t yet have a clue how to respond when a new generation of students, raised with AI from the start, arrives in another decade. The implications of that shift are far more profound than abstract discussions of AGI.
Conclusion: AI in Higher Education and Government
You can breathe a deep sigh of relief: 2025 was not the year AI replaced educators, professionals, or public servants. But it was also not a year to take AI lightly, as if it were just another in a long line of tech advances. It was the year higher education, business, and government started redesigning work, learning, and authority. It was also a year in which we made little progress on governance, regulation, or institutional and individual accountability. That’s a hard gap to face in 2026, but face it we will. It defines both the risk and the opportunity for AI in higher education, business, and government as we look to the year ahead.
Emory Craig is a writer, speaker, and consultant specializing in virtual reality (VR) and generative AI. With a rich background in art, new media, and higher education, he is a sought-after speaker at international conferences. Emory shares unique insights on innovation and collaborates with universities, nonprofits, businesses, and international organizations to develop transformative initiatives in XR, GenAI, and digital ethics. Passionate about harnessing the potential of cutting-edge technologies, he explores the ethical ramifications of blending the real with the virtual, sparking meaningful conversations about the future of human experience in an increasingly interconnected world.