President Biden’s new Executive Order on artificial intelligence (AI) regulation released today is far-reaching in some areas and deeply limited in others. It goes beyond last year’s Blueprint for an AI Bill of Rights, which sounded more like a wish list than a concrete plan to rein in the potential excesses of generative AI. For an Executive Order, it sets lofty goals,
. . . [establishing] new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.
But despite the language, we’re not sure it warrants Mashable’s description as a “regulation bombshell that’ll shake up the industry” – unless, of course, one sees any regulation of the tech industry as a bombshell.
As an Executive Order, there are clear limits to what it can address. The previous Blueprint was criticized for lacking enforcement mechanisms, and the only ones available in the new Executive Order will have to be developed by federal agencies. In some cases, Congress will have to pass legislation funding the enforcement departments, adding another element of complexity to the process.
As a result, the new Order targets some essential areas that need to be addressed (the use of AI in healthcare, the military, and national security). However, it ignores the equally critical issue of the responsible use of generative AI in the public sphere.
In short, it’s a mixed bag that will send a few shudders through parts of the AI sector and elicit nothing more than a shrug from creators of fake news, scammers, and other bad actors. And even in tackling the complex issues that an executive order on AI can address, it’s unclear what power the enforcement mechanisms will have.
The Executive Order on AI: Impact on Education
President Biden’s Executive Order on AI doesn’t focus in depth on education since there are limits to how it can impact the complex structure of education in the United States. Between a diverse landscape of public and private (both nonprofit and for-profit) higher education institutions and K-12 systems run by an assortment of state and local entities, the new Executive Order won’t have a sweeping impact on our learning environments.
The Department of Education and the Executive Order on AI
Politico sums up how the Executive Order will guide the work of the Department of Education:
The Department of Education is directed to create an “AI toolkit” for education leaders to assist them with implementing recommendations made earlier this year for using artificial intelligence in the classroom.
The draft text aligns with comments made by a department official on Tuesday to congressional staffers, educational technology companies and other education leaders that the agency has started working on an AI toolkit and expects to release it next spring.
The draft also orders the department to develop resources, policies and guidance that “address safe, responsible, and nondiscriminatory uses of AI in education” within 365 days.
With all due respect to the good people in the Department of Education, it’s hard to imagine an “AI Toolkit” that can solve the complex challenges of generative AI use in learning environments. Moreover, by next spring, AI will have evolved beyond what it is today. Government works slowly, while AI is moving rapidly, and what we’ll get may well be out of date by the time it arrives.
Other Implications for Education
Some issues raised in the Executive Order on AI – such as privacy – will have to be addressed by K-12 and higher education. But even here, we’ll have to see what legislation is passed by Congress in the coming year. Higher education institutions will likely find themselves reassessing their use of predictive analytics in admissions or academic evaluations. As AI tools become increasingly powerful and attractive in the drive toward administrative efficiency, universities will have to closely evaluate the use of AI in these areas.
By embracing artificial intelligence for better governance, the Executive Order on AI suggests an immediate need for domain-specific expertise within federal agencies. This could present educational institutions with the dual challenge and opportunity to produce graduates who are not just technically proficient but also literate in the ethical and societal dimensions of AI. However, we have a long road to travel here, as universities are still wrestling with the use of AI in courses and academic programs. There has been little movement to teach the ethical aspects, and most existing courses were designed for the pre-generative AI era, when the general public was impacted by AI but didn’t have immediate access to use it on their own.
We would love to see this new emphasis on AI enrich the intellectual climate of American higher education institutions and pave the way for more interdisciplinary approaches, especially at the intersection of AI, arts, and humanities. But that may be wishful thinking on our part, as there seem to be few moves toward a more interdisciplinary approach.
Impact on the Creative Arts
The 111-page Executive Order on AI makes only passing reference to the creative arts, where it says,
Promoting responsible innovation, competition, and collaboration will allow the United States to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges. This effort requires investments in AI-related education, training, development, research, and capacity, while simultaneously tackling novel intellectual property (IP) questions and other problems to protect inventors and creators.
Tackling those “novel intellectual property (IP) questions” is beyond the focus of what the Order can do and will be left up to the U.S. legal system. Court cases by artists and other creatives are already underway, and it’s impossible to anticipate how the courts will ultimately rule on AI-generated images and text.
Can We Watermark AI-Generated Content?
The Executive Order mandates that the Secretary of Commerce submit a report identifying existing watermarking options and possible new solutions. Given that OpenAI has dropped its watermarking efforts and no one else seems to have a viable solution, it’s a nearly impossible directive. If you’re curious, here is the text of the mandate in the Order:
To foster capabilities for identifying and labeling synthetic content produced by AI systems, and to establish the authenticity and provenance of digital content, both synthetic and not synthetic, produced by the Federal Government or on its behalf:
(a) Within 240 days of the date of this order, the Secretary of Commerce, in consultation with the heads of other relevant agencies as the Secretary of Commerce may deem appropriate, shall submit a report to the Director of OMB and the Assistant to the President for National Security Affairs identifying the existing standards, tools, methods, and practices, as well as the potential development of further science-backed standards and techniques, for:
(i) authenticating content and tracking its provenance;
(ii) labeling synthetic content, such as using watermarking;
(iii) detecting synthetic content;
(iv) preventing generative AI from producing child sexual abuse material or producing non-consensual intimate imagery of real individuals (to include intimate digital depictions of the body or body parts of an identifiable individual) . . .
We pity the poor committee that has to generate that report. As we’ve pointed out previously, any watermark solution will likely generate an “arms race” with new platforms and tools that promise to strip out watermarking features. The Order glosses over the challenges here, and if there is a solution, the tech industry itself may solve the problem long before government agencies can.
Microsoft and Adobe are already developing a solution – an icon of transparency – in collaboration with the Coalition for Content Provenance and Authenticity (C2PA), a group of organizations across industries, including tech and journalism.
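To make the provenance idea concrete, here is a minimal, purely illustrative sketch of how signed provenance metadata works in principle: hash the content, record who and what produced it in a manifest, and sign the manifest so any later tampering is detectable. This is not the actual C2PA format (which uses certificate chains and embedded manifests); the key, field names, and functions below are hypothetical simplifications.

```python
import hashlib, hmac, json

# Hypothetical signing key -- a real provenance system such as C2PA
# uses X.509 certificates, not a shared secret like this.
SIGNING_KEY = b"publisher-secret-key"

def attach_provenance(content: bytes, creator: str, tool: str) -> dict:
    """Bundle content with a signed manifest recording its origin."""
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "generator": tool,  # e.g., the AI model that produced it
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check both the manifest signature and the content hash."""
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and record["manifest"]["sha256"] == hashlib.sha256(content).hexdigest())

image = b"...synthetic image bytes..."
record = attach_provenance(image, creator="News Desk", tool="image-model-v1")
print(verify_provenance(image, record))         # True: content untouched
print(verify_provenance(image + b"x", record))  # False: content altered
```

Note the limitation this sketch makes visible: a signature proves the metadata hasn’t been tampered with, but nothing stops a bad actor from simply stripping the provenance record off the file – which is why we expect the “arms race” dynamic described above.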
Text of the New Executive Order on AI
The full text of the new Executive Order on AI can be found here. Warning: it is not scintillating reading, and your best approach is to search for the terms/concepts that are critical for your work.
Below is the Fact Sheet of the Executive Order, which directs the following actions:
FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence
Today, President Biden is issuing a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.
As part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, the Executive Order builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI.
The Executive Order directs the following actions:
New Standards for AI Safety and Security
As AI’s capabilities grow, so do its implications for Americans’ safety and security. With this Executive Order, the President directs the most sweeping actions ever taken to protect Americans from the potential risks of AI systems:
- Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.
- Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.
- Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.
- Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.
- Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure.
- Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff. This document will ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI.
Protecting Americans’ Privacy
Without safeguards, AI can put Americans’ privacy further at risk. AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems. To better protect Americans’ privacy, including from the risks posed by AI, the President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids, and directs the following actions:
- Protect Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques—including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data.
- Strengthen privacy-preserving research and technologies, such as cryptographic tools that preserve individuals’ privacy, by funding a Research Coordination Network to advance rapid breakthroughs and development. The National Science Foundation will also work with this network to promote the adoption of leading-edge privacy-preserving technologies by federal agencies.
- Evaluate how agencies collect and use commercially available information—including information they procure from data brokers—and strengthen privacy guidance for federal agencies to account for AI risks. This work will focus in particular on commercially available information containing personally identifiable data.
- Develop guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems. These guidelines will advance agency efforts to protect Americans’ data.
Advancing Equity and Civil Rights
Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing. The Biden-Harris Administration has already taken action by publishing the Blueprint for an AI Bill of Rights and issuing an Executive Order directing agencies to combat algorithmic discrimination, while enforcing existing authorities to protect people’s rights and safety. To ensure that AI advances equity and civil rights, the President directs the following additional actions:
- Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination.
- Address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.
- Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.
Standing Up for Consumers, Patients, and Students
AI can bring real benefits to consumers—for example, by making products better, cheaper, and more widely available. But AI also raises the risk of injuring, misleading, or otherwise harming Americans. To protect consumers while ensuring that AI can make Americans better off, the President directs the following actions:
- Advance the responsible use of AI in healthcare and the development of affordable and life-saving drugs. The Department of Health and Human Services will also establish a safety program to receive reports of—and act to remedy—harms or unsafe healthcare practices involving AI.
- Shape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools.
Supporting Workers
AI is changing America’s jobs and workplaces, offering both the promise of improved productivity but also the dangers of increased workplace surveillance, bias, and job displacement. To mitigate these risks, support workers’ ability to bargain collectively, and invest in workforce training and development that is accessible to all, the President directs the following actions:
- Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection. These principles and best practices will benefit workers by providing guidance to prevent employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize.
- Produce a report on AI’s potential labor-market impacts, and study and identify options for strengthening federal support for workers facing labor disruptions, including from AI.
Promoting Innovation and Competition
America already leads in AI innovation—more AI startups raised first-time capital in the United States last year than in the next seven countries combined. The Executive Order ensures that we continue to lead the way in innovation and competition through the following actions:
- Catalyze AI research across the United States through a pilot of the National AI Research Resource—a tool that will provide AI researchers and students access to key AI resources and data—and expanded grants for AI research in vital areas like healthcare and climate change.
- Promote a fair, open, and competitive AI ecosystem by providing small developers and entrepreneurs access to technical assistance and resources, helping small businesses commercialize AI breakthroughs, and encouraging the Federal Trade Commission to exercise its authorities.
- Use existing authorities to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States by modernizing and streamlining visa criteria, interviews, and reviews.
Advancing American Leadership Abroad
AI’s challenges and opportunities are global. The Biden-Harris Administration will continue working with other nations to support safe, secure, and trustworthy deployment and use of AI worldwide. To that end, the President directs the following actions:
- Expand bilateral, multilateral, and multistakeholder engagements to collaborate on AI. The State Department, in collaboration with the Commerce Department, will lead an effort to establish robust international frameworks for harnessing AI’s benefits and managing its risks and ensuring safety. In addition, this week, Vice President Harris will speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak.
- Accelerate development and implementation of vital AI standards with international partners and in standards organizations, ensuring that the technology is safe, secure, trustworthy, and interoperable.
- Promote the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges, such as advancing sustainable development and mitigating dangers to critical infrastructure.
Ensuring Responsible and Effective Government Use of AI
AI can help government deliver better results for the American people. It can expand agencies’ capacity to regulate, govern, and disburse benefits, and it can cut costs and enhance the security of government systems. However, use of AI can pose risks, such as discrimination and unsafe decisions. To ensure the responsible government deployment of AI and modernize federal AI infrastructure, the President directs the following actions:
- Issue guidance for agencies’ use of AI, including clear standards to protect rights and safety, improve AI procurement, and strengthen AI deployment.
- Help agencies acquire specified AI products and services faster, more cheaply, and more effectively through more rapid and efficient contracting.
- Accelerate the rapid hiring of AI professionals as part of a government-wide AI talent surge led by the Office of Personnel Management, U.S. Digital Service, U.S. Digital Corps, and Presidential Innovation Fellowship. Agencies will provide AI training for employees at all levels in relevant fields.
As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI. The Administration has already consulted widely on AI governance frameworks over the past several months—engaging with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. The actions taken today support and complement Japan’s leadership of the G-7 Hiroshima Process, the UK Summit on AI Safety, India’s leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations.
The actions that President Biden directed today are vital steps forward in the U.S.’s approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.
For more on the Biden-Harris Administration’s work to advance AI, and for opportunities to join the Federal AI workforce, visit AI.gov.
Conclusion
As Wired Magazine noted, President Biden’s Executive Order is finally dragging the U.S. Government into the age of AI. If only it could do as much for the public sphere, which remains a wide-open arena for tech companies to do whatever they like. One significant point stands out here: we’re in the midst of a rapidly evolving technology landscape that no one fully understands. And the regulatory aspects are woefully slow to catch up. We’ll follow up on the recent developments in the EU, where the drive toward regulation is more focused and much stronger. Stay with us – we’ll continue to follow the developments closely as the generative AI revolution unfolds.
Emory Craig is a writer, speaker, and consultant specializing in virtual reality (VR) and generative AI. With a rich background in art, new media, and higher education, he is a sought-after speaker at international conferences. Emory shares unique insights on innovation and collaborates with universities, nonprofits, businesses, and international organizations to develop transformative initiatives in XR, GenAI, and digital ethics. Passionate about harnessing the potential of cutting-edge technologies, he explores the ethical ramifications of blending the real with the virtual, sparking meaningful conversations about the future of human experience in an increasingly interconnected world.