U.S. President Joe Biden signed an Executive Order, dated October 30, 2023, to ensure the safe, secure, and trustworthy development and use of artificial intelligence (AI).

This Order was soon followed by a draft memorandum from the Office of Management and Budget (OMB), the comment period for which ended on December 05, 2023. The OMB memorandum provides implementation guidance for federal agencies to manage AI risks and mandate accountability while advancing innovation in AI. Observers expect that other policymakers, from Congress to the U.S. states, can use these documents as a guide for future action in requiring accountability in the use of AI.

In May 2023, the White House issued a request for input on U.S. national priorities and future actions on AI. The resulting public comments informed the development of the AI Executive Order and other executive actions. Both the Executive Order and the draft OMB memo build on earlier Biden administration efforts, such as the Blueprint for an AI Bill of Rights (released in October 2022) and the AI Risk Management Framework from the National Institute of Standards and Technology (NIST), released in January 2023. Both documents set up mandates for accountability and for the federal government to serve as a model for accountable AI. The Executive Order also directs federal agencies to develop additional guidance, which is likely to appear over the next year.

The Executive Order sets out several guiding principles and priorities, including standards to promote AI safety and security, consumer and worker protection, data privacy, equity and civil rights, innovation, competition, and responsible government use of AI. The Order defines AI systems broadly and is not just limited to generative AI. The Order impacts any machine-based system that supports predictions, recommendations, or decisions and addresses policies on the following key aspects, among many others:

  • Tasks U.S. agencies with creating standards to protect against AI misuse; for example, it mandates NIST to create standards to test AI models before public release and OMB to issue guidance to federal agencies for labeling and authenticating official U.S. government content
  • Focuses on “red-teaming” as the testing methodology, requiring private companies to preemptively test their models for specific safety concerns
  • Requires developers of large AI models to test their models and report documentation of safety testing practices and results to the federal government
  • Directs the Department of Commerce to develop guidance for content authentication and watermarking to clearly label AI-generated content (Meanwhile, AI companies such as OpenAI, Alphabet, and Meta Platforms have voluntarily agreed to watermark AI-generated content.)
  • Aims to establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software
  • Seeks to “strengthen United States leadership of global efforts to unlock AI’s potential and meet its challenges” by focusing on expanding bilateral, multilateral, and multi-stakeholder engagements to collaborate on AI and accelerating development and implementation of vital AI standards with international partners to solve global challenges

Market views on this Order vary considerably. Some market participants welcomed the Executive Order, while others noted that it is vague, depends on the goodwill of large technology firms, or takes a political stand instead of formulating concrete rules. Commentators also highlight that Executive Orders are not as stable or concrete as legislation, since future administrations may reverse them.

Visit Moody’s Analytics | Digital Banks Microsite to find out how we harness new technologies to deliver growth through our banking solutions.

Keywords: Americas, US, Regtech, Fintech, Suptech, Artificial Intelligence, Executive Order, OMB, NIST, Department of Commerce, White House

Source: Moody’s Analytics Press Release