New A.I. Executive Order by Biden: What You Need to Know
A closer look at Biden’s new Executive Order on Artificial Intelligence (AI)
Attention, all artificial intelligence enthusiasts and users: President Biden has issued a new executive order on AI. In this blog, I'll share the key highlights of this new executive order on artificial intelligence.
On October 30, 2023, President Biden issued an executive order on artificial intelligence (AI) to ensure that America leads the way in seizing the promise and managing the risks of AI.
Here are the key things to know about this new executive order on the use of AI:
New Standards for AI Safety and Security
The executive order establishes new standards for AI safety and security, requiring developers of the most powerful AI systems to share their safety test results and other critical information with the U.S. government. Companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government.
Protection of Americans’ Privacy
The executive order protects Americans' privacy by requiring AI companies to be transparent about how their models work and by establishing new standards for labeling AI-generated content.
Advancement of Equity and Civil Rights
The executive order advances equity and civil rights by ensuring that AI is developed in a way that is fair and unbiased.
Promotion of Innovation and Competition
The executive order promotes innovation and competition by encouraging the development of AI in a way that maximizes its possibilities and contains its perils.
Enforcement
While the executive order goes beyond previous US government attempts to regulate AI, it places far more emphasis on establishing best practices and standards than on how, or even whether, the new directives will be enforced. However, the order empowers other agencies to address AI issues seriously.
How does Biden’s executive order protect Americans’ privacy?
President Biden's executive order on AI aims to protect Americans' privacy by requiring AI companies to be transparent about how their models work and by establishing new standards for labeling AI-generated content.
The order recognizes that AI is making it easier to extract, re-identify, link, infer, and act on sensitive information about people’s identities, locations, habits, and desires, which can increase the risk that personal data could be exploited and exposed.
It is expected that these measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.
Potential consequences for businesses that violate this EO on AI and privacy
As noted above, President Biden's executive order on AI and privacy requires developers of the most powerful AI systems to share their safety test results and other critical information with the U.S. government, and companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government. The order also requires AI companies to be transparent about how their models work and establishes new standards for labeling AI-generated content, all with the aim of ensuring that AI systems are safe, secure, and trustworthy before companies make them public.
Violating the executive order could result in consequences such as fines, legal action, or other penalties.
However, the order places far more emphasis on establishing best practices and standards than on how, or even whether, the new directives will be enforced.
What are the effects of Biden’s new A.I. executive order on the A.I. developers?
The effects of Biden’s AI executive order on AI developers are as follows:
Regulatory Burdens
The order will introduce new regulatory burdens on AI, which some experts believe are necessary, even if aggressive.
Sharing Safety Test Results
Developers of AI systems that pose risks to US national security, the economy, public health, or safety must share the results of safety tests with the US government before those systems are released to the public. This requirement could be seen as a burden by AI developers who would be loath to share such results unless regulations forced them to.
Transparency
The order will require more transparency from AI companies about how their models work and will establish a raft of new standards, most notably on labeling AI-generated content. This could be seen as a positive development by some AI developers who believe that transparency is necessary for building trust in AI.
Privacy and Security
The order seeks to establish privacy and security rules around the use of AI, which could have significant long-term effects on every tech professional who works with the technology. Some developers may welcome this, viewing privacy and security as essential for building trust in AI.
Equity and Civil Rights
The order seeks to address the risk of bias and civil rights violations that AI can heighten by calling for guidance to landlords, federal benefits programs, and federal contractors "to keep AI algorithms from being used to exacerbate discrimination." Some developers may likewise welcome this, viewing equity and civil rights protections as essential for building trust in AI.
Overall, the effects of the order on AI developers are mixed: some may see it as a burden, while others may see it as a positive development that could help build trust in AI.
AI experts have hailed the order as an important step forward, especially thanks to its focus on watermarking and standards set by the National Institute of Standards and Technology (NIST).
However, others argue that it does not go far enough to protect people against immediate harms inflicted by AI.
Overall, this new executive order by the Biden administration seeks to shape the development of generative AI in several areas, including new standards for AI safety and security, protecting Americans’ privacy, advancing equity and civil rights, standing up for consumers, patients, and students, supporting workers, promoting innovation and competition, and advancing American leadership abroad.
The order also requires developers of AI systems that pose risks to US national security, the economy, public health, or safety to share the results of safety tests with the US government before they are released to the public.
You can read more about this new A.I. (Artificial Intelligence) executive order, and find the reference I used for this article, here.
I hope my blog post about Biden's new A.I. executive order helps you understand this topic, whether as an individual or a business. Leave your comment and let the Medium community know your thoughts.
I appreciate you reading my article. Check out other related articles below as well. As always, I am truly grateful for your support of my writing hobby.
You can follow me on Medium, check out my other Medium posts, and sign up for my email list here to get an instant notification every time I publish a new post.
Consider also becoming a Medium member here to get unlimited access to my stories and to support the other Medium writers as well. Please take care of yourself and each other. Until next time. Have a blessed day to all. Cheers!