On Monday, President Joe Biden issued an executive order regarding AI, will set the tone for how technology will be governed in America in the future. This order aims to address nearly every AI issue that has arisen over the past 12 months. This includes mandatory testing to ensure advanced AI models are not weaponized, mandatory watermarking of AI-generated content, and minimizing the risk of workers losing their jobs to his AI. It is included.
While the scope of the order is limited to the United States, specifically the federal government, the country currently leads the world in AI, so any changes could lead to global changes in how this technology is regulated. is likely to have an impact on the discussion. By stating that U.S. federal agencies can only work with companies that comply with these new standards, Biden is leveraging $694 billion in federal contracts to push broader industry compliance.
The AI Executive Order is no short feat: at 20,000 words, it’s a quarter of the length of Harry Potter and the Philosopher’s Stone. That’s why we’ve saved you the pain and read it out loud. Below are some important points and thoughts about the order.
AI needs to be more ethical and properly tested
The order states that government agencies must conduct “robust, reliable, reproducible, and standardized evaluations of AI systems.”Historically, there has been a bit of confusion How to evaluate AI modelsexperts are forced to approve them, or Risks of using shadow AI.
Agencies must also ensure that the AI models they purchase are ethically developed, properly tested, and that the AI-generated content is watermarked. This would also be an interesting requirement for government agencies. Currently, the top 10 AI-based model companies are as follows. Terrible at sharing how AI was made “ethically” — Even Claude from Anthropic, an “ethical AI” company, is missing this point, and the only “open” thing about OpenAI is in the name.
Additionally, the U.S. government will strengthen and enforce consumer protection laws regarding AI to go after those who engage in unintentional bias, discrimination, privacy violations, and “other harms caused by AI.”
“The interests of Americans who increasingly use, operate, and purchase AI and AI-enabled products in their daily lives must be protected,” the order states. “Using new technologies such as AI does not exempt organizations from legal obligations. At a moment of technological change, hard-won consumer protections are more important than ever. .
According to Pluralsight author and AI expert Simon Allardice, companies building AI-powered applications must comply with the executive order and ensure that their applications are as transparent as possible.
“If I were a company building AI-driven applications that touch aspects of finance, healthcare, human resources, etc., such as tools that potentially impact lending, hiring decisions, and access to healthcare, I would be very cautious right now. ‘We need to plan for the explainability of AI-driven decision-making,’ Simon said.
“Likewise, we can expect new levels of control and higher expectations when it comes to data privacy in general.”
There is a clear timeline for both the US government and technology companies to agree on.
Biden’s executive order is more than just “general guidance” on non-technical aspects. We got very specific about what we needed to do with the generative AI.
“Although this is not a ‘law’ in the traditional sense of the word, we anticipate that many bills will be enacted soon,” Simon said. “This document includes specific deadlines with detailed deadlines, including new NIST guidelines and best practices for generative AI and underlying models within 270 days, and reports and recommendations on how to detect and label synthetic content within 240 days. Contains numerous requests for deliverables.”
However, Mr Simon said some parts lacked detail as the bulk of the order delegated the details to the agency. For example, it stipulates that companies developing underlying models must notify the federal government if something poses a “serious risk to national security, national economic security, or national public health and safety.” There is a clause to do so.
“Say you’re a company developing a new fundamental model. Who decides whether it poses a significant risk?”
AI companies need to demonstrate cybersecurity best practices
Within 90 days of the order, the Secretary of Commerce will require all companies developing underlying models to continually report that they have appropriate security measures in place. This includes consistent red teaming and cybersecurity testing, the results of which must be disclosed.
Aaron Rosenmund, senior director of security and generative AI skills at Pluralsight, said the move was “totally wise” but could reduce competition for smaller companies in the AI space.
“We felt this was the perfect move, especially when focusing on how foreign militaries seek to exploit AI and vulnerabilities in AI systems as part of their plans to destroy technology,” he said. . “Most organizations seem to have a reasonable expectation that existing functionality will be available for free.”
The US government is extremely concerned about AI being used as a biological weapon.
A key part of this order concerns threat assessment of AI used to support CBRN threats, including chemical, biological, radiological, and nuclear. But Biden seems to think that AI is a solution, just as it can be used for both. aggressively and defensively in cyber security.
The same technology that is being used to make great advances in AI drug discovery. Reused as chemical or biological weapons, according to the researchers.In his scenario, a large-scale language model Advised on potential biological agents Consider your budget constraints and success factors to decide whether to use it. The report recommends “obtaining and distributing Y. pestis -infected specimens, while identifying variables that may influence the expected number of deaths.”
This is a very serious concern, considering that AI is currently in the worst shape it has ever been in. This executive order will put pressure on companies using these tools to prove that they cannot be used to inadvertently create CBRN threats.
AI upskilling is focused to lead the world and prevent job losses
For most people, the rise of AI comes with the fear of losing their jobs to machines. According to research, 69% of people are worried about losing their job to AI. Developers are not immune either, with 45% of developers experiencing it. “The threat of AI skills”or the feeling that the rise of AI has made important skills redundant.
The executive order focuses on incentivizing workforce upskilling to maximize the benefits of AI and minimize job losses. seems to be reading) Aaron Sconard’s speech on Navigate last week on human intelligence). The United States believes that upskilling is the key to maintaining its dominance in the AI field, and in addition to attracting AI talent from overseas, it is also “investing in AI-related education, training, development, research, and capabilities.” It states that.
The U.S. government plans to provide AI training to non-technical employees as well. According to the order, “employees who are not in traditional technical roles, such as policy, management, procurement, or legal fields,” are “employees working in AI, machine learning, data science, or other related subject areas.”
“I was pleased to see that this mandate seemed to fully embrace the idea that AI is a transformative change for the workforce at large. There is a recognition that it has an impact on people,” Simon said.
According to Aaron Rosenmund, AI should not replace artificial intelligence but “enable us to raise the level of work for all of humanity.” However, this is only possible if training is made easily accessible to everyone, and that is what the Order appears to be trying to accomplish.
“But to unlock things like this, training on how to operate AI tools needs to be available everywhere,” Aaron said. “As AI capabilities begin to impact every aspect of our lives, this could be the largest and most pervasive requirement for workforce upskilling that we will see in our lifetimes. there is.”
Conclusion: Riding the wave of future change in AI
Until now, most countries, especially the US government, have been catching up on AI. Now, this ambitious mandate, if enacted by various U.S. federal agencies, would further tighten requirements for AI development and further upskill everyone as they learn about the technology, regardless of their background. Looks like it’s going to happen.
The order also likely means more work in cybersecurity roles to properly vet AI-based products and services, an area that is already in the spotlight. Pluralsight’s Top Tech Careers of 2023. In early 2023, there were only enough cybersecurity professionals to fill his 68% of all job openings in the US, which required him to obtain a cybersecurity certification (presumably, CIISP, CCSPor Che) is a wise career choice.
Executive orders are not laws, so nothing will change overnight. But this is where the winds of AI are blowing, and what his industry outlook for 2024 will increasingly look like for tech professionals and AI companies, as well as the broader global workforce. This is a strong indication of what will happen.
Additional resources:
Impact of AI: Cybersecurity challenges and opportunities
Generative AI Toolkit: How to prepare your organization for AI adoption
California releases first report on risks and potential use cases for generative AI