About half of these could reap the benefits of new technology, while the other half could see their roles reduced or replaced.
While the technology has created risks for some highly skilled workers, the IMF said those same workers would also stand to benefit considerably if AI complemented their jobs rather than replaced them.
The findings also revealed that rapid progress in artificial intelligence has given developed economies a head start over poorer countries. The report warns that developing and emerging economies risk missing out on growth opportunities because of their greater reliance on manual labour.
“AI is likely to exacerbate overall inequality, a worrying trend that policymakers must proactively address to prevent technology from further inflaming social tensions,” Georgieva said.
“It is crucial that countries create comprehensive social safety nets and provide retraining programs for vulnerable workers.”
Her comments come as world leaders meet at the World Economic Forum in Davos, Switzerland, where new developments in artificial intelligence and the risks posed by the technology are a major point of discussion.
OpenAI is abandoning its promise not to use the technology for military purposes
Written by James Titcomb
The maker of the ChatGPT app has eased a ban on using its artificial intelligence technology to help conduct war, sparking speculation that it may seek ties with the US military.
OpenAI has softened the language around its usage policies, after previously saying it would prevent the use of AI in “activities that involve a high risk of physical harm,” such as weapons development and military uses.
A new version of the policy removed reference to the use of artificial intelligence in “military and warlike actions,” although it maintained the ban on weapons development.
The change, made last week, was interpreted as an easing of the company’s ban on using AI technology for military purposes.
The company said it updated its policies due to work on national security projects.
Microsoft, the largest investor in OpenAI, is a major supplier to the US Department of Defense (DoD).
The DoD has experimented with advanced AI models such as ChatGPT and Google’s Bard, and has established the Defense Artificial Intelligence Center to explore ways of deploying the technology.
An OpenAI spokesperson told The Intercept that its policies were written to make them “clearer” and “more readable.”
The company added: “Our policy does not allow our tools to be used to harm people, develop weapons, monitor communications, injure others or destroy property.”
“However, there are national security use cases that align with our mission. For example, we are already working with the Defense Advanced Research Projects Agency (DARPA) to catalyze the creation of new cybersecurity tools to secure the open source software that critical infrastructure and industry depend on.”
“It was not clear whether these beneficial use cases would have been permitted under ‘military’ in our previous policies. So the goal of updating our policy is to provide clarity and the ability to have these discussions.”
Militaries around the world are increasingly making use of artificial intelligence for weapons development and war simulation, in an attempt to gain an advantage on the battlefield.