India is drafting rules to detect and curb the spread of deepfakes and other harmful artificial intelligence-generated media, a senior minister said on Thursday, following reports of such content spreading on social media platforms in recent weeks.
Ashwini Vaishnaw, India’s IT Minister, said the ministry held meetings with all major social media companies, industry body Nasscom and academics earlier in the day and reached a consensus that regulation is needed to better combat the spread of fake videos as well as apps that facilitate their creation.
“Businesses share our concerns, and they get that [deepfakes are] not freedom of expression. They understand that this is very harmful to society,” he said. “They understood the need for stricter regulation on this matter, so we agreed that we would start drafting the regulation today itself.”
The ministry will be ready with “clear, actionable items” on how to combat deepfakes within 10 days, he said, adding that New Delhi is also considering fines for those who do not comply and holding individuals who create such videos accountable. He said social media companies will hold a follow-up meeting with the ministry in early December on the issue.
Deepfakes are synthetic media, typically created with artificial intelligence, that realistically replace a person’s appearance or voice. Although sometimes entertaining, they raise serious ethical concerns around consent and misinformation. The IT Ministry’s move comes on the heels of Indian Prime Minister Narendra Modi expressing concerns over deepfake videos last week.
“Deepfakes can spread unchecked within minutes of being uploaded. That’s why we need to take some very urgent steps to strengthen trust in society and protect our democracy,” Vaishnaw said at a news conference, recounting an incident earlier in the day involving a fake video clip of a prominent Indian minister urging citizens to vote for the opposition party.
The new regulation will also focus on strengthening reporting mechanisms for individuals to report such videos, and on proactive and timely actions by social media companies, Vaishnaw said.
He said measures “need to be more proactive because the damage can be very immediate,” adding that even taking action “hours” after reporting may not be enough.