India urged to ‘act quickly’ as deep fake videos hit Modi, Bollywood: ‘laws are fragmented’
“India needs to act quickly because its population of 1.4 billion is going to be the biggest testing laboratory and guinea pig experiment ground for AI content. If the prime minister can be targeted, then anybody else can as well,” said Pavan Duggal, chief executive of the AI Law Hub and a senior Supreme Court advocate.
India introduced stringent new rules to govern social media firms a few years ago under its Information Technology Rules, putting digital platforms such as X and Facebook under greater scrutiny. Social media companies with more than 5 million users are required to appoint tech compliance officers.
The main role of these officers is to remove content deemed “objectionable” by law enforcement agencies, or to challenge such requests in court. Under the rules, platforms must remove sexual content within 24 hours, while they have 15 days to remove non-sexual offensive content.
But companies often comply with the rules only half-heartedly, Duggal said, adding that the legal system needs an overhaul in this area.
“Given the disastrous nature of deep fake content, any kind needs to be removed within 24 hours. If it’s a video of a political leader, 15 days’ time can demolish his reputation,” Duggal said.
“India must come up with rules to combat deep fakes,” he said, highlighting that the country has no laws to police that particular form of content.
Deep fakes are becoming increasingly sophisticated and can be generated within minutes, tech lawyers said, warning that such content could overwhelm law enforcement efforts.
“Today, Indian laws are fragmented under different legislative acts to deal with such issues. But there is no law to define what is deep fake, so we must start with that,” said Anisha Patnaik, the founder of LexStart, a law firm for startups.
Besides deep fake videos, generative AI has also been used to create highly realistic fake images and even voice recordings of celebrities, Patnaik noted.
In August, China became the first country to introduce a law targeting generative AI, one of several regulations it has passed to curb the spread of different forms of harmful AI.
In most countries, however, such regulations have yet to be introduced or remain in the pipeline.
The Indian government has discussed plans to pass legislation called the Digital India Act, which would replace the existing Information Technology Act, but the proposal is still at an early stage and it is unclear to what extent it would include provisions to counter deep fakes.
Given the recent developments, Patnaik said she would be surprised if the issue of deep fakes were not adequately addressed by the government.
Sidharth Mahajan, a partner at Athena Legal, a New Delhi law firm, said that any new laws should cover “evolving technology” including AI.
Borderless fight
Tackling deep fakes through regulation is further complicated by the fact that such content often originates overseas, so addressing the problem requires extensive international cooperation.
International cooperation to fight cybercrime and share learning experiences has improved in recent years, said KPS Sandhu, head of global strategic initiatives at the Tata Consultancy Services Cyber Security Practice.
Nonetheless, countries need to step up such collaborative efforts, including by introducing an international convention to combat the threat, but this is expected to take years, Duggal said.
The most effective way to counter deep fakes is to deploy the latest technology and build related infrastructure, according to cybersecurity experts.
With more robust defence systems in place, authorities and companies can better detect deep fakes and neutralise the threat, Duggal said.
For instance, organisations can monitor for telltale signs of deep fake videos, such as speech, facial expressions and eye movements being out of sync with the rest of the footage, said Sanjay Kaushik, managing director of security and risk management consultancy Netrika Consulting India Private Limited.
But the most advanced AI systems are capable of simulating tiny movements accurately, he added.
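As a rough illustration of that heuristic, the sketch below scores how closely mouth movement tracks the loudness of the accompanying speech; genuine footage tends to show a strong correlation, while a crudely lip-synced deep fake often does not. The function name, synthetic signals and threshold here are hypothetical stand-ins rather than a production detector: in practice the mouth-openness values would come from a facial landmark tracker and the audio envelope from the clip’s soundtrack.

```python
# Minimal sketch of the audio-visual sync check described above.
# All inputs are synthetic stand-ins for real tracker/audio features.
import numpy as np

def sync_score(mouth_openness: np.ndarray, audio_envelope: np.ndarray) -> float:
    """Pearson-style correlation between mouth movement and speech loudness.
    Genuine talking-head footage tends to score high; poorly lip-synced
    deep fakes tend to score low."""
    m = (mouth_openness - mouth_openness.mean()) / (mouth_openness.std() + 1e-9)
    a = (audio_envelope - audio_envelope.mean()) / (audio_envelope.std() + 1e-9)
    return float(np.mean(m * a))

rng = np.random.default_rng(0)
frames = 300  # e.g. 10 seconds of video at 30 fps

# Synthetic speech loudness envelope, resampled to the video frame rate.
speech = np.abs(np.sin(np.linspace(0, 20, frames))) + 0.1 * rng.random(frames)

genuine_mouth = speech + 0.1 * rng.random(frames)  # mouth follows the audio
faked_mouth = rng.random(frames)                   # mouth unrelated to the audio

THRESHOLD = 0.5  # illustrative cut-off, not a calibrated value
for label, mouth in [("genuine", genuine_mouth), ("suspect", faked_mouth)]:
    score = sync_score(mouth, speech)
    verdict = "looks consistent" if score > THRESHOLD else "flag for review"
    print(f"{label}: sync score {score:.2f} -> {verdict}")
```

The design choice is deliberately simple: a single correlation score is cheap to compute at scale, but, as Kaushik notes, advanced generators can reproduce these tiny movements accurately, so such checks are best treated as one signal among many rather than a definitive test.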
Ultimately, the fight against deep fakes has to start with a revamp and constant updating of the relevant regulations.
“Law enforcement agencies in India are thoroughly unprepared. We are not giving the right kind of weight to AI-based crimes yet. But it has already arrived at our doors,” Duggal said.