The Rampant Rise in the Misuse of AI-Generated Deepfakes

The most recent case highlighting the growing jeopardy posed by synthetic media is a landmark ruling by the Bombay High Court upholding an interim injunction to protect the personality rights of Bollywood actor Akshay Kumar. Justice Arif Doctor, who heard the case on Wednesday, described the hyper-realistic capabilities of deepfake technology as "indeed terrifying", noting that the sophistication of modern AI-generated content makes it nearly impossible to distinguish the fake from the real.
The court intervened after reports of deepfaked videos depicting Kumar making communally inflammatory statements and promoting sham gambling networks. The bench emphasized that the consequences of spreading such content could be severe, threatening both the reputations of celebrities and public order.
To understand the court's heightened concern, one should consider how fast the underlying technology has developed. The breakthrough came in 2014 with the introduction of Generative Adversarial Networks (GANs). A GAN is a zero-sum game between two neural networks: the Generator, which produces synthetic data (e.g. a human face) from random noise, and the Discriminator, which compares the Generator's output against real-world data to identify forgeries.
Over millions of training iterations, the Generator learns to fool the Discriminator, eventually achieving photorealism that surpasses the limits of human perception. In parallel with GANs, autoencoders, which use a compression–decompression cycle, enable face-swapping by projecting the expressions of a "source" person onto a "target" one.
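The adversarial loop described above can be illustrated with a deliberately tiny sketch: instead of faces, the Generator here learns to imitate a one-dimensional Gaussian "real data" distribution, and both networks are reduced to single linear units. All names, values, and the learning-rate choice are illustrative assumptions, not part of any real deepfake system.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

REAL_MEAN = 4.0          # the "real data" distribution: N(4, 1)
lr, batch = 0.05, 64

# Generator G(z) = wg*z + bg maps random noise to a synthetic sample.
wg, bg = rng.normal(), 0.0
# Discriminator D(x) = sigmoid(wd*x + bd) scores "real" vs "fake".
wd, bd = rng.normal(), 0.0

for step in range(2000):
    z = rng.normal(size=batch)
    x_real = rng.normal(REAL_MEAN, 1.0, size=batch)
    x_fake = wg * z + bg

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(wd * x_real + bd)
    d_fake = sigmoid(wd * x_fake + bd)
    wd -= lr * (np.mean((d_real - 1.0) * x_real) + np.mean(d_fake * x_fake))
    bd -= lr * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Generator step: push D(fake) toward 1 (non-saturating loss),
    # i.e. learn to fool the updated Discriminator.
    x_fake = wg * z + bg
    d_fake = sigmoid(wd * x_fake + bd)
    dx = (d_fake - 1.0) * wd          # gradient of the loss w.r.t. x_fake
    wg -= lr * np.mean(dx * z)
    bg -= lr * np.mean(dx)

# After training, generated samples cluster near the real mean.
print(f"generated mean is roughly {bg:.2f} (real mean is {REAL_MEAN})")
```

The same tug-of-war, scaled up to deep convolutional networks and image data, is what drives photorealistic face synthesis.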
Beyond the entertainment sector, deepfakes have become a powerful tool of manipulation in the political and financial spheres. The 2024 Indian General Election, the largest democratic exercise in world history, saw synthetic media deployed at scale. Political parties used multilingual voice clones to deliver speeches in regional dialects the candidates did not speak fluently. More controversially, the DMK party used AI to "revive" its late leader Muthuvel Karunanidhi (who died in 2018) to endorse his son's leadership. Some viewers took this as an imaginative homage, but critics warned it was a political exploitation of emotion aimed at swaying the mood of voters.
The menace has now spread to high-stakes financial settings. Cybercriminals are increasingly moving beyond simple presentation attacks (holding a photograph up to a camera) toward sophisticated identity-first fraud. Three pillars of modern AI fraud:
1. Camera Injection Attacks: malware intercepts a real-time camera feed and replaces it with a pre-recorded deepfake, tricking live KYC checks.
2. Voice Cloning: AI bots speak in a cloned copy of the victim's voice, or impersonate a bank official, to demand 2FA codes in real time and bypass MFA.
3. Real-Time Face-Swapping: fraudsters insert the victim's face into their own live video calls to circumvent biometric authentication.
In response to this mounting pressure, the Indian Ministry of Electronics and Information Technology (MeitY) substantially amended the Information Technology Rules in October 2025, establishing a strict framework for Synthetically Generated Information (SGI) that comes into force in 2026.
* Mandatory Labelling: AI-modified visual content must carry a conspicuous label covering at least 10% of the screen area.
* Audio Disclosures: AI-generated audio must include an audible disclosure within the first 10% of its duration.
* Metadata Tagging: intermediaries must embed permanent, unique metadata tags to make content traceable.
* Takedown Timelines: platforms must remove non-consensual or harmful SGI within 36 hours of receiving a report, or within 3 hours for election-related misinformation.
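The numeric thresholds in these rules lend themselves to straightforward checks. The sketch below is a hypothetical compliance helper, not any official MeitY tooling: the function names and the example video dimensions are invented for illustration, while the 10%, 36-hour, and 3-hour figures come from the rules above.

```python
from datetime import datetime, timedelta

MIN_LABEL_AREA_FRACTION = 0.10   # visual label must cover >= 10% of the frame
AUDIO_DISCLOSURE_FRACTION = 0.10 # audible disclosure within the first 10%
DEFAULT_SGI_HOURS = 36           # takedown window for harmful SGI
ELECTION_MISINFO_HOURS = 3       # takedown window for election misinformation

def label_is_compliant(frame_w: int, frame_h: int,
                       label_w: int, label_h: int) -> bool:
    """Hypothetical check: does the label cover >= 10% of the frame area?"""
    return (label_w * label_h) >= MIN_LABEL_AREA_FRACTION * (frame_w * frame_h)

def disclosure_window_seconds(duration_s: float) -> float:
    """Hypothetical: the audio disclosure must finish within this window."""
    return AUDIO_DISCLOSURE_FRACTION * duration_s

def takedown_deadline(reported_at: datetime,
                      election_related: bool = False) -> datetime:
    """Hypothetical: deadline by which a platform must remove reported SGI."""
    hours = ELECTION_MISINFO_HOURS if election_related else DEFAULT_SGI_HOURS
    return reported_at + timedelta(hours=hours)

# Example: a 1920x1080 video with a 640x360 label (about 11.1% of the frame).
print(label_is_compliant(1920, 1080, 640, 360))   # True
report = datetime(2026, 3, 1, 12, 0)
print(takedown_deadline(report, election_related=True))  # 2026-03-01 15:00:00
```

In practice, platform-side enforcement would also need to verify the permanence of metadata tags, which this sketch does not attempt.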
The attendant risks notwithstanding, the technology offers transformative benefits. In medical care, voice cloning has proven a lifeline for patients with neurodegenerative disorders like ALS, allowing them to speak in their own voices even after losing the physical ability to articulate. Virtual physicians also improve health literacy in rural India by offering personalized medical guidance in hundreds of local languages.
The challenge ahead, as 2026's so-called "Year of Authenticity" unfolds, is to build a resilient, media-literate community. Although laws like the IT Rules 2025 provide a much-needed protective layer, the final safeguard is a citizenry mindful of the mechanics of the "Liar's Dividend": the phenomenon by which the mere existence of deepfakes allows people to deny the truth just as easily as others can fabricate falsehoods.