By 360info
Sexual deepfake abuse silences women and causes lasting harm, and the laws meant to protect them are inconsistent.
In early 2024, pop megastar Taylor Swift became the centre of a disturbing controversy. Sexually explicit deepfake images of her flooded social media, raising concerns about the misuse of generative Artificial Intelligence (AI) technology. Only after one image had been viewed more than 47 million times did the social media platform X (formerly Twitter) remove the content.
Swift's case was a wake-up call: it showed how easily generative AI can be used to create fake pornographic content without consent, leaving victims with few legal options and exposing them to psychological, social, physical, economic and existential harm.
The trend began in 2017, when a Reddit user uploaded realistic but entirely fabricated sexual imagery in which the faces of female celebrities were superimposed onto the bodies of pornography actors. Seven years on, nudify apps are readily accessible and advertised freely in people's social media feeds, including on Instagram and X. In Australia, a Google search for 'free deepnude apps' returns about 712,000 results.
A 2019 survey conducted across the UK, Australia and New Zealand found 14.1 percent of respondents aged between 16 and 84 had experienced someone creating, distributing or threatening to distribute a digitally altered image representing them in a sexualised way. People with disabilities, Indigenous Australians and LGBTQI+ respondents, as well as younger people aged 16 to 29, were among the most victimised.
Sensity AI has been monitoring online sexualised deepfake video content since 2018 and has consistently found that around 90 percent of this non-consensual video content featured women.
What happened to Swift is sadly nothing new: there have been numerous reports of sexualised deepfakes being created and shared involving female celebrities, young women and teenage girls.
Legal ambiguities
These digitally manipulated images pose significant ethical and legal challenges, prompting a re-evaluation of existing laws and of how such abuse is responded to. The problem is compounded on platforms with encrypted content, such as WhatsApp, where deepfakes can be shared without fear of detection or moderation.