Let’s talk about… AI and explicit content

Regulation needed to stop misuse of AI tools and spread of harmful content online

AI, its future, the fear around it, and whether any such fear is warranted have been popular topics in mainstream media and discourse for some time now.

Almost a year ago, I wrote a column touching on some AI concerns following the launch of ChatGPT (an AI chatbot), and shortly before fake AI-generated pictures of the Pope in a puffy jacket went viral, with many people believing they were real. In both situations, there was a lot of talk about how easily AI-generated content could be passed off as ‘genuine’ and what implications this could have in the future; such talk has only snowballed in the year since.

Taylor Swift deepfakes

That takes us to recent days, when the issue gained heightened prominence after deepfake pornographic images of singer Taylor Swift spread rapidly across X/Twitter. The fake images were eventually removed, and searches for her name were blocked to stop the spread, but not before the pictures had been shared over and over, with one image amassing 47 million views before it was finally taken down.

I say ‘finally’ because it took a considerable amount of time for X/Twitter to act on the situation. In fact, tackling the spread (initially, at least) largely fell to Taylor Swift’s fans, who mobilised to mass-report the images.

The majority of Taylor Swift’s fans are women, particularly young women, and their solidarity with the singer is often attributed not just to enjoying her music but also to her public persona as someone who champions women and talks often about feminism and womanhood. In coverage since the photos spread, many of the fans who mobilised others to get them taken down said they were spurred to act partly because they are fans of Swift, but also because it was the just thing to do; many, as women, felt the need to ‘stand up’ for a fellow woman.

Taylor Swift is far from the first public figure this has happened to. Many other A-list celebrities, influencers, streamers and others have faced similar situations, in which their likeness was superimposed onto explicit pictures and/or videos. And while celebrities are the easiest to deepfake, since there are so many high-quality pictures of them available, horrifyingly, the same can happen to the average person: there are countless apps that allow users to create deepfakes from a single photo and with zero expertise. It is a gross violation and a sickening situation to imagine, and while it has unfortunately been happening for some time, it is a topic that has remained relatively out of mainstream discourse until now.

What made this particular incident such a big news story was not just Swift’s massive renown, but also X/Twitter’s failure to respond properly and promptly. When we talk about AI, whether among those who welcome it as the way forward or those who consider its future inevitably sinister, one opinion everyone shares is that AI’s progress must be matched by equivalent progress in the regulations and standards around it. The tech companies behind AI programmes need to be kept in check, and so do the social media platforms where AI content ends up.

AI as a tool – for better or worse

In the column I mentioned earlier, I wrote that while it is easy, and perhaps understandable, to be wary of things like AI chatbots because of how convincingly they can produce ‘genuine’-seeming content, it is also conceivable that, approached the right way and alongside proper regulation, they could serve as a tool: something that helps humans rather than something that replaces ‘genuine’ or human content. The focus we bring to developing AI programmes, and the regulations around them, will massively influence what the future of AI looks like: whether it is closer to a way forward that helps us, or to the more sinister reality some fear.

Because the inherent problem with AI being, at its best, a tool that can aid humans is that there are a lot of humans out there seeking to do sinister things. Before despicable people created AI deepfake porn, despicable people photoshopped others onto explicit pictures. Before that, they cut and pasted physical photos.

The recent Taylor Swift situation didn’t happen in a vacuum; the core issue is long-standing and still ever-present in modern society, and has simply been catalysed by the proficiency and availability of AI. In fact, thinking about all this talk of AI and of women’s pictures being used pornographically without their consent, an anecdotal example comes to mind from a visit to Paris with a friend the other week.

We were taking the metro home one evening when a (highly inebriated) man took a seat beside two other girls in the carriage. We noticed him take out his phone and try to take pictures up one of the girls’ skirts, so we quietly told them what we had seen, and the four of us left the carriage. We had a brief conversation with the girls before going our separate ways, exchanging pleasantries about where everyone was from and what everyone was doing, but also talking about what had just happened. One of the girls told us that, unfortunately, this kind of thing was not uncommon in Paris, and that what can often happen in tandem is that the perpetrator will also take pictures of the girl’s face and put them into Google Lens (an AI image recognition tool) to find her social media accounts, in order to harass her and/or find her personal information.

It was chilling to hear about those girls’ experiences and to think what some people are capable not just of doing but even of conceiving. In other ways, though, it was par for the course: we were horrified, but not surprised, to see the man taking pictures. The Google Lens tactic the girls described, however, did surprise us; it was a form of harassment we hadn’t even conceived possible.

Everyone adapts to the times, adapts to change and progress, adapts to the tools at their disposal – and this includes even the immoral among us, those who may use those tools for harm.

Going forward

This experience came to mind when thinking about the Taylor Swift situation, not just because of the obvious links between women’s pictures being used pornographically without their consent and how AI is used for that, but also because in both scenarios the immediate mitigation came not from the powers that be (X/Twitter in the Swift story; security staff in the metro story) but from everyday people – and notably other women. And while I appreciate and value the ‘support other women’ and ‘do the just thing’ mentality that fuels action like this, at the end of the day this cannot feasibly be how such situations are dealt with; inaction from those on whom the responsibility falls cannot be accepted.

And it seems many agree: X/Twitter has been widely criticised for its inadequate and slow approach to tackling the recent situation. Backlash over its response (or lack thereof) has reinvigorated debates about the importance of keeping regulations around AI morally acceptable and up to date, and of holding tech giants to account for how their platforms can be misused for harm. The latter point was only bolstered further in recent days, as the top social media CEOs faced intense questioning from a US Senate committee over accusations that their companies had failed to protect kids from exploitation and abuse.

I hope the one silver lining to all these highly upsetting stories is that due pressure is put on tech giants to put proper safeguards in place, because we have already seen too many examples of how AI tools can be misused, and of how fake (and frankly immoral) AI content can be created and shared with impunity.