Let’s talk about…Distinguishing between AI and human-written content

Could you tell if something is written by AI?

Artificial intelligence (AI) is becoming increasingly prevalent in our daily lives, from voice assistants and chatbots to predictive text and automated content creation. And while AI writing still has limitations in terms of creativity, humour, and empathy, it has come a long way in terms of natural language processing and understanding. With AI’s growing ability to write coherent sentences and passages, some are left wondering: would you even be able to tell if something is written by AI?

If you think you would – if you're confident you can always intuitively tell the difference – I ask: were you able to tell that the paragraph above was entirely AI-written?

The opening three lines of this article were in fact generated by ChatGPT, a popular, free (at least for now) chatbot developed by OpenAI. Funnily enough, when I went to use the chatbot for the first time to generate that opening, the first thing the site prompted me to do was complete a CAPTCHA to prove I wasn't a robot… a tad ironic, all things considered. But eventually I was all set up, joining the over 100 million unique users ChatGPT has already managed to amass since its launch in November 2022.

The meteoric success ChatGPT has seen in the short period since its launch is not only record-breaking in terms of user growth; it has also sparked a whirlwind of public debate around the potential (positive and negative) of such AI tools, given that, if you ask them to, these chatbots can write hundreds of words on almost any topic or hold a fairly coherent conversation.

OpenAI co-founder Elon Musk, who left the company in 2018 citing conflicts of interest, has called the chatbot “scary good”, warning that “we are not far from dangerously strong AI”. And while Musk may be making a more existential point about future AI, his comments also speak to the pitfalls of just how well current chatbots can mimic human-written content – a point being discussed more and more in the wake of ChatGPT’s runaway success and that of similar tools.

One worry people have is that if we’re truly unable to distinguish between something written by a human and something written by an AI chatbot (as increasingly seems to be the case), what’s to stop AI-written articles, books, poems and the like from matching, or even outperforming, the ones we write? Or what’s to stop people passing off AI’s work as their own, such as students using it to write their essays or complete their assignments?

We saw educational institutions in Ireland reference this exact issue earlier this year, with Quality and Qualifications Ireland (QQI), the watchdog for standards in Irish higher education, stating that many institutions have already begun reviewing policies around assessment and academic integrity following the surge in these tools’ popularity.

There is a genuine sense of apprehension around embracing AI tools, and given their ever-improving ability to produce coherent content in mere seconds, it’s understandable how this fear has come about.

However, at the same time, I do often wonder how much of our worries around AI are overly imbued with a science fiction-esque “don’t let the robots take over” bias. That’s not to disparage the validity of certain concerns around AI – such as the job losses mass automation could lead to, the environmental impact of all that processing power, security problems, data ethics, AI bias etc., all of which warrant their own individual discussions – but this specific area (automated text-generation) feels like one where a bit of cautious optimism could reasonably be extended.

After all, it’s misleading to suggest that modern AI is capable of perfectly replacing human-written (or at the very least, human-edited) content. As an anecdotal example, the generated paragraph that opens this article took a good few clicks of ‘regenerate response’, and some tweaking of the prompt, to get to something workable. Along the way I received incoherent or irrelevant responses, formatting errors, and even one or two false claims. Evidently, problems of nuance, higher-order comprehension, and tone are apparent in some generated responses, as is the occasional lack of accuracy – something we’ve already seen play out with students using AI for assignments, where chatbots reference incorrectly, fail to reference at all, or in some cases make up references altogether.

ChatGPT and similar tools, while indeed a landmark in how accessible they make such proficient AI, don’t necessarily herald the end of human involvement in content creation the way they’re sometimes reported to. AI-generated media will always require some degree of human oversight, and even then, I would argue that traditional media will continue to be valued even where it’s technically less efficient.

Sure, the growth of these tools means we need to adapt to ensure they’re used responsibly, but we do seem to be giving this focus – for example, college plagiarism detectors like Turnitin are implementing AI detection, and QQI’s National Academic Integrity Network met recently to discuss how to account for AI and how to address it with students. Who knows, maybe this will push institutions to lean further towards higher-order education that promotes critical thinking over rote memorisation – something the Leaving Cert model was criticised for over the years.

AI is already in our lives – if you use Google, Netflix, or any social media, or if predictive text options pop up on your phone while you’re typing, you’re interacting with it every day. That’s not to say future AI won’t be an entirely different ballgame (it most certainly will), but if we adapt in tandem and continue to prioritise ethics, responsible AI, and human control, we’ll be closer to ensuring AI fulfils its intended role: a tool that simply augments our workload, allowing us the time to focus on the more important and less menial things in life.