My experience of using ChatGPT AI
Since the first lockdown, Clearleft has become a remote-first organisation. This means we have some important rituals to help foster our sense of team and community. One of those is a fortnightly ‘Laundromat’, an hour-long hybrid session open to everyone for loosely work-related chat and debate. Last week the topic of artificial intelligence (AI) came up. The tone of conversation was, if not dystopian, then definitely cautious. However, there was some excited curiosity around the ChatGPT tool.
Playing around with ChatGPT was intriguing, particularly when asking questions that would be difficult to answer otherwise, such as “explain why it’s impossible to go faster than the speed of light in terms a 9-year-old would understand”. The answers returned to any query are phrased extremely well. They’re a bit dull and repetitive, but better than a lot of human-written prose I’ve seen.
And here’s the rub. The responses are written well enough that one can be sucked into treating the answers as definitive. But you need to take the opposite approach: you should question all output with as much rigour as any other unsubstantiated claim. There is no actual intelligence or judgement involved, just increasingly clever pattern matching. Remember the AI is trained on the morass of the web, with all its trolls, fake news and tedious marketing guff. Even its own FAQs say ChatGPT may be inaccurate, untruthful, and otherwise misleading at times.
So if it can’t be trusted without investigation, what’s it good for?
There’s no denying its utility. On the face of it, ChatGPT could be really helpful – it even generates decent code snippets if you ask the right questions. As John Naughton wrote in the Guardian, “at best, it’s an assistant, a tool that augments human capabilities […] it’ll soon be as mundane a tool as Excel.”
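To give a flavour of the kind of code-generation request I mean, here’s a hypothetical example of my own (not a transcript of an actual ChatGPT exchange): a prompt like “write a Python function that turns a blog post title into a URL slug” tends to come back with something along these lines.

```python
import re
import unicodedata

def slugify(title: str) -> str:
    """Turn a blog post title into a URL-friendly slug."""
    # Strip accents by normalising to ASCII, dropping anything that won't convert
    ascii_title = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode("ascii")
    # Lowercase, then collapse runs of non-alphanumeric characters into single hyphens
    slug = re.sub(r"[^a-z0-9]+", "-", ascii_title.lower()).strip("-")
    return slug

print(slugify("My experience of using ChatGPT AI"))  # -> my-experience-of-using-chatgpt-ai
```

The same caveat applies here as to its prose: treat a generated snippet as a starting point to review and test, not as finished code.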
I found that if you already have some basic knowledge of the subject you’re exploring, it can be a really useful tool to get the writing juices flowing. As Deborah Carver put it,
“AI generators spit out a lot of words quickly, but they ultimately spout some valuable knowledge amid the clutter. I estimate 20-35% of the AI-generated language is useful, adding phrasing and ideas that may not have been considered in an initial draft or outline.”
For example, I wanted help expanding on ‘internal service design’, a term we use at Clearleft. I knew roughly the message I wanted to get across: get it right for employees and you get it right for customers. But I was less sure about how to expand upon that with a bit more detail.
I asked ChatGPT for a definition and repeated the query a few times. Some parts of the answers were not really correct, but otherwise it provided some useful points and turns of phrase I could easily adapt. I wasn’t asking for an answer I could parrot back – as I said, the AI output is pretty dull and repetitive, at least at the moment. But it was a really useful tool for getting unblocked. The takeaway here is to use ChatGPT as an input rather than an output.