Will ChatGPT Replace Writers?
A few weeks back, over some pints, an old friend asked me, “Do you think ChatGPT will replace us one day?” We both work in the creative industry, which was once assumed to be technologically resilient. But like truck drivers facing self-driving vehicles, it is now our heads on the chopping block.
By now, most readers are well aware of the disruptive capabilities of ChatGPT. OpenAI and its counterparts have been making inroads in the generative A.I. space, mainly for text, images and code. With such rapid development, should the average knowledge worker be concerned?
New Guns, Same Battlefield
Despite what pundits might say, generative A.I. in its current state will not replace creators but will serve as a supplementary tool within existing workflows. Article writing with ChatGPT, for instance, still needs human involvement to craft deliberate prompts and scrutinise the output, fact-checking included. What was once an hours-long writing endeavour can be shortened to mere minutes.
To put things in perspective, asking ChatGPT to replace the writing process is like expecting a Thermomix or Instant Pot to replace my mother’s cooking. It is like believing traditional Japanese knife makers will be put out to pasture just because a factory opened up nearby, churning out thousands of knives a day.
I agree ChatGPT will affect large swathes of the content creation industry, but mass-produced, low-quality content has always been primed for disruption. Serious writers have long grappled with content mills, where regiments of writers are paid by the cent per word. Likewise, reputable artists have struggled with copyright issues and low-effort copycats.
The threat of mass-produced content is not new; it has merely shifted to a digital landscape. Many will pay more for quality content, but they need a reason to do so. While grammatically correct, articles produced by ChatGPT are generic, lack substance, and are quite likely factually incorrect. Although the competitive landscape has drastically intensified, it is not a battleground we creators are unfamiliar with.
How the Sausage Is Made
The truth is that A.I. has long been highly disruptive in niche industries. Microsoft’s code editor VS Code, for instance, supports GitHub Copilot, an extension that auto-completes code and was trained on code hosted on the popular code-hosting platform GitHub. Auto-transcription services built into Google Meet and Microsoft Teams have put manual transcribers out of business, and are even disrupting incumbent auto-transcribers like Otter.ai.
ChatGPT is not unique in this regard. It stands out because of the leap in capabilities over its predecessors and, more importantly, because it brought A.I. tools to the mainstream, thanks to its high accessibility and low learning curve. Anybody can sign up for an account and start conversing with ChatGPT immediately. Its responses seem intellectual and give an illusion of sentience, but the A.I. is not sentient.
In overly simplified terms, ChatGPT is a language model that makes a series of predictions based on the user’s prompt and the conversation history. Think of it as repeatedly tapping the auto-suggested word on your smartphone when typing a text message, except that the sentence generated actually makes sense.
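For the technically curious, here is a toy sketch of that “keep tapping auto-suggest” idea, written in Python with an invented vocabulary and made-up probabilities. It illustrates next-word prediction in general, not OpenAI’s actual code.

```python
import random

# A toy "language model": for each context word, the statistical likelihood
# of what word comes next. Real models learn billions of such patterns from
# training data; these numbers are invented purely for illustration.
NEXT_WORD_PROBS = {
    "the":     {"cat": 0.5, "dog": 0.3, "sausage": 0.2},
    "cat":     {"sat": 0.6, "ran": 0.4},
    "dog":     {"barked": 0.7, "sat": 0.3},
    "sat":     {"down": 0.8, "quietly": 0.2},
    "sausage": {"is": 0.9, "sat": 0.1},
}

def generate(prompt: str, max_words: int = 5) -> str:
    """Repeatedly pick the next word, like tapping your phone's auto-suggest."""
    words = prompt.lower().split()
    for _ in range(max_words):
        context = words[-1]                       # only look at the last word
        candidates = NEXT_WORD_PROBS.get(context)
        if not candidates:                        # nothing learned for this word
            break
        # Sample the next word according to its likelihood; no "understanding",
        # just statistics.
        next_word = random.choices(
            list(candidates), weights=candidates.values()
        )[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))   # e.g. "the cat sat down"
```

Real models consider far more context than the last word and work with numerical word embeddings rather than raw strings, but the principle is the same: pick a statistically likely continuation.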
ChatGPT is incapable of “understanding” the semantics behind the prompts it is given or the words it generates. To it, words are just ones and zeros transformed using word-embedding techniques and pieced together using statistical likelihoods. It appears intelligent because it excels at prediction, learnt from petabytes of training data scraped from the internet. It is not an understanding machine but a predictive one.
Therefore, ChatGPT struggles with fact-checking, because it does not comprehend what “truth” is (this might change with the integration of additional modules). It is also why conversations with ChatGPT can drift from weird to outright disturbing as they drag on, the accumulating inputs pushing the deep-learning model out of whack.
It Boils Down to the State of A.I. Development
Source: Vincy Khandpur & Shikhar Sahni
While I’m still assured of my job security, my opinion might change drastically depending on the state of A.I. language model development.
Essentially, technological life-cycles follow a sigmoid curve. If we are at the middle or tail end of this wave of innovation, our circumstances would resemble the internet in the 1990s. The infrastructure will give rise to companies competing to build applications on top of it while the underlying technology gradually matures and stabilises. The use cases might differ, but the A.I. equivalents of Napster and AOL will rise and fall, paving the way for even larger corporations in the future.
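For those who want the shape spelled out, the curve in question is the standard logistic (S-shaped) function often used to describe technology adoption; the form below is the textbook version, not anything specific to A.I.

```latex
% Logistic (sigmoid) growth curve for adoption over time t:
%   L   = the saturation level (maximum adoption)
%   k   = steepness of the growth phase
%   t_0 = the midpoint in time, where growth is fastest
f(t) = \frac{L}{1 + e^{-k(t - t_0)}}
```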
However, if we are still at the early growth stage of this technology, its potential for disruption will cause heavy whiplash. The companies with the most resources will reign supreme. A.I. will benefit the many yet impoverish most, while enriching only a few. I mentioned the idea of integrating fact-checking modules into ChatGPT, but this opens a can of worms. Who gets to decide what is factually correct and what is not? How will the results inform and direct human decisions? The topic of ethical A.I. has been stuck in the regulatory inbox for the longest time; hopefully ChatGPT has forced the issue, making it too hard for authorities globally to ignore.
Adapting to Change
So what does this mean for regular Joes and small to mid-sized companies?
For professional knowledge workers, I highly recommend embracing premium digital tools for everyday use. In fact, I would argue that A.I. development within the productivity space is far more interesting than what ChatGPT has to offer.
As an example, Reclaim.ai and Motion auto-schedule your daily tasks onto your calendar based on each task’s priority, estimated duration, and deadline. If a task takes longer than expected, or a meeting gets cancelled at the last minute, the algorithm readjusts your calendar accordingly, letting you know what needs to be done at any given time.
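As a rough illustration of the idea (and only that; I do not know how these products actually implement their scheduling), here is a minimal Python sketch of a greedy earliest-deadline-first scheduler that can simply be re-run whenever plans change:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Task:
    name: str
    priority: int          # higher = more important
    duration_hours: float
    deadline: datetime

def schedule(tasks, start):
    """Greedily place tasks on the calendar: earliest deadline first,
    with priority as the tie-breaker. Re-run after any change to 'readjust'."""
    ordered = sorted(tasks, key=lambda t: (t.deadline, -t.priority))
    slots, cursor = [], start
    for task in ordered:
        end = cursor + timedelta(hours=task.duration_hours)
        slots.append((task.name, cursor, end))
        cursor = end
    return slots

now = datetime(2023, 3, 1, 9, 0)
tasks = [
    Task("Draft article", 2, 3, datetime(2023, 3, 2, 17, 0)),
    Task("Review edits", 3, 1, datetime(2023, 3, 1, 17, 0)),
]
for name, begin, end in schedule(tasks, now):
    print(f"{begin:%a %H:%M} - {end:%H:%M}  {name}")
```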
For avid note-takers, Mem.ai uses natural language processing (NLP) to surface notes similar to the one you are working on, introducing a folder-less and tag-less way of organising thousands of notes. It is also among the first platforms to offer GPT functionality grounded in your own data rather than just the open internet. This means it can offer personalised book recommendations based on your book reviews, or come up with unique marketing campaigns based on existing meeting notes.
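“Surfacing similar notes” generally boils down to representing texts as vectors and ranking them by similarity. Here is a deliberately crude sketch using word counts in place of real neural embeddings; it conveys the general idea, not Mem.ai’s actual implementation.

```python
import math
from collections import Counter

def embed(text):
    """A crude bag-of-words 'embedding'; real products use neural embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

notes = [
    "Meeting notes: Q3 marketing campaign ideas",
    "Book review: thoughts on Deep Work",
    "Marketing brainstorm for the new newsletter",
]
current = "Drafting the newsletter marketing plan"
current_vec = embed(current)

# Rank existing notes by similarity to the note being written.
ranked = sorted(notes, key=lambda n: cosine(embed(n), current_vec), reverse=True)
print(ranked[0])   # the most similar existing note surfaces first
```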
Companies should also shorten their cycles of tool adoption and abandonment. With the shift away from perpetual licences to monthly and yearly subscriptions, it is now easier and, in fact, cheaper to do so. This involves empowering small teams with the agency to choose the software best suited for the job.
For instance, my current department uses Notion for project management and as the single source of truth. Canva replaced the Adobe suite as our design tool of choice, while Miro and MindMeister became the defaults for brainstorming and post-mortems. We also migrated away from locally stored phone contacts to a proper customer relationship management (CRM) tool.
Just introducing these tools into the workplace is not enough. Proper change management means setting liberal yet comprehensive policies that balance work efficiency, data security and future-proofing, not to mention securing employee buy-in, running training regimes, and handling migration exercises.
Although the initial process may seem tedious, the benefits of having an agile tool adoption environment pay dividends in the long run. In fact, many of the tools my department adopted do not even have built-in A.I. functionalities.
That is because well-designed software provides more holistic features and a better user experience than the status quo. It makes teammates more productive and more comfortable adopting A.I. tools in the future. More importantly, it makes working five days a week a more pleasant experience. If Asian countries can fuss over elegant stationery for everyday paperwork, why can’t knowledge workers be fussy about the digital tools we use?
I emphasise this point because many companies I have come across insist on legacy tools. While the industry has largely shifted to the cloud for document handling, some still cling to software built in the 1990s. I can understand if there are compelling reasons to do so, such as backwards compatibility or regulatory requirements, but more often than not, this is not the case. Why give farmers hoes to till the land when the industry has moved on to tractors?
Many believe that technology is developing faster than humans can adapt, but I disagree. Humans have always excelled at adoption; it is a key reason we are successful as a species. The cycles of tool adoption are getting shorter, with ChatGPT becoming the fastest consumer application to reach 100 million users, doing so in a matter of months. Those 100 million users are not geniuses but everyday folks.
Knowledge workers today are no longer measured just by the skills they have or the experience they have garnered, but by their ability to embrace and scrutinise new tools, and to unlearn, learn and relearn the workflows and technologies at hand. We are all adapting to a new age of A.I.-powered digital tools.