AI Has Come, but I Will Still Be Here
AI is set to disrupt most, if not all, kinds of knowledge work. But disruption shouldn’t be mistaken for replacement.
I’ve been using AI and automation tools for personal productivity for quite some time now—and frankly, it’s not as scary as it seems. In fact, I wish they had come along sooner.
Tech has been disrupting the “doing” of work for decades. But somehow it’s different this time. There is a lot of fear-mongering about how ChatGPT is smart enough to “think” its way into human irrelevancy. This is an insult to humanity and to the act of thinking.
It oversimplifies problem-solving into a series of simple, repeatable steps and ignores the complex nuances required to make solutions relevant and useful. Supposedly, articles, apps, or even entire business models can be mass-generated from simple prompts. Generated? Yes. Implemented properly? Not yet.
We shouldn’t underestimate ourselves. The human mind is capable of more than that; real problem-solving demands more of us than that; and AI tools can help us deliver exactly that.
I believe that human ingenuity is enhanced by AI tools, not substituted by them. We shouldn’t dismiss them, fear them, or even idolise them. They are what they are, and their effectiveness depends on how we use them.
Today, I’d like to celebrate humanity by highlighting what is (still) unique about us.
AI Can’t Contextualise
Contextualising problems is hard. Be it a feature request, a bug fix, or a sales proposal, resolving one requires you to:
Understand surrounding factors.
Reflect and break down complex problems into simpler components.
Structure the problems in a way that is easily understood and actionable.
Identify gaps in knowledge that need to be researched or tested.
Determine the desired outcome and what it should look like.
It is not a linear process. We have to skip, backtrack, and cycle through a series of messy steps to derive some sort of clarity. It is not a trait we are born with but a skill that is trained and gained through practice and experience.
That’s why consulting firms are a multi-billion-dollar industry. A core part of their job is to help everyone agree on what the business problem actually is, because contextualising problems can be subjective, self-diagnostic, opaque, and multi-layered. Correctly contextualising a problem goes beyond querying a database—it requires liaising with stakeholders up and down the value chain, as well as sideways with peers and partners.
In that sense, ChatGPT is disruptive in the same way that Google is disruptive.
Even with the entire internet at your fingertips, it’s only as useful as your ability to articulate your search terms. Even today, there are still knowledge workers with poor Googling skills yet cushy jobs. I didn’t even know that was possible.
ChatGPT operates the same way with prompts. It can only assume and accept context but never contextualise by itself. It can’t tell me “This is exactly your problem” and come up with a thesis explaining why. The only way that’s possible is for it to be literally omniscient.
Hence, knowing how to contextualise helps you sift the good questions from the bad—enabling you to know what to Google or what to prompt ChatGPT with. Without it, you operate on assumptions—falling down rabbit holes in pursuit of irrelevant results, which, by extension, makes you more irrelevant than ChatGPT ever could.
AI Can’t Organise
Generating content is not the same as organising content—and organisation is an area AI still struggles with.
At a fundamental level, most white-collar work is the organisation and manipulation of information, hence the term “knowledge work”.
Graphic design is aesthetic elements organised in a certain way.
An app is a series of code functions configured in a certain way.
A thesis proposal is researched information structured in a certain way.
We are all blacksmiths, hammering, moulding, and combining nuggets of information to make them relevant and useful.
Some AI tools attempt to solve this organisational problem. Note apps like Mem and Napkin, for instance, aim to do away with folders and tags altogether. It would be a dream, wouldn’t it? To have AI organise our data, folders, and notes automatically.
But doing so properly requires deep personalisation. How information should be organised changes depending on:
Context and use-case
Quantity and type of content
Time of day
Time of year
The file’s public or private accessibility
User preference
You could have an entire department run on generative AI, but you would still need a professional to vet the output and organise it in a way that makes it useful. Just because ChatGPT can generate code doesn’t mean we can skip coding classes.
If we take a step back: most AI tools in the news right now are generative AI. But do we really need more content?
We are centuries past the Gutenberg press. Books are no longer sacred or scarce. The challenge today is not a lack of content but an overabundance of it. Our value as professionals no longer lies in our ability to create more content but in our ability to curate it.
AI Can’t Dabble in Politics
When AI is left with the “doing” and humans are left with the “thinking”, it is politics that will determine the decision-making—and that’s precisely why we need to be more conscious and active within the political landscape.
As a human collective, we need to take action! But…what sort of action? For what purpose? In what manner? Through which policies?
Politics is a fascinating human-centric process because none of us can escape it. As long as one person’s decision will affect another, there will be politics.
Politicians debate the national budget.
Apartment owners fight over indoor pet policies.
A couple argues over what to have for dinner.
It is a game of personal interests and power plays that requires social, career, and financial capital. It is a system of values, communication, decision-making, and accountability. It is both a science and an art, and, more importantly, it’s not a playground AI can play in, for one simple reason: who is responsible for an AI’s decision, if it can even make one?
There is an ongoing “HustleGPT” movement where ChatGPT functions as the CEO of a newly created company with the main task of increasing profits. When fed with enough data, it’s able to make informed decisions and react to ever-changing conditions.
But in actual practice, the entity running the company is the flesh-and-blood CEO. ChatGPT serves only as an extension of the CEO. The CEO chooses to follow the instructions ChatGPT gives; employees, in effect, follow the instructions of a CEO advised by ChatGPT. ChatGPT can’t be responsible for a company—only a human can.
In a way, the speed of AI disruption could be a blessing. It has cut down a day’s work into mere hours and removed much of the busyness that clouds our judgment.
We now have breathing room to take a step back and think about our circumstances. We can start paying close attention to how decisions are made within our communities and to our own involvement in them. We can start connecting with our superiors, subordinates, and peers to work out what direction we’re heading in.
For many people, it’s perfectly understandable to opt out of such discussions. However, to not participate in politics is to let other people make decisions on our behalf. From that lens—perhaps we’re no different from AI after all.
A Conversation About Being Irrelevant
From what I can tell, people are genuinely impressed by AI technology. It is fun, amusing, and a helpful tool for our daily lives. The fear of AI mainly stems from it taking over our jobs and livelihoods.
I’m not saying that it wouldn’t. For many, that is the harsh reality. I think it is perfectly okay to associate our identities with our work and talents (I certainly do). However, I also believe that we are more than that, and it would be foolish to ignore the other aspects of ourselves.
We are not just professionals but also parents, children, partners, and siblings. We have interests, dislikes, hopes, and fears. We can be kind, cruel, nice, or mean. We are a bundle of insecurities and yet full of ourselves at the same time. Our stories and experiences shape who we were, who we are, and who we want to be.
Unlike us, AI neither understands nor generates meaning. Its illusion of sentience is a product of reinforcement learning. Only we can find purpose in our lives through the pursuit of meaning.
Yes, ChatGPT will automate a large part of our work and that will make us irrelevant. But that just means that we need to find meaning elsewhere—an obstacle we’re not unfamiliar with.
I prefer to look on the bright side. Finally, I don’t have to proofread each and every word I write! Finally, I don’t need to conjure marketing ideas out of thin air! Finally, I don’t have to spend hours troubleshooting the Python code screwing up my project!
AI is taking over the jobs we probably disliked in the first place. Now I can focus on what matters to me—spending time with my partner, finding ways to improve myself, and writing this article.
AI has come, and I will still be here. You will be here too, and so will everyone else. Frankly, I think it’s great to be here, so let’s just have fun.