You might still be sceptical about how much value ChatGPT can add to your work. Or perhaps you’re willing to try it, but are not quite sure how it works.
We interviewed Anaïs Monet, an AI and data science researcher at the Temasek Lab in NUS, to learn about the wider impact of artificial intelligence.
Not a mind, but a model
One of the biggest misconceptions Anaïs encounters is how “AI is like magic”. With a seemingly “magical” stream of words answering your queries on tools such as ChatGPT, some might presume there must be a “mind” behind it.
But AI is more of a model, rather than an independent, thinking mind.
Computer science professor Cal Newport, in his article for The New Yorker, asked a pertinent question, “What kind of mind does ChatGPT have?”
In fact, ChatGPT is less mind, more math. Here’s how a tool like ChatGPT functions:
- You insert a prompt.
- Your input is passed through a series of layers, with each layer picking out relevant features in your text.
- Based on those features, the model assigns votes to candidate words and picks the one it thinks best fits the prompt.
- Rinse and repeat.
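The steps above can be sketched as a tiny loop. This is only an illustration, not the real architecture: a real model learns billions of weights, whereas here the "votes" are a hypothetical hard-coded table (`NEXT_WORD_VOTES`) invented for this example.

```python
# Toy "vote and repeat" loop. The vote table below is made up purely
# to illustrate the steps above; a real model learns these scores.
NEXT_WORD_VOTES = {
    "the": {"cat": 0.6, "dog": 0.3, "piano": 0.1},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.5, "quietly": 0.5},
}

def generate(prompt_word, steps=3):
    words = [prompt_word]
    for _ in range(steps):
        votes = NEXT_WORD_VOTES.get(words[-1])
        if votes is None:  # no candidates learnt for this word
            break
        # Pick the word with the most votes, then rinse and repeat.
        best = max(votes, key=votes.get)
        words.append(best)
    return " ".join(words)

print(generate("the"))  # "the cat sat down"
```

Each pass picks one word and appends it, and the extended text becomes the input for the next pass, which is the "rinse and repeat" in the list above.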
Imagine that you were in a factory that creates words, not trinkets.
The raw materials of this factory are the prompts you feed it. The words are then filtered through a sieve. At this first sieve layer, a little robot carefully looks at the words you’ve just fed in.
The robot then races off into the vast factory, which holds reams and reams of text it has learnt from, and starts looking at relevant sections. For example, you might say, “Write me an investment case to buy a property based on Warren Buffett’s principles.”
The robot then goes to the section with Warren Buffett’s past investment writings and starts looking.
The words now drop through the next sieve layer. At this layer, it’s a different robot. This robot starts assigning votes to the words found in the compilation of writings from Warren Buffett. It weighs them, and then finally spits out the highest-ranking set of words.
Imagine layer after layer of this.
An important thing to note here is that the model assigns votes to the words it already has in its knowledge base, and then spits out the highest-scoring string of words to you. This makes the final output largely dependent on the quality of the knowledge base (how effective the little robots are at their job), and the queries (raw materials) you’ve fed.
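The turn from votes into a "highest-scoring string of words" can be made concrete with a minimal sketch of a softmax, the standard way raw scores become probabilities. The candidate words and their scores below are invented for illustration.

```python
import math

# Hypothetical raw scores ("votes") for a few candidate next words.
scores = {"invest": 2.0, "buy": 1.0, "sell": 0.5}

# Softmax: exponentiate each score, then normalise so they sum to 1.
total = sum(math.exp(s) for s in scores.values())
probs = {w: math.exp(s) / total for w, s in scores.items()}

# The highest-scoring candidate wins.
best = max(probs, key=probs.get)
print(best)  # "invest"
```

Notice that the model can only redistribute weight among words it already knows: if the knowledge base is poor, no amount of voting will produce a good answer.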
Moreover, its output may be grammatically correct, but not necessarily useful. It is the human who has to make sense of these words and turn them into something of value.
AI is not new…but can it replace your job?
AI has always been around us. YouTube recommendations are driven by machine learning, based on how the algorithm has seen users interact with videos (with criteria such as length of viewing, bouncing off the video before watching 30 seconds of it, etc.). What goes into your social media feeds is also driven by the same form of machine learning.
If you look closely at what ChatGPT does, Cal Newport explains how it writes “combinations of known topics using a combination of known styles”.
It is reproducing language, but certainly not doing what most knowledge workers do.
How much of your job today involves copying and pasting, manipulating sentences, and synthesising them coherently? Probably not much.
Rather, you’re asked to combine, analyse, or infer from different bodies of knowledge, to create something of value to the company – tasks that are hardly close to what ChatGPT can do.
Anaïs leaves us with a final thought, “Ultimately, we have to recognise that AI is a tool that’s there to help. Being open is the first step towards becoming friends with it.”
This article is contributed by Live Young and Well.