
The year is 2005, and I have just uncovered a problem affecting 22 million people on our mobile prepaid network. After the momentary panic and managerial drama subsided, I was left with a list of approximately 4,000 mobile accounts that needed to be reconciled, each by a small individual amount. The spreadsheet was handed to me by my senior engineer with a grin on his face, saying, “These need to be done before you go home today.” It was 4:30 in the afternoon. That was my first week as a VAS Support Engineer at a mobile telecoms company.
The next morning at the office, I reported back to my senior that the list had been completed by 8 PM the previous evening. He was impressed by my determination and proceeded to show me a very short shell script that could have reconciled the same list in 30 seconds. Seeing the blood drain from my face as he showed me this magical tool, he quickly explained that he didn’t want me to use that shell script because it was more important to learn exactly what, how and why things are done the way they are. I’m sure there was some level of sadistic pleasure involved, but the lesson was valuable nonetheless, and I don’t hold a grudge at all, Josh. Using that script incorrectly and without a proper understanding of the mechanics involved could have produced 4,000 mistakes just as easily as it could have fixed 4,000 problems, ultimately making things worse for all those clients and negatively impacting my career prospects.
Fast forward to today, and we live in an age where AI-powered tools allow us to dramatically reduce the turnaround time on tasks, from days to mere minutes or even seconds. The questions are: Do we know what they’re doing? Do we know how they’re doing it, and the ever-existential “why” they’re doing it? Should we even care? (Spoiler: Yes, we should.)
These are important questions to ask because they apply to anything that forms part of a business. Anything of value operates on a what, how and why level. Businesses, in their simplest form, can be broken down into layers of those three key areas: the “What” of the business, “How” the business goes about achieving its goals, and, most importantly, “Why” it’s doing it in the first place.
It stands to reason that when we introduce resources or personnel into a business, they must operate in harmony with those levels and align with the business’s values. So, how risky is it to introduce, at every level of a business, a tool whose what, how and why we don’t fully understand?
Let’s take a quick look at the facts.
What does it do?
Taking the most popular AI tool on the market as an example, ChatGPT is a Large Language Model (LLM). It is designed to provide coherent and relevant responses to the input it receives. Most commonly, it engages in conversation with the user, helps with tasks, generates text (not unlike this blog entry, but I assure you, this blog post is fueled by late-night chocolate and iced tea, not AI), and more recently can even generate images that are getting more and more realistic and accurate. Essentially, it attempts to make you “feel” like a human is interacting with you from the other side of the screen. A very smart and well-read human.
How does it do it?
Machine learning. You’ve heard that term thrown around the office (or boring dinner parties) with reckless abandon. More specifically, though, these LLMs are trained using a branch of it called “deep learning”. In a nutshell, the model absorbs a vast amount of text-based material from various sources, such as the Internet itself, and then calculates the statistical patterns in the language: which words follow others, which concepts appear together more often, and how tone changes based on context.
Based on all of that, the LLM “learns” which words are statistically most likely to follow one another. That learning is then further refined by warm-bodied people and tweaked so that the output comes across as less machine and more “human”. So, when you type a question into an LLM, it doesn’t search a databank of answers; it generates a coherent reply, one word at a time, based on the patterns it has absorbed. There is no thought behind the reply. It is a remix of assimilated information, assembled by a massively impressive game of “rock, paper, scissors” into something coherent and grammatically correct.
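To make the “which words follow which” idea concrete, here is a deliberately tiny sketch of that statistical principle. This is a toy bigram counter over a made-up corpus, purely for illustration; real LLMs learn these patterns with deep neural networks over billions of documents, not a lookup table.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "a vast amount of text-based material".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which -- the "statistical patterns in the language".
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word):
    """Return the statistically most likely next word in this corpus."""
    return follows[word].most_common(1)[0][0]

# "cat" follows "the" more often than "mat" or "fish" do, so it wins.
print(next_word("the"))  # prints "cat"
```

There is no understanding anywhere in that loop, only counting, which is exactly the point: scale the same principle up enormously and refine it with human feedback, and you get something that merely looks like thought.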
Why does it do it?
There is no “why” in AI. It’s that simple. The closest an LLM such as ChatGPT can come to explaining why it “thinks” a suggestion is a good idea is to point to how often that suggestion appears in its training data, and to push the one stated most often. That’s why Google’s AI infamously suggested eating rocks as part of a healthy diet, gluing cheese to pizza and using petrol to make a spicy spaghetti dish. These (funny) AI hallucinations make for a good laugh, but they should also remind us that this new technology has flaws and could, potentially, crash a business faster than my brain after midnight chocolate, or at the very least, produce some questionable spaghetti. It’s the old adage of “garbage in, garbage out”, but even when an LLM is given the most relevant and correct information sources, it still comes up with questionable responses from time to time, and to date, no one has figured out precisely why. No pun intended.
Where is this going, Peter?
Circling back to how this information is relevant to the use of AI tools in our industry, and what it means to us, we need to ask why. As an industry that carries the interests and livelihoods of millions of people globally, our “why” is the most important level at which we operate. People have a “why”. Machines do not.
Machines are amazing in the “how” and “what” space. We can analyse data from farms and generate insights that were never possible before, because machines use complex algorithms to spot patterns and reference thousands of white papers and other academic resources in a millisecond. They can do something as trivial as summarising our meeting minutes, or they could save a life by spotting heart irregularities faster and more accurately than any cardiologist, simply by being better at calculating the numbers and odds. They can write a piece of code that generally does what you asked for.
But there are things that AI cannot do. It can’t write code with flair and ingenuity; the code it produces is often bloated and inefficient, and it cannot conceive of new and innovative ways to solve the problem. It cannot respond to questions in a way that takes a person or their circumstances into account, because it cannot empathise or pick up on the subtle emotional clues that everything might not be OK. It is completely incapable of following intuition, again, because it cannot think and does not have a gut feel about anything.
Most of today’s iconic businesses exist only because their founders had hunches. They did something that no one thought was a “good idea”, and they turned left when everyone else was turning right. If those founders had relied on AI back in the day, we might never have heard of Microsoft, because it was started by a wacky kid obsessed with computers at a time when computers were a niche. We wouldn’t have known about Apple, because AI would have suggested Wozniak build a Chevy truck in that famous garage instead of a computer, since a Chevy truck had a statistically higher chance of selling and making money. The stats, and by extension an LLM, would have said a home computer was a ridiculous idea.
There are other considerations to take into account, too. Many entry-level jobs are being replaced by AI tools. That’s great news for the business’s bottom line and its shareholders, but if you remove entry-level positions, how do you get new people into the industry and into your business? University students will find it harder and harder to get a foot in the door, because the bar to entry has been raised well beyond the ability of a “newbie” fresh out of university. How do you enter your first job with 5 years of experience?
Studies suggest that people’s critical thinking is eroding because they rely on tools like ChatGPT for answers to almost everything. People are using Google less and simply asking ChatGPT their questions. There’s less research happening and more copying and pasting of whatever ChatGPT spits out. If what I’ve seen on our public roads is anything to go by, we cannot afford to get any dumber. Don’t even get me started on what AI tools have done to our education system. Let’s just say that in about 10 years, your doctor’s or lawyer’s certification may not be “well-earned”.
Let’s not forget about the impact the use of LLMs has on our carbon footprint, with AI data centres expanding exponentially and bringing massive energy consumption requirements with them.
Before you walk out
Don’t get me wrong. I love LLMs like ChatGPT, and I use them fairly often. Without image generators like Midjourney, I would have no hope of ever creating usable illustrations. I am not an advocate against the use of AI tools. I understand that they hold the potential to change our world, and that change can swing in either direction, good or bad. I’m known for using the line “AI won’t replace you; someone who uses AI will”.
Like the internet back in the day, I know that AI technology is here to stay, and businesses need to adopt an AI strategy to remain relevant and competitive. Even in my role as a Scrum Master, I am constantly looking for ways to leverage AI to boost my value to my teams and my clients, and I share that knowledge with my colleagues, because I’d rather see us surfing the AI wave to new horizons than being swept away and drowned by it. Working smarter, not harder, is the way to go, after all, and I appreciate that after fixing 4,000 accounts manually. One by one. Josh.
We need to be vigilant and safeguard our ways of working and protect our “why”. We need to be cautious about which levels of our businesses we allow tools such as AI to gain a foothold in. We need to ensure that we gain from AI instead of being replaced by AI. The only real way to do that will always be with people. Because people serve people best, and people understand why.