Less than two years ago, we were all enthralled by the emergence of a technology that promised to disrupt all facets of our lives. What started as a live social experiment – opening large Natural Language Processing (NLP) models to the public's unscripted questions, in the form of ChatGPT – soon proved so effective that most now see it as a technology with the potential for major disruption.
However, over time, many observers' enthusiasm was curbed by the realization that physical limits constrain how much larger and more capable these models can become. When we ask ChatGPT a question online, a machine is crunching numbers – consuming more than 10 times the power of a Google search. It is not magic. Nor is it a tireless sentient being answering our questions – it is machines doing complex math. The more we ask of it, the bigger it needs to be.
Despite these physical challenges, many believe that the future domination of Artificial Intelligence (AI) through Large Language Models (LLMs) is inevitable. Others are more skeptical. Setting aside the enormous costs (financial and environmental) of building, training and maintaining these models – which proponents will argue are fixable; mere eggs broken in the pursuit of a sentient omelet – how the technology will mature and what societal role it will fulfil remain debatable.
The problem might lurk in its current design: the lack of intelligence, the "I" in AI. People are mesmerized by generative AI's output and the illusion of understanding it projects. But at its core, a generative AI model is an algorithm trained on human-supplied information: publicly available webpages fed into a computer, with contextual patterns mapped out.
Once its parameters have been fitted, it can take a text prompt and produce the statistically most likely combination of words (or of pixels, in an image) to satisfy the questioner's request. It is ultra-efficient predictive text, strung together in a way that is unsurprisingly (given its vast library of human-supplied training data) very human-like.
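To make the "predictive text" point concrete, here is a minimal, purely illustrative sketch in Python: a bigram model that counts which word follows which in a tiny hypothetical corpus, then emits the statistically most likely continuation. Real LLMs use transformer networks trained on vast corpora, but the underlying idea – predicting the next token from patterns in human-supplied data – is the same.

```python
from collections import defaultdict, Counter

# A toy "training set" of human-supplied text (hypothetical, for illustration).
corpus = ("the horse runs fast . the horse runs far . "
          "the horse eats hay .").split()

# Training: count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the word statistically most likely to follow `word`."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("horse"))  # → runs ("runs" follows "horse" twice, "eats" once)
```

The model has no idea what a horse is; it simply reproduces the most frequent pattern in its training data – which is the article's point, scaled down by many orders of magnitude.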
When generative AI draws a realistic horse, it produces the combination of pixels statistically most likely to satisfy its prompt. It doesn't rely on reasoning, draw outside the confines of its objective, or dream up an aesthetic rendering. The horse likeness is the outcome of a likelihood distribution learned from human-supplied training data. Nothing more – but, profoundly, nothing less.
The groundbreaking improvement introduced by Google engineers over past versions of predictive text – the transformer architecture – allowed deeper contextual considerations to be accounted for during training, along with much, much larger datasets to train on.
At its core, AI remains a parrot, not a mind… a remarkable achievement in mimicry and information collation, but certainly not sentient. But if given more time, data and computing power, will we ever see the emergence of super-human intelligent algorithms at our disposal – the so-called AGI revolution?
So far, no one has been able to convincingly argue why an algorithm, trained on human input data, will somehow be able to transcend its training set and supersede human intelligence.
A curious problem is that generative AI has always been portrayed by its progenitors and proponents as either a panacea for all our ills or a real existential threat – a temperate middle ground from insiders has mostly been lacking. This has succeeded in capturing the public's imagination, but on an unrealistic base assumption of its capabilities.
What we lack is a clear and universally accepted use case.
We've seen its capacity to improve productivity in repetitive tasks, its speedy (albeit not always trustworthy) information collation, and its astounding mimicry of human creativity in art, film and music. But can AI ever exceed human capabilities and leave us indefinitely dependent on its wisdom and guidance? That may remain science fiction.
A more realistic take on AI is that it will likely prove to be a useful productivity tool in many industries, not a grand source of displacement. The automobile made the horse trainer's vocation irrelevant, but not the person: an industry soon formed to maintain the disruptor – and we are all grateful for its invention. We should be similarly realistic and pragmatic about generative AI and its possible applications.
Finally, how should one go about investing in this technological revolution? When it comes to securing the rights to raw inputs for chip manufacturing, access to the top minds, and use of expensive computing equipment to develop the technology further – size and scale matter.
Mark Twain is often credited with observing that when everyone is looking for gold, it's good to be in the picks-and-shovels business. Investing in the companies that own the rights to the picks and shovels of tomorrow's applications may very well be the best course of action at this point – and few indices encapsulate this better than the Nasdaq 100. From what we know today, it certainly seems like a sensible horse to back.
*Satrix is a division of Sanlam Investment Management
Disclaimer
Satrix Investments (Pty) Ltd is an approved FSP in terms of the Financial Advisory and Intermediary Services Act (FAIS). The information does not constitute advice as contemplated in FAIS. Use or rely on this information at your own risk. Consult your Financial Adviser before making an investment decision. Satrix Managers is a registered Manager in terms of the Collective Investment Schemes Control Act, 2002.
While every effort has been made to ensure the reasonableness and accuracy of the information contained in this document (“the information”), the FSPs, their shareholders, subsidiaries, clients, agents, officers and employees do not make any representations or warranties regarding the accuracy or suitability of the information and shall not be held responsible and disclaim all liability for any loss, liability and damage whatsoever suffered as a result of or which may be attributable, directly or indirectly, to any use of or reliance upon the information.