If you’re like me, then the terms artificial intelligence (AI), chatbots, and large language models (LLMs) have flooded your news feed, social media timeline, YouTube account, and even your podcast playlist. It seems impossible to get away from the topic, and I mean that quite literally – even the villain in the latest Mission Impossible movie is a rogue form of AI called “The Entity”. If AI is of interest to you, then you’ve probably tinkered with OpenAI’s ChatGPT and marveled at its ability to write a poem, provide advice on fixing your golf slice (yes, this is what my chat history looks like), or explain the origins of pancakes. Our Citadel Asset Management (CAM) equity team has had to think about AI for many years now, so I feel we have been ahead of the curve in conceptualising AI, investing in it, and understanding the future in its context. In this article I want to share those thoughts in a way that, hopefully, is a little different to everything you’ve seen so far.
AI is not a new concept
The term AI first appeared as early as 1956, when computer scientist John McCarthy coined the phrase in the context of human-like intelligence from a computer. Crucially, AI had to pass the “Turing Test”, proposed by Alan Turing in his 1950 paper “Computing Machinery and Intelligence”. Turing argued that a machine could be considered truly intelligent if it could convince a human that they weren’t talking to a machine. In the 1940s, 50s, and 60s it was inconceivable for a machine to pass this test, which led McCarthy to quip that for AI to succeed it would need “1.7 Einsteins, two Maxwells, five Faradays, and the funding of 0.3 Manhattan projects”.
Why now for AI?
It might seem like an odd question, but how have we suddenly become so successful with AI, given that it has been talked about for over 70 years? The answer lies in another important bit of technology theory: Moore’s Law. In 1965 Gordon Moore, co-founder of Intel, posited that the number of transistors on an integrated circuit doubles roughly every two years. Stated plainly, computing power increases at an exponential rate because we fit ever more powerful chips into smaller and smaller forms every year.
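To make the arithmetic concrete, here is a minimal Python sketch of that doubling rule. The 1971 starting point (roughly the era of Intel’s first microprocessor) and the transistor count are my own illustrative assumptions, not figures from Moore’s paper.

```python
# A minimal sketch of Moore's Law: transistor counts doubling every two years.
# The 1971 baseline (~2,300 transistors, roughly the Intel 4004) is an
# illustrative assumption, not a figure from the article.

def transistors(year, base_year=1971, base_count=2_300):
    """Estimate transistor count assuming a doubling every two years."""
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors")

# Fifty years of doubling turns ~2,300 transistors into tens of billions.
# The exponential growth is the point, not the precise numbers.
```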
Moore’s Law is the bedrock of the technological revolution we have seen. It has allowed us to create powerful home computers, laptops, mobile phones, better cameras, fridges, televisions, and even speakers. In the past, a supercomputer took up the space of a large room, whereas one now comfortably fits in the palm of your hand. The room still exists, though, and it houses far more computing power than most people can comprehend.
Cloud computing, algorithms, and machine learning
The concept of machine learning sounds complicated, but it truly isn’t. We get a computer to do a task and reward it for doing that task properly. If we were teaching it to play chess, taking an opponent’s queen would be rewarded with points, while losing pieces would cost points. To learn, the computer would simply move pieces around the board, randomly at first, but would do so thousands and even millions of times, finally “learning” which combinations of moves result in success over an opponent. Thus, machine learning is a form of brute-force learning, as the sketch below illustrates. The trick is that a computer can repeat a process millions of times without getting tired, thereby learning far faster than humans can.
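To make that trial-and-reward loop concrete, here is a minimal Python sketch under heavily simplified assumptions: the list of “moves” and their point values are invented for illustration, and the rewards are made noisy so the machine genuinely has to learn from repetition. A real chess engine explores board positions rather than a fixed menu, but the principle of repeating the task thousands of times and keeping score is the same.

```python
import random

# A toy illustration of reward-driven, brute-force learning in the spirit of
# the chess example above. Moves and point values are invented for the sketch.

MOVES = ["capture queen", "capture pawn", "lose a piece", "neutral move"]
REWARD = {"capture queen": 9, "capture pawn": 1, "lose a piece": -3, "neutral move": 0}

value = {move: 0.0 for move in MOVES}  # the machine's learned estimate per move
plays = {move: 0 for move in MOVES}

for _ in range(100_000):               # repeat far more often than a human could
    move = random.choice(MOVES)        # random at first, as described above
    plays[move] += 1
    reward = REWARD[move] + random.gauss(0, 1)  # noisy feedback
    # Nudge the estimate toward the observed reward (simple running average)
    value[move] += (reward - value[move]) / plays[move]

best = max(value, key=value.get)
print(f"Learned best move: {best} (estimated value {value[best]:.1f})")
```

After enough repetitions, the running averages settle on the true values and the machine reliably identifies capturing the queen as the best move, despite starting from pure randomness.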
In addition to machine learning, we now have the advent of cloud computing. This allows us to store data in the cloud, but it also allows us to compute in the cloud. Sitting in my home in Cape Town, I could access a cloud facility anywhere in the world with vast supercomputing potential to process heavy data workloads. This has allowed us to build companies, mine bitcoin, and let machines learn, all without having to invest in new hardware first.
The stars align for AI
The above factors all coincided in the new millennium – the advent of vast computing power, the ability to use that power to let machines learn, and the ability to create cloud farms where data could be crunched remotely – producing an environment that has allowed AI to flourish very rapidly.
Crucially, as mentioned, AI isn’t new. There is a video on YouTube in which Google CEO Sundar Pichai demonstrates Google Duplex as a virtual assistant, booking an appointment at a hair salon and ordering a meal at a restaurant. He asks the AI to do this using his voice; the AI places a call, converses with a human, and completes the task. It is truly amazing and seems to pass the Turing Test, albeit for a very narrowly defined task.
The interesting thing is that this video appeared five years ago.
Effectively, Google had access to technology far better than ChatGPT years ago. This leads me to an important fact about the current version of AI: it has existed for years, if not a decade, at several of the largest technology companies in the world. The important change is that OpenAI decided to release this technology to the wider public, which began the global arms race for AI in the public domain. Companies like Google, Tesla, Amazon, and Meta have long used a combination of algorithms and machine learning to create products and services that we rely on today. That technology, however, is now effectively free for the general public to use, with some guardrails attached.
This isn’t true AI
Microsoft Word has a command – pressing “Ctrl + H” – that lets you replace any word with another throughout an entire document. I should, ideally, do that in this article and replace AI and artificial intelligence with machine learning and large language models. The reason is that what we have experienced is not true AI. Anyone who has interacted with ChatGPT, or some derivative, knows full well that they are interacting with a computer. Curiously, though logically, science fiction has already provided us with examples of true and proper AI, like J.A.R.V.I.S. in the Avengers and KITT from Knight Rider. That version of AI, which could pass the Turing Test, is known as artificial general intelligence (AGI). The AI we have been using is better described as narrow AI.
Yes, what we can do with ChatGPT is truly amazing, but it makes a variety of mistakes and isn’t exactly creating new ideas; it is merely repackaging ideas it has already been taught. The way the program works is that it simply tries to find the next most likely word that fits in a sentence – hence the name large language model. Given that definition, it is amazing that the program can create functional sentences, let alone provide in-depth responses on a variety of topics. The reason goes back to how it was taught. It hasn’t been confirmed, but it seems ChatGPT was trained on a vast portion of the available internet and was “rewarded” for constructing better and better sentences and responses. This is where humans often struggle to grasp the computing power behind these models – to go back to the chess example, ChatGPT has effectively played the sentence-building game millions, if not billions, of times. Initially it struggled to form a proper sentence, but it has performed the task so many times, on so many topics, that it now does it in a truly advanced way.
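To show how simple the underlying task is, here is a toy Python sketch of next-word prediction using a bigram model – counting which word tends to follow which, then sampling accordingly. The three-sentence corpus is invented for illustration; ChatGPT’s model is incomparably larger and trained very differently, but the core job of predicting the next word is the same.

```python
import random
from collections import Counter, defaultdict

# A deliberately tiny sketch of "predict the next word": a bigram model
# trained on an invented toy corpus ("." marks the end of a sentence).

corpus = (
    "the market moved higher today . "
    "the market moved lower yesterday . "
    "the fund performed well this year ."
).split()

# Count which word follows which
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Pick the next word in proportion to how often it followed `word`."""
    counts = following[word]
    return random.choices(list(counts), weights=counts.values())[0]

word = "the"
sentence = [word]
while word != "." and len(sentence) < 10:
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

Run it a few times and you will get different, but always grammatical, sentences from the toy corpus – a crude echo of how an LLM assembles fluent text from learned statistics rather than from understanding.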
So, what next for AI, LLMs, and the wider economy?
OpenAI has not created technology that can pass the Turing Test or be called AGI. However, it has proved that our computing capability has reached a level where initial forms of AI can be created. Moore’s Law tells us that computing power will only become more advanced, so we know that narrow AI will keep developing and eventually give us some form of AGI. That will allow us to have true virtual assistants, driverless cars, and even service workers able to help us with things like meals at a restaurant or even a legal dilemma.
If we allow ourselves to consider this world, what does the economy of the future look like? Most media I have consumed assume there will be mass joblessness as AI takes our jobs. While this seems like a bad outcome, the other side of the argument is that we will learn to do new jobs, just as we were forced to during previous economic revolutions, like the industrial revolution.
My theory is very different and requires some out-of-the-box thinking. We could allow a portion of human society to be unemployed in perpetuity. They would earn some form of income while unemployed, while those who wish to work, and have the required skills, would be paid to do so in this new society. The idea is not new: a form of it has already cropped up in various economic publications, with the benefit payment to the unemployed referred to as Universal Basic Income (UBI). You would have to keep the unemployed occupied and satisfied, which is where AI could create entertaining games, tasks, and even entire worlds for people to spend their free time in. If this sounds like the plot of “Ready Player One” or “The Matrix”, it’s because it is. A different, and less dystopian, theory relies on the population getting older. We know that developed nations have a growing proportion of older, and thus retired, people who no longer contribute to the workforce. This natural shift will require a technological stopgap to fill the work void and provide the resources, goods, and services this new population needs. In this world the loss in human capital would be replaced with technology, maintaining productivity and keeping inflation in check.
Yes, these theories seem very out there in the context of the world today, but as economists and portfolio managers, the CAM team prides itself on thinking far into the future to understand what the various scenarios could be.
What about AI investments?
I mentioned earlier that CAM has been thinking about and investing in the AI theme for years now. That is only partly true: we have been investing in the theme of big data computing. We realised that computing was getting better, and we understood that this would affect society at virtually every level. Thus, for the last few years, we have invested in companies throughout the computing supply chain and have benefitted significantly from these investments. The big data supply chain starts with companies creating photolithography machines that allow ever-finer features to be etched onto silicon wafers, producing smaller and smaller CPUs (central processing units). This spills over into the foundries that fabricate CPUs for a variety of businesses that previously relied on companies like Intel and AMD for their computing capability. It then logically spills over into the hardware and software companies that benefit from better computing power by providing better hardware and software, allowing us to work in the cloud, process more data, or simply have higher quality technology products in our homes and offices.
In conclusion
Whenever I see the words “artificial intelligence” I can’t help but add the word “narrow” to the description. The AI trend has well and truly taken over, with AI being talked about and hyped across a variety of product categories and companies. It is important to stand back and understand the true context of AI: its limitations, its benefits, and who could genuinely stand to benefit from its use. In the CAM team, we pride ourselves on our ability to conceptualise and strategise around big-picture concepts and to see both the near-term and long-term impacts on society as well as on our investment portfolio. I hope this article gives you a different perspective on the AI conundrum and provides much-needed context on a space that is developing in real time. As always, speak to your Citadel advisor if you have any questions or comments, or feel free to reach out to me directly.