AI and the automation of work
This article from Benedict Evans may be of interest. Here is a section:
Indeed, while one could suggest that LLMs will subsume many apps on one axis, I think it’s equally likely that they will enable a whole new wave of unbundling on other axes, as startups peel off dozens more use cases from Word, Salesforce and SAP, and build a whole bunch more big companies by solving problems that no-one had realised were problems until LLMs let you solve them. That’s the process that explains why big companies already have 400 SaaS apps today, after all.
More fundamentally, of course, there is the error rate. ChatGPT can try to answer ‘anything’ but the answer might be wrong. People call this hallucinations, making things up, lying or bullshitting - it’s the ‘overconfident undergraduate’ problem. I think these are all unhelpful framings: I think the best way to understand this is that when you type something into a prompt, you’re not actually asking it to answer a question at all. Rather, you’re asking it “what sort of answers would people be likely to produce to questions that look like this?” You’re asking it to match a pattern.
Hence, if I ask ChatGPT4 to write a biography of myself, and then ask it again, it gives different answers. It suggests I went to Cambridge, Oxford or the LSE; my first job was in equity research, consulting or financial journalism. These are always the right pattern: it’s the right kind of university and the right kind of job (it never says MIT and then catering management). It is giving 100% correct answers to the question “what kinds of degrees and jobs are people like Benedict likely to have done?” It’s not doing a database lookup: it’s making a pattern.
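The point can be made concrete with a toy sketch. This is not how any real LLM API works - it is a deliberately simplified assumption - but it captures the distinction Evans draws: each call samples a fresh, plausible answer from a distribution over "the right kind" of answer, rather than looking a fact up.

```python
import random

# Toy sketch (an assumption for illustration, not a real LLM API):
# the model samples from a distribution over pattern-matching answers.
# The candidate values below are the ones named in the article.
pattern_of_likely_answers = {
    "university": ["Cambridge", "Oxford", "LSE"],
    "first_job": ["equity research", "consulting", "financial journalism"],
}

def sample_biography(rng):
    """Each call draws a fresh, plausible-but-unverified biography."""
    return {field: rng.choice(options)
            for field, options in pattern_of_likely_answers.items()}

rng = random.Random()
bio1 = sample_biography(rng)
bio2 = sample_biography(rng)
# bio1 and bio2 are each internally plausible, but may disagree with
# each other - a pattern match, not a database lookup.
```

Every draw is the right kind of university and the right kind of job; none of them is a retrieved fact. That is why asking twice gives different answers.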
This gels with my experience of using ChatGPT. In market terms, the AI answer is a moving average: it gives you a general idea of what the market is doing, without being precise about what is happening right now or whether something important is happening at the margin. The difficulty, of course, is that all the interesting details are at the margin, because it is the marginal buyer who sets the price.
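The moving-average analogy is easy to see numerically. A minimal sketch, using made-up prices, of how a trailing simple moving average blends away exactly the marginal move that matters:

```python
# Minimal sketch with hypothetical data: a trailing simple moving
# average smooths a price series, averaging away the latest move.

def moving_average(prices, window):
    """Trailing simple moving average; one value per full window."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

prices = [100, 101, 102, 103, 104, 120]  # hypothetical jump at the margin
sma = moving_average(prices, 3)
# The last average blends the jump with older prices, understating
# what the marginal buyer just did.
print(prices[-1], sma[-1])
```

The last price is 120, but the final average sits well below it: the general pattern is right, while the marginal detail - the one that sets the price - is smoothed away.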
If I were writing a 3,000-word piece where I spent the first 2,000 words weighing both sides of the argument in a balanced manner, large language models would do a reasonable job of producing that.
With some editing, I could then focus the conclusion on the meat of the argument. That’s a labour-saving device: it could theoretically shave a three-hour job down to one hour. It’s not yet good enough to replace the analyst, but it could certainly improve efficiency. I see that as a clear path to commercial utility over the medium term, and it will eventually be integrated into many white-collar jobs.
Nvidia has been the go-to AI stock because it has a suite of dedicated AI chips. The successful launch of large language models has prompted other companies to refocus their efforts on building their own chips. Meta Platforms and Amazon are well on the way to commercializing their own designs. Google’s DeepMind is even using AI to help design AI chips.
Nvidia rebounded strongly from the $400 level on Monday. That area needs to hold if the “higher plateau” argument is to remain credible.
Meta has surged over the last 10 months and is now testing the sequence of higher reaction lows. It needs to hold the $289 level if a deeper process of mean reversion is to be avoided.
Outside of the small group of AI champions, Microchip Technology posted a large downside weekly key reversal three weeks ago and continues to follow through on the downside. This is a strong failed upside break signal and suggests the most likely direction of trading is a return to the lower side of the range.