Will AI Crash the Economy?

Brian Dunaway wrote:

In a post in these pages about a week ago I offered a few comments on AI euphoria, as well as a typical example of AI-driven search engine error.

Charles Hugh Smith (CHS) seems to have covered all my “earthly” concerns very well in an article here. (Perhaps my biggest concerns with AI are more metaphysical than physical, but I will not address them here.) CHS enumerates the current primary issues with AI, each of which is enormous.

CHS doesn’t appear to believe that AI is a “nothing” technology (neither do I), but he does characterize it as a con. No doubt there is some of that, but I think the AI euphoria is a genuine case of very poor understanding and judgment – fueled by The Promise of Singularity, by the dream of solving all the mysteries of the universe, and by easy money looking for the next big thing.

AI seems to have a lot of promise as a research tool, pointing primary research in directions that might have taken researchers many years to imagine. In this context, the researcher would employ scientific methods to verify the validity of an AI “conclusion.” As such, the AI entity would be part of the trial-and-error scientific process.

This is altogether different from what industry seems to think AI will accomplish: a Unified Field Theory of human endeavor. But at this point, AI just doesn’t seem anywhere near capable of doing what industry is trying to do with it.

A few additional comments, employing CHS’s numbered subheadings:

1. AI revenues are orders of magnitude lighter than the sums being invested 

In a similar vein to the subheading, a rule of thumb states that “if you buy stock at a P/E ratio of 15, then it will take 15 years for the company’s earnings to add up to your original purchase price – 15 years to ‘pay you back.’ That’s assuming that the company is already in its ‘mature’ stage, where earnings are constant. [Emphasis mine.]” “Assuming” – that’s the mother of all assumptions.

But even for a “true believer,” the market itself indicates that AI technology is in its nascent stage. A typical P/E for a large AI company is around 50 (that is, when the “E” isn’t negative!). So, half a century to pay you back? Sounds about right – IF the technology ever works as advertised, delivers reliable profits in the distant future, and secures enormous resources that aren’t yet available. And a lot can happen in 50 years. This technology should be considered very high risk.
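To make the payback arithmetic concrete, here is a minimal sketch of the rule of thumb above. The function name and the growth-rate input are my own illustration, not anything from CHS’s article; the code simply accumulates years of earnings until they match the purchase price implied by the P/E ratio.

```python
# Minimal sketch of the P/E "payback" rule of thumb.
# Earnings start at 1 unit per year, so the purchase price is
# pe_ratio units. Count years until cumulative earnings cover it.

def payback_years(pe_ratio: float, growth: float = 0.0, max_years: int = 200) -> int:
    """Years until cumulative earnings repay the price implied by pe_ratio."""
    earnings, cumulative = 1.0, 0.0
    for year in range(1, max_years + 1):
        cumulative += earnings
        if cumulative >= pe_ratio:
            return year
        earnings *= 1.0 + growth  # hypothetical annual earnings growth
    return max_years  # never repaid within the horizon

print(payback_years(15.0))               # mature firm at P/E 15 -> 15 years
print(payback_years(50.0))               # flat earnings at P/E 50 -> 50 years
print(payback_years(50.0, growth=0.10))  # even 10%/yr growth -> 19 years
```

Even under an assumed (and generous) 10 percent annual earnings growth, the buyer at a P/E of 50 waits nearly two decades to be “paid back” – and that generosity is exactly the kind of assumption the rule of thumb warns about.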

2. AI tools are inherently untrustworthy and lend themselves to generating “going through the motions” slop

CHS offers a very good summary here: AI has a “superficial appearance of value but actually has negative value as it’s incomplete, misleading and/or incoherent. Sorting the wheat from the chaff actually takes more time because AI is so adept at generating a superficial gloss. In other words, AI generates time sinks rather than productivity.” Exactly.

Currently, I would place AI technology at a TRL (Technology Readiness Level, in NASA parlance) of around 4 out of 9 – and even that is probably very generous, since at this level the technology simply seems unproven. Measured against TRL 5 (component and/or breadboard validation in a relevant environment), it would be pretty difficult to argue anything other than that AI has proven unreliable and/or not cost-effective at scale.

A few weeks ago The Epoch Times published an article that illustrates well the idea of a “time sink” in the context of writing software with AI. One report it cited noted that 45 percent of AI-generated code samples failed security tests. A programmer and IP attorney commented, “I’m surprised the percentage isn’t higher. AI-generated code, even when it works, tends to have a lot of logical flaws that simply reflect a lack of context and thoughtfulness.”

In the same article, in the context of law, “AI hallucinations have already made headlines for the problems they can create in the workplace. A 2024 study observed LLMs had a ‘hallucination’ rate between 69 percent and 88 percent, based on responses to specific legal queries. [Emphasis mine.]”

3. The rate at which major companies are adopting AI is rolling over

I have nothing to add here, other than to say that, apparently, the larger companies – those with the resources to understand the future of AI – are pulling back from adoption. CHS includes a fascinating graph suggesting that the AI adoption rate among firms with more than 50 souls turned negative starting this past summer.

4. AI data centers are competing with other users for electricity, water and capital

AI power requirements alone are stratospheric, and as CHS notes, this is having an enormous impact on power bills. That includes the power bills of the most vulnerable – not just the economically vulnerable but, cruelly, those whose jobs the overlords want to eliminate.

And regarding the “singularity” – as defined – one study I read suggested that all the electrical power currently generated in the world would not be sufficient to achieve it. Add to that little detail the ridiculous requirements of Green Fantasies like an EV in every driveway and net-zero power generation, all against a backdrop of failing power infrastructure.
