AI is an Invasive Species 

Sean McDonald

Post | December 3, 2024

“AI” is an invasive species. Well, “AI” is a marketing term - but, generally, it refers to a variety of high-volume computational models.  

Like a lot of invasive species, “AI” was developed organically, in a range of contexts, for tasks that seemed to require high volumes of computation. And, like a lot of invasive species, through a combination of carelessness, lack of safeguards, and capital interests, “AI” has escaped those contexts and overtaken entire ecosystems - to their degradation, if not eventual destruction.

AI has escaped the labs in which it was created for a fairly obvious reason, though it’s not the reason we talk most about: the Internet’s business model is changing from data to computation. 

Artificial intelligence is not a technology that exists - it is, at best, an aspiration and, as it’s used today, a category of computationally intensive pattern-matching tools. Those computational models are trained on huge amounts of data to develop the “parameters” that an “AI” uses to respond to prompts and queries. Models are often evaluated by their number of parameters, a proxy indicator for a model’s “generality” or “breadth” - whereas most “AI” models in production today are trained specifically for the task for which they’re used (so, for example, a self-driving car “AI” is unlikely to be used to perform radiology diagnosis). In other words, when an “AI” becomes sufficiently specialized, we stop calling it an “AI” and start describing it based on the function it performs.
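To make that mechanical description concrete, here is a minimal sketch - a toy model, not any production system - of what “training parameters” and “responding to a prompt” actually amount to:

```python
# A toy illustration (not any real "AI" system): a model is, mechanically,
# a set of parameters fit to data, then applied to new inputs.
import numpy as np

rng = np.random.default_rng(0)

# "Training data": example inputs and the outputs we want imitated.
X = rng.normal(size=(1000, 8))  # 1,000 examples, 8 features each
true_weights = np.array([3, -1, 0, 2, 0, 0, 1, 4.0])
y = X @ true_weights + rng.normal(scale=0.1, size=1000)

# "Training": fit parameters that best map the inputs to the outputs.
params, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Inference": apply the learned parameters to a new input (the "prompt").
prompt = rng.normal(size=8)
response = prompt @ params

# The headline number in model comparisons is the parameter count:
# 8 here; billions for a large language model.
print(f"parameters: {params.size}, response: {response:.2f}")
```

The pattern - fit parameters to data, then apply them to new inputs - is the same; the difference between this sketch and a large language model is almost entirely one of scale.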

At a technical level, however, the single most unifying characteristic of the models we call “AI” is that they add huge amounts of computation and analysis to whatever prompt or task they’re applied to, no matter how small. And, ultimately, very few companies have the infrastructure to run that volume of computation at scale - meaning there’s a large commercial incentive to create dependence on computation.

“AI” companies are not really about developing intelligence - they are about injecting as many computation-as-a-service components into as many digital tasks as possible. To be more specific, “AI” is about selling the services of data centers and their outputs - not just “compute,” but highly centralized compute. The kind of service that would justify venture-scale returns on the many billions of dollars of capital investment in data centers.

Which raises a funnily basic question: how many of our decisions actually require that much computation?

As someone deeply familiar with applying wildly unnecessary amounts of thinking to a decision, I can say that most things, for most of us, just aren’t that complicated. But never mind that - the market engines chug on: there are data centers to finance, innovations to make, and intelligences to be invented. In 2023, more than $36bn was invested in new data centers, feeding a market estimated at over $600bn/year now, with expectations of more than $1 trillion in the next 3 years. And investors will need to see a return on those investments - a return that today’s market just doesn’t support.

So, we’re seeing another major shift in the way technology is billed: from paying for data to paying for computation - from software to artificial intelligence.  

This isn’t the first time the Internet’s billing model has changed, either. In the movie “BlackBerry” - a story beginning in 1996 about the rise and fall of Research In Motion, creator of the BlackBerry smartphone - co-CEO Jim Balsillie’s character has a striking realization: the foundational billing unit in telecommunications is changing, from selling minutes and messages to selling data.

For those of you who weren’t paying mobile phone bills in the late 1990s and early 2000s, a refresher on the model - which still applies on some “pay-as-you-go” contracts. Telecommunications companies used to charge you based on the number of “minutes” you talked on the phone, with different rates depending on the time of day your calls happened and where the person you were calling was located. But with the introduction of smartphones, apps, and, most promisingly, ubiquitous video, there was (and arguably is) more money to be made from the services offered through telecommunications than there is in charging for telephony.

There are, after all, only so many minutes in a day - and so only so many ways you can drive revenue from a subscriber. But when that same subscriber has multiple apps open, a whole range of companies can generate data (and, theoretically, revenue) from the same phone at the same time.

That change in business model didn’t just mean that telecom companies got to charge for a new service - it meant that mobile app companies could start building out more data-intensive business models, like in-app advertising and third-party data brokering (as Niantic did with Pokémon Go).

That shift may be easy to recognize in 2024, but in the early 2000s the amount of information that could change hands in that same minute went from finite to theoretically infinite. We also know, in 2024, that technology and data markets aren’t actually infinite - not because it’s impossible to shift huge amounts of data, but because people and their needs are not, in fact, infinite.

Nearly every major computing achievement - from the development of the microchip to the development of the Internet - prompted some group of people to suggest that its growth would be infinite. Back when it was microchips, it was Moore’s Law - when thinking moved to networks, Metcalfe’s Law.
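For reference, both “laws” reduce to simple formulas - stated here in their commonly cited forms:

$$N(t) \approx N_0 \cdot 2^{t/2} \qquad\qquad V(n) \propto n^2$$

Moore’s Law says the number of transistors on a chip, $N$, doubles roughly every two years; Metcalfe’s Law says the value of a network, $V$, grows with the square of its number of users, $n$.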

By now, we recognize that Moore’s Law and Metcalfe’s Law are wrong. Not because they state technological impossibilities, but because people don’t need infinite computing power, nor are network values simply a product of volume (see: Twitter/X). And, more challengingly, most of the problems people face aren’t incalculable, unknown, or computational - they’re relational and political.

Most of our problems aren’t knowledge problems, they’re governance and incentive problems. 

Terrifyingly, the fact that artificial general intelligence doesn’t exist yet, and may never, isn’t what’s restraining the explosive boom in the data center industry required to support it and other types of “AI”. The constraint is our physical environment. Data centers require an extraordinary amount of energy, both to run and to keep cool enough to keep running. They require so much energy that the leading proponents of artificial general intelligence have said that for it to exist at all, let alone usefully, humanity will have to make unprecedented advances in generating electricity. While that problem might give a reasonable person cause to hesitate, technology companies have, broadly, gone in the exact opposite direction: many are embedding as many computationally intensive features as possible into their most popular digital services.

While computation is a valuable and additive tool in the limited circumstances where data processing is a root or ongoing operational challenge, in most cases, a lack of computation isn’t the problem.

Most people don’t have any individual need for aggregated data, nor would its possession make any material difference to their lives.

That’s why, broadly speaking, there aren’t very many direct-to-consumer business models for artificial intelligence. Instead, businesses are trying to convince other businesses to introduce computationally intensive features to their products. Technology companies are opting their pre-existing customers - of wholly separate services - into both the data collection processes needed to build “AI” models and the costs of the computation involved in their outputs.

Said another way, the companies and capital interests with investments in data centers are embedding “AI” into their software ecosystems knowing that doing so increases the cost of services, has negative environmental impacts, and, often, creates unnecessary liability. The companies that are embedding “AI” into their products aren’t doing so from a vantage of deep expertise or clear, contextual familiarity with their customers’ needs - they’re doing it because they understand that their users are only going to consume so much data on their own.

In 2024, technology companies aren’t selling minutes, and they’re only kind-of selling data - really, they’re selling the idea that infinite computation might lead to a better world. In the history of human decision-making, computation - like accounting - has added value in some contexts and, in others, has become a consumptive, dehumanizing, and destructive way to alter important aspects of the human experience. The reality is that most people not only don’t need artificial intelligence in most cases - more often than not, they are being actively harmed by it, directly and indirectly.

Despite this, governments, technology providers, and investors of all kinds are actively infesting as much of our lives as possible with computation, invisibly adding “AI” to their products regardless of any presumption of need or proportional value. The race to sell data center services has turned “AI” into an invasive species that has slipped the market’s bounds and guardrails, embedding the costs, environmental destruction, and waste of general-purpose computation into millions of ecosystems - not only without consent, but to the degradation, if not destruction, of other forms of reasoning.

And, as with any invasive species, the work to contain it won’t focus on “AI” itself, but on how we begin to exercise care and restraint in managing its spread - and, more importantly, on the defenses of the ecosystems where it can do the most damage.