The views expressed by contributors are their own and not the view of The Hill

Did Big Tech bite off more than it can chew with AI? 

It’s time to take a deep breath when it comes to AI. 

It can be tough, of course, when the world’s five largest companies by market capitalization are each investing billions of dollars into their own AI solutions; when venture capitalists are telling us that AI will displace half the world’s jobs by 2027; and when even elementary schools are struggling to create AI policies.  

But AI isn’t coming for your job this year, and it’s not going to replace humanity. The truth is, massive, generative AI programs that train on huge swaths of the internet have a great future behind them.

Although the possibilities are tantalizing, the reality of AI has fallen short. Large AI programs have yet to consistently provide practical value to the businesses spending billions of dollars on them. And the gap between business value and AI spending will only get worse.

Companies have seen much greater return on investment from smaller AI models designed for specific purposes. Such tailored models — as opposed to unstructured large language models like OpenAI’s ChatGPT — deliver immediate, significant returns and can thrive in a regulated environment.

By way of background, large language models scrape information from across the internet and ingest terabytes of data to produce the outputs we’re all familiar with, from AI-generated art and op-eds (not this one, I promise) to consumer sentiment analyses and SEO optimization. By ingesting such huge amounts of data, they can identify and repeat patterns.

Smaller models, on the other hand, detect patterns from far less data, all of which they draw from areas relevant to a given field. So a model designed to make stock trades may pull data from the Fed, business news sites and a number of other relevant sources, but ignore countless irrelevant items online. It would not pull data from TMZ, the complete works of Shakespeare or a TikTok dance craze.

Such small models appeal to companies, first and foremost, because they provide immediate, tangible return on investment. Large language models do not.  

According to Sequoia Capital, the AI industry spent $50 billion on chips from NVIDIA alone — and that’s not even accounting for AI’s other astronomical expenses, such as top-flight talent, intensive programs to help companies leverage complex technologies and high financing costs. Even Mark Zuckerberg admits the industry is spending too much.

Yet large language models have only brought in $3 billion in revenue, largely because they’re not good at math; much to the chagrin of humanities majors, math is still very important in business. But sadly, with these technologies, 2 plus 2 doesn’t always equal 4.  

Compare that to the immediate ROI of smaller models. Companies can implement systems for six instead of eight figures and begin using them almost as soon as the check goes through. 

Some companies want to change the world and harness the next big thing, of course. But, ultimately, these are profit-driven businesses. When they find themselves spending seven and eight figures on “pilot” programs that fail to improve the bottom line, they’re going to make changes.  

It’s not just money that could slam the brakes on large language models; it’s the coming wave of regulation.

For good or ill, AI is going to face more robust regulation. Given how fast the technology is moving and the relatively limited understanding of AI on Capitol Hill, that regulation is going to be blunt. It will center on simple concepts that members of Congress can understand and, given the end of Chevron deference, rules that regulatory agencies can enforce.

Most likely, regulation will require that AI products be both repeatable (i.e., if you enter the same prompt, you will get the same result each time) and explainable (i.e., we can determine how the program came to the conclusion it did if we need to). Quite fairly, regulators will be uncomfortable with AI’s tendency to “hallucinate” or with programs that cannot explain how or why they came to the decision they did.   

The vast majority of large language models are neither repeatable nor explainable, and they are unlikely to become so in the near future. They simply have too much data and use too much processing power for us to know where their information comes from or how they reached their conclusions. Smaller models do not have those limitations. Understanding their conclusions is a simple forensic exercise, and because they comb through far smaller data sets, their conclusions will not vary.

This isn’t to say large language models are useless or that the AI revolution is overblown. They aren’t, and it isn’t. There are use cases for large language models, and, as industries from finance to agriculture can attest, AI is already valuable.

But the AI industry has to grow up. It can no longer focus on “limitless possibilities” and ignore the associated costs. Like a daydreaming college student who has just graduated, it’s time for AI to stop thinking about solving all of the world’s problems and start thinking about getting a job.  

AI tailored to specific needs is AI’s foreseeable future. And that is a good thing. Let’s take a deep breath and enjoy it.

Sultan Meghji is CEO of Frontier Foundry and a former chief innovation officer at the Federal Deposit Insurance Corp. 
