
AI “agents” could do real work in the real world. That might not be a good thing.

Mar 29, 2024


[Illustration: an anthropomorphic robotic head attached to a screen, with speech and text bubbles floating out into the air. Malorny/Getty Images]

Why AI agents that could book your vacation or pay your bills are the next frontier in artificial intelligence.

ChatGPT and its large language model (LLM) competitors that produce text on demand are very cool. So are the other fruits of the generative AI revolution: art generators, music generators, better automatic subtitles and translation.

They can do a lot (including claiming that they’re conscious, not that we should believe them), but there’s one important respect in which AI models are unlike people: They are processes that run only when a human triggers them, and only to accomplish a specific result. And then they stop.

Now imagine that you took one of these programs — a really good chatbot, let’s say, but still just a chatbot — and you gave it the ability to write notes to itself, store a to-do list and the status of items on the to-do list, and delegate tasks to other copies of itself or other people. And instead of running only when a human prompted it, you had it work on an ongoing basis on these tasks — just like an actual human assistant.

At that point, without any new leaps in technology whatsoever — just some basic tools glued onto a standard language model — you’d have what is called an “AI agent,” or an AI that acts with independent agency to pursue its goals in the world.
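
To make that concrete, here is a minimal sketch, in Python, of what such an agent amounts to. The call_llm function is a hypothetical stand-in for whatever chatbot API you might wrap, not a real library call; the rest is just a loop that keeps notes, works through a to-do list, and asks the model at each step whether a task is finished or needs to be broken down further.

    # A minimal, hypothetical agent loop: a chatbot plus notes and a to-do list.
    # call_llm() is a placeholder for a request to any language model API.

    def call_llm(prompt: str) -> str:
        """Stand-in for a call to a chatbot; wire this up to a real model."""
        raise NotImplementedError

    def run_agent(goal: str, max_steps: int = 10) -> list[str]:
        notes: list[str] = []        # the agent's "notes to itself"
        todo: list[str] = [goal]     # a simple to-do list, seeded with the goal
        while todo and max_steps > 0:
            task = todo.pop(0)
            # Ask the model what to do about the current task, given past notes.
            reply = call_llm(
                f"Notes so far: {notes}\n"
                f"Current task: {task}\n"
                "Reply with either 'DONE: <result>' or 'TODO: <next subtask>'"
            )
            if reply.startswith("TODO:"):
                todo.append(reply[len("TODO:"):].strip())  # queue up the subtask
            else:
                notes.append(reply)                        # record the result
            max_steps -= 1
        return notes

Everything interesting lives in what this sketch leaves out: tool use, error handling, and the judgment to know when a task is actually done. But the scaffolding itself is simple, which is the point.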

AI agents have been called the “future of artificial intelligence,” a technology that will “reinvent the way we live and work,” and the “next frontier of AI.” OpenAI is reportedly working on developing such agents, as are many well-funded startups.

They may sound even more sci-fi than everything else you’ve already heard about AI, but AI agents are not nonsense, and if effective, could fundamentally change how we work.

That said, they currently don’t work very well, and they pose obvious challenges for AI safety. Here’s a quick primer on where we’re (maybe) headed, and why.

Why would you want one of these?

Today’s AI chatbots are fun to talk to and useful assistants — if you are willing to overlook a set of limitations that includes making things up. Such models have already found sizable and important economic niches, from art to audio and video transcription (which have been quietly revolutionized over the last few years) to assisting programmers with tools like Copilot. But the investors pouring hundreds of billions of dollars into AI are hoping for something more transformative than that.

Many people I talk to who use AI in their work describe it as like having a slightly scatterbrained but very fast intern. It does useful work, but you have to define each problem for it and carefully check its work, meaning that much of what you might gain in productivity is lost to oversight.

Much of the economic case for AI is that it could do more than that. The people at work on AI agents hope that their tools won’t just help software developers, but that the tools could be software developers. In this future, you wouldn’t just consult AI for trip planning ideas; instead, you could simply text it “plan a trip for me in Paris next summer,” as you might a really good executive assistant.

Today’s AI agents do not live up to that dream — yet. The problem is that you need a very high accuracy rate on each step of a multistep process, or very good error correction, to get anything valuable out of an agent that has to take lots of steps.
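
To see why, here’s a back-of-the-envelope illustration in Python, assuming, as a simplification, that every step succeeds independently with the same probability and that there is no error correction:

    # Rough illustration: per-step accuracy compounds across a multistep task.
    # Simplifying assumption: steps succeed independently with equal probability.
    for per_step_accuracy in (0.90, 0.99):
        for steps in (5, 20, 50):
            flawless = per_step_accuracy ** steps
            print(f"{per_step_accuracy:.0%} per step, {steps} steps: "
                  f"{flawless:.0%} chance of a flawless run")

At 90 percent accuracy per step, a 20-step task comes out flawless only about 12 percent of the time; even at 99 percent, a 50-step task fails somewhere along the way roughly two times in five.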

But there’s good reason to expect that future generations of AI agents will be much better at what they do. First, agents are built on increasingly powerful base models, which perform much better on a wide range of tasks and which we can expect to keep improving. Second, we’re also learning more about how to build the agents themselves.

A year ago, the first publicly available AI agents — AutoGPT, for example, which was just a very simple agent based on ChatGPT — were basically useless. But a few weeks ago, the startup Cognition Labs released Devin, an AI software engineer that can build and deploy entire small web applications.

Devin is an impressive feat of engineering, and good enough to take some small gigs on Upwork and deliver working code. It had an almost 14 percent success rate on a benchmark that measures the ability to resolve issues on the software development platform GitHub.

That’s a big leap forward for which there’s surely an economic niche — but at best, it’s a very junior software engineer who’d need close supervision by a more senior one. Still, like most things AI, we can expect improvement in the future.

Should we make billions of AI agents?

Would it be cool for everyone in the world to have an AI personal assistant who could plan dinner, order groceries, buy a birthday present for your mom, plan a trip to the zoo for the kids, and pay your bills for you while notifying you of any unexpected ones? Yes, absolutely. Would it be incredibly economically valuable to have AI software engineers who can do the work of human software engineers? Yes, absolutely.

But: Is there something potentially worrying about creating agents that can reason and act independently, earn money independently, make copies of themselves independently, and do complex things without human oversight? Oh, definitely.

For one, there are questions of liability. It’d be just as easy to make “scammer” AIs that spend their time convincing the elderly to send them money as it would to make useful agents. Who would be responsible if that happens?

For another, as AI systems get more powerful, the moral quandaries they pose become more pressing. If Devin earns a lot of money as a software engineer, is there a sense that Devin, rather than the team that created him, is entitled to that money? What if Devin’s successors are created by a team that’s made up of hundreds of copies of Devin?

And for those who worry about humanity losing control of our future if we build extremely powerful AI systems without thinking about the consequences (I’m one of them), it’s pretty obvious why the idea of AIs with agency is nerve-racking.

The transition from systems that act only when users consult them to systems that go out and accomplish complex goals in the real world risks what leading AI scientist Yoshua Bengio calls “rogue AI”: “an autonomous AI system that could behave in ways that would be catastrophically harmful.”

Think of it this way: It’s hard to imagine how ChatGPT could kill us, or could even be the kind of thing that would want to. It’s easy to imagine how a hyper-competent AI executive assistant/scam caller/software engineer could.

For that reason, some researchers are trying to develop good tests of the capabilities of AI agents built on different language models, so that before we widely release them, we’ll know whether they can make money, make copies of themselves, and function independently without oversight.

Others are working to try to set good regulatory policy in advance, including liability rules that might discourage unleashing an army of super-competent scammer-bots.

And while I hope that we have a few years to solve those technical and political challenges, I doubt we’ll have forever. The commercial incentives to make agent AIs are overwhelming, and they can genuinely be extremely useful. We just have to iron out their extraordinary implications — preferably before, rather than after, billions of them exist.

A version of this story originally appeared in the Future Perfect newsletter.
