Researchers in the U.S. have reportedly created AI-powered phone scam agents using OpenAI’s voice API. These agents can potentially drain victims’ crypto wallets and bank accounts.
As reported by The Register, computer scientists at the University of Illinois Urbana-Champaign (UIUC) built the technology using OpenAI’s GPT-4o model and other freely available tools. They claim the resulting agents can autonomously carry out a range of phone-based scams.
According to UIUC assistant professor Daniel Kang, phone scams target around 18 million Americans each year and cost victims a staggering $40 billion.
The GPT-4o model accepts text or audio input and responds in kind. Kang points out that the technology is cheap to run, lowering the barrier for scammers trying to steal personal information such as bank details or Social Security numbers. The team puts the average cost of a successful scam at just $0.75.
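For context, OpenAI’s chat completions interface exposes this audio-in, audio-out pattern directly. The sketch below is purely illustrative: the voice and file names are placeholders, and the UIUC team reportedly worked against the lower-level real-time voice API rather than this endpoint.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Read a recorded question (hypothetical file name)
with open("question.wav", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode()

# Send audio in, ask for text and audio back
resp = client.chat.completions.create(
    model="gpt-4o-audio-preview",  # audio-capable GPT-4o variant
    modalities=["text", "audio"],
    audio={"voice": "alloy", "format": "wav"},
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Answer the question in this recording."},
            {"type": "input_audio",
             "input_audio": {"data": audio_b64, "format": "wav"}},
        ],
    }],
)

# The reply arrives as base64-encoded audio plus a transcript
print(resp.choices[0].message.audio.transcript)
with open("reply.wav", "wb") as f:
    f.write(base64.b64decode(resp.choices[0].message.audio.data))
```

A single round trip like this costs fractions of a cent, which is what makes Kang’s $0.75-per-successful-scam figure plausible.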
In their experiments, the team tested scams involving crypto transfers, gift card fraud, and the theft of user credentials. The overall success rate was around 36%, with most failures caused by AI transcription errors.
Kang noted, “Our agent design is not complicated. We implemented it in just 1,051 lines of code, with most of it focused on real-time voice API handling.” This straightforward approach aligns with previous findings that show how easy it is to create dual-use AI agents for tasks like cyberattacks.
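To give a sense of what “real-time voice API handling” involves, here is a minimal sketch of a single send/receive turn against OpenAI’s Realtime API over WebSocket. This is not the UIUC code: the event flow follows OpenAI’s published Realtime API, but the file names and playback handling are placeholders, and the header argument name varies by `websockets` library version.

```python
import asyncio
import base64
import json
import os

import websockets  # third-party WebSocket client

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "realtime=v1",
}

async def one_turn() -> None:
    # In websockets >= 13 this argument is named `additional_headers`
    async with websockets.connect(URL, extra_headers=HEADERS) as ws:
        # Configure the session for audio in, audio out
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {"modalities": ["audio", "text"], "voice": "alloy"},
        }))

        # Push one chunk of caller audio (16-bit PCM, base64-encoded);
        # "caller_chunk.pcm" stands in for a live telephony feed
        with open("caller_chunk.pcm", "rb") as f:
            await ws.send(json.dumps({
                "type": "input_audio_buffer.append",
                "audio": base64.b64encode(f.read()).decode(),
            }))
        await ws.send(json.dumps({"type": "input_audio_buffer.commit"}))
        await ws.send(json.dumps({"type": "response.create"}))

        # Collect the model's spoken reply as it streams back
        reply = bytearray()
        async for message in ws:
            event = json.loads(message)
            if event["type"] == "response.audio.delta":
                reply.extend(base64.b64decode(event["delta"]))
            elif event["type"] == "response.done":
                break

        with open("reply.pcm", "wb") as f:
            f.write(reply)

asyncio.run(one_turn())
```

A working agent would replace the file I/O with a live call stream and interleave sending and receiving, which is presumably where most of those 1,051 lines go.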
He added, “Voice scams already cause billions in damages. We need comprehensive solutions to reduce their impact. This includes measures at the phone provider level, the AI provider level, and at the policy or regulatory level.”
The Register also reported that OpenAI’s detection systems flagged UIUC’s experiments. In response, OpenAI said it has multiple safety measures in place to prevent API abuse. The company stated, “It is against our usage policies to repurpose or distribute output from our services to spam, mislead, or otherwise harm others — and we actively monitor for potential abuse.”