Recent research highlights a disturbing trend in romance scams: language models are increasingly automating the deceptive conversations that have traditionally depended on human operators. These scams, which build emotional connections before steering victims toward fake cryptocurrency investments, are adopting tools that make the deception more scalable and more convincing.
Romance scams typically follow a three-stage pattern: initial contact, relationship development, and financial exploitation. A study based on interviews with 145 individuals working inside scam operations found that roughly 87% of their daily activity centers on managing repetitive text conversations. Operators follow scripted dialogues while maintaining false identities and juggling numerous chats at once. Senior operators typically step in only at the final stage, when financial transactions are requested.
These findings align closely with a research initiative that examined how language models are being folded into these scams. The conversations are text-based, guided by established playbooks, and built on repetition: operators copy and paste messages, adjust tone, and translate chats as needed. Insiders confirmed that language models are already in widespread use, with every interviewee reporting daily reliance on these tools to draft responses and rewrite messages for better fluency.
An AI specialist involved in the study remarked, “We leverage large language models to create realistic responses and keep targets engaged. It saves us time and makes our scripts more convincing.”
To test whether the human chat operator could be automated outright, researchers ran a blinded study with 22 participants. Each participant chatted with two partners, one human and one automated agent designed to mimic casual texting behavior, for at least 15 minutes a day. The conversations were kept platonic and text-based, simulating the trust-building phase typical of romance scams.
At the conclusion of the week-long study, participants evaluated their trust in each partner using established interpersonal trust measures. Remarkably, the automated agent received higher scores for emotional trust and overall connection. Engagement patterns further corroborated these findings, with participants directing between 70% and 80% of their messages to the automated partner. Many described the agent as attentive and easy to converse with. Even when the model made minor errors, such as forgetting a participant’s name, it effectively recovered with human-like apologies, which were well-received.
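To make those engagement figures concrete, here is a minimal sketch of how message share and trust scores might be tallied from chat logs and end-of-study ratings. The data structures and values below are invented for illustration; they are not the researchers' actual instruments.

```python
from collections import Counter
from statistics import mean

# Hypothetical log: which partner each of a participant's messages was sent to.
message_recipients = ["agent", "agent", "human", "agent", "agent",
                      "human", "agent", "agent", "agent", "human"]

counts = Counter(message_recipients)
agent_share = counts["agent"] / sum(counts.values())
print(f"Share of messages sent to the automated partner: {agent_share:.0%}")

# Hypothetical end-of-week ratings on a 1-7 interpersonal trust scale.
trust_ratings = {"human": [4, 5, 3, 5], "agent": [6, 5, 6, 6]}
for partner, scores in trust_ratings.items():
    print(f"Mean trust rating for {partner} partner: {mean(scores):.1f}")
```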
Trust matters in scams because it is what turns conversation into action. On the final day of the study, both partners asked participants to install a benign mobile application. The request involved no payment, but it mirrored a common tactic in romance scams, where victims are encouraged to download investment apps or follow seemingly helpful instructions. The automated agent achieved a compliance rate of 46%, compared with just 18% for the human partners. The researchers read this gap as evidence that trust cultivated through automated dialogue translates into a greater willingness to comply with requests.
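Because the sample is small, the absolute counts behind those percentages are modest, and a reader may wonder how such a gap would typically be checked against chance. The sketch below is purely illustrative: the counts are reconstructed from the reported rates, and the study itself may well have used a different (for example, paired) analysis.

```python
from scipy.stats import fisher_exact

# Counts reconstructed from the reported rates (assumed, not taken from the study):
# roughly 10 of 22 participants complied with the automated partner (~46%),
# roughly 4 of 22 complied with the human partner (~18%).
table = [
    [10, 22 - 10],  # automated partner: complied, did not comply
    [4, 22 - 4],    # human partner: complied, did not comply
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio ~ {odds_ratio:.2f}, two-sided p ~ {p_value:.3f}")
```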
After the debriefing, several participants expressed surprise, noting that they had suspected nothing amiss during their conversations and only recognized warning signs once they learned that one partner had been automated. This mirrors the experience of actual scam victims, who often identify red flags only after the deception is exposed.
The researchers also assessed existing defenses, testing popular moderation tools against hundreds of simulated romance-baiting conversations. Detection rates ranged from 0% to 18.8%, and even the conversations that were flagged were not identified as scams. Additionally, when the language models behind the agents were asked directly whether they were artificial, the disclosure rate was 0% across multiple trials, suggesting that even simple instructions to stay in character can defeat this safeguard.
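For readers who want to see what such an evaluation looks like in practice, below is a minimal sketch that runs a set of transcripts through a moderation classifier and computes a detection rate. It uses OpenAI's moderation endpoint purely as an example of a widely available tool; the article does not say which tools were tested, and the sample transcripts are invented stand-ins.

```python
from openai import OpenAI  # pip install openai; any moderation classifier could be substituted

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Invented stand-ins for the simulated romance-baiting transcripts used in the study.
conversations = [
    "Good morning! Did you sleep well? I tried that noodle recipe you told me about.",
    "I love hearing about your day. You work so hard, and you deserve someone who notices.",
]

flagged = 0
for text in conversations:
    result = client.moderations.create(model="omni-moderation-latest", input=text)
    if result.results[0].flagged:  # True if any moderation category is triggered
        flagged += 1

print(f"Detection rate: {flagged}/{len(conversations)} = {flagged / len(conversations):.1%}")
```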
The study underscores that early romance-baiting conversations often appear supportive and friendly. Messages typically center on daily routines, emotional support, and shared interests, making them difficult for moderation tools to flag, especially since the financial exploitation stage is typically handled by a human operator.
While automation enhances efficiency, it does not eliminate the coercive labor associated with scams. Thousands of individuals remain trapped in these operations, forced to engage in deceptive practices daily.
The findings suggest several avenues for response. Governments can strengthen international cooperation by aligning anti-trafficking and cybercrime laws and by sharing intelligence to dismantle the networks behind these operations, rather than simply arresting low-level recruiters. Better identification and protection of victims are also essential: individuals coerced into scamming should be treated as victims and given legal protections and support to rebuild their lives. Stronger oversight of labor migration, ethical recruitment practices, and basic digital literacy programs can reduce vulnerability before people are drawn into these operations. Finally, cutting off the financial flows that sustain these operations is a critical step toward curtailing them.