Manipulating AI Search Results: How Scammers Lead You to Their Traps
The Rise of AI-Focused Scams
Cybercriminals have begun exploiting publicly available online sources to push scam call center numbers into AI-driven search results, researchers warn, opening a new avenue for fraud on a global scale.
Understanding LLM-Induced Phone Number Frauds
According to a December 8 report by Aurascape's Aura Labs, malicious actors are systematically tampering with online content to carry out what has been termed 'large language model (LLM) phone number poisoning.'
Cybersecurity researchers are tracking a campaign in which this technique is used to make LLM-based systems recommend fraudulent contact numbers for airline customer support and reservation services as though they were legitimate.
Mechanics of the Deception: Poisoning AI Content
Aurascape explains that rather than attacking the LLMs themselves, these tactics contaminate the information an LLM scrapes or indexes, so that the model returns misleading answers to user inquiries.
While many are familiar with Search Engine Optimization, newer disciplines such as Generative Engine Optimization (GEO) and Answer Engine Optimization (AEO) aim to position content as a source for AI-generated answers rather than to climb traditional search rankings.
In the documented cases, GEO and AEO are abused to promote phishing operations and other fraudulent schemes. Once these fake resources are indexed, AI assistants and summarization tools can produce coherent, seemingly reliable answers that steer users toward scams.
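Because the manipulation happens in the sources rather than in the model, one practical defense is to check where an AI answer's citations actually point before trusting any contact details in it. The sketch below is a minimal illustration of that idea in Python, assuming a hand-maintained allowlist of official airline domains; the domain list and function name are illustrative, not part of Aurascape's report.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of official airline domains. In practice this would
# come from a vetted directory you maintain, never from the AI answer itself.
OFFICIAL_DOMAINS = {"emirates.com", "britishairways.com"}

def cited_sources_are_official(cited_urls):
    """Return True only if every URL cited in an AI answer belongs to an
    official domain; a single unknown domain is reason enough to distrust
    any phone number extracted from that answer."""
    for url in cited_urls:
        host = (urlparse(url).hostname or "").lower()
        if host.startswith("www."):
            host = host[4:]
        if not any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS):
            return False
    return True

# One poisoned citation is enough to taint the whole answer.
print(cited_sources_are_official([
    "https://www.emirates.com/us/english/help/",
    "https://airline-helpdesk-support.example/emirates-phone",  # not official
]))  # -> False
```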
Real-world Instances of Manipulated Queries
Investigators have encountered multiple real-life examples of this method's deployment.
A query for Emirates Airlines' official reservations number led an AI assistant to supply a counterfeit call center's number, and an inquiry about British Airways likewise returned false contact details.
Google's AI Overviews feature also produced fraudulent contact information when queried for an Emirates telephone line, suggesting numerous bogus customer service numbers.
Safeguarding Against Misinformation
The issue arises because LLMs pull in both accurate and deceptive content, which can make scams hard to detect.
This contamination is not limited to individual platforms such as Google or Perplexity; as Aurascape notes, broader cross-platform pollution is unfolding. Even when a model produces a correct answer, its cited reference material may reveal exposure to tainted sources, underscoring the systemic nature of the problem.
The approach is akin to indirect website manipulation methods designed to coerce an LLM into unwanted behavior. When using AI tools, always double-check any contact information they provide. Additionally, avoid sharing sensitive data with AI systems, given how new they are and how unproven their security remains. These tools should not be blindly trusted simply because they promise convenience.
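As a concrete way to apply that advice, a phone number quoted by an AI assistant can be cross-checked against the company's own contact page before it is dialed. The snippet below is a rough sketch of such a check, not a verified procedure; the regular expression, the function name, and the example URL and number are assumptions for illustration only.

```python
import re
import requests

# Loose pattern for phone-number-like strings on a web page.
PHONE_PATTERN = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def normalize(number: str) -> str:
    # Keep digits only so formatting differences don't cause false mismatches.
    return re.sub(r"\D", "", number)

def number_on_official_page(quoted_number: str, official_contact_url: str) -> bool:
    """Return True if the number an AI assistant quoted also appears on the
    company's own contact page. The URL must come from a trusted bookmark or
    directory, never from the AI answer being checked."""
    page = requests.get(official_contact_url, timeout=10)
    page.raise_for_status()
    published = {normalize(m.group()) for m in PHONE_PATTERN.finditer(page.text)}
    return normalize(quoted_number) in published

# Illustrative call; both arguments here are placeholders, not real data.
# number_on_official_page("+1 800 555 0100", "https://www.emirates.com/us/english/help/")
```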


