AI-Driven Phishing: The New Threat to Biometric and Signature Security
Phishing attacks are evolving with AI and stealthier evasion techniques, targeting biometric data and electronic signatures. Discover the new tactics and learn how to protect your business.
Key Takeaways
- AI is transforming phishing into highly personalized and sophisticated attacks, making them harder to detect.
- Phishers are now targeting biometric data and electronic signatures, posing unprecedented risks to businesses and individuals.
- Trusted platforms like Telegram and Google Translate are being exploited to bypass traditional security measures.
- Proactive measures and user education are crucial to mitigating these advanced threats.
AI-Driven Phishing: A New Frontier in Cyber Threats
Phishing attacks have long been a menace to both individuals and businesses, but the integration of AI has elevated these threats to a new level of sophistication. Kaspersky's recent detection of over 142 million phishing link clicks in Q2 2025, a 3.3% increase globally and a 25.7% increase in Africa from Q1, underscores the scale of the problem. This surge is driven by advanced AI-powered deception techniques and novel evasion methods, targeting sensitive data such as biometrics and electronic signatures.
The Evolution of Phishing with AI
AI has transformed phishing into a highly personalized and convincing threat. Large language models enable attackers to craft emails, messages, and websites that mimic legitimate sources with unprecedented accuracy. These models eliminate the grammatical errors and awkward phrasing that once exposed scams, making them nearly indistinguishable from genuine communications.
Key tactics include:
- Personalized Emails and Messages: AI-driven bots on social media and messaging apps impersonate real users, engaging victims in prolonged conversations to build trust. These bots often fuel romance or investment scams, using AI-generated audio messages or deepfake videos to lure victims into fake opportunities.
- Deepfake Impersonations: Attackers create realistic audio and video deepfake impersonations of trusted figures, such as colleagues, celebrities, or bank officials, to promote fake giveaways or extract sensitive information. Automated calls mimicking bank security teams use AI-generated voices to trick users into sharing two-factor authentication (2FA) codes, enabling account access or fraudulent transactions.
- Targeted Attacks: AI-powered tools analyze public data from social media or corporate websites to launch targeted attacks, such as HR-themed emails or fake calls referencing personal details.
Exploiting Trusted Platforms
Phishers are also exploiting legitimate services to gain trust and evade detection. For instance, Telegraph, Telegram's long-form publishing platform, is being used to host phishing content, while Google Translate's page translation feature generates links that look legitimate and slip past the filters of security solutions.
Additional evasion techniques include:
- CAPTCHA Integration: Phishers embed CAPTCHA, a common anti-bot mechanism, into phishing sites. Because CAPTCHA is typically associated with trusted platforms, its presence lowers suspicion and the likelihood of detection.
- Short-Lived Links: Malicious links stay live only briefly to evade detection, often redirecting to legitimate websites once the exploit has been taken down (see the link-screening sketch after this list).
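To make the link-abuse pattern concrete, below is a minimal screening sketch in TypeScript. It is not Kaspersky's detection logic, and the host patterns (the `translate.goog` proxy domains and `translate.google.com` used by Google Translate's page translation, plus `telegra.ph` used by Telegraph) are assumptions based on how those services publicly expose URLs; production security tools rely on far richer signals such as reputation feeds, content analysis, and sandboxing.

```typescript
// Minimal illustration: flag links routed through services commonly abused
// to make phishing URLs look legitimate. This is a triage heuristic only.

// Hosts below are assumptions about how these services expose public URLs;
// adjust to your own telemetry before relying on them.
const PROXY_HOST_PATTERNS: RegExp[] = [
  /\.translate\.goog$/i,        // Google Translate page-translation proxy
  /^translate\.google\.com$/i,  // older translate.google.com/translate?u=... style
  /^telegra\.ph$/i,             // Telegram's Telegraph publishing platform
];

function looksLikeProxiedLink(rawUrl: string): boolean {
  let url: URL;
  try {
    url = new URL(rawUrl);
  } catch {
    return false; // not a parseable absolute URL
  }
  return PROXY_HOST_PATTERNS.some((pattern) => pattern.test(url.hostname));
}

// Example: the first two would be flagged for closer inspection.
console.log(looksLikeProxiedLink("https://example-com.translate.goog/login"));      // true
console.log(looksLikeProxiedLink("https://telegra.ph/Account-Verification-01-01")); // true
console.log(looksLikeProxiedLink("https://example.com/login"));                     // false
```

Flagged links are candidates for closer inspection rather than automatic blocking, since translation proxies and Telegraph also carry plenty of legitimate content.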
The Shift from Passwords to Biometrics and Signatures
The focus of phishing attacks has shifted from passwords to immutable data. Biometric data, such as facial or fingerprint scans, and electronic signatures are now prime targets. Fraudulent sites that request smartphone camera access under pretexts like account verification can capture these biometric identifiers, which cannot be changed. Similarly, electronic and handwritten signatures, critical for legal and financial transactions, are stolen via phishing campaigns impersonating platforms like DocuSign or prompting users to upload signatures to fraudulent sites.
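To illustrate how low the technical barrier is, the browser-side sketch below (TypeScript, using the standard Permissions and getUserMedia APIs) shows that a single call is all a page needs to receive a live camera stream once the user has tapped "Allow". It is an illustration, not code from any observed campaign, and the `"camera"` permission descriptor is currently supported mainly in Chromium-based browsers.

```typescript
// Browser-side sketch: the only barrier between a web page and the device
// camera is the permission prompt. Any site granted access receives a live
// MediaStream it can render and capture frames from.

async function cameraPermissionDemo(): Promise<void> {
  // Check whether this origin already holds camera permission
  // (Permissions API; "camera" descriptor is Chromium-only today).
  const status = await navigator.permissions.query({ name: "camera" as PermissionName });
  console.log(`Camera permission for this site: ${status.state}`); // "granted" | "denied" | "prompt"

  if (status.state === "granted") {
    // One standard call is all a page needs once permission exists.
    const stream: MediaStream = await navigator.mediaDevices.getUserMedia({ video: true });
    console.log(`Live video tracks available to this page: ${stream.getVideoTracks().length}`);
    stream.getTracks().forEach((track) => track.stop()); // release the camera
  }
}

cameraPermissionDemo().catch(console.error);
```

This is why a camera prompt from an unfamiliar "verification" page deserves the same scrutiny as a password prompt: unlike a password, a captured face cannot be changed.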
Risks and Implications:
- Unauthorized Access: Biometric data can be used for unauthorized access to sensitive accounts.
- Dark Web Sales: Biometric data and signatures are sold on the dark web, posing significant reputational and financial risks to businesses.
Case Study: Operation ForumTroll
In 2025, Kaspersky detected a sophisticated targeted phishing campaign dubbed Operation ForumTroll. Attackers sent personalized phishing emails inviting recipients to the “Primakov Readings” forum, targeting media outlets, educational institutions, and government organizations in Russia. The exploit leveraged a previously unknown vulnerability in the latest version of Google Chrome, and the malicious links were extremely short-lived to evade detection.
The Bottom Line
The convergence of AI and evasive tactics has turned phishing into a near-perfect mimic of legitimate communication, challenging even the most vigilant users. Attackers are no longer satisfied with stealing passwords; they are targeting biometric data as well as electronic and handwritten signatures, with potentially devastating, long-term consequences. By exploiting trusted platforms like Telegram and Google Translate, and co-opting tools like CAPTCHA, attackers are outpacing traditional defenses. Users must remain skeptical and proactive to avoid falling victim to these advanced threats.
Frequently Asked Questions
How does AI make phishing attacks more dangerous?
AI enables attackers to craft highly personalized and convincing emails, messages, and websites that mimic legitimate sources, making it difficult for users to distinguish between real and fake communications.
What are some common targets of AI-powered phishing attacks?
Common targets include biometric data (facial and fingerprint scans), electronic signatures, and handwritten signatures, which are critical for legal and financial transactions.
How do phishers use trusted platforms to bypass security measures?
Phishers exploit platforms like Telegram and Google Translate to host phishing content and generate links that look legitimate, bypassing security solutions' filters.
What can businesses do to protect themselves from AI-driven phishing?
Businesses should implement multi-factor authentication, educate employees about phishing tactics, and use advanced security solutions that can detect and block AI-generated threats.
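On the multi-factor authentication point, one widely deployed phishing-resistant option, not named in the Kaspersky report but worth knowing, is FIDO2/WebAuthn: the credential is cryptographically bound to the legitimate site's origin, so a look-alike phishing domain cannot reuse it even if the user is fooled. Below is a minimal browser-side registration sketch; the relying party `example.com`, the challenge, and the user handle are placeholders that a real deployment would obtain from its server.

```typescript
// Minimal WebAuthn registration sketch (browser side). The challenge and
// user ID are placeholders; in practice both come from your server, which
// also verifies the attestation response and stores the public key.
// Key property: the credential is scoped to rp.id, so a phishing
// look-alike domain cannot use it.

async function registerSecurityKey(): Promise<void> {
  const options: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // placeholder; must be server-issued
    rp: { name: "Example Corp", id: "example.com" },       // hypothetical relying party
    user: {
      id: crypto.getRandomValues(new Uint8Array(16)),      // placeholder user handle
      name: "jane.doe@example.com",
      displayName: "Jane Doe",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
    authenticatorSelection: { userVerification: "preferred" },
    timeout: 60_000,
  };

  const credential = await navigator.credentials.create({ publicKey: options });
  console.log("Credential created, bound to example.com:", credential?.id ?? "(none)");
}

// registerSecurityKey() would be invoked from a user gesture,
// e.g. a "Register security key" button click.
```

Unlike SMS or app-generated one-time codes, this kind of factor cannot be read out over a fraudulent call or typed into a fake page, which directly addresses the 2FA-code harvesting described earlier.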
What is Operation ForumTroll, and why is it significant?
Operation ForumTroll was a sophisticated targeted phishing campaign that used personalized emails and a previously unknown vulnerability in Google Chrome to compromise systems, highlighting the advanced nature of modern phishing attacks.