
Researchers at Guardio Labs have identified a sophisticated scam that abuses Grok, the AI assistant built into the social media platform X, to distribute malware. The technique, dubbed “Grokking” by Nati Tal, Head of Cyber Security Research at Guardio Labs, lets attackers sidestep the platform’s security measures and amplify harmful links.
The scam begins with attention-grabbing promoted video posts that deliberately contain no clickable link in the visible body, allowing them to slip past X’s security filters, which scan the ad text for malicious URLs. The payload link is instead concealed in the small “From:” metadata field beneath the video, a field the platform’s scanning processes appear to overlook. The attackers then prompt Grok with a simple question about the advertisement, such as “What is the link to this video?” Grok reads the hidden metadata and replies with a fully clickable malicious link, and because the answer comes from the platform’s own trusted assistant, the link inherits an air of credibility.
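The flow is easiest to see in miniature. The sketch below is purely illustrative: X’s actual ad schema and Grok’s internals are not public, so the field names (`body`, `video_from`), the body-only filter, and the toy assistant are all assumptions standing in for the real components.

```python
import re

URL_RE = re.compile(r"https?://\S+")

# Hypothetical promoted video post: no link in the visible body, but a
# malicious URL tucked into the small "From:" field under the video card.
promoted_post = {
    "body": "You won't believe what happens next... watch to the end!",
    "video_from": "https://malicious.example/download",  # hidden field
}

def link_filter_passes(post: dict) -> bool:
    """Toy stand-in for an ad filter that only inspects the visible body."""
    return URL_RE.search(post["body"]) is None

def assistant_reply(post: dict, question: str) -> str:
    """Toy assistant that folds *every* field into its context window,
    so it happily reads out what the filter never examined."""
    context = " ".join(str(value) for value in post.values())
    urls = URL_RE.findall(context)
    if "link" in question.lower() and urls:
        return f"The video is from {urls[0]}"
    return "I couldn't find a link in this post."

assert link_filter_passes(promoted_post)  # the ad clears the body-only filter
print(assistant_reply(promoted_post, "What is the link to this video?"))
# -> The video is from https://malicious.example/download
```

The mismatch between what the filter checks and what the assistant reads is the entire vulnerability: the two components disagree about which fields count as content.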
This manipulation effectively turns Grok into a “megaphone” for harmful content, as cybersecurity experts Ben Hutchison and Andrew Bolster observed. The links funnel users to dangerous websites, often through fake CAPTCHA pages, before prompting them to download information-stealing malware. By co-opting Grok, the attackers transform a trusted, platform-sanctioned assistant into a powerful tool for spreading malicious content.
The reach of this scam is alarming: some of the advertisements reportedly garnered more than 5 million views. Links that should have been blocked outright instead carried harmful content to millions of unsuspecting users.
Insights from Cybersecurity Experts
In light of this research, cybersecurity professionals have weighed in on the scam’s implications. Chad Cragle, Chief Information Security Officer at Deepwatch, summed up the attackers’ strategy: “They hide links in the ad’s metadata and then ask Grok to ‘read it out loud.’” He stressed that security teams must scan hidden fields, not just visible ad text, and educate users that even a “verified” assistant can be deceived.
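As a concrete starting point, the hidden-field scan Cragle describes can be prototyped in a few lines. The sketch below is an assumption-laden illustration rather than a production control: the post layout and field names are hypothetical, and a real deployment would walk whatever structure the ad pipeline actually emits.

```python
import re

URL_RE = re.compile(r"https?://\S+")

def iter_text_fields(obj, path=""):
    """Depth-first walk over a nested post object, yielding (path, text) pairs."""
    if isinstance(obj, dict):
        for key, value in obj.items():
            yield from iter_text_fields(value, f"{path}.{key}" if path else key)
    elif isinstance(obj, list):
        for index, value in enumerate(obj):
            yield from iter_text_fields(value, f"{path}[{index}]")
    elif isinstance(obj, str):
        yield path, obj

def hidden_links(post, visible_fields=("body",)):
    """Flag URLs that appear only outside the visible, already-filtered fields."""
    findings = []
    for path, text in iter_text_fields(post):
        if path.split(".")[0].split("[")[0] in visible_fields:
            continue  # visible text is already covered by existing link filters
        findings.extend((path, url) for url in URL_RE.findall(text))
    return findings

# Hypothetical ad object mirroring the Grokking pattern
ad = {
    "body": "Incredible clip, watch now!",
    "card": {"video_from": "https://malicious.example/payload"},
}
print(hidden_links(ad))
# -> [('card.video_from', 'https://malicious.example/payload')]
```

Anything such a scan flags can then be routed to the same review queue as links found in the visible ad text, closing the gap between what the filter checks and what downstream systems can read.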
Meanwhile, Andrew Bolster, Senior R&D Manager at Black Duck, categorized Grok as a high-risk AI system that exemplifies what he refers to as the “Lethal Trifecta”: an agent that combines access to sensitive data, exposure to untrusted content, and the ability to communicate externally. From the model’s perspective, he explained, this kind of manipulation is a feature rather than a flaw, because the system is designed to read and respond to content regardless of who authored it or why.
The findings illustrate a significant vulnerability in AI technologies: systems that are genuinely useful can be repurposed for malicious ends the moment they ingest attacker-controlled content. Organizations deploying AI-powered services must remain vigilant and implement robust security measures against such tactics. The Grokking scam is a stark reminder of the risks that accompany AI and of the critical need for ongoing cybersecurity awareness.