Malvertising has made its way to Bing’s chatbot/search engine. Cybersecurity researchers from Malwarebytes recently observed a malicious ad served as part of the ChatGPT-powered answer to a search query.
Malvertising is a simple practice: hackers trick ad networks into displaying ads that look legitimate but are, in fact, malicious. It’s a game of impersonation, as the ads, the sites they lead to, and the content provided there all appear as something they’re not (usually software, streaming services, or cryptocurrency-related tools).
Until now, malvertising was mostly seen on the usual search engines, Google, Bing, and the like, despite the companies putting in gargantuan efforts to keep their search results clean, for obvious reasons. However, with the emergence of ChatGPT, and especially since its integration into Bing, things have changed.
New dog, old tricks
Microsoft integrated ChatGPT into Bing earlier this year, and a few months ago even started monetizing it, in practically the same way other search engines monetize their digital real estate. When a user types in a query, they get a result paired with a few sponsored links, clearly labeled as such. Bing Chat, in all its AI-powered glory, is no different.
In this particular instance, when Malwarebytes’ researchers asked Bing for the Advanced IP Scanner tool, they were given a link that ultimately redirected them to “advenced-ip-scanner[.]com” (mind the “e” instead of the “a”), where victims would download an installer. That installer’s goal was to retrieve the ultimate payload, but the chain appears to be defunct, as the researchers couldn’t obtain the actual malware.
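The typosquatted domain differs from the real advanced-ip-scanner.com by a single character, which is exactly the kind of lookalike that edit-distance checks can flag. As a minimal illustrative sketch (not part of the researchers’ tooling, and the brand list here is an assumption), a defender could compare visited domains against known legitimate ones:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Hypothetical allowlist of brand domains a defender might maintain.
LEGIT = "advanced-ip-scanner.com"

def looks_like_typosquat(domain: str, legit: str = LEGIT) -> bool:
    # A distance of 1-2 from a known brand domain, without being an exact
    # match, is a classic typosquatting signal.
    return domain != legit and edit_distance(domain, legit) <= 2

print(looks_like_typosquat("advenced-ip-scanner.com"))  # True (one swapped letter)
print(looks_like_typosquat("example.com"))              # False
```

Real-world checks would also normalize homoglyphs and cover many brand domains, but the single-character swap seen here is caught by even this simple heuristic.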
“Threat actors continue to leverage search ads to redirect users to malicious sites hosting malware,” the researchers warned. “While Bing Chat is a different search experience, it serves some of the same ads seen via a traditional Bing query.”