Google: “Hackers exploit Gemini AI for cyberattacks”

Hackers Exploit Google's AI – A Growing Cybersecurity Threat

For years, Western countries have expressed concern about cyberattacks from adversary states. However, the situation has taken a new turn, as tech giant Google has publicly admitted that its AI-powered chatbot, Gemini, is being exploited by hackers from Iran, China and North Korea.

Hackers' use of Gemini AI

In an ironic twist, Google's threat intelligence group revealed that Iranian hackers are exploiting Gemini AI for reconnaissance and phishing attacks. Meanwhile, Chinese cybercriminals are reportedly using the chatbot to identify vulnerabilities in various systems and networks.

North Korean hackers, on the other hand, have been found using Gemini AI to generate fake job offers, luring IT professionals into fraudulent remote or part-time work schemes.


Russia's absence and suspicions of AI manipulation

Surprisingly, Google's Threat Intelligence Group did not mention Russia, despite the country's reputation for cyberwarfare. The omission raises questions; perhaps Russia's involvement is still under investigation. However, Google suggested that an Asian nation is using generative AI to spread misinformation, generate malware, manipulate translated content, and use fake digital identities to spread disinformation.

For more information on how AI is used in cyber warfare, read this report from the European Union Agency for Cybersecurity (ENISA).

Can AI tools be protected from malicious use?

Preventing AI tools from falling into the wrong hands is a complex challenge. One potential solution is to enforce user authentication and track who has access to machine learning tools. In addition, implementing restrictions, such as IP address filtering or tying access to verified user identities, helps limit abuse.
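As a concrete illustration of these measures, the sketch below shows how an organization might place a small gateway in front of an internal AI endpoint, combining IP address filtering with authenticated, logged access. The endpoint path, token store, and network ranges are hypothetical examples chosen for this sketch and are not part of any real Gemini or Google API.

import ipaddress
import logging
from flask import Flask, request, abort, jsonify

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)

# Hypothetical allowlist of internal network ranges (example values only).
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.0.0/16"),
]

# Hypothetical mapping of API tokens to known user identities.
API_TOKENS = {"example-token-123": "analyst@example.com"}

@app.before_request
def enforce_access_controls():
    # IP filtering: reject requests coming from outside approved networks.
    client_ip = ipaddress.ip_address(request.remote_addr or "127.0.0.1")
    if not any(client_ip in net for net in ALLOWED_NETWORKS):
        abort(403, description="IP address not allowed")

    # User authentication: require a known token and log who is calling.
    token = request.headers.get("X-Api-Token", "")
    user = API_TOKENS.get(token)
    if user is None:
        abort(401, description="Unknown or missing API token")
    logging.info("AI request from %s (%s) to %s", user, client_ip, request.path)

@app.route("/v1/generate", methods=["POST"])
def generate():
    # Placeholder: the forwarded call to the underlying AI model would go here.
    prompt = request.get_json(silent=True) or {}
    return jsonify({"echo": prompt, "note": "model call would happen here"})

if __name__ == "__main__":
    app.run(port=8080)

A gateway like this will not stop a determined state actor on its own, but it raises the bar and produces an audit trail showing who accessed the tool, when, and from where.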

But such measures have their drawbacks.

Cybercriminals may simply turn to open source alternatives, escalating competition between threat actors and making state-sponsored cyberattacks even harder to track. This in turn places an increasing burden on law enforcement agencies already struggling with cybersecurity and intelligence analysis talent shortages.

To learn more about the challenges in cybersecurity, visit the Cybersecurity and Infrastructure Security Agency (CISA).

The bigger concern: AI's role in digital surveillance

Google's recent global rollout of Gemini AI on Android smartphones raises a troubling question: can this technology be manipulated to function beyond its intended purpose? What if it starts recording audio and video from users' surroundings without their knowledge?

As AI continues to evolve, ensuring that its ethical use remains a priority becomes increasingly critical. Striking the right balance between innovation and security remains one of the greatest challenges of the digital age.

Read more about AI ethics in the EU's AI Ethics Guidelines.
