
Artificial intelligence is streamlining and personalizing cyber intrusions for hackers

CrowdStrike's annual threat hunting report reveals artificial intelligence (AI) as a dual danger for organizations.


A recent report by security firm CrowdStrike reveals that government-backed hackers are increasingly using artificial intelligence (AI) to automate, customize, and accelerate cyberattacks.

According to the report, AI is being used to automate reconnaissance, assess the exploitation value of discovered vulnerabilities, generate phishing messages, translate and modify phishing lures, and maintain a high operational tempo.

One example is the Iran-linked hacking team Charming Kitten, which likely used AI to generate messages in a 2024 phishing campaign against U.S. and European organizations. Similarly, the North Korea-linked hacker team "Famous Chollima" (also tracked as UNC5267) conducted more than 320 intrusions in a single year with AI assistance.

AI is also being used to develop or employ autonomous malware that can adapt its code in real-time to evade detection by cybersecurity defenses. Hackers have been chaining together multiple open-source tools powered by AI to perform complex attack stages like reconnaissance, lateral movement, and credential harvesting at speeds impossible without automation.

The use of AI in cyberattacks not only accelerates their execution but also lowers the cost and increases the volume of attacks. This creates a rapidly evolving threat landscape requiring new, AI-enhanced defensive strategies to keep pace.

As organizations continue to adopt AI tools, the attack surface will keep expanding, and, as CrowdStrike warns in its report, trusted AI tools will emerge as the next insider threat.

Businesses are increasingly incorporating AI into their workflows, often without proper security measures, which makes both the businesses and the AI tools themselves attractive targets for hackers.

One example of this is the exploitation of a vulnerability in Langflow's AI workflow development tool, which was used by hackers to burrow into networks, commandeer user accounts, and deploy malware.

In summary, government-backed hackers are leveraging AI primarily to automate and customize attacks, improve social engineering, evade detection, and sustain relentless operations across multiple fronts, marking a significant evolution in cyber warfare capabilities. This trend underscores the importance of robust AI security measures to protect against increasingly sophisticated threats.


  1. Artificial intelligence (AI) is being used by hackers to develop autonomous malware that can evade cybersecurity defenses by adapting its code in real-time.
  2. The use of AI in cyberattacks has the potential to lower the cost and increase the volume of attacks, creating a rapidly evolving threat landscape.
  3. The incorporation of AI into workflows makes businesses potential targets for AI-enhanced cyberattacks, highlighting the need for robust AI security measures.
