The dark side of AI: 6 ways AI could elevate email threats of the future

e92plus
November 2023

by Colin Brown

Written by Cofense.

Artificial Intelligence (AI) has been a popular topic of conversation, undeniably transforming various industries with optimised efficiencies and creative applications. But what does this powerful tool mean for the future of email threats, and how could it be leveraged by cybercriminals to amplify their attacks?

In this blog we’ll explore the potential ways AI could be applied to make email threats even more dangerous, what organisations and the Channel should be looking out for, and why proactive measures are needed to ensure cybersecurity.

1. Advanced Phishing Attacks: AI will unquestionably make it far easier for anyone to develop highly convincing, personalised phishing emails at scale. Where attacks previously relied on a threat actor’s individual capability, machine learning algorithms can analyse vast amounts of data at speed, including stolen personal information, online behaviour, social media profiles, and even public records. Armed with this information, cybercriminals will be able to tailor their phishing emails extensively, making them appear far more authentic through the use of personal details, or even by mimicking the writing style of someone the recipient knows. Such sophisticated AI-driven phishing attacks could significantly increase the success rate of these malicious campaigns.

2. Automated Email Attacks: AI can facilitate automated email attacks at an unprecedented scale. Using AI-driven bots, cybercriminals will be able to send massive volumes of malicious campaigns to unsuspecting recipients. These bots can even mimic human behaviour, such as replying to emails or engaging in conversations, making them difficult to differentiate from legitimate users. The sheer volume and speed of these automated email attacks could overwhelm email servers, causing disruption to businesses and hindering communication channels, further putting organisations at risk of a breach.

3. Automated Spear Phishing: By combining these advanced tactics with automation, threat actors will be able to deliver spear phishing campaigns far more easily. Spear phishing is a targeted form of phishing in which attackers focus on specific individuals or organisations. With AI, cybercriminals could automate and scale spear phishing attacks, targeting a large number of individuals simultaneously and effectively. This automation and scalability will further increase the reach and effectiveness of spear phishing attacks, posing a serious threat to individuals and organisations alike.

4. Deepfake Emails: Deepfake technology, which uses AI to create realistic, manipulated videos, images or audio recordings, could be applied to emails as well. At Cofense we’ve already seen an increasing trend in image-based attacks due to the difficulty of identifying them, and with the application of AI, cybercriminals could generate content that is not only harder to detect but even more convincing. With organisations a core target, attackers could use it to impersonate senior individuals. For instance, an AI-generated video clip of a CEO could request an urgent funds transfer or the disclosure of sensitive information. Deepfake emails pose a serious risk of eroding trust and will make it increasingly challenging for individuals to distinguish between genuine and fraudulent communications.

5. Evasion of Detection Systems: AI could be employed to develop more sophisticated malware and tactics designed to evade traditional security systems. Machine learning could analyse existing security technologies and their known detection measures to identify areas of weakness, enabling cybercriminals to evolve phishing campaigns capable of bypassing SEGs (Secure Email Gateways), antivirus software and intrusion detection systems. AI will enable threat actors to be even more nimble in their evolutionary efforts, staying one step ahead.

6. Reduced Identifiers: One well-known identifier of malicious campaigns has historically been poor grammar or spelling. With the use of AI and machine-generated content, this identifier could be hugely compromised: AI will likely vastly improve the quality of copy, making malicious messages far less distinguishable from legitimate ones.

Conclusion: 
Whilst we can already see some of the vast positive capabilities of AI, we must be aware of the possibilities it also creates for cybercriminals to amplify threats. With this technology we’ll undoubtedly see a continued increase in threat sophistication and in the sheer volume of IOCs (Indicators of Compromise), delivering advanced, highly convincing and even automated phishing campaigns. To protect against these next-level attacks, the way organisations evolve will be vital to their success, and the Channel will need to deliver the tools and capability to do this. Access to and utilisation of threat intelligence will be crucial in staying abreast of developing tactics, and this intelligence will need to feed every element of an organisation’s security stack. Only by feeding it into processes, defence mechanisms, and employee training and simulations will the Channel be able to help organisations combat evolving threats. By staying vigilant and proactive, we can work towards mitigating the risks associated with the dark side of AI and ensuring a safer digital environment for all.

Contact us to find out how Cofense's 35+ million global threat reporters can help you deliver invaluable live threat intelligence, and unmatched proactive training and simulations.