Email phishing attacks skyrocket due to ChatGPT's influence, experts warn

By Newsdesk
March 06 2024

The rise of generative AI technologies such as ChatGPT has been accompanied by a dramatic increase in cyber threats, particularly in the realm of email phishing attacks.

Security firms are reporting a surge in such attacks, with volumes more than 10 times those previously recorded. The uptick has been attributed to the malicious use of AI by cybercriminals, including the development of AI-powered malware tools that are being distributed on the dark web.

Edith Reads, a financial analyst from Stocklytics, highlighted the critical nature of the situation, noting that "threat actors are using tools like ChatGPT to orchestrate schemes involving targeted email fraud and phishing attempts." These attacks often lure victims into divulging sensitive information such as usernames and passwords by clicking on deceitful links.

According to data gathered in the last quarter of 2022, approximately 31,000 fraudulent emails were being sent each day, marking a 967% increase in credential phishing attempts. The majority of these attacks, about 70%, were conducted through text-based business email compromise (BEC), while 39% targeted mobile users through SMS phishing (smishing).

Cybersecurity practices are being forced to evolve in response to these threats. Reads suggested that "it is vital for companies to integrate AI directly into their security frameworks to monitor all communication channels and neutralize risks consistently." Despite efforts by AI developers to prevent misuse of their platforms, cybercriminals continue to find ways to circumvent these protections.

The potential for the misuse of generative AI extends beyond phishing attacks. Recent research, including studies from the RAND Corporation, has raised alarms about the possibility of terrorists using these technologies to plan and execute biological attacks. Researchers have also shown that hackers can exploit less commonly tested languages to trick ChatGPT into providing instructions for criminal activities.

In an effort to address these vulnerabilities, OpenAI has engaged red teams of cybersecurity experts to identify and remedy security weaknesses in its AI systems, a move that underscores the ongoing battle between technological advancement and its exploitation by malicious actors.
