Highlights:

  • The key finding of Vulcan Cyber’s analysis is that hackers can easily exploit ChatGPT to propagate harmful packages into developer environments.
  • Developers are advised to carefully evaluate the libraries they use, particularly when AI has recommended them.

Artificial intelligence is the “in” thing in 2023, as evidenced by a surprise surge in stock values on Nasdaq Inc.’s exchange. However, a recent analysis by Vulcan Cyber Ltd. serves as a reminder that hackers target every new, shiny object observed online.

The paper highlights the cybersecurity dangers that could arise from the rapid spread of generative AI technology, notably with products like OpenAI LP’s ChatGPT. Although AI holds immense promise, the paper points out that the technology could still present security risks.

The key finding of Vulcan Cyber’s analysis is that hackers can easily exploit ChatGPT to propagate harmful packages into developer environments. Researchers at Vulcan attribute the risk to the pervasive use of AI across almost all corporate use cases, the structure of software supply chains, and the industry’s reliance on open-source code libraries.

The idea behind the report’s warnings is what the researchers refer to as “AI package hallucination.” In some instances, AI systems like ChatGPT invent coding libraries that at first seem realistic but are ultimately nonexistent. When ChatGPT suggests one of these “hallucinated” packages, a threat actor can create and distribute a malicious package under the same name, leaving an otherwise secure system open to unanticipated cyber-attacks.
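
To make the scenario concrete, here is a hypothetical sketch of the risky pattern (ours, not the report’s; the package name is invented for illustration): a developer installs whatever name an AI assistant suggests, without checking it first.

```python
# Hypothetical illustration of the risky pattern described above. The
# package name below is invented; the point is that it came from an AI
# answer rather than from a vetted source.
import subprocess

suggested_package = "quick-json-validator"  # imagine a ChatGPT answer suggested this

# The exposure point: if this name was hallucinated and an attacker later
# registered a malicious package under it, this command installs their code.
subprocess.run(["pip", "install", suggested_package], check=True)
```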

The researchers noted that this weakness could expose more users to cybersecurity dangers as the use of AI tools for professional tasks increases. Developers who now turn to ChatGPT instead of more established venues such as Stack Overflow for coding solutions may unintentionally install these malicious packages, exposing the wider organization.

According to Vulcan Cyber’s experts, great care must be taken, but AI should not be abandoned because of the possible risks. Instead, the research urges greater alertness and proactive behavior, especially from the professionals who rely on AI most frequently in their daily roles.

Developers are advised to carefully evaluate the libraries they use, particularly when an AI has recommended them. Before installing a package, developers should confirm its legitimacy by checking details such as the package’s creation date, download count, comments, and any notes that may be attached.
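
As a concrete illustration, much of this checking can be scripted. The sketch below is ours, not the report’s; it queries PyPI’s public JSON API (https://pypi.org/pypi/<name>/json) and prints a few of the signals the researchers mention. Download counts are served by a separate service (pypistats.org) and are not queried here.

```python
# A minimal vetting sketch (illustrative, not exhaustive): look up a
# package's metadata on the public PyPI JSON API before installing it.
import json
import sys
from urllib.error import HTTPError
from urllib.request import urlopen


def vet_package(name: str) -> None:
    """Print basic legitimacy signals for a PyPI package name."""
    try:
        with urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
            meta = json.load(resp)
    except HTTPError as err:
        if err.code == 404:
            # The name is not on PyPI at all -- possibly a hallucinated
            # suggestion, or a name an attacker could still claim.
            print(f"'{name}' does not exist on PyPI.")
            return
        raise

    info = meta["info"]
    # The earliest upload across all releases approximates the package's
    # creation date; a very recent date on a supposedly well-established
    # library is a red flag.
    uploads = [
        f["upload_time_iso_8601"]
        for files in meta["releases"].values()
        for f in files
    ]
    print(f"name:         {info['name']}")
    print(f"summary:      {info['summary']}")
    print(f"author:       {info['author'] or 'unknown'}")
    print(f"first upload: {min(uploads) if uploads else 'no files uploaded'}")


if __name__ == "__main__":
    vet_package(sys.argv[1])
```

None of these signals is conclusive on its own; as the researchers note below, a trojan package can be made to look functional. Together, though, they raise the bar for an attacker.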

The researchers reported, “It can be difficult to tell if a package is malicious if the threat actor effectively obfuscates their work or uses additional techniques such as making a trojan package that is actually functional. Given how these actors pull off supply chain attacks by deploying malicious libraries to known repositories, it’s important for developers to vet the libraries they use to make sure they are legitimate.”

Given how prevalent AI already is, the research highlights a serious cybersecurity concern, one growing quickly with the broad adoption of generative AI technology.

The researchers urge developers to be more vigilant and proactive, thoroughly validating the libraries that AI platforms suggest. The report emphasizes the need to balance embracing AI’s enormous potential with thoughtfully addressing the associated hazards.