
The Write Stuff: Generative AI’s Ability to Create Content and Code Poses Both Cyber Threats and Opportunities

Oct. 13, 2023

This post is the second in an IQT series on the implications of Generative AI for cybersecurity. You can read our introductory post here.

As its name suggests, Generative AI (GenAI) is great at generating all manner of content, from silly Jay-Z verses to convincing news summaries. What makes it so powerful is its ability to write, whether the output is an email or computer code. This is one of the three main domains of GenAI’s impact on cybersecurity that we highlighted in our introduction to this blog series, linked above.

In this post, we’ll discuss why this ability to write natural language and code is both a big headache for—and, potentially, a great help to—cyber defenders. We’ll also highlight innovative startups working on solutions to the challenges posed by GenAI in this domain and underline where we think there are opportunities for more entrepreneurial activity.

Now back to those Jay-Z verses. You don’t have to be a fan of the artist to appreciate how GenAI and the Large Language Models (LLMs) that underpin it are democratizing access to powerful tools for creating content. GenAI tools are being used for frivolous ends, but they’re bound to be used for malicious ones too. The ability of modern LLMs to generate human-like, conversational text has worrying implications for cyber defenders. Equipped with still relatively crude GenAI tools such as WormGPT—a ChatGPT alternative purporting to have no restrictions preventing its use for illicit activity—cyber attackers hope to produce more authentic and persuasive phishing emails at scale. Moreover, GenAI can also create fake-but-highly-convincing audio and video content, giving hackers a way to reinforce their phishing efforts and to conduct more effective social-engineering operations.

The volume of phishing attacks is already increasing. Researchers found a roughly 50% increase in 2022, followed by a 135% increase in the early months of this year. But it’s not just the quantity of these attacks that’s worrying. The “quality” of the messages—by which we mean their ability to get people to take actions that compromise cybersecurity—is set to rise too, as hackers use GenAI tools to craft even more convincingly worded emails.

In addition to using the tools to generate text for phishing attacks, threat actors are poised to take advantage of LLMs to develop software programs—and, in particular, to supercharge the development of malware. Researchers have already demonstrated how to use generative AI techniques to synthesize polymorphic malware variants that dynamically modify their code at runtime to evade detection algorithms. FraudGPT, discovered in August, advertises a number of advanced GenAI capabilities in cybercrime forums, including ‘writing malicious code’ and ‘creating undetectable malware’.

Generating solutions to GenAI threats

While attackers are undoubtedly exploiting these new tools, developers, security teams, and an ecosystem of business stalwarts and startups are mobilizing to tap GenAI tools to bolster cyber defenses.

For example, LLMs are being trained to detect natural-language content generated by other well-known LLMs, which can help counter GenAI-powered phishing threats. The results have varied, however. In January 2023, OpenAI launched a classifier trained to distinguish between AI-written and human-written text, but by July it had announced that “the AI classifier [was] no longer available due to its low rate of accuracy.” Late-stage company Abnormal Security has built a specialized prediction engine to detect emails created by GenAI, and seed-stage startup GPTZero advertises its own model trained to detect the use of ChatGPT, GPT-4, Bard, Llama, and other AI models. Yet there’s still an urgent need for more solutions to the phishing issues posed by the new technology.
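Many such detectors lean on statistical signals like perplexity (roughly, how unsurprising a text is to a language model), on the theory that machine-generated prose tends to score lower than human writing. The minimal sketch below illustrates that idea in Python. It assumes the Hugging Face transformers and torch packages, uses GPT-2 purely as a convenient scoring model, and treats the threshold as illustrative; it is not a reconstruction of any vendor’s method, and production systems combine many more signals.

```python
# Minimal sketch of perplexity-based AI-text detection. Illustrative only:
# the scoring model and threshold are assumptions, not any vendor's design.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score text under GPT-2; lower perplexity hints at machine-generated prose."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels makes the model return the mean token negative log-likelihood.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def looks_ai_generated(text: str, threshold: float = 40.0) -> bool:
    # Hypothetical cutoff: real systems calibrate on data and combine
    # several signals (burstiness, stylometry, provenance) instead.
    return perplexity(text) < threshold

if __name__ == "__main__":
    sample = "Dear customer, your account has been temporarily suspended."
    print(f"perplexity={perplexity(sample):.1f}, suspicious={looks_ai_generated(sample)}")
```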

Spotting and neutralizing malicious code created with GenAI tools is also a tough challenge that’s going to require innovative solutions. One very early-stage approach involves training LLMs to detect malicious actions within binaries. Another breaks binaries down to an intermediate stage and then pattern-matches against known techniques in the widely used MITRE ATT&CK framework, as sketched below. IQT sees opportunities for next-generation signature-based and behavior-based detection tools that leverage GenAI to match the scale and variability of the emerging threat.
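To make that second approach concrete at toy scale, the sketch below assumes a disassembler or sandbox has already reduced a binary to a list of observed API calls (the “intermediate stage”), and pattern-matches those calls against a small, hand-written table of MITRE ATT&CK techniques. Real systems work on far richer intermediate representations and much larger rule sets; the signature table here is illustrative only.

```python
# Toy behavior-to-ATT&CK pattern matcher. The signature table is a
# hand-written illustration; real systems use much richer rules.
from dataclasses import dataclass

@dataclass
class Finding:
    technique_id: str   # MITRE ATT&CK technique identifier
    name: str
    matched_calls: tuple

# Hypothetical signature table: ATT&CK technique -> API calls that suggest it.
SIGNATURES = {
    ("T1056.001", "Input Capture: Keylogging"): {"SetWindowsHookExA", "GetAsyncKeyState"},
    ("T1547.001", "Registry Run Keys / Startup Folder"): {"RegOpenKeyExA", "RegSetValueExA"},
    ("T1055", "Process Injection"): {"VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"},
}

def match_techniques(api_calls: list[str]) -> list[Finding]:
    """Flag techniques whose indicator calls all appear in the observed calls."""
    observed = set(api_calls)
    findings = []
    for (tid, name), required in SIGNATURES.items():
        if required <= observed:  # every indicator call was seen
            findings.append(Finding(tid, name, tuple(sorted(required))))
    return findings

# Example: calls extracted from a (hypothetical) suspicious binary.
calls = ["VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread", "Sleep"]
for f in match_techniques(calls):
    print(f"{f.technique_id} {f.name}: {f.matched_calls}")
```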

Fighting back: using GenAI to write better code

Although GenAI has undoubtedly given malware-makers a powerful new tool, on the flipside it could also frustrate hackers’ efforts by helping developers create more secure code. Offerings such as GitHub Copilot, Codeium, Replit, and TabNine use AI to autocomplete code and refactor segments of it in ways intended to reflect security best practices.
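As a concrete example of the kind of refactor these assistants aim to make routine, consider the classic SQL-injection fix below. The snippet is our own Python illustration, not output from any of the products named above.

```python
# Before/after illustration of the refactor a security-minded AI assistant
# might suggest. Illustrative only; real suggestions are context-specific.
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Vulnerable: string formatting lets attacker-controlled input alter the query.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchone()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Refactored: a parameterized query keeps user data out of the SQL grammar.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchone()
```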

While these code-generation tools are still in their infancy, nearly four out of every five developers already believe these tools have improved their organization’s code security, according to a report from Snyk. Driving this result are features such as AI-based vulnerability filtering that blocks insecure coding patterns in real time—including hard-coded credentials, path injection, and SQL injection—and fine-tuning models (or even training them from scratch) on security-hardened proprietary databases.
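To make the pattern-filtering idea concrete, here is a deliberately simple, regex-based stand-in. Real filters run inside the model-serving pipeline and typically use trained classifiers rather than regular expressions; this sketch only shows the kinds of patterns (hard-coded credentials, path traversal, string-built SQL) such filters aim to block.

```python
# Toy stand-in for real-time vulnerability filtering of code suggestions.
# The patterns are illustrative, not a production rule set.
import re

INSECURE_PATTERNS = {
    "hard-coded credential": re.compile(r"""(password|secret|api_key)\s*=\s*['"][^'"]+['"]""", re.I),
    "path traversal": re.compile(r"\.\./"),
    "sql string concat": re.compile(r"""execute\(\s*f?["'].*(SELECT|INSERT|UPDATE|DELETE).*[+%{]""", re.I | re.S),
}

def review_suggestion(code: str) -> list[str]:
    """Return the names of insecure patterns found in a code suggestion."""
    return [name for name, pat in INSECURE_PATTERNS.items() if pat.search(code)]

suggestion = 'db.execute(f"SELECT * FROM users WHERE id = {user_id}")'
print(review_suggestion(suggestion))  # ['sql string concat']
```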

Underpinning many of these AI-driven code-generation applications are foundation models. Some of these models, such as OpenAI’s Codex and Meta’s Code Llama, have billions of parameters and have been optimized for code, but none has yet been trained at the scale of GPT-4 (a trillion-parameter model, by some estimates) with the explicit goal of generating code. A startup called Poolside aims to build powerful, next-generation foundation models and supporting infrastructure designed to boost even further the accuracy and security of AI-assisted programming.

Encouraging changes are also taking place in applications that complement the core function of code generation. Startups such as Nova and Codium (a different startup from Codeium, mentioned above) aim to generate code-integrity-test suggestions for developers’ consideration inside integrated development environments. There are also product-security copilots under development at companies like Amplify Security and Pixee, which consume detections from DevSecOps alerting tools and automatically generate and propose remediation code, a loop we sketch below. And companies including Grit.io, Second, and Intuita are leveraging GenAI to improve security hygiene by addressing technical debt in the code base and automating code migrations and dependency upgrades.
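To give a feel for what such a remediation loop might look like internally, here is a hedged sketch: take a finding from a scanner, ask an LLM for a candidate patch, and surface it for human review. The prompt, the model name, and the shape of the finding are our own assumptions for illustration (not details of Amplify Security’s or Pixee’s products), and the sketch assumes the openai Python package (version 1.0 or later) with an API key set in the environment.

```python
# Hedged sketch of a remediation copilot's core loop. Prompt wording, the
# model name, and the finding format are illustrative assumptions only.
from openai import OpenAI  # assumes the openai package (>=1.0) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def propose_remediation(finding: dict, source_snippet: str) -> str:
    """Return a suggested patch for a scanner finding; a human must review it."""
    prompt = (
        f"A static-analysis tool reported: {finding['rule']} - {finding['message']}\n"
        f"Vulnerable code:\n{source_snippet}\n"
        "Propose a minimal fixed version of this code. Reply with code only."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

finding = {"rule": "python.sql-injection", "message": "f-string used in SQL query"}
snippet = 'db.execute(f"SELECT * FROM users WHERE id = {user_id}")'
print(propose_remediation(finding, snippet))
```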

Here comes even bigger code

All this code-related effort matters more than ever because GenAI promises to make the challenge of managing “big code” even harder. While the technology can help us build more secure software, it’s also going to unleash a tsunami of new code by making it even easier to create programs, some of which will be generated solely by AI tools themselves. Managing larger codebases, with their associated complexity, will complicate cyber defenders’ task. Dealing with a GenAI-created wave of phishing emails and other content will also be a significant challenge in a world where AI agents and personal assistants are plentiful, making it even harder to distinguish suspect AI-generated messages from legitimate ones created by trusted AI agents.

There’s plenty of opportunity—and an urgent need—for new and original solutions to these and other issues related to GenAI and cyber. IQT anticipates making new investments in the key innovators in the field, and if you’re a startup that’s active there, we’d love to hear from you. We’re also looking closely at themes including the augmentation and automation of security tasks, and the use of GenAI in cyber data analysis and synthesis, which will be the subjects of the next blog posts in this series. Check back in with us soon!
