Shocking to no one: Artificial Intelligence (AI) was a huge topic at Black Hat USA 2023, but what did we learn about it? With no shortage of talks on the subject, there were plenty of insights to take in. We asked highly skilled Software Security Researchers who attended both Black Hat and DEFCON to weigh in on the moments that stood out, particularly those related to AI. Here’s what we found.
AI is a Double-edged Sword for Security
AI presents society with a double-edged sword (especially when it comes to cybersecurity). John Simpson, Senior Security Researcher, explains: “AI is clearly the hot topic; at both Black Hat and DEFCON there was a lot of emphasis on the dangers but also significant talk about its potential usefulness.”
The intricate interplay between AI’s benefits and risks underscores the complexity of our rapidly evolving digital age. On one hand, attackers are using AI to enhance their exploit capabilities. On the other, defenders can strengthen their posture with AI-powered tools like Veracode Fix, which uses automation to quickly remediate insecure code.
To go deeper into how AI is being used to enhance and simplify software security, check out a recent blog in which a developer generates a function with ChatGPT, finds the flaw in the resulting code, and quickly secures it with Veracode Fix, all without manually writing any code.
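For a concrete sense of the pattern, here’s a minimal, hypothetical sketch (not taken from that blog) of the kind of flaw AI-generated code can introduce, a SQL injection, alongside the parameterized rewrite an automated remediation tool would typically produce:

```python
import sqlite3

def get_user_insecure(conn: sqlite3.Connection, username: str):
    # Flawed pattern an assistant might generate: untrusted input is
    # interpolated directly into the SQL statement (SQL injection, CWE-89).
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

def get_user_fixed(conn: sqlite3.Connection, username: str):
    # Remediated version: a parameterized query keeps the input as data;
    # this is the style of change automated remediation typically applies.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchone()
```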
Niching Down AI: Risks and Advantages of LLMs (Large Language Models)
Large language models (LLMs) are advanced AI systems created to understand and generate human-like text. They’re built upon deep learning techniques, particularly the Transformer architecture, which you can learn more about in our whitepaper: Artificial Intelligence (AI) and the Future of Application Security Testing.
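At the core of the Transformer is scaled dot-product attention. The short NumPy sketch below shows just that core computation; it’s a simplification, since production LLMs add learned projections, multiple attention heads, and masking:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                # weighted mix of the value vectors

# Tiny example: 3 tokens with 4-dimensional embeddings
rng = np.random.default_rng(0)
Q = K = V = rng.standard_normal((3, 4))
print(scaled_dot_product_attention(Q, K, V))
```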
Principal Security Researcher Mateusz Krzeszowiec highlighted one standout talk on LLMs, “Shall we play a game? Just because a Large Language Model speaks like a human, doesn't mean it can reason like one,” by Craig Martell, the Department of Defense’s Chief Digital and AI Officer.
Mat remarked, “This was a good reminder that LLMs are neither magical nor have any reasoning apparatus built in... Beyond that, the Department of Defense announced a Generative AI Task Force, known as Task Force Lima, and Craig invited the DEFCON community to reach out with suggestions and comments.”
Additionally, John Simpson shared: “Rich Harang, Principal Security Architect at NVIDIA, has a background in biotech and mentioned on a panel that ‘LLM-like’ models are making big waves in the medical sciences for things like protein folding.”
We’re Moving from AI-based APIs to AI-based Agents
Just as quickly as ChatGPT blew the world's collective mind, the way AI is being utilized and integrated is shifting. ChatGPT is one of many AI-based Application Programming Interfaces (APIs), but based on what we heard at Black Hat, the general trend is moving quickly from AI-based APIs to AI-based agents.
What is an AI-based agent? John explains, “AI-based agents that have the ability to interact with the components of their environment. For example, an agent that can download and configure software, get it running, interact with the software and other OS components.”
Instead of just accessing specific AI functions through APIs, AI agents can perform more complex tasks by combining multiple AI capabilities and interacting with users or systems in a more human-like manner.
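To make the distinction concrete, here’s a minimal, hypothetical agent loop; `call_llm` and the tool table are stand-ins for whatever model API and integrations a real agent would wire in:

```python
import subprocess

def call_llm(history):
    # Hypothetical stand-in for a real model API call. A real agent would
    # send `history` to an LLM and parse its next requested action; this
    # stub just scripts one tool call followed by a final answer.
    if not any(msg["role"] == "tool" for msg in history):
        return {"tool": "run_shell", "input": "uname -a"}
    return {"done": True, "answer": history[-1]["content"]}

# Tools give the model the ability to act on its environment; this is
# what separates an agent from a plain question-and-answer API call.
TOOLS = {
    "run_shell": lambda cmd: subprocess.run(
        cmd, shell=True, capture_output=True, text=True
    ).stdout,
}

def run_agent(goal: str, max_steps: int = 10):
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        action = call_llm(history)
        if action.get("done"):
            return action["answer"]
        result = TOOLS[action["tool"]](action["input"])
        history.append({"role": "tool", "content": result})

print(run_agent("What OS is this machine running?"))
```

Note that the `run_shell` tool hands the model a way to act directly on the host, not just answer questions.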
Why does it matter? There’s a colossal cybersecurity impact that comes from such a shift. This leads us to our next insight.
Government Agencies are Paying Attention to Cyber
The Department of Defense isn’t the only government agency focused on cybersecurity concerns. The White House is putting significantly more resources into solving security issues both nationwide and with allies.
John explains: “This is the first time governments seem to be trying to stay ahead of things from a regulatory perspective. Plus, CISA’s close partnership with Ukraine is providing a lot of learning experiences for both sides.”
Additionally, the acting National Cyber Director, Kemba Walden, announced in her keynote that the Office of the National Cyber Director published a Request for Information on open-source software security.
A recent article on the announcement reports: “Veracode co-founder Chris Wysopal, a longtime cybersecurity expert who contributed to the National Cybersecurity Strategy, told Recorded Future News that the emergence of artificial intelligence has made it imperative that the federal government act quickly, as the time to fix security issues will need to fall precipitously to keep up with the increase in automated attacks.”
A Lesson from the TETRA Vulnerabilities Exposure
We can’t talk about government interest in cybersecurity without bringing up the recent TETRA zero-day vulnerabilities exposure.
On July 25, 2023, The Hacker News published: “A set of five security vulnerabilities have been disclosed in the Terrestrial Trunked Radio (TETRA) standard for radio communication used widely by government entities and critical infrastructure sectors, including what's believed to be an intentional backdoor that could have potentially exposed sensitive information.”
These zero-day vulnerabilities were a point of discussion at the event, and Mat explained the impact: “The lesson here is that we should be using open standards, especially in critical infrastructure.”
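Some quick arithmetic shows why the reported backdoor matters: the researchers described the weakened TEA1 cipher as reducing an 80-bit key to an effective 32 bits, and a 32-bit keyspace is trivially searchable (the trial rate below is an illustrative assumption, not a benchmark):

```python
# Back-of-the-envelope: why a 32-bit effective key (as researchers
# reported for the weakened TEA1 cipher) offers no real protection.
keys_80_bit = 2 ** 80
keys_32_bit = 2 ** 32
rate = 10 ** 9  # assume one billion key trials per second on commodity hardware

print(f"32-bit keyspace: ~{keys_32_bit / rate:.1f} seconds to exhaust")
print(f"80-bit keyspace: ~{keys_80_bit / rate / (3600 * 24 * 365):.2e} years")
```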
Where Do These Insights Leave Us?
The insights gained from this year’s conference collectively leave us at a pivotal juncture of opportunity and responsibility. They point to the dual nature of AI’s influence on cybersecurity; as AI continues to advance at an unprecedented pace, both attackers and defenders find themselves at the forefront of innovation. This makes the balance between innovation and safeguarding our digital assets a more critical and delicate dance than ever before.
To learn from these Security Researchers, check out the latest blog from Mateusz Krzeszowiec on CSRF vulnerability remediation, or the latest blog from John Simpson on secure coding with template engines.