This week, Collision kicked off its annual conference (held virtually this year), bringing together creatives, builders, influencers, innovators, and other great minds to cover some of the hottest topics in business and technology. Known as ‘America’s fastest-growing tech conference,’ Collision featured over 450 speakers and more than 100 hours of content to consume across the three-day event.
With a sizable group of 40,000-plus attendees to entertain, the team behind Collision came prepared with a packed schedule. The lineup included speakers from some heavy-hitting brands – Amazon, Twitter, TikTok, and PayPal to name a few – as well as our very own Chris Wysopal representing the application security (AppSec) space for Veracode!
AI, AI…Oh!
Chris first moderated Collision’s AI, AI… Oh!: AI, Security and Privacy in Online Society session, leading a roundtable of talent from across security and tech. For this roundtable, Chris was joined by Jeff Moss of DEF CON, Jordan Fisher of Standard Cognition, Katie Moussouris of Luta Security, Alexander Vindman of Lawfare, Gary Harbison of Bayer, and Window Snyder of Thistle Technologies. The topic at hand? Just how significant the impacts of AI and machine learning are on every industry today, and the risks this technology can bring if left unchecked.
The roundtable dug into important issues like allocating organizational resources to security, privacy, and transparency to monitor AI, as well as what can go wrong when companies don’t quite get it right. Chris kicked off the conversation by asking how we can get technology to tell us exactly what algorithms are doing, so that we know when something is going awry, and who is to blame when it does. Gary Harbison brought up the example of self-driving cars, which take data from their environment and make decisions in the moment. If the algorithm ever makes a decision that pits the safety of the driver against a pedestrian, who is to blame, and what are the ramifications? Gary added that we as an industry need to think this through sooner rather than later.
Another risk the group flagged: when AI is used to track consumer behavior, the tool can quickly become an invasion of privacy. Window Snyder noted that implementing security (and being able to measure it) is a critical first step. She posed the question of how we will measure efficacy and improvements in security around AI technologies so that we can see what is actually providing value to consumers. “Consumers will feel understandably uncomfortable knowing that a brand is tracking what they do inside of a store, and they may feel like they’re being watched everywhere they go,” she said.
Window went on to explain that, if we want to build trust between technology companies and the people they observe, we need to make sure that we’re creating clear business requirements and metrics, reducing the scope and time for tracking, and doing as much as possible to reduce the granularity of the data that is collected. Another important step, she said, is that when you build a mechanism to collect data, you also need to build a mechanism to remove it after extracting as much granularity as possible. Doing so tells consumers that the technology was built with their privacy in mind.
There’s an economic and geopolitical aspect to the risks of AI technology too, as pointed out by Alexander Vindman. “We’ve been talking about this from a standpoint of protecting privacy, but in reality I think this takes two tracks: our side of the world where we want to protect privacy, and then the other side of the world looking to control this technology.” Alex noted that this issue carries national security risk, and that even if we address it internally in the democratic world, we will face challenges more broadly in the future.
Read part two of this Collision recap and get the details from Chris's second session, Secure from the Top Down.