14 Questions Are All You Need

Carson Zimmerman

How is our SOC doing, really? It’s easy to become lost in compliance and regulatory requirements soup. There are plenty of respected consultancies that will perform multi-week SOC assessments. A quick Internet search yields several SOC capability maturity models. And yet, a one-hour conversation with a SOC veteran will yield a gut sense of how a SOC is doing on its journey and where investments are needed. What if SOCs had a lighter weight method of identifies key strengths and weaknesses: one can be done in an afternoon or more than once a year?

In this talk, Carson Zimmerman will challenge your thinking about how to measure and drive SOC effectiveness. He will present 14 key indicators of performance, that survey not only how the SOC is doing at a given point of time, but also how well growth and improvement are baked into the SOC culture.

Carson Zimmerman is a veteran cybersecurity operations specialist, author, and speaker. In his current role at Microsoft, Carson leads an investigations team responsible for defending the M365 platform and ecosystem. In his previous role, at The MITRE Corporation, Carson specialized in cybersecurity operations center (CSOC) architecture and CSOC consulting. His experiences over 20 years as a CSOC analyst and engineer led Carson to author Ten Strategies of a World-Class Cybersecurity Operations Center and co-authored its second edition, Eleven Strategies.

“About Time” to Peak Into CN eCrime Ecosystem

Mao Sui

Chinese-speaking cyber criminals continue to pose a threat to organizations globally. It is critical for security specialists to be prepared against these emerging threats by understanding their ecosystem, motives, and TTPs.

This presentation will highlight what industries and sectors are heavily targeted by Chinese-speaking threat actors and explain how these threat actors operate under the central government’s strict surveillance and censorship. The talk will then take a deeper look at how these cyber criminals target individual countries with unique TTPs, such as choices of products, communication platforms, and advertisement channels. The presentation will also shed light on how cash-out methods have evolved over time as well. Information and data analysis included in this presentation will help cybersecurity specialists predict future trends and protect their organizations against threats originating from Chinese-speaking cyber criminals.

Mao Sui is a Senior Security Researcher at CrowdStrike. She leverages her subject matter expertise in data collection and analysis to provide valuable insight on emerging threats from the APAC region.

AI Enhanced Hacks: Model in the Middle

Ryan Ashley and Ari Chadda

It is inevitable that Hackers will collectively embrace and abuse AI systems. After all, that is the mindset that differentiates hackers: the desire and capacity to take a piece of technology and twist it to fit our needs. There has been a lot of talk about generative models in the context of social engineering and operational security, but comparatively little discussion of other ways that AI systems might be used as part of a hack. This talk will discuss a recent engagement in which we had to attack a system that interfaced with, among other things, a security camera. We will explain how we implemented a technique that we call a Model in the Middle (ModITM) attack, in which an ML model was inserted as a malicious interloper between a pair of services in order to degrade system integrity. Additionally, we will discuss how using an ML model in an attack changes the design parameters of both the model and the attack infrastructure. Then, we will generalize the ModITM attack and explain how it can be used in different attack scenarios. To the best of our knowledge, this attack is the only publicly available example of using a machine learning model as a component of an attack chain.

Ryan Ashley (@birdwainer) and Ari Chadda are engineers with IQT Labs. For the past 3 years, their work has centered on exploring ways to ensure the fairness, ethics, and security of machine learning models and systems.

Attacking Web Applications With JS-Tap

Drew Kirkpatrick (hoodoer)

JS-Tap is a tool that provides a generic JavaScript payload that can be used as either an XSS payload or post exploitation implant. It is intended to assist red teamers targeting web applications. JS-Tap heavily instruments the client-side of the application in the user’s browser, extracting information useful to red teamers. Because the payload focuses on the client-side, no prior knowledge of the application is needed and no requests are sent by the payload to the application server.

When used as an XSS payload JS-Tap uses a novel persistence technique called an iframe trap that keeps the payload running for an extended period of time.

JS-Tap captures sensitive data as users interact with the application including screenshots of pages visited and inputs entered by the user such as login credentials. Cookies and local/session storage are scraped, potentially disclosing sensitive session data. HTML content is also captured providing the application insight needed to develop targeted XSS payloads for future attacks. The payload is also able to intercept JavaScript based network communications of the instrumented application.

The exfiltrated data is presented in the JS-Tap portal for easy analysis.

Drew Kirkpatrick (@hoodoer) is a Senior Security Consultant at TrustedSec and has 25 years of experience designing and building complex systems, including application security, network policy management, machine learning, and transit and aerospace systems. These days, he works to improve Information Security by applying penetration testing and computer science to assess the security posture of TrustedSec clients. Before joining TrustedSec, he was a Security Researcher at NopSec and Secure Decisions as well as a Senior Computer Scientist for the U.S. Navy.

Back (45 Years?) in the USSR: Exploring the Russian Elbrus Architecture (With a 25-year-old Exploit!)


Elbrus is a 45 year old Russian CPU family currently targeted at the Russian government and military market. This talk will use a 25 year old C++ virtual function pointer exploit technique as the basis for exploring Elbrus’ instruction set architecture, which contains some unique features such as very long instruction words (VLIWs) and register windowing.

evm (@evm_sec) is a reverse engineer and member of the Principal Staff at the Johns Hopkins University Applied Physics Laboratory (JHU/APL). He started out reversing in the Windows internals & trusted computing world, and now spends more time in embedded devices. His research interest is in accelerating software RE with architecture-agnostic methods. At APL he is the editor-in-chief of the internal ‘zine devoted to RE and vulnerability research.

Backtrace in Time: Revealing Attackers’ Sleep Patterns and Days Off in RDP Brute-Force Attacks with Calendar Heatmaps

Andréanne Bergeron

Brute-force authentication attacks on RDP, often automated and shared among malicious actors, escalate in danger when coupled with human motivation, transitioning from opportunistic to targeted attacks.

Our team deployed high-interaction honeypots and analyzed, for this presentation, 3.4 million login attempts over three months. The objective was to observe the different strategies of brute-forcing attack by malicious actors who try to compromise our systems. Human behaviors were revealed through the data, so we developed a visualization tool to help present the intricating results. Our distinctive approach involves a calendar heatmap, visually unraveling human behaviors behind brute-force attacks over time. This calendar analysis reveals four key behaviors, illustrating diverse threat actors and strategies.

Identifying and differentiating human behavior is crucial for crafting robust protection measures and the criminological lens of this presentation emphasizes the human element. The bigger objective of our research program is to unravel attacker behaviors to disrupt practices and escalate the cost of malicious endeavors.

Andréanne Bergeron, Ph.D., (@AndreanBergeron) is a cybersecurity researcher at GoSecure, specializing in online attackers’ behaviors. Acting as the team’s social and data scientist, her expertise delves into the intersection of criminology and cybersecurity. In addition, Andréanne holds an esteemed position as an affiliated professor in the Department of Criminology of Montreal University, bridging academia and industry. Her commitment to provide a unique perspective on the human element behind digital threats reflects a holistic approach, enriched by theoretical depth and real-world applicability. An experienced presenter, Andréanne has showcased her research at prestigious conferences such as Black Hat USA, DEF CON, and BSides Montreal.

Bad Romance: The TTPs of “pig butchering” scammers

Sean Gallagher

Over the past 18 months, sha zhu pan (“pig butchering plate”) scams have become a global phenomenon, combining tried and true romance lures and heavily scripted social engineering with technically sophisticated fraudulent mobile trading and decentralized finance applications. The rings behind these scams are highly organized, multinational, and frequently use coerced workers to act as “keyboarders” to keep victims engaged.

This presentation (depending on time requirements) will delve into the evolving tools, techniques, and practices used by sha zhu pan rings–from common victim engagement techniques to communications, use of generative AI, and basic technical infrastructure. It will cover patterns in victimology, enagement with live scams, and take-down efforts in cooperation with cryptocurrency exchanges, hosting providers, and other stakeholders.

Sean Gallagher (@thepacketrat) is a Principal Threat Researcher for Sophos X-Ops. He is IT Security Editor Emeritus of Ars Technica, a 30-year veteran of infosec and tech journalism, and a
former US Navy officer. He lives and works in Baltimore.

BBOT: The Dangers and Rewards of Building a Recursive Internet Scanner

TheTechromancer (Joel Moore)

2021 was the year I fell in love with Spiderfoot. Spiderfoot was remarkable because it could find obscure and well-hidden gems that no other tools could. Guriosity grew into obsession as I studied the code and began to comprehend the gravity of Spiderfoot’s biggest feature: Recursion. But not everything was rainbows and butterflies; and soon Spiderfoot’s subtle flaws would drive the creation of something completely new.

Come with me on my journey from a Spiderfoot contributor to the creator of a new and powerful recursive scanner: BBOT. Explore the dangers of recursion, relive some of the insidious bugs (both with Spiderfoot and early versions of BBOT) that caused recursion to get the better of us, and discover the ideas and methods I used to tame them. Most importantly, see how it was all worth it in the end! Modular, extensible, and easily-importable as a python library, BBOT was designed from the ground up to be developer-friendly. See how you can harness its recursive power to not only transform your OSINT process, but own and pwn your way to victory on pentests!

TheTechromancer (@thetechr0mancer) is a hacker at Black Lantern Security. He loves coding in Python and is the creator of several tools, including TrevorSpray, ManSpider, and BBOT.

Blue2thprinting (blue-[tooth)-printing]: Answering the question of ‘WTF am I even looking at?!’

Xeno Kovah

If one wants to know (for attack or defense) whether a Bluetooth (BT) device is vulnerable to unauthenticated remote over-the-air exploits, one needs to be able to query what firmware or OS the target is running. Unfortunately there is no universally-available method to get this information across all BT devices. There is also no past work that attempts to rigorously obtain this information. Therefore, we have created the “Blue2thprint” project to begin to collect “toothprints” (2thprints) of BT devices.

In this talk, we’ll show why it is necessary to send custom packets and packet sequences in order to build more robust 2thprints. These custom packets and sequences cannot be created by using existing BT software interfaces. They require utilizing custom firmware on the packet-sending device.

This research will present a new state-of-the-art when it comes to exposing the known, the unknown, and the under-known of BT device identification. And it will show what work remains, before we can approach 100% identification for any random device that shows up in a BT scan.

Prior to working full time on OpenSecurityTraining2, Xeno Kovah (@XenoKovah) worked at Apple designing architectural support for firmware security; and code auditing firmware security implementations. A lot of what he did revolved around adding secure boot support to the main and peripheral processors. He led the efforts to bring secure boot to Macs, first with T2-based Macs, and then with the massive architectural change of Apple Silicon Macs. Once the M1 Macs shipped, he left Apple to pursue the project he felt would be more impactful: creating free deep-technical online training material and growing the newly created OpenSecurityTraining 501(c)(3) nonprofit.

Breaking HTTP Servers, Proxies, and Load Balancers Using the HTTP Garden

Ben Kallus and Prashant Anantharaman

This talk describes the HTTP Garden, a system that compares how different HTTP servers interpret identical requests. We present a tool with over 20 web servers and 10 popular proxy services. Our system also accounts for minor transformations that servers apply to these requests as part of their internal normalization mechanisms. We deploy our novel, coverage-guided, grammar-based, output-driven differential fuzzer that has successfully identified and promptly reported over 80 parsing bugs in popular HTTP servers.

In addition, the HTTP Garden also hosts a set of CDNs, Load Balancers, and Proxies. We also identify how we can combine these services with bugs in backend servers to identify exploit chains. We have identified and reported exploit chains in popular services, such as Google CDN, Akamai, Node.js, LiteSpeed, Apache Traffic Server, Puma, HAProxy, H2O, OpenBSD httpd/relayd, and Gunicorn. These bugs range from trivial parsing bugs and request smuggling bugs to more impactful exploits, such as cache poisoning, access control bypasses, and stream desynchronization.

Ben Kallus is a PhD student at Dartmouth College, where he works with Prof. Sean Smith on differential fuzzing of systems.

Prashant Anantharaman (@parsingpunisher) is a Senior Security Researcher at Narf Industries, focusing primarily on building security solutions for industrial control systems.

Building Canaries with ELK and ElastAlert

Andrew Januszak

Canaries (honeytokens, canary tokens, canary files, canary accounts, etc.) are relatively low-effort high-gain defensive tools. My team wanted to use our existing infrastructure and tooling to implement canaries in various places across different services. We decided to build our own canaries, leveraging our ELK stack for data collection and our ElastAlert instance for alert tuning and generating notifications across different communication channels. This solution has been successful in detecting phishing/credential harvesting attacks and provides a lot of flexibility for how, when, and where we implement canaries as well as receive alerts. While there was initially some internal skepticism about their utility that skepticism quickly turned into positive reinforcement once we began to see their benefits.

Andy Januszak is a Senior Systems Engineer at Lehigh University and also serves as a Pentester in REN-ISAC’s Peer Assessment Program.

Cache Crashers: Exploiting and Detecting Vulnerabilities in Memcached

Bryan Alexander

Memcached is an open source high-performance caching store used by many large and small organizations. It’s very fast, simple, full featured, open sourced, and well supported by both enterprise and the open source community. Stripe is adopting it to power parts of our internal caching strategy, and as a result, we took a look at the security of the caching store itself, its protocol, and its proxying capabilities.

During our analysis, we identified a number of critical memory corruption vulnerabilities that could result in the complete compromise of a remote Memcached proxy node. We developed a proof of concept for one of the discovered vulnerabilities and built generic signatures that can be used to identify exploit attempts. We’re releasing a public blogpost detailing the discovery, details on our fuzzing, a working proof of concept, and all signatures for detecting exploitation.

Bryan Alexander (@dronesec) is a security engineer at Stripe, working on penetration testing and attacker engineering. His interests include vulnerability research, reverse engineering, offensive tooling, and bug bashing, and has reported dozens of vulnerabilities across a wide spectrum of industries and products.

CISO Risk Dumpster Fires: SEC Turns Up the Heat

Liz Wharton, Danette Edwards, and Cyndi Gula

It’s dangerous out there for CISOs, security teams, and the board of directors as they navigate incident responses and risks in light of the new SEC reporting rules. The year 2023 brought a series of important milestones: significant SEC mid-breach materiality disclosures to the public, the first SEC fraud suit against a CISO, and the first known threat actor snitching to the SEC. Join an in-depth exploration of these developments with a former SEC lawyer, former in-house counsel and IR incident lead, and a company board member. Drawing insights from these proverbial dumpster fires as they continue to unfold, the panel aims to shed light on the unique challenges posed by these changing dynamics. Our session will break down how the cyber liability stakes have been heightened, impacting even the junior level analyst.

Liz Wharton (Founder, Silver Key Strategies) (@LawyerLiz) is an executive and counsel for cybersecurity startups and served on Atlanta’s ransomware incident response team while Sr. Assistant City Attorney.

Danette Edwards spent more than a decade as an SEC Enforcement prosecutor and recently returned to the private sector as the Co-Chair of Katten Muchin Rosenman’s Securities Enforcement Defense Practice.

Cyndi Gula (Managing Partner, Gula Tech Adventures) (@cyndi_gula) is a startup operations expert advising and serving on the Board of Directors for numerous cyber focused companies. She previously started and ran operations for Tenable Network Security, which IPO’d in 2018, and Network Security Wizards.

The Cosmic Turtle of Code: It’s graphs all the way down

Mark Griffin

Are you a flat coder? What if I told you that code is actually made up of many connections, invisible to the naked eye? Sounds ridiculous, but if all you ever look at is linear source code… you might be a flat coder.

Graph visualizations, on the other hand, are intuitive visual representations that match the true structure of code. If you’re really experienced in reading or reversing code, you’re probably so used to traversing forward-and-back references that you unconsciously create similar mental models… if you aren’t used to this, you probably just want to start with the graph.

This talk will explore the origins of graph-based code representations and how these have evolved to help modern reverse-engineering and source code understanding workflows.

We’ll go over use cases and techniques to show how graphs can speed up our ability to find code that matters, determine reachability of bugs, assess fuzzing effectiveness, and understand bug fixes.

In the end we’ll talk about customizing graphs and visualizing time, as well as how to avoid the risks of useless eye candy and overwhelming viewers. Join us in debunking the notions of flat code and embracing the graphical revolution of a connected future.

Mark Griffin (@seeinglogic) is a security researcher with over a decade of experience, specializing in rapidly understanding code, finding the bugs in it, and helping fixing them. He previously developed fuzzing technology at a startup, released multiple open-source analysis projects and visualizations, and also worked as a cybersecurity consultant. These days he’s focused on building tools and spreading knowledge to help the world see and understand code better.

DNS is Still Lame: Why it’s a problem and what we can do about it

Ian Foster

DNS is one of the foundational building blocks of the internet. Everything from TLS to your IoT toaster relies upon it, but there are some critical and common problems that can put your names at risk of takeover. Nobody is immune, from the top tech companies to small businesses. In this talk, I’ll cover some of the common ways, and more uncommon ways domain names and name servers can be incorrectly configured to allow full or partial takeovers through lame delegations.

Even with a perfect setup, you are still at the mercy of your top level domain’s nameserver’s security, which in many cases may not be what you expect. I’ll go over findings gathered by monitoring and scanning all of the root TLD’s name servers for over a year and the fixes (or lack of) deployed, and how this impacts domain owners and more importantly, every internet user.

I’ll also announce and demo a new open source tool that will automate much of the scanning and identification of these problems so that businesses, sysadmins, and domain owners can identify these problems before attackers do.

Ian Foster plays an offensive security engineer by day and a DNS researcher and historian by night. Constantly monitoring changes in DNS for new research opportunities, Ian runs dns.coffee, a historical database of the DNS root zones and name servers providing data to researchers for the past 13 years.

Driving Forward in Android Drivers: Exploring the future of Android kernel hacking

Seth Jenkins

Android kernel exploitation has increasingly utilized bugs in hardware-specific drivers to achieve local privilege escalation. GPU (Graphics Processing Unit) drivers represent a substantial portion of this attack surface, however, numerous other Android chipset component drivers are also viable targets for an attacker. These drivers represent a relatively untapped source of kernel privilege escalation bugs. This talk will take a deeper look at how to perform research into these drivers and will discuss two drivers where serious security issues were discovered. This will culminate in a discussion on the development of an exploit for one of these issues. It will also demonstrate this exploit, achieving kernel arbitrary read/write and root access on a stock Android phone.

Seth Jenkins (@__sethJenkins) is a security researcher at Google Project Zero. He primarily focuses on Linux kernel and Android zero-day research but has dabbled in a variety of architectures, operating systems, and software. He particularly enjoys innovating novel strategies for exploit development.

Exploitable Security Architecture Mistakes We Just Keep Making


There are some interesting Security Architecture mistakes that I’m repeatedly seeing SaaS companies make that you may not be aware of. They are consistently critical severity, satisfying to exploit, and totally preventable once you know to look for them. Most all stem from keeping compute costs low by using root to isolate untrusted code from sensitive data. Sound like a terrible idea?! Totally!! But that’s how they got successful enough to hire you! Now, let’s fix it up…

William works in AppSec for a SaaS where he frequently pentests and enjoys collaborating with engineers on the fixes. He is passionate about defense evasion, tool automation, and hardening defenses.

Ewe Cant Trusst Yore Eers: An Overview of Homophone Attacks

Aaron Brown

Approximately 284 million people on Earth have visual impairments. Many use tools, such as screen readers, as an aid to understanding the information presented by computers. The tools these users rely on, however, have not gotten much attention from the security research community. In this talk, I will present a class of sensory confusion attacks using sound-alike symbols and words that can be leveraged by attackers to deceive users. These homophone attacks are analogous to homoglyph attacks, but due to the different senses and tooling involved cannot be solved with the same mitigations.

Aaron Brown (@TheTarquin) is a hacker, security professional, and philosophy school dropout. He started hacking at a young age because he liked maximizing the absurdity content of systems. He also studied philosophy (especially the phenomenology of technology) and seeks to understand the ways in which our systems help the human brain lie to itself. While he has a strong background in security and other hacker-adjacent fields, he is happiest when he’s breaking shit just to see what happens.

FedRAMP is Broken (And Here’s How to Fix It)

Shea Nangle and Wendy Knox Everette

Bring your Shmoo Balls, we have some juicy opinions on how the federal government should vet cloud services. After going through the FedRAMP authorization process with multiple companies, we have grey hair, scars, and some things to say.

We’ll go through some systemic problems and flag some of those weird controls that have always bugged us, and then when we’ve finished airing our grievances we’ll dig into the tough stuff: what can possibly change? Should it change? Will r5 ever be fully adopted? Should FedRAMP continue to exist?

Shea Nangle is a Director at a cybersecurity consultancy. He has been involved with FedRAMP (as a consultant and working for cloud service providers) since 2014. In 2023, he was recruited for the position of FedRAMP Director but chose to stay in private industry.

Wendy Knox Everette is a software developer & hacker lawyer who is currently the CISO at a healthcare data analytics firm. She has co-authored a peer reviewed article on FedRAMP in IEEE Security & Privacy, as well as another reviewing other security issues caused by control frameworks in NDSS.

Fuzzing at Mach Speed: Uncovering IPC Vulnerabilities on MacOS

Dillon Franke

This research presents an in-depth investigation of MacOS Inter-Process Communication (IPC) security, with a focus on Mach message handlers. It explores how Mach message handlers are utilized to execute privileged RPC-like functions and how this introduces vectors for sandbox escapes and privilege escalations. This involves a detailed examination of MacOS internals, particularly the calling and processing of Mach messages, their data formats, and statefulness.

The core of the study is the development and application of a custom fuzzing harness targeting these identified IPC function handlers. The fuzzing process, aimed at inducing crashes indicative of memory corruption vulnerabilities, is discussed in detail. Several generated crashes will be discussed, one of which may be exploitable to obtain remote code execution. The research culminates in the open-sourcing of a bespoke Mach message corpus generation script and the custom fuzzing harness, contributing to the broader cybersecurity community and laying groundwork for future exploration in this area.

Dillon Franke (@dillon_franke) is a seasoned security researcher with a track record of uncovering high-impact vulnerabilities in complex systems. Throughout his career, Dillon has focused on identifying and exploiting weaknesses in widely used products, working closely with organizations across various industries to improve their security posture and protect against emerging threats. His work has been featured in numerous industry publications and news outlets, and he has spoken at major security conferences around the world, including Black Hat, TROOPERS, Nullcon, and the Qualcomm Product Security Summit. In his current role as a Senior Security Consultant at Google/Mandiant, Dillon continues to perform cutting edge application security research and release open source tools. He is passionate about sharing his knowledge with others and inspiring the next generation of security professionals to take up the mantle and continue the fight for a safer, more secure online world.

FuzzLLM: Fuzzing Large Language Models to Discover Jailbreak Vulnerabilities

Ian G. Harris and Marcel Carlsson

Jailbreak vulnerabilities in Large Language Models (LLMs), which exploit meticulously crafted prompts to elicit content that violates service guidelines, have captured the attention of research communities. While model owners can defend against individual jailbreak prompts through safety training strategies, this relatively passive approach struggles to handle the broader category of similar jailbreaks.

We have developed a fuzzer called FuzzLLM which generates prompts to trigger jailbreak vulnerabilities in an LLM. We use a set of templates to capture the structural integrity of a prompt and isolate key features of a jailbreak class as a set of constraints. A set of base templates are defined based on previous work and the templates are combined and modified in order to generate a wide range of prompt variations which still contain essential jailbreak features. Code and data are publicly available on github.

Ian G. Harris is Professor of Computer Science at the University of California Irvine. His research interests include the design of secure hardware/software systems, and the application of Natural Language Processing (NLP) to security and design.

Marcel Carlsson is a principal security consultant and researcher at Lootcore. He performs scenario-based threat assessments and simulations, consulting, training, and research for international businesses and organizations.

Marcel and Ian have presented at security and hacking conferences all around the world such as POC, Shakacon, Confidence, Black Hat, DEF CON, and BSidesLV.

Going Meta–Pulling Info from Encrypted Radios

Luke Berndt

Even though more and more public safety radio systems are encrypting their audio, it is still possible to pull useful information from the metadata. This talk will walk through the open source Trunk Recorder tool and the types of metadata it can collect.

Luke Berndt (@LukeBerndt) is a tinkerer who is not particularly good at any one thing but is curious enough to try everything.

Groovy X-Ray Reverse Engineering like it’s the 70s

Aleksandar Nikolic and Travis Goodspeed

A cabinet x-ray machine is a handy tool for any reverse engineer of electronics, but the price tag keeps convenient x-ray photography beyond the reach of most hobbyists, particularly when the digital sensor works. In this lecture, we’ll show two film alternatives to digital photography, generating x-ray pictures first with polaroid film in the absence of wet chemistry and second with a proper dark room under a red light. We’ll also explain how film can be superior to a digital sensor, offering better resolution and dynamic range at a much larger surface size.

Aleksandar Nikolic is mostly focused on software reverse engineering and vulnerability research but easily gets nerd-snipped into various side-projects. A healthy interest in hobby electronics, obsolete hardware, and photography often leads him to unusual research topics.

Travis Goodspeed (@travisgoodspeed) is a reverse engineer from East Tennessee, where he drives old Studebakers and knows all the good dogs by name. His past projects include a replacement module for calculator watches, a web API for identifying functions in Thumb2 firmware, and the International Journal of PoC||GTFO.

Hack the Planet Gently With a Code of Practice

Leonard Bailey, Harley Geiger, Katie Moussouris, Casey Ellis, and Jen Ellis (moderator)

In May 2022, the Department of Justice revised its guidance for charging violations of the US’ main anti-hacking law, the Computer Fraud and Abuse Act (CFAA). Under the new guidance, federal prosecutors were directed that “good-faith security research” should not be charged as a violation of the CFAA. This is a huge step forward in reducing legal risk for security researchers, but it raises the question of what good-faith security research is and how prosecutors can spot it. The claim of research must not be used as a get-out-of-jail-free card for all manner of criminal behavior as this would ultimately undermine the credibility of actual research, risk unwarranted violations of privacy and potential harm to property, and expose researchers to more risk.

This panel will bring together legal experts and security research leaders to discuss whether the time has come to articulate a Code of Practice for good-faith security research. We will investigate what one might include, and what challenges might exist in the creation, adoption, and utilization of a code. We will also discuss the broader legal landscape for researchers, both in the US and internationally, as well as where lines exist between security research, hacktivism, and hack back.

Leonard Bailey is Special Counsel for National Security and Head of the Cybersecurity Unit in the U.S. Department of Justice’s Computer Crime and Intellectual Property Section.

As Counsel at Venable, Harley Geiger (@HarleyGeiger) advises clients on security research and vulnerability disclosure and coordinates the Hacking Policy Council.

Katie Moussouris is coauthor of ISO standards on vulnerability disclosure and handling, creator of Microsoft and the Pentagon’s bug bounty programs, and founder and CEO of Luta Security.

Casey Ellis is the founder, CTO, and Chair of Bugcrowd, and the driving force behind Disclose.io.

Jen Ellis (@infosecjen) collaborates on all of the above.

Hacking Network APIs

Dan Nagle

A foundational component of communication between devices is the TCP/IP network stack. Web browsing, streaming video, secure control, and innumerable other applications are built upon this technology. This 2-part demonstration will use open source tools to focus on the data transfer components UDP and TCP while targeting an IoT device. Part 1 is reverse-engineering the network commands to better understand them and then mimic it (a common attack strategy). Network protocols will be discussed during this process. Armed with our new knowledge and skills, part 2 will take them a step further to discover and analyze malware present on the IoT device. This presentation is light on slides and heavy on demos.

Dan Nagle (@NagleCode) is a Principal Cyber Engineer for Raytheon CODEX. He has 20 years of software development experience. He has written and published apps for desktop, mobile, servers, and embedded. He is the author and inventor of Packet Sender, an app used daily by security researchers, featured in manuals from major tech companies, and is taught in universities around the world. He is also the author of 2 network-related patents and a book published by CRC Press. His open-source contributions have received international awards, and he has presented at many developer conferences (Black Hat, DEF CON, IEEE) about them.

Hacking the Planet (Under Glass)

Rich Wickersham

As a pinball enthusiast and lifelong hacker, I have waited many years to find a way to converge my two favorite hobbies. In the Spring of 2020, I was the first person to bring a modern pinball machine to a security conference for the purpose of identifying vulnerabilities in IoT pinball machines. With very little effort I learned that quite a lot can go wrong and the impacts can have consequences. I also ran into the same pitfalls that many in our industry have experienced when finding and attempting to disclose a vulnerability. This talk will cover my experience in hacking pinball machines, the vulnerabilities I identified, and the lessons I learned in the process.

I am applying these lessons to a sanctioned pinball hacking effort that I am planning for 2024 so this story has what I believe is a perfect ending!

Rich Wickersham (@RichWickersham) is a seasoned security leader with over two decades of experience designing, implementing, and securing resilient architectures for private and public organizations. Wickersham is actively involved in the security community as a Core Organizer of BSides NoVA and has spoken previously at DEF CON, BSides NoVA, CSA, OWASP, and Voice of America. Rich’s active research interests include social media OPSEC, OSINT, and countermeasures.

Hi My Name is Keyboard

Marc Newlin

Seven years after MouseJack, Marc set out to hack some more peripherals. Gaming-keyboards looked fun but were hilariously bad, so he looked to Apple’s Magic Keyboard for a challenge. One question lead to another, and he was soon reporting unauthenticated Bluetooth keystroke-injection vulnerabilities in macOS, iOS, Android, Linux and Windows, along with link-key-extraction vulnerabilities in popular computers and peripherals. This is a story of trusting your instinct, following the research, and ignoring the part of your brain that says “there is no possible way this will work.” We’ll look at the progression of research and decisions that were made, the vulnerabilities themselves, and the realities of a complex, multi-vendor disclosure. We’ll conclude with tools, demos, and some reflection on what we can learn from this dumpster-fire.

Marc Newlin (@marcnewlin) just wants to hack all the things. He works on drone reverse-engineering at SkySafe and has a penchant for hacking anything with a radio. His past research includes the MouseJack vulnerabilities in 2016, 26 Comcast CVEs in 2017, and a collection of wireless-presenter 0-days in 2019. Marc also loves competitive engineering and was a finalist in the DARPA Shredder, DARPA Spectrum, and DARPA Spectrum Collaboration Challenges.

I Can LTE Even… [MVP edition]


Embark on an exhilarating journey delving into the clandestine realm of mobile networks! Follow the gripping narrative of one’s odyssey, ignited by stumbling upon an unprotected MME online, culminating in the creation of a personalized LTE kingdom right at home.

This thrilling saga commences with a fascination for elusive network protocols, the unearthing of Baicell’s surprising LTE gear, and the revelation of FCC’s exclusive Band 42, beckoning enthusiasts to forge their own LTE domains. With secured funding, a thriving home-based cell network emerged, illuminating the path for others through an enticing GitHub repository.

This isn’t solely about connectivity; it’s a rebellion against conventional service providers. Imagine crafting your network, sharing it with friends, or empowering underserved communities with affordable internet. This isn’t just tech; it’s a bold revolution in connectivity!

Nobletrout (@nobletrout) used hack LTE stuff. He still does. But he used to too. He does other things as well. Mostly half assed and not with much forethought.

Improving Red Team Maturity Through Red Team as Code (RTaC)

Jack (Hulto)

How can we apply “as code” concepts to build more mature red teams?

This talk goes beyond managing red team infrastructure and shows examples of how teams can leverage the benefits of an “as code” process to achieve: repeatability, version tracking, and self documentation to improve red team engagements and empower IR / detection engineers.

Jack (Hulto) (@Hultoko) is a cyber security engineer with a passion for DevOps and infrastructure. He currently works as a lead engineer on the red team at a Fortune 50 company. Jack specializes in tool development and less attributable infrastructure. Jack has worked in both the private and public sector performing security research, penetration testing, reverse engineering, digital forensics, and now red teaming. In his free time, Jack volunteers at collegiate training exercises like CCDC and ISTS; he is the red team lead for SECCDC, Linux lead for ISTS, and a member of the National CCDC red team.

Intel is a Fallacy, But I May Be Biased

Andy Piazza (klrgrz)

As threat intelligence practitioners, we often discuss our biases, mental models, and the common fallacies that impact our analysis and reporting. This talk looks at how we’ve failed to effectively communicate some of the decisions that we’ve made consciously and unconsciously during the production and dissemination of threat intelligence and how that impacts how our stakeholders think about the data. For example, threat profiles and analysis reports often talk about the targeted industry without actually discussing if the industry was specifically targeted or if a member of that industry was breached as a target-of-opportunity. Without that clarity organizations in that industry may misunderstand their threat landscape and prioritize defensive projects for lower-priority groups.

Quite simply–we’ve introduced biases into our intelligence reporting that are not often discussed or considered by consumers. This talk will present multiple areas of threat intelligence reporting where we may be unintentionally implying significance to the wrong areas in our findings.

Andy Piazza (@klrgrz) has 20 years of experience in security operations, threat intelligence, and incident response. Andy has developed cyber threat intelligence programs for clients and is a frequent contributor to the CTI community through his blog and talks at conferences. He is a US Army combat veteran with deployments to Iraq, Central America, and Haiti. He has earned a Master’s in Intelligence Studies from American Military University and a Master’s in Information Security Engineering from SANS. He’s also the Operations Lead for BSidesNOVA and the Global Head of Threat Intelligence at IBM X-Force.

Lean, Developer-Friendly Threat Modeling

Falcon Darkstar Momot

If you feel like threat modeling is a tedious box to check before you can get on to actually securing your assets, you’re not alone. At Aiven, I built a threat modeling process that’s intended to be fast and easy, but still complete. It depends on taking input from developers and other stakeholders, and it allows the security team to guide a collaborative journey into discovering what controls the organization will need to implement to meet its security objectives. We move away from creating diagrams and artifacts nobody can use, into creating living documents and getting consensus on how to secure the asset.

Falcon Darkstar Momot (@falcon) was a penetration tester and researcher for 9 years and is currently the manager of product security at Aiven.io. As an appsec-focused manager, he draws on expertise as a LangSec researcher and cross-domain knowledge to help organizations avoid creating products that cannot be secured. His academic credentials include an MBA and a M. Sc. Information Systems and has done threat modeling in scenarios involving everything from IoT to web applications to cloud service providers.

A Legal Defense Fund for Hackers

Harley Geiger and Charley Snyder

The hacker community has long conducted security research that skates the edge of legality. This has led to charges and lawsuits, bogus and serious alike. Ethical hackers have rights and defenses against legal claims, but don’t always have access to representation or resources to defend themselves. In this brief presentation, we will discuss a new resource for the hacker community: the Security Research Legal Defense Fund (SRLDF).

The SRLDF is a nonprofit that is presently operational and aims to help finance representation for hackers facing legal threats due to good faith security research. This talk will cover how the SRLDF works and what is needed to qualify. The talk will share how hackers can leverage this resource to protect themselves, their rights, and others in the community.

Harley Geiger (@HarleyGeiger) is Counsel at Venable, LLP, where he leads the Security Research Legal Defense Fund and the Hacking Policy Council and counsels clients on a variety of cybersecurity law issues.

Charley Snyder (@charley_snyder_) serves as Head of Security Policy at Google. In this role, Charley organizes Google’s expertise and technology to help solve the world’s pressing public policy challenges related to safety and security online.

Level Up

Kirsten Renner

Created a road map to follow to help people at any career chapter get to the next level or pivot to an entirely new area by way of actively helping in our community. Covers my own advancements, methods I used, and testimony from numerous interviews from individuals who built their own pathways by volunteering and other contributions.

Kirsten Renner (@Krenner): Village Creator, Volunteer, Board Member, Content Creator, Community Contributor, Mentor, Talent Advisor Extraordinaire, Army mom. Known to run a little bit.

More Money, Fewer FOSS Security Problems? The Data, Such As It Is

John Speed Meyers, Sara Ann Brackett, and Stewart Scott

“Pay the maintainers” has become a rallying cry for some advocates of free and open source software. While the argument often involves morality or economics, a key strand of this thought also includes the security of free and open projects. In essence, how can maintainers, stretched thin by many competing demands on their time and attention, improve the security of their projects?

But would more “money” actually improve the “security” of FOSS projects? To our knowledge, no one knows, so we set out to answer this question.

We built a new tool to automate the collection of FOSS funding data, particularly whether a project has funding from GitHub Sponsors, Tidelift, Open Collective, NumFOCUS, and Google Summer of Code. We also gathered security posture data, thanks to the OpenSSF-produced “scorecards” tool, and combined that with funding data on the top 1000 Python and npm packages. We analyzed the extent to which funding, and specifically these different types of funding, does or does not improve FOSS security.

Does money help? You’ll have to attend the talk to find out 🙂

John Speed Meyers is the head of Chainguard Labs at Chainguard, an open source software security company, and is a nonresident senior fellow with the Atlantic Council’s Cyber Statecraft Initiative in the Digital Forensic Research Lab.

Sara Ann Brackett is a research associate at the Cyber Statecraft Initiative, where she works on open-source software security (OSS), software bills of materials (SBOMs), software liability, and software supply-chain risk management.

Stewart Scott is an Associate Director with the Cyber Statecraft Initiative, where he works on open source software and software supply chain security policy among other projects focusing on systemic cyber risk.

Network Layer Confusion: Fun at the boundaries

Joshua DeWald

Have you ever wondered how request smuggling or domain fronting works? Curious about how IP interacts with Ethernet? Ever wanted to send an IP packet over email? Confused about how VLANs relate to subnets? We all have Network Layer confusion, which is the idea that we all have a tendency to think that the world (particularly networks) actually works in the way that the abstractions present. This talk is intended to (re-)”teach” layered networking via demonstrations and discussion of what happens at the “boundaries” between the layers as they work with (and against) each other. You should come out of here less confused and full of new ideas about networks! I have found ideas in this presentation to be of practical usefulness in my day job, this not just about attacking but rather how we can use the network to do our bidding, not merely hope it works.

Joshua DeWald has worked in software development related to Internet technologies for about 20 years. For most of his career, his real love has been when software and systems don’t work and needing to figure out why and then fix them. This has taken him all the way poking on API call functionality, all the way to patching SmartNIC firmware. The art and science of troubleshooting is truly fascinating. It only took 15 years (!) for Josh to start really pointing that love at the networking aspects of systems.

No, SBOM Will Not Solve All Your Software Supply Chain Problems

Andrew Hendela

No, you don’t actually want an SBOM. Seriously: If one more person tells me they would have stopped the Solarwinds attack with SBOM or vuln management, I am going to go insane… more insane. Join me for a cathartic (at least for me) rant about software supply chain security.

Andrew Hendela (@zelkathak) has been doing offensive and defensive cybersecurity for well over a decade, running the gamut from VR, malware analysis, and cyber attribution. He is also a co-founder of Karambit.AI, a cybersecurity startup focused on automated RE for software supply chain security.

NTLMv1-SSP DES Mechanics Explained

EvilMog (Dustin Heywood)

100% (really 80%) live demo of reversing NTLMv1 to NTLM and showing ASCII art explaining how DES keys are generated in NTResponse Messages. Understand how authentication coercion and LanmanCompatibilityLevel can sink your domain. A full explanation of NTLMv1-SSP and server response calculation. Understand the secrets of the universe, well NTLMv1 anyways. Seriously awesome ASCII art diagrams. 100% delivered from the terminal including slides.

Dustin Heywood, otherwise known as EvilMog (@Evil_Mog), is the Chief Architect of X-Force, Bishop of the Church of Wifi, and retired member of Team Hashcat (not really retired, he hasn’t competed for a few years). He collects licenses and certifications. Holder of many black/gold/red badges and has been on more than a few Hacker Jeopardy winning teams.

0wn the Con

The Shmoo Group

For seventeen eighteen years, we’ve chosen to stand up and share all the ins and outs and inner workings of the con. Why stop now? Join us to get the break down of budget, an insight to the CFP process, a breakdown of the hours it takes to put on a con like ShmooCon, and anything thing else you might want to talk about. This is an informative, fast paced, and generally fun session as Bruce dances on stage, and Heidi tries to hide from the mic. Seriously though–if you ever wanted to know How, When, or Why when it comes to ShmooCon you shouldn’t miss this. Or go ahead and do. It’ll be online later anyway.

The Shmoo Group is the leading force behind ShmooCon. Together with our amazing volunteers, we bring you ShmooCon. It truly is a group effort.

Sobriety Hacks! Unleashing the Power of Incremental Change

Jennifer VanAntwerp

As cybersecurity practitioners, ethical hackers, and everything in between, you know the importance of making incremental changes to improve performance and mitigate risks. But did you know that the same approach can be used to transform your personal life? Whether you’re struggling with addiction or simply looking to improve your health, incorporating small yet significant changes in your daily routine can make all the difference.

Through 23 years of hard-fought sobriety, I’ve learned about the power of incremental change. In this session, I’ll share the tips and tricks that have helped me maintain a sober (and FUN!) lifestyle. From setting achievable goals to building a robust support system, I’ll provide you with practical tools you can use to embark on your own journey of health.

But let’s be clear: sobriety doesn’t have to be scary or shameful. On the contrary, it can be an empowering and liberating experience. By caring for our bodies and minds, we can unlock our full potential as cybersecurity professionals and human beings. So whether you’re sober-curious or just want to know how to make your work events more inclusive for non-drinkers, join me for this session and discover the joys of incremental change.

Jennifer VanAntwerp (@the_jvan) is the founder of Sober in Cyber, a nonprofit on a mission to provide alcohol-free events and community-building opportunities for sober individuals working in cybersecurity. She is passionate about breaking the stigma of addiction recovery and is profoundly driven to increase the number of professional networking events that don’t revolve around alcohol. Jennifer is also the principal at JVAN Consulting, which provides marketing consultation services to cybersecurity startups. When she’s not developing marketing strategies or running her nonprofit, Jennifer enjoys volunteering, sewing, and tinkering with her beloved ’65 Ranchero.

Summiting the Pyramid (of Pain)

Michaela Adams, Roman Daszczyszak, and Steve Luke

Adversaries are continuing to grow sophisticated in their tooling and behavior in an attempt to evade detection analytics. The Pyramid of Pain, created by David Bianco, groups indicators, tools and behaviors into different levels reflecting how much “pain” it would take an adversary to evade detection. Detection engineers have leveraged the Pyramid of Pain to understand how difficult it is for adversaries to evade analytics, however, it does not provide a sufficient framework to quantify how robust an analytic is against adversary changes.

This talk introduces a model and analytic writing guidance that goes beyond the Pyramid, allowing defenders to score how robust their analytics are, and begin to understand their true detection coverage against an adversary attack. The talk will break down the Pyramid of Pain more granularly, define analytic and event robustness measures, and walk through worked examples of how to score and improve analytics against this model. Defenders will be able to apply this process to create more robust analytics and change the game on the adversary.

Michaela Adams, Roman Daszczyszak (@rdunspellable), and Steve Luke are cybersecurity engineers working for MITRE Engenuity’s Center for Threat-Informed Defense (CTID). Together, they have almost 40 years of combined experience conducting federal cybersecurity research at MITRE across multiple security disciplines, including threat hunting, adversary emulation, malware analysis, digital forensics, cyber threat intelligence analysis, penetration testing/red teaming, and many others.


TaskMooster: Spikygeek and (Not so) Little Bruce Antlers: Bruce Potter

What happens when you gather 4 hackers together to complete silly tasks, rank their execution, and see who ends up with the most points at the end? Taskmooster, that’s what. Inspired by the UK game show Taskmaster, TaskMooster is ShmooCon’s take here stateside. What? You haven’t heard of Taskmaster? Seriously, stop reading this program right now and go watch at least one episode. All seasons are available to stream on YouTube, and it’s totally binge-worthy.

Come join the contestants at the start and end of the con as they watch how their tasks went and get graded by our very own TaskMooster. For an added bonus, the final task will take place on stage. Who will take home the coveted golden payphone? Only time will tell. Yes, the prize is a real golden payphone, straight from NYC (via a warehouse in PA), to the halls of ShmooCon.


  • Heidi “Don’t eat those eggs” Potter
  • Jacque “How many ways can I screw this up” Blanchard
  • Danny “As if I can get any more random” Akacki (Rand0h)
  • “i’m nothing if not running dialogue” Arozqueta (MadMex)

Tobacco 2.0: When Money Buys the Truth & the Outcome

Libby Liu and Joan Donovan

As early as the 1940s, Big Tobacco had knowledge that their product causes cancer. They lied about it and poured money into a systematic campaign to bury scientific research about the causation while buying research reports supporting their lies. Result > 100m dead.

Their playbook was used again to gaslight the world by the Big Oil industry, the Big Pharma industry–and now–by Big Tech.

Whistleblower Aid will be launching a public interest whistleblower whose renown in the field online misinformation and refusal to play by the playbook put them in Big Tech’s crosshairs. With the complicity of a private university with significant conflicts of interest, a major platform whose public narrative has been exposed as false, this whistleblower’s work was put on ice for 2 years at a time it is most crucial–and during the run up to critical 2024 elections both here and abroad.

Let’s explore the case together & fight back for objective truth and research freedom.

Dr. Joan Donovan is an Assistant Professor of Journalism and Emerging Media Studies in the College of Communications at Boston University and founded the non-profit Critical Internet Studies Institute which fosters knowledge mobilization and advocacy for a public interest internet with community safety and privacy at the design core. She co-authored Meme Wars with Emily Dreyfuss and Brian Friedberg.

Libby Liu is the CEO of Whistleblower Aid, a non-profit legal organization, and a long-time internet freedom and human rights activist. Whistleblower Aid represents public interest whistleblowers including Dr. Donovan and Peiter Mudge Zatko in the area of Big Tech, national security, and protecting democracy.

Tracking the World’s Dumbest Cyber-Mercenaries

Eva Galperin

For the last 6 years my colleagues and I have been tracking the activities of the cyber-mercenaries we call Dark Caracal. In this time, we have observed them make a number of hilarious mistakes which have allowed us to gain crucial insights into their activities and victims. In this talk, we will discuss the story of Dark Caracal, the mistakes they have made, and how they have managed to remain effective despite quite possibly being the dumbest APT to ever exist.

Eva Galperin is EFF’s Director of Cybersecurity. Prior to 2007, when she came to work for EFF, Eva worked in security and IT in Silicon Valley and earned degrees in Political Science and International Relations from SFSU. Her work is primarily focused on providing privacy and security for vulnerable populations around the world. To that end, she has applied the combination of her political science and technical background to everything from organizing EFF’s Tor Relay Challenge, to writing privacy and security training materials (including Surveillance Self Defense and the Digital First Aid Kit), and publishing research on malware in Syria, Vietnam, Lebanon, and Kazakhstan. Since 2018, she has worked on addressing the digital privacy and security needs of survivors or domestic abuse. She is also a co-founder of the Coalition Against Stalkerware.

Unlocking Enterprise-scale Security Visibility

Eknath Venkataramani and Frank Olbricht

In the face of ever-expanding server fleets, obtaining comprehensive security telemetry poses a significant challenge. Although off-the-shelf solutions exist, they often fall short when dealing with larger scales, fail to meet all requirements, do not integrate well, and become prohibitively expensive as fleets grow.

Contrary to common perceptions, a custom solution does not need to be overly complex and can offer numerous benefits. These advantages include simplified integration, the ability to tailor the solution to address specific business needs and queries, and improved visibility for security telemetry and compliance requirements. In this presentation, we present a case-study of what it took to build a custom solution at Stripe, covering essential aspects such as data collection, processing, and storage. Additionally, we showcase the simplicity and performance at scale, supported by real performance metrics, to underscore the immense potential of a custom visibility solution for managing large server fleets.

Eknath Venkataramani (@eknath) has been working in Cybersecurity for over a decade and currently works on Analytics and Infrastructure in Stripe’s Security Organization. Prior to Stripe, Eknath worked in AWS IoT in designing new security features and products in the IoT space and in McAfee as a security researcher and architect on McAfee’s machine-learning based anti-malware detection systems.

Frank Olbricht has extensive experience with security infrastructure as well as product development. He has worked on malware analysis as well as email security products at Cisco and is currently implementing security-visibility and response solutions for a variety of assets at Stripe, such as servers, Macbooks and Chromebooks.

War Planning for Tech Companies

Greg Conti and Tom Cross

No one wants war, but it’s important to be prepared for any potential conflict. The wars in Ukraine and the Middle East sent tech companies world-wide scrambling to protect their people, move or delete sensitive data, protect (and sometimes burn down) at-risk infrastructure, among many other challenges, all while continuing operations. For many organizations their response was reactive and ad-hoc. Despite the tremendous cost of lives and property, these wars are localized. What if Ukraine escalates into all out conflict between NATO and Russia, a global hotspot like Taiwan becomes a shooting war, or we face some other unforeseen scenario? Is your organization ready?

This talk provides tools and insights to ensure your organization is ready should a major conflict arise. We will consider historical lessons learned, war game potential scenarios, provide a planning methodology adapted from military war planning, and share essential advice and guidance to organizations on how to prepare for the potential of a war. You’ll leave this talk with valuable tools and insights to ensure that your organization is better prepared should a major conflict arise, as well as a deeper understanding of what steps to take to protect your people, data, infrastructure, and operations.

Greg Conti (@cyberbgone) is Principal at Kopidion, a cybersecurity training firm. Formerly, he served as Director of Security Research at IronNet, on the West Point Computer Science faculty, and with U.S. Cyber Command and NSA. He deployed to two combat zones, as an Intelligence Officer and as a Cyber Operations Officer.

Tom Cross (@_decius_) has two decades of experience in infosec R&D. He is currently an independent security consultant, Principal at Kopidion, and creator of FeedSeer, a news reader for Mastodon. Previously, he was CTO of Drawbridge Networks, Director of Security Research at Lancope, and Manager of IBM’s X-Force Advanced Research team.

Why We Need to Stop Panicking About Zero-Days

Katie Nickels

This community has a problem: we like to panic about zero-day vulnerabilities. Though it might be uncomfortable, we need to talk about this because it’s wasting time and burning us out. In this presentation, Katie will discuss what the evidence around vulnerability exploitation shows us and where the significant risks lie. She will share how we can stop this cycle and focus on impactful actions to create a better return on investment and improved security posture. Attendees can expect to leave with a deeper understanding of when vulnerabilities matter (or not) as well as permission to stop panicking, take a deep breath, and get your weekend back.

Katie Nickels (@likethecoins) is the Director of Intelligence Operations for Red Canary as well as a SANS Certified Instructor for FOR578: Cyber Threat Intelligence and a non-resident Senior Fellow for the Atlantic Council’s Cyber Statecraft Initiative. She has worked on cyber threat intelligence and network defense for over a decade and was previously a member of the MITRE ATT&CK team. Katie enjoys sharing knowledge with the community, including through the SANS STAR livestream and her personal blog. She is a recipient of the Microsoft Security Changemaker award and the SANS Difference Maker Award.

You Wouldn’t Scrape the Internet to Make an LLM: Law and Policy of Scraping the Ago of AI

Kurt Opsahl

AI projects are built on large language models, developed by ingesting massive amounts of data used to train on billions of parameters, often scraped from the internet. The growth of AI in the public consciousness has led to spirited debates about the legality and policy issues with gathering and processing this data–the application of privacy, free expression, copyright, and anti-hacking law.

This intense focus on LLMs has led to a multitude of cases challenging the AI models that will shape the legal landscape, including online scans, searches and scrapes used for cybersecurity and threat intelligence, whether enhanced with AI or not. At the same time, security professionals may be charged with technically protecting privacy or personal data against being ingested, at least without permission.

We will discuss the state of the law on online scraping, examining key legal cases that paved the way for search engines built on scraping–web, images, and books–and the cases that limited or allowed access to some private sites, as well as the open tech policy issues as the courts and Congress attempt to balance innovation, safety, and privacy. Finally, we will tie this together with the potential impact on cybersecurity.

Kurt Opsahl (@kurtopsahl) is the Associate General Counsel for Cybersecurity and Civil Liberties Policy for the Filecoin Foundation. Formerly, Opsahl was the Deputy Executive Director and General Counsel of the Electronic Frontier Foundation and continues to work with EFF as a Special Counsel. He is a member of the CISA Cybersecurity Advisory Committee’s Technical Advisory Council, the Open Archive Advisory Board, the Zcash Community Advisory Panel, and the Security Researchers Legal Defense Fund’s Board.