Between Two Moose

Whitney Merrill, LosT, Andrew Morris, and Bruce Potter (interviewer)

Join us as we close down ShmooCon 2019 in the inaugural episode of “Between Two Moose.” Rather than a panel discussion on a topic, Bruce will interview a few folks who are helping to shape the security industry. With a focus on what formed their understanding of security, the journeys they traveled in their careers, and their opinions on contemporary security topics (with, of course, some irreverent frivolity thrown in), we hope you’ll enjoy the show and renew us for another episode next year.

Whitney Merrill (@wbm312)

LosT (@1o57)

Andrew Morris (@Andrew___Morris)

Bruce Potter (@gdead) is the CISO at Expel and spends most of his time instructing people on the correct pronunciation of CISO (it’s “ciz-oh”).


Reversing SR-IOV For Fun and Profit

Adir Abraham

We are surrounded by PCIe devices everywhere. They are in charge of interconnecting extremely important and exciting functionality inside and outside our systems.

Have you ever wondered how to explore and reverse engineer those devices and their functionality? SR-IOV (Single-Root I/O Virtualization) is a peripheral component interconnect (PCI) standard for sharing PCIe devices within a single computer.

In this talk I will provide a thorough background on PCIe devices and the SR-IOV standard. Afterwards, I will share my research experience and explain how SR-IOV PCIe devices can be reverse engineered, what information we can get, how to find vulnerabilities in PCIe devices, and what we can learn from those findings.
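
As a concrete starting point, on Linux the kernel already exposes each physical function’s SR-IOV capability through sysfs. The minimal sketch below (assuming a Linux box with at least one SR-IOV-capable device; no reverse engineering involved) lists which PCIe devices advertise virtual functions and how many are currently enabled.

  from pathlib import Path

  # Walk every PCI device the kernel knows about and report SR-IOV capability.
  # sriov_totalvfs / sriov_numvfs are the standard sysfs attributes exposed for
  # SR-IOV physical functions.
  for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
      total_attr = dev / "sriov_totalvfs"
      if not total_attr.exists():
          continue  # this device has no SR-IOV capability
      total = total_attr.read_text().strip()
      enabled = (dev / "sriov_numvfs").read_text().strip()
      print(f"{dev.name}: {enabled}/{total} virtual functions enabled")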

Adir Abraham (@adirab) is a reverse engineer and a vulnerability researcher at Intel, with more than 15 years of experience as a cybersecurity researcher and exploit developer. He likes to learn new and cutting-edge technologies, break them, explore anything from low-level SW to HW vulnerabilities, and build exploitation scenarios accordingly. Recently, he also joined the CTF organizing team of BSidesTLV. He holds a BSc degree in CS education and a BA degree in Economics–both from the Technion. He is also CISSP, CCSK, and CySA+ certified.


Ad-Laundering: Bribes & Backdoors

John Amirrezvani

Ad-Laundering is a new tactic for exploiting social media platforms to spread fake news and fraud via legitimate users. As Facebook and other social media platforms have faced pressure to stem the flow of fake news, they have begun to make it more difficult for fake accounts to buy ads on their platform. As a result, malicious groups have pivoted from creating fake accounts to bribing people with real profiles into enabling their dirty deeds. While the overall strategy of targeted manipulation via ads is well known, ad-laundering is creating new headaches for social media platforms looking to balance income and integrity.

In this presentation we will cover how I stumbled across this technique and identified various similar campaigns, along with an analysis of their approach for enabling access to target accounts. Additionally, any IOCs will be made available.

John Amirrezvani (@trojawn) is a security researcher with Novetta and an alumnus of the Whitehatters Computer Security Club at USF. He has taught workshops at BSidesLV and BSidesNoVA.


BECs and Beyond: Investigating and Defending Office 365

Douglas Bienstock

As organizations increase their adoption of cloud services, we see attackers following them to the cloud. Microsoft Office 365 is becoming the most common email platform in enterprises across the world, and it is also becoming an increasingly relevant artifact for intrusion investigations. This presentation will discuss two real-world attacks that targeted Office 365–one motivated by money and the other by information. Through the case studies we will analyze the TTPs of both threat actors and how they differ, describe how to optimize Office 365 for investigations, provide an overview of the log sources that are available (and their limitations), and provide recommendations for enhancing the security of Office 365.

Doug Bienstock (@doughsec) splits his time at Mandiant performing Incident Response and Red Team work. He uses lessons learned from IRs to better simulate attacker techniques and help organizations stay ahead of the bad guys.


High Confidence Malware Attribution using the Rich Header

Kevin Bilzer, RJ Joyce, and Seamus Burke

Attribution of malware is a complicated problem as there are many ways to mislead and misdirect attempts to tie back malware to its authors. The Rich header, undocumented by Microsoft, can be a powerful tool in the analyst’s toolbox. It provides a wealth of information about the build environment of software samples, which can be used to uniquely identify the environment a piece of malware was created in, as well as to tie other unknown samples to that environment. We will present our research into how the header is generated, how it can be used to fingerprint build environments, and the metadata hash we developed to scale across large datasets to detect similar samples.
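
For readers who want to poke at the header themselves, the sketch below shows roughly how a Rich header is decoded: find the plaintext “Rich” marker and XOR key in the DOS stub, then XOR-decode backwards to the “DanS” start marker to recover (product ID, build number, count) triples. This is only a minimal parser illustration, not the metadata hash developed by the speakers.

  import struct
  import sys

  def parse_rich_header(path):
      data = open(path, "rb").read()
      e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]  # offset of the PE header
      stub = data[:e_lfanew]                              # the Rich header lives in the DOS stub
      rich_off = stub.rfind(b"Rich")
      if rich_off == -1:
          return None                                     # no Rich header present
      key = struct.unpack_from("<I", stub, rich_off + 4)[0]
      decoded, off = [], rich_off - 4
      while off >= 0:
          dword = struct.unpack_from("<I", stub, off)[0] ^ key
          if dword == 0x536E6144:                         # "DanS" start marker
              break
          decoded.append(dword)
          off -= 4
      decoded.reverse()
      decoded = decoded[3:]                               # skip the three padding dwords after DanS
      entries = []
      for i in range(0, len(decoded) - 1, 2):
          comp_id, count = decoded[i], decoded[i + 1]
          entries.append((comp_id >> 16, comp_id & 0xFFFF, count))  # (ProdID, build, uses)
      return key, entries

  if __name__ == "__main__":
      print(parse_rich_header(sys.argv[1]))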

RJ, Kevin, and Seamus are students at UMBC. All of them are highly involved in the computer security world, participating in a variety of competitions and conferences. All three are members of their school’s national champion Collegiate Cyber Defense Competition team. Kevin is the president of the CyberDawgs, the school’s security club; Seamus has spoken at DEF CON on his previous baseband research; RJ is a master’s student focusing on malware analysis and machine learning, and a two-time Shmooze-A-Student recipient.


Behind Enemy Lines: Inside the operations of a nation state’s cyber program

Andrew Blaich and Michael Flossman

We’ve all heard about nation-state surveillance programs and their capabilities throughout the world, but have you ever wondered how these programs were developed and what decisions went into them? In this talk we will go through the very recent actions a particular nation-state undertook in order to build up their offensive cyber capabilities for both desktop and mobile, including iOS and Android. With insights gleaned from exfiltrated content obtained during a recent investigation into one of their bespoke tools, we will look at the build vs. buy decisions that key individuals involved in this process went through–from the lawful intercept and exploit shops they communicated with, to their in-house development, and ultimately to what their resulting solution(s) were. This talk will have mystery, intrigue, couch surfing, and as usual a bunch of op-sec failures.

Michael (@terminalrift) and Andrew (@ablaich) are security researchers at Lookout and lead the Threat Intelligence team. They specialize in discovering, tracking, and disrupting the offensive cyber operations of state sponsored actors and have presented on some of their research including Pegasus, Chrysaor, Dark Caracal, Desert Scorpion, Frozencell, and others. They’re always looking for new adversary campaigns, OpSec fails, and are repeat offenders for maxing out their VT API quota in the first few days of the month.


Electronic Voting in 2018: Bad or Worse?

Matt Blaze

Electronic voting systems used in the US are notoriously insecure, but how did they actually hold up in 2018? This talk will discuss what we know about vulnerabilities in these systems (updating our talk from last year), what we do and don’t know about actual exploitation of these vulnerabilities, and prospects for doing better before the 2020 presidential race.

Matt Blaze (@mattblaze) is a local hacker and computer science professor who works on security, cryptography, scale, and public policy.


It’s 2019 and Special Agent Johnny Still Can’t Encrypt

Matt Blaze

In 2011, we published “Why (Special Agent) Johnny (still) Can’t Encrypt,” which examined protocol, implementation, and usability failures in the P25 encrypted two-way radio protocol used by federal and local law enforcement. In 2019, we’re still seeing lots of sensitive clear traffic on federal systems. This talk will examine what changed, what didn’t, and the difficulties of fixing protocols, standards, and practices in real deployed systems.

Matt Blaze (@mattblaze) is a local hacker and computer science professor who works on security, cryptography, scale, and public policy.


The APT at Home: The attacker that knows your mother’s maiden name

Chris Cox

While we’re fighting for our security and privacy, some are being left behind. Traditional security models tend to rely on certain assumptions in order to work effectively. For example, current solutions assume that the user is able to make configuration changes and is permitted to keep certain facts secret in order to authenticate to a service or device. For victims of domestic violence, their proximity to their attacker means that many security solutions are ineffective and, in some cases, even harmful to implement. This talk will explore cases where traditional security measures fail an already-vulnerable population and how we need to rethink our approach to security.

Chris Cox (@Cyber_Cox) is the founder and past president of the nonprofit Operations Security Professional’s Association and Executive Director of its anti-domestic violence initiative, Operation: Safe Escape. He also teaches cybersecurity for the Department of Defense and is the former Chief Information Officer and Information Assurance Manager for the Army’s National Training Center.


Looking for Malicious Hardware Implants with Minimal Equipment

Falcon Darkstar

We’ve all seen a lot of hype about malicious hardware and hardware implants this year. Not much came of it–except that now everyone wants to know if they might be affected or whether they have a chance to see a unicorn. I show my own process for finding unexpected features in hardware with little more than my soldering iron, discuss the use of some more advanced tools, and show what I’m looking for. Then, I’ll lay out my working general threat model for hardware security and conclude that scrappy, observant hackers can make life difficult for even advanced threat actors in this space.

Falcon (@FalconDarkstar) is a senior security consultant at Leviathan Security Group. He builds sub-Turing switching hardware for Shadytel and is still working on an M. Sc. at Athabasca University.


Advancing a Scientific Approach to Security Tool Evaluations with MITRE ATT&CK™

Francis Duff

As security practitioners, we struggle with which products we should buy and how we can cut through the marketing to figure out what those products actually do. As the community has recognized the need to find adversaries post-compromise, a multitude of Endpoint Detection and Response (EDR) products have popped up on the market, but consumers have had limited information to help them decide which is right for them.

To help fill this gap, MITRE conducted impartial evaluations of vendor capabilities in an effort to increase transparency and drive the EDR market forward. Using the common lexicon of the ATT&CK knowledge base, MITRE used a purple-teaming approach to evaluate vendor capabilities. In November 2018, we publicly released our methodology and results showing detection capabilities for 90 ATT&CK-based procedures derived from real threat intelligence. This talk will explain the approach the MITRE team used as well as the challenges we faced in articulating how detections happen. The presenter will explain how you can use our publicly-available methodology and results to make decisions about products as well as perform your own evaluations.

Frank Duff (@FrankDuff) is a Principal Cyber Operations Engineer for The MITRE Corporation and is the ATT&CK based Evaluations Lead. Frank is also the lead for MITRE’s Leveraging External Transformational Solutions research and development effort that works with commercial cybersecurity vendors to accelerate their adoption by the government community. His work has focused on endpoint security, particularly in forming public-private partnerships to drive product improvement. Frank most recently has briefed at “A Conference on Defense” and “SecureWorld Boston” in 2018. He has a BS in Computer Engineering and a MS in Cybersecurity from Syracuse University.


Incident Response and the Attorney Client Privilege

Wendy Knox Everette

Oh no, you’ve suffered a computer security incident. The DFIR team you hired wrote up a great report detailing exactly what happened and making suggestions for how to fix some of these issues. But now you’re being sued, and opposing counsel requests that report!

Many times, companies will seek to protect investigations under the cover of attorney-client privilege. But what is that, when and how does the privilege attach, and how helpful is it most of the time? What should your goal be, and just what are best practices for working with attorneys?

Wendy (@wendyck) is a software developer who burned out and went to law school, where she completed a concentration in National Security Law and interned with the FTC, FCC, and some other three letter agencies (no, not the fun ones). After law school she completed a fellowship in privacy and information security law at ZwillGen. She currently lives in Seattle, where she is a Senior Security Advisor at Leviathan Security Group.


Un-f*$#ing Cloud Storage Encryption

Adam Everspaugh

Individuals, enterprises, and government agencies encrypt information before uploading to commodity cloud storage systems like Box or Amazon’s S3 to gain strong security in the event the storage provider is compromised. Regulations like HIPAA and PCI (and good security hygiene) require that encryption keys be rotated periodically. The current schemes in use for rotating encryption keys are either infeasible or insecure, as we discuss in this presentation. We describe attacks against the current scheme and present two new encryption schemes that improve the security of key rotation, offering different security and performance trade-offs.
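
For context, the key-wrapping pattern that most deployments call “rotation” looks roughly like the sketch below (using the cryptography package): the bulk data stays encrypted under a data-encryption key (DEK), and a rotation only re-wraps the DEK under a new key-encryption key (KEK). The talk examines where schemes in this family fall short and what stronger alternatives look like.

  import os
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  def wrap(kek: bytes, dek: bytes) -> bytes:
      nonce = os.urandom(12)
      return nonce + AESGCM(kek).encrypt(nonce, dek, b"dek-wrap")

  def unwrap(kek: bytes, blob: bytes) -> bytes:
      return AESGCM(kek).decrypt(blob[:12], blob[12:], b"dek-wrap")

  def rotate(old_kek: bytes, new_kek: bytes, wrapped_dek: bytes) -> bytes:
      # The object ciphertext in cloud storage is never touched; only the small
      # wrapped-key header is rewritten.
      return wrap(new_kek, unwrap(old_kek, wrapped_dek))

  dek = AESGCM.generate_key(bit_length=256)
  kek_v1 = AESGCM.generate_key(bit_length=256)
  kek_v2 = AESGCM.generate_key(bit_length=256)

  header = wrap(kek_v1, dek)
  header = rotate(kek_v1, kek_v2, header)   # "rotation" under this scheme
  assert unwrap(kek_v2, header) == dek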

Dr. Adam Everspaugh (@AdamEverspaugh) is a cryptographer and software engineer. He researches and presents on topics including oblivious password hardening, secure random number generators, and updatable encryption. Adam is a security engineer for Coinbase, and a cryptographic advisor to Keeper Security (password management service), and the distributed app platform Mainframe.com.


A Code Pirate’s Cutlass: Recovering Software Architecture from Embedded Binaries

evm

Reversing large binaries is really hard, but what if we could automatically recover the software architecture before we got started? This talk introduces the CodeCut problem: given the call graph of a large binary, segment the graph to recover the original object file boundaries. It also introduces local function affinity (LFA), a measure of the directionality of a function’s relationships to nearby functions, and applies LFA to solve the CodeCut problem. We will show some useful applications, including automated module-to-module call graphs (extracting software architecture) and automated section naming based on common strings. New work on applying the NCUT algorithm to the CodeCut problem will also be presented.
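
To make the idea concrete, here is a toy affinity score in the spirit of LFA; it is an illustration only, not the talk’s actual formula. For each function in address order, weigh its call edges by direction and inverse distance, then guess object-file cuts where the score swings from pointing backward to pointing forward.

  import math

  def toy_affinity(funcs, calls):
      """funcs: function names in address order; calls: (caller, callee) pairs."""
      index = {name: i for i, name in enumerate(funcs)}
      scores = [0.0] * len(funcs)
      for caller, callee in calls:
          i, j = index[caller], index[callee]
          if i != j:
              # Forward calls push the score positive, backward calls negative,
              # and distant calls count for less than nearby ones.
              scores[i] += math.copysign(1.0 / abs(j - i), j - i)
      return scores

  def guess_cuts(scores):
      # A module boundary tends to sit where affinity flips from backward to forward.
      return [i for i in range(1, len(scores)) if scores[i - 1] < 0 < scores[i]]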

evm (@evm_sec) has been staring at code for over a decade. A recovering Windows internals guy, he now spends most of his time with embedded systems. At APL he helped start an RE working group and a hacker magazine. He enjoys teaching the young’uns how to snatch the error code from the trap frame.


Kinder Garten Security: Teaching the Pre-college Crowd

Sandra Gorka and Jacob Miller

There is currently a shortage of cybersecurity professionals both worldwide and in the US. This presentation will discuss an after-school program for high school students to explore cybersecurity topics and careers while earning college credit for an introductory cybersecurity course. The presentation will cover the motivation for the after-school program, the cybersecurity topics in the course, and some of the hands-on activities that illustrate how the topics can be applied. A website with links to an online repository of program materials will be shared with the audience. This work is the result of the NSF-funded grant Improving the Pipeline: After-School Model for Preparing Information Assurance and Cyber Defense Professionals (Grant No. 1623525).

Dr. Gorka and Dr. Miller are associate professors of information technology at Pennsylvania College of Technology. Gorka teaches auditing and advanced/research topics in security; Miller teaches cryptology, risk analysis, and forensics. Gorka and Miller were instrumental in the initial effort of ACM and ACM-SIGITE (Information Technology Educators) to define curriculum (ACM IT2008) and ABET accreditation criteria for IT. They developed and maintain the security program at Penn College. Along with colleagues from Penn College, they are co-principal investigators for the NSF CyberCorps Capacity Building Grant entitled Improving the Pipeline: After-School Model for Preparing Information Assurance and Cyber Defense Professionals (Grant No. 1623525).


IPv666: Address of the Beast

Christopher Grayson and Marc Newlin

IPv6 comes with a slew of improvements, from a larger address space to self-organizing addressing to required support for multicast, but these improvements are a double-edged sword. With NAT going away, DHCP no longer being required, modern operating systems and networks supporting and preferring IPv6 over IPv4, ICMP being required for network operation, iptables not applying to IPv6, and multiple IP addresses being associated with individual interfaces, IPv666 conjures the perfect storm of fail-open defaults.

Why, then, haven’t more boxes been popped via IPv6? Because 2^128 is far larger than 2^32.

In this talk we will take a practical look at how to enumerate hosts over IPv6, using statistical models to discover servers and novel IPv6 honeypotting techniques to discover clients. We’ll talk about what works and what doesn’t when it comes to finding IPv6 addresses and how we used our model and scanning techniques to start amassing a corpus of the global IPv6 address space. We’ll cover statistics about how much more exposed IPv6 hosts are than their IPv4 counterparts and how prevalent IPv6 hosts are on various hosting platforms. Lastly, we will release our scanning software (open source) and all of the data we’ve collected.
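
As a taste of why enumeration is tractable at all, the oldest trick is simply guessing low interface IDs inside known /64 prefixes, since admins love ::1, ::2, and friends. The sketch below shows only that heuristic, not the statistical model or honeypotting techniques from the talk (the prefix used is from the IPv6 documentation range).

  import ipaddress

  def low_iid_candidates(prefix, count=256):
      """Yield candidate hosts with small interface IDs (::1, ::2, ...)."""
      net = ipaddress.IPv6Network(prefix)
      base = int(net.network_address)
      for host_id in range(1, count + 1):
          yield ipaddress.IPv6Address(base + host_id)

  for addr in low_iid_candidates("2001:db8:1234:5678::/64", count=8):
      print(addr)   # feed these into your scanner of choice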

Hey there, we’re Marc (@marcnewlin) and Chris (@_lavalamp). Two jolly hacker-types that like solving interesting problems. Hailing from Los Angeles we’ve got a bit of research experience under our collective belt and hope that you find our newest journey interesting! Our backgrounds have a mix of software engineering, penetration testing, academic and industry research, and general shenanigans.


Encrypting the Web Isn’t Enough: How EFF Plans to Encrypt the Entire Internet

Jeremy Gillula

In 2009, the EFF set out on a long-term mission to encrypt the Web. Our aim was to switch hypertext from insecure HTTP to secure HTTPS. HTTPS is essential in order to defend Internet users against surveillance of the content of their communications; cookie theft, account hijacking, and other web security flaws; and some forms of Internet censorship. In the intervening ten years, we’ve seen tremendous progress. We cajoled tech companies, wrote a browser extension, and even helped launch a certificate authority. But we’re still not satisfied.

Now, the EFF is working to not only encrypt the web: we’re expanding our mission to encrypt the entire Internet. The first stage of that campaign is a new project called STARTTLS Everywhere, which will do for mailserver communication what Let’s Encrypt and Certbot did for webservers. But in order to do it right, we need your feedback. If you’re a mailserver sysadmin, we particularly want you to come to this talk and share your thoughts with us.
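
If you run a mailserver, a quick self-check before the talk takes only a few lines of Python with smtplib and the dnspython package, as sketched below. Note that this only tests whether your primary MX advertises opportunistic STARTTLS; the downgrade and certificate-validation problems STARTTLS Everywhere targets go beyond what this check can see.

  import smtplib

  import dns.resolver   # pip install dnspython (>= 2.0 for resolve())

  def mx_offers_starttls(domain):
      answers = dns.resolver.resolve(domain, "MX")
      mx = sorted((r.preference, r.exchange.to_text().rstrip(".")) for r in answers)[0][1]
      with smtplib.SMTP(mx, 25, timeout=10) as smtp:
          smtp.ehlo()
          if not smtp.has_extn("starttls"):
              return mx, False
          smtp.starttls()   # upgrades the socket, but does not validate the certificate chain
          smtp.ehlo()
          return mx, True

  print(mx_offers_starttls("example.com"))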

Jeremy Gillula (@the_zeroth_law) is the Tech Projects Director at the Electronic Frontier Foundation, the leading nonprofit organization defending civil liberties in the digital world. As Tech Projects Director Jeremy leads the team that develops EFF’s public-facing software projects, including EFF’s browser extensions HTTPS Everywhere and Privacy Badger, as well as Certbot, our tool for sysadmins to make setting up HTTPS easy. The team also draws on their experience to advise EFF’s lawyers and activists, the press, and the public about policy issues from a technology perspective.


How the Press Gets Pwned

David Huerta

“Journalists and Activists” is a common line in copy promoting privacy- or security-enhancing technology, especially in announcements discussing some novel use of cryptography. The usefulness of these tools and services, however, doesn’t always line up perfectly with the real world. In this session we’ll go over a variety of recent real-world cases where journalists have faced some form of cyberattack and construct a more accurate picture of which tools and services would actually provide protection for at-risk journalists (and activists).

David Huerta (@huertanix) is a digital security trainer at the Freedom of the Press Foundation, where he helps journalists and media makers protect their work from anyone opposing a free press. He’s co-organized dozens of digital security and privacy trainings across the US, including one at the Whitney Museum of American Art as part of Laura Poitras’s Astro Noise exhibition in 2016.


iPhone Surgery for the Practically Paranoid

Evan Jensen and Rudy Cuevas

Is there a point past which the risks generated by high fidelity sensors in smartphones overshadow their utility? Maybe, but this isn’t a philosophical hand-waving session where we grow grey hairs and rhapsodize about policy. This is a session about taking control and getting the junk out of your phone that you don’t want.

Come join us to explore how resilient modern iPhones are against component failure and how that resilience can be leveraged to remove or disable sensors that may be used to infringe on your privacy. We explore the construction of Apple products at the component level and identify the choices and tradeoffs available to those willing to take more drastic measures in the pursuit of evading data collection.

Rudy and Evan (@jensensec) are researchers at the Boston Cybernetics Institute (BCI) where they focus on privacy, security and empowering people to control the technology around them through education, outreach, and sponsorship. Before BCI, they worked together at MIT Lincoln Laboratory, in the Cyber Systems Assessments group where they reverse engineered systems of interest to the Department of Defense and Intelligence Community. When not creating long and lasting impact on national security, you can find them in the greater Boston area emptying bookstores, playing CTF, organizing meetups, and filling every available ear with the gospel of systems analysis.


Mentoring the Intelligent Deviant: What the special operations and infosec communities can learn from each other

Nina Kollars and Paul Brister

There are unique challenges to developing and mentoring communities of practitioners whose jobs are, by design, intended to skirt laws and bypass security systems. As such, there are bizarre similarities between the hacking and special operations communities. Nina (an analyst of military technology and infosec) has been attempting to norm a very reluctant special operations officer (Paul) into the community. Paul, as a career officer in the special operations community, has endured several ShmooCons and at least one BSides. Together we present the contrasts and similarities in the people, threat landscape, and professional development challenges, and what the two communities can learn from one another–namely, a deeper understanding of what it is like to develop and mentor intelligent deviants.

Paul Brister is a career special operations officer. He has deployed to all sorts of places to do all sorts of things. For the past two years, he has worked in the Pentagon as a Strategic Plans advisor to the Secretary of Defense.

Kitty Hegemon (Nina Kollars) (@NianaSavage) is a teacher and social scientist at the Naval War College who has studied both the Special Operations and info sec communities. Her work emphasizes horizontal knowledge transfer and the adaptive behaviors of soldiers on the battlefield.


Firemen vs. Safety Matches: How the current skills pipeline is wrong

Amélie Koran

Most of the discussion about solving the skills shortage and staffing pipeline in cyber/information/data/computer security has focused solely on training people to be the “next cyber professional.” However, this methodology is woefully misplaced and can be equated to how first responders, such as EMTs, firemen, police, and others, are acquired, developed, and deployed in their operating environment. You can’t get everybody to choose to be on the front lines, nor have them run into a burning building, without exhausting your supply of ready volunteers and burning out those who are already dealing with a high-stress, intense, and critical role that is woefully understaffed.

As a senior technology executive who has risen from a start in engineering and front-line security incident handling and analysis, passing through multiple industry sectors and organizations, I believe that the strategy currently being promoted at the highest levels of the public sector, but also peddled by many in the private sector and academia, could be adjusted to produce a better overall outcome. In my presentation I propose leveraging and exploiting the diverse set of skills we already have in place and in development to ensure we can use them as a force multiplier for those in the security field and, in turn, create more secure systems and technology.

Amélie Koran (@webjedi) has gone from n00b to executive in 20 “short” years and has enjoyed every minute of learning and sharing ideas along her rather circuitous track. As a Deputy CIO within the Federal government, she’s helped develop national cybersecurity policy and perform workforce planning and development, and has also responded to and handled major security incidents at major NGOs, Fortune 125 companies, and other organizations. She can be found regularly volunteering at local DC area security conferences and groups, trying to mentor others and give back to the community that has given so much to her. She misses the daily hunt tracking of a SOC but also likes not having to ever really be on-call any more and getting regular sleep.


IMSI Catchers Demystified

Karl Koscher

IMSI catchers (sometimes known by the popular brand name “Stingrays”) are shrouded in mystery. Originally developed for military use, they are now used by law enforcement, foreign intelligence, and spammers. IMSI catchers are unauthorized cell sites designed to coerce phones into providing persistent identifiers (IMSIs) and enable RF direction-finding of particular users, intercept traffic, and/or deliver spam. Unfortunately, due to sketchy legal arrangements around their procurement and deployment, very little is publicly known about IMSI catchers, how they work, and how they are used. Based on leaked documents, 3GPP specifications, and experience detecting (and accidentally deploying) IMSI catchers, this talk infers many previously publicly unknown aspects of IMSI catchers. We will cover how they convince phones to connect, reveal their IMSIs, and capture or release particular phones. We will also talk about how IMSI catchers use RF direction-finding to precisely locate particular users. We will describe how one might identify IMSI catchers based on their abuse of particular cellular standards. We will demonstrate a city-wide passive monitoring system for IMSI catchers and introduce our open-source app to detect IMSI catchers using Calypso-based GSM phones running custom baseband firmware. Finally, we’ll talk about how one might build their own IMSI catcher.

Karl Koscher (@supersat) is a research scientist working at the University of Washington Security and Privacy Research Lab where he specializes in wireless and embedded systems security. Previously, he was a postdoctoral scholar working with Stefan Savage at UC San Diego. He received his Ph.D. from the University of Washington in 2014, working with his advisor Tadayoshi Kohno. In 2011, he led the first team to demonstrate a complete remote compromise of a car over cellular, Bluetooth, and other channels.


Be an IoT Safety Hero: Policing Unsafe IoT through the Consumer Product Safety Commission

Andrea Matwyshyn and Elliot Kaye

The persistent vulnerability of many IoT devices is a source of concern for security researchers and policymakers alike. In this talk Commissioner Elliot Kaye of the Consumer Product Safety Commission and Professor Andrea Matwyshyn explain the current regulatory landscape around information security accountability and safety flaws in IoT devices. After explaining the different roles of various federal agencies involved in information security and safety issues, the speakers will highlight the pivotal role played by the Consumer Product Safety Commission in overseeing IoT safety. In particular, the talk will teach you about the history of the CPSC, the scope of its regulatory authority, the role of its testing labs, and its rulemaking and product recall processes. Most importantly, this talk will teach you how to be an IoT safety hero by reporting unsafe IoT products to the CPSC through the saferproducts.gov website and provide you with an introduction to Commissioner Kaye’s newly-released safety framework for IoT.

Elliot F. Kaye (@ElliotKayeCPSC) is a Commissioner and former Chairman of the U.S. Consumer Product Safety Commission (CPSC), nominated by President Obama and confirmed by the Senate in 2014. He previously served the CPSC as Executive Director. He holds a J.D. from New York University School of Law.

Dr. Andrea Matwyshyn (@amatwyshyn) is a Professor of Law/ Professor of Computer Science (by courtesy) and Co-director, Center for Law, Innovation & Creativity (CLIC) at Northeastern University. In 2014, she served as the Senior Policy Advisor/Academic in Residence at the U.S. Federal Trade Commission.


Five-sigma Network Events (and how to find them)

John O’Neil

Networks are complex systems, and too often, despite everyone’s best efforts, no one knows everything about what’s going on. Most of the knowledge about a network concerns its typical activity. But what about the atypical activity?

There are many reasons to want to find unusual behavior in your network. The biggest reason is that it may be a sign of something new and unexpected—rather than the usual stuff—driving the activity. This doesn’t necessarily imply that a network intrusion is underway. There are many other possibilities, both innocuous and dangerous. In any case, though, unusual behavior is probably something you want to know about.

There are a variety of tools related to “anomaly detection” or “outlier detection,” and this talk isn’t about any of them. Instead, this talk is an introduction to writing your own tools for detecting unusual network events. We’ll use Python, with some easily available pip installations, and look at some simple approaches to the problem that answer some interesting questions and scale well.
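
In that spirit, a bare-bones version of the idea is sketched below (with a hypothetical flows.csv containing timestamp, src, dst, and bytes columns): aggregate per-host activity, z-score each host against its own history, and surface the hours that land more than five standard deviations out.

  import pandas as pd

  # Hypothetical input: one row per flow with timestamp, src, dst, and bytes columns.
  flows = pd.read_csv("flows.csv", parse_dates=["timestamp"])

  # Bytes sent per source host per hour.
  hourly = (flows.set_index("timestamp")
                 .groupby("src")["bytes"]
                 .resample("1H")
                 .sum())

  # Z-score each host against its own history and keep the five-sigma hours.
  def zscore(s):
      return (s - s.mean()) / (s.std(ddof=0) or 1.0)

  z = hourly.groupby(level="src").transform(zscore)
  print(hourly[z.abs() > 5].sort_values(ascending=False))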

John O’Neil is the Data Scientist at Edgewise Networks. He writes and designs software for data analysis and analytics, search engines, natural language processing and machine learning. He has a PhD in linguistics from Harvard University, is the author of more than twenty papers in Computer Science, Linguistics, and associated fields, and has given talks at numerous professional and academic conferences.


24/7 CTI: Operationalizing Cyber Threat Intelligence

Xena Olsen

Reese Witherspoon said, “With the right kind of coaching and determination you can accomplish anything.” She said “anything,” right?! Join me on an 8-month adventure of building a 24/7 threat intelligence program. In this talk you’ll learn the techniques and training methodologies, as well as lessons learned, from leveraging SOC analysts to perform threat intelligence analysis. The focus will be on how you can implement a 24/7 CTI program NOW with your existing tools, budget, and experience.

Xena Olsen (@ch33r10) is a cyber threat intelligence analyst in the financial services industry. She is a graduate of the SANS Women’s Academy with 3 GIAC certifications and a current graduate student seeking an MBA in IT Management. Her current focus is malware analysis and paying it forward through her Women in Information Security Group.


Trip Wire(less)

Omaha

At DEF CON 26, multiple guests of Caesars Entertainment properties were taken off-guard by the security practices employed by the hotel chain. A series of alarming tweets ignited significant press coverage, highlighting the inconsistent and poorly-published hotel policies. DEF CON management met with Caesars representatives, and were informed that while a Do Not Disturb sign will trigger a security visit by “clearly identifiable” hotel staff, no belongings should be disturbed, opened, or taken.

Planning on staying at a Caesars property for DEF CON 27, or just concerned about privacy while traveling in general? This presentation will show you how to set up customizable travel “trip wires” that operate over 433 MHz and fit in a small toiletries case. With a Raspberry Pi, less than $20 worth of supplies, and an hour of spare time, you can configure 4 or 5 sensors that will alert you if your favorite things are moved, opened, or disturbed while you’re away from the room.
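
A minimal receive loop for that setup might look like the sketch below, assuming the rpi-rf package, a generic 433 MHz receiver wired to GPIO 27, and sensors that broadcast fixed codes when triggered (the codes here are made up; capture your own sensors’ codes first).

  import time

  from rpi_rf import RFDevice   # pip install rpi-rf (requires a Raspberry Pi)

  SENSOR_CODES = {1234567: "laptop bag", 7654321: "hotel room door"}  # hypothetical codes

  rf = RFDevice(27)             # GPIO pin the 433 MHz receiver data line is on
  rf.enable_rx()
  last_seen = None
  try:
      while True:
          if rf.rx_code_timestamp != last_seen:
              last_seen = rf.rx_code_timestamp
              name = SENSOR_CODES.get(rf.rx_code)
              if name:
                  print(f"ALERT: {name} triggered (code {rf.rx_code})")
                  # Hook in your notifier of choice here: SMS, push, email, etc.
          time.sleep(0.05)
  finally:
      rf.cleanup()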

Omaha’s (@4F4D414841) career began with a BFA in Fine Arts in 2005, with a concentration in industrial design and human factors. This led to an interest in web development, which quickly forked to exploiting web applications. After attending DEF CON 21 in 2013, Omaha finally figured out what she wanted to be when she grows up, got motivated to complete a MS in Computer Science, and has been working (and playing) in the infosec community ever since.


Post-quantum Crypto: Today’s defense against tomorrow’s quantum hacker

Christian Paquin

Quantum computers pose a grave threat to the cryptography we use today. Sure, they might not be built for another decade, but today’s secrets are nonetheless at risk: indeed, many adversaries have the capabilities to record encrypted traffic and decrypt it later. In this talk I’ll give an overview of post-quantum cryptography (PQC), a set of quantum-safe alternatives developed to alleviate this problem. I’ll present the lessons we have learned from our prototype integrations into real-life protocols and applications (such as TLS, SSH, and VPN), and our experiments on a variety of devices, ranging from IoT devices, to cloud servers, to HSMs. I’ll discuss the Open Quantum Safe project for PQC development, and related open-source forks of OpenSSL, OpenSSH, and OpenVPN that can be used to experiment with PQC today. I’ll present a demo of a full (key exchange + authentication) PQC TLS 1.3 connection. Come learn about the practicality of PQC, and how to start experimenting with PQC to defend your applications and services against the looming quantum threat.
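
If you want to experiment before the talk, the liboqs-python bindings from the Open Quantum Safe project reduce a post-quantum key encapsulation to a few lines, roughly as sketched below. Treat the algorithm name and method names as assumptions that depend on your liboqs build and bindings version, and check the project’s own examples.

  import oqs   # pip install liboqs-python (requires a liboqs build)

  ALG = "Kyber512"   # example KEM; the enabled list depends on how liboqs was built

  with oqs.KeyEncapsulation(ALG) as client, oqs.KeyEncapsulation(ALG) as server:
      public_key = client.generate_keypair()                       # client -> server
      ciphertext, server_secret = server.encap_secret(public_key)  # server -> client
      client_secret = client.decap_secret(ciphertext)
      assert client_secret == server_secret                        # shared key established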

Christian Paquin (@chpaquin) is a crypto specialist in Microsoft Research’s Security and Cryptography team. He is currently involved in projects related to post-quantum cryptography, such as the Open Quantum Safe project. He is also leading the development of the U-Prove technology. He is also interested in privacy-enhancing technologies, smart cloud encryption (e.g., searchable and homomorphic encryption), and the intersection of AI and security. Prior to joining Microsoft in 2008, he was the Chief Security Engineer at Credentica, a crypto developer at Silanis Technology working on digital signature systems, and a security engineer at Zero-Knowledge Systems working on TOR-like systems.


Building and Selling Solo, an Open Source Secure Hardware Token

Conor Patrick

Solo is a low-cost security key that implements U2F and FIDO2–FIDO Alliance protocols that are part of the new W3C WebAuthn standard–that allow you to securely authenticate on the web and potentially have a passwordless experience. My team and I created Solo in 2018 and are bootstrapping a business to produce and sell security keys full time. We just crowdfunded $123k to kickstart our first production run.

Most security keys use smart cards or EAL-certified chips, which are very proprietary and relatively expensive. Solo is open source software and hardware and uses no components that require an NDA, which is quite a rarity. Because of this, Solo can be regularly updated and extended without having to go through costly product revisions and re-certifications.

Conor (@_conorpp) is a hardware designer and hacker. In grad school he focused on secure hardware design and how to crack chips using power analysis or fault injection. Conor created U2F Zero, an open source U2F security key, and produced and sold around 5k units. He loves to talk about hardware design and physical security.


Analyzing Shodan Images With Optical Character Recognition

Michael Portera

Shodan Images is a collection of crawled screenshots from RDP sessions, VNC sessions, and webcams. While high-level tags are applied, we can use a little bit of free AI sorcery from AWS to extract text out of the images to make them easier to search (with an average accuracy of 96%). Through this approach we can quickly identify company names, usernames, connected machine names, and even full names in some cases. In many scenarios we can attribute cloud instances to an entity that would otherwise have no identifiable characteristics. We’ll go over how this might be useful for offensive and defensive security toolkits. Using the free tiers of both services, I’ll demonstrate the full process, from narrowing your search on Shodan to analyzing with AWS. I’ll also provide code that automates this task!
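
The core of the pipeline is only a few calls, roughly as sketched below: pull screenshot-bearing banners from Shodan, base64-decode the image, and hand the bytes to Amazon Rekognition’s detect_text. You will need a Shodan API key with Images access and AWS credentials; the opts.screenshot.data field location is an assumption about how Shodan exposes screenshots in banners.

  import base64

  import boto3     # pip install boto3
  import shodan    # pip install shodan

  api = shodan.Shodan("YOUR_SHODAN_API_KEY")
  rekognition = boto3.client("rekognition")

  results = api.search("has_screenshot:true port:3389")
  for banner in results["matches"][:10]:
      shot = banner.get("opts", {}).get("screenshot")
      if not shot:
          continue
      image_bytes = base64.b64decode(shot["data"])
      detected = rekognition.detect_text(Image={"Bytes": image_bytes})
      lines = [t["DetectedText"] for t in detected["TextDetections"] if t["Type"] == "LINE"]
      print(banner["ip_str"], banner.get("port"), lines)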

Michael (@mportatoes) is a manager of threat hunting ninjas at a large consulting firm. When he’s not playing family man, he pretends to be an engineer building tech gadgets. He may or may not have been in the July and November 2018 MagPi issues for the nerdiest invention imaginable.


Security Response Survival Skills

Ben Ridgway

Despite the many talks addressing the technical mechanisms of security incident response (from the deep forensic know-how to developing world-class tools), the one aspect of IR that has been consistently overlooked is the human element. Not every incident requires forensic tooling or state of the art intrusion detection systems, yet every incident involves coordinated activity of people with differing personalities, outlooks, and emotional backgrounds. Often these people are scared, angry, or otherwise emotionally impaired.

Drawing from years of real-world experience, hundreds of incidents worked by the Microsoft Security Response Center, and the many lessons learned from some of the greats in IR around the company, this talk will delve into:

  • Human psychological response to stressful and/or dangerous situations
  • Strategies for effectively managing human factors during a crisis
  • Structures that set incident response teams up for success
  • Techniques that make better managers, responders, and investigators
  • Tools for building a healthy and happy incident response team

Effectively navigating the human element is a critical skill for anybody who may be called upon to manage or participate in a security incident. This talk is geared toward occasional or full-time responders who are looking for practical human-management skills.

Ben Ridgway (@b_ridg) started his career at NASA looking for vulnerabilities in spacecraft control systems. Following that, his work involved everything from pen testing high assurance CDS systems to building out Cyber Security Operations Centers. He was hired by Microsoft in 2011 and was a founding member of the Microsoft Azure Security Response Team. Over time, that scope has grown across multiple online service, cloud, and machine learning technologies. Today, he is the technical lead of the Microsoft Security Response Center’s government response and strategy team.


Process Control Through Counterfeit Comms: Using and Abusing Built-In Functionality to Own a PLC

Jared Rittle

Programmable Logic Controllers (PLCs) are devices that factories, office buildings, and utilities, among other facilities, use to control the processes running in their environment. These devices were designed to do their job and do it well; however, they were not built to protect against malicious actors. This talk walks through some of the vulnerabilities discovered while investigating a well-known PLC, discussing some of the methodologies used in discovery and showing how stringing together a few seemingly minor vulnerabilities can result in device takeover.

Jared Rittle is a security researcher with Cisco Talos who spends his time focusing on the discovery, exploitation, and coverage of vulnerabilities in the embedded systems found in Industrial Control Systems (ICS), Supervisory Control and Data Acquisition (SCADA), and Internet of Things (IoT) devices. Jared’s background includes a couple college degrees as well as work in the private sector.


0wn the Con

The Shmoo Group

For fourteen years, we’ve chosen to stand up and share all the ins and outs and inner workings of the con. Why stop now? Join us to get a breakdown of the budget, insight into the CFP process, a rundown of the hours it takes to put on a con like ShmooCon, and anything else you might want to talk about. This is an informative, fast-paced, and generally fun session as Bruce dances on stage and Heidi tries to hide from the mic. Seriously though–if you ever wanted to know How, When, or Why when it comes to ShmooCon, you shouldn’t miss this. Or go ahead and do. It’ll be online later anyway.

The Shmoo Group is the leading force behind ShmooCon. Together with our amazing volunteers we bring you ShmooCon. It truly is a group effort.


The Beginner’s Guide to the Musical Scales of Cyberwar

Jessica ‘Zhanna’ Malekos Smith

Whether you have a background in technology, law, academia, or national security, this talk is a beginner’s guide to understanding the law of war in cyberspace. By juxtaposing the law of war with a keyboard, the process of how states evaluate the scale and effects of a cyber operation and determine a basis for resorting to a use of force under the Law of Armed Conflict can be more readily conceptualized. For if music is indeed the universal language of mankind, then by encouraging society to learn about this area, we can collectively better strategize ways to mitigate cyber conflict.

Jessica ‘Zhanna’ Malekos Smith is the inaugural Reuben Everett Cyber Scholar at Duke University Law School. Previously, she served as a Captain in the U.S. Air Force Judge Advocate General’s Corps. Prior to military service, Malekos Smith was a post-doctoral fellow at the Belfer Center’s Cyber Security Project at the Harvard Kennedy School. She holds a J.D. from the University of California, Davis; a B.A. from Wellesley College, where she was a Fellow of the Madeleine Korbel Albright Institute for Global Affairs; and is finishing her M.A. with the Department of War Studies at King’s College London. Malekos Smith has presented her research at DEF CON, RSA, and NextGen@ICANN and has been published in the Harvard Kennedy School Review, Defense One, and The Cipher Brief, among others.


Three Ways DuckDuckGo Protects User Privacy While Getting Things Done (and how you can too)

Marc Soda

At DuckDuckGo we believe in privacy–this belief is in everything we do. Also, like many tech companies, we solve hard problems. Over the years we’ve developed some pretty interesting technologies and strategies to solve these hard problems while keeping our users’ privacy intact. We are proof that you can run a successful organization and raise the standard of trust online.

One of the ways we protect our users is by using NGiNX to proxy all external content on our site. In order to manage and troubleshoot this functionality without logging any user information, we’ve had to patch NGiNX. We’d like to share those patches and how we use them. In addition, with our privacy apps and browser extensions, we upgrade user connections to use SSL based on an extensive upgrade list that we maintain. Finally, we continually improve our site using techniques to anonymously A/B test various changes.

We want to share these techniques, code, and data with the world in the hopes that people will start using and improving them.

Marc (@marcantoniosr) is Director of Site Reliability at DuckDuckGo.


Deconstructing DeFeNeStRaTe.C

Soldier of FORTRAN

In 2012, hackers were running rampant in Sweden’s federal mainframes. During the course of the investigation, it was thought it might be a good idea to release *ALL* of the investigation documentation to the public. Included in these public files were snippets (or full programs) of the tools the hackers developed to work on an IBM z/OS mainframe. But not every tool developed was included in those papers. Shortly after the documents were released, your speaker was sent a DM out of the blue with a link to a pastebin and two simple questions: “was this an exploit? how did it work?” Why did they contact the speaker? Because it was thought he was originally the one who did the breach. This talk is a deep dive into the Unix part of a mainframe, looking at exactly what this C program was doing and how it accomplished it. This talk has it all: mainframe privilege escalation, APF Unix programs, buffer overflows, hijacking return addresses, debugging, and changing ACEEs. After this talk, you’ll know exactly what DeFeNeStRaTe.C was (trying?) to do and see it in action!

Soldier of FORTRAN (@mainframed767) is a mainframe security researcher. He has been recognized as one of the leading global experts on mainframe hacking; this title was unfortunately bestowed on him since very few have bothered to pick up the mantle. He has worked on implementing mainframe support for Nmap and Metasploit. He created libraries for attacking mainframes (njelib & libtn3270) and has spoken at BlackHat, RSA, Thotcon, ISACA SF, ISACA CACS, and DEFCON. He has also keynoted mainframe conferences including SHARE and Guide Share Europe Amsterdam. On top of speaking engagements, he also teaches classes on mainframe auditing and mainframe penetration testing.


CryptoLocker Deep-Dive: Tracking security threats on the Bitcoin public ledger

Olivia Thet and Nicolas Kseib

WhiteRabbit is an open source security research tool built on top of BlockSci, a blockchain analysis and exploration framework. In this presentation we will show how to leverage Bitcoin addresses associated with known ransomware campaigns and track payments made to these addresses. Our goal is to provide a tool that can act as another intelligence collection system for SOC analysts, threat hunters, malware researchers, and other defenders by leveraging Bitcoin public ledger data. This intelligence collection system allows analysts to track the activity of known ransomware families and assess the impact of these campaigns by looking directly at the payments received. Furthermore, as cryptocurrencies continue gaining traction in public markets and criminal networks, we will demonstrate why Bitcoin wallet and other cryptocurrency addresses should be added as indicators of compromise (IOCs) to the “Pyramid of Pain.”
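
As a much smaller stand-in for the BlockSci-based pipeline, the sketch below shows the basic measurement: given a Bitcoin address attributed to a ransomware campaign, ask a public block explorer API how much it has received. The blockchain.info endpoint and response fields are assumptions about a public API; WhiteRabbit itself works from a full BlockSci parse of the chain.

  import requests

  # Fill in addresses from your own intelligence feed or observed ransom notes.
  RANSOM_ADDRESSES = ["<known ransomware bitcoin address>"]

  for addr in RANSOM_ADDRESSES:
      info = requests.get(f"https://blockchain.info/rawaddr/{addr}", timeout=30).json()
      received_btc = info["total_received"] / 1e8   # the API reports satoshis
      print(f"{addr}: {info['n_tx']} transactions, {received_btc:.4f} BTC received")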

Olivia Thet (@thet_threat) is a Fullstack Software Engineer at TruSTAR Technology, an intelligence platform that helps organizations leverage multiple sources of threat intelligence and fuse it with historical event data to prioritize and enrich investigations. Olivia oversees TruSTAR’s Enclave knowledge management architecture and she’s passionate about helping teams collaborate better. Before joining TruSTAR, Olivia received her B.A. in Applied Mathematics and Computer Science at UC Berkeley.

Nicolas (@NKseib) is the Lead Data Scientist at TruSTAR Technology, a cyber intelligence platform built to accelerate enterprise security investigations. He leads the company’s data science initiatives and roadmap. He is always thinking of ways to leverage analytics and machine learning to design features improving the operational efficiency of security teams. Before joining TruSTAR, Nicolas received his M.S. and Ph.D. in Mechanical Engineering from Stanford University specializing in Flow Physics and Computational Engineering.


Ground Truth: 18 vendors, 6000 firmware images, 2.7 million binaries, and a flaw in the Linux/MIPS stack

Parker Thompson, Mudge, and Tim Carstens

We present data on recent work conducted at CITL concerning embedded devices, IoT, and home routers. This data, generated from an analysis of over 6000 firmware images from 18 vendors (over 2.7 million binaries total), shows:

  • Over the lifetime of a single product, it is more common for a vendor to regress software hardening features than add new ones;
  • All major vendors failed to apply the most basic hardening uniformly;
  • Images built for newer architectures tend to have more hardening than images built for older architectures;
  • However, comparing firmware released in 2012 to firmware released in 2018, while many hardening protections became more widely enabled, ASLR adoption was lower across the board.

The data also reveals a disturbing trend: the consistent presence of executable stacks in binaries from Linux/MIPS firmware. We discuss our investigation of this phenomenon, and how an old flaw in Linux’s support for the MIPS FPU specification has resulted in a universal DEP bypass, and how subsequent attempts to fix this have resulted in the recent addition of a universal ASLR bypass.
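
Two of the signals discussed above are easy to check on your own firmware with the pyelftools package, as sketched below: an executable built as position independent shows up as ET_DYN, and a missing or execute-flagged PT_GNU_STACK program header means the stack is executable.

  import sys

  from elftools.elf.constants import P_FLAGS
  from elftools.elf.elffile import ELFFile   # pip install pyelftools

  def check_hardening(path):
      with open(path, "rb") as f:
          elf = ELFFile(f)
          # ET_DYN for an executable indicates PIE (shared objects are also ET_DYN).
          pie = elf.header["e_type"] == "ET_DYN"
          exec_stack = True   # no PT_GNU_STACK marker at all implies an executable stack
          for seg in elf.iter_segments():
              if seg["p_type"] == "PT_GNU_STACK":
                  exec_stack = bool(seg["p_flags"] & P_FLAGS.PF_X)
          return pie, exec_stack

  for path in sys.argv[1:]:
      pie, exec_stack = check_hardening(path)
      print(f"{path}: PIE={pie} executable_stack={exec_stack}")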

Lastly, we remark on the utility of large empirical studies in assessing the overall state of security–a topic often discussed, but rarely backed by data.

Parker Thompson (@m0thran) is a computer hacker and research engineer from Seattle, Washington, specializing in reverse engineering and software analysis. His prior research includes contributions to crash dump analysis, fuzzing, Internet censorship, and related areas. He currently serves as the lead engineer at CITL.

Tim Carstens (@intoverflow) is a mathematician and research engineer from Seattle, Washington, specializing in geometry, logic, and software verification. His prior research includes contributions to crash dump analysis, computational number theory, and related areas. He currently serves as the acting director at CITL.

Mudge (@dotMudge) is a computer hacker from the United States. His prior research includes early contributions to the theory and practice of buffer overflows, vulnerability discovery, and other foundational topics in computer and communications security. For over 20 years, he has been working to inform and protect the public, in both public and private sector. In 2016, together with Sarah Zatko, he co-founded CITL and currently serves as the chairman of the board.


Machine Learning Models that Predict Mental Health Status on Twitter and Their Privacy Implications

Janith Weerasinghe and Rachel Greenstadt

Recent studies have shown that machine learning can be used to identify individuals with mental illnesses by analyzing their social media posts. These findings open up various possibilities in mental health research and early detection of mental illnesses. However, they also raise numerous privacy concerns. Our results show that machine learning can be used to make predictions even if the individuals do not actively talk about their mental illness on social media. In order to fully understand the implications of these findings, we need to analyze the features that make these predictions possible. We analyze bag of words, word clusters, part-of-speech n-grams and topic models to understand the machine learning model and to discover language patterns that differentiate individuals with mental illnesses from a control group. This analysis confirmed some of the known language patterns and uncovered several new patterns. We then discuss the possible applications of machine learning to identify mental illnesses, the feasibility of such applications, and associated privacy implications. Finally, we suggest mitigating steps that can be taken by policymakers, social media platforms, and users.
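
For readers unfamiliar with the setup, the core classification step is small. The sketch below trains a bag-of-words model on a tiny made-up corpus (real studies use far larger, carefully handled datasets) and then inspects which terms carry the most weight, which is the kind of feature analysis described above.

  from sklearn.feature_extraction.text import CountVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline

  posts = [
      "couldn't sleep again, everything feels heavy",
      "another great run this morning",
      "i can't focus on anything lately",
      "excited for the conference next week",
      "nothing seems worth getting out of bed for",
      "made pancakes with the kids today",
  ]
  labels = [1, 0, 1, 0, 1, 0]   # 1 = condition group, 0 = control (toy labels)

  model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
  model.fit(posts, labels)
  print(model.predict(["haven't slept properly in weeks"]))

  # Which unigrams push a post toward the positive class? This mirrors the
  # feature analysis (bag of words, n-grams, topics) described above.
  vec = model.named_steps["countvectorizer"]
  clf = model.named_steps["logisticregression"]
  top = sorted(zip(clf.coef_[0], vec.get_feature_names_out()), reverse=True)[:5]  # sklearn >= 1.0
  print(top)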

Janith Weerasinghe is a Doctoral Candidate at Drexel University working on machine learning and privacy. He has worked on identifying mental illnesses by analyzing the language use of individuals, and on understanding manipulation of social media curation algorithms. He recently spoke at HotPETS 2018.

Dr. Rachel Greenstadt (@ragreens) is an Associate Professor of Computer Science and Engineering at NYU (formerly Drexel University). She has a history of speaking at hacker conferences including DEF CON 14 and 26, ShmooCon 2009, 31C3, and 32C3.


Patchwerk: Kernel Patching for Fun and Profit

Parker Wiksell and Jewell Seay

With the proliferation of inexpensive IoT devices running insecure Linux kernels on corporate networks, maintaining secure infrastructure has become an almost impossible task; IoT device manufacturers seldom keep up with the latest disclosed vulnerabilities, and usually do not provide complete working source code. There are few viable solutions for network administrators to patch and maintain their devices. Efforts to standardize live patching capabilities include Oracle’s Ksplice, SUSE’s kGraft, Red Hat’s kpatch, and the “livepatch” support built into the 4.0 kernel. Unfortunately, all these solutions require capabilities to be pre-compiled into the kernel and present a host of other security concerns.

Based on hacker techniques as old as the mid-’90s, we have solved this problem by developing a tool suite for inspecting, compiling, and applying patches to vendor OEM Linux kernels as a means to patch vulnerabilities, instrument performance, and aid in reverse engineering efforts. Rather than requiring the whole vendor-specific kernel source code, configs, and build chains, we provide the opportunity to patch vendor OEM Linux kernels with representative source code and cross-compilers. This allows us to hook functions before and after, replace functions, alter parameters passed to a function, alter return values, and much more.

Jewell Seay and Parker Wiksell (@pwiksell) are security researchers at Battelle Memorial Institute. Jewell was part of Legitimate Business Syndicate, host of DEF CON CTF for 4 years, and the author of the cLEMENCy architecture. Parker has over 20 years industry experience, with the last 8 being focused on security research. Last year, Parker presented on the AFL-Unicorn toolset at ShmooCon. When not geeking out on computers, Parker has been known to write the occasional musical composition professionally.


Social Network Analysis: A scary primer

Andrew Wong and Phil Vachon

Everywhere you go, who and what you associate with says a lot about you. Your friends, your families, your business contacts, and your ideological associations all are valuable information to a variety of businesses and government agencies. We will explore ways to collect, analyze, and visualize a social network as well as look at how this network changes with time. This talk will cover how this all can be tied together to tell a lot of interesting stories about you and others around you.

Some unconventional data fusion techniques will also be covered. Additional data sources, such as real-world sensors as well as the additional joy that is people willingly handing over their personal information for coupons or other retail ‘perks,’ are also in scope. Really, everything is fair game.
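
As a tiny illustration of the analysis side, the sketch below builds a toy relationship graph with networkx and asks two of the questions this talk cares about: who bridges otherwise-separate groups, and which clusters of people move together. The people and ties are invented.

  import networkx as nx
  from networkx.algorithms import community

  # Toy edge list: (person_a, person_b, context the tie was observed in)
  edges = [
      ("alice", "bob", "work"), ("alice", "carol", "gym"),
      ("bob", "carol", "work"), ("carol", "dave", "family"),
      ("dave", "eve", "work"), ("eve", "frank", "conference"),
      ("frank", "dave", "gym"),
  ]

  g = nx.Graph()
  for a, b, ctx in edges:
      g.add_edge(a, b, context=ctx)

  # Brokers: people who sit on the paths between otherwise-separate groups.
  print(nx.betweenness_centrality(g))

  # Clusters: groups of people who mostly associate with each other.
  print([sorted(c) for c in community.greedy_modularity_communities(g)])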

Team MILK is a group of like-minded data fetishists exploring the dark side of technology and the data leaking everywhere. Their pastime is finding unconventional data sources and integrating that data into overly complex models. Their techniques include boredom, Jupyter, and weapons-grade nerd rage.


A Tisket, a Tasket, a Dark Web Shopping Basket

Emma Zaballos and Anne Addison Meriwether

We regret to inform you that much of what you’ve been told about dark web pricing–and indeed, data on the dark web–is wrong.

Periodically, researchers from cyber security companies publish reports on the going rates for goods and services on the dark web. We studied and compared 22 of these reports, published between 2013 and 2018, with the intent of developing a dark web pricing index. We concluded that even though these reports purport to inform the audience about the value of certain data types, their inconsistent terminology and haphazard collection strategies only add to the already confusing picture of the dark web. While educating end users about the value of their data and about the adversaries exploiting it is a valuable exercise, many of these reports fall into the traps of fear, uncertainty, and doubt (FUD). The inability or unwillingness to accurately illustrate the dark web data economy to an inexpert audience exacerbates the myth-filled public perception of the dark web.

To move forward as an industry, we need a consistent, shared taxonomy of the digital goods available for sale and a price index (based on a basket of goods and services) that measures pricing fluctuations in a standardized manner. With shared definitions and measures of sensitive data pricing on the dark web, organizations can collaborate more effectively to combat the threat and minimize the risks associated with dark web enabled fraud.
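
As a sketch of what such a basket-based index could look like (the goods, quantities, and prices below are invented for illustration, not taken from the reports studied), a fixed-basket index simply compares the cost of the same basket of dark web goods across reporting periods:

    # A minimal fixed-basket (Laspeyres-style) price index over hypothetical dark web goods.
    # Index = 100 * (cost of the basket at current prices) / (cost at base-period prices).
    basket = {
        # good: (quantity in basket, base-period price USD, current-period price USD)
        "stolen_credit_card":  (10, 8.00, 12.00),
        "fullz_identity":      (5, 25.00, 30.00),
        "compromised_account": (20, 3.00, 2.50),
    }

    base_cost = sum(qty * base for qty, base, _ in basket.values())
    current_cost = sum(qty * current for qty, _, current in basket.values())
    index = 100 * current_cost / base_cost

    print(f"base basket cost:    ${base_cost:.2f}")
    print(f"current basket cost: ${current_cost:.2f}")
    print(f"price index:         {index:.1f}")   # above 100 means prices rose versus the base period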

Emma Zaballos (@theemmazaballos) and Anne Addison Meriwether are Analysts at Terbium Labs, a dark web intelligence company, where they work on evaluating and contextualizing threats to customer data. Emma specializes in visualizing trends in the sale and trade of stolen payment cards and studying the many ways companies fail to secure user data. She enjoys reading dark web forum drama. Anne Addison focuses on the governance, risk, and compliance aspects of data exposure. She loves to remind people about the boring parts of the dark web.


A Little Birdy Told Me About Your Warrants

Avi Zajac

An overview of the history and current state of warrant canaries: why some have abandoned them in public and tech policy discussions, transparency reports, and other methods of notifying users about law enforcement requests for their data (citing perceived past failures and their status as experimental, untested legal theory); why they shouldn’t be abandoned; and the future I see for warrant canaries.

Avi (@_llzes) loves rabbits, cheesecake, and cute things. Cute things like warrant canaries! Privacy activist, runs around Cryptovillage, and loves to ride the 7000-series Metro trains.


Writing a Fuzzer for Any Language with American Fuzzy Lop

Ariel Zelivansky

American fuzzy lop (afl) is one of the most prominent fuzz-testing tools in use today. Many critical security issues found in widely used programs are attributed to afl.

For efficient fuzzing, afl requires compiling the target’s source code, into which it adds its instrumentation. This requires code that gcc or clang can compile, which generally means C/C++. It is possible, however, to hack afl into fuzzing any code or language, even interpreted languages such as Python or Ruby.

In the talk we will dive into the internals of afl and walk through the steps needed to write an afl interface that can fuzz any programming language. Ruby will be used as the example, based on my work on Kisaten (https://github.com/zelivans/kisaten), a Ruby fuzzing tool that has found various bugs in Ruby gems and the Ruby standard library.
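
To make the general idea concrete, here is a minimal harness for the same trick in Python using the separate python-afl project (not Kisaten, and not the speaker’s code); parse_config and harness.py are hypothetical stand-ins for whatever you want to fuzz.

    import os
    import sys

    import afl  # python-afl: exposes afl's forkserver and instrumentation to the interpreter

    def parse_config(data: bytes) -> None:
        # Hypothetical target function; raises ValueError on malformed input.
        if not data.startswith(b"CFG"):
            raise ValueError("bad header")

    afl.init()                       # tell afl-fuzz startup is done; fork from here
    data = sys.stdin.buffer.read()   # afl feeds test cases on stdin
    try:
        parse_config(data)
    except ValueError:
        pass                         # expected parse errors are not crashes
    os._exit(0)                      # skip interpreter teardown to keep the fuzz loop fast

Run it under the project’s wrapper, for example py-afl-fuzz -i corpus -o findings -- python3 harness.py; with python-afl, uncaught exceptions should surface to afl as crashes.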

Ariel Zelivansky is a security researcher and the head of Twistlock’s research team (https://www.twistlock.com/labs), dealing with hacking and securing anything related to containers.


Firetalk #1: Shut up and Listen

Kirsten Renner

This is a discussion about closing the gap between the search for the right job in Infosec and resolving the “perceived” shortage of available talent. We’ll discuss the challenges on both sides, for employers and candidates; touch on some points and truths that are constant; identify some tactics for success; and then share and collaborate.

Kirsten Renner (@Krenner) is possibly best known in this community for her role in the Car Hacking Village, but what she really does as a day job is recruit. She dabbled in programming in the early 90s, but as fun as it was, she wasn’t very good at it. She then found herself building and running help desks, both for a local municipality and for a startup e-commerce firm during the ‘dotcom boom’ era. For the last 18 years she has done technical recruiting, and for the last decade, information security.


Firetalk #2: Specialists versus Jack-Of-All-Trades

Nicole Schwartz

Information Technology (IT) has grown up a lot, and security only recently became a field of its own. Those who have had a variety of experiences, or who have grown up along with the field of security, have had the advantage of being jacks-of-all-trades. Those who know how many of the pieces fit together (even if they can’t do the other roles) have a better understanding of upstream and downstream risks, more ways to look at problems, and a larger selection of solutions to choose from. I want to advocate stepping back from being only specialists, leveraging conferences to learn, and networking to grow your knowledge of the whole IT sphere.

Nicole Schwartz (@CircuitSwan) has been a Technical Product Manager at Rackspace for 5 years and at ARAMARK for 4 years. She has worked with and developed Agile teams (Scrum and Kanban) for over a decade, and more recently has started assisting Scrum Masters and teams as an Agile Coach. She has assisted companies with recommendations to improve their hiring practices. She has a background in Windows systems administration, Active Directory administration, and Microsoft SQL database administration. When she isn’t working she attends conventions (you may have known her as @AmazonV) and is one of the organizers of Skytalks.


Firetalk #3: Équipe Rouge: The Ethics of Prosecuting An Offensive Security Campaign

Tarah Wheeler and Roy Iversen

Those of us who conduct offensive security campaigns use all the tactics of cyberwarfare. We prepare, gather information, engage the enemy, attack and capture objectives, and celebrate victory. While there are technical specifications for best practices in offensive security methods, our industry is lacking in ethical guidance. Most available literature and discussion focuses, at best, on the legal issues and rarely, if ever, addresses the role of ethics in our profession.

We need to discuss the effects of red team tactics on internal company morale. What does it mean to lie, cheat, and steal when testing a company’s defenses, and is it smart to permit employees of a company to deceive others? Are there ways to avoid detrimental effects on the perceived integrity of the security professional? We will describe the conduct of an ethical red team engagement, and which parts are best reserved for external and third-party engagements.

Tarah Wheeler (@tarah) is an information security researcher, political scientist in the area of international conflict, author, and poker player. She is Senior Director, Data Trust & Threat and Vulnerability Management at Splunk, as well as Cybersecurity Policy Fellow at New America. She is a cybersecurity expert for the Washington Post and a Foreign Policy contributor on cyber warfare.

Roy Iversen is Director of Security Engineering & Operations at Fortalice Solutions where he leads a team of security engineers. Prior to joining Fortalice, Mr. Iversen served under the CISO as Director of Security Operations Division at the U.S. General Services Administration (GSA).


Firetalk #4: Weapons of Text Destruction

Jared Stroud

Do you trust your text editor? Have you ever considered the offensive capabilities of Vim? Countless Linux/Unix users log into machines every day and run “vim $FILENAME”. However, few are aware of the hidden capabilities Vim can provide to the offensive security community. Given its plethora of plugins, subsystems, and user-defined functions, Vim natively provides unique value for red team engagements. Weapons of Text Destruction highlights how Vim can be leveraged to perform unconventional red team activities.

Jared Stroud is a Hack Fortress Judge, Collegiate Competition Red Teamer, and SPARSA alumnus. By day, Jared is a Security Engineer for MITRE, a non-profit federally funded research and development center.


Firetalk #5: Infosec and 9-1-1: When the Location of Your Emergency is in the Building

Christine Giglio

9-1-1 networks are primarily closed networks with no access to the outside world. Because of that closed nature, security has largely been ignored in public safety. As Public Safety Answering Points (PSAPs) transition to Next Gen 911 and connect their systems to the Internet, IT professionals who work with PSAPs will face a set of information security concerns that previously did not exist. The transition will be easier for centers in metropolitan areas, which have more experienced personnel already in place and the money to implement new technologies. For the vast majority of small-town and rural PSAPs, this is going to prove a challenge because they lack experience and resources. I will summarize the major concerns for public safety, explain why they exist, and share some first-hand stories from the trenches of a 9-1-1 call center.

Christine Giglio (@kesseret) is the CAD Administrator for the Bedford County, Virginia, department of E-911 communications. Prior to this position, she was the Public Safety LAN Administrator for the Bedford County Sheriff’s Office, Fire & Rescue, and E-911 communications for 10 years. Bedford County is a rural joint E-911 center supporting both the Town of Bedford and the County of Bedford, with a service area of approximately 762 square miles and a population of 84,000 people. The center’s call volume is approximately 88,000 calls this year; of dispatched calls, 60 percent are law enforcement, 9 percent EMS, 2 percent fire, and the rest miscellaneous.


Firetalk #6: What’s the latest 411 on 419s?

Ray [Redacted]

Scammers and thieves continue to develop new and innovative ways to rip you off. This session will discuss the newest tricks and techniques being used by online 419 scammers, as well as three OpSec pitfalls you can avoid to protect yourself in an ever more hostile environment.

Ray [Redacted] (@RayRedacted) is a technologist and researcher for a 1.3 billion dollar global provider of connectivity and cyber security solutions. He has 20 years of expertise in application solution design, next-generation network architectures, and evolving and emerging cyber threats. Ray frequently presents at cryptocurrency and infosec conferences, discussing topics such as the History of Hacking, advanced persistent threats, cryptography, and influence operations.