DC9723 March 26, 2019 Meeting
When: Tuesday, March 26, 2019, from 18:45 to 22:00
Where: Check Point offices in Tel Aviv (Ha’Solelim Street 5, Tel Aviv)
Agenda:
Brief Introduction
Using Machines to Exploit Machines – Harnessing AI to Accelerate Exploitation – Guy Barnhart-Magen, Ezra Caltum
TBD
As always, the talks are free and there is no need to register. Come and bring your friends.
The talks will be uploaded to YouTube a week after the meeting.
You can watch the previous talks at https://www.dc9723.org/
*We need more talks! Please consider submitting a talk for the next DC9723 meeting. For more details and questions, please contact cfp@dc9723.org
Abstracts:
Title: Using Machines to Exploit Machines – Harnessing AI to Accelerate Exploitation – Guy Barnhart-Magen, Ezra Caltum
Abstract:
Imagine yourself looking through a myriad of crash dumps, trying to find that one exploitable bug that has escaped you for days!
And if that wasn’t difficult enough, the defenders know that they can make us chase ghosts and red herrings, making our lives way more difficult (see “Chaff Bugs: Deterring Attackers by Making Software Buggier”).
Offensive research is a great field in which to apply Machine Learning (ML), since pattern matching and insight are often needed at scale. We can leverage ML to accelerate the work of the offensive researcher looking for fuzzing -> crashes -> exploit chains.
Current techniques are built from sets of heuristics. We hypothesized that we could train an ML system to match these heuristics while running faster and classifying more accurately.
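For illustration, a heuristic of the kind existing triage tools rely on might look roughly like the toy sketch below (the crash fields, rules, and labels are hypothetical and not taken from the talk):

```python
# Toy sketch of heuristic crash triage (hypothetical fields and rules,
# not from the talk). Real tools use much larger rule sets.
from dataclasses import dataclass

@dataclass
class Crash:
    signal: str            # e.g. "SIGSEGV", "SIGABRT"
    faulting_address: int  # address that triggered the fault
    pc_controlled: bool    # does attacker-influenced data reach the program counter?
    write_access: bool     # was the fault on a write (vs. a read)?

def heuristic_exploitability(crash: Crash) -> str:
    """Classify a crash with simple hand-written rules."""
    if crash.pc_controlled:
        return "EXPLOITABLE"               # control of PC is almost always exploitable
    if crash.signal == "SIGSEGV" and crash.write_access:
        return "PROBABLY_EXPLOITABLE"      # a bad write is a strong signal
    if crash.faulting_address < 0x1000:
        return "PROBABLY_NOT_EXPLOITABLE"  # likely a plain NULL-pointer dereference
    return "UNKNOWN"

print(heuristic_exploitability(Crash("SIGSEGV", 0x0, False, False)))
```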
Machine learning is not a panacea for every problem, but an exploitable crash has multiple data points (features) that can help us determine its exploitability. The presence of certain primitives on the call stack, or the output of libraries and compile-time options such as libdislocator and AddressSanitizer, among others, can be indicators of “exploitability”, offering us a path to a greater, more generalized insight.
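As a minimal sketch of the ML side, assuming crashes have already been reduced to simple numeric features of the kind mentioned above (the feature names, toy data, and model choice are assumptions for illustration, not the speakers’ implementation):

```python
# Minimal sketch: train a classifier on crash features (illustrative only).
# Features like "asan_heap_overflow" or "stack_primitive_count" are assumed
# stand-ins for the kinds of signals the abstract mentions (sanitizer output,
# primitives on the call stack), not an actual feature set from the talk.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Columns: [asan_heap_overflow, stack_primitive_count, pc_controlled, write_access]
X = np.array([
    [1, 3, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 2, 1, 0],
    [0, 0, 0, 1],
    [1, 4, 1, 1],
])
# Labels: 1 = exploitable, 0 = not exploitable (toy data)
y = np.array([1, 0, 1, 1, 0, 1])

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# Score a new, unseen crash
new_crash = np.array([[1, 2, 0, 1]])
print("Predicted exploitability:", model.predict(new_crash)[0])
print("Probability estimate:", model.predict_proba(new_crash)[0])
```

In practice, the labels would come from manually triaged or heuristically labeled crashes, and the feature set would be far richer than this toy example.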
Defenders can find a lot of value in this work as well: we can help developers isolate and focus on the crashes that are likely to lead to exploitation instead of trudging through countless crashes and analyzing them manually.
In this talk we will explore the current state of the art in ML for offensive exploration and present our ongoing work to automatically categorize and determine the exploitability of crashes and bugs, accelerating the triage process tremendously.