Sébastien Larinier
Last known affiliation: ESIEA
Sébastien Larinier 🗣
Abstract
The goal of this workshop is to present how to use Python for machine learning. We take examples of security data, such as malware, and explain how to transform the data so that machine learning algorithms can be applied. We detail the different algorithms and the libraries Scikit-learn and TensorFlow.
These algorithms help to quickly cluster a malware database in order to create YARA signatures for use in incident response. The participants will work on a small dataset, develop code based on these libraries and create YARA signatures.
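As an illustration of the kind of exercise covered, here is a minimal sketch assuming the samples have already been turned into numeric static-feature vectors; the feature count, the choice of KMeans and the number of clusters are assumptions made for the example, not the workshop material.

```python
# Minimal sketch: cluster static malware feature vectors with scikit-learn.
# The random matrix stands in for real feature vectors (e.g. section sizes,
# entropy, import counts); samples sharing a cluster label are candidates
# for a common YARA signature.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

X = np.random.rand(200, 8)                     # 200 samples x 8 static features
X_scaled = StandardScaler().fit_transform(X)   # put features on comparable scales

labels = KMeans(n_clusters=5, random_state=0).fit_predict(X_scaled)

for cluster_id in range(5):
    members = np.where(labels == cluster_id)[0]
    print(f"cluster {cluster_id}: {len(members)} samples")
```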
Robert Erra 🗣 | Sébastien Larinier 🗣 | Alexandre Letois | Marwan Burelle
Abstract
Malware is now developed at an industrial scale, and human analysts need automatic tools to help them.
We propose here to present the results of our experiments on a difficult problem: how to cluster a very large set of malware (with only static information) in order to classify new malware. To cluster a set of (numerical) objects is to group these objects into meaningful categories: we want objects in the same group to be closer (or more similar) to each other than to those in other groups. Such groups of similar objects are called clusters. When the data are labeled, this problem is called supervised clustering. It is a difficult problem, but easier than the unsupervised clustering problem we face when the data are not labeled.
All our experiments have been done with code written in Python, and we have mainly used scikit-learn, so you will probably be able to reproduce the work with your own feature vectors (well, we hope so!).
We will present some results on our dataset of two million malware samples, give some examples of what we have found, and talk about future work that could be interesting (that is: problems still to be solved).
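To give an idea of how such a large dataset can be handled with scikit-learn, here is a hedged sketch using incremental (mini-batch) clustering; the number of clusters, the batch size and the random stand-in features are assumptions, since the abstract only states that static features were used.

```python
# Minimal sketch: incremental clustering of a large static-feature matrix with
# scikit-learn's MiniBatchKMeans, feeding the data chunk by chunk so the whole
# dataset never has to sit in memory at once. Random data stands in for the
# real feature vectors.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

km = MiniBatchKMeans(n_clusters=100, batch_size=10_000, random_state=0)

for _ in range(20):                        # iterate over chunks of the dataset
    chunk = np.random.rand(10_000, 64)     # stand-in for 10k samples x 64 features
    km.partial_fit(chunk)

new_labels = km.predict(np.random.rand(1_000, 64))   # assign new samples to clusters
```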
Sébastien Larinier 🗣
Abstract
The goal of the workshop is to present and use the open-source live forensics collector FastIR on different investigation cases on Windows: a RAT with anti-forensics tricks, rootkits, a Trojan with DLL injection… We will also present the new features we have developed this year, with an agent and a server.
Paul Rascagnères 🗣 | Sébastien Larinier 🗣 | Alexandra Toussaint 🗣
Abstract
During an incident, CERT Sekoia investigated fraudulent money transfers. These transfers were made from a French firm's account to other bank accounts based in different places in Europe. The fraud was valued at 800,000 euros.
Initially, the bank of the French firm accused an accounting officer of the firm of making these transfers. The transactions were made through a 2FA authentication process.
CERT Sekoia demonstrated that the accounting officer's computer was compromised and was certainly used to perform these transfers.
The compromise occurred in two stages:
- First, when Dridex arrived on the computer
- Second, when Dridex was used to download another piece of malware (a RAT).
Sébastien Larinier 🗣 | Guillaume Arcas 🗣
Abstract
Exploit Krawler is a framework that allows us to grab the tools used by miscellaneous exploit kits (Java applets, PDFs…) in order to make their analysis easier. These exploit kits are more and more numerous on the Internet and are increasingly used to drop malware and build botnets. One problem for security researchers is to reproduce the infections and gain access to the whole infection chain. The goal of the Exploit Krawler framework is to answer these problems at a large scale. Exploit Krawler is a cluster of Selenium-instrumented browsers. The browsers are driven in different virtual machines; each virtual machine is monitored to detect an intrusion through its browser.
Monitoring is implemented through the hypervisor: the hypervisor API is used to dump the memory, dump the disks and also launch actions on the virtual machine. Processes, sockets and DLLs that are added or removed during the crawl are checked. Each VM reaches the web pages through HoneyProxy, so all accesses are logged and the proxy records the whole set of web transactions (pages, applets, executables…).
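As a rough illustration of this hypervisor-level control, here is a minimal sketch assuming a libvirt-managed QEMU/KVM hypervisor and a hypothetical VM name; the abstract does not name the hypervisor or its API, so this only shows the kind of calls involved.

```python
# Minimal sketch: freeze a crawler VM and dump its memory through the
# hypervisor API (libvirt assumed; VM name and dump path are hypothetical).
import libvirt

conn = libvirt.open("qemu:///system")        # connect to the local hypervisor
dom = conn.lookupByName("crawler-vm-01")     # hypothetical crawler VM

dom.suspend()                                # freeze the guest once an intrusion is detected
dom.coreDump("/tmp/crawler-vm-01.mem", 0)    # dump guest memory for offline analysis
dom.resume()                                 # release the VM so the infection can continue
conn.close()
```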
The initial URL list is shared inside the cluster, and every newly found URL is distributed through a demultiplexer; the goal is to run different browsers on the same URL with different or identical referrers to trigger the infection, as some exploit kits only trigger for a given Referer and/or a given browser.
The cluster is spread over different continents in order to come from different networks, because some exploit kits also trigger depending on the browser's location. When a browser finds a trapped page, it follows the whole infection chain (redirections, JavaScript callbacks) and the virtual machine is frozen as soon as the first control channel with the central server comes up. Meanwhile, the proxy has recorded the whole infection and grabbed the miscellaneous infection vectors (executables, Java applets…) which exploited the browser vulnerabilities. Once the virtual machine is frozen, its whole memory is dumped for analysis, as well as its whole file system. The virtual machine is then released to let the compromise continue, and all connections to the control channel are recorded to capture the whole chain.
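Below is a minimal sketch of what a single crawler node could look like: a Chrome browser driven by Selenium, routed through a local intercepting proxy, with a Referer forced through the Chrome DevTools Protocol. The proxy address, seed URL and Referer value are illustrative assumptions, not taken from Exploit Krawler itself.

```python
# Minimal sketch: one Selenium-driven Chrome instance routed through an
# intercepting proxy, visiting a seed URL with a forced Referer so that
# Referer-keyed exploit kits can still trigger. All values are illustrative.
from selenium import webdriver

PROXY = "127.0.0.1:8080"                     # hypothetical HoneyProxy-style listener

options = webdriver.ChromeOptions()
options.add_argument(f"--proxy-server=http://{PROXY}")
driver = webdriver.Chrome(options=options)

# Force a Referer through the Chrome DevTools Protocol.
driver.execute_cdp_cmd("Network.enable", {})
driver.execute_cdp_cmd("Network.setExtraHTTPHeaders",
                       {"headers": {"Referer": "http://example.com/"}})

for url in ["http://landing.example/index.html"]:    # hypothetical seed list
    driver.get(url)                                  # the proxy records every transaction

driver.quit()
```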