Schedule
Thursday 5th December 2013
Thomas Siebert 🗣
Abstract:
Banking Trojans have proven to be a successful cybercrime business model. However, the creation of URL blacklists has heavily affected cybercrime operations. This talk presents new command-and-control methods that cybercriminals have implemented to avoid blacklists. Another challenge for banking Trojan authors was the emergence of Google Chrome and the 64-bit variant of Microsoft Internet Explorer, which require new techniques to hook encryption functions and plant the Man-in-the-Browser. This part of the talk shows how banking Trojan authors accomplish this task.
Jessa dela Torre 🗣
Abstract:
This presentation will discuss our research on a threat involving massive attacks on WordPress, Joomla and Drupal sites, where the attackers appear to be testing the waters for a new spamming cycle. This routine involves different forms of web threats working independently of each other and has posed a challenge when it comes to email authentication. We will look into (1) the compromised website, (2) the compromised machine, (3) the command-and-control server, (4) the payloads and/or affiliates involved and (5) the telemetry of the data we collected.
White paper available.
Brad Porter 🗣 | Nick Summerlin
Abstract:
This presentation discusses several existing proxy networks used in malware campaigns and our efforts to track them in an automated fashion. Criminals are increasingly turning to proxy networks to provide an additional layer of protection between their adversaries (law enforcement / AV industry / security industry) and their command and control (C2) infrastructure. With increased collaboration between network operators and the security industry, it is becoming more difficult for criminals to maintain and protect their C2 infrastructure. Reliable, durable proxy networks provide relief and allow the botnet masters to focus more of their efforts on monetization. This presentation will cover several proxy networks in use today, how they function, how they are being used, and how they can be tracked.
Oğuz Kaan Pehlivan 🗣
Abstract:
Passive defense mechanisms are a necessary component of well-designed cyber defense programs, but they are no longer sufficient to address increasingly sophisticated threats. Addressing these threats more comprehensively may therefore require additional mechanisms. Active cyber defense consists of proactive actions to prevent, detect and respond to attacks, and provides a real-time capability to discover, detect, analyze and mitigate threats and vulnerabilities. However, this approach raises some legal problems, as the Coreflood botnet takedown operation illustrates. Some argue that it breached personal privacy, while others assert that it eliminated a known threat to the victims' privacy and financial security. Setting the legal limits of such actions, or proposing a legal model for them (stopping C&C servers, eliminating botnets, or going a step further and eliminating malware), therefore becomes significant. In this regard, consent-based theories have to be evaluated.
Vasileios Friligkos 🗣
Abstract:
In this presentation, I will talk about the distinctive characteristics of botnet behavior and, more specifically, how we can detect it with effective solutions while avoiding a flood of false positives: how we can collect pieces of information across the IT infrastructure and, by using multiple layers of correlation as well as context metadata, succeed in detecting botnet infection and activity. Moreover, I will present how we can profit from this enrichment of raw data with context to build and deploy Indicators of Compromise (IOCs) so as to further enhance detection. All of this is made possible by a fairly recent trend in the security world called Security Information and Event Management, or SIEM for short. In recent years, many major players in IT security have made sure to acquire a company offering SIEM technology, foreseeing a rise in demand for such solutions.
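The multi-layer correlation idea can be sketched as follows. This is a hypothetical illustration, not the speaker's system: the indicator values, asset table and field names are made up, and the rule (alert only when two independent log layers agree on the same host) is one simple way to cut single-source false positives.

```python
from collections import defaultdict

# Hypothetical IOCs and asset context (illustrative values only)
IOCS = {"dns": {"evil-c2.example"}, "ip": {"203.0.113.66"}}
ASSETS = {"10.0.0.5": {"owner": "finance", "criticality": "high"}}

def correlate(events):
    """Match raw events against IOCs, then alert only when independent
    layers (e.g. DNS logs and proxy logs) agree on the same host.
    Matched hosts are enriched with asset context metadata."""
    layers_by_host = defaultdict(set)
    for ev in events:
        if ev.get("qname") in IOCS["dns"] or ev.get("dst_ip") in IOCS["ip"]:
            layers_by_host[ev["host"]].add(ev["layer"])
    return [
        {"host": h, "layers": sorted(ls), "context": ASSETS.get(h, {})}
        for h, ls in layers_by_host.items() if len(ls) >= 2
    ]
```

A host seen only in one layer stays below the alert threshold, which is exactly the false-positive suppression the talk describes.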
Enrico Branca 🗣
Abstract:
Our purpose is to present a cyber intelligence system created to analyze network communications in order to detect and identify botnet activities and the distribution of botnet-related malware, both over the Internet and within targeted networks. The ever-increasing dependence of states and businesses on interconnected resources, and the rapid evolution of the highly complex malware used by botnets, threaten to hold any enterprise hostage. Botnets create serious security and data issues and can be effective tools for both cyber-espionage and cyber-crime, as a bot typically runs hidden and is capable of communicating with its command-and-control server using covert or encrypted channels, operating like an intelligent cyber agent.
The technical solution we have designed uses a combination of behavioral analysis and artificial intelligence techniques to process live or recorded information from a variety of sources, and performs cross-cluster correlation and multivariate analysis to generate actionable intelligence. It currently supports the analysis of flow records, inspection of fourteen different network protocols, spam records, DNS and whois answers, file metadata, binary files and log files. Thanks to a multi-agent architecture, we are able to decode and analyze each connection at both the packet and application level; messages sent and received by a bot can be inspected for malicious content or modified to mimic a successful and unhampered operation, while data streams can be cryptographically analyzed and decoded when possible. All imported data is deduplicated, both for each file or partial string imported and at the binary level, by storing only unique sequences of bytes.
The intelligence generated is encrypted and stored using a combination of graph databases and distributed hash tables to ensure that stored information is protected and easily searchable, while results are made available to analysts only through encrypted channels. We will present two cases where such a solution could be used effectively with 100 billion records, and discuss to what extent this approach can be effective. We implemented our prototype system because we believe a better and more automated way of correlating information can be vastly beneficial; initial tests show that both the efficiency and the accuracy of the detection process can be improved with the proposed approach.
Prakhar Prasad 🗣 | Himanshu Sharma 🗣
Abstract:
Botnets have gained a lot of popularity in recent times, and we have seen various kinds of them, ranging from IRC bots to P2P to HTTP bots. In this talk, we will discuss advanced trends in a different kind of botnet, one that operates via browsers, also known as browser-based botnets.
We will demonstrate how the various HTML5 APIs can be used to perform a full-fledged bot attack with a remote C&C server. One of the interesting points is that, in our case, the browser does not need to be vulnerable; instead, we use its legitimate features to craft our attack, get full access to the victim's system, spread across the network, and perform further exploitation.
Etienne Stalmans 🗣 | Barry Irwin
Abstract:
Botnets consist of thousands of hosts infected with malware. As these hosts are widely dispersed and usually not physically accessible to botnet owners, a means to communicate with them is needed. Using Command and Control (C2) servers, botnet owners are able to communicate with and send commands to the members of the botnet with minimal effort. As these C2 servers are used to control the botnet, the entire botnet can be shut down by either taking over or blocking them. In defense against this, botnet owners have employed numerous shutdown-avoidance techniques. One of these techniques, DNS Fast-Flux, relies on rapidly changing address records. The addresses returned by Fast-Flux DNS servers consist of geographically widely distributed hosts, and these Fast-Flux C2 servers tend to be dispersed across multiple countries and timezones. This distributed nature of Fast-Flux botnets differs from legitimate domains, which tend to have geographically clustered server locations. This paper examines the use of spatial autocorrelation techniques based on the geographic distribution of domain servers to detect Fast-Flux domains. Two measures of spatial autocorrelation, Moran's I and Geary's C, are used to produce classifiers. These classifiers use multiple geographic coordinate systems to assign unique values to each C2 server and subsequently to produce efficient and accurate classifiers. It is shown how Fast-Flux domains can be detected reliably while producing only a small percentage of false positives.
White paper available.
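A minimal sketch of the first of these statistics, Moran's I, on toy values with an adjacency-style weight matrix (the paper's actual weights are derived from geographic coordinates of the resolved hosts; this is only an illustration of the formula):

```python
def morans_i(x, w):
    """Moran's I = (n / sum(w)) * sum_ij(w_ij * d_i * d_j) / sum_i(d_i^2),
    where d_i are deviations of x_i from the mean. Positive values
    indicate spatial clustering (as with legitimate, co-located
    servers); values near or below zero indicate dispersion, as with
    geographically scattered Fast-Flux hosts."""
    n = len(x)
    mean = sum(x) / n
    d = [v - mean for v in x]
    w_total = sum(sum(row) for row in w)
    num = sum(w[i][j] * d[i] * d[j] for i in range(n) for j in range(n))
    den = sum(dv * dv for dv in d)
    return (n / w_total) * num / den

# four hosts on a line; immediate neighbours get weight 1
W = [[1 if abs(i - j) == 1 else 0 for j in range(4)] for i in range(4)]
```

With this weight matrix, a clustered attribute pattern yields a positive I while an alternating one yields a negative I, which is the contrast the classifiers exploit.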
Sébastien Duquette 🗣
Abstract:
In recent years, exploit packs have become an increasingly popular tool for the distribution of malware. One advantage of these packs is that they do not require cooperation on the part of the user, which has the potential to be far more effective than traditional social-engineering methods. However, cybercriminals need to bring visitors to those exploit packs. Some groups rely on spam messages to drive traffic, while others rely on paid advertising, a practice sometimes referred to as malvertising.
A third method aims to reach users by compromising the websites they visit. With the discovery of Darkleech and CDorked, it has become apparent that malicious modifications to web servers running on Linux are now used for mass malware distribution. In this presentation we will describe two campaigns using this malware: the Home campaign and the CDorked campaign.
We will describe our own experience in tracking these two campaigns: what worked well, the shortcomings we faced and the steps we took to mitigate these threats. Finally, we will consider what this implies for monitoring efforts and suggest methods to make them more effective.
Maciej Kotowicz 🗣 | Tomasz Bukowski | Łukasz Siewierski
Abstract:
Zitmo (ZeuS in the MObile) is a mutation of ZeuS that appeared for the first time in early 2011, targeting bank customers in Poland and Spain and infecting an unknown number of users. Zitmo consists of two parts: spyware installed on the PC and an application installed on the mobile device. At present, the PC component is capable of running on all modern Windows systems (2000-8), both 32- and 64-bit, while the mobile part runs on Android (although it is prepared for Symbian and BlackBerry as well).
We have recently discovered that the banker used in this malware is a strange mixture of ZeuS and SpyEye, served as a module, and it is only one of the functionalities the malware offers. It also incorporates a sophisticated communication scheme, which we are still investigating, used to transport stolen data from mobile phones. We will show how the malware operates on both PCs and mobile devices to steal money. In addition, we will release tools that aid analysis.
Ivan Fontarensky 🗣
Abstract:
Disass is a binary analysis framework written in Python to automate static malware reverse engineering. Currently Disass is not designed to handle packed binaries, as static unpacking is a pretty tough task on its own.
The approach is simple: it is pointless to repeat the same reverse engineering steps for the same malware again and again. The framework allows a reverser to describe, in a simple way, the individual steps that have to be done and to replay them automatically. Currently, such tasks are typically achieved by relying on byte patterns, regular expressions and possibly fixed offsets. Our approach aims to understand the assembly code: Disass is able to follow the code structure, analyze the stack to extract function arguments, and so on.
This leads to signatures that are far easier to understand and thus to maintain.
Last but not least, by describing assembly code “checkpoints” instead of a byte pattern, the signature is not affected by junk code or by a different compiler version generating variant assembly.
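For contrast, here is a minimal sketch of the brittle byte-pattern approach the framework improves upon. This is not Disass's API, just an illustration: a regex over raw bytes extracts an immediate argument, but a single inserted junk instruction defeats it.

```python
import re

# Signature for x86 "mov eax, imm32" immediately followed by "push eax":
# opcode 0xB8, four immediate bytes, then opcode 0x50.
SIG = re.compile(rb"\xb8(?P<imm>.{4})\x50", re.DOTALL)

def extract_arg(code):
    """Return the pushed immediate, or None if the byte pattern breaks."""
    m = SIG.search(code)
    return int.from_bytes(m.group("imm"), "little") if m else None
```

A semantic signature of the kind the talk describes would instead recognize "the value pushed as this function's argument", so the inserted NOP would not matter.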
Thanh Dinh Ta 🗣 | Jean-Yves Marion 🗣 | Guillaume Bonfante 🗣
Abstract:
One of the issues for a malware detection service is updating its database. For that, new samples must be analyzed. Usually, one tries to replay the behavior of malware in a safe environment, but a bot sample may activate a malicious function only if it receives particular input from its command-and-control server. The game is to find inputs which activate all relevant branches in a bot binary in order to retrieve its malicious behaviors. From a larger viewpoint, this problem is a combination of the program exploration and message format extraction problems, both of which are areas of active research. This is a work in progress in which we try a new approach to code coverage relying on input tainting.
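A toy illustration of the input-tainting idea (entirely hypothetical, not the authors' implementation): propagate, for each value, the set of input byte offsets that influenced it, so that when a branch condition turns out to be tainted we know which bytes of the C&C message to mutate in order to flip that branch.

```python
def taint(data):
    """Pair each input byte with the singleton set of its own offset."""
    return [(b, {i}) for i, b in enumerate(data)]

def explore(message):
    """Toy bot dispatcher: the payload fires only when the 'command'
    byte at offset 3 equals 0x42. Returns the branch outcome together
    with the input offsets that control it."""
    tainted = taint(message)
    value, deps = tainted[3]
    taken = (value == 0x42)
    return taken, deps
```

Mutating only the reported offsets (here, byte 3) is then enough to activate the dormant branch, instead of blindly fuzzing the whole message.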
Hendrik Adrian 🗣 | Dhia Mahjoub 🗣
Abstract:
Facing a come-back Fast-Flux (HLUX) botnet like Kelihos (Khelios), which had previously been announced as shut down by big entities, is not an easy task for a small group of people. A better understanding of the technical details “under the hood” of the threat itself provided a better method for suppression, evidence collection, targeted intelligence and law-enforcement coordination across regions and countries, in order to control the botnet's growth and, in the end, support the shutdown effort. This is the story of the persistent and resourceful effort of the engineers gathered in MalwareMustDie, together with partners, in fighting this well-known botnet.
To make the strategy work as expected, a solid team is needed, with vertical and horizontal team management and communication, and InfoSec has all of the resources needed to make it happen. At Botconf we share the know-how and the motivation for how good people and engineers can focus and come together to achieve something big, and how the management of battling a botnet can be done very cost-effectively.
The talk will close with an offline full disclosure of the important achievements collected during the operation, and there will be a hall of fame for the contributors involved.
We will try to cover all aspects within the 20 minutes of this short talk, handing out print-outs of the shared basic details before the talk.
Friday 6th December 2013
Pasquale Stirparo 🗣 | Laurent Beslay 🗣
Abstract:
Due to the substantially different ecosystem we have to deal with in mobile security, it is harder to detect and react to malware attacks using conventional techniques. We introduce the concept of the Participatory Honeypot, a privacy-by-design system in which users become partners in the collection of meaningful information that is subsequently used for analysis.
White paper available.
Tom Ueltschi 🗣
Abstract:
In early 2011 we discovered some malware-infected systems in our network. Starting from one A/V event, we found several host- and network-based indicators to identify and confirm several infections within our company. A few weeks later, the sinkholing of several known C&C domains showed that the botnet was very big (several million bots). I quickly became obsessed with analyzing and hunting this malware, which could infect fully patched systems protected by firewalls, IPS and multi-layered A/V without using exploits (only social engineering).
The malware got some media attention in June 2012 under titles such as “printer virus”, “printer bomb” or “Trojan.Milicenso: A Paper Salesman’s Dream Come True”. A/V detection names for this malware vary greatly, and there may be as little as one registry key in common as an indicator across all infected hosts. Over time, the infection and C&C domains, IPs and URL patterns changed to avoid detection.
In late 2012, an “anti-sinkholing” technique was introduced in the use of C&C domains. Just recently I discovered how this technique can be overcome to allow the sinkholing of botnet domains again. Unfortunately, the currently used C&C domains are not as well known as those identified after the incident and analysis in 2011.
David Décary-Hétu 🗣
Abstract:
The Internet has become over the past fifteen years the medium of choice for people to communicate with each other. As Boase & Wellman (2002) have predicted, we are now firmly in the era of networked individualism where each person creates his own personal social network and interacts with numerous circles of individuals who have very different backgrounds and live in different time zones.
This telecommunication revolution has forever changed how people communicate in both legitimate and illegitimate parts of society. Past research (Wall, 2007; Décary-Hétu, 2013) has shown that the balance of power between guardians, victims and criminals has shifted over the past few years in favour of the latter. Indeed, it is now easier than ever for a criminal to find willing co-offenders and to offload stolen financial data on the black market (Holt & Lampke, 2010). To do so, criminals can use online forums and IRC chat rooms to post messages about what they need or have for sale. Possible business partners can then privately communicate in order to negotiate a satisfactory agreement.
While the Internet has solved many of the networking issues criminals were facing, it has also created new ones. As no one will share (or is able to prove) past criminal activities, criminals have had to rely on signs and signals that others send or display in order to decide whether or not to trust someone with a co-offending opportunity or with a business transaction. Signs and signals (Gambetta, 2009) such as clothing, tattoos and ethnicity that were commonly used to assess the trustworthiness of individuals are difficult to translate in the virtual world. In the context of the Internet, it is considerably easier to fake any of the aforementioned signs and signals and they therefore lose most of their significance.
To offset this problem, the administrators and moderators of online criminal forums and IRC chat rooms have adopted reputation scales that work just like the ones on popular merchant sites like eBay and Amazon (Motoyama et al., 2011). Users and administrators can then rate each other and provide a sense of the trustworthiness of others in the criminal community. Past research (Décary-Hétu, 2013) has shown that this reputation is not distributed randomly among the criminal population. On the contrary, many predictors of higher reputation can be identified, and only a few individuals manage to outperform others in this regard. Those that accumulate the most reputation capital can then use it to increase their sales of illicit goods and services (Décary-Hétu, 2013).
This presentation aims to build on this research and provide a new understanding of how individuals accumulate reputation by looking at an illicit forum where participants talk about botnets and buy/sell botnet-related services. To do so, we have collected data on all of the forum members as well as their reputation level over a period of several months. Using Nagin et al.’s (2006) life-course trajectories approach, we have developed a model that identifies the different paths that members follow when they accumulate reputation in this online forum. This approach takes into account multiple predictors to classify each individual into a single group of offenders based on how they accumulate reputation points. Our results confirm that reputation is not distributed randomly, but extend past research by demonstrating that there are differences in how people accumulate reputation. This enables us to better understand the careers of these individuals and to create tools that could identify key players in the online criminal underground before they have reached their full potential.
Paul Rascagnères 🗣
Abstract:
Earlier this year Mandiant published a report about a hacking group called APT1. Paul’s presentation focuses on his own in-depth analysis of this group, based on the information provided by Mandiant. Paul discovered numerous C&C (Command & Control) servers located in China running the same software that is highlighted in the Mandiant report. He managed to penetrate the infrastructure using vulnerabilities identified in the C&C server. Paul’s research provides a rare insight into activities and methodologies used by these attackers. This presentation will identify the infrastructure, tools, and malware used by the group to perform unscheduled backups of company data and intellectual property.
Thomas Barabosch 🗣 | Sebastian Eschweiler 🗣 | Mohammad Qasem | Daniel Panteleit | Daniel Plohmann | Elmar Padilla
Abstract:
We will present a general-purpose laboratory for large-scale botnet experiments. We reveal how several key points have been implemented, e.g., realistic simulation of the Internet or total observability within the laboratory. As a case study, we demonstrate the feasibility of our approach in simulating a large-scale takedown of the Citadel botnet. Additionally, we will show a screencast of the Citadel takedown.
Ronan Mouchoux 🗣
Abstract:
This presentation explains how MalwareTrap works: a DNS resolution traffic analysis platform deployed in a major French company’s network. MalwareTrap was created to complement internal anti-malware protections. It constantly listens to the internal DNS resolution traffic between workstations and the internal DNS servers. When it spots a DNS request for a domain name that MalwareTrap considers a security threat, the internal DNS replies not with the domain name’s real IP but with the IP of MalwareTrap’s entry point. The suspicious workstation then talks to MalwareTrap as if it were the server behind the domain name.
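The redirection step can be sketched as follows. This is a conceptual illustration only: the blocklisted domain names and the trap's entry-point address are made up, and a real deployment would do this inside the DNS server's response path.

```python
BLOCKLIST = {"bad-c2.example", "dropper.example"}  # illustrative threat domains
TRAP_IP = "10.99.0.1"                              # assumed MalwareTrap entry point

def resolve(qname, upstream_lookup):
    """Answer blocklisted names with the trap's IP, so the suspicious
    workstation talks to MalwareTrap; resolve everything else normally
    via the provided upstream lookup function."""
    if qname.lower().rstrip(".") in BLOCKLIST:
        return TRAP_IP
    return upstream_lookup(qname)
```

The workstation's subsequent traffic to the trap can then be recorded and analyzed without ever reaching the real C&C server.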
Sébastien Larinier 🗣 | Guillaume Arcas 🗣
Abstract:
Exploit Krawler is a device that allows us to grab the tools used by miscellaneous exploit kits (Java applets, PDFs, etc.) in order to make their analysis easier. These exploit kits are more and more numerous on the Internet and are increasingly used to drop malware and build botnets. One problem for security researchers is reproducing the infections and accessing the whole infection chain. The goal of the Exploit Krawler framework is to address these problems at a large scale. Exploit Krawler is a cluster of Selenium-instrumented browsers. The browsers are driven in different virtual machines; each virtual machine is monitored to detect an intrusion through its browser.
Monitoring is implemented through the hypervisor. The hypervisor API is used to dump the memory, dump the disks and also launch actions on the virtual machine. Processes, sockets and DLLs that are added or removed during the crawl are checked. Each VM reaches the web pages through HoneyProxy, so all accesses are logged and the proxy downloads the whole set of web transactions (pages, applets, executables, …).
The initial URL list is shared within the cluster, and every newly found URL is distributed through a demultiplexer. The goal is to run different browsers on the same URL with different or identical referrers to trigger the infection, as some exploit kits only trigger for a given Referer and/or a given browser.
The cluster is spread across different continents in order to come from different networks, because some exploit kits also trigger depending on the browser’s location. When a browser finds a trapped page, it will follow the whole infection chain (redirections, JavaScript callbacks), and the virtual machine will be frozen as soon as the first control channel with the central server comes up. Meanwhile, the proxy has recorded the whole infection and grabbed the miscellaneous infection vectors (executables, Java applets, …) that exploited the browser vulnerabilities. Once the virtual machine is frozen, the whole memory is dumped for analysis, and the whole file system as well. The virtual machine is then released to let the compromise continue, and all connections to the control channel are recorded to capture the whole chain.
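The demultiplexing step described above can be sketched like this. The VM labels and referrer values are illustrative, not taken from the framework: the point is simply that each newly found URL fans out to every (browser VM, Referer) combination, since some exploit kits fire only for a particular pair.

```python
from itertools import product

BROWSER_VMS = ["ie8-winxp", "firefox-win7"]       # assumed VM labels
REFERERS = [None, "https://www.google.com/"]      # no referrer vs. search referrer

def demultiplex(urls):
    """Produce one crawl job per (URL, VM, Referer) combination."""
    return [
        {"url": u, "vm": vm, "referer": ref}
        for u, vm, ref in product(urls, BROWSER_VMS, REFERERS)
    ]
```

Each job would then be dispatched to the corresponding Selenium-driven VM, which sets the Referer before visiting the URL.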
Jason Jones 🗣 | Marc Eisenbarth 🗣
Abstract:
The problem of tracking botnets is not a new one, but it still proves to be an important and fruitful research topic. We have been tracking many botnets for years using an internally built tracking system, which has undergone a number of significant improvements and changes over the years. Its basic tenet is a language for implementing botnet command-and-control mechanisms and tracking the resulting infiltrated botnets. This paper will cover the evolution of this system, which offers a vignette of the evolution of the modern-day botnet itself. With this historical backdrop, we discuss our current monitoring mechanisms and selected botnet family case studies, highlighting results we have obtained from our system, and conclude by offering a toolkit which allows others to conduct similar investigations.
White paper available.
Thomas Chopitea 🗣
Abstract:
Since their first signs of existence in the early 2000s, botnets have been a subject of interest for information security researchers. Considering the technological advancements in the latest releases of the most common botnets, it can be said that their impact on the cyber-landscape is not only technical, but also financial and sociological. Nowadays, botnets are a real game-changer in the underground economy, providing criminals with the infrastructure they need to perpetrate a wide array of crimes: spam, clickjacking, carding and denial-of-service attacks are some well-known examples.
There are several methods to study botnets – some of them stem from classical malware analysis techniques, like reverse engineering, behavioral analysis, and others are closer to computer and network forensic science. Since botnets are usually operated according to important financial incentives, open-source investigation techniques (a.k.a. ‘good old detective work’) are also a way to gather interesting intelligence on botnets and their handlers.
Botnets have a very specific characteristic that makes them unique: they are social malware. Just as social animals must interact with each other in order to survive, bots belonging to the same botnet must communicate, either among themselves or with a central command and control (C&C) point, in order to run. Bots can hide, but they must run; no matter how complex or advanced, they will eventually have to reach out to their peers. This has great consequences for their resilience, and also for how complicated it is to create and maintain one. Botnets’ communication channels and protocols, as well as C&C infrastructures, will be our main focus throughout the presentation.
I will present my point of view on how network traffic and botnets’ communication protocols can be analyzed to understand how they operate, and how to establish proper strategies for identification, containment, and countermeasures against botnet attacks. I will start with a brief overview of the evolution of botnets’ network architectures throughout history, which usually follows closely the habits of corporate and personal computer users. With the important financial motives behind them, botnets are becoming increasingly complex; different botnets use different C&C topologies: centralized, decentralized, multi-server, hierarchical, peer-to-peer, fluxing… We will take a look at these architectures, see what kind of information we can extract when analyzing their communications, and which countermeasures are best for each case. I will also introduce Malcom, a malware communications analysis tool that I created to obtain real-time visualizations of a given malware’s network communications. Malcom allows us to determine, in a flash, what kind of topology is in use, and to track changes as they are made by the botmasters.
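As a hypothetical illustration of the kind of heuristic such a communications graph enables (this is not Malcom's actual code, and the threshold is an assumption): given observed (source, destination) flows, a centralized botnet shows one dominant endpoint, while peer-to-peer traffic does not.

```python
from collections import Counter

def classify_topology(flows, hub_ratio=0.5):
    """flows: list of (src, dst) pairs from observed communications.
    If a single destination receives at least `hub_ratio` of all flows,
    classify the topology as centralized C&C; otherwise peer-to-peer."""
    indegree = Counter(dst for _, dst in flows)
    top = indegree.most_common(1)[0][1]
    return "centralized" if top / len(flows) >= hub_ratio else "peer-to-peer"
```

Real topologies (multi-server, hierarchical, fluxing) would of course need richer graph features, but even this crude in-degree test separates the two extreme cases.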
Actionable intelligence is great to have when dealing with botnets, but knowing where to strike, which servers to take down, or which addresses to avoid is not enough if that information is not fresh. We will see how Malcom can be used to track and correlate malicious elements in botnets, and how that information can be used to build a profile of the botherders or the malware family, using Whois, email, URL, or AS information. Sharing that kind of information with other entities dealing with these threats (such as CERTs) is a crucial step in the fight against malware. We will also see how Malcom allows such information to be shared in a safe and anonymous way, so as to make incident response as swift as possible.