Contents of the document
|Content||The main focus of the Computer Security Seminar is on current research topics in the fields of Internet Security, Cryptography, Foundations of Security, Hardware Security, Software Security, and Telecommunication Security.|
|Event Type||advanced seminar|
|Lecturer||Prof. Jean-Pierre Seifert|
|Course ID|| 0434 L 956|
|Teaching Period||summer semester 2013|
|Room||Room TEL 1118/19|
|Time||Preparatory Meeting: April 17th, 2013, 14 - 16hrs|
Group Meeting: June 10th or 13th, 14 - 16hrs
Talks: July 8th, 10th and/or 11th, starting 14 hrs
|Credit||3 ECTS / 2 SWS|
|Exam||attendance in group meetings, paper and talk|
Please note: You have to fulfill tasks during the group meetings to get the certificate. That means that if you do not attend a group meeting, you will not receive the seminar certificate.
All deadlines (meetings, drafts as well as final versions) are fixed and hard. Each day of delay incurs a 0.3 penalty on the final grade.
|Target Audience||main course students|
|Contact Person||Prof. Jean-Pierre Seifert and Juliane Krämer and Max Suraev|
Schedule (subject to modifications!)
April 17th, 2013 (preparatory meeting)
- Introduction to Computer Security Seminar: TEL 1118/19, 14 - 16 hrs
- Please inform yourself about the seminar topics.
until April 24th, 2013
- Send 3 topic wishes + your matriculation number + course type (bachelor/master/diplom) via email to Max Suraev. (Topics are allocated by lot if multiple wishes are for the same topic.)
- Also indicate whether you are an Erasmus student.
- Subscribe to the ISIS course with your matriculation number (Matrikelnummer).
until May 8th, 2013
- Bring along the yellow module application to our team assistant (otherwise we have to exclude you from the seminar).
until May 15th, 2013
- Read the papers.
- Elaborate on your topic (search literature, sort it, read it and - if possible - understand it).
- Send a short version of your seminar paper to your supervisor, containing the planned structure and brief notes on the planned content (as a discussion basis).
until June 5th, 2013
- Summarize the literature and the papers in a seminar paper (up to 10 pages).
- Send the draft to your supervisor.
June 10th & 13th, 2013, 14 - 16 hrs (group meetings)
- Read and correct the drafts of your fellow group members.
- Discuss and give feedback to your group members.
- Attendance is mandatory!
- (only one of the two appointments per student/group)
June 19th, 2013
- Incorporate results and hints from the group meeting.
- Send final version of the seminar paper to your supervisor.
June 26th, 2013
- Prepare your presentation slides (20 minutes talk, 10 minutes discussion) and send them to your supervisor.
tba, July 2013 (talks)
- Prepare your talk.
- Attendance at all talks is mandatory!
# 1: ....
Supervisor: Janis Danisevskis
Description: For various reasons, operating systems expose the DMA capability of hardware devices to user-space. An example of such an application is the connection of a camera with a display controller: both devices should share a single buffer for performance reasons. Exposing such a feature to the user can, however, have a severe impact on the integrity of the system, as can be seen from the recently discovered /dev/exynos_mem bug. It boils down to the issue of proving to the device driver that the user-space program has the right to grant the device access to a certain memory resource. Linux introduced the DMA buffer sharing API in version 3.3, addressing this issue in a safe and secure manner (http://lwn.net/Articles/474819/). Other implementations have appeared, such as UMP (universal memory provider) by ARM. The issue of proving ownership of resources to other system components, be it the OS kernel, drivers, or another user-space component, has led to the development of various techniques.
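The core idea behind dma-buf style sharing can be modeled as unforgeable handles: a buffer is exported as a handle (in Linux, a file descriptor), and a consumer driver accepts only such handles, never raw addresses, so possession of the handle is the proof of ownership. The following is a minimal Python sketch of that concept only; the class and method names are illustrative and are not the real kernel API:

```python
class Allocator:
    """Models a kernel allocator that exports buffers as unforgeable handles.

    Hypothetical model, not the real dma-buf API: the handle itself is the
    proof of ownership, so a driver never has to guess whether a
    user-supplied address is legitimate.
    """
    def __init__(self):
        self._buffers = {}
        self._next_fd = 3  # mimic fd numbering after stdin/stdout/stderr

    def export(self, data):
        # Allocate a buffer and hand back a handle (like a dma-buf fd).
        fd = self._next_fd
        self._next_fd += 1
        self._buffers[fd] = bytearray(data)
        return fd

    def resolve(self, fd):
        # Only handles previously exported by the allocator resolve to memory.
        if fd not in self._buffers:
            raise PermissionError("not a buffer you own")
        return self._buffers[fd]


class DisplayDriver:
    """A consumer device: accepts a handle, never a raw address."""
    def __init__(self, allocator):
        self.allocator = allocator

    def scan_out(self, fd):
        buf = self.allocator.resolve(fd)
        return bytes(buf)


alloc = Allocator()
camera_buf = alloc.export(b"frame-0")   # camera driver exports its frame buffer
display = DisplayDriver(alloc)
print(display.scan_out(camera_buf))     # sharing succeeds: b'frame-0'

try:
    display.scan_out(1234)              # a forged handle is rejected
except PermissionError as e:
    print("rejected:", e)
```

This is precisely the property the /dev/exynos_mem bug lacked: there, any user could name arbitrary physical memory, whereas here a consumer only ever dereferences handles the allocator itself issued.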
Task: Your task is to provide an overview of such techniques and discuss them in the light of DMA buffer sharing and quota management. Starting points:
Abstract: I/O intensive workloads running in virtual machines can suffer massive performance degradation. Direct assignment of I/O devices to virtual machines is the best performing I/O virtualization mechanism, but its performance still remains far from the bare-metal (non-virtualized) case. The primary gap between direct assignment I/O performance and bare-metal I/O performance is the overhead of mapping the VM's memory pages for DMA in IOMMU translation tables. One could avoid this overhead by mapping all of the VM's pages for the lifetime of the VM, but this leads to memory consumption which is unacceptable in many scenarios. The DMA mapping problem can be stated briefly as "when should a memory page be mapped or unmapped for DMA?" We begin by presenting a theoretical framework for reasoning about the DMA mapping problem. Then, using a quota-based approach, we propose the on-demand DMA mapping strategy, which provides the best DMA mapping performance for a given amount of memory consumed. In particular, on-demand mapping can achieve the same performance as state-of-the-art mapping strategies while consuming much less memory (exact amount depends on the workload's requirements). We present the design and implementation of on-demand mapping in the Linux-based KVM hypervisor and an experimental evaluation of its application to various workloads.
Abstract: Since we first devised and defined password-capabilities as a new technique for building capability-based operating systems, a number of research systems around the world have used them as the bases for a variety of operating systems. Our original Password-Capability System was implemented on custom built hardware with a novel address translation and protection scheme specifically designed to support password-capabilities. The password-capability concept later formed the basis of Opal developed at the University of Washington, and Mungi from the University of New South Wales, both of which used commercially available hardware. A second generation password-capability based system, Walnut, was developed at Monash University in the 1990s. Walnut was designed to run on commercially available hardware. It addressed some shortcomings of the original Password-Capability System but had to sacrifice some features that depended on hardware support. A third generation system that will extend Walnut to support mandatory security policies and other advanced features is currently being considered. This paper analyses the evolution of the Password-Capability System into Walnut, examines the shortcomings of the systems, and identifies issues to be addressed in the new system.
# 2: The Photonic Side Channel
Supervisor: Juliane Krämer
Description: Since the mid-nineties, side channel attacks have been studied in depth and a lot of effort has been placed in securing smartcards and embedded systems against them. Several sources of side channel leakage proved to yield enough information for analysis, including power consumption, timing, and electromagnetic radiation. The photonic side channel, which exploits photonic emissions from switching transistors, was first presented in 2008. However, the immense cost for the necessary measurement setup was regarded as unaffordable by most conceivable attackers. Although it was proven to work in a first proof of concept attack, the photonic side channel was not regarded as a realistic threat as recently as five years ago. More recently, such attacks were carried out with low cost equipment, demonstrating that such attacks pose a greater threat than previously believed.
Task: Write a Systematization of Knowledge (SoK) paper about the topic "The Photonic Side Channel". That is: read literature and summarize current scientific knowledge about side channel attacks in general and the photonic side channel and related attacks in particular. Use the following papers as a starting point for your research.
Abstract: The authors present a short note describing the newly emerging optical side channel. The basic idea of the channel is very simple - many parts of the integrated circuits consist of transistors that represent one of the two logical states 0 or 1. When the state changes, there is some light that is emitted in the form of a few photons. A device employing the method which is able to detect these photons (called picosecond imaging circuit analysis) is available in several laboratories, for example, in the French space agency CNES. From the point of view of the cryptanalyst, once the optical side channel information is available for a specific cipher on a device, it is possible to identify deep inner states that should not be revealed. In fact, it turns out that for an outdated and unprotected 0.8 µm PIC16F84A microcontroller it is possible to recover the AES secret key directly during the initial AddRoundKey operation as the side channel can distinguish the individual key bits being XORed to the plaintext.
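The AddRoundKey attack described above reduces to XOR algebra: the first AES operation XORs the known plaintext with the round-0 key, so an attacker who observes the resulting state bits through the side channel recovers the key with one more XOR. A toy Python illustration (the 16-byte key value is arbitrary and purely illustrative; no actual photonic measurement is modeled):

```python
import os

key = bytes(range(16))       # hypothetical 16-byte AES round-0 key (illustrative)
plaintext = os.urandom(16)   # known/chosen plaintext

# AddRoundKey: state = plaintext XOR key, the very first AES operation.
state_after_ark = bytes(p ^ k for p, k in zip(plaintext, key))

# If the photonic side channel leaks the state bits after AddRoundKey,
# XORing them with the known plaintext yields the key directly.
recovered = bytes(s ^ p for s, p in zip(state_after_ark, plaintext))
assert recovered == key
print("key recovered:", recovered.hex())
```

This is why leakage at AddRoundKey is so devastating: unlike later rounds, no inversion of S-Boxes or key schedule is required, only a single XOR against data the attacker already knows.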
Abstract: This work presents a novel low-cost optoelectronic setup for time- and spatially resolved analysis of photonic emissions and a corresponding methodology, Simple Photonic Emission Analysis (SPEA). Observing the backside of ICs, the system captures extremely weak photo-emissions from switching transistors and relates them to code running in the chip. SPEA utilizes both spatial and temporal information about these emissions to perform side channel analysis of ICs. We successfully performed SPEA of a proof-of-concept AES implementation and were able to recover the full AES secret key by monitoring accesses to the S-Box. This attack directly exploits the side channel leakage of a single transistor and requires no additional data processing. The system costs and the necessary time for an attack are comparable to power analysis techniques. The presented approach significantly reduces the amount of effort required to perform attacks based on photonic emission analysis and allows AES key recovery in a relevant amount of time. We present practical results for the AVR ATMega328P and the AVR XMega128A1.
Differential Photonic Emission Analysis (will be sent to the student)
Abstract: This work presents the first differential side channel analysis to exploit photonic emissions. We call this form of analysis Differential Photonic Emission Analysis (DPEA). After identifying a suitable area for the analysis, our system captures photonic emissions from switching transistors and relates them to the program running in the chip. The subsequent differential analysis reveals the secret key. We recovered leakage from the datapath's driving inverters of a proof of concept AES-128 implementation. We successfully performed DPEA and were able to recover the full AES secret key from the photonic emissions. The system costs for an attack are comparable to power analysis techniques and the presented approach allows for AES key recovery in a relevant amount of time. Thus, this work extends the research on the photonic side channel and emphasizes that the photonic side channel poses a serious threat to modern secure ICs.
# 3: Secure GPU Virtualization
Supervisor: Matthias Lange
Description: Modern graphics processors (GPUs) can produce high fidelity images orders of magnitude faster than general purpose CPUs. Especially on modern smartphones, 3D accelerated graphics is key to a satisfying user experience. Virtualization has been proposed as a technology to enhance security, but most of today's virtualization solutions severely limit the growing class of graphics-intensive applications. Compared to pure software rendering, hardware acceleration provides much better performance.
Task: In this seminar paper the student should research and compare different GPU virtualization techniques. The following papers may be used as a starting point.
Abstract: Modern graphics co-processors (GPUs) can produce high fidelity images several orders of magnitude faster than general purpose CPUs, and this performance expectation is rapidly becoming ubiquitous in personal computers. Despite this, GPU virtualization is a nascent field of research. This paper introduces a taxonomy of strategies for GPU virtualization and describes in detail the specific GPU virtualization architecture developed for VMware’s hosted products (VMware Workstation and VMware Fusion). We analyze the performance of our GPU virtualization with a combination of applications and microbenchmarks. We also compare against software rendering, the GPU virtualization in Parallels Desktop 3.0, and the native GPU. We find that taking advantage of hardware acceleration significantly closes the gap between pure emulation and native, but that different implementations and host graphics stacks show distinct variation. The microbenchmarks show that our architecture amplifies the overheads in the traditional graphics API bottlenecks: draw calls, downloading buffers, and batch sizes. Our virtual GPU architecture runs modern graphics-intensive games and applications at interactive frame rates while preserving virtual machine portability. The applications we tested achieve from 86% to 12% of native rates and 43 to 18 frames per second with VMware Fusion 2.0.
Abstract: This paper describes VMGL, a cross-platform OpenGL virtualization solution that is both VMM and GPU independent. VMGL allows applications executing within virtual machines (VMs) to leverage hardware rendering acceleration, thus solving a problem that has limited virtualization of a growing class of graphics-intensive applications. VMGL also provides applications running within VMs with suspend and resume capabilities across GPUs from different vendors. Our experimental results from a number of graphics-intensive applications show that VMGL provides excellent rendering performance, coming within 14% or better of native graphics hardware acceleration. Further, VMGL’s performance is two orders of magnitude better than that of software rendering, the commonly available alternative today for graphics-intensive applications running in virtualized environments. Our results confirm VMGL’s portability across VMware Workstation and Xen (on VT and non-VT hardware), and across Linux (with and without paravirtualization), FreeBSD, and Solaris. Finally, the resource demands of VMGL align well with the emerging trend of multi-core processors.
Abstract: The available rendering performance on current computers increases constantly, primarily by employing parallel algorithms using the newest many-core hardware, as for example multi-core CPUs or GPUs. This development enables faster rasterization, as well as conspicuously faster software-based real-time ray tracing. Despite the tremendous progress in rendering power, there are and always will be applications in classical computer graphics and Virtual Reality, which require distributed configurations employing multiple machines for both rendering and display. In this paper we address this problem and use NMM, a distributed multimedia middleware, to build a powerful and flexible rendering framework. Our framework is highly modular, and can be easily reconfigured – even at runtime – to meet the changing demands of applications built on top of it. We show that the flexibility of our approach comes at a negligible cost in comparison to a specialized and highly-optimized implementation of distributed rendering.
# 4: Threats on Mobile Devices and their Detection
Supervisor: Steffen Liebergeld
Description: Smartphones and tablets have become very popular. They aggregate people's private information such as contact information, emails, and SMS/MMS, and even store login credentials for social networks. Most mobile devices have basebands to access the cellular network for calls, SMS/MMS, and mobile data connectivity. With their high connectivity, built-in cash generation through premium calls/SMS, and wealth of precious private information, mobile devices have also become very popular targets for attackers.
Task: Write a Systematization of Knowledge (SoK) paper about the topic "Threats on Mobile Devices and their Detection". That is: read literature and summarize current scientific knowledge about threats on mobile devices and how these threats can be detected. You should also give your opinion on the topic as established during your research. For this seminar, you can limit your research to Android. Use the following papers as a starting point for your research.
Abstract: The popularity and adoption of smartphones has greatly stimulated the spread of mobile malware, especially on the popular platforms such as Android. In light of their rapid growth, there is a pressing need to develop effective solutions. However, our defense capability is largely constrained by the limited understanding of these emerging mobile malware and the lack of timely access to related samples. In this paper, we focus on the Android platform and aim to systematize or characterize existing Android malware. Particularly, with more than one year effort, we have managed to collect more than 1,200 malware samples that cover the majority of existing Android malware families, ranging from their debut in August 2010 to recent ones in October 2011. In addition, we systematically characterize them from various aspects, including their installation methods, activation mechanisms as well as the nature of carried malicious payloads. The characterization and a subsequent evolution-based study of representative families reveal that they are evolving rapidly to circumvent the detection from existing mobile anti-virus software. Based on the evaluation with four representative mobile security software, our experiments show that the best case detects 79.6% of them while the worst case detects only 20.2% in our dataset. These results clearly call for the need to better develop next-generation anti-mobile-malware solutions.
Abstract: It is still difficult to assess the real danger posed by Bluetooth-propagated malware. BlueBat is an effort to build and deploy a practical honeypot for capturing in-the-wild samples and empirically study malware prevalence. This paper describes the design and implementation of a first prototype, focusing on Bluetooth worms propagating over the OBEX Push service. We develop and perform initial field testing of different types of sensors, in order to achieve an optimal collection capability. We analyze the results of the field tests, and demonstrate various design constraints. Also, from these preliminary tests, we cast some doubts on the prevalence of in-the-wild Bluetooth worms, and hint at some reasons why such threat could be more limited than previously thought.
Abstract: Mobile nodes, in particular smartphones, are one of the most relevant devices in the current Internet in terms of quantity and economic impact. There is the common belief that those devices are of special interest for attackers due to their limited resources and the serious data they store. On the other hand, the mobile regime is a very lively network environment, which misses the (limited) ground truth we have in commonly connected Internet nodes. In this paper we argue for a simple long-term measurement infrastructure that allows for (1) the analysis of unsolicited traffic to and from mobile devices and (2) fair comparison with wired Internet access. We introduce the design and implementation of a mobile honeypot, which is deployed on standard hardware for more than 1.5 years. Two independent groups developed the same concept for the system. We also present preliminary measurement results.
# 5: Internet censorship circumvention
Supervisor: Benjamin Michéle
Description: Internet censorship is ubiquitous, and therefore many circumvention solutions have been proposed. Some of these solutions enhance the Tor network to evade censorship; others hide traffic using various "legitimate" Internet services.
Task: Summarize and compare the following papers from CCS'12 with regard to their strengths and weaknesses.
Abstract: The Tor network is designed to provide users with low-latency anonymous communications. Tor clients build circuits with publicly listed relays to anonymously reach their destinations. However, since the relays are publicly listed, they can be easily blocked by censoring adversaries. Consequently, the Tor project envisioned the possibility of unlisted entry points to the Tor network, commonly known as bridges. We address the issue of preventing censors from detecting the bridges by observing the communications between them and nodes in their network. We propose a model in which the client obfuscates its messages to the bridge in a widely used protocol over the Internet. We investigate using Skype video calls as our target protocol and our goal is to make it difficult for the censoring adversary to distinguish between the obfuscated bridge connections and actual Skype calls using statistical comparisons.
We have implemented our model as a proof-of-concept pluggable transport for Tor, which is available under an open-source licence. Using this implementation, we observed the obfuscated bridge communications, compared them with those of Skype calls, and present the results.
Abstract: A key challenge in censorship-resistant web browsing is being able to direct legitimate users to redirection proxies while preventing censors, posing as insiders, from discovering their addresses and blocking them. We propose a new framework for censorship-resistant web browsing called CensorSpoofer that addresses this challenge by exploiting the asymmetric nature of web browsing traffic and making use of IP spoofing. CensorSpoofer de-couples the upstream and downstream channels, using a low-bandwidth indirect channel for delivering outbound requests (URLs) and a high-bandwidth direct channel for downloading web content. The upstream channel hides the request contents using steganographic encoding within Email or instant messages, whereas the downstream channel uses IP address spoofing so that the real address of the proxies is not revealed either to legitimate users or censors. We built a proof-of-concept prototype that uses encrypted VoIP for this downstream channel and demonstrated the feasibility of using the CensorSpoofer framework in a realistic environment.
Abstract: Internet censorship by governments is an increasingly common practice worldwide. Internet users and censors are locked in an arms race: as users find ways to evade censorship schemes, the censors develop countermeasures for the evasion tactics. One of the most popular and effective circumvention tools, Tor, must regularly adjust its network traffic signature to remain usable.
We present StegoTorus, a tool that comprehensively disguises Tor from protocol analysis. To foil analysis of packet contents, Tor's traffic is steganographed to resemble an innocuous cover protocol, such as HTTP. To foil analysis at the transport level, the Tor circuit is distributed over many shorter-lived connections with per-packet characteristics that mimic cover-protocol traffic. Our evaluation demonstrates that StegoTorus improves the resilience of Tor to fingerprinting attacks and delivers usable performance.
# 7: Deterministic Replay
Supervisor: Michael Peter
Description: Over the last few decades, many research projects have used deterministic replay for purposes such as debugging, performance analysis, and forensics. Despite the obvious utility of logging and replay, and the existence of a number of prototype implementations, mainstream operating systems and virtualization platforms have largely failed to support them. In the past, the construction of efficient deterministic record-and-replay systems has been hindered mainly by two factors: First, the interface of general purpose operating systems does not easily allow the interception of execution events and their later use to repeat the execution. Second, deterministic replay often imposed a significant run-time overhead or generated an excessive amount of run-time information, or both. The advent of virtualization may open up an opportunity to mitigate the aforementioned issues: the VM interface is small compared with OS APIs and provides a clean interface to the true nondeterminism introduced by the external world.
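The record-and-replay principle described above - intercept nondeterministic inputs once, log them, and feed the log back so the second run is bit-identical - can be illustrated with a minimal Python model. The class and function names here are illustrative only and do not correspond to any real replay system's API:

```python
import random


class Recorder:
    """Record mode: pass nondeterministic values through and log them."""
    def __init__(self):
        self.log = []

    def get_input(self):
        value = random.randint(0, 99)  # stands in for any external nondeterminism
        self.log.append(value)
        return value


class Replayer:
    """Replay mode: serve the logged values in order, making execution deterministic."""
    def __init__(self, log):
        self._it = iter(log)

    def get_input(self):
        return next(self._it)


def workload(env):
    # A deterministic computation over nondeterministic inputs; in a real
    # system this would be an entire VM, with interrupts, DMA completions,
    # and timestamp reads as the logged events.
    return sum(env.get_input() for _ in range(5))


rec = Recorder()
original = workload(rec)
replayed = workload(Replayer(rec.log))
assert original == replayed  # replay reproduces the run exactly
```

The papers below differ mainly in where this interception boundary sits (architectural events at the VM interface vs. OS-level events) and in how compactly the log can be encoded.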
Task: The aim of this seminar is to examine both techniques to build deterministic replay systems and their application.
Abstract: Log-based recovery and replay systems are important for system reliability, debugging and postmortem analysis/recovery of malware attacks. These systems must incur low space and performance overhead, provide full-system replay capabilities, and be resilient against attacks. Previous approaches fail to meet these requirements: they replay only a single process, or require changes in the host and guest OS, or do not have a fully-implemented replay component. This paper studies full-system replay for uniprocessors by logging and replaying architectural events. To limit the amount of logged information, we identify architectural nondeterministic events, and encode them compactly. Here we present ExecRecorder, a full-system, VM-based, log and replay framework for post-attack analysis and recovery. ExecRecorder can replay the execution of an entire system by checkpointing the system state and logging architectural nondeterministic events, and imposes low performance overhead (less than 4% on average). In our evaluation its log files grow at about 5.4 GB/hour (arithmetic mean). Thus it is practical to log on the order of hours or days between checkpoints. It can also be integrated naturally with an IDS and a post-attack analysis tool for intrusion analysis and recovery.
Abstract: Deterministic record-replay has many useful applications, ranging from fault tolerance and forensics to reproducing and diagnosing bugs. When choosing a record-replay solution, the system administrator must choose a priori how comprehensively to record the execution and at what abstraction level to record it. Unfortunately, these choices may not match well with how the recording is eventually used. A recording may contain too little information to support the end use of replay, or it may contain more sensitive information than is allowed to be shown to the end user of replay. Similarly, fixing the abstraction level at the time of recording often leads to a semantic mismatch with the end use of replay. This paper describes how to remedy these problems by adding customizable replay stages to create special-purpose logs for the end users of replay. Our system, called Crosscut, allows replay logs to be "sliced" along time and abstraction boundaries. Using this approach, users can create slices that include only the processes, applications, or components of interest, excluding parts that handle sensitive data. Users can also retarget the abstraction level of the replay log to higher-level platforms, such as Perl or Valgrind. Execution can then be augmented with additional analysis code at replay time, without disturbing the replayed components in the slice. Crosscut thus uses replay itself to transform logs into a more efficient, secure, and usable form for replay-based applications. Our current Crosscut prototype builds on VMware Workstation's record-replay capabilities, and supports a variety of different replay environments. We show how Crosscut can create slices of only the parts of the computation of interest and thereby avoid leaking sensitive information, and we show how to retarget the abstraction level of the log to enable more convenient use during replay debugging.
Abstract: Concurrency bugs are becoming increasingly prevalent in the multi-core era. Recently, much research has focused on data races and atomicity violation bugs, which are related to low-level memory accesses. However, a large number of concurrency typestate bugs such as "invalid reads to a closed file from a different thread" are under-studied. These concurrency typestate bugs are important yet challenging to study since they are mostly relevant to high-level program semantics. This paper presents 2ndStrike, a method to manifest hidden concurrency typestate bugs in software testing. Given a state machine describing correct program behavior on certain object typestates, 2ndStrike profiles runtime events related to the typestates and thread synchronization. Based on the profiling results, 2ndStrike then identifies bug candidates, each of which is a pair of runtime events that would cause typestate violation if the event order is reversed. Finally, 2ndStrike re-executes the program with controlled thread interleaving to manifest bug candidates. We have implemented a prototype of 2ndStrike on Linux and have illustrated our idea using three types of concurrency typestate bugs, including invalid file operation, invalid pointer dereference, and invalid lock operation. We have evaluated 2ndStrike with six real world bugs (including one previously unknown bug) from three open-source server and desktop programs (i.e., MySQL, Mozilla, pbzip2). Our experimental results show that 2ndStrike can effectively and efficiently manifest all six software bugs, most of which are difficult or impossible to manifest using stress testing or active testing techniques that are based on data race/atomicity violation. Additionally, 2ndStrike reports no false positives, provides detailed bug reports for each manifested bug, and can consistently reproduce the bug after manifesting it once.
Abstract: Program debugging has almost universally been considered from the perspective of performing detailed examination of a single program target (application, operating system, etc.) at a single point in time. We present an early prototype of the Tralfamadore Debugger (TDB), a software debugger based on the Tralfamadore offline dynamic analysis engine. Unlike conventional debuggers, TDB presents a source-level debugging interface on a CPU-level execution log. The system maps processor-level events back to source-level semantics and allows developers to examine all of execution, through time, with familiar gdb-like operations.
# 8: Analyzing Kernel Security: Three Approaches for Protecting the Operating System Against Various Threats
Supervisor: Matthias Petschick
Description / task: The following three papers present an analysis of kernel security and approaches for improving it and protecting against various threats to the operating system. Your task is to read and understand them, and to present them to a reader by writing your own paper, in your own words, that explains and introduces the topic, outlines how the authors of the three papers approached it, and which conclusions they came to. Furthermore, you should give a general overview of related work and classify the papers you read in relation to it. Last but not least, you are expected to give your own view on the topic as established during your research for this seminar.
Abstract: As dynamic kernel runtime objects are a significant source of security and reliability problems in Operating Systems (OSes), having a complete and accurate understanding of kernel dynamic data layout in memory becomes crucial. In this paper, we address the problem of systemically uncovering all OS dynamic kernel runtime objects, without any prior knowledge of the OS kernel data layout in memory. We present a new hybrid approach to uncover kernel runtime objects with nearly complete coverage, high accuracy and robust results against generic pointer exploits. We have implemented a prototype of our approach and conducted an evaluation of its efficiency and effectiveness. To demonstrate our approach's potential, we have also developed three different proof-of-concept OS security tools using it.
Abstract: It is very challenging to verify the integrity of Operating System (OS) kernel data because of its complex layout. In this paper, we address the problem of systematically generating an accurate kernel data definition for OSes without any prior knowledge of the OS kernel data. This definition accurately reflects the kernel data layout by resolving the pointer-based relations ambiguities between kernel data, in order to support systemic kernel data integrity checking. We generate this definition by performing static points-to analysis on the kernel's source code. We have designed a new points-to analysis algorithm and have implemented a prototype of our system. We have performed several experiments with real-world applications and OSes to prove the scalability and effectiveness of our approach for OS security applications.
Abstract: Commodity operating system kernels isolate applications via separate memory address spaces provided by virtual memory management hardware. However, kernel memory is unified and mixes core kernel code with driver components of different provenance. Kernel-level malicious software exploits this lack of isolation between the kernel and its modules by illicitly modifying security-critical kernel data structures. In this paper, we design an access control policy and enforcement system that prevents kernel components with low trust from altering security-critical data used by the kernel to manage its own execution. Our policies are at the granularity of kernel variables and structure elements, and they can protect data structures dynamically allocated at runtime. Our hypervisor-based design uses memory page protection bits as part of its policy enforcement. The granularity difference between page-level protection and variable-level policies challenges the system’s ability to remain performant. We develop kernel data-layout partitioning and reorganization techniques to maintain kernel performance in the presence of our protections. We show that our system can prevent malicious modifications to security-critical kernel data with small overhead. By offering protection for critical kernel data structures, we can detect unknown kernel-level malware and guarantee that security utilities relying on the integrity of kernel-level state remain accurate.
# 9: Exploiting Peripherals to Attack the Host Computer Platform
Supervisor: Patrick Stewin
Description: Peripherals such as network cards, video cards, management controllers, etc. use their own execution environment to run the peripheral's firmware. This environment is separated from the host computer platform that runs system software (e.g., operating system, hypervisor) and user applications. However, security researchers have repeatedly demonstrated in recent years how to exploit such peripherals to attack the host computer. For example, peripherals can access the runtime memory of the host system to steal cryptographic keys, keystroke codes, and other sensitive data. The exploit code that is executed in the peripheral's execution environment cannot be detected by modern security software such as state-of-the-art anti-virus programs.
Task: Write a Systematization of Knowledge (SoK) paper about the topic "Exploiting Peripherals to Attack the Host Computer Platform". Use the following three papers as a STARTING POINT for your research!
Abstract: In the last few years, many different attacks against computing platforms targeting hardware or low level firmware have been published. Such attacks are generally quite hard to detect and to defend against as they target components that are out of the scope of the operating system and may not have been taken into account in the security policy enforced on the platform. In this paper, we study the case of remote attacks against network adapters. In our case study, we assume that the target adapter is running a flawed firmware that an attacker may subvert remotely by sending packets on the network to the adapter. We study possible detection techniques and their efficiency. We show that, depending on the architecture of the adapter and the interface provided by the NIC to the host operating system, building an efficient detection framework is possible. We explain the choices we made when designing such a framework that we called NAVIS and give details on our proof of concept implementation.
Abstract: Attackers constantly explore ways to camouflage illicit activities against computer platforms. Stealthy attacks are required in industrial espionage and also by criminals stealing banking credentials. Modern computers contain dedicated hardware such as network and graphics cards. Such devices implement independent execution environments but have direct memory access (DMA) to the host runtime memory. In this work we introduce DMA malware, i.e., malware executed on dedicated hardware to launch stealthy attacks against the host using DMA. DMA malware goes beyond the capability to control DMA hardware. We implemented DAGGER, a keylogger that attacks Linux and Windows platforms. Our evaluation confirms that DMA malware can efficiently attack kernel structures even if memory address randomization is in place. DMA malware is stealthy to a point where the host cannot detect its presence. We evaluate and discuss possible countermeasures and the (in)effectiveness of hardware extensions such as input/output memory management units.
Abstract: Recent research demonstrates that malware can infect peripherals' firmware in a typical x86 computer system, e.g., by exploiting vulnerabilities in the firmware itself or in the firmware update tools. Verifying the integrity of peripherals' firmware is thus an important challenge. We propose software-only attestation protocols to verify the integrity of peripherals' firmware, and show that they can detect all known software-based attacks. We implement our scheme using a Netgear GA620 network adapter in an x86 PC, and evaluate our system with known attacks.
# 10: Exploring location privacy issues in GSM
Supervisor: Max Suraev
Description: In addition to confidentiality and integrity, privacy is an important aspect of GSM security. It is an especially sensitive topic because most users keep their mobile phone in close proximity 24 hours a day. Location privacy is about protecting data regarding the user's location while using the GSM network: on the one hand, the network has to know the user's position relative to the BTS in order to communicate efficiently; on the other hand, it should be difficult for an attacker to obtain this data.
Task: Write a summary paper outlining known risks to location privacy, implemented mitigation techniques, and promising developments in that area. Use the following papers as the starting point of your research. Note: your paper has to include information on RRLP even though it is not directly mentioned in the three reference papers.
Abstract: A protocol for private proximity testing allows two mobile users communicating through an untrusted third party to test whether they are in close physical proximity without revealing any additional information about their locations. At NDSS 2011, Narayanan and others introduced the use of unpredictable sets of “location tags” to secure these schemes against attacks based on guessing another user’s location. Due to the need to perform privacy-preserving threshold set intersection, their scheme was not very efficient. We provably reduce threshold set intersection on location tags to equality testing using a de-duplication technique known as shingling. Due to the simplicity of private equality testing, our resulting scheme for location tag-based private proximity testing is several orders of magnitude more efficient than previous solutions. We also explore GSM cellular networks as a new source of location tags, and demonstrate empirically that our proposed location tag scheme has strong unpredictability and reproducibility.
Abstract: Cellular phones have become a ubiquitous means of communications with over 5 billion users worldwide in 2010, of which 80% are GSM subscribers. Due to their use of the wireless medium and their mobile nature, those phones listen to broadcast communications that could reveal their physical location to a passive adversary. In this paper, we investigate techniques to test if a user is present within a small area, or absent from a large area by simply listening on the broadcast GSM channels. With a combination of readily available hardware and open source software, we demonstrate practical location test attacks that include circumventing the temporary identifier designed to protect the identity of the end user. Finally we propose solutions that would improve the location privacy of users with low system impact.
Abstract: Mobile telephony (e.g., Global System for Mobile Communications [GSM]) is today’s most common communication solution. Due to the specific characteristics of mobile communication infrastructure, it can provide real added value to the user and various other parties. Location information and mobility patterns of subscribers contribute not only to emergency planning, general safety, and security, but are also a driving force for new commercial services. However, there is a lack of transparency in today’s mobile telephony networks regarding location disclosure. Location information is generated, collected, and processed without being noticed by subscribers. Hence, by exploiting subscriber location information, an individual’s privacy is threatened. We develop a utility-based opponent model to formalize the conflict between the additional utility of mobile telephony infrastructure being able to locate subscribers and the individual’s privacy. Based on these results, measures were developed to improve an individual’s location privacy through a user-controllable GSM software stack. To analyze and evaluate the effects of specific subscriber provider interaction, a dedicated test environment will be presented, using the example of GSM mobile telephony networks. The resulting testbed is based on real-life hardware and open-source software to create a realistic and defined environment that includes all aspects of the air interface in mobile telephony networks and thus, is capable of controlling subscriber–provider interaction in a defined and fully controlled environment.
# 11: Runtime analysis and tracing of embedded systems
Supervisor: Julian Vetter
Description: Runtime analysis and tracing are key techniques for debugging and validating the operation of embedded systems. Generating and analyzing traces helps developers identify potential performance problems. Different tools (e.g. LTTng, Feather-Trace) are available, but most of them depend on a specific platform. For new or unpopular platforms, programmers suffer from a lack of analysis tools, which has been a problem for development on embedded systems. Thus, platform-independent analysis/tracing tools that assist in the process of embedded development are necessary. Instrumenting code (tracing) to collect profiling information causes execution overhead (references [3,10,11,16,17,27] in ). This overhead makes instrumentation expensive to perform at runtime; therefore, approaches to reduce this overhead are mandatory.
Task: Show different techniques to optimize execution overhead when tracing is enabled. Give an overview of different tracing frameworks and how they fit into the process of embedded system development.
Abstract: Instrumenting code to collect profiling information can cause substantial execution overhead. This overhead makes instrumentation difficult to perform at runtime, often preventing many known offline feedback-directed optimizations from being used in online systems. This paper presents a general framework for performing instrumentation sampling to reduce the overhead of previously expensive instrumentation. The framework is simple and effective, using code-duplication and counter-based sampling to allow switching between instrumented and non-instrumented code. Our framework does not rely on any hardware or operating system support, yet provides a high frequency sample rate that is tunable, allowing the tradeoff between overhead and accuracy to be adjusted easily at runtime. Experimental results are presented to validate that our technique can collect accurate profiles (93-98% overlap with a perfect profile) with low overhead (averaging 6% total overhead with a naive implementation). A Jalapeño-specific optimization is also presented that reduces overhead further, resulting in an average total overhead of 3%.
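The core idea of the abstract above (code duplication plus a counter that decides when to take the instrumented copy) can be sketched as follows. This is a hypothetical illustration, not the paper's actual framework; the class and function names are invented for the example.

```python
# Sketch of counter-based instrumentation sampling: two duplicates of the
# same code path exist, and a decrementing counter diverts execution to
# the instrumented duplicate only once per sampling interval.

class SamplingProfiler:
    def __init__(self, interval=100):
        self.interval = interval      # take 1 sample every `interval` calls
        self.counter = interval
        self.profile = {}             # function name -> sampled call count

    def run(self, name, fast_fn, instrumented_fn, *args):
        """Dispatch to the cheap copy; take the instrumented copy
        only when the sampling counter expires."""
        self.counter -= 1
        if self.counter > 0:
            return fast_fn(*args)            # common case: no overhead
        self.counter = self.interval         # reset the sampling counter
        self.profile[name] = self.profile.get(name, 0) + 1
        return instrumented_fn(*args)        # rare case: collect profile data

def work(x):               # non-instrumented duplicate
    return x * x

def work_instrumented(x):  # instrumented duplicate (same semantics)
    return x * x

profiler = SamplingProfiler(interval=100)
results = [profiler.run("work", work, work_instrumented, i) for i in range(1000)]
print(profiler.profile["work"])  # prints 10 (one sample per 100 calls)
```

Because the fast path only pays for one decrement and one comparison, the overhead/accuracy tradeoff is tuned by changing `interval`, which mirrors the tunable sample rate the abstract describes.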
Abstract: Performance evaluation is key to many computer applications. Many techniques and profiling tools are available for measuring performance, but most of them depend on the hardware and the software on which they run. For a new platform, or a platform which is not popular, programmers usually suffer from few analysis tools, which has been a serious problem for application development on many embedded systems. Thus, a performance analysis tool with the software mechanism is quite important for developing embedded applications. This paper describes a software mechanism for analyzing program performance on a wide range of platforms via code instrumentation at the source level. We implement this mechanism in a pure software profiling toolkit, called Module tracer, which works with a public-domain tool, CIL, to carry out code instrumentation for C programs. The toolkit aids programmers in understanding the behavior of applications by generating and analyzing traces and identifying potential performance problems.
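Source-level instrumentation of the kind this abstract describes (CIL rewrites C functions to emit trace events) can be illustrated with a minimal, hypothetical sketch; here a Python decorator stands in for the rewriting step, appending entry/exit records that a post-processing tool could analyze.

```python
# Sketch of source-level trace instrumentation: every traced function
# records an "enter" and an "exit" event into a global trace buffer.
import time
from functools import wraps

TRACE = []  # collected trace records: (event, function name, timestamp)

def traced(fn):
    """Wrap a function so each entry and exit is logged with a timestamp,
    mimicking what a source-level instrumentation pass would insert."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        TRACE.append(("enter", fn.__name__, time.perf_counter()))
        try:
            return fn(*args, **kwargs)
        finally:
            TRACE.append(("exit", fn.__name__, time.perf_counter()))
    return wrapper

@traced
def compute(n):
    return sum(range(n))

compute(10)
# TRACE now holds one matching enter/exit pair; pairing the timestamps
# yields per-function latency and call counts.
```

The same enter/exit pairing is what trace analyzers use to reconstruct call behavior, which is the "generating and analyzing traces" step the abstract refers to.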
Abstract: Execution tracing is one of the key techniques for analyzing and validating the operation of embedded products. After reviewing several approaches to the runtime behavior analysis of embedded systems, we present the experience gained in developing a range of high-bandwidth communications devices combining multiple wireless and wired link technologies. In particular, all case studies are based on actual product development.
Additional Information / Extras
Prof. Dr. Jean-Pierre Seifert
+49 - 30 - 8353 58 681
Postal Address: Technische Universität Berlin - An-Institut Telekom Innovation Laboratories