GRSRD 2012 Accepted Papers with Abstracts
A formal analysis of the Norwegian e-voting protocol
Norway used e-voting in its last political election in September 2011, with more than 25,000 voters choosing the e-voting option. The underlying protocol is a new protocol designed by the ERGO group, involving several actors (a ballot box, but also a receipt generator, a decryption service, and an auditor).
Of course, trusting the correctness and security of e-voting protocols is crucial in that context. Formal definitions of properties such as privacy, coercion-resistance or verifiability have been recently proposed, based on equivalence properties.
We propose a formal analysis of the protocol used in Norway, w.r.t. privacy, considering several corruption scenarios. Part of this study was conducted using the ProVerif tool, on a simplified model.
Universally Composable Key-Management
We present the first key-management functionality in the Universal
Composability (UC) framework by Canetti. It allows the enforcement
of a wide range of security policies and is highly extensible. We
illustrate its use by proving an implementation of a Security API
secure with respect to arbitrary key-usage operations, and we
explore a proof technique that allows cryptographic keys to be
stored externally, a novelty in the UC framework.
DANAK: Finding the Odd!
This paper introduces DANAK, a tool for detecting anomalies in Netflow records based on spatio-temporal aggregation. Spatially aggregated Netflow records are fed into a new kernel function that analyzes them with respect to context and quantitative evolution. To enhance the analysis of sparse or missing data in time series, phase space embedding (PSE) is applied.
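Phase space embedding for a time series is typically realized as time-delay embedding. The sketch below, with illustrative dimension and delay parameters that are assumptions rather than DANAK's actual settings, shows the basic idea:

```python
# A minimal sketch of phase space embedding (time-delay embedding):
# a scalar time series is mapped to points in a higher-dimensional
# phase space by grouping delayed samples. The dimension and delay
# values are illustrative assumptions, not DANAK's parameters.

def phase_space_embed(series, dimension=3, delay=1):
    """Map a scalar time series to `dimension`-dimensional phase-space points."""
    n = len(series) - (dimension - 1) * delay
    return [
        tuple(series[i + j * delay] for j in range(dimension))
        for i in range(n)
    ]

points = phase_space_embed([1, 2, 3, 4, 5], dimension=3, delay=1)
# points: [(1, 2, 3), (2, 3, 4), (3, 4, 5)]
```

Each embedded point captures local temporal context, which is what makes sparse or missing regions of the series easier to analyze.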
Security proof with dishonest keys
Symbolic and computational models are the two families of models for rigorously analysing security protocols. Symbolic models are abstract but offer a high level of automation, while computational models are more precise but security proofs can be tedious. Since the seminal work of Abadi and Rogaway, a new direction of research has aimed at reconciling the two views, and many soundness results establish that symbolic models are actually sound w.r.t. computational models.
This is, however, not true for the prominent case of encryption. Indeed, all existing soundness results assume that the adversary uses only honestly generated keys. While this assumption is acceptable in the case of asymmetric encryption, it is clearly unrealistic for symmetric encryption. In this paper, we provide several examples of attacks that do not show up in the classical Dolev-Yao model and that break neither the IND-CPA nor the INT-CTXT properties of the encryption scheme.
Our main contribution is to show the first soundness result for symmetric encryption and arbitrary adversaries. We consider arbitrary indistinguishability properties and an unbounded number of sessions.
This result relies on an extension of the symbolic model, while keeping standard security assumptions for the encryption scheme: IND-CPA and INT-CTXT.
Verifiable Security of Merkle-Damgard: towards the Formal Verification of SHA-3 Finalists
Hash functions are used pervasively as building blocks to realize most
cryptographic functionalities. Since weaknesses in hash functions may
imply vulnerabilities in the constructions that build upon them,
ensuring their security is essential. Unfortunately, many widely used
hash functions, including SHA-1 and MD5, are subject to practical
attacks. The search for a secure replacement is one of the most active
topics in the field of cryptography, chiefly motivated by the ongoing
NIST SHA-3 competition. All finalists of this competition are based on
variants of Merkle-Damgard, an iterated construction that
underlies most existing hash functions. By extending the EasyCrypt
framework, we build the first rigorous machine-checked and
independently-verifiable proofs of collision-resistance and
indifferentiability of Merkle-Damgard. Our results make a
significant step towards verifying the security of the finalists of
the NIST SHA-3 competition.
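The iterated construction the abstract refers to can be sketched in a few lines. The compression function below is a stand-in that borrows SHA-256 as a fixed-input-length primitive purely for illustration; it is not any finalist's actual design, and the block size and IV are assumed parameters.

```python
import hashlib  # used only to borrow a compression-like primitive

# A minimal sketch of the Merkle-Damgard iterated construction:
# pad the message, then fold it block by block through a
# fixed-input-length compression function, starting from a fixed IV.

BLOCK_SIZE = 64    # bytes per message block (assumed parameter)
IV = b"\x00" * 32  # fixed initialization vector (assumed)

def compress(chaining_value: bytes, block: bytes) -> bytes:
    # Stand-in compression function, not a real finalist's primitive.
    return hashlib.sha256(chaining_value + block).digest()

def md_pad(message: bytes) -> bytes:
    # Merkle-Damgard strengthening: append 0x80, zeros, then the
    # 8-byte message length, so the total is a multiple of BLOCK_SIZE.
    length = len(message).to_bytes(8, "big")
    padded = message + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % BLOCK_SIZE)
    return padded + length

def merkle_damgard(message: bytes) -> bytes:
    h = IV
    padded = md_pad(message)
    for i in range(0, len(padded), BLOCK_SIZE):
        h = compress(h, padded[i:i + BLOCK_SIZE])
    return h
```

The inclusion of the message length in the padding (Merkle-Damgard strengthening) is what the classical collision-resistance preservation argument relies on.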
Differential Privacy by Typing for Security Protocols
Differential privacy is widely recognized as the state-of-the-art framework to
release statistical information about the content of a database
without disclosing sensitive information. The variety of
database queries and enforcement mechanisms has sparked the development
of a number of formal and mechanized analysis techniques to streamline the
certification of differential privacy guarantees. Despite this
exciting body of work, little attention has
been given to the formal certification of differential privacy guarantees for
distributed databases, in which the usage of cryptographic protocols
is indispensable to perform computations on the whole dataset.
In this work, we explore the interplay between differential privacy
and cryptographic protocols, taking the first steps to reconcile the formal analysis
of cryptographic protocols with the growing body of work
on the formal certification of differential privacy guarantees. In
particular, we present the first symbolic definition of differential privacy
for distributed databases that takes into account Dolev-Yao intruders that
can overhear, intercept, and synthesize the cryptographic messages
exchanged on the network. In contrast to the existing symbolic
definitions of secrecy for cryptographic protocols, our definition is
quantitative, in that it provides a bound on the amount of sensitive information that is
disclosed by running the protocol, and yet amenable to automated
verification. We have developed a linear, distance-aware type system to
statically and automatically enforce differential privacy in
cryptographic protocols.
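The quantitative guarantee being certified is standard epsilon-differential privacy; its most common instantiation is the Laplace mechanism. The sketch below is a generic illustration of that mechanism, with an assumed counting query and epsilon value, not the paper's type system or protocol model.

```python
import math
import random

# A minimal sketch of the Laplace mechanism for epsilon-differential
# privacy: add noise scaled to the query's sensitivity so that one
# individual's record changes the output distribution only slightly.

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from Laplace(0, scale).
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(max(1e-12, 1.0 - 2.0 * abs(u)))

def private_count(records, predicate, epsilon=0.5):
    # A counting query has sensitivity 1 (one record changes the count
    # by at most 1), so Laplace noise of scale 1/epsilon suffices for
    # epsilon-differential privacy.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

In the distributed setting the abstract considers, the added difficulty is that this noisy computation runs as a cryptographic protocol over the network, where a Dolev-Yao intruder can observe and manipulate messages.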
Automated Synthesis of Privacy-Preserving Distributed Applications
We introduce a framework for the automated synthesis of
security-sensitive distributed applications. The central idea is to
provide the programmer with a high-level declarative language for
specifying the system and the intended security properties, abstracting
away from any cryptographic details. A compiler takes as input such
high-level specifications and automatically produces the corresponding
cryptographic implementations (i.e., cryptographic library,
cryptographic protocols, and F# source code).
In this work, we focus on two important, and seemingly contradictory,
security properties, namely, authorization and privacy. On the one hand,
the access to sensitive resources should be granted only to authorized
users; on the other hand, these users would like to share as little
personal information as possible with third parties. These opposing
goals make it challenging to enforce privacy-aware authorization
policies in a distributed setting.
The high-level declarative language builds on Evidential DKAL, a logic
for authorization policies of decentralized systems, which we extend to
reason about privacy policies. Specifically, the traditional says
modality from authorization logics is accompanied by existential
quantification in order to express the secrecy of sensitive information.
The cryptographic realization of privacy-aware authorization policies is
obtained by a powerful combination of digital signatures and
zero-knowledge proofs. This approach is general and can be seen as a
privacy-enabling plugin for existing authorization languages and
proof-carrying authorization architectures.
We proved that the implementations output by the compiler enforce the
intended authorization policies and we conducted an experimental
evaluation to demonstrate the feasibility of our approach.
ObliviAd: Provably Secure and Practical Online Behavioral Advertising
Online behavioral advertising (OBA) involves the tracking of web users'
online activities in order to deliver tailored advertisements, which has
become a rapidly increasing source of revenue for a number of web
services, typically involving third-party data analytic firms. This
practice raises significant privacy concerns among users and privacy
advocates alike. Therefore, the task of designing OBA systems that do
not reveal user profiles to third parties has been receiving growing
interest from the research community. Nevertheless, existing solutions
are not ideal for privacy preserving OBA: some of them do not provide
adequate privacy to users or adequate targeting information to brokers,
while others require trusted third parties that are difficult to
realize in practice.
In this paper, we propose a provably secure architecture for privacy
preserving online behavioral advertising (OBA). The distinguishing
features of our approach are the usage of secure hardware-based private
information retrieval for distributing advertisements and high-latency
mixing of electronic coins for billing advertisers without disclosing
any information about client profiles to brokers. ObliviAd does not
assume any trusted party and provides an economical alternative to
brokers to preserve the privacy of users without hampering the precision
of ad selection.
Clash Attacks on the Verifiability of E-Voting Systems
Verifiability is a central property of modern e-voting
systems. Intuitively, verifiability means that voters can
check that their votes were actually counted and that the
published result of the election is correct, even if the
voting machine/authorities are (partially) untrusted.
In this paper, we raise awareness of a simple attack,
which we call clash attack, on the verifiability of
e-voting systems. The main idea behind this attack is
that voting machines manage to provide different voters
with the same receipt. As a result, the bulletin board
can safely replace ballots by new ballots, and by this,
manipulate the election without being detected.
This attack does not seem to have attracted much
attention in the literature. Even though the attack is
quite simple, we show that, under reasonable trust
assumptions, it applies to several e-voting systems which
have been designed to provide verifiability. In
particular, we show that it applies to the prominent
ThreeBallot and VAV voting systems as well as to two
e-voting systems that have been deployed in real
elections: the Wombat Voting system and a variant of the
Helios voting system.
We discuss countermeasures for each of these systems and
for (various variants of) Helios provide a formal
analysis based on a rigorous definition of
verifiability. More precisely, our analysis of Helios is
with respect to the more general notion of accountability,
which is often overlooked in the area of e-voting.
A Framework for the Cryptographic Verification of Java-like Programs
We consider the problem of establishing cryptographic
guarantees---in particular, computational
indistinguishability---for Java or Java-like programs
that use cryptography. For this purpose, we propose a
general framework that enables existing program analysis
tools that can check (standard) noninterference
properties of Java programs to establish cryptographic
security guarantees, even if the tools a priori cannot
deal with cryptography. The approach that we take is new
and combines techniques from program analysis and
simulation-based security. Our framework is stated and
proved for a Java-like language that comprises a rich
fragment of Java. The general idea of our approach
should, however, be applicable also to other practical
programming languages.
As a proof of concept, we use an automatic program
analysis tool for checking noninterference properties of
Java programs, namely the tool Joana, in order to
establish computational indistinguishability for a Java
program that involves clients sending encrypted messages
over a network, controlled by an active adversary, to a server.
About ByteCode Static Analysis and Manipulation for Android Security
Android-based devices are becoming widespread. Since those devices contain personal and confidential data, the security model of Android has been analyzed extensively. In this paper, we show how bytecode static analysis and manipulation can help secure Android applications. First, we present the permission gap problem, i.e., when an Android application declares more permissions than required. Second, we demonstrate that it is feasible, in a reasonable amount of time, to apply static analysis approaches directly on a smartphone. Finally, we explain how what we have learnt from the two previous points could be used to develop new approaches, working directly on the smartphone, to detect malware.
Exploring Query Dependency in LBS Privacy Protection
In this paper, we study a new type of context information, namely query dependency, and its influence on a user's query privacy protection in location-based services.
Differentially Private Smart Metering with Battery Recharging
The energy industry has recently begun using smart meters to take fine-grained readings of energy usage. These smart meters enable flexible time-of-use billing, forecasting, and demand response, but they also raise serious user privacy concerns. We propose a novel technique for provably hiding sensitive power consumption information in the overall power consumption stream. Our technique relies on a rechargeable battery that is connected to the household's power supply. This battery is used to modify the household's power consumption by adding or subtracting noise (i.e., increasing or decreasing power consumption), in order to establish strong privacy guarantees in the sense of differential privacy. To achieve these privacy guarantees in realistic settings, we first investigate the influence of, and the interplay between, capacity and throughput bounds that batteries face in reality. We then propose an integrated method based on noise cascading that allows for recharging the battery on-the-fly so that differential privacy is retained, while adhering to capacity and throughput constraints, and while keeping the additional consumption of energy induced by our technique to a minimum.
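The core mechanism described above is that the metered reading equals actual consumption plus noise, with the achievable noise clamped by the battery's physical limits. The sketch below illustrates only the clamping, with assumed capacity and throughput values; it is not the paper's noise-cascading method.

```python
# A sketch of battery-mediated consumption masking: positive noise
# charges the battery (the meter reads high), negative noise
# discharges it (the meter reads low), and both effects are bounded
# by throughput and capacity. All constants are illustrative
# assumptions, not the paper's actual parameters.

CAPACITY = 2.0    # kWh the battery can store (assumed)
THROUGHPUT = 0.5  # max kWh charged/discharged per interval (assumed)

def metered_reading(actual, charge, noise):
    """Return (reading seen by the meter, new battery charge)."""
    # Clamp the desired noise to what the battery can physically do.
    delta = max(-THROUGHPUT, min(THROUGHPUT, noise))
    delta = max(-charge, min(CAPACITY - charge, delta))
    return actual + delta, charge + delta
```

These clamping constraints are exactly why the unbounded-noise analysis of standard differential privacy does not apply directly, motivating the paper's recharging and noise-cascading technique.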
Provably Secure and Practical Onion Routing
The onion routing network Tor is undoubtedly the most widely employed technology for anonymous web access. Although the underlying onion routing (OR) protocol appears satisfactory, a comprehensive analysis of its security guarantees is still lacking. This has also resulted in a significant gap between research work on OR protocols and existing OR anonymity analyses. In this work, we address both issues with onion routing by defining a provably secure OR protocol, which is practical for deployment in the next generation Tor network.
We start off by presenting a security definition (an ideal functionality) for the OR methodology in the universal composability (UC) framework. We then determine the exact security properties required for OR cryptographic primitives (onion construction and processing algorithms, and a key exchange protocol) to achieve a provably secure OR protocol. We show that the currently deployed onion algorithms with slightly strengthened integrity properties can be used in a provably secure OR construction. In the process, we identify the concept of predictably malleable symmetric encryptions, which might be of independent interest. On the other hand, we find the currently deployed key exchange protocol to be inefficient and difficult to analyze and instead show that a recent, significantly more efficient, key exchange protocol provides a provably secure OR construction.
In addition, our definition greatly simplifies the process of analyzing OR anonymity metrics. We define and prove forward secrecy for the OR protocol, and realize our (white-box) OR definition from an OR black-box model assumed in a recent anonymity analysis. This realization not only makes the analysis formally applicable to the OR protocol but also identifies the exact adversary and network assumptions made by the black-box model.
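The onion construction and per-hop processing mentioned above can be illustrated with a toy sketch. The keystream cipher below is a SHA-256-based stand-in chosen for illustration; it is not Tor's actual relay cryptography and, being plain XOR, has none of the predictable-malleability or integrity properties the paper requires.

```python
import hashlib

# A toy sketch of onion layering: the sender wraps the payload in one
# encryption layer per hop, and each hop peels exactly one layer.

def keystream(key: bytes, length: int) -> bytes:
    # Deterministic keystream from a counter-mode-style hash chain
    # (a stand-in assumption, not a real stream cipher).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(key: bytes, data: bytes) -> bytes:
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def build_onion(payload: bytes, hop_keys):
    # Encrypt innermost-first so the first hop peels the outermost layer.
    onion = payload
    for key in reversed(hop_keys):
        onion = xor_layer(key, onion)
    return onion

def peel(onion: bytes, key: bytes) -> bytes:
    # Each hop removes the layer corresponding to its key.
    return xor_layer(key, onion)
```

The security analysis in the paper concerns exactly what this toy version lacks: which malleability and integrity properties the onion algorithms must satisfy for the composed protocol to realize the ideal OR functionality.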
Rational Cooperation in Security Protocols
When checking whether an objective in a security protocol is satisfied, traditionally no attention is paid to the incentives of the agents and to what rational outcomes can be expected based on these incentives. Instead, it is assumed that in order to check whether an objective is satisfied, the group is required to have a joint strategy that achieves this objective. We show that this requirement is not sufficient when the incentives of the agents, and the rationality assumptions on these agents, are also taken into consideration: even if there exists a joint strategy that satisfies the objective of the protocol, it is possible that the group of agents has no incentive to play this joint strategy, or that the group cannot coordinate on a joint strategy to play.