with André van Cleeff, Trajce Dimkov, Pieter Hartel and Roel Wieringa
The security perimeter, once simply the fence around an organisation's premises, is becoming increasingly flexible and adaptable to the environment and the circumstances. We call this process re-perimeterisation (ReP). The effects of ReP are felt in the digital domain (where data moves from organisation to organisation through networks), the social domain (where one individual may play a variety of roles in cooperating organisations) and the physical domain (where appliances such as mobile phones and laptops move around). ReP brings new security risks, because it is difficult to keep the domains aligned. For example, when a laptop equipped with a motion sensor is stolen (social domain), the sensor triggers an alarm (physical domain), which in turn selects a security policy that blocks access to all sensitive data (digital domain). By making the security perimeter explicit in business processes, security policies and security mechanisms, we intend to foster alignment of the three domains, and thereby mitigate the risks of ReP.
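The alarm scenario above can be sketched as a simple event-to-policy mapping across domains. This is a hypothetical illustration only: the class, policy names and resource labels are invented for the example, not part of any actual system described in this project.

```python
# Hypothetical sketch: a physical-domain event (a motion alarm) selects a
# digital-domain policy, making the cross-domain alignment explicit.
POLICIES = {
    "normal":      {"sensitive_data": "allow"},
    "theft_alarm": {"sensitive_data": "deny"},
}

class Laptop:
    def __init__(self):
        self.policy = POLICIES["normal"]

    def on_motion_alarm(self):
        # A physical event (alarm) triggers a digital policy change.
        self.policy = POLICIES["theft_alarm"]

    def can_access(self, resource):
        return self.policy.get(resource) == "allow"

laptop = Laptop()
laptop.on_motion_alarm()
print(laptop.can_access("sensitive_data"))  # False: access blocked after alarm
```

The point of the sketch is that the policy switch is an explicit, testable link between the physical and digital domains, rather than an implicit assumption.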
The precautionary principle, which states that parties should refrain from actions in the face of scientific uncertainty about serious or irreversible harm, has successfully been applied to technologies that potentially affect human health or the environment. The effects of technology on the sustainability of society, however, are not discussed from such a point of view. Yet uncertainty about how humans will use a new technology is even greater than uncertainty about its environmental effects. This is especially striking in examples from information technology, such as biometric databases and public transport administration systems. In these examples, huge amounts of information on citizens could easily be used for purposes other than those intended, and thereby affect the foundations of society in terms of privacy and autonomy. Still, these systems seem to be introduced without clear normative guidance on what is acceptable and what is not. We therefore aim to develop a precautionary principle for the social effects of information technology. Whereas the traditional precautionary principle aims at improving the safety of technology in order to protect health and the environment, the new principle would aim at improving the information security of technology in order to protect society. Intuitively, information security means that not every person should be able to access any information at will. The topic of information security has received considerable attention in the computer science community. Still, this task is not trivial, since it is unclear a) how definitions of information security can be linked to social sustainability, b) how a precautionary principle can be formulated with respect to human behaviour without sacrificing the idea of human autonomy, and c) how such a principle could be used in the design practices of information systems, especially requirements engineering.
with André van Cleeff and Roel Wieringa
In recent years, “cloud computing” has become a major theme in computer science and information security. Essentially, it concerns delivering information technology as a service, enabling customers to rent software, computing power and storage. We investigate the security and privacy issues that the emergence of cloud computing as a paradigm raises, both from a computer science and from a philosophical perspective. We study the capabilities and limitations of existing tools, as well as the necessity of simulating physical constraints in virtualised infrastructures.
with Trajce Dimkov and Pieter Hartel
Traditional information security modelling approaches often focus on containment of assets within boundaries. Due to a process known as de-perimeterisation, such boundaries, for example clearly separated company networks, are disappearing. We investigate how to model and test the interactions between the digital, physical and social aspects of security in such an environment. We provide algorithms for threat finding, as well as methodologies for penetration testing.
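One way to picture threat finding across the three domains is as path search over a graph whose nodes mix physical locations, digital assets and social actors. The sketch below is a hypothetical illustration: the node names and edges are invented for the example, and plain breadth-first search stands in for the project's actual algorithms.

```python
from collections import deque

# Hypothetical access graph mixing physical (rooms), social (people)
# and digital (servers, data) nodes; edges are possible attacker moves.
edges = {
    "street":      ["lobby"],            # walk in (physical)
    "lobby":       ["employee"],         # social engineering (social)
    "employee":    ["server_room"],      # tailgating past the badge reader
    "server_room": ["file_server"],      # physical access to the machine
    "file_server": ["customer_data"],    # read data from disk (digital)
}

def find_threat_path(start, asset):
    """Breadth-first search for a shortest attack path to the asset."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in edges.get(path[-1], []):
            if nxt in seen:
                continue
            if nxt == asset:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # asset not reachable from this starting point

print(find_threat_path("street", "customer_data"))
# ['street', 'lobby', 'employee', 'server_room', 'file_server', 'customer_data']
```

A path returned by such a search crosses domain boundaries freely, which is exactly why purely digital containment models miss these threats.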
with Qiang Tang
According to the Jericho Forum, the trend in information security is to move the security perimeter as close to the data as possible. In this context, we suggest the idea of data-based access control, where decryption of data is made possible by knowing enough of the data. Trust is thus based on what someone already knows. A specific problem is defined as follows: given n pieces of data, an agent is able to recover all n items once she knows k of them. The problem is similar to both secure sketches and secret sharing, and we show that both can be used as a basis for constructions. Examples of possible applications are granting access without credentials, recovering forgotten passwords and sharing personal data in social networks.
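Since secret sharing is named as one basis for constructions, a minimal sketch of a (k, n) threshold scheme in the style of Shamir's secret sharing may help illustrate the "know k, recover all" idea. The field size, secret and coefficients below are chosen only for the demo, and a real construction would derive the shares from the data items themselves.

```python
# Minimal Shamir-style (k, n) threshold sharing over a prime field:
# any k of the n shares suffice to reconstruct the secret.
P = 2**31 - 1  # a Mersenne prime, large enough for this demo

def make_shares(secret, k, n, coeffs):
    # Polynomial f(x) = secret + c1*x + ... + c_{k-1}*x^{k-1} (mod P);
    # the secret is the constant term, hidden unless k points are known.
    assert len(coeffs) == k - 1
    poly = [secret] + list(coeffs)
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(42, k=3, n=5, coeffs=[17, 29])
print(reconstruct(shares[:3]))  # any 3 of the 5 shares yield 42
```

With fewer than k shares the interpolation is underdetermined, so every candidate secret remains equally plausible, which is the information-theoretic guarantee that makes the scheme attractive as a building block here.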
Artificial intelligence and information security share a common problem. In AI, an expert system needs to be able to justify and explain a decision to the user. In information security, experts need to be able to explain to the public why a system is secure. In both cases, the goal of explanation is to acquire or maintain the users' trust. In this project, we investigate the relation between explanation and trust in the context of computing science. The analysis draws on literature study and conceptual analysis. We apply the resulting framework to both expert systems and information security, and show its benefit for both fields by means of examples. The main focus is on case-based reasoning systems (AI) and electronic voting systems (security).
What does it mean for an information system to be "secure"? And what does it mean that an information system is "trustworthy" or "publicly acceptable"? The philosophy of information security is a field that has so far been largely ignored by the computing and philosophy community. We try to get this field started by analysing the concepts used in scientific and public discourse, and by providing a phenomenological framework for thinking about issues of risk, trust and acceptability.
I maintained a website on electronic voting in the Netherlands.