Doctor of Philosophy (DPhil) in Computer Science and Artificial Intelligence, 2010
University of Sussex
Bachelor of Engineering (BEng) in Computer Systems Engineering, 2004
University of Sussex
Data in the cloud has always been a point of attraction for cyber attackers, and healthcare data in the cloud has become their new target. Attacks on such healthcare data can have annihilating consequences for healthcare organisations. Decentralising cloud data can minimise the effect of these attacks. Decentralisation, enabled by peer-to-peer (P2P) networks, makes it possible to store and run computations on sensitive private healthcare data in the cloud. By leveraging this decentralised or distributed property, blockchain technology ensures accountability and integrity. Various solutions have been proposed to limit the effect of attacks using a decentralised approach, but these solutions have failed to ensure the overall privacy of patient-centric systems. In this paper, we present a patient-centric healthcare data management system that uses blockchain technology as storage, which helps attain privacy. Cryptographic functions are used to encrypt patients’ data and to ensure pseudonymity. We analyse the data processing procedures and also the cost effectiveness of the smart contracts used in our system.
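The abstract does not specify which cryptographic functions the system uses; as a generic illustration (not the authors’ construction), a keyed hash can derive stable pseudonyms for patient identifiers, so records can be linked without exposing real identities:

```python
import hashlib
import hmac
import os

def pseudonym(patient_id: str, secret: bytes) -> str:
    # HMAC-SHA256: deterministic per patient, but infeasible to invert
    # or brute-force without knowledge of the secret key.
    return hmac.new(secret, patient_id.encode(), hashlib.sha256).hexdigest()

# In practice the key would be generated once and stored securely.
secret = os.urandom(32)

# The same patient always maps to the same pseudonym...
assert pseudonym("patient-001", secret) == pseudonym("patient-001", secret)
# ...and different patients map to different pseudonyms.
assert pseudonym("patient-001", secret) != pseudonym("patient-002", secret)
```

The identifier names and key handling here are hypothetical choices for illustration only.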
In the Naïve Bayes classification problem using a vertically partitioned dataset, the conventional scheme to preserve the privacy of each partition uses a secure scalar product and is based on the assumption that the data is synchronised amongst common unique identities. In this paper, we attempt to discard this assumption in order to develop a more efficient and secure scheme to perform classification with minimal disclosure of private data. Our proposed scheme is based on the work by Vaidya and Clifton, which uses commutative encryption to perform secure set intersection so that the parties with access to the individual partitions have no knowledge of the intersection. The evaluations presented in this paper are based on experimental results, which show that our proposed protocol scales well with large sparse datasets.
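The commutative-encryption idea behind secure set intersection can be sketched with SRA-style modular exponentiation: since E_a(E_b(x)) = E_b(E_a(x)), two parties can each encrypt the other’s already-encrypted items and compare only doubly-encrypted values. This is an illustrative toy sketch (prime size, hashing and party names are my own choices, not the protocol parameters of the paper):

```python
import hashlib
import math
import random

P = 2**61 - 1  # a Mersenne prime; the multiplicative group modulo P

def keygen(rng):
    # A commutative-encryption key must be coprime to P - 1
    # so that exponentiation is invertible.
    while True:
        k = rng.randrange(3, P - 1)
        if math.gcd(k, P - 1) == 1:
            return k

def h(item: str) -> int:
    # Hash each set element into the group before encrypting.
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

def enc(k: int, x: int) -> int:
    # E_k(x) = x^k mod P; commutative: enc(a, enc(b, x)) == enc(b, enc(a, x))
    return pow(x, k, P)

rng = random.Random(42)
ka, kb = keygen(rng), keygen(rng)          # Alice's and Bob's private keys

alice = {"id1", "id2", "id3"}
bob = {"id2", "id3", "id4"}

# Round 1: each party encrypts its own hashed items.
a1 = {enc(ka, h(x)) for x in alice}
b1 = {enc(kb, h(x)) for x in bob}
# Round 2: each party encrypts the *other* party's ciphertexts.
a2 = {enc(kb, c) for c in a1}              # performed by Bob
b2 = {enc(ka, c) for c in b1}              # performed by Alice
# Doubly-encrypted values match exactly when the underlying items match,
# so the intersection size is revealed without revealing the items.
print(len(a2 & b2))  # -> 2
```

Neither party ever sees the other’s plaintext items, only ciphertexts under keys it does not hold.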
Farmers’ markets are the source of a rich and pleasurable consumption experience. In this chapter, we report on our attempts to support and augment these experiences through the deployment of pervasive advertising. We describe the ethnographic approach we used to delineate the areas of enjoyment and pleasure, how narratives are woven around and through the stalls and their products, and how trust is formed and maintained between the stallholders and the customers. We show how this understanding can be applied in the design of supporting advertising applications. We then evaluate the applications using the same ethnographic approach, uncovering problems which would not have been visible with other evaluation techniques.
Rating-based collaborative filtering (CF) predicts the rating that a user will give to an item, derived from the ratings of other items given by other users. Such CF schemes utilise either user neighbourhoods (i.e. user-based CF) or item neighbourhoods (i.e. item-based CF). Lemire and MacLachlan proposed three related schemes for an item-based CF with predictors of the form f(x) = x + b, hence the name “slope one”. Slope One predictors have been shown to be accurate on large datasets. They also have several other desirable properties such as being updatable on the fly, efficient to compute, and working even with sparse input. In this paper, we present a privacy-preserving item-based CF scheme through the use of an additively homomorphic public-key cryptosystem on the weighted Slope One predictor, and show its applicability on both horizontal and vertical partitions. We present an evaluation of our proposed scheme in terms of communication and computation complexity.
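The plaintext weighted Slope One predictor underlying the scheme can be sketched in a few lines: for each item pair, store the average rating deviation and the number of co-raters, then weight each deviation-based prediction by that count. This sketch shows the standard Lemire–MacLachlan predictor, not the paper’s encrypted variant; the data and names are illustrative:

```python
from collections import defaultdict

def train(ratings):
    """ratings: {user: {item: rating}} -> (deviation sums, co-rater counts)."""
    diff = defaultdict(float)  # sum over co-raters of (r_j - r_i)
    freq = defaultdict(int)    # number of users who rated both j and i
    for items in ratings.values():
        for j, rj in items.items():
            for i, ri in items.items():
                if i != j:
                    diff[(j, i)] += rj - ri
                    freq[(j, i)] += 1
    return diff, freq

def predict(user_ratings, target, diff, freq):
    # Weighted Slope One: average (dev_{target,i} + r_i),
    # weighting each term by the number of co-raters.
    num = den = 0.0
    for i, ri in user_ratings.items():
        if (target, i) in freq:
            c = freq[(target, i)]
            num += (diff[(target, i)] / c + ri) * c
            den += c
    return num / den if den else None

ratings = {
    "alice": {"A": 5, "B": 3, "C": 2},
    "bob":   {"A": 3, "B": 4},
    "carol": {"B": 2, "C": 5},
}
print(predict(ratings["bob"], "C", *train(ratings)))  # -> 10/3 ≈ 3.33
```

In the privacy-preserving setting, the `diff` and `freq` aggregates would be computed under additively homomorphic encryption rather than in the clear.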
The open architecture of the Internet has enabled its massive growth and success by facilitating easy connectivity between hosts. At the same time, the Internet has also opened itself up to abuse, e.g. arising out of unsolicited communication, both intentional and unintentional. It remains an open question how servers can best protect themselves from malicious clients whilst offering good service to innocent clients. There has been research on behavioural profiling and reputation of clients, mostly at the network level and also for email as an application, to detect malicious clients. However, this area continues to pose open research challenges. This thesis is motivated by the need for a generalised framework capable of aiding efficient detection of malicious clients while being able to reward clients with behaviour profiles conforming to the acceptable use and other relevant policies. The main contribution of this thesis is a novel, generalised, context-aware, policy-independent, privacy-preserving framework for developing and sharing client reputation based on behavioural history. The framework, augmenting existing protocols, allows policies to be fitted in at various stages, thus keeping itself open and flexible to implementation. Locally recorded behavioural histories of clients with known identities are translated into client reputations, which are then shared globally. The reputations enable privacy for clients by not exposing the details of their behaviour during interactions with the servers. The local and globally shared reputations facilitate servers in selecting service levels, including restricting access to malicious clients. We present results and analyses of simulations, with synthetic data and some proposed example policies, of client-server interactions and of attacks on our model. Suggestions presented for possible future extensions are drawn from our experiences with simulation.
Due to the open nature of the Internet, abuse of its services has become widespread. Recent research has been conducted on behavioural profiling and reputations, mostly at the network level, to aid detection of malicious clients. We build on this research and propose a novel generalised framework for developing and globally sharing client reputations formed from behavioural histories. Promising initial simulations show a high level of success for our framework.
Trust models have been used widely in the literature in a number of different contexts. We examine an emotive scenario where trust would be extremely useful: online dating. We explore the use of a semiring-based trust model with online dating in a social network. Introductions are facilitated by friend-of-a-friend connections in the network in a manner similar to some real world scenarios. We show how ethical and human factors, which are usually not considered, cause problems. We hope that as a result the trust community can work together to formulate new approaches to trust for emotive applications that accommodate ethical and human factors which are difficult to quantify.
We report on designing augmented reality (AR) applications to support the practices of going shopping, using an accompanied shopping and reflection technique to assess the key points of engagement among shoppers and producers at a farmers’ market. Our goal was to deploy innovative mobile technology in a low-tech context so that it supported everyday behaviour. The paper documents how a short research intervention was decisive in shaping the applications designed for the AR tool and explores how stories told as part of the market and in interview were used to help organise our insights.
Farmers’ markets are the source of a rich and pleasurable consumption experience. In this paper, we explore how to support and augment these experiences through the deployment of a pervasive computing application. We describe the ethnographic approach we used to delineate the areas of enjoyment and pleasure, how narratives are woven around and through the stalls and their products, and how trust is formed and maintained between the stallholders and the customers. We show how this understanding can be applied in the design of supporting applications. We then evaluate the applications using the same ethnographic approach, uncovering problems which would not have been visible with other evaluation techniques.
In client-server interaction scenarios over a network the problem of unsolicited network transactions is often encountered. In this paper, we propose a reputation model based on the behavioural history of long-lived network client identities as a solution to this problem. The reputations of clients are shared between trusted servers anonymously through global reputation analysers. Shared global reputations and local reputations help servers to infer local opinions of clients and control service levels in attempts to reduce unsolicited network transactions.
Facilitating navigation through commercial spaces by third party systems is a likely step in pervasive computing. For these applications to fully engage people they must build trust relationships in a natural manner. We hypothesize that the use of an explicit trust model in the design of the application would improve the rate at which trust is generated. To investigate this hypothesis, we have taken as a case study the design of a shopping guide for a local trading association. We have created an explicit trust model and incorporated this into our design. We have evaluated both our model and our application. The results of this confirmed our hypothesis and provided additional insight into how to model trust in the design of applications.
In this paper, we discuss the current situation with respect to simulation usage in P2P research, testing the available P2P simulators against a proposed set of requirements, and surveying over 280 papers to discover what simulators are already being used. We found that no simulator currently meets all our requirements, and that simulation results are generally reported in the literature in a fashion that precludes any reproduction of results. We hope that this paper will give rise to further discussion and knowledge sharing among those of the P2P and network simulation research communities, so that a simulator that meets the needs of rigorous P2P research can be developed.
As scientists, our research must be tested and evaluated in order for it to be shown to be valid. Part of this process includes providing reproducible results so that peers are able to confirm any findings. The techniques used to achieve this include analytical solutions, simulators and experiments with the actual system; however, in the area of Peer-to-Peer (P2P) computing, this gives rise to a number of challenges. In this paper, we focus on P2P simulators by surveying a number of them with respect to important criteria when simulating a P2P system. Simulators are compared, usage experiences are given and limitations are discussed. The survey shows that while there is a plethora of simulators, there are many issues such as poor scalability, little or no documentation, steep learning curves, and system limitations that make using these simulators a convoluted process.
There are a number of P2P overlay simulators developed by various research groups for use by the P2P academic community, however many still opt to use their own custom-built simulator. Having surveyed the area of Peer-to-Peer simulators in previous work, we believe that this is due to the simulators lacking key functionality such as mechanisms to gather statistical data from simulation runs. The use of custom built simulators gives rise to a number of problems that include an increase in the difficulty to reproduce and validate results and comparison of similar simulated systems and their associated results. In this paper, we discuss the current situation with respect to simulation usage in P2P research and our work towards creating a new simulator that will meet the requirements of P2P researchers. It is our hope that this paper will give rise to further discussion and knowledge sharing among those of the P2P and simulation research communities, so that a simulator that meets the needs of the P2P community can be developed.
Traditional exhibitions in museums provide limited possibilities of interaction between the visitor and their artefacts. Usually, interaction is confined to reading labels with little information on the exhibits, shop booklets, and audio guided tours. These forms of interaction provide minimal information, and do not respond to a visitor’s personalised information preferences. As a result, there is no direct involvement between the visitor and the exhibit. This paper expands on a presentation metaphor, the virtual museum, which through the use of technologies such as Web/X3D, VR and AR offers the visitor the possibility of exploring a virtual museum, interacting with virtual exhibits in real-time and visualising these exhibits in contexts such as 3D gallery spaces. We offer a ‘pot pourri’ of novel and cost-effective interaction and visualisation techniques that can be integrated into web-based virtual museums. In our virtual museums, the exhibit is a digital representation of the cultural artefact, represented in various multimedia formats such as text, images, videos and 3D models/scenes that can be placed inside virtual gallery spaces. Such spaces can be explored, interacted with, and visualised on museum web pages using standard VRML browsers. The interactions provided within our system allow a museum web visitor to: participate in educational quizzes about the exhibits; examine them from different perspectives in VR environments; ‘pick up’ and freely observe the exhibits in indoor AR environments; and finally interact with an artefact’s replica through a safe multimodal interface.
ARCO – Augmented Representation of Cultural Objects (http://www.arco-web.org/) is an EU IST Framework V funded research project aimed at providing museums with useful technologies for digitising, managing and presenting virtual museum artefacts in virtual cultural environments. ARCOLite is a derivative of ARCO focused on small and medium scale museums, reducing the total cost of ownership by eliminating the expensive Oracle database and the patented X-VRML technology. ARCOLite is self-sufficient and can communicate with external digital culture systems. This final year project (A visualisation system for viewing museum artefacts) focuses on certain areas of the ARCOLite system. This report presents the overall concepts of the ARCOLite system and discusses in detail the design and implementation of the underlying client-server architecture that forms the heart of ARCOLite, replacing the equivalent Oracle and X-VRML architecture in ARCO. The report also describes the connectivity between an augmented reality client and the exhibition server.
This paper describes our lightweight prototype XML based client-server architecture for creating, managing and presenting virtual cultural exhibitions. ARCOLite provides a low cost solution for museums to assemble multimedia content into an XML archive for dynamic presentation locally or over the Internet extended with virtual and augmented reality visualization.
Cultural institutions, such as museums, are particularly interested in making their collections accessible to people with physical disabilities. New technologies, such as Web3D and augmented reality (AR), can aid museums in responding to this challenge by building virtual museums accessible over the Internet or through kiosks located in accessible places within the museum. In this paper, we propose a prototype user-friendly visualisation interface that uses Web3D and AR techniques to visualise cultural heritage artefacts for virtual museum exhibitions. User interactions within the virtual museum are performed in an effective way with the help of assistive technology, so that users can feel fully connected with the virtual museum artefacts and so benefit in terms of education and entertainment.
This paper describes ARCOLite, our low cost XML based client-server architecture for building and presenting digital heritage content in virtual museums. Our system includes components for creation and refinement of virtual artefacts including virtual reconstruction of buildings; XML content management, XML technologies for content repositories and presentation; and content visualisation using Web3D, virtual and augmented reality.
This paper presents a low cost architecture that extensively uses XML technologies to present generic digital content using the web and augmented reality. We describe our client-server architecture with particular emphasis on unique XML schemas that are used to design a generic XML repository and illustrate its use with two different application scenarios. Our solution allows management through XML and publication to a visualisation client supporting both virtual and augmented reality integrated with standard web browsing. Two application scenarios have been developed to illustrate the effectiveness of the system.
In this article, the authors present an educational application that allows users to interact with 3D Web content (Web3D) using virtual and augmented reality (AR). This enables an exploration of the potential benefits of Web3D and AR technologies in engineering education and learning. A lecturer’s traditional delivery can be enriched by viewing multimedia content locally or over the Internet, as well as in a tabletop AR environment. The implemented framework is composed of an XML data repository, an XML-based communications server, and an XML-based client visualisation application. The authors illustrate the architecture by configuring it to deliver multimedia content related to the teaching of mechanical engineering. Four mechanical engineering themes (machines, vehicles, platonic solids and tools) are illustrated here to demonstrate the use of the system to support learning through Web3D.