

Durham Research Online

Data quality assessment and anomaly detection via map / reduce and linked data : a case study in the medical domain.

Bonner, S. and McGough, S. and Kureshi, I. and Brennan, J. and Theodoropoulos, G. and Moss, L. and Corsar, D. and Antoniou, G. (2015) 'Data quality assessment and anomaly detection via map / reduce and linked data : a case study in the medical domain.', in Proceedings of the 2015 IEEE International Conference on Big Data, Oct 29-Nov 01, 2015, Santa Clara, CA, USA, pp. 737-746.

Abstract

Recent technological advances in modern healthcare have led to the ability to collect a vast wealth of patient monitoring data. This data can be utilised for patient diagnosis, but it also holds potential for use within medical research. However, these datasets often contain errors which limit their value to medical research, with one study finding error rates ranging from 2.3% to 26.9% in a selection of medical databases. Previous methods for automatically assessing data quality normally rely on threshold rules, which are often unable to correctly identify errors, as further complex domain knowledge is required. To combat this, a semantic web based framework has previously been developed to assess the quality of medical data. However, early work, based solely on traditional semantic web technologies, revealed that these either cannot scale, or scale inefficiently, to the vast volumes of medical data. In this paper we present a new method for storing and querying medical RDF datasets using Hadoop Map / Reduce. This approach exploits the inherent parallelism found within RDF datasets and queries, allowing us to scale with both dataset and system size. Unlike previous solutions, this framework uses highly optimised (SPARQL) joining strategies, intelligent data caching and a super-query to enable the completion of eight distinct SPARQL lookups, comprising over eighty distinct joins, in only two Map / Reduce iterations. Results are presented comparing both Jena and a previous Hadoop implementation, demonstrating the superior performance of the new methodology. The new method is shown to be five times faster than Jena and twice as fast as the previous approach.
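The core idea of evaluating a SPARQL-style join as a Map / Reduce job can be illustrated with a minimal sketch. This is not the authors' implementation; it is a toy example, with hypothetical predicate names (`hasReading`, `hasValue`) and in-memory map and reduce phases, showing how two triple patterns sharing a join variable can be matched by keying each triple on that variable's binding:

```python
from collections import defaultdict

# Toy RDF triples (subject, predicate, object); medical-style data invented
# for illustration only.
triples = [
    ("patient1", "hasReading", "reading1"),
    ("patient2", "hasReading", "reading2"),
    ("reading1", "hasValue", "98.6"),
    ("reading2", "hasValue", "250.0"),
]

def map_phase(triples):
    """Map: emit each triple keyed by the binding of the join variable.

    For the pattern pair (?p hasReading ?r) . (?r hasValue ?v),
    the shared variable is ?r, so ?r's binding becomes the key.
    """
    for s, p, o in triples:
        if p == "hasReading":
            yield o, ("left", s)    # ?r bound by object; carries ?p
        elif p == "hasValue":
            yield s, ("right", o)   # ?r bound by subject; carries ?v

def reduce_phase(mapped):
    """Reduce: group by join key and emit joined (?p, ?r, ?v) bindings."""
    groups = defaultdict(list)
    for key, val in mapped:
        groups[key].append(val)
    for key, vals in groups.items():
        lefts = [v for tag, v in vals if tag == "left"]
        rights = [v for tag, v in vals if tag == "right"]
        for patient in lefts:
            for value in rights:
                yield patient, key, value

results = sorted(reduce_phase(map_phase(triples)))
print(results)
```

In a real Hadoop job the grouping in the reduce phase is performed by the framework's shuffle stage rather than an in-memory dictionary; the paper's contribution lies in scheduling many such joins so that eight lookups complete in only two iterations.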

Item Type: Book chapter
Keywords: RDF, Medical Data, Map / Reduce, Joins
Full text: (AM) Accepted Manuscript (371Kb)
Status: Peer-reviewed
Publisher Web site: http://dx.doi.org/10.1109/BigData.2015.7363818
Publisher statement: © 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Date accepted: 05 September 2015
Date deposited: 26 November 2015
Date of first online publication: November 2015
Date first made open access: No date available
