A Quality Model for Linked Data Exploration

@inproceedings{Cappiello2016AQM,
  title={A Quality Model for Linked Data Exploration},
  author={Cinzia Cappiello and Tommaso Di Noia and Bogdan Alexandru Marcu and Maristella Matera},
  booktitle={ICWE},
  year={2016}
}
Linked (Open) Data (LD) offer a great opportunity to interconnect and share large amounts of data on a global scale, creating added value compared to data published as plain HTML. However, this enormous potential is not fully accessible. In fact, LD datasets are often affected by errors, inconsistencies, missing values and other quality issues that may limit their usage. Users are often not aware of the quality and characteristics of the LD datasets that they use for various and diverse… 
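
To make the kind of quality issue discussed here concrete, the sketch below computes a simple completeness indicator over an RDF dump. It is only an illustration, not part of the paper: the use of Python with rdflib, the file name dataset.ttl, and the choice of foaf:Person/foaf:name as the checked class and property are all assumptions.

```python
# Minimal sketch: a simple completeness metric for a Linked Data dump.
# Assumes Python with rdflib installed and a local file 'dataset.ttl';
# both are illustrative choices, not prescribed by the paper.
from rdflib import Graph, Namespace, RDF

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

g = Graph()
g.parse("dataset.ttl", format="turtle")  # load the dataset to be assessed

# Resources declared as foaf:Person.
persons = set(g.subjects(RDF.type, FOAF.Person))

# Persons that also provide a foaf:name (the "expected" property here).
with_name = {p for p in persons if (p, FOAF.name, None) in g}

# Completeness = share of resources carrying the expected property.
completeness = len(with_name) / len(persons) if persons else 1.0
print(f"foaf:name completeness: {completeness:.2%} ({len(with_name)}/{len(persons)})")
```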

Linked Data

TLDR
The objective of this chapter is to provide an overview of the essential aspects of this fairly recent and exciting field, including the Linked Data model, the Resource Description Framework (RDF); its query language, SPARQL (SPARQL Protocol and RDF Query Language); the available means of publication and consumption of Linked Data; and the existing applications and the issues not yet addressed in research.
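
As a quick illustration of the RDF data model and SPARQL mentioned above, the sketch below builds a tiny in-memory graph and queries it. The example namespace, resources, and the use of Python with rdflib are illustrative assumptions, not taken from the chapter.

```python
# Illustrative sketch of RDF + SPARQL using Python with rdflib;
# the example resources and triples are invented for demonstration.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.alice, RDF.type, EX.Person))
g.add((EX.alice, EX.knows, EX.bob))
g.add((EX.bob, RDF.type, EX.Person))

# SPARQL: who does each person know?
query = """
PREFIX ex: <http://example.org/>
SELECT ?person ?friend
WHERE {
    ?person a ex:Person ;
            ex:knows ?friend .
}
"""
for person, friend in g.query(query):
    print(person, "knows", friend)
```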

Evaluating Knowledge Anchors in Data Graphs Against Basic Level Objects

TLDR
A new navigation support approach underpinned by the subsumption theory of meaningful learning is proposed; the theory postulates that new concepts are grasped by starting from familiar concepts, which serve as knowledge anchors from which links to new knowledge are made.

Web Intelligence Linked Open Data for Website Design Reuse

TLDR
This paper proposes the extraction of website-relevant data from online global services, considered as linked open data sources, using a specially developed web intelligence data miner, and performs pilot feature engineering for finding similar solutions within Domain, Task, and User UI models supplemented by Quality aspects.

Crowd-annotation and LoD-based semantic indexing of content in multi-disciplinary web repositories to improve search results

TLDR
It is claimed that expert crowd-annotation of content, on top of automatic semantic annotation, can enrich the semantic index over time and augment the contextual value of content in web repositories, so that it remains findable despite changes in language, terminology and scientific concepts.

Assessing the Importance of Data Factors of Data Quality Model in the Business Intelligence Area

TLDR
The findings demonstrate that the intrinsic, contextual, representational and accessibility information quality categories, and in particular the dimensions of reputation, accuracy, believability, and reliability, proved to be the most important factors.

Requirements for Data Quality Metrics

TLDR
This work presents a set of five requirements for data quality metrics that aim to support an economically oriented management of data quality and decision making under uncertainty, and demonstrates the applicability and efficacy of these requirements.

Feature Factorization for Top-N Recommendation: From Item Rating to Features Relevance

TLDR
This paper proposes to exploit past user ratings to evaluate the relevance of every single feature within each feature matrix, thus moving from a user-item to a user-feature matrix, and shows that the proposed method outperforms the matrix factorization approach performed in the user-item space in terms of accuracy of results.
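
The central step summarized here, projecting user-item ratings onto item features to obtain a user-feature relevance matrix, can be sketched in a few lines. The toy matrices and the plain weighted average below are assumptions for illustration only, not the algorithm proposed in the paper.

```python
# Illustrative NumPy sketch: turning a user-item rating matrix into a
# user-feature relevance matrix via a known item-feature matrix.
# The toy data and the simple weighted average are assumptions for
# demonstration, not the method proposed in the paper.
import numpy as np

# Ratings: 3 users x 4 items (0 = not rated).
R = np.array([
    [5, 0, 3, 0],
    [0, 4, 0, 2],
    [1, 0, 0, 5],
], dtype=float)

# Item-feature incidence: 4 items x 3 features (e.g., genres).
F = np.array([
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
], dtype=float)

# Accumulate each user's ratings onto the features of the rated items,
# then normalize by how often each feature was encountered.
feature_rating_sum = R @ F                        # users x features
feature_rating_count = (R > 0).astype(float) @ F  # how many rated items carry each feature
user_feature = np.divide(
    feature_rating_sum,
    feature_rating_count,
    out=np.zeros_like(feature_rating_sum),
    where=feature_rating_count > 0,
)
print(user_feature)  # per-user relevance of each feature
```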

Towards the Construction of a User Unique Authentication Mechanism on LMS Platforms through Model-Driven Engineering (MDE)

TLDR
A security abstraction model for LMS platforms, based on MDA, is proposed to provide a set of guidelines on how to carry out unified authentication, establishing a common dialogue among stakeholders.
