The data-literature interlinking service: Towards a common infrastructure for sharing data-article links

@article{Burton2017TheDI,
  title={The data-literature interlinking service: Towards a common infrastructure for sharing data-article links},
  author={Adrian Burton and Hylke Koers and Paolo Manghi and Sandro La Bruzzo and Amir Aryani and Michael Diepenbroek and Uwe Schindler},
  journal={Program},
  year={2017},
  volume={51},
  pages={75-100}
}
Research data publishing is today widely regarded as crucial for reproducibility, proper assessment of scientific results, and as a way for researchers to get proper credit for sharing their data. However, several challenges need to be solved to fully realize its potential, one of them being the development of a global standard for links between research data and literature. Current linking solutions are mostly based on bilateral, ad hoc agreements between publishers and data centers. These… 
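
To make the kind of data-article link the paper discusses concrete, here is a minimal sketch in Python that assembles one link record. The field names (link_provider, relationship_type, source, target, and so on) are illustrative assumptions loosely inspired by the Scholix information model referenced in the works below; they are not the authors' actual exchange schema, and the DOIs are placeholders.

```python
import json
from datetime import datetime, timezone

# Illustrative sketch of a dataset-article link record.
# Field names are assumptions loosely inspired by the Scholix
# information model, not the paper's actual schema.
def make_link_record(article_doi, dataset_doi, relationship, provider):
    """Build a minimal link between a journal article and a dataset."""
    return {
        "link_publication_date": datetime.now(timezone.utc).isoformat(),
        "link_provider": {"name": provider},
        "relationship_type": relationship,  # e.g. "References", "IsSupplementedBy"
        "source": {
            "identifier": {"id": article_doi, "scheme": "doi"},
            "type": "publication",
        },
        "target": {
            "identifier": {"id": dataset_doi, "scheme": "doi"},
            "type": "dataset",
        },
    }

if __name__ == "__main__":
    link = make_link_record(
        article_doi="10.1234/example-article",  # placeholder DOIs
        dataset_doi="10.5678/example-dataset",
        relationship="References",
        provider="Example Data Centre",
    )
    print(json.dumps(link, indent=2))
```

In the bilateral, ad hoc arrangements the abstract describes, each publisher and data centre pair would agree on such a format separately; a shared service aggregates records like this behind a single interface.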

Citations

Enabling Researchers to Make Their Data Count

Ajit Singh, SSRN Electronic Journal, 2019

This paper describes the outcomes of the work of the Scholarly Link Exchange (Scholix) working group and the Data Usage Metrics working group, which developed a framework that allows organizations to expose and discover links between articles and datasets, thereby providing an indication of data citations.

Bringing Citations and Usage Metrics Together to Make Data Count

This paper describes the outcomes of the work of the Scholarly Link Exchange (Scholix) working group and the Data Usage Metrics working group, which developed a framework that allows organizations to expose and discover links between articles and datasets, thereby providing an indication of data citations.
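
As a rough illustration of how exposed links can "provide an indication of data citations", the snippet below counts distinct citing articles per dataset DOI. It assumes records shaped like the hypothetical sketch shown after the abstract above and is not the working groups' actual API.

```python
from collections import defaultdict

# Hypothetical link records shaped like the earlier sketch:
# each has a "source" (article) and a "target" (dataset) identifier.
links = [
    {"source": {"identifier": {"id": "10.1234/article-a"}},
     "target": {"identifier": {"id": "10.5678/dataset-x"}}},
    {"source": {"identifier": {"id": "10.1234/article-b"}},
     "target": {"identifier": {"id": "10.5678/dataset-x"}}},
    {"source": {"identifier": {"id": "10.1234/article-a"}},
     "target": {"identifier": {"id": "10.5678/dataset-y"}}},
]

def data_citation_counts(link_records):
    """Count distinct citing articles per dataset DOI."""
    citing = defaultdict(set)
    for record in link_records:
        dataset_doi = record["target"]["identifier"]["id"]
        article_doi = record["source"]["identifier"]["id"]
        citing[dataset_doi].add(article_doi)
    return {doi: len(articles) for doi, articles in citing.items()}

print(data_citation_counts(links))  # {'10.5678/dataset-x': 2, '10.5678/dataset-y': 1}
```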

Scholarly Resources Structuring: Use Cases for Digital Libraries

It is claimed that by adopting links as new resources, digital libraries (DLs) can extend their collections and/or services for their users, and represent and publish the links via the Semantic Web technology stack.
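
As a loose illustration of publishing such a link "via the Semantic Web technology stack", the snippet below hand-builds one Turtle triple for an article-dataset link. The CiTO property and the DOIs are illustrative assumptions, not the vocabulary used in the cited work.

```python
# Hand-built Turtle for one article -> dataset link; the CiTO property
# and DOIs are illustrative assumptions, not the cited work's vocabulary.
PREFIXES = "@prefix cito: <http://purl.org/spar/cito/> .\n"

def link_as_turtle(article_doi, dataset_doi):
    article = f"<https://doi.org/{article_doi}>"
    dataset = f"<https://doi.org/{dataset_doi}>"
    return f"{article} cito:citesAsDataSource {dataset} ."

print(PREFIXES + link_as_turtle("10.1234/example-article", "10.5678/example-dataset"))
```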

Using the IR as a Research Data Registry

This presentation concludes by discussing how efforts to put the KAUST repository on a path towards becoming a reliable registry of the existence and location of research data released by affiliated researchers position the repository to provide expanded services in support of improved research data management.

Open Science Graphs Must Interoperate!

This work describes the key motivations for (i) the definition of a classification for OSGs, to compare their features and identify commonalities, differences, and added value, and (ii) the definition of an Interoperability Framework, specifically an information model and APIs that enable a seamless exchange of information across graphs.

Affiliation Information in DataCite Dataset Metadata: a Flemish Case Study

This study evaluates how, and to what extent, the metadata of datasets indexed in DataCite offer clear human- or machine-readable information that enables the research data to be linked to a particular research institution.

Scholarly resource linking: Building out a “relationship life cycle”

This paper presents results and insights from three different projects that supported more robust linkages among scholarly resources, to help guide new research initiatives and operational services aimed at integrating relationship information into the scholarly record.

Context-Driven Discoverability of Research Data

The ability to search, discover and reuse data items is nowadays vital to doing science, but the ability to automatically enrich metadata with semantic information is limited by the format of the data files, which is typically not textual and hard to mine.

Why is getting credit for your data so hard?

This paper investigates where the research data of 11 research institutions ended up and how this data is currently tracked and attributed, and analyses the gap between the research data currently held in institutional repositories and the places where their researchers actually share their data.

References

Showing 1-10 of 21 references

On Bridging Data Centers and Publishers: The Data-Literature Interlinking Service

This paper presents the synergic effort of the PDS-WG and the OpenAIRE infrastructure to realize and operate the Data-Literature Interlinking Service, a universal, open service for collecting and sharing dataset-literature links.

Connecting Scientific Articles with Research Data: New Directions in Online Scholarly Publishing

These data-linking efforts tie in with other initiatives at Elsevier to enhance the online article in order to connect with current researchers’ workflows and to provide an optimal platform for the communication of science in the digital era.

The OpenAIRE Literature Broker Service for Institutional Repositories

This paper presents the high-level architecture behind the realization of an institutional repository Literature Broker Service for OpenAIRE, which implements a subscription-and-notification paradigm supporting institutional repositories willing to learn about publication objects in OpenAIRE that do not appear in their collections but may be pertinent to them.
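
A minimal sketch of the subscription-and-notification paradigm described above, assuming a hypothetical in-memory broker rather than the actual OpenAIRE service: repositories subscribe with a pertinence predicate, and the broker reports publication records that match it but are missing from their collection.

```python
from dataclasses import dataclass, field
from typing import Callable, Set

# Hypothetical in-memory broker illustrating a subscription/notification
# paradigm; not the actual OpenAIRE Literature Broker Service API.
@dataclass
class Subscription:
    repository: str
    matches: Callable[[dict], bool]                    # pertinence predicate
    known_ids: Set[str] = field(default_factory=set)   # records already held

class Broker:
    def __init__(self):
        self.subscriptions = []

    def subscribe(self, subscription):
        self.subscriptions.append(subscription)

    def notify(self, publication):
        """Yield (repository, publication) pairs for pertinent, missing records."""
        for sub in self.subscriptions:
            if publication["id"] not in sub.known_ids and sub.matches(publication):
                yield sub.repository, publication

broker = Broker()
broker.subscribe(Subscription(
    repository="repo.example.edu",
    matches=lambda pub: "example.edu" in pub.get("affiliations", []),
    known_ids={"pub-001"},
))

incoming = {"id": "pub-002", "title": "A new article", "affiliations": ["example.edu"]}
for repo, pub in broker.notify(incoming):
    print(f"notify {repo}: {pub['title']}")
```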

Keeping your aggregative infrastructure under control

This demo presents the solution offered in the context of the OpenAIRE infrastructure, which today collects metadata and files from more than 450 data sources (and growing) of several typologies.

Mapping Large Scale Research Metadata to Linked Data: A Performance Comparison of HBase, CSV and XML

This work evaluates the performance of creating LOD by a MapReduce job on top of HBase, by mapping the intermediate CSV files, and by mapping the XML output.

OpenAIREplus: the European Scholarly Communication Data Infrastructure

The high-level architecture and functionalities of that infrastructure, including services designed to collect, interlink and provide access to peer-reviewed and non-peer-reviewed publications, datasets, and projects of the European Commission and national funding schemes, are described.

Coping with interoperability and sustainability in cultural heritage aggregative data infrastructures

The effectiveness of D-NET in the CH scenario is demonstrated by describing its usage in the realisation of a real-case ADI for the EC project Heritage of the People's Europe (HOPE), which uses D-NET to implement a two-phase metadata conversion methodology that addresses data interoperability issues while facilitating sustainability by encouraging participation of data sources.

DataQ: A Data Flow Quality Monitoring System for Aggregative Data Infrastructures

This paper describes DataQ, a general-purpose system for flexible and cost-effective data flow quality monitoring in ADIs, which supports ADI admins with a framework where they can represent ADI data flows and the corresponding monitoring specification, and be instructed on how to meet that specification on the ADI side to implement their monitoring functionality.

Cross-Linking Between Journal Publications and Data Repositories: A Selection of Examples

This paper provides examples of the many ways that a link can be made between a journal article and a dataset held in a data repository, as explored by the PREPARDE project.

The D-NET software toolkit: A framework for the realization, maintenance, and operation of aggregative infrastructures

D-NET is a framework where designers and developers find the tools for constructing and operating aggregative infrastructures in a cost-effective way, and is proposed as an optimal solution for designers and developers willing to realize aggregative infrastructures.