Viewpoint: Research evaluation for computer science

Bertrand Meyer, Christine Choppy, Jørgen Staunstrup, and Jan van Leeuwen. Commun. ACM.
Reassesses the criteria and techniques traditionally used to evaluate the effectiveness of computer science research.

How productivity and impact differ across computer science subareas

How to understand evaluation criteria for CS researchers and why it is important to have a clear understanding of what is considered in a given study.

Invisible work in standard bibliometric evaluation of computer science

Most of a computer scientist's production can go uncounted if a standard bibliographic service is used.

Publish now, judge later

A proposal to address the problem of too many conference submissions and not enough time for reviewers to carefully evaluate each one.

Using Community Structure to Categorize Computer Science Conferences: Initial Results

Experiments show that categorizing conferences by exemplars matches well with the curated topic classification of the Chinese CCF conference list and accords with manual judgement, suggesting a practical and robust method for categorizing CS conferences.

Computational support for academic peer review

New tools tackle an age-old practice.

How to Write a Good Paper in Computer Science and How Will It Be Measured by ISI Web of Knowledge

Discusses the limits of numerical assessment tools as applied to computer science publications, and gives guidelines on how to write a good paper, where to submit the manuscript, and how to handle the reviewing process.

An Effective End-User Development Approach through Domain-Specific Mashups for Research Impact Evaluation

This thesis presents a novel end-user development approach aimed at non-programmers, introducing domain-specific mashups that "speak the language of users", i.e., that are aware of the terminology, concepts, rules, and conventions the user is comfortable with.

The fate of computing in research performance evaluation schemes – ERA vs PBRF

The prevailing ‘audit culture’ in national governments has seen a global proliferation of research performance evaluation schemes. Most recently the Excellence in Research for Australia (ERA) results

Publication practices in the Argentinian Computer Science community: a bibliometric perspective

A study of the publication practices of the Argentinian CS community, their evolution over time and, more importantly, the impact they achieved in terms of citations is presented.


Stop the numbers game

Argues that evaluating researchers by counting papers encourages superficial publication and slows the rate of scientific progress.