Similar papers
March 11, 2016
This paper describes the intense software filtering that has allowed the arXiv e-print repository to sort and process large numbers of submissions with minimal human intervention, making it one of the most important and influential open-access repositories to date. The paper narrates how arXiv, using sophisticated sorting and filtering algorithms to decrease human workload, was transformed from a small mailing list used by a few hundred researchers into a site that processes t...
May 19, 2012
There has been a lively debate in many fields, including statistics and related applied fields such as psychology and biomedical research, on possible reforms of the scholarly publishing system. Currently, referees contribute a great deal to improving scientific papers, both directly through constructive criticism and indirectly through the threat of rejection. We discuss ways in which new approaches to journal publication could continue to make use of the valuable efforts of peer r...
February 17, 2011
The peer review system as used in several computer science communities has several flaws, including long review times, overloaded reviewers, and the fostering of niche topics. These flaws decrease quality, lower impact, slow down the innovation process, and lead to frustration among authors, readers, and reviewers. In order to fix this, we propose a new peer review system termed paper bricks. Paper bricks has several advantages over the existing system, including shorter publica...
May 4, 2012
Existing norms for scientific communication are rooted in anachronistic practices of bygone eras, making them needlessly inefficient. We outline a path that moves away from the existing model of scientific communication to improve the efficiency in meeting the purpose of public science - knowledge accumulation. We call for six changes: (1) full embrace of digital communication, (2) open access to all published research, (3) disentangling publication from evaluation, (4) break...
August 21, 2004
This paper describes a new breed of academic journals that use statistical machine learning techniques to make them more democratic. In particular, not only can anyone submit an article, but anyone can also become a reviewer. Machine learning is used to decide which reviewers accurately represent the views of the journal's readers and thus deserve to have their opinions carry more weight. The paper concentrates on describing a specific experimental prototype of a democratic j...
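The abstract above does not spell out the weighting scheme, so the following is only a minimal sketch of the idea, assuming reviewers score articles on a numeric scale and readers later rate the same articles: each reviewer is weighted by how well their past scores have tracked aggregate reader ratings, and a submission is accepted on the weighted average of its reviewer scores. The function names, the correlation-based weighting, and the threshold are illustrative assumptions, not the prototype's actual method.

    # Illustrative sketch only: weight reviewers by agreement with readers.
    # Assumes each reviewer has a history of (reviewer_score, mean_reader_rating) pairs.
    from statistics import mean

    def reviewer_weight(history, min_weight=0.1):
        """Weight = Pearson correlation between a reviewer's past scores and the
        readers' mean ratings of the same articles, clipped to a small floor."""
        if len(history) < 2:
            return min_weight                      # not enough history: small default weight
        xs = [r for r, _ in history]
        ys = [g for _, g in history]
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        varx = sum((x - mx) ** 2 for x in xs)
        vary = sum((y - my) ** 2 for y in ys)
        if varx == 0 or vary == 0:
            return min_weight
        corr = cov / (varx ** 0.5 * vary ** 0.5)
        return max(min_weight, corr)               # never let a weight go negative

    def weighted_decision(scores, histories, threshold=0.6):
        """Accept an article if the weight-averaged reviewer score clears a threshold."""
        weights = [reviewer_weight(h) for h in histories]
        avg = sum(w * s for w, s in zip(weights, scores)) / sum(weights)
        return avg >= threshold

Clipping each weight at a small positive floor keeps a reviewer who happens to disagree with readers from acquiring a negative weight and thereby inverting the effect of their votes.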
January 27, 2017
Scientific evaluation is a determinant of how scientists, institutions and funders behave, and as such is a key element in the making of science. In this article, we propose an alternative to the current norm of evaluating research with journal rank. Following a well-defined notion of scientific value, we introduce qualitative processes that can also be quantified and give rise to meaningful and easy-to-use article-level metrics. In our approach, the goal of a scientist is tr...
May 23, 2016
We add a small increment to understanding the notion of Primary Source Knowledge, knowledge that the non-expert and the citizen can acquire by assiduously reading the primary scientific journal literature without being embedded in the cultural life of the corresponding technical specialty. This comes from exposing four papers to the automated computer filters used by the physics preprint server arXiv. These filters are used to flag papers in need of further review by human as...
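The snippet treats the arXiv filters as a black box, so the sketch below is merely an assumed stand-in for such a filter, not arXiv's actual (undisclosed) system: a TF-IDF text classifier trained on past submissions flags any new abstract whose predicted in-scope probability falls below a threshold, routing it to a human moderator. The scikit-learn pipeline, the labelling, and the threshold are all assumptions made for illustration.

    # Assumed stand-in for an automated "flag for human review" filter;
    # not arXiv's actual filtering pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def train_filter(texts, labels):
        """Train on past submissions; label 1 = posted without moderator flags."""
        clf = make_pipeline(TfidfVectorizer(max_features=50_000),
                            LogisticRegression(max_iter=1000))
        clf.fit(texts, labels)
        return clf

    def needs_human_review(clf, abstract, threshold=0.5):
        """Flag a new submission when the model is not confident it is in scope."""
        p_in_scope = clf.predict_proba([abstract])[0][1]   # probability of label 1
        return p_in_scope < threshold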
May 31, 2018
In this paper and demo we present a crowd- and crowd+AI-based system, called CrowdRev, that supports the screening phase of literature reviews and achieves the same quality as author classification at a fraction of the cost, and near-instantly. CrowdRev makes it easy for authors to leverage the crowd, and ensures that no money is wasted even in the face of difficult papers or criteria: if the system detects that the task is too hard for the crowd, it just gives up trying (for th...
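The abstract does not describe how CrowdRev decides that a task is too hard, so the following is a hedged sketch of one plausible give-up rule rather than the system's actual logic: crowd votes on a (paper, criterion) pair are collected until either a large-enough majority emerges or a fixed vote budget is exhausted, at which point the item is escalated instead of buying further crowd answers. Vote counts, thresholds, and names are assumptions.

    # Hedged sketch of a crowd screening loop with a "give up" rule;
    # thresholds, vote counts, and names are assumptions, not CrowdRev's spec.
    def screen_item(get_crowd_vote, max_votes=9, confidence=0.8):
        """Collect votes (True = paper meets the exclusion criterion) until we are
        confident either way, or give up and escalate to an expert."""
        votes = []
        for _ in range(max_votes):
            votes.append(get_crowd_vote())
            yes = sum(votes)
            share = max(yes, len(votes) - yes) / len(votes)
            # Confident enough after at least 3 votes: stop buying crowd answers.
            if len(votes) >= 3 and share >= confidence:
                return "exclude" if yes > len(votes) - yes else "include"
        return "expert_review"   # too hard for the crowd: stop spending money here

Stopping early on a clear majority is what keeps the cost low, while the fixed budget is what keeps a genuinely ambiguous item from silently draining money.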
May 31, 2002
In this paper we compare the relevance of information obtained from "discriminative" media and from "non-discriminative" media. Discriminative media are those that accumulate and deliver information using a heuristic selection of it. This can be done by humans, or by artificially intelligent systems, exhibiting some form of "knowledge". Non-discriminative media just collect and return information without any distinction. This can also be done by humans or by artificial syst...
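As a toy illustration of the distinction drawn above (the function names and the scorer are hypothetical, not anything defined in the paper), a non-discriminative medium simply returns everything it has collected, while a discriminative one applies a heuristic relevance scorer and returns only the top-ranked items:

    # Toy contrast between the two kinds of media described above.
    def non_discriminative(store, query):
        """Return every stored item; the query is ignored by design."""
        return list(store)

    def discriminative(store, query, scorer, k=10):
        """Return only the items a heuristic scorer judges most relevant to the query."""
        return sorted(store, key=lambda item: scorer(query, item), reverse=True)[:k]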
January 2, 2023
Science depends on a communication system, and today that is largely provided by digital technologies such as the internet and web. Although digital technologies provide the infrastructure for that communication system, peer-reviewed journals continue to mimic workflows and processes from the print era. This paper focuses on one artifact from the print era, the journal issue, and describes how this artifact has been detrimental to the communication of science and therefor...