Semantic Scholar Bot

A bot (also known as a web robot, web spider, or web crawler) is a software application that runs automated tasks over the Internet faster and more consistently than a human could.

The Semantic Scholar bot crawls certain domains to find academic PDFs. These PDFs are served on semanticscholar.org so researchers can discover and understand the work of other scholars.

If you have any questions or concerns about our crawler, please contact us.

Technical Details

Our crawler always makes requests with the following User-Agent HTTP header when looking for documents:

Mozilla/5.0 (compatible) SemanticScholarBot (+https://www.semanticscholar.org/crawler)

This user agent string can be used to filter or reject traffic from our crawler if desired.