Audio interview with Peter Murray-Rust on the Data Skeptic Podcast (53 minutes)

In August, Peter Murray-Rust agreed to do an interview with Kyle Polich at Data Skeptic, “the podcast that is skeptical of and with data”. The interview was published online on 28th August 2015.

Data Skeptic is a podcast that alternates between short mini episodes with the host explaining concepts from data science to his non-data scientist wife, and longer interviews featuring practitioners and experts on interesting topics related to data, all through the eye of scientific skepticism.

ContentMine is a project which provides the tools and workflow to convert scientific literature into machine-readable and machine-interpretable data in order to facilitate better and more effective access to the accumulated knowledge of humankind. The program’s founder Peter Murray-Rust joins us this week to discuss ContentMine. Our discussion covers the project, the scientific publication process, copyright, and several other interesting topics.

Full transcript available here: ContentMine full transcript.

The draft transcript is ~98% accurate and Peter will edit it in due course.


Both the audio and transcript are licensed under Creative Commons CC-BY.

With so much valuable content, we shall in due course break this down into smaller, more digestible segments, but for now, enjoy the interview in full.

Just some of the topics covered:

 

Entry points to the ContentMine toolchain

Where should you get started with content mining? Well, it really depends on what you’ve currently got and/or where you plan to source your content from. In this bite-sized post I’ll cover all three of the major entry points to the ContentMine toolchain.

[Figure: the three entry points to the ContentMine toolchain]

We envisage that there will be three significant entry points to the ContentMine toolchain:

  1. From academic content aggregator websites via getpapers (most recommended route, if possible)
  2. From journal websites via quickscrape
  3. From user-supplied files on your local desktop, fed directly to norma (least recommended)

 

All three of these entry points pass content to norma, which normalises the to-be-mined content to ContentMine standards and specifications prior to analysis and visualisation by downstream ContentMine tools.

 

Here are some command-line examples of how each of these entry points works:

 

1.) The ideal workflow, if your subject matter / resource provider allows it, is to take standardised XML, e.g. NLM XML from EPMC (Europe PubMed Central), and work with this highly structured content. The example below is taken from a previous blog post on finding species.

#getpapers
getpapers --query 'species JOURNAL:"PLOS ONE" AND FIRST_PDATE:[2015-04-02 TO 2015-04-02]' \
          -x --outdir plos-species
#norma
norma -q plos-species/ -i fulltext.xml -o scholarly.html --transform nlm2html
#downstream analyses proceed on normalised content from here onwards...
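
As a taste of what “downstream” means here, the earlier species-finding post ran the ami2-species plugin over the normalised scholarly.html files. The command below is only a sketch: the exact flag names are an assumption and vary between ami releases, so check ami2-species --help in your installation.

#downstream sketch (illustrative only): search the normalised papers for binomial species names
#flag names are assumptions based on the earlier "finding species" post -- verify with ami2-species --help
ami2-species -q plos-species/ -i scholarly.html --sp.species binomial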

 

2.) If your subject matter isn’t covered by IEEE / arXiv / Europe PubMed Central, or is unavailable there for some other reason such as pesky embargo periods, then you can try to enter content into the ContentMine toolchain via quickscrape. In terms of file format, the order of preference is (from best to worst): XML > HTML > PDF. Sadly, many legacy subscription-access publishers choose not to expose content as XML on their journal websites. The HTML workflow goes from publisher HTML -> tidied-up XHTML -> scholarly HTML.

#quickscrape usage on a Nature Communications paper
quickscrape --url http://www.nature.com/ncomms/journal/v1/n3/abs/ncomms1031.html \
            --scraper journal-scrapers/scrapers/nature.json --output natcomms
info: quickscrape 0.4.5 launched with...
info: - URL: http://www.nature.com/ncomms/journal/v1/n3/abs/ncomms1031.html
info: - Scraper: /home/ross/workspace/quickscrape/journal-scrapers/scrapers/nature.json
info: - Rate limit: 3 per minute
info: - Log level: info
info: urls to scrape: 1
info: processing URL: http://www.nature.com/ncomms/journal/v1/n3/abs/ncomms1031.html
info: [scraper]. URL rendered. http://www.nature.com/ncomms/journal/v1/n3/abs/ncomms1031.html.
info: [scraper]. download started. fulltext.html.
info: [scraper]. download started. ncomms1031-s1.pdf.
info: [scraper]. download started. fulltext.pdf.
info: URL processed: captured 11/25 elements (14 captures failed)
info: all tasks completed

tree natcomms/
natcomms/
└── http_www.nature.com_ncomms_journal_v1_n3_abs_ncomms1031.html
    ├── fulltext.html
    ├── fulltext.pdf
    ├── ncomms1031-s1.pdf
    └── results.json

1 directory, 4 files

#norma steps
norma -i fulltext.html -o fulltext.xhtml --cmdir natcomms/ --html jsoup
norma -i fulltext.xhtml -o scholarly.html --cmdir natcomms/ --transform nature2html

tree natcomms/
natcomms/
└── http_www.nature.com_ncomms_journal_v1_n3_abs_ncomms1031.html
    ├── fulltext.html
    ├── fulltext.pdf
    ├── fulltext.xhtml
    ├── ncomms1031-s1.pdf
    ├── results.json
    └── scholarly.html

1 directory, 6 files

 

3.) If all else fails, you can feed your files into our toolchain directly via norma, but this route doesn’t capture rich metadata about each item of user-supplied content, so it’s not an optimal pathway. Here’s an example of three random PDFs being prepared for analysis with norma:

#put content in direct via norma
norma -i A.pdf B.pdf C.pdf -o output/ctrees --cmdir

tree output/
output/
└── ctrees
    ├── A_pdf
    │   └── fulltext.pdf
    ├── B_pdf
    │   └── fulltext.pdf
    └── C_pdf
        └── fulltext.pdf

4 directories, 3 files
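
The captured PDFs still need converting before most downstream ContentMine tools can use them. The following is a hedged sketch only: pdf2txt is an assumed transform name, so list the transforms your norma build actually supports before relying on it.

#hypothetical next step: convert each captured PDF to plain text for downstream analysis
#pdf2txt is an assumed transform name -- check norma --help for the transforms available in your version
norma -i fulltext.pdf -o fulltext.pdf.txt --cmdir output/ctrees --transform pdf2txt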

 

…and that’s it. Three different entry points to the ContentMine toolchain: primarily designed with XML, HTML or PDF in mind, but other formats are accepted as input too.

Explaining the difference between getpapers and quickscrape

Having written a blog post about getpapers yesterday, I thought it might be useful to explain the difference in utility between getpapers and quickscrape.

I think of getpapers as a handy command-line tool for search & retrieval of relevant research. However, there are a variety of circumstances that can prevent getpapers from returning the full text of some relevant papers; this is where quickscrape becomes very useful.

quickscrape is a command-line tool simply for the retrieval of known research you want to download, with more powerful and flexible download techniques than getpapers. In theory you can get anything and everything you have legal access to, in bulk, via quickscrape. Now that’s what I mean by POWER!
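
For instance, if you already know the URLs of the papers you want, you can feed them straight to quickscrape. This is a minimal sketch that reuses only flags shown elsewhere in this post; the URL-list filename is just an illustrative name.

#hand-made list of article URLs, one per line (my_urls.txt is an illustrative filename)
echo "http://www.nature.com/ncomms/journal/v1/n3/abs/ncomms1031.html" > my_urls.txt

#scrape every URL in the list with the matching scraper definition
quickscrape --urllist my_urls.txt \
            --scraper journal-scrapers/scrapers/nature.json \
            --output known-papers \
            --outformat bibjson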

 

Q: Is there a situation in which I might use both getpapers and quickscrape?

A: Yes! getpapers has functionality specifically designed to feed into quickscrape, which can be very useful when getpapers finds relevant closed-access papers that publisher-imposed restrictions don’t allow EPMC to make available for full-text download.

A worked example: I want to mine the last 3 months of papers published in PNAS. PNAS typically imposes a 6-month embargo on its research, so EPMC cannot offer full-text download of recent PNAS papers. Instead, you have to go via the PNAS journal website to get recent PNAS articles.

# Use getpapers to get a list of all recent PNAS articles
getpapers \
  --query 'JOURNAL:"PNAS" AND FIRST_PDATE:[2015-04-01 TO 2015-07-01]' \
  --all \
  --outdir recentpnas

# Use quickscrape to download recent PNAS articles output by getpapers
quickscrape \
  --urllist recentpnas/fulltext_html_urls.txt \
  --scraper journal-scrapers/scrapers/pnas.json \
  --output recentpnasfull \
  --outformat bibjson

Perfect synergy, eh?

 

Q: What’s a real use case in which someone would use quickscrape instead of getpapers?

A: When the journal (e.g. Acta Palaeontologica Polonica) or platform (e.g. bioRxiv) that the desired research is published on is not in Europe PubMed Central (EPMC), arXiv, or IEEE.

Incidentally, there are two Acta Palaeontologica Polonica articles in EPMC and, to be honest, I have no idea why! It would certainly make my life easier if EPMC / PMC were more widely scoped in terms of the subjects and journals allowed in.

I’m not a biomedical researcher myself, so unfortunately this is a common problem for me. There is no central aggregation of evolution, ecology or palaeontology journal content; if you want to do full-text mining on it, you have to aggregate the content yourself, with quickscrape!