 
env: CUDA_VISIBLE_DEVICES=4,5,6
 

embed_papers[source]

embed_papers(papers)

cosine_distance_query[source]

cosine_distance_query(model, query, papers_embeddings)

get_sentence_transformer[source]

get_sentence_transformer(name='bert-base-nli-mean-tokens')
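As a sketch of how `cosine_distance_query` might rank papers (an illustrative implementation, not the library's actual one): embed the query and the papers, normalize both, and compute one cosine distance per paper, lower meaning more relevant.

```python
import numpy as np

def cosine_distances(query_embedding, papers_embeddings):
    # Illustrative sketch of the distance step in cosine_distance_query:
    # assumes papers_embeddings is an (n_papers, dim) array and
    # query_embedding a (dim,) vector, e.g. from a SentenceTransformer.
    q = query_embedding / np.linalg.norm(query_embedding)
    p = papers_embeddings / np.linalg.norm(papers_embeddings, axis=1, keepdims=True)
    # Cosine distance = 1 - cosine similarity.
    return 1.0 - p @ q
```

Sorting papers by ascending distance then yields the ranking.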

Natural Language Inference (NLI) method

In this approach we use a BART classifier (Lewis et al., 2019) fine-tuned on the Multi-Genre NLI (MultiNLI, Williams et al., 2018) corpus as the base model.

Given research interests expressed in natural language, we pose the problem of recovering relevant research from the CORD-19 dataset (Wang et al., 2020) as a Zero-Shot Topic Classification task (Yin et al., 2019). Leveraging the Natural Language Inference framework, we assess each paper's relevance by feeding the model the paper's title and abstract as the premise and a research interest as the hypothesis.

Finally, we use the model's entailment probabilities as proxy relevance scores for each paper.
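Concretely, turning NLI logits into a relevance score can be sketched as follows (an illustrative helper, not the notebook's own code), assuming the MNLI label order (contradiction, neutral, entailment) used by facebook/bart-large-mnli:

```python
import numpy as np

def relevance_from_logits(nli_logits):
    # Following Yin et al. (2019): drop the neutral class and softmax
    # contradiction vs. entailment; the entailment probability is the score.
    z = np.array([nli_logits[0], nli_logits[2]])
    z = np.exp(z - z.max())  # numerically stable softmax
    return float((z / z.sum())[1])
```

A pair whose logits favor entailment scores close to 1; equal logits score 0.5.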

get_nli_model[source]

get_nli_model(name='facebook/bart-large-mnli')

from risotto.artifacts import load_papers_artifact

try:
    papers = load_papers_artifact()
    model, tokenizer = get_nli_model()
except Exception:
    print('Data is not ready.')
Data is not ready.

build_tokenized_papers_artifact[source]

build_tokenized_papers_artifact(papers, tokenizer, should_dump=True, dump_path=None, batch_size=128)

load_tokenized_papers_artifact[source]

load_tokenized_papers_artifact(artifacts_path)
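The batching implied by the `batch_size` parameter might look like the following sketch (a hypothetical helper; `build_tokenized_papers_artifact`'s real internals may differ), assuming a Hugging Face tokenizer that returns a dict with an `"input_ids"` entry:

```python
def tokenize_in_batches(texts, tokenizer, batch_size=128):
    # Tokenize texts (e.g. one title + abstract string per paper)
    # batch_size at a time, collecting token-id sequences into one list.
    input_ids = []
    for start in range(0, len(texts), batch_size):
        batch = texts[start:start + batch_size]
        input_ids.extend(tokenizer(batch)["input_ids"])
    return input_ids
```

Batching keeps memory bounded while still amortizing the tokenizer's per-call overhead.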

tokenized_papers = build_tokenized_papers_artifact(
    papers=papers,
    tokenizer=tokenizer,
    dump_path="artifacts/nli_artifacts.hdf")
tokenized_papers.head()
100.00% [604/604 08:29<00:00]
ug7v899j    [0, 20868, 1575, 9, 2040, 12, 32012, 1308, 438...
02tnwd4m    [0, 19272, 4063, 30629, 35, 10, 1759, 12, 3382...
ejv2xln0    [0, 6544, 24905, 927, 8276, 12, 495, 8, 34049,...
2b73a28n    [0, 21888, 9, 253, 15244, 2614, 12, 134, 11, 1...
9785vg6d    [0, 13120, 8151, 11, 22201, 44828, 4590, 11, 1...
Name: tokenized_papers, dtype: object
tokenized_papers = load_tokenized_papers_artifact("artifacts/nli_artifacts.hdf")
tokenized_papers.head()
ug7v899j    [0, 20868, 1575, 9, 2040, 12, 32012, 1308, 438...
02tnwd4m    [0, 19272, 4063, 30629, 35, 10, 1759, 12, 3382...
ejv2xln0    [0, 6544, 24905, 927, 8276, 12, 495, 8, 34049,...
2b73a28n    [0, 21888, 9, 253, 15244, 2614, 12, 134, 11, 1...
9785vg6d    [0, 13120, 8151, 11, 22201, 44828, 4590, 11, 1...
Name: tokenized_papers, dtype: object

build_entailments_artifact[source]

build_entailments_artifact(tokenized_papers, query_tokenized, batch_size=64, device='cuda', should_dump=True, dump_path=None)

load_entailments_artifact[source]

load_entailments_artifact(artifacts_path)
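The batched inference inside `build_entailments_artifact` might be sketched as follows (a hypothetical reconstruction; the function name and padding details are assumptions), scoring pre-encoded premise+hypothesis sequences with an MNLI classifier whose logits are ordered (contradiction, neutral, entailment):

```python
import torch

def batched_entailment_scores(model, pairs, batch_size=64, device="cpu"):
    # pairs: list of already-encoded premise+hypothesis token-id sequences.
    # Assumes BART's pad token id of 1; adjust for other models.
    scores = []
    with torch.no_grad():
        for start in range(0, len(pairs), batch_size):
            batch = pairs[start:start + batch_size]
            # Pad the batch to its longest sequence.
            max_len = max(len(p) for p in batch)
            input_ids = torch.tensor(
                [p + [1] * (max_len - len(p)) for p in batch], device=device)
            attention_mask = (input_ids != 1).long()
            logits = model(input_ids=input_ids,
                           attention_mask=attention_mask).logits
            # Drop the neutral class; softmax contradiction vs. entailment.
            two_way = logits[:, [0, 2]].softmax(dim=-1)
            scores.extend(two_way[:, 1].tolist())
    return scores
```

`torch.no_grad()` avoids building the autograd graph, which matters at this scale (604 batches in the run above).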

References

Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020). Language Models are Few-Shot Learners. https://arxiv.org/abs/2005.14165

Davison, J. (2020). Zero-Shot Learning in Modern NLP. https://joeddav.github.io/blog/2020/05/29/ZSL.html

Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., & Zettlemoyer, L. (2019). BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. http://arxiv.org/abs/1910.13461

Reimers, N., & Gurevych, I. (2019). Sentence-BERT: Sentence embeddings using siamese BERT-networks. EMNLP-IJCNLP 2019 - 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference, 3982–3992. https://doi.org/10.18653/v1/d19-1410

Veeranna, S. P., Nam, J., Mencía, E. L., & Fürnkranz, J. (2016). Using semantic similarity for multi-label zero-shot classification of text documents. ESANN 2016 - 24th European Symposium on Artificial Neural Networks, April, 423–428.

Wang, L. L., Lo, K., Chandrasekhar, Y., Reas, R., Yang, J., Eide, D., Funk, K., Kinney, R., Liu, Z., Merrill, W., Mooney, P., Murdick, D., Rishi, D., Sheehan, J., Shen, Z., Stilson, B., Wade, A. D., Wang, K., Wilhelm, C., … Kohlmeier, S. (2020). CORD-19: The COVID-19 Open Research Dataset. https://arxiv.org/abs/2004.10706

Williams, A., Nangia, N., & Bowman, S. R. (2018). A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 1112–1122. http://aclweb.org/anthology/N18-1101

Yin, W., Hay, J., & Roth, D. (2019). Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. EMNLP-IJCNLP 2019 - 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference, 3914–3923. https://doi.org/10.18653/v1/d19-1404