%env CUDA_VISIBLE_DEVICES=6
Natural Language Inference (NLI) method
In this approach we use a BART classifier (Lewis et al., 2019) fine-tuned on the Multi-Genre NLI (MultiNLI; Williams et al., 2018) corpus as the base model.
Given research interests expressed in natural language, we pose the problem of recovering relevant research from the CORD-19 dataset (Wang et al., 2020) as a Zero-Shot Topic Classification task (Yin et al., 2019). Leveraging the Natural Language Inference task framing, we assess each paper's relevance by feeding the model the paper's title and abstract as the premise and a research interest as the hypothesis.
Finally, we use the model's entailment inference values as proxy relevance scores for each paper.
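To make the premise/hypothesis framing concrete, the snippet below is a minimal, self-contained sketch of the scoring step written directly against the Hugging Face transformers API. The facebook/bart-large-mnli checkpoint, the example texts, and the entailment-vs-contradiction renormalisation follow the usual zero-shot recipe (Yin et al., 2019; Davison, 2020); they are illustrative assumptions, not the project helpers used below.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative checkpoint and texts; the project code below loads its own model.
nli_tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
nli_model = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")

premise = "Title and abstract of a CORD-19 paper, e.g. a report on an mRNA vaccine trial."
hypothesis = "This paper is about vaccines and therapeutics."

inputs = nli_tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = nli_model(**inputs).logits  # label order: contradiction, neutral, entailment

# Drop the neutral class and renormalise; P(entailment) acts as the relevance score.
probs = logits[0, [2, 0]].softmax(dim=0)
print(f"relevance score: {probs[0].item():.3f}")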
# NOTE: helper import paths are assumed here; adjust to the actual project layout.
from risotto.artifacts import (load_papers_artifact, build_tokenized_papers_artifact,
                               load_tokenized_papers_artifact, build_entailments_artifact)
from risotto.nli import get_nli_model

try:
    papers = load_papers_artifact()  # pre-processed CORD-19 titles and abstracts
    model, tokenizer = get_nli_model(
        name="huggingface/prunebert-base-uncased-6-finepruned-w-distil-mnli")
except FileNotFoundError:
    print('Data artifacts not ready.')
tokenized_papers = build_tokenized_papers_artifact(
    papers=papers,
    tokenizer=tokenizer,
    dump_path="artifacts/nli_bert_artifacts.hdf",
)
tokenized_papers.head()
tokenized_papers = load_tokenized_papers_artifact(
    "artifacts/nli_bert_artifacts.hdf")
tokenized_papers.head()
# The research interest, encoded once and reused as the hypothesis for every paper.
query_tokenized = tokenizer.encode(
    "This paper is about vaccines and therapeutics.")

build_entailments_artifact(batch_size=256,
                           tokenized_papers=tokenized_papers,
                           query_tokenized=query_tokenized,
                           dump_path="artifacts/nli_bert_artifacts.hdf")
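build_entailments_artifact writes the per-paper entailment scores to the HDF artifact; its exact output schema is project specific. Purely as a hypothetical illustration of what the batched scoring amounts to, and of how the resulting scores can rank papers, one could write something along these lines, assuming papers is a DataFrame with title and abstract columns and that the model follows the MNLI label order contradiction/neutral/entailment:

import torch

# Hypothetical sketch, not the project helper: score every paper's title and
# abstract against the query hypothesis in batches, then rank by P(entailment).
query = "This paper is about vaccines and therapeutics."
batch_size = 256
scores = []
model.eval()
for start in range(0, len(papers), batch_size):
    batch = papers.iloc[start:start + batch_size]
    premises = (batch["title"].fillna("") + ". " + batch["abstract"].fillna("")).tolist()
    inputs = tokenizer(premises, [query] * len(premises),
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    scores.extend(logits[:, [2, 0]].softmax(dim=1)[:, 0].tolist())  # P(entailment)

ranking = papers.assign(relevance=scores).sort_values("relevance", ascending=False)
ranking[["title", "relevance"]].head(10)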
References
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020). Language Models are Few-Shot Learners. https://arxiv.org/abs/2005.14165
Davison, J. (2020). Zero-Shot Learning in Modern NLP. https://joeddav.github.io/blog/2020/05/29/ZSL.html
Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., & Zettlemoyer, L. (2019). BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. http://arxiv.org/abs/1910.13461
Reimers, N., & Gurevych, I. (2019). Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. EMNLP-IJCNLP 2019 - 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference, 3982–3992. https://doi.org/10.18653/v1/d19-1410
Veeranna, S. P., Nam, J., Mencía, E. L., & Fürnkranz, J. (2016). Using semantic similarity for multi-label zero-shot classification of text documents. ESANN 2016 - 24th European Symposium on Artificial Neural Networks, April, 423–428.
Wang, L. L., Lo, K., Chandrasekhar, Y., Reas, R., Yang, J., Eide, D., Funk, K., Kinney, R., Liu, Z., Merrill, W., Mooney, P., Murdick, D., Rishi, D., Sheehan, J., Shen, Z., Stilson, B., Wade, A. D., Wang, K., Wilhelm, C., … Kohlmeier, S. (2020). CORD-19: The Covid-19 Open Research Dataset. https://arxiv.org/abs/2004.10706
Williams, A., Nangia, N., & Bowman, S. R. (2018). A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 1112–1122. http://aclweb.org/anthology/N18-1101
Yin, W., Hay, J., & Roth, D. (2019). Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. EMNLP-IJCNLP 2019 - 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference, 3914–3923. https://doi.org/10.18653/v1/d19-1404