{"cells": [{"cell_type": "markdown", "metadata": {}, "source": ["# Reading Comprehension Challenges\n", "\n", "This list aims to be a minimal set of reading comprehension (RC) challenges/datasets that a new system should be measured on.\n", "\n", "- [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/): Stanford Question Answering Dataset\n", "- [MS MARCO](http://www.msmarco.org/): Microsoft Machine Reading Comprehension Dataset\n", "- [Natural Questions](https://ai.google.com/research/NaturalQuestions/dataset): Open-Domain Question Answering\n", "- [NarrativeQA](https://github.com/deepmind/narrativeqa): The NarrativeQA Reading Comprehension Challenge\n", "- [QAngaroo](https://qangaroo.cs.ucl.ac.uk/): Reading Comprehension with Multiple Hops\n", "- [MultiRC](https://cogcomp.org/multirc/): Reading Comprehension over Multiple Sentences\n", "- [WikiQA](https://www.microsoft.com/en-us/research/publication/wikiqa-a-challenge-dataset-for-open-domain-question-answering/): A Challenge Dataset for Open-Domain Question Answering\n", "- [HotpotQA](https://hotpotqa.github.io/): A Dataset for Diverse, Explainable Multi-hop Question Answering\n", "- [PiQA](https://github.com/uwnlp/piqa): Phrase-Indexed Question Answering\n", "- [RACE](https://www.cs.cmu.edu/%7Eglai1/data/race/): Large-scale ReAding Comprehension Dataset From Examinations"]}], "metadata": {"kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}, "language_info": {"codemirror_mode": {"name": "ipython", "version": 3}, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.2"}}, "nbformat": 4, "nbformat_minor": 2}