WikiText
Text
NLP
License: CC BY-SA 3.0

Overview

The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia.
Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over 110 times larger. The WikiText dataset also features a far larger vocabulary and retains the original case, punctuation, and numbers, all of which are removed in PTB. As it is composed of full articles, the dataset is well suited for models that can take advantage of long-term dependencies.
In comparison to the Mikolov-processed version of the Penn Treebank (PTB), the WikiText datasets are larger. WikiText-2 aims to be of a similar size to PTB, while WikiText-103 contains all articles extracted from Wikipedia. The WikiText datasets also retain numbers (as opposed to replacing them with N), case (as opposed to lowercasing all text), and punctuation (as opposed to stripping it out).
Dataset statistics
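
The corpus is distributed as plain tokenized text files and can be read directly or through common dataset libraries. As a quick sketch, the snippet below assumes the Hugging Face datasets mirror of WikiText (a third-party hosting choice, not part of this release) and loads the tokenized WikiText-2 configuration:

# Minimal sketch: loading WikiText through the Hugging Face `datasets` library.
# This assumes that third-party mirror; the raw text files can equally be read directly.
from datasets import load_dataset

# "wikitext-2-v1" is the ~2M-token tokenized config; "wikitext-103-v1" is the ~103M-token one.
wikitext2 = load_dataset("wikitext", "wikitext-2-v1")

print(wikitext2)                       # DatasetDict with train / validation / test splits
print(wikitext2["train"][10]["text"])  # each record holds one line of pre-tokenized article text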

Data Collection

We selected only articles fitting the Good or Featured article criteria specified by editors on Wikipedia. These articles have been reviewed by humans and are considered well written, factually accurate, broad in coverage, neutral in point of view, and stable. This resulted in 23,805 Good articles and 4,790 Featured articles. The text for each article was extracted using the Wikipedia API. Extracting the raw text from Wikipedia markup is nontrivial due to the large number of macros in use. These macros are used extensively and include metric conversion, abbreviations, language notation, and date handling.
Once extracted, specific sections which primarily featured lists were removed by default. Other minor bugs, such as sort keys and Edit buttons that leaked in from the HTML, were also removed. Mathematical formulae and LaTeX code were replaced with <formula> tokens. Normalization and tokenization were performed using the Moses tokenizer, slightly augmented to further split numbers (8,600 → 8 @,@ 600) and with some additional minor fixes. A vocabulary was constructed by discarding all words with a count below 3. Words outside of the vocabulary were mapped to the <unk> token, which is also part of the vocabulary.
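
For illustration only, the following sketch reproduces the two preprocessing steps described above, the splitting of numbers into @,@ sub-tokens and the count-based vocabulary with <unk> mapping, in plain Python. It is not the actual Moses-based pipeline, and the regex and helper names are simplified assumptions.

import re
from collections import Counter

def split_numbers(text):
    # Approximate the augmented Moses step: "8,600" becomes "8 @,@ 600".
    return re.sub(r"(?<=\d),(?=\d)", " @,@ ", text)

def build_vocab(tokenized_lines, min_count=3):
    # Keep only words seen at least `min_count` times; <unk> is itself part of the vocabulary.
    counts = Counter(tok for line in tokenized_lines for tok in line)
    vocab = {tok for tok, c in counts.items() if c >= min_count}
    vocab.add("<unk>")
    return vocab

def map_unknowns(tokens, vocab):
    # Replace out-of-vocabulary words with the <unk> token.
    return [tok if tok in vocab else "<unk>" for tok in tokens]

# Toy usage (in the real dataset the threshold of 3 is applied over the whole corpus):
line = split_numbers("The bridge cost 8,600 dollars .").split()
vocab = build_vocab([line], min_count=1)
print(line)                                      # ['The', 'bridge', 'cost', '8', '@,@', '600', 'dollars', '.']
print(map_unknowns(["The", "zeppelin"], vocab))  # ['The', '<unk>']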

Citation

Please use the following citation when referencing the dataset:

@article{DBLP:journals/corr/MerityXBS16,
  author    = {Stephen Merity and
               Caiming Xiong and
               James Bradbury and
               Richard Socher},
  title     = {Pointer Sentinel Mixture Models},
  journal   = {CoRR},
  volume    = {abs/1609.07843},
  year      = {2016},
  url       = {http://arxiv.org/abs/1609.07843},
  archivePrefix = {arXiv},
  eprint    = {1609.07843},
  timestamp = {Thu, 21 Mar 2019 11:19:44 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/MerityXBS16.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

License

CC BY-SA 3.0

Dataset Summary
Data format: Text
Data volume: --
File size: 373.28MB
Publisher: Salesforce
Salesforce is a customer relationship management solution that brings customers and companies together.