Wordsim-297
Text
NLP
License: MIT

Overview

Most word embedding methods take the word as the basic unit and learn embeddings from words' external contexts, ignoring their internal structure. However, in languages such as Chinese, a word is usually composed of several characters and carries rich internal information, and the semantic meaning of a word is also related to the meanings of its component characters. Taking Chinese as an example, we present a character-enhanced word embedding model (CWE). To address character ambiguity and non-compositional words, we propose multiple-prototype character embeddings and an effective word selection method. We evaluate CWE on word relatedness computation and analogical reasoning; the results show that CWE outperforms baseline methods that ignore internal character information.
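
For intuition, the basic CWE composition can be sketched in a few lines of numpy: a word's embedding is combined with the average of its characters' embeddings. This is a minimal illustrative sketch, not the authors' implementation; the lookup tables, the example word, and the averaged (factor 1/2) combination variant are assumptions, and the multiple-prototype character embeddings and word selection step are omitted.

import numpy as np

DIM = 200  # vector dimension used in the experiments below
rng = np.random.default_rng(0)

# Hypothetical lookup tables: one vector per word and one per character.
word_vecs = {"智能": rng.normal(size=DIM)}
char_vecs = {"智": rng.normal(size=DIM), "能": rng.normal(size=DIM)}

def cwe_embedding(word):
    """Character-enhanced embedding: average the word vector with the
    mean of its characters' vectors (the basic additive composition)."""
    w = word_vecs[word]
    chars = [char_vecs[c] for c in word if c in char_vecs]
    if not chars:  # non-compositional word: fall back to the word vector
        return w
    return 0.5 * (w + np.mean(chars, axis=0))

print(cwe_embedding("智能").shape)  # (200,)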

Data Collection

We select a human-annotated corpus of news articles from The People's Daily for embedding learning. The corpus contains 31 million words; the word vocabulary size is 105 thousand and the character vocabulary size is 6 thousand (covering 96% of the characters in the national standard charset GB2312). We set the vector dimension to 200 and the context window size to 5. For optimization, we use both hierarchical softmax and negative sampling with 10 negative samples. We perform word selection for CWE and also use pre-trained character embeddings. We use CBOW, Skip-Gram and GloVe as baselines, with the same vector dimension and default parameters, and evaluate CWE on word relatedness computation and analogical reasoning.
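
As a rough guide to reproducing the baseline setup and the word relatedness evaluation, here is a minimal sketch using gensim rather than the authors' original code. The toy corpus, the file name wordsim-297.tsv, and the tab-separated word1/word2/score pair format are assumptions for illustration.

from gensim.models import Word2Vec

# Toy stand-in for the segmented People's Daily corpus:
# one list of tokens per sentence.
sentences = [["人民", "日报", "新闻"], ["词", "向量", "学习"]]

# CBOW baseline with the settings reported above: 200-dim vectors,
# window size 5, hierarchical softmax and 10 negative samples
# (gensim trains with both objectives when both are enabled).
model = Word2Vec(
    sentences,
    vector_size=200,
    window=5,
    sg=0,          # 0 = CBOW; set sg=1 for Skip-Gram
    hs=1,          # hierarchical softmax
    negative=10,   # 10 negative samples
    min_count=1,
)

# Word relatedness: Spearman correlation between model similarities and
# the human scores in a wordsim-style pair file (word1<TAB>word2<TAB>score).
pearson, spearman, oov_ratio = model.wv.evaluate_word_pairs("wordsim-297.tsv")
print(spearman)

The Skip-Gram baseline is obtained by flipping sg=1; GloVe requires a separate toolkit.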

Citation

@inproceedings{chen2015joint,
  title={Joint learning of character and word embeddings},
  author={Chen, Xinxiong and Xu, Lei and Liu, Zhiyuan and Sun, Maosong and Luan, Huanbo},
  booktitle={Twenty-Fourth International Joint Conference on Artificial Intelligence},
  year={2015}
}

License

MIT

Dataset Summary

Data format: Text
Data volume: --
File size: 7KB
Publisher: Tsinghua University

Tsinghua University is a major research university in Beijing and a member of the C9 League of Chinese universities. Since its establishment in 1911, it has graduated Chinese leaders in science, engineering, politics, business, academia, and culture.