Researchers at Russia's Skolkovo Institute of Science and Technology (Skoltech), Samsung Research Center Russia, HSE University, and Lomonosov Moscow State University held a contest among five neural language models.
The models competed in lexical substitution tasks, including plain substitution and word sense induction (for example, when a machine must differentiate between the bank of a river and a bank as a financial institution).
The researchers demonstrated which models tend to generate semantic relations of which types (synonyms, hypernyms, and more), and that additional data about the target word can boost lexical substitution quality substantially.
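The word-sense distinction above can be illustrated with a toy sketch: a simplified Lesk-style approach that picks substitutes for "bank" by overlap between the sentence context and hand-written signature words for each sense. The sense inventory, signatures, and substitute lists here are made up for illustration; the actual systems compared in the study are neural language models, not this heuristic.

```python
# Toy illustration of sense-aware lexical substitution:
# choose substitutes for "bank" based on which sense's signature
# words overlap most with the sentence context.
# (Made-up data; real systems use neural language models.)

SENSES = {
    "riverbank": {"signature": {"river", "water", "shore", "fishing"},
                  "substitutes": ["shore", "riverside"]},
    "financial": {"signature": {"money", "loan", "account", "deposit"},
                  "substitutes": ["lender", "institution"]},
}

def substitutes_for_bank(sentence: str) -> list[str]:
    context = set(sentence.lower().split())
    # Score each sense by how many of its signature words
    # appear in the sentence, and keep the best-scoring one.
    best = max(SENSES.values(),
               key=lambda s: len(s["signature"] & context))
    return best["substitutes"]

print(substitutes_for_bank("we sat on the bank of the river"))
# → ['shore', 'riverside']
```

In the riverbank reading, context word "river" matches that sense's signature, so the riverside substitutes win; a sentence mentioning "account" or "deposit" would instead yield the financial substitutes.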
Skoltech's Alexander Panchenko said the outcomes may be helpful for language learning, enhancement of textual data for training neural networks, and writing assistance like "automatic suggestion of synonyms and text reformulation."
From Skolkovo Institute of Science and Technology (Russia)
Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA