ELMo
ELMo provides deep contextualized word representations from a bidirectional LSTM trained on a large corpus with a language modeling objective.
Introduced by researchers at the Allen Institute for AI (AI2) in 2018, ELMo (Embeddings from Language Models) generates vector embeddings that vary with a word's surrounding context. Unlike static models such as word2vec, which assign a single vector per word type, ELMo derives each token's representation from the internal states of a two-layer BiLSTM, allowing it to distinguish different senses of the same word (e.g., 'bank' as a river edge versus a financial institution). This approach improved the state of the art on six major NLP benchmarks, including SQuAD and SNLI, by capturing both complex syntax and nuanced semantics.
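The core mechanism can be sketched as follows: the final ELMo embedding for each token is a task-specific, softmax-weighted sum of the BiLSTM layer representations, scaled by a learned scalar. This is a minimal NumPy sketch of that layer mixing; the shapes, weight values, and random representations are illustrative assumptions, not outputs of a trained model.

```python
import numpy as np

# Sketch of ELMo's layer mixing: ELMo_k = gamma * sum_j s_j * h[j, k],
# where h[j, k] is layer j's representation of token k, s_j are
# softmax-normalized layer weights, and gamma is a learned scale.
rng = np.random.default_rng(0)
num_layers = 3          # token embedding layer + 2 BiLSTM layers
seq_len, dim = 5, 1024  # 5 tokens, 1024-dim representations (illustrative)

# Stand-in for the per-layer representations of one sentence
h = rng.standard_normal((num_layers, seq_len, dim))

# Hypothetical learned scalars, softmax-normalized so they sum to 1
raw_weights = np.array([0.2, 1.0, 0.5])
s = np.exp(raw_weights) / np.exp(raw_weights).sum()
gamma = 1.0

# Weighted sum over layers yields one contextual vector per token
elmo = gamma * np.einsum("j,jkd->kd", s, h)
print(elmo.shape)  # (5, 1024)
```

Because the representations come from the LSTM's internal states rather than a fixed lookup table, the same word in two different sentences receives two different vectors.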