InternLM
A multilingual foundational large language model series (InternLM) from Shanghai AI Laboratory, delivering state-of-the-art performance in reasoning, coding, and comprehensive exams.
InternLM is a powerful, open-source large language model series. Shanghai AI Laboratory and SenseTime developed the initial 104B-parameter model, pre-trained on 1.6T tokens. Subsequent releases, including InternLM3-8B-Instruct, consistently achieve strong results on key benchmarks covering knowledge understanding, mathematics, and coding. In particular, InternLM has demonstrated state-of-the-art performance on Chinese-oriented exams such as C-Eval and GAOKAO-Bench, often surpassing competitors like ChatGPT in those domains. The series is backed by a full-stack toolchain, including LMDeploy for efficient deployment and XTuner for fine-tuning, supporting practical application across a range of use cases.
Related technologies
Recent Talks & Demos