Mistral AI and NVIDIA Collaborate to Release Mistral NeMo: A 12B Open Language Model Featuring 128k Context Window, Multilingual Capabilities, and Tekken Tokenizer

In collaboration with NVIDIA, the Mistral AI team has unveiled Mistral NeMo, a groundbreaking 12-billion-parameter model that promises to set new standards in artificial intelligence. Released under the Apache 2.0 license, Mistral NeMo is designed to be a high-performance, multilingual model capable of handling a context window of up to 128,000 tokens. This extensive context length is a significant advancement, allowing the model to process and understand large amounts of data more efficiently than its predecessors. The team has released two variants (a minimal loading sketch follows the list):

Mistral-Nemo-Instruct-2407

Mistral-Nemo-Base-2407
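
For readers who want to try the checkpoints directly, here is a minimal loading sketch using the Hugging Face transformers library. It assumes a recent transformers release with Mistral NeMo support and a GPU with enough memory for the 12B weights in bfloat16; you may also need to accept the model terms on Hugging Face.

```python
# Minimal sketch: load the instruction-tuned checkpoint from Hugging Face.
# Assumes a recent `transformers` release with Mistral NeMo support and
# a GPU with enough memory to hold the 12B weights in bfloat16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Nemo-Instruct-2407"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("The 128k-token context window means", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```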

Mistral NeMo stands out for its exceptional reasoning abilities, extensive world knowledge, and high coding accuracy, making it the top performer in its size category. Because it relies on a standard transformer architecture, Mistral NeMo can be dropped into any system currently using Mistral 7B. This seamless compatibility is expected to facilitate widespread adoption among researchers and enterprises seeking to leverage cutting-edge AI technology.

The Mistral AI team has released both pre-trained base and instruction-tuned checkpoints. These resources are intended to support the research community and industry professionals in their efforts to explore and implement advanced AI solutions. Mistral NeMo was trained with quantization awareness, enabling FP8 inference without any degradation in performance. This feature ensures the model operates efficiently even with lower-precision data representations.
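
The FP8 claim is straightforward to exercise with an inference engine that supports 8-bit floating-point weights. Below is a minimal sketch using vLLM's fp8 quantization mode; this is our own illustration, not an official Mistral or NVIDIA recipe, and it assumes a GPU generation with hardware FP8 support (e.g., Hopper or Ada).

```python
# Minimal sketch: FP8 inference with vLLM (our illustration, not an
# official recipe). Requires a GPU with hardware FP8 support.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mistral-Nemo-Instruct-2407",
    quantization="fp8",        # on-the-fly FP8 weight quantization
    max_model_len=16384,       # raise toward 128k if memory allows
)

# Mistral's release notes recommend a lower temperature (~0.3) for NeMo.
params = SamplingParams(temperature=0.3, max_tokens=128)
print(llm.generate(["Summarize FP8 inference in one sentence."], params)[0].outputs[0].text)
```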

A key component of Mistral NeMo’s success is its multilingual capability, which makes it a versatile tool for global applications. The model has been trained on function calling and is particularly strong in several major languages, including English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi. This broad linguistic proficiency aims to democratize access to advanced AI technologies, enabling users from diverse linguistic backgrounds to benefit from its capabilities.
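
As a concrete illustration of the function-calling support, recent transformers versions let you pass tool definitions to a model's chat template. The sketch below assumes the checkpoint's chat template accepts the tools argument; get_weather is a hypothetical function defined only for illustration.

```python
# Sketch of function-calling prompt construction via the chat template.
# `get_weather` is a hypothetical tool defined only for illustration;
# assumes the checkpoint's chat template supports the `tools` argument.
from transformers import AutoTokenizer

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    ...

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-Nemo-Instruct-2407")
messages = [{"role": "user", "content": "Quel temps fait-il à Paris ?"}]

prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],       # a JSON schema is derived from the docstring
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)  # shows the tool-call formatting fed to the model
```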

A new tokenizer, Tekken, further enhances Mistral NeMo’s performance. Based on Tiktoken, Tekken was trained on over 100 languages and is significantly more efficient at compressing natural-language text and source code than the tokenizers used in previous Mistral models. For instance, it is approximately 30% more efficient at compressing source code and several major languages, and it outperforms the Llama 3 tokenizer in compressing text for about 85% of all languages. This increased efficiency is crucial for handling the vast amounts of data required by modern AI applications.
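
Tokenizer efficiency is easy to check empirically: fewer tokens for the same text means better compression. The snippet below is a small, informal harness of our own (not the methodology behind the 30% and 85% figures above); it assumes you have access to both gated checkpoints on Hugging Face, and you can swap in any baseline tokenizer.

```python
# Informal check of tokenizer compression: fewer tokens per character
# for the same text means a more efficient tokenizer. This is our own
# quick harness, not the methodology behind the figures quoted above.
from transformers import AutoTokenizer

sample = "def fibonacci(n):\n    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)"

for name in ["mistralai/Mistral-Nemo-Instruct-2407",   # Tekken
             "mistralai/Mistral-7B-Instruct-v0.3"]:    # older tokenizer baseline
    tok = AutoTokenizer.from_pretrained(name)
    n_tokens = len(tok(sample)["input_ids"])
    print(f"{name}: {n_tokens} tokens, {len(sample) / n_tokens:.2f} chars/token")
```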

Mistral NeMo’s advanced instruction fine-tuning process distinguishes it from earlier models like Mistral 7B. The fine-tuning and alignment phases have significantly improved the model’s ability to follow precise instructions, reason effectively, handle multi-turn conversations, and generate accurate code. These enhancements are critical for applications that demand accurate, highly interactive behavior, such as customer service bots, coding assistants, and interactive educational tools.
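
The multi-turn improvements can be exercised directly with the standard chat-message format. The sketch below uses the transformers text-generation pipeline, which in recent versions accepts a list of chat messages; the conversation itself is an invented example.

```python
# Sketch of a multi-turn exchange using the chat-message format.
# Assumes a recent `transformers` version whose text-generation
# pipeline accepts a list of chat messages directly.
import torch
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="mistralai/Mistral-Nemo-Instruct-2407",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write a Python one-liner that reverses a string."},
    {"role": "assistant", "content": 's = "hello"; print(s[::-1])'},
    {"role": "user", "content": "Now make it reverse the words instead of the characters."},
]
# The pipeline returns the full conversation; the last entry is the new reply.
print(chat(messages, max_new_tokens=64)[0]["generated_text"][-1]["content"])
```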

The performance of Mistral NeMo has been rigorously evaluated against other leading models, and it consistently demonstrates superior accuracy and efficiency, reinforcing its position as a state-of-the-art AI model. Weights for both the base and instruction-tuned models are hosted on Hugging Face, making them readily available to developers and researchers. Additionally, Mistral NeMo can be run via Mistral Inference and adapted using Mistral Finetune, providing flexible options for various use cases.
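
For offline use, the raw weight files can be pulled with huggingface_hub before pointing Mistral Inference (or transformers) at the local directory. This is a minimal sketch; accessing the repository may require accepting the model terms on Hugging Face and an auth token.

```python
# Minimal sketch: pull the instruct weights to a local directory with
# huggingface_hub. Accessing the repo may require accepting the model
# terms on Hugging Face and an auth token.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mistralai/Mistral-Nemo-Instruct-2407",
    local_dir="mistral-nemo-instruct",
)
print(f"Weights downloaded to {local_dir}")
```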

Mistral NeMo is also integrated into NVIDIA’s NIM inference microservice, available through ai.nvidia.com. This integration highlights the collaborative effort between Mistral AI and NVIDIA to push the boundaries of AI technology and deliver robust, scalable solutions to the market.
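
Because NIM exposes an OpenAI-compatible endpoint, the hosted model can be queried with the standard openai client. The base URL and model identifier below reflect NVIDIA's API catalog at the time of writing and should be treated as assumptions to verify against ai.nvidia.com.

```python
# Sketch: query the NIM-hosted model through its OpenAI-compatible API.
# The base URL and model id are assumptions to verify on ai.nvidia.com;
# NVIDIA_API_KEY is a key generated from the NVIDIA API catalog.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed NIM endpoint
    api_key=os.environ["NVIDIA_API_KEY"],
)

resp = client.chat.completions.create(
    model="mistralai/mistral-nemo-12b-instruct",     # assumed catalog id
    messages=[{"role": "user", "content": "Hello from NIM!"}],
    temperature=0.3,
    max_tokens=64,
)
print(resp.choices[0].message.content)
```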

In conclusion, the release of Mistral NeMo, with its advanced features, including extensive multilingual support, efficient data compression, and superior instruction-following capabilities, positions it as a powerful tool for researchers and enterprises. The collaboration between Mistral AI and NVIDIA exemplifies the potential of joint efforts in driving technological advancements and making cutting-edge AI accessible to a broader audience.
