# Mistral-NeMo-Minitron-8B-Instruct
| Property | Value |
|---|---|
| Parameter Count | 8.41B |
| Context Length | 8,192 tokens |
| License | NVIDIA Open Model License |
| Research Paper | Link |
| Base Model | Mistral-NeMo-Minitron-8B-Base |
## What is Mistral-NeMo-Minitron-8B-Instruct?
Mistral-NeMo-Minitron-8B-Instruct is an instruction-tuned language model from NVIDIA, produced by pruning and distilling the larger Mistral-NeMo 12B model. At 8.41B parameters, it balances efficiency with strong performance across a range of text-generation tasks.
## Implementation Details
The model architecture features a 4096-dimensional embedding size, 32 attention heads, and 40 layers with an MLP intermediate dimension of 11520. It implements advanced features like Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE), supporting a context window of 8,192 tokens.
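As a rough illustration, these hyperparameters can be checked directly against the checkpoint's configuration. The sketch below is not from the model card; it assumes the Hugging Face model id `nvidia/Mistral-NeMo-Minitron-8B-Instruct` and the standard Mistral-style config field names in `transformers`.

```python
from transformers import AutoConfig

# Sketch: inspect the architecture hyperparameters described above.
# Assumes the checkpoint id "nvidia/Mistral-NeMo-Minitron-8B-Instruct"
# and standard Mistral-style config field names.
cfg = AutoConfig.from_pretrained("nvidia/Mistral-NeMo-Minitron-8B-Instruct")

print(cfg.hidden_size)              # embedding size, expected 4096
print(cfg.num_attention_heads)      # expected 32
print(cfg.num_hidden_layers)        # expected 40
print(cfg.intermediate_size)        # MLP intermediate dimension, expected 11520
print(cfg.num_key_value_heads)      # fewer KV heads than query heads => GQA
print(cfg.max_position_embeddings)  # context window, 8,192 tokens
```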
- Multi-stage SFT and preference-based alignment using NeMo Aligner
- BF16 tensor type optimization
- Specialized prompt template for optimal performance (see the usage sketch after this list)
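The following is a minimal generation sketch, not an official snippet from the model card. It assumes the Hugging Face checkpoint `nvidia/Mistral-NeMo-Minitron-8B-Instruct` ships its prompt format as a bundled chat template, so `apply_chat_template` applies the specialized template, and it loads the weights in BF16 as noted above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch; the checkpoint id and bundled chat template are assumptions.
model_id = "nvidia/Mistral-NeMo-Minitron-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16, matching the released tensor type
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what GQA does in one sentence."},
]
# Let the bundled chat template apply the model's prompt format.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```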
## Core Capabilities
- 70.4% on MMLU (5-shot)
- 87.1% accuracy on GSM8K (0-shot)
- 71.3% success rate on HumanEval
- 84.4% on IFEval (instruction following)
## Frequently Asked Questions
Q: What makes this model unique?
This model stands out for its efficient architecture, derived by pruning and distilling a larger 12B-parameter model while maintaining strong performance across a range of benchmarks. It is particularly notable for its instruction-following capabilities and extensive safety testing.
Q: What are the recommended use cases?
The model excels in roleplaying, retrieval-augmented generation (RAG), and function-calling tasks. It is particularly well-suited for applications requiring strong instruction following and code generation capabilities.
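As an illustration of the RAG use case, here is a hedged sketch of assembling retrieved passages into the chat messages. The passages and instructions below are placeholders, and the retrieval step itself is out of scope.

```python
# Hypothetical RAG-style prompt assembly; the passages below are placeholders
# standing in for the output of a real retriever.
retrieved_passages = [
    "Minitron models are derived by pruning and distilling larger checkpoints.",
    "Mistral-NeMo-Minitron-8B-Instruct supports an 8,192-token context window.",
]
context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(retrieved_passages))

messages = [
    {
        "role": "system",
        "content": "Answer using only the numbered context passages. "
                   "Cite passage numbers, and say so if the answer is not present.",
    },
    {
        "role": "user",
        "content": f"Context:\n{context}\n\nQuestion: How was this model derived?",
    },
]
# `messages` can then be fed to apply_chat_template as in the earlier sketch.
```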