Magnum-v2-123b
| Property | Value |
|---|---|
| Parameter Count | 123B |
| Model Type | Text Generation / Chat |
| Architecture | Mistral-based Transformer |
| License | MRL |
| Supported Languages | 9 (EN, FR, DE, ES, IT, PT, RU, ZH, JA) |
What is magnum-v2-123b?
Magnum-v2-123b is a large language model developed by Anthracite-org, the sixth release in their series aimed at replicating the prose quality of the Claude 3 models. Built on Mistral-Large-Instruct-2407, it pairs targeted fine-tuning with careful hyperparameter selection to produce high-quality text across multiple languages.
Implementation Details
The model was fine-tuned for 1.5 epochs on 8x AMD Instinct MI300X accelerators, with particular attention to learning rate selection. Training surfaced characteristics specific to Mistral-based models, including sensitivity to learning rate adjustments and narrow weight distributions.
- Fine-tuned on custom datasets, including Stheno-Data-Filtered and Claude writing samples
- Stored and trained in BF16 for memory-efficient inference (see the loading sketch below)
- Uses the Mistral instruct format for prompts
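As a quick illustration of the points above, here is a minimal loading sketch using the Hugging Face transformers library. The repo id `anthracite-org/magnum-v2-123b` and the example prompt are assumptions for illustration, not taken from this page.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "anthracite-org/magnum-v2-123b"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # load the BF16 weights directly
    device_map="auto",           # shard across available accelerators
)

# The tokenizer's bundled chat template applies the Mistral instruct format.
messages = [{"role": "user", "content": "Write a short scene set in a lighthouse."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Note that at 123B parameters the BF16 weights alone occupy roughly 250 GB, so `device_map="auto"` (or an equivalent multi-GPU strategy) is effectively required rather than optional.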
Core Capabilities
- Multi-language support across 9 major languages
- Prose quality tuned toward Claude 3 standards
- Optimized for both contextual and instruction-following interactions
- Compatible with text-generation-inference endpoints (see the request sketch below)
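Because the model works with text-generation-inference (TGI), a deployment can be queried over TGI's standard `/generate` route. In this minimal sketch the endpoint URL, prompt, and generation parameters are placeholders, assuming a local TGI server:

```python
import requests

TGI_URL = "http://localhost:8080/generate"  # assumed local TGI deployment

payload = {
    # Prompt pre-formatted in the Mistral instruct style
    "inputs": "[INST] Summarize the plot of Moby-Dick in two sentences. [/INST]",
    "parameters": {"max_new_tokens": 128, "temperature": 0.7},
}

response = requests.post(TGI_URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["generated_text"])
```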
Frequently Asked Questions
Q: What makes this model unique?
The model stands out for its carefully tuned learning rate (2e-6) and effective batch size of 64, and for preserving Mistral's architecture while pushing prose quality toward Claude 3 standards.
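For readers who want to approximate a similar setup, those hyperparameters could be expressed with transformers' `TrainingArguments` as sketched below. Only the learning rate (2e-6), effective batch size (64), epoch count (1.5), and BF16 precision come from this page; the micro-batch/accumulation split, scheduler, and warmup values are assumptions, not the authors' actual configuration.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="magnum-v2-123b-ft",   # placeholder output path
    learning_rate=2e-6,               # reported learning rate
    per_device_train_batch_size=1,    # assumed micro-batch size
    gradient_accumulation_steps=8,    # 8 devices x 1 x 8 = 64 effective batch
    num_train_epochs=1.5,             # reported epoch count
    bf16=True,                        # BF16 training, per the details above
    lr_scheduler_type="cosine",       # assumption; scheduler not reported
    warmup_ratio=0.03,                # assumption; warmup not reported
)
```

The low learning rate is consistent with the sensitivity of Mistral-based models noted in the implementation details: with narrow weight distributions, larger steps can degrade the base model's behavior rather than refine it.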
Q: What are the recommended use cases?
The model excels in conversational AI applications, creative writing, and multi-language text generation tasks. It's particularly well-suited for scenarios requiring high-quality prose output and natural language understanding across multiple languages.