Mistral-7B-OpenOrca

Maintained By: Open-Orca


License: Apache 2.0
Base Model: Mistral-7B
Training Cost: ~$400 (62 hours on 8x A6000 GPUs)
Paper: Orca Paper

What is Mistral-7B-OpenOrca?

Mistral-7B-OpenOrca is a language model produced by fine-tuning the Mistral-7B base model on the curated OpenOrca dataset. At release it ranked #1 among models under 30B parameters on the HuggingFace Open LLM Leaderboard, with strong results across benchmarks including MMLU, ARC, HellaSwag, and TruthfulQA.

Implementation Details

The model utilizes OpenAI's Chat Markup Language (ChatML) format and was trained using the Axolotl framework. It completed 4 epochs of full fine-tuning on a filtered subset of GPT-4 augmented data, achieving 106% of the base model's performance on HF Leaderboard evaluations.
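
The snippet below is a minimal sketch of that ChatML templating using the transformers library; the repository id Open-Orca/Mistral-7B-OpenOrca is the model's public Hugging Face name, and the example messages are illustrative.

```python
# Minimal sketch: rendering a conversation into ChatML with transformers'
# built-in chat templating. Requires `pip install transformers`.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Open-Orca/Mistral-7B-OpenOrca")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain attention in one sentence."},
]

# Produces the <|im_start|>role ... <|im_end|> structure the model was trained
# on, with a trailing assistant header so generation continues in the right place.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```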

  • Achieves 65.84 average score across major benchmarks
  • Performs at 98.6% of Llama2-70b-chat's level
  • Supports modern chat templating and structured conversations
  • Available in multiple quantized versions (AWQ, GPTQ, GGUF); a GGUF loading sketch follows this list
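
As a hedged sketch of the GGUF route, the snippet below runs a community quantization through the llama-cpp-python bindings; the local file name is an assumption, so substitute whichever quantized GGUF file you downloaded.

```python
# Sketch: lightweight inference from a GGUF quantization via llama-cpp-python
# (`pip install llama-cpp-python`). The model_path below is a placeholder for
# a locally downloaded community GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-openorca.Q4_K_M.gguf",  # assumed local file
    n_ctx=4096,
    chat_format="chatml",  # the model expects ChatML-formatted conversations
)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Name three uses for a 7B chat model."},
    ],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```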

Core Capabilities

  • Strong performance on MMLU (62.24) and HellaSwag (83.99)
  • Enhanced reasoning and truthfulness (TruthfulQA: 53.05)
  • Efficient operation on consumer GPUs (see the 4-bit loading sketch after this list)
  • Comprehensive chat functionality with system prompt support
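
One way to realize the consumer-GPU point is 4-bit quantized loading; the sketch below uses transformers with bitsandbytes and is an illustrative option, not the model card's official recipe.

```python
# Sketch: loading the model in 4-bit so the 7B weights fit on a typical
# consumer GPU. Requires `pip install transformers accelerate bitsandbytes`
# and a CUDA device.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Open-Orca/Mistral-7B-OpenOrca",
    quantization_config=quant_config,
    device_map="auto",  # places layers on the available GPU(s)
)
```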

Frequently Asked Questions

Q: What makes this model unique?

This model stands out for its efficiency-to-performance ratio: a 7B-parameter model that runs on consumer hardware while matching or exceeding the performance of many much larger models.

Q: What are the recommended use cases?

The model excels in conversational AI, reasoning tasks, and general-purpose language understanding. It's particularly suitable for applications requiring strong performance with limited computational resources.
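
As an illustrative end-to-end example of this conversational use, the sketch below runs a single-turn chat with a system prompt using transformers; the prompts and sampling settings are arbitrary choices, not recommendations from the model card.

```python
# Sketch: a single-turn chat completion with a system prompt.
# Requires `pip install transformers accelerate` and enough GPU memory
# (or the 4-bit config shown earlier).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Open-Orca/Mistral-7B-OpenOrca"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a concise technical assistant."},
    {"role": "user", "content": "Summarize what fine-tuning does in two sentences."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```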

🍰 Interested in building your own agents?
PromptLayer provides Huggingface integration tools to manage and monitor prompts with your whole team. Get started here.