NeuralBeagle14-7B

Maintained by: mlabonne

  • Parameter Count: 7.24B
  • Context Window: 8,000 tokens
  • License: CC-BY-NC-4.0
  • Base Model: Beagle14-7B

What is NeuralBeagle14-7B?

NeuralBeagle14-7B is a 7B-class language model that ranked among the top models of its size on the Open LLM Leaderboard at release. It is a DPO (Direct Preference Optimization) fine-tune of Beagle14-7B, trained on the argilla/distilabel-intel-orca-dpo-pairs preference dataset. The model posts strong results across standard benchmarks, including 72.95% normalized accuracy on the AI2 Reasoning Challenge (ARC) and 64.55% accuracy on MMLU.
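DPO skips training a separate reward model: each training example pairs a chosen and a rejected response, and the policy is nudged to prefer the chosen one more strongly than a frozen reference model does. The following is a minimal PyTorch sketch of that objective, not mlabonne's actual training code; the tensor inputs and the beta of 0.1 are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss over a batch of preference pairs.

    Each *_logps tensor holds the summed token log-probabilities of a
    response under the trained policy or the frozen reference model.
    beta controls how far the policy may drift from the reference
    (0.1 is a common default, assumed here).
    """
    # Implicit reward = beta * log-ratio between policy and reference
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected responses
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```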

Implementation Details

The base model, Beagle14-7B, was produced with LazyMergekit by merging UNA-TheBeagle-7b-v1 and distilabeled-Marcoro14-7B-slerp; NeuralBeagle14-7B applies DPO fine-tuning on top of that merge. The model supports multiple chat templates, including ChatML and Llama's chat template, and is available in several quantized formats (GGUF, GPTQ, AWQ, and EXL2).

  • Advanced DPO fine-tuning methodology
  • 8k context window capability
  • Multiple quantization options for different deployment scenarios (see the llama-cpp sketch below)
  • Compatible with multiple chat templates, including ChatML (see the transformers sketch below)
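
Because the tokenizer ships with a chat template, prompts can be formatted through the standard transformers chat-template API. A minimal sketch, assuming the mlabonne/NeuralBeagle14-7B Hub repository, fp16 weights that fit on the available GPU, and an illustrative prompt:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/NeuralBeagle14-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# The tokenizer's built-in chat template handles the role markers
messages = [{"role": "user", "content": "Explain DPO fine-tuning in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```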
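For CPU-only or low-VRAM deployment, one of the community GGUF quantizations can be served with llama-cpp-python. A sketch under assumptions: the model path points to a hypothetical locally downloaded Q4_K_M file, and n_ctx is set to match the model's 8k window.

```python
from llama_cpp import Llama

# Hypothetical local path to a downloaded 4-bit GGUF quantization
llm = Llama(
    model_path="./neuralbeagle14-7b.Q4_K_M.gguf",
    n_ctx=8192,        # match the model's 8k context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a one-paragraph fantasy scene."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```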

Core Capabilities

  • Strong performance in instruction following tasks
  • Enhanced reasoning capabilities (70.28% accuracy on GSM8k)
  • High truthfulness scores (69.93% on TruthfulQA)
  • Effective for both general text generation and specialized tasks
  • Suitable for role-play (RP) and storytelling applications

Frequently Asked Questions

Q: What makes this model unique?

NeuralBeagle14-7B stands out for its top-ranking performance in the 7B category on the Open LLM Leaderboard, particularly excelling in reasoning tasks and instruction following while maintaining high truthfulness scores.

Q: What are the recommended use cases?

The model is particularly well-suited for instruction following, reasoning tasks, role-playing, and storytelling. Its 8k context window makes it valuable for tasks requiring longer context understanding.
