miquliz-120b-v2.0

Maintained By: wolfram

Parameter Count: 120 Billion
Context Length: 32,768 tokens
Architecture: Linear Merged Model
Languages: English, German, French, Spanish, Italian
License: Other
Paper: Linear Merging Paper

What is miquliz-120b-v2.0?

miquliz-120b-v2.0 is an advanced language model created by merging two powerful 70B-parameter models, miqu-1-70b-sf and lzlv_70b_fp16_hf. The linear merge is intended to combine the strengths of both parent models in a single, larger model.
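As a rough illustration of the linear-merge idea, the sketch below averages same-named tensors from two checkpoints with configurable weights. It is conceptual only: the file paths and weights are hypothetical, and the actual 140-layer model was produced with mergekit, whose configuration also determines which layer ranges from each parent end up in the merge.

```python
import torch

def linear_merge(state_dict_a, state_dict_b, weight_a=0.5, weight_b=0.5):
    """Weighted average of tensors that share a name and shape across two checkpoints."""
    merged = {}
    for name, tensor_a in state_dict_a.items():
        tensor_b = state_dict_b.get(name)
        if tensor_b is not None and tensor_b.shape == tensor_a.shape:
            merged[name] = weight_a * tensor_a + weight_b * tensor_b
        else:
            # Tensors present in only one parent are carried over unchanged.
            merged[name] = tensor_a
    return merged

# Hypothetical single-file checkpoints; real 70B models ship as sharded checkpoints,
# and the actual merge is driven by a mergekit configuration rather than this loop.
sd_a = torch.load("miqu-1-70b-sf/pytorch_model.bin", map_location="cpu")
sd_b = torch.load("lzlv_70b_fp16_hf/pytorch_model.bin", map_location="cpu")
torch.save(linear_merge(sd_a, sd_b), "miquliz-merged/pytorch_model.bin")
```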

Implementation Details

The merged model comprises 140 layers and was built with mergekit's linear merge method. It is distributed in multiple quantization formats, including GGUF and EXL2, making it practical across a range of deployment scenarios.

  • Employs Mistral's prompt template format (see the loading sketch after this list)
  • Supports context length of 32,768 tokens
  • Available in multiple quantization formats (2.4bpw to 8.0bpw)
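A minimal sketch of running one of the GGUF quantizations locally with llama-cpp-python follows; the file path, sampling settings, and prompt text are illustrative rather than taken from the model card. The prompt is wrapped in Mistral's [INST] ... [/INST] template.

```python
from llama_cpp import Llama

# Hypothetical local path to one of the GGUF quants.
llm = Llama(
    model_path="./miquliz-120b-v2.0.Q4_K_M.gguf",
    n_ctx=32768,      # the model supports up to 32,768 tokens of context
    n_gpu_layers=-1,  # offload all layers to the GPU if VRAM allows
)

# Mistral-style prompt template: the instruction goes between [INST] and [/INST].
prompt = "[INST] Summarize the differences between GGUF and EXL2 quantization. [/INST]"

output = llm(prompt, max_tokens=512, temperature=0.7)
print(output["choices"][0]["text"])
```

The EXL2 quants would be loaded analogously through an exllamav2-based backend instead of llama.cpp.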

Core Capabilities

  • Multilingual processing in five languages
  • Advanced conversation and roleplay capabilities
  • Complex problem-solving and analysis
  • High-quality text generation across various domains

Frequently Asked Questions

Q: What makes this model unique?

The model's uniqueness stems from its merging approach, which combines the best aspects of two powerful language models and performed strongly in testing. It particularly excels at multilingual tasks and at maintaining coherent outputs across different domains.

Q: What are the recommended use cases?

The model is well suited to conversational AI, creative writing, multilingual translation, and analysis and problem-solving tasks. It is particularly strong at giving detailed explanations and maintaining context over long conversations.
