MythoMax-L2-13B-GGUF
| Property | Value |
|---|---|
| Parameter Count | 13B |
| Model Type | LLaMA |
| License | Other + Meta Llama 2 License |
| Author | Gryphe (Original) / TheBloke (Quantization) |
What is MythoMax-L2-13B-GGUF?
MythoMax-L2-13B-GGUF is a merge of the MythoLogic-L2 and Huginn models, built with an experimental tensor-type merge technique. This GGUF version, quantized by TheBloke, offers compression options ranging from 2-bit to 8-bit precision, making the model usable across a wide range of hardware configurations.
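The practical impact of those precision options is easiest to see with a rough size calculation: on-disk size is roughly parameter count times bits per weight. The effective bits-per-weight figures below are approximations (k-quant formats use mixed precision), not exact values from the repository.

```python
# Rough on-disk size estimate for a quantized 13B model.
# GGUF k-quant formats use mixed precision internally, so the effective
# bits-per-weight values below are approximations, not exact figures.
PARAMS = 13e9

approx_bits_per_weight = {
    "Q2_K": 2.6,
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.7,
    "Q8_0": 8.5,
}

def approx_size_gb(quant: str) -> float:
    """Estimated file size in GB: params * bits-per-weight / 8 bits."""
    return PARAMS * approx_bits_per_weight[quant] / 8 / 1e9

for q in approx_bits_per_weight:
    print(f"{q}: ~{approx_size_gb(q):.1f} GB")
```

This is why a 2-bit quant of a 13B model can run on a laptop while the 8-bit quant needs three times the memory.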
Implementation Details
The merge assigns each of the model's 363 tensors its own blend ratio, rather than applying a single global ratio, combining MythoLogic-L2's comprehension with Huginn's writing ability. The GGUF format also provides improved tokenization and special-token support compared to the legacy GGML format.
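A per-tensor merge of this kind can be sketched as a ratio-weighted linear interpolation between the two source models. The tensor names and ratio values below are illustrative only; the actual 363 ratios used for MythoMax are not published in this card.

```python
# Minimal sketch of a per-tensor merge: each tensor gets its own blend
# ratio instead of one global ratio for the whole model.
# Tensor names and ratios here are illustrative, not the real values.

def merge_tensor(a, b, ratio):
    """Elementwise linear interpolation: ratio * a + (1 - ratio) * b."""
    return [ratio * x + (1 - ratio) * y for x, y in zip(a, b)]

# Hypothetical ratios: lean toward model A (e.g. MythoLogic-L2) for
# attention weights, toward model B (e.g. Huginn) for the output head.
ratios = {
    "layers.0.attention.wq": 0.7,
    "layers.0.feed_forward.w1": 0.5,
    "output.weight": 0.3,
}

model_a = {name: [1.0, 2.0] for name in ratios}
model_b = {name: [3.0, 4.0] for name in ratios}

merged = {
    name: merge_tensor(model_a[name], model_b[name], r)
    for name, r in ratios.items()
}
print(merged["output.weight"])  # ratio-weighted blend of the two tensors
```

A "gradient" merge in this sense simply varies the ratio smoothly across layers, so early layers favor one parent model and later layers the other.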
- Multiple quantization options (Q2_K through Q8_0) for different size/quality trade-offs
- Supports context lengths up to 4096 tokens
- Uses a gradient-based tensor merge (per-layer blend ratios)
- Compatible with modern LLM frameworks including llama.cpp
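The size/quality trade-off above usually comes down to picking the largest quant that fits your RAM. A minimal sketch of that decision, printing a llama.cpp invocation at the end: the file sizes are approximate, and the filenames follow TheBloke's naming convention for this repository (verify against the actual file list).

```python
# Pick the largest quantization level that fits a given RAM budget,
# leaving headroom for the KV cache and runtime overhead.
# File sizes are approximate; check the repo's file list for exact values.

QUANTS = [  # (suffix, approx file size in GB), smallest to largest
    ("Q2_K", 5.4),
    ("Q4_K_M", 7.9),
    ("Q5_K_M", 9.2),
    ("Q8_0", 13.8),
]

def pick_quant(ram_gb: float, overhead_gb: float = 2.0) -> str:
    """Return the largest quant whose file plus overhead fits in RAM."""
    fitting = [s for s, size in QUANTS if size + overhead_gb <= ram_gb]
    if not fitting:
        raise ValueError("Not enough RAM for even the smallest quant")
    return fitting[-1]

suffix = pick_quant(ram_gb=16.0)
model_file = f"mythomax-l2-13b.{suffix}.gguf"
# llama.cpp invocation: -c sets the 4096-token context window,
# -ngl offloads layers to the GPU when one is available.
print(f"./main -m {model_file} -c 4096 -ngl 35 -p 'Hello'")
```

The `overhead_gb` margin is a rough allowance; long contexts or GPU offload change the real memory footprint.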
Core Capabilities
- Enhanced roleplay and storytelling performance
- Robust language understanding from MythoLogic-L2
- Superior writing capabilities inherited from Huginn
- Efficient performance across various hardware configurations
- Supports both CPU and GPU acceleration
Frequently Asked Questions
Q: What makes this model unique?
The model's distinctive feature is its tensor-based merge technique: each of its 363 tensors was merged with its own calibrated ratio rather than a single global ratio, blending MythoLogic-L2's comprehension with Huginn's generation quality.
Q: What are the recommended use cases?
The model excels in roleplay and creative writing scenarios, making it ideal for interactive storytelling, character-based interactions, and narrative generation. It uses Alpaca formatting for optimal performance.
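The Alpaca formatting mentioned above follows the standard Alpaca instruction template. A minimal sketch of building such a prompt:

```python
# Standard Alpaca prompt template, the format this model card recommends.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("Write a short fantasy scene set in a mountain inn.")
print(prompt)
```

The model's completion is then generated after the `### Response:` marker; deviating from this template tends to degrade output quality for Alpaca-tuned models.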