Mythalion-13B-GGUF

Maintained By
TheBloke

Property: Value
Parameter Count: 13 Billion
Model Type: LLaMA Architecture
License: LLaMA 2
Primary Use: Text Generation / RP Chat

What is Mythalion-13B-GGUF?

Mythalion-13B-GGUF is a 13B-parameter language model created by merging Pygmalion-2 13B and MythoMax L2 13B, optimized specifically for roleplay and chat applications. This GGUF release provides quantization options ranging from 2-bit to 8-bit precision, enabling deployment across a range of hardware configurations.

Implementation Details

The model uses the GGUF format, which offers improved tokenization and special-token support compared to the legacy GGML format. It is available in multiple quantized variants, with Q4_K_M recommended as a balance between output quality and resource usage.
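To make the quantization levels concrete, a rough effective bits-per-weight figure can be derived from a GGUF file's size and the parameter count. This is a back-of-the-envelope sketch (it ignores metadata overhead and assumes the file is dominated by weight data); the file sizes used are the extremes listed for this model.

```python
def bits_per_weight(file_size_gb: float, n_params: float = 13e9) -> float:
    """Rough effective bits per weight for a quantized GGUF file.

    Assumes the file is almost entirely weight data (metadata and
    tokenizer overhead are ignored), so this is an estimate only.
    """
    return file_size_gb * 1e9 * 8 / n_params

# The extremes of the listed Mythalion-13B-GGUF range:
smallest = bits_per_weight(5.43)    # ~3.3 bits/weight (low-bit K-quant)
largest = bits_per_weight(13.83)    # ~8.5 bits/weight (8-bit quant)
```

Intermediate variants such as Q4_K_M land between these extremes, which is why they are a common middle ground between file size and quality.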

  • Supports both Alpaca and Pygmalion formatting for prompts
  • Multiple quantization options, with file sizes ranging from 5.43GB to 13.83GB
  • Compatible with popular frameworks like llama.cpp and text-generation-webui
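The two supported prompt formats can be sketched as simple template builders. This is an illustrative sketch: the Alpaca preamble below is the widely used standard wording, and the Pygmalion-2 format uses the `<|system|>`, `<|user|>`, and `<|model|>` role tokens; check the model card for the exact recommended templates.

```python
def alpaca_prompt(instruction: str) -> str:
    """Alpaca-style instruction prompt (standard preamble assumed)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

def pygmalion_prompt(system: str, user: str) -> str:
    """Pygmalion-2 role-token format, preferred for RP chat."""
    return f"<|system|>{system}<|user|>{user}<|model|>"

rp = pygmalion_prompt(
    "Enter RP mode. You are playing a helpful tavern keeper.",
    "What's on the menu tonight?",
)
```

Either format works; the Pygmalion role-token format is generally the better fit for multi-turn roleplay, while Alpaca suits one-shot instructions.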

Core Capabilities

  • Advanced roleplay and chat interactions
  • Flexible deployment options with various quantization levels
  • Support for extended context windows
  • GPU acceleration support with layer offloading
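For partial GPU offloading, a quick way to pick a layer count (the `-ngl` / `n_gpu_layers` setting in llama.cpp-based tools) is to divide the file size across the model's transformer layers and see how many fit in the available VRAM. The sketch below makes simplifying assumptions: LLaMA-13B's 40 layers, weight memory spread evenly across them, and a fixed reserve for the KV cache and scratch buffers; the ~7.9GB figure used in the example is an approximate Q4_K_M file size.

```python
def layers_that_fit(file_size_gb: float, vram_gb: float,
                    n_layers: int = 40, reserve_gb: float = 1.0) -> int:
    """Rough layer count to offload to the GPU.

    Assumes weights are spread evenly across layers and reserves
    some VRAM for the KV cache and scratch buffers. Treat the
    result as a starting point, not an exact figure.
    """
    per_layer_gb = file_size_gb / n_layers
    usable = max(vram_gb - reserve_gb, 0.0)
    return min(n_layers, int(usable / per_layer_gb))

# e.g. a ~7.9GB Q4_K_M file on an 8GB GPU leaves most layers on-GPU,
# while a 24GB GPU fits the full model:
partial = layers_that_fit(7.9, 8.0)
full = layers_that_fit(7.9, 24.0)
```

In practice you would pass the resulting number via `-ngl` (llama.cpp CLI) or `n_gpu_layers` (llama-cpp-python), then adjust downward if you hit out-of-memory errors at longer contexts.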

Frequently Asked Questions

Q: What makes this model unique?

The model combines the strengths of Pygmalion-2 and MythoMax, optimized specifically for roleplay and chat applications, while offering multiple quantization options for various deployment scenarios.

Q: What are the recommended use cases?

The model is designed primarily for fictional writing and entertainment, and excels at roleplay and chat applications. Note that it is not fine-tuned for safety or factual accuracy, so it should not be relied on for informational use.