# Sakura-14B-Qwen2.5-v1.0-GGUF
| Property | Value |
|---|---|
| Parameter Count | 14.8B |
| License | CC-BY-NC-SA-4.0 |
| Format | GGUF |
| Downloads | 31,889 |
## What is Sakura-14B-Qwen2.5-v1.0-GGUF?
Sakura-14B is a specialized language model for translating Japanese light novels into Chinese. Built on the Qwen2.5 architecture with 14.8B parameters, it uses Grouped-Query Attention (GQA) for faster inference and lower memory use.
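Because the weights are distributed in GGUF, the model can be loaded with any llama.cpp-compatible runtime. The sketch below uses llama-cpp-python; the quantization filename and the settings are illustrative assumptions, not project-recommended values.

```python
from llama_cpp import Llama

# Hypothetical filename: substitute whichever quantization you downloaded.
llm = Llama(
    model_path="sakura-14b-qwen2.5-v1.0-q4_k_m.gguf",
    n_ctx=8192,        # context window; longer light-novel chunks need a generous size
    n_gpu_layers=-1,   # offload all layers to the GPU when VRAM allows
    n_threads=8,       # CPU threads for any layers left on the CPU
)
```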
## Implementation Details
The model implements a prompt-based translation workflow with support for terminology dictionaries (GPT Dictionary) that keep proper nouns and pronouns consistent across a text. It expects a structured chat prompt built from system, user, and assistant components; a sketch of such a prompt follows the list below. Notable improvements in this version include:
- Enhanced translation accuracy, especially for pronoun handling
- Support for terminology tables to maintain consistency
- Improved retention of control characters, particularly newline preservation
- Optimized multi-threaded inference capabilities
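A minimal sketch of assembling such a prompt with a terminology table, assuming the llama-cpp-python `Llama` instance from the loading example above. The system prompt wording and the glossary layout here are placeholders for illustration only; the exact format is defined by the upstream SakuraLLM project.

```python
# Placeholder "GPT dictionary": source term -> preferred translation.
glossary = {
    "ユミエラ": "尤米艾拉",  # character name, kept consistent across chapters
    "聖剣": "圣剑",
}
glossary_block = "\n".join(f"{src}->{dst}" for src, dst in glossary.items())

source_text = "..."  # the Japanese passage to translate

messages = [
    # Placeholder system prompt; the real one is shipped with the SakuraLLM project.
    {"role": "system", "content": "You are a light novel translation model; translate Japanese into fluent Chinese."},
    {"role": "user", "content": (
        f"Terminology table:\n{glossary_block}\n\n"
        f"Translate the following Japanese text into Chinese:\n{source_text}"
    )},
]

result = llm.create_chat_completion(messages=messages, temperature=0.1, max_tokens=1024)
print(result["choices"][0]["message"]["content"])
```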
## Core Capabilities
- High-quality Japanese to Chinese translation
- Light novel-specific styling and tone preservation
- Context-aware pronoun handling
- Terminology consistency management
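Since newline retention matters when reassembling translated chapters, a quick line-count check after translation can catch dropped breaks. This is an illustrative sanity check, reusing the `llm` instance and the placeholder prompt from the sketches above.

```python
source = "一行目のテキスト。\n二行目のテキスト。\n三行目のテキスト。"  # three-line sample

result = llm.create_chat_completion(
    messages=[
        # Placeholder system prompt, as in the earlier sketch.
        {"role": "system", "content": "You are a light novel translation model; translate Japanese into fluent Chinese."},
        {"role": "user", "content": f"Translate the following Japanese text into Chinese:\n{source}"},
    ],
    temperature=0.1,
)
translation = result["choices"][0]["message"]["content"]

# Sanity check: the output should keep the same line structure as the input.
if len(translation.splitlines()) != len(source.splitlines()):
    print("Warning: line breaks were not preserved; consider retrying or translating line by line.")
```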
## Frequently Asked Questions
Q: What makes this model unique?
Its specialized focus on Japanese-to-Chinese light novel translation, its GQA-based architecture for efficient inference, and its terminology management system set it apart for literary translation tasks.
Q: What are the recommended use cases?
The model is specifically optimized for translating Japanese light novels into Chinese while maintaining the original style and context. It's particularly useful for projects requiring consistent terminology and accurate pronoun handling.