# Taiyi-Stable-Diffusion-1B-Anime-Chinese-v0.1

| Property | Value |
|---|---|
| Model Size | 1B parameters |
| License | CreativeML OpenRAIL-M |
| Training Data | 1M filtered Chinese anime image-text pairs |
| Paper | Research Paper |
## What is Taiyi-Stable-Diffusion-1B-Anime-Chinese-v0.1?

Taiyi-Stable-Diffusion-1B-Anime-Chinese-v0.1 is the first open-source Chinese Stable Diffusion model designed specifically for anime-style image generation. Developed by IDEA-CCNL, it was trained on a curated dataset of 1 million Chinese anime image-text pairs, then refined on an additional 10,000 high-quality samples.
## Implementation Details
The model was trained in two stages on 4 A100 GPUs over approximately 100 hours. It supports both text-to-image generation and image-to-image transformation, and integrates with R-ESRGAN 4x+ Anime6B for super-resolution upscaling.
- Supports Chinese text prompts for image generation
- Includes built-in upscaling capabilities
- Compatible with popular WebUI interfaces
- Optimized for anime-style artwork generation
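The bullets above can be exercised with a short text-to-image sketch using the `diffusers` library. This is a minimal illustration, not the card's official recipe: it assumes the checkpoint is published on Hugging Face under `IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Anime-Chinese-v0.1`, that `torch` and `diffusers` are installed, and that a CUDA GPU is available; the step count and guidance scale are common defaults, not values from the card.

```python
# Hypothetical model id, assumed from the model card's name and publisher.
MODEL_ID = "IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Anime-Chinese-v0.1"

def generate(prompt: str, steps: int = 50, scale: float = 7.5):
    """Generate one anime-style image from a Chinese text prompt."""
    # Heavy imports are kept inside the function so the snippet can be
    # read and imported without torch/diffusers installed.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")
    result = pipe(prompt, num_inference_steps=steps, guidance_scale=scale)
    return result.images[0]

if __name__ == "__main__":
    # Chinese prompt, per the model's native-language support:
    # "1 girl, green hair, black jacket"
    image = generate("1个女孩,绿色头发,黑色外套")
    image.save("anime_girl.png")
```

Because the model was trained on Chinese captions, prompts should be written in Chinese rather than translated tag lists; the upscaling step (R-ESRGAN 4x+ Anime6B) would be applied separately after generation.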
## Core Capabilities
- High-quality anime character generation
- Landscape and scene creation
- Style transfer and image manipulation
- Support for both indoor and outdoor scenes
- Detailed control over character attributes and environmental elements
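The style-transfer and image-manipulation capabilities listed above map to diffusers' image-to-image pipeline. The sketch below is a hedged illustration under the same assumptions as before (hypothetical Hugging Face model id, `torch`/`diffusers`/`Pillow` installed, CUDA GPU); the `strength` value is an illustrative default, not a recommendation from the card.

```python
# Hypothetical model id, assumed from the model card's name and publisher.
MODEL_ID = "IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Anime-Chinese-v0.1"

def stylize(init_image_path: str, prompt: str,
            strength: float = 0.75, scale: float = 7.5):
    """Re-render an input image in the model's anime style,
    guided by a Chinese text prompt."""
    # Heavy imports kept inside the function so the snippet can be
    # imported without torch/diffusers installed.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")

    init = Image.open(init_image_path).convert("RGB").resize((512, 512))
    # `strength` controls how far the output departs from the input:
    # low values preserve composition, high values follow the prompt more.
    result = pipe(prompt=prompt, image=init,
                  strength=strength, guidance_scale=scale)
    return result.images[0]
```

For landscape or indoor/outdoor scene work, the same call applies: start from a rough photo or sketch and describe the target scene in the Chinese prompt.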
## Frequently Asked Questions
**Q: What makes this model unique?**
It's the first open-source Chinese Stable Diffusion model specifically trained for anime-style image generation, offering native support for Chinese language prompts and specialized anime aesthetics.
**Q: What are the recommended use cases?**
The model excels at generating anime-style character illustrations, scenic backgrounds, and creative artwork based on Chinese text prompts. It's particularly effective for creating character portraits, environmental scenes, and stylized anime compositions.