# phi3-uncensored-chat
| Property | Value |
|---|---|
| Base Model | microsoft/phi-3-mini-4k-instruct |
| Model Type | Fine-tuned Conversational AI |
| Training Method | LoRA/DeepSpeed fine-tuning |
| Context Window | 4k tokens |
| Dataset Size | ~13k curated examples |
## What is phi3-uncensored-chat?
phi3-uncensored-chat is a fine-tuned version of Microsoft's Phi-3 Mini model, optimized for character-based conversations and roleplay scenarios. It is tuned to maintain consistent character personas across a conversation, and its multiple precision options (FP32 down to 4-bit) allow deployment on a range of hardware configurations.
## Implementation Details
The model was fine-tuned with LoRA (rank 16, alpha 32) applied to key projection modules. Training used DeepSpeed ZeRO stage 2 optimization with the AdamW optimizer and a cosine learning rate schedule. Multiple precision options, from FP32 down to 4-bit quantization, make the model adaptable to various GPU configurations.
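The stated LoRA hyperparameters can be sketched as a peft-style configuration. This is a minimal illustration, not the training script: the exact target module names and the dropout value are assumptions, since the source only says "key projection modules".

```python
# Hedged sketch of the LoRA setup described above.
# Target module names and dropout are assumptions, not stated in the source.
lora_config = {
    "r": 16,               # LoRA rank (stated)
    "lora_alpha": 32,      # LoRA alpha (stated)
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    "lora_dropout": 0.05,  # assumed
    "task_type": "CAUSAL_LM",
}

# Effective LoRA scaling factor applied to the adapter output: alpha / r
scaling = lora_config["lora_alpha"] / lora_config["r"]
print(scaling)
```

With alpha = 2 × rank, the adapter updates are scaled by a factor of 2.0, a common default that keeps the adapter's contribution stable as the rank changes.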
- Strict prompt format requirements for optimal performance
- Multiple precision options (FP32, FP16, 8-bit, 4-bit)
- Trained on ~13k high-quality curated examples
- Implements emoji-based response styling
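As a rough worked example of why the precision options matter: Phi-3 Mini has about 3.8B parameters (Microsoft's published figure), so memory for the weights alone can be estimated as parameter count × bytes per parameter. The estimate below ignores activations, KV cache, and quantization overhead.

```python
# Back-of-the-envelope weight-memory estimate for a ~3.8B-parameter model
# at each supported precision. Ignores activations, KV cache, and
# quantization overhead, so real usage will be somewhat higher.
PARAMS = 3.8e9

bytes_per_param = {"FP32": 4.0, "FP16": 2.0, "8-bit": 1.0, "4-bit": 0.5}

results = {name: PARAMS * nbytes / 2**30 for name, nbytes in bytes_per_param.items()}

for name, gib in results.items():
    print(f"{name}: ~{gib:.1f} GiB for weights alone")
```

Each halving of precision halves the weight footprint, which is what lets the same model span everything from a workstation GPU at FP32 down to consumer cards at 4-bit.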
## Core Capabilities
- Character persona maintenance and consistency
- Adaptive conversation handling
- Support for various hardware configurations
- Interactive chat interface implementation
- Context-aware responses within 4k token limit
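Since the model requires a strict prompt format, a prompt builder along the lines below can help. This is a hedged sketch based on the base Phi-3 chat format (`<|system|>`, `<|user|>`, `<|assistant|>`, `<|end|>` tags); the fine-tune's exact format is not documented here, so treat the layout, the function name, and the persona text as illustrative assumptions.

```python
# Hedged sketch of a Phi-3-style chat prompt builder. The special tokens
# follow the base Phi-3 chat format; this fine-tune's exact "strict prompt
# format" is not specified in the source, so this layout is an assumption.
def build_prompt(persona: str, history: list[tuple[str, str]], user_msg: str) -> str:
    parts = [f"<|system|>\n{persona}<|end|>"]  # character persona as system text
    for user_turn, assistant_turn in history:
        parts.append(f"<|user|>\n{user_turn}<|end|>")
        parts.append(f"<|assistant|>\n{assistant_turn}<|end|>")
    parts.append(f"<|user|>\n{user_msg}<|end|>")
    parts.append("<|assistant|>")  # generation continues from here
    return "\n".join(parts)

prompt = build_prompt("You are a sarcastic pirate.", [], "Where is the treasure?")
print(prompt)
```

Keeping the persona in the system slot on every turn is what lets the model maintain a consistent character within the 4k-token window; once the history approaches that limit, older turns need to be truncated.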
## Frequently Asked Questions
Q: What makes this model unique?
The model's distinctive feature is its ability to maintain consistent character personas while adapting to different roles, combined with flexible deployment options and strict prompt formatting for optimal performance.
Q: What are the recommended use cases?
The model is designed for creative fiction writing and roleplaying scenarios between consenting adults, with particular strength in character-driven conversations and educational interactions.