Llama-3.1-8B-Lexi-Uncensored-V2
| Property | Value |
|---|---|
| Parameter Count | 8.03B |
| Model Type | Text Generation |
| License | Llama 3.1 |
| Tensor Type | BF16 |
| Downloads | 29,554 |
What is Llama-3.1-8B-Lexi-Uncensored-V2?
Llama-3.1-8B-Lexi-Uncensored-V2 is a language model built on Meta's Llama-3.1-8B-Instruct. This uncensored version is tuned for higher compliance while retaining the base model's capability, and it is designed for flexible text-generation tasks. It scores 77.92% accuracy on IFEval (0-shot).
Implementation Details
The model uses the Transformer architecture and ships in BF16 tensor precision. It performs best with an explicit system prompt, which can be either a detailed instruction or simply a dot "." as a minimal system message. On benchmarks it reaches 29.69% accuracy on BBH (3-shot) and 16.92% on MATH Level 5 (4-shot).
- Improved compliance and intelligence compared to previous versions
- Compatible with text-generation-inference systems
- Supports both detailed and minimal system prompts
- Achieves 30.9% accuracy on MMLU-PRO (5-shot)
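The system-prompt flexibility above can be sketched as a small helper that formats a single-turn prompt. The template layout here is an assumption based on Meta's published Llama 3.1 chat format; in practice, `tokenizer.apply_chat_template` from the `transformers` library is the safer way to build prompts.

```python
def build_llama31_prompt(user_message: str, system_prompt: str = ".") -> str:
    """Format a single-turn Llama 3.1 chat prompt (assumed template layout).

    The default system prompt is a bare dot ".", the minimal system
    message the model card says this model accepts.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        # Trailing assistant header cues the model to generate its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```

Passing a detailed instruction as `system_prompt` instead of the default dot gives the model more guidance at the cost of a longer context.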
Core Capabilities
- High-accuracy text generation and response
- Flexible system prompt handling
- Uncensored response generation
- Strong performance on various benchmark tests
Frequently Asked Questions
Q: What makes this model unique?
This model stands out for its uncensored nature and high compliance, combined with solid results on standard benchmarks. It balances capability and flexibility, making it suitable for a range of applications, though users remain responsible for how it is deployed.
Q: What are the recommended use cases?
The model is suitable for text-generation tasks requiring detailed responses. Because it is uncensored, users are advised to implement their own alignment layer before deploying it as a service. For best quality, running it in F16 or Q8 format is recommended.
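A quick way to see what the F16-vs-Q8 recommendation means in practice is a back-of-the-envelope memory estimate from the card's 8.03B parameter count. This is a rough rule of thumb for the weights alone; it ignores the KV cache, activations, and framework overhead.

```python
def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate memory needed for model weights alone, in GB."""
    return n_params * bits_per_weight / 8 / 1e9

PARAMS = 8.03e9  # parameter count from the model card

fp16_gb = weight_memory_gb(PARAMS, 16)  # ~16.1 GB at F16
q8_gb = weight_memory_gb(PARAMS, 8)     # ~8.0 GB at Q8
```

So F16 roughly requires a 24 GB-class GPU once runtime overhead is included, while Q8 halves the weight footprint at a small quality cost.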