Meta-Llama-3.1-8B-Instruct-abliterated
| Property | Value |
|---|---|
| Parameter Count | 8.03B |
| License | Llama 3.1 |
| Tensor Type | BF16 |
| Downloads | 28,078 |
What is Meta-Llama-3.1-8B-Instruct-abliterated?
Meta-Llama-3.1-8B-Instruct-abliterated is an uncensored variant of the Llama 3.1 8B Instruct model, created with the abliteration technique: a "refusal direction" is identified in the model's activations and removed from the weights, so the model declines far fewer requests while the rest of its behavior is left largely intact. The result retains the capabilities of the original Llama 3.1 architecture while producing less restricted outputs, and it scores well on standard benchmarks, notably 73.29% accuracy on IFEval (0-shot).
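The snippet below is a minimal, hedged illustration of the orthogonalization step at the heart of abliteration. The `refusal_direction` vector, the weight shapes, and the toy usage are assumptions for illustration only; they are not the exact procedure or values used to create this model.

```python
# Minimal sketch of refusal-direction ablation (the core idea behind "abliteration").
# Assumes `refusal_direction` was already estimated, e.g. as the difference of mean
# hidden states between harmful and harmless prompts. Illustrative only.
import torch

def ablate_direction(weight: torch.Tensor, refusal_direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of a weight matrix's output that lies along
    `refusal_direction` (rank-1 orthogonalization: W <- W - d d^T W)."""
    d = refusal_direction / refusal_direction.norm()  # unit vector, shape (d_model,)
    return weight - torch.outer(d, d @ weight)        # weight has shape (d_model, d_in)

# Toy usage with random tensors (real abliteration edits the model's attention-output
# and MLP down-projection matrices so they can no longer write along the direction).
d_model, d_in = 8, 8
W = torch.randn(d_model, d_in)
d = torch.randn(d_model)
W_ablated = ablate_direction(W, d)
print(torch.allclose(d @ W_ablated / d.norm(), torch.zeros(d_in), atol=1e-5))  # ~True
```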
Implementation Details
The model is distributed in the Hugging Face transformers format and is produced by applying abliteration to the base weights, removing refusal behavior while preserving overall performance. It is also available in several quantized versions, including GGUF and EXL2, to suit different deployment scenarios; a hedged loading sketch follows the list below.
- Built on the meta-llama/Meta-Llama-3.1-8B-Instruct base model
- Weights stored in BF16 for efficient computation
- Available in multiple quantized versions for different use cases
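As a quick illustration of using the transformers-format weights, here is a hedged loading sketch. The repository ID is an assumption and should be replaced with the actual Hub ID hosting this model.

```python
# Hedged loading sketch; the repo ID below is an assumption, not confirmed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated"  # assumed Hub repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 tensor type listed above
    device_map="auto",           # requires the `accelerate` package
)
```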
Core Capabilities
- Strong performance on IFEval with 73.29% accuracy
- Decent performance on BBH (3-shot) with 27.13% normalized accuracy
- MMLU-Pro (5-shot) score of 27.81%
- Specialized for text generation and conversational tasks
Frequently Asked Questions
Q: What makes this model unique?
This model stands out because it applies abliteration to Llama 3.1 8B Instruct, yielding uncensored outputs while retaining the base model's core capabilities, a practical balance between benchmark performance and freedom in generation.
Q: What are the recommended use cases?
The model is well suited to text generation, conversational applications, and scenarios where unrestricted outputs are desired. It performs especially well on instruction-following tasks, as demonstrated by its IFEval score; a short, hedged usage sketch follows.
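The example below sketches a simple conversational call via the transformers `pipeline` API (requires a reasonably recent transformers release). The repository ID is the same assumption as in the loading sketch above, and the prompt is purely illustrative.

```python
# Hedged conversational usage sketch; the repo ID is an assumption, not confirmed above.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated",  # assumed Hub repo ID
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Chat-style input; the pipeline applies the model's chat template automatically.
messages = [{"role": "user", "content": "Explain BF16 precision in two sentences."}]
result = generator(messages, max_new_tokens=128)

# For chat inputs, generated_text holds the message list with the assistant reply appended.
print(result[0]["generated_text"][-1]["content"])
```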