Gemma-3-27B-IT Abliterated

  • Base Model: google/gemma-3-27b-it
  • Author: mlabonne
  • Model URL: https://huggingface.co/mlabonne/gemma-3-27b-it-abliterated
  • Recommended Parameters: temperature=1.0, top_k=64, top_p=0.95

What is gemma-3-27b-it-abliterated?

Gemma-3-27B-IT Abliterated is an experimental uncensored version of Google's Gemma-3-27B-IT model, created using a layerwise abliteration technique. The method removes the model's built-in refusal behavior while preserving coherent, high-quality output, reaching an acceptance rate above 90% on prompts the original model would refuse.

Implementation Details

The model employs a novel layerwise abliteration approach that differs from traditional methods. Instead of computing a single refusal direction across the entire model, this technique processes each layer independently, calculating a refusal direction from that layer's hidden states. A refusal weight of 1.5 is then applied to amplify the effect of removing that direction in each layer.

  • Layer-independent abliteration processing
  • Hidden state-based refusal direction computation
  • 1.5x refusal weight scaling
  • Experimental quantization (in development)
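
For illustration only, here is a minimal sketch of what one such layerwise pass could look like, assuming plain PyTorch tensors. This is not mlabonne's actual script: the names `harmful_hidden` and `harmless_hidden` are hypothetical stand-ins for activations collected at a given layer on refused and accepted prompt sets, and the helper functions are assumptions rather than the published implementation.

```python
import torch

REFUSAL_WEIGHT = 1.5  # per-layer scaling mentioned in the model card


def layer_refusal_direction(harmful_hidden: torch.Tensor,
                            harmless_hidden: torch.Tensor) -> torch.Tensor:
    """Refusal direction for a single layer: difference of the mean hidden
    states on harmful vs. harmless prompts, normalized to unit length.
    Both inputs are (num_prompts, hidden_dim) activations from that layer."""
    direction = harmful_hidden.mean(dim=0) - harmless_hidden.mean(dim=0)
    return direction / direction.norm()


def abliterate_weight(weight: torch.Tensor,
                      direction: torch.Tensor,
                      refusal_weight: float = REFUSAL_WEIGHT) -> torch.Tensor:
    """Suppress the refusal direction in a layer's output projection.
    `weight` is (hidden_dim, in_dim); subtracting the outer-product
    projection, scaled by refusal_weight, removes the component of this
    layer's outputs along `direction`."""
    projection = torch.outer(direction, direction) @ weight
    return weight - refusal_weight * projection
```

The key point, as described above, is that every layer gets its own direction computed from its own hidden states, rather than one global refusal direction shared across the whole model.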

Core Capabilities

  • High acceptance rate (>90%) for previously restricted content
  • Maintained coherence and quality of outputs
  • Refusal behavior removed more robustly than with whole-model abliteration
  • Optimized performance with specific generation parameters

Frequently Asked Questions

Q: What makes this model unique?

This model's uniqueness lies in its layerwise abliteration technique. Gemma 3 proved more resilient to standard abliteration than models such as Qwen 2.5, and computing a separate refusal direction for each layer removes its restrictions effectively while maintaining the model's capabilities.

Q: What are the recommended use cases?

The model is best suited for applications requiring unrestricted language generation while maintaining high-quality outputs. Users should apply the recommended generation parameters (temperature=1.0, top_k=64, top_p=0.95) for optimal results.
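
As a minimal usage sketch, assuming the checkpoint loads through the standard transformers text-generation pipeline (the multimodal Gemma 3 architecture may require the image-text-to-text pipeline instead), the recommended parameters can be applied as follows; the prompt is only an example:

```python
from transformers import pipeline

# Load the abliterated checkpoint from the Hugging Face Hub.
# A 27B model needs substantial GPU memory; quantized loading is an alternative.
generator = pipeline(
    "text-generation",
    model="mlabonne/gemma-3-27b-it-abliterated",
    device_map="auto",
    torch_dtype="auto",
)

messages = [{"role": "user", "content": "Summarize what abliteration does."}]

# Recommended generation parameters from the model card.
output = generator(
    messages,
    do_sample=True,
    temperature=1.0,
    top_k=64,
    top_p=0.95,
    max_new_tokens=256,
)

print(output[0]["generated_text"][-1]["content"])
```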
