gemma-3-27b-it-abliterated-GGUF

Maintained by mlabonne
Gemma 3 27B IT Abliterated

  • Model Size: 27B parameters
  • Base Model: google/gemma-3-27b-it
  • Author: mlabonne
  • Model URL: https://huggingface.co/mlabonne/gemma-3-27b-it-abliterated-GGUF
  • Recommended Parameters: temperature=1.0, top_k=64, top_p=0.95

What is gemma-3-27b-it-abliterated-GGUF?

This is an experimental uncensored version of Google's Gemma 3 27B IT model, created with a layerwise abliteration technique. The goal is to remove the model's built-in refusal behaviour while preserving its capabilities: the abliterated model reaches an acceptance rate above 90% while still producing coherent outputs.

Implementation Details

The model employs a layerwise abliteration approach: a refusal direction is computed from hidden states for each layer independently, rather than deriving a single direction for the model as a whole as in traditional abliteration. A refusal weight of 1.5 is applied to increase the influence of each layer's refusal direction. A minimal code sketch follows the list below.

  • Layer-specific refusal direction computation
  • Hidden state-based analysis inspired by Sumandora's repository
  • 1.5x refusal weight scaling
  • Experimental approach motivated by Gemma 3's strong resilience to standard abliteration compared to other models such as Qwen 2.5

Core Capabilities

  • High acceptance rate (>90%) for previously restricted content
  • Maintained coherence and output quality
  • Optimized performance when used with the recommended generation parameters (temperature=1.0, top_k=64, top_p=0.95)
  • Enhanced response freedom while preserving model intelligence

Frequently Asked Questions

Q: What makes this model unique?

The model's uniqueness lies in its layer-wise abliteration approach, which provides a more nuanced way of removing restrictions while maintaining model capabilities. This is particularly noteworthy as Gemma 3 showed unusual resilience to traditional abliteration techniques.

Q: What are the recommended use cases?

This model is designed for research and experimental purposes where unrestricted outputs are needed while maintaining high-quality responses. Users should apply the recommended generation parameters (temperature=1.0, top_k=64, top_p=0.95) for optimal results.
