Llama-3.1-70B-Instruct-lorablated

Maintained by: mlabonne

  • Parameter Count: 70.6B
  • Model Type: Language Model (Instruction-tuned)
  • Architecture: LLaMA 3.1
  • License: LLaMA 3.1
  • Tensor Type: BF16
  • Research Paper: Task Arithmetic Paper

What is Llama-3.1-70B-Instruct-lorablated?

Llama-3.1-70B-Instruct-lorablated is an uncensored variant of Meta's LLaMA 3.1 70B Instruct model. It uses a LoRA-based "abliteration" technique to remove the model's built-in refusal behavior while preserving its general capabilities, so the model answers requests the original would decline without a measurable drop in output quality.

Implementation Details

The model is built in two steps: first, a LoRA adapter is extracted by comparing a censored LLaMA 3 model with its abliterated counterpart; second, that adapter is merged into a censored LLaMA 3.1 using task arithmetic. The LoRA rank and merge parameters were tuned specifically for the 70B scale.
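At the weight level, a task-arithmetic merge just adds a "task vector" (the parameter difference between two source models) to the target model's weights. The toy sketch below is illustrative only; the function name and the scalar stand-in weights are invented for this example, not part of the actual pipeline:

```python
def task_arithmetic_merge(target, source_censored, source_abliterated, weight=1.0):
    """Toy illustration of a task-arithmetic merge.

    The task vector is the parameter-wise difference between the
    abliterated and censored source models; scaled by `weight`, it is
    added to the target model's parameters.
    """
    return {
        name: target[name] + weight * (source_abliterated[name] - source_censored[name])
        for name in target
    }

# Tiny stand-in "models": a single named parameter each.
censored_llama3 = {"w": 1.0}
abliterated_llama3 = {"w": 1.5}   # differs by the "abliteration" direction
censored_llama31 = {"w": 2.0}     # merge target

merged = task_arithmetic_merge(censored_llama31, censored_llama3, abliterated_llama3)
print(merged["w"])  # 2.0 + 1.0 * (1.5 - 1.0) = 2.5
```

In the real pipeline the same arithmetic is applied per tensor across all model parameters, with the LoRA adapter standing in for the dense difference.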

  • Uses bfloat16 precision for efficient processing
  • Implements task arithmetic merge methodology
  • Optimized LoRA rank for 70B scale
  • Maintains full model capability while removing restrictions
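Merges like this are commonly expressed as a mergekit configuration. The sketch below is a hypothetical example, assuming mergekit's `task_arithmetic` merge method and its `model+adapter` syntax for applying a LoRA adapter; the adapter path is a placeholder, not the actual artifact used for this model:

```yaml
# Hypothetical mergekit config (paths are illustrative placeholders).
# Applies a LoRA "abliteration" adapter to the censored instruct model
# via task arithmetic, in bfloat16.
base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
models:
  - model: meta-llama/Meta-Llama-3.1-70B-Instruct+<abliteration-lora-adapter>
    parameters:
      weight: 1.0
```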

Core Capabilities

  • General-purpose text generation and conversation
  • Enhanced role-play capabilities
  • Unrestricted content generation
  • Compatible with LLaMA 3 chat template
  • Available in GGUF format for efficient deployment
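Because the model keeps the LLaMA 3 chat template, prompts follow the `<|start_header_id|>…<|eot_id|>` structure. The helper below assembles such a prompt by hand to make the format visible; in practice you would let `tokenizer.apply_chat_template` do this, and the function name here is invented for illustration:

```python
def build_llama3_prompt(messages):
    """Assemble a LLaMA 3 style chat prompt from role/content dicts.

    Illustrative only; with transformers, prefer
    tokenizer.apply_chat_template(messages, add_generation_prompt=True).
    """
    prompt = "<|begin_of_text|>"
    for m in messages:
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Open an assistant header so the model generates the reply next.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

prompt = build_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```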

Frequently Asked Questions

Q: What makes this model unique?

The model's unique LoRA-abliteration technique allows it to maintain high performance while removing content restrictions, making it suitable for unrestricted applications while preserving the base model's capabilities.

Q: What are the recommended use cases?

The model is particularly well-suited for general-purpose applications and role-play scenarios. It's designed for users who need unrestricted content generation while maintaining high-quality outputs.
