WizardLM-30B-Uncensored

Maintained By
cognitivecomputations


  • Model Size: 30B parameters
  • License: Other
  • Framework: PyTorch
  • Training Data: WizardLM alpaca evol instruct dataset (unfiltered)

What is WizardLM-30B-Uncensored?

WizardLM-30B-Uncensored is a large language model derived from WizardLM, trained without built-in alignment or moral constraints. Alignment-related responses were intentionally removed from the training data, so that alignment can instead be layered on separately through methods such as an RLHF LoRA.
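The filtering idea described above can be sketched in a few lines: drop instruction/response pairs whose responses look like canned alignment refusals. This is a minimal illustration only; the phrase list, the record format, and the `output` field name are assumptions, not the actual criteria used to build this model's dataset.

```python
# Hypothetical sketch of removing alignment-style responses from an
# instruction dataset. Marker phrases and record schema are assumed.

REFUSAL_MARKERS = [
    "as an ai language model",
    "i'm sorry, but i cannot",
    "it is not appropriate",
]

def is_aligned_response(response: str) -> bool:
    """Return True if the response looks like a stock alignment refusal."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def filter_dataset(records: list[dict]) -> list[dict]:
    """Keep only records whose 'output' field is not a refusal."""
    return [r for r in records if not is_aligned_response(r["output"])]

data = [
    {"instruction": "Summarize photosynthesis.",
     "output": "Plants convert light into chemical energy..."},
    {"instruction": "Do X.",
     "output": "As an AI language model, I cannot help with that."},
]
kept = filter_dataset(data)  # only the first record survives the filter
```

The resulting corpus contains no refusal behavior for the model to learn, which is what makes post-hoc alignment via a separate adapter feasible.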

Implementation Details

The model is built on the transformer architecture, implemented in PyTorch, and designed for text generation tasks. It posts strong benchmark scores, including 82.93% on HellaSwag (10-shot) and 56.8% on MMLU (5-shot), along with:

  • Achieves 60.24% on AI2 Reasoning Challenge
  • 74.35% accuracy on Winogrande (5-shot)
  • 51.57% on TruthfulQA (0-shot)
  • 12.89% on GSM8K mathematical reasoning
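For text generation, the model can be loaded like any causal LM in Hugging Face `transformers`. The sketch below is hedged: the repository id and the Vicuna-style `USER:`/`ASSISTANT:` prompt template are assumptions, so verify both against the model card before use.

```python
# Hedged sketch of text generation with this model via transformers.
# The repo id and prompt template below are assumptions.

def build_prompt(user_message: str) -> str:
    """Wrap a user message in an assumed Vicuna-style chat template."""
    return f"USER: {user_message} ASSISTANT:"

def generate(user_message: str, max_new_tokens: int = 128) -> str:
    # Imports deferred so the sketch can be read without the heavy
    # dependencies (and the ~60 GB of weights) installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "cognitivecomputations/WizardLM-30B-Uncensored"  # assumed repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer(build_prompt(user_message), return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

At 30B parameters the full-precision weights will not fit on a single consumer GPU, so `device_map="auto"` (or a quantized variant of the model) is the practical choice here.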

Core Capabilities

  • Unconstrained text generation without built-in guardrails
  • Strong performance on common sense reasoning (HellaSwag)
  • Capable of handling complex reasoning tasks
  • Flexible architecture allowing for custom alignment implementation

Frequently Asked Questions

Q: What makes this model unique?

This model is distinctive in its intentional removal of alignment constraints during training, allowing users to implement their own alignment strategies separately. This approach provides greater flexibility but requires responsible usage.

Q: What are the recommended use cases?

The model is suited for research and development purposes where custom alignment is desired. Users must implement their own safety measures and are responsible for the model's outputs and applications.
