distilbert-base-uncased-finetuned-sst-2-english

Maintained by: distilbert

DistilBERT Base Uncased Fine-tuned SST-2

Property           Value
Parameter Count    67M
License            Apache-2.0
Paper              View Paper
Accuracy           91.05%
F1 Score           0.914

What is distilbert-base-uncased-finetuned-sst-2-english?

This is a lightweight sentiment analysis model built on the DistilBERT architecture and fine-tuned on the Stanford Sentiment Treebank (SST-2) dataset. It strikes a strong balance between performance and efficiency, reaching 91.05% accuracy on sentiment classification while keeping a much smaller footprint than BERT.
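As a quick illustration, here is a minimal usage sketch with the Hugging Face transformers pipeline API, assuming transformers and a backend such as PyTorch are installed; the input sentence is illustrative:

```python
# Quickest path: one-call sentiment classification via the
# transformers pipeline; the input sentence is illustrative.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("I love this movie!"))
# -> [{'label': 'POSITIVE', 'score': 0.999...}]
```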

Implementation Details

The model was fine-tuned with a learning rate of 1e-5, a batch size of 32, and 3 training epochs. It uses the DistilBERT architecture, which retains 97% of BERT's language-understanding performance while being 40% smaller.

  • Maximum sequence length: 128 tokens
  • Warmup steps: 600
  • Optimized for binary sentiment classification
  • Supports both PyTorch and TensorFlow frameworks (see the loading sketch after this list)
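
The sketch below illustrates the dual-framework support and the 128-token limit noted above. It assumes both torch and tensorflow are installed (the checkpoint ships weights for both), and the example sentence is made up:

```python
# A minimal loading sketch: the same checkpoint in PyTorch and
# TensorFlow, with inputs truncated to the 128-token maximum.
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,    # PyTorch class
    TFAutoModelForSequenceClassification,  # TensorFlow class
)

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)

pt_model = AutoModelForSequenceClassification.from_pretrained(model_id)
tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative input; truncation enforces the training sequence length.
inputs = tokenizer(
    "The plot was predictable but the acting saved it.",
    truncation=True,
    max_length=128,
    return_tensors="pt",
)
print(inputs["input_ids"].shape)
```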

Core Capabilities

  • Binary sentiment classification (positive/negative; see the inference sketch after this list)
  • High-performance text classification with 91.05% accuracy
  • Efficient inference with reduced model size
  • Production-ready with multiple framework support
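
For cases where the pipeline wrapper is too heavyweight, here is a sketch of manual inference in PyTorch: run the model, softmax the two logits, and map the argmax id to the model's NEGATIVE/POSITIVE labels. The input sentence is illustrative:

```python
# Manual binary classification: run the model, softmax the two
# logits, and map the argmax id to its NEGATIVE/POSITIVE label.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Illustrative input sentence.
inputs = tokenizer("This product broke after two days.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, 2)

probs = torch.softmax(logits, dim=-1)[0]
label = model.config.id2label[int(probs.argmax())]
print(label, round(float(probs.max()), 3))
```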

Frequently Asked Questions

Q: What makes this model unique?

This model stands out for its strong balance between performance and efficiency. While it gives up only about 1.4 percentage points in accuracy compared to BERT-base, it offers significantly faster inference and lower resource requirements.

Q: What are the recommended use cases?

The model is ideal for production sentiment analysis tasks, particularly in scenarios requiring real-time analysis of user feedback, review classification, or social media monitoring. However, users should be aware of potential biases in predictions, particularly regarding geographical references.
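
As one illustration of a review-classification workflow, the sketch below scores a small batch of user reviews with the pipeline API; the review texts and batch size are assumptions for the example, not from the model card:

```python
# Batched review scoring with the pipeline API; the reviews and
# batch size here are illustrative.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

reviews = [
    "Fast shipping and great build quality.",
    "The app crashes every time I open it.",
]
results = classifier(reviews, batch_size=32, truncation=True)
for review, result in zip(reviews, results):
    print(f"{result['label']:8s} {result['score']:.3f}  {review}")
```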
