english-abusive-MuRIL

Maintained By
Hate-speech-CNERG

Property         Value
Parameter Count  238M
License          AFL-3.0
Paper            View Research Paper
Downloads        638,946

What is english-abusive-MuRIL?

english-abusive-MuRIL is a specialized transformer-based model designed for detecting abusive speech in English text. Built upon the MuRIL architecture, this model has been fine-tuned specifically for binary classification of text as either normal or abusive content. Developed by Hate-speech-CNERG, it represents a significant advancement in content moderation and online safety tools.

Implementation Details

The model was fine-tuned with a learning rate of 2e-5 using the PyTorch framework. Its weights are distributed in the safetensors format, and the architecture is BERT-based, adapted from MuRIL. A minimal usage sketch follows the list below.

  • Binary classification output (Normal: LABEL_0, Abusive: LABEL_1)
  • 238 million parameters for robust feature extraction
  • Implements efficient transformer architecture
  • Supports inference endpoints for practical deployment
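A minimal usage sketch with the Hugging Face transformers library, assuming the model is published on the Hub under the repository id "Hate-speech-CNERG/english-abusive-MuRIL" (verify the exact id before use); the example text is illustrative only:

# Minimal sketch: load the classifier and score a single string.
# Assumes the Hub repo id "Hate-speech-CNERG/english-abusive-MuRIL".
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/english-abusive-MuRIL",
)

result = classifier("You are a wonderful person.")
# Expected shape: [{"label": "LABEL_0", "score": ...}]
# LABEL_0 = normal, LABEL_1 = abusive (per the label mapping above)
print(result)

The pipeline returns the predicted label and a confidence score, which downstream moderation logic can threshold as needed.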

Core Capabilities

  • Real-time detection of abusive content in English text
  • High-accuracy binary classification
  • Optimized for production deployment
  • Supports batch processing and streaming inference (see the batched sketch after this list)
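A sketch of batched inference with the same transformers pipeline; the batch size, truncation setting, and example comments are illustrative, not tuned recommendations:

# Sketch of batched inference; repo id and parameters are assumptions.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/english-abusive-MuRIL",
)

comments = [
    "Thanks for the helpful answer!",
    "Nobody wants you here, get lost.",
]

# Passing a list lets the pipeline process the texts in batches.
predictions = classifier(comments, batch_size=8, truncation=True)
for text, pred in zip(comments, predictions):
    label = "abusive" if pred["label"] == "LABEL_1" else "normal"
    print(f"{label:>7} ({pred['score']:.3f}): {text}")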

Frequently Asked Questions

Q: What makes this model unique?

This model combines the MuRIL architecture with task-specific fine-tuning for English abusive-content detection. It is backed by published research and sees substantial real-world use, as its download count indicates.

Q: What are the recommended use cases?

The model is ideal for content moderation systems, social media platforms, online forums, and any digital spaces where detecting and filtering abusive content is crucial for maintaining a safe user environment.
