whisper-tiny-quiztest

Maintained By
tutikentuti

  • Parameter Count: 37.8M
  • License: Apache 2.0
  • Base Model: openai/whisper-tiny
  • Tensor Type: F32
  • Best WER Score: 55.05

What is whisper-tiny-quiztest?

Whisper-tiny-quiztest is a specialized automatic speech recognition (ASR) model fine-tuned from OpenAI's Whisper-tiny base model. It's specifically optimized for quiz-related speech recognition tasks, trained using the tutikentuti/quiztest dataset.
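A minimal usage sketch is shown below. It assumes the checkpoint is published on the Hugging Face Hub under the repo id tutikentuti/whisper-tiny-quiztest; that id and the example file name are illustrative, not confirmed by the card.

```python
# Minimal transcription sketch using the Transformers ASR pipeline.
# The repo id and audio file name below are assumptions for illustration.
from transformers import pipeline

# Build an ASR pipeline backed by the fine-tuned Whisper-tiny checkpoint.
asr = pipeline(
    task="automatic-speech-recognition",
    model="tutikentuti/whisper-tiny-quiztest",
)

# Transcribe a local audio file (any format ffmpeg can decode).
result = asr("quiz_question.wav")
print(result["text"])
```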

Implementation Details

The model was trained with PyTorch and the Hugging Face Transformers framework, using a cosine learning rate schedule with restarts. Training used the Adam optimizer (β1=0.9, β2=0.999, ε=1e-08) with a learning rate of 3e-05; a configuration sketch follows the list below.

  • Training conducted over 1000 steps with 1000 warmup steps
  • Batch size of 8 for both training and evaluation
  • Achieved final validation loss of 0.0947
  • Uses Safetensors for model storage
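The reported hyperparameters can be expressed with the Transformers training API, roughly as sketched below. Values not stated in the card (the output directory, evaluation cadence, precision) are assumptions; only the batch size, step counts, optimizer settings, scheduler type, and Safetensors/TensorBoard details come from the card.

```python
# Hedged sketch of the reported training configuration.
# Arguments not given in the card (e.g. output_dir) are illustrative.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-quiztest",      # assumed output path
    per_device_train_batch_size=8,             # reported batch size
    per_device_eval_batch_size=8,              # same batch size for evaluation
    learning_rate=3e-5,                        # reported learning rate
    warmup_steps=1000,                         # reported warmup steps
    max_steps=1000,                            # reported total training steps
    lr_scheduler_type="cosine_with_restarts",  # reported schedule
    adam_beta1=0.9,                            # reported Adam beta1
    adam_beta2=0.999,                          # reported Adam beta2
    adam_epsilon=1e-8,                         # reported Adam epsilon
    save_safetensors=True,                     # weights stored as Safetensors
    report_to=["tensorboard"],                 # TensorBoard logging per the card
)
```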

Core Capabilities

  • Automatic Speech Recognition optimized for quiz content
  • Achieves a Word Error Rate (WER) of 55.05 on evaluation (an evaluation sketch follows this list)
  • Supports real-time transcription through inference endpoints
  • Compatible with TensorBoard for monitoring
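The card does not describe how the WER figure was computed; a common approach is the `evaluate` library's `wer` metric, sketched below with placeholder reference and hypothesis strings.

```python
# Illustrative WER computation with the `evaluate` library (requires jiwer).
# The reference/prediction strings are placeholders, not data from the card.
import evaluate

wer_metric = evaluate.load("wer")

references = ["what is the capital of france"]
predictions = ["what is the capital of frince"]

# compute() returns the word error rate as a fraction; scale to a percentage.
wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {100 * wer:.2f}")
```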

Frequently Asked Questions

Q: What makes this model unique?

This model is specifically fine-tuned for quiz-related speech recognition, making it particularly suited for educational and assessment applications. Its relatively small size (37.8M parameters) makes it efficient while maintaining reasonable accuracy.

Q: What are the recommended use cases?

The model is best suited for transcribing quiz-related audio, educational materials, and assessment scenarios that require speech recognition. Thanks to its compact size, it is also a practical choice where computational resources are limited.
