# glue-mrpc
| Property | Value |
|---|---|
| License | Apache 2.0 |
| Framework | PyTorch |
| Base Model | bert-base-cased |
| Task Type | Text Classification |
## What is glue-mrpc?
glue-mrpc is a fine-tuned version of the BERT base cased model specifically optimized for the Microsoft Research Paraphrase Corpus (MRPC) task within the GLUE benchmark. This model demonstrates strong performance with an accuracy of 85.54% and an F1 score of 89.74% on the evaluation set.
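The checkpoint can be used for sentence-pair classification with the Transformers library. The snippet below is a minimal sketch: the repo id `glue-mrpc` is an assumption taken from this card's title (substitute the actual Hub id), and the label convention (1 = paraphrase) follows GLUE/MRPC.

```python
# Sketch of sentence-pair inference; the model id is an assumption based
# on this card's title, not a confirmed Hub repo id.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "glue-mrpc"  # assumption: substitute the actual Hub repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

def is_paraphrase(sentence1: str, sentence2: str) -> bool:
    # MRPC is a sentence-pair task: both sentences are encoded together,
    # separated by a [SEP] token.
    inputs = tokenizer(sentence1, sentence2, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # GLUE/MRPC convention: label 1 = paraphrase, label 0 = not a paraphrase.
    return logits.argmax(dim=-1).item() == 1
```

Passing both sentences to the tokenizer in one call is what produces the paired input BERT expects; encoding them separately would turn the task into two unrelated single-sentence inputs.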
## Implementation Details
The model was trained with the Adam optimizer at a learning rate of 5e-05 under a linear learning-rate scheduler. Training ran for 3 epochs with a batch size of 16 for both training and evaluation, using Transformers 4.13.0.dev0 and PyTorch 1.10.0. Additional evaluation metrics:
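The reported hyperparameters can be collected into a single configuration; the sketch below uses a plain dict whose keys mirror `transformers.TrainingArguments` field names (this is not the exact training script):

```python
# Hyperparameters reported on this card, as a plain dict. Key names mirror
# transformers.TrainingArguments fields; this is a sketch, not the exact
# training script used to produce the checkpoint.
training_config = {
    "learning_rate": 5e-5,          # Adam optimizer
    "lr_scheduler_type": "linear",  # linear learning-rate decay
    "num_train_epochs": 3,
    "per_device_train_batch_size": 16,
    "per_device_eval_batch_size": 16,
}
```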
- Precision: 87.16%
- Recall: 92.47%
- AUC: 90.46%
- Combined Score: 87.64%
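The reported F1 is consistent with the precision and recall above, since F1 is their harmonic mean; a quick check in pure Python:

```python
# F1 is the harmonic mean of precision and recall; verify that the
# reported precision/recall reproduce the reported F1 of 89.74%.
precision = 87.16
recall = 92.47

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 89.74
```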
## Core Capabilities
- Paraphrase Detection
- Semantic Similarity Assessment
- Natural Language Inference
- Binary Text Classification
## Frequently Asked Questions
**Q: What makes this model unique?**

A: This model specializes in identifying semantic equivalence between sentence pairs. Its high recall (92.47%) makes it particularly effective at catching true paraphrases.
**Q: What are the recommended use cases?**

A: The model is ideal for paraphrase detection, semantic similarity assessment, and general text-pair classification where determining semantic equivalence is crucial.
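For downstream filtering, the model's two output logits can be turned into a paraphrase probability and thresholded; a minimal pure-Python sketch (the logit values below are illustrative, not real model outputs):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits for one sentence pair; under the GLUE/MRPC label
# convention, index 1 corresponds to "paraphrase".
logits = [-1.2, 2.3]
paraphrase_prob = softmax(logits)[1]
is_paraphrase = paraphrase_prob > 0.5  # threshold tunable per application
```

Raising the threshold above 0.5 trades the model's high recall for higher precision, which may suit deduplication pipelines where false positives are costly.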