# sentence-t5-large
| Property | Value |
|---|---|
| Parameter Count | 335M |
| Tensor Type | FP16 |
| License | Apache 2.0 |
| Paper | Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models |
## What is sentence-t5-large?
sentence-t5-large is a sentence embedding model that maps sentences and paragraphs to a 768-dimensional dense vector space. Converted from the original TensorFlow model st5-large-1, this PyTorch implementation matches the original's benchmark performance while offering compatibility with the sentence-transformers framework.
## Implementation Details
The model uses only the encoder portion of a T5-large architecture, with weights stored in FP16 for reduced memory usage. It is built on the sentence-transformers library and requires version 2.2.0 or newer.
- 768-dimensional output vectors
- Optimized for sentence similarity tasks
- FP16 weight storage for efficiency
- Compatible with sentence-transformers framework
## Core Capabilities
- Sentence and paragraph embedding generation
- High-quality sentence similarity comparison
- Efficient processing with reduced precision
- Cross-lingual text processing
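Similarity between two embeddings is typically scored with cosine similarity, i.e. the dot product of the L2-normalized vectors. A self-contained sketch using toy 4-dimensional vectors in place of real 768-dimensional embeddings:

```python
import numpy as np

def cosine_similarity(a, b):
    # Dot product of the vectors divided by the product of their L2 norms.
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for sentence embeddings.
v1 = [1.0, 0.0, 1.0, 0.0]
v2 = [1.0, 0.0, 1.0, 0.0]
v3 = [0.0, 1.0, 0.0, 1.0]

print(cosine_similarity(v1, v2))  # identical direction -> 1.0
print(cosine_similarity(v1, v3))  # orthogonal -> 0.0
```

The same function applies unchanged to the 768-dimensional vectors the model produces.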
## Frequently Asked Questions
**Q: What makes this model unique?**

A: This model stands out for its specific optimization for sentence similarity tasks while maintaining a balance between model size and performance. It is particularly notable for using the T5 architecture in a sentence embedding context, with FP16 precision for efficient deployment.
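A quick back-of-the-envelope calculation shows what FP16 storage buys for a 335M-parameter model (2 bytes per FP16 weight versus 4 bytes per FP32 weight):

```python
# Approximate weight-storage footprint; excludes activations and framework overhead.
params = 335_000_000

fp16_bytes = params * 2  # 2 bytes per FP16 weight
fp32_bytes = params * 4  # 4 bytes per FP32 weight

print(f"FP16: {fp16_bytes / 1e9:.2f} GB")  # ~0.67 GB
print(f"FP32: {fp32_bytes / 1e9:.2f} GB")  # ~1.34 GB
```

Halving the weight storage is what makes the FP16 checkpoint practical on smaller GPUs.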
**Q: What are the recommended use cases?**

A: The model excels in sentence similarity tasks and can be effectively used for text comparison, clustering, and semantic analysis. However, it may not perform optimally for semantic search tasks.