# ESM2_t30_150M_UR50D
| Property | Value |
|---|---|
| Parameter Count | 150M |
| License | MIT |
| Author | Facebook |
| Model Type | Protein Language Model |
| Framework Support | PyTorch, TensorFlow |
## What is esm2_t30_150M_UR50D?
ESM2_t30_150M_UR50D is a state-of-the-art protein language model developed by Facebook, featuring 30 transformer layers and 150 million parameters. It's trained with a masked language modeling objective on protein sequences and strikes a practical balance between model size and performance within the ESM-2 family.
## Implementation Details
This model is implemented in both PyTorch and TensorFlow and supports F32 and I64 tensor types. It's part of the ESM-2 series, which spans 8M to 15B parameters; the 150M-parameter version offers a practical trade-off between computational requirements and model capability.
- 30 transformer layers architecture
- Masked language modeling objective
- Compatible with both PyTorch and TensorFlow
- Supports Fill-Mask operations
- Available in Safetensors format
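The Fill-Mask operation listed above can be exercised with a short sketch. This assumes the `transformers` library (with PyTorch) and access to the hosted `facebook/esm2_t30_150M_UR50D` checkpoint; the example sequence and masked position are purely illustrative:

```python
# Minimal fill-mask sketch for ESM-2 (assumes transformers + torch are
# installed and the checkpoint can be downloaded from the Hub).
import torch
from transformers import AutoTokenizer, EsmForMaskedLM

def mask_position(sequence: str, pos: int, mask_token: str = "<mask>") -> str:
    """Replace the residue at 0-based position `pos` with the mask token."""
    return sequence[:pos] + mask_token + sequence[pos + 1:]

if __name__ == "__main__":
    name = "facebook/esm2_t30_150M_UR50D"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = EsmForMaskedLM.from_pretrained(name)

    # Illustrative sequence; mask one residue and ask the model to fill it.
    seq = "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVAT"
    inputs = tokenizer(mask_position(seq, 10), return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits

    # Locate the mask token and decode the highest-scoring prediction.
    mask_idx = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    predicted_id = logits[0, mask_idx].argmax().item()
    print("predicted residue:", tokenizer.decode([predicted_id]))
```

The same checkpoint also works through the higher-level `pipeline("fill-mask", model=name)` interface if per-logit access isn't needed.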
## Core Capabilities
- Protein sequence analysis
- Masked language modeling for proteins
- Fine-tuning capability for specific protein-related tasks
- Inference endpoint support
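For protein sequence analysis, a common pattern is to pull per-residue hidden states from the encoder and pool them into one fixed-size vector per sequence for downstream tasks. A minimal sketch, again assuming `transformers` and the hosted checkpoint (the sequences are placeholders):

```python
# Sketch: per-sequence embeddings from ESM-2 hidden states
# (assumes transformers + torch; sequences below are illustrative).
import torch
from transformers import AutoTokenizer, EsmModel

def mean_pool(hidden: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings over real tokens, ignoring padding."""
    mask = attention_mask.unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

if __name__ == "__main__":
    name = "facebook/esm2_t30_150M_UR50D"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = EsmModel.from_pretrained(name)

    seqs = ["MKTVRQERLK", "MAHHHHHHSS"]
    inputs = tokenizer(seqs, padding=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden)

    embeddings = mean_pool(hidden, inputs.attention_mask)
    print(embeddings.shape)  # one fixed-size vector per input sequence
```

These pooled vectors can then feed a lightweight classifier or regressor without touching the backbone weights.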
## Frequently Asked Questions
Q: What makes this model unique?
This model represents an optimal balance between computational efficiency and performance in the ESM-2 family. With 150M parameters, it's large enough to capture complex protein patterns while remaining practical for most applications.
Q: What are the recommended use cases?
The model is ideal for protein sequence analysis and structure prediction tasks, and can be fine-tuned for specific protein-related applications. It's particularly suitable for organizations that need good performance but can't accommodate larger models like the 15B-parameter version.
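The fine-tuning path mentioned above can be sketched with a sequence-level classification head. This is a hypothetical skeleton, not a recipe from the model authors: the two-label task, the tiny batch, and the learning rate are all illustrative assumptions, and `transformers` plus the hosted checkpoint are required:

```python
# Hypothetical fine-tuning skeleton: binary protein property classifier
# on top of ESM-2 (labels, sequences, and hyperparameters are placeholders).
import torch
from transformers import AutoTokenizer, EsmForSequenceClassification

def batch_encode(tokenizer, sequences, labels):
    """Tokenize a batch of sequences and attach integer class labels."""
    enc = tokenizer(sequences, padding=True, return_tensors="pt")
    enc["labels"] = torch.tensor(labels)
    return enc

if __name__ == "__main__":
    name = "facebook/esm2_t30_150M_UR50D"
    tokenizer = AutoTokenizer.from_pretrained(name)
    # num_labels adds a randomly initialized classification head.
    model = EsmForSequenceClassification.from_pretrained(name, num_labels=2)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

    batch = batch_encode(tokenizer, ["MKTVRQERLK", "MAHHHHHHSS"], [0, 1])
    model.train()
    loss = model(**batch).loss  # cross-entropy from the classification head
    loss.backward()
    optimizer.step()
```

In practice this single step would sit inside a training loop (or a `Trainer`) over a labeled dataset, possibly with the backbone partially frozen to reduce memory use.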