EfficientNet B0 with RandAugment
| Property | Value |
|---|---|
| Parameter Count | 5.3M |
| Model Type | Image Classification |
| Framework | PyTorch (timm) |
| License | Apache-2.0 |
| Dataset | ImageNet-1k |
| Image Size | 224 x 224 |
What is efficientnet_b0.ra_in1k?
The efficientnet_b0.ra_in1k is an optimized variant of the EfficientNet B0 architecture, enhanced with RandAugment training methodology. This model represents a carefully balanced approach to neural network design, achieving impressive accuracy while maintaining computational efficiency with just 5.3M parameters and 0.4 GMACs.
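As a quick sanity check, the sketch below loads the model by its `timm` name and confirms the parameter count quoted above. It assumes a recent `timm` release and network access to download the pretrained weights.

```python
import timm

# Load the pretrained model by its timm name (weights download on first use).
model = timm.create_model('efficientnet_b0.ra_in1k', pretrained=True)
model.eval()

# Should report roughly 5.3M parameters, matching the table above.
num_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {num_params / 1e6:.1f}M")
```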
Implementation Details
This implementation follows the RandAugment (RA) recipe, which evolved from the original EfficientNet training procedures and was published as the 'B' recipe in "ResNet Strikes Back". Training uses the RMSProp optimizer (TF 1.0 behaviour) with EMA weight averaging, and a step-based learning-rate schedule with exponential decay and warmup.
- Advanced RandAugment data augmentation strategy
- Efficient architecture with 6.7M activations
- Optimized for 224x224 input images
- Implements modern training improvements
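The following sketch shows single-image classification at the 224x224 input size, building the eval preprocessing from the model's own pretrained data config. It assumes a recent `timm` version that provides `resolve_model_data_config`, and a hypothetical local image file `example.jpg`.

```python
import timm
import torch
from PIL import Image

model = timm.create_model('efficientnet_b0.ra_in1k', pretrained=True)
model.eval()

# Build the evaluation transform (resize/crop to 224x224, ImageNet
# normalization) from the model's pretrained data configuration.
data_config = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**data_config, is_training=False)

img = Image.open('example.jpg').convert('RGB')  # hypothetical local image
with torch.no_grad():
    logits = model(transform(img).unsqueeze(0))   # shape: (1, 1000)
    top5_prob, top5_idx = logits.softmax(dim=-1).topk(5)

print(top5_idx, top5_prob)
```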
Core Capabilities
- High-accuracy image classification on ImageNet-1k
- Feature extraction for downstream tasks
- Efficient inference with moderate computational requirements
- Support for both classification and feature backbone usage
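To illustrate the classification-versus-backbone modes listed above, here is a minimal sketch (again assuming a recent `timm` release) that extracts pooled embeddings via `num_classes=0` and multi-scale feature maps via `features_only=True`.

```python
import timm
import torch

x = torch.randn(1, 3, 224, 224)  # dummy input batch

# Pooled embeddings: drop the classifier head with num_classes=0.
# For EfficientNet-B0 the embedding is 1280-dimensional.
embedder = timm.create_model('efficientnet_b0.ra_in1k', pretrained=True, num_classes=0)
embedding = embedder(x)          # shape: (1, 1280)

# Multi-scale feature maps: build the model as a feature backbone.
backbone = timm.create_model('efficientnet_b0.ra_in1k', pretrained=True, features_only=True)
features = backbone(x)           # list of tensors at strides 2, 4, 8, 16, 32
for f in features:
    print(f.shape)
```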
Frequently Asked Questions
Q: What makes this model unique?
This model combines EfficientNet's compound scaling method with modern training techniques like RandAugment, offering an excellent balance between model size and accuracy. Its moderate parameter count of 5.3M makes it suitable for deployment in resource-constrained environments while maintaining competitive performance.
Q: What are the recommended use cases?
The model is ideal for general image classification tasks, particularly when deployment efficiency is crucial. It can be used for direct classification, feature extraction, or as a backbone for transfer learning in computer vision tasks. The model supports both full classification and feature map extraction modes.
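As a transfer-learning sketch, the classifier head can be replaced and fine-tuned on a new task. The 10-class setup, frozen backbone, and AdamW optimizer below are illustrative assumptions, not part of the published training recipe.

```python
import timm
import torch
import torch.nn as nn

# Replace the 1000-class ImageNet head with a new head for a
# hypothetical 10-class downstream task.
model = timm.create_model('efficientnet_b0.ra_in1k', pretrained=True, num_classes=10)

# Optionally freeze the backbone and train only the new classifier head.
for name, param in model.named_parameters():
    if 'classifier' not in name:
        param.requires_grad = False

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```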