# fbnetc_100.rmsp_in1k
| Property | Value |
|---|---|
| Parameter Count | 5.6M |
| GMACs | 0.4 |
| Input Size | 224x224 |
| License | Apache-2.0 |
| Paper | FBNet Paper |
## What is fbnetc_100.rmsp_in1k?
fbnetc_100.rmsp_in1k is an efficient convolutional neural network designed through Facebook's hardware-aware neural architecture search. This implementation is trained on ImageNet-1k using a specialized RMSProp-based recipe, offering an optimal balance between computational efficiency and accuracy.
## Implementation Details
The model was trained with a recipe centered on RMSProp optimization (matching TensorFlow 1.0 behavior) combined with EMA weight averaging. RandAugment is not used; instead, the recipe relies on:
- RandomErasing and mixup for data augmentation
- Dropout for regularization
- Standard random-resize-crop augmentation
- Step-based learning rate schedule with warmup
## Core Capabilities
- Image Classification on 1000 ImageNet classes
- Feature Map Extraction with multiple resolution levels
- Image Embedding Generation
- Efficient inference with only 5.6M parameters
## Frequently Asked Questions
### Q: What makes this model unique?
This model stands out for its hardware-aware architecture design, optimized through neural architecture search to balance performance and efficiency. The specialized RMSProp training recipe and relatively small parameter count (5.6M) make it particularly suitable for resource-constrained deployments.
### Q: What are the recommended use cases?
The model is well-suited for image classification tasks, particularly when deployment efficiency is crucial. It can be used for feature extraction in transfer learning scenarios, image embedding generation, and as a backbone for more complex computer vision tasks.