Brief-details: Optimized 3B parameter Llama-3 model with FP8 quantization, offering roughly 50% memory reduction while retaining 99.7% of baseline performance across benchmarks.
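A minimal serving sketch with vLLM, which loads FP8-quantized checkpoints natively; the repo ID below is a placeholder, not this model's actual name:

```python
# Sketch: serving an FP8-quantized Llama-3 checkpoint with vLLM.
# "org/llama-3-3b-fp8" is a hypothetical repo ID; substitute the real one.
from vllm import LLM, SamplingParams

llm = LLM(model="org/llama-3-3b-fp8")
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain FP8 quantization in one sentence."], params)
print(outputs[0].outputs[0].text)
```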
Brief-details: Specialized 7B parameter mathematical LLM supporting both Chain-of-Thought and Tool-integrated Reasoning for solving English and Chinese math problems with enhanced computational accuracy.
Brief-details: A creative text-to-image diffusion model combining Playground AI's compositions with ZovyaRPG artist tools, optimized for digital art and RPG-style imagery.
Brief-details: A 7B-parameter multimodal video understanding model based on Qwen2, trained on the LLaVA-Video-178K dataset. Supports up to 110 frames with strong performance on video benchmarks.
Brief-details: An 8B parameter biomedical LLM fine-tuned from Llama-3, achieving 72.5% average accuracy across medical benchmarks. Specialized for healthcare tasks and medical knowledge generation.
Brief-details: Advanced inpainting ControlNet model supporting 1024x1024 resolution, built on the FLUX.1-dev base with enhanced detail generation and prompt control capabilities.
Brief-details: Aria is a breakthrough 25.3B parameter multimodal MoE model with state-of-the-art performance in video and document understanding, activating an efficient 3.9B parameters per token.
Brief-details: ALBERT XXLarge v2 - Advanced language model with 223M params, 12 repeating layers, and a 4096-dim hidden size. Optimized for MLM tasks. Apache 2.0 licensed.
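For illustration, masked-token prediction with the canonical albert-xxlarge-v2 checkpoint via the transformers fill-mask pipeline:

```python
# Sketch: masked language modeling with ALBERT XXLarge v2.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="albert-xxlarge-v2")
for pred in fill_mask("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```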
Brief-details: A Tacotron2-based text-to-speech model trained on the LJSpeech dataset, offering high-quality English speech synthesis with HiFiGAN vocoder support.
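A minimal synthesis sketch assuming SpeechBrain's published LJSpeech checkpoints; note that import paths vary slightly across SpeechBrain versions:

```python
# Sketch: Tacotron2 mel generation + HiFiGAN vocoding with SpeechBrain.
import torchaudio
from speechbrain.inference.TTS import Tacotron2
from speechbrain.inference.vocoders import HIFIGAN

tacotron2 = Tacotron2.from_hparams(source="speechbrain/tts-tacotron2-ljspeech")
hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-ljspeech")

mel_output, mel_length, alignment = tacotron2.encode_text("Hello, world!")
waveforms = hifi_gan.decode_batch(mel_output)
torchaudio.save("tts_output.wav", waveforms.squeeze(1), 22050)  # 22.05 kHz
```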
Brief-details: A specialized BERT-based model for distinguishing code from natural language text, achieving 99.8% accuracy with 28.8M parameters. Ideal for code detection tasks.
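Usage would follow the standard text-classification pattern; the repo ID here is hypothetical:

```python
# Sketch: code-vs-natural-language detection with a BERT classifier.
# "org/code-vs-text-bert" is a placeholder repo ID.
from transformers import pipeline

detector = pipeline("text-classification", model="org/code-vs-text-bert")
print(detector("def add(a, b): return a + b"))
print(detector("The weather is lovely today."))
```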
Brief-details: BLIP-2 model with 12.2B parameters using the Flan-T5-XXL LLM for image-text tasks. Features visual Q&A, captioning, and chat capabilities.
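A captioning sketch with the Salesforce/blip2-flan-t5-xxl checkpoint; at 12.2B parameters it needs device_map="auto" (with accelerate installed) or multiple GPUs:

```python
# Sketch: image captioning with BLIP-2 (Flan-T5-XXL variant).
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-flan-t5-xxl", device_map="auto"
)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```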
Brief-details: Vietnamese bi-encoder model for semantic text similarity, 135M params, trained on MS MARCO & SQuAD datasets, achieves 73.28% Acc@1 on legal text retrieval.
Brief-details: Sentence embedding model producing 768-dim vectors, trained on 1B+ sentence pairs using the MPNet architecture. Optimized for semantic search and clustering.
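This matches the all-mpnet-base-v2 pattern; a semantic-search sketch assuming that checkpoint:

```python
# Sketch: cosine-similarity search with an MPNet sentence embedder.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
docs = ["A cat sits on the mat.", "Quarterly revenue grew 12%."]
query_emb = model.encode("feline resting", convert_to_tensor=True)
doc_embs = model.encode(docs, convert_to_tensor=True)
print(util.cos_sim(query_emb, doc_embs))  # higher score = more similar
```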
Brief-details: A 3.48B parameter code-focused LLM trained on 4T tokens from 116 programming languages, optimized for code generation, explanation, and fixing tasks.
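Generation would follow the standard causal-LM pattern; the repo ID below is a placeholder:

```python
# Sketch: code completion with a causal-LM code model.
# "org/code-llm-3b" is a hypothetical repo ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "org/code-llm-3b"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```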
Brief-details: 8B parameter Llama-3.1 model optimized for instruction-following, featuring multiple GGUF quantization options ranging from 32GB down to 2.95GB, ideal for various hardware configurations.
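A local-inference sketch with llama-cpp-python; the .gguf file name stands in for whichever quant level fits your hardware:

```python
# Sketch: running a GGUF quant locally with llama-cpp-python.
# The file name is a placeholder for the quant you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="llama-3.1-8b-instruct-Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF in one line."}]
)
print(out["choices"][0]["message"]["content"])
```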
Brief-details: Cross-encoder model for semantic similarity scoring across 6 languages, trained on NLI datasets. Outputs similarity scores between 0 and 1 for sentence pairs.
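Scoring sketch via sentence-transformers' CrossEncoder class; the repo ID is a placeholder:

```python
# Sketch: pairwise similarity scoring with a cross-encoder.
# "org/multilingual-similarity-cross-encoder" is a hypothetical repo ID.
from sentence_transformers import CrossEncoder

model = CrossEncoder("org/multilingual-similarity-cross-encoder")
pairs = [
    ("A man is eating.", "Someone is having a meal."),
    ("A man is eating.", "The stock market fell."),
]
print(model.predict(pairs))  # one score in [0, 1] per pair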
Brief-details: Russian language model based on Mistral Nemo, optimized for conversational AI. 12.2B parameters, GGUF format, Apache 2.0 license.
Brief-details: YOLOS-based fashion object detection model supporting 46 clothing categories. Fine-tuned on the Fashionpedia dataset, with 16.7K+ downloads.
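Detection sketch using the transformers object-detection API; the repo ID is assumed:

```python
# Sketch: clothing detection with a Fashionpedia-tuned YOLOS model.
# "org/yolos-fashionpedia" is a placeholder repo ID.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

repo = "org/yolos-fashionpedia"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForObjectDetection.from_pretrained(repo)

image = Image.open("outfit.jpg")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=torch.tensor([image.size[::-1]])
)[0]
for score, label in zip(results["scores"], results["labels"]):
    print(model.config.id2label[label.item()], round(score.item(), 2))
```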
Brief-details: A specialized Stable Diffusion XL-based model for generating Minecraft character skins, featuring transparent layer support and high-quality texture generation.
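Generation sketch via diffusers' SDXL pipeline; the repo ID is a placeholder and a CUDA GPU is assumed:

```python
# Sketch: text-to-image with an SDXL-based skin generator.
# "org/minecraft-skin-sdxl" is a hypothetical repo ID.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "org/minecraft-skin-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("a knight in diamond armor, minecraft skin").images[0]
image.save("skin.png")
```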
Brief-details: Qwen1.5-0.5B-based PEFT model optimized for efficiency, distributed in safetensors format. Popular with 16.7K+ downloads, built for specialized downstream tasks.
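Loading sketch: attach the adapter to its Qwen1.5-0.5B base with peft; the adapter repo ID is a placeholder:

```python
# Sketch: loading a PEFT adapter onto the Qwen1.5-0.5B base model.
# "org/qwen1.5-0.5b-adapter" is a hypothetical adapter repo ID.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B")
model = PeftModel.from_pretrained(base, "org/qwen1.5-0.5b-adapter")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")
```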
Brief-details: A Spanish-to-Catalan neural machine translation model achieving a 68.9 BLEU score, built by Helsinki-NLP on a transformer architecture with SentencePiece tokenization.
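Translation sketch assuming the conventional Helsinki-NLP/opus-mt-es-ca repo ID:

```python
# Sketch: Spanish-to-Catalan translation with a MarianMT checkpoint.
from transformers import pipeline

translate = pipeline("translation", model="Helsinki-NLP/opus-mt-es-ca")
print(translate("El tiempo es muy agradable hoy.")[0]["translation_text"])
```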