Brief-details: A powerful 72B parameter vision-language model capable of processing long videos, multilingual text, and high-resolution images with state-of-the-art performance on visual understanding benchmarks.
Brief Details: LLaMA-65B-HF is Meta AI's 65B parameter language model trained on diverse web data, optimized for research and featuring strong reasoning capabilities.
Brief-details: A 6.9B parameter language model fine-tuned on synthetic instruction data, built by Lambda Labs for enhanced instruction-following capabilities.
Brief Details: A 2.92B parameter FLAN-T5-XL model fine-tuned for advanced grammar correction, capable of handling multiple errors while preserving semantics.
Brief Details: Popular text-to-image model optimized for photorealistic results. Features high-detail skin rendering and film-like quality. 340 likes, 11.3k+ downloads.
Brief Details: Zero-shot image classification model based on the CLIP architecture with 1.2M+ downloads. Intended for research into zero-shot classification and the capabilities of vision-language models.
Brief-details: Flan-UL2 is a powerful 20B parameter encoder-decoder model that applies Flan instruction tuning to UL2, a T5-style architecture pretrained with a mixture-of-denoisers objective, yielding strong performance on diverse NLP tasks and improved few-shot learning.
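Few-shot prompting with an instruction-tuned model like Flan-UL2 amounts to prepending worked examples to the query. A minimal sketch of the prompt format, with made-up task and examples (not from any model card):

```python
# Hypothetical few-shot prompt builder for an instruction-tuned
# encoder-decoder model; the task and demonstrations are illustrative.
def build_few_shot_prompt(examples, query):
    """Format (input, label) demonstrations followed by the new query."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

demos = [
    ("The film was a delight from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_few_shot_prompt(demos, "A solid, if unspectacular, sequel.")
```

The resulting string would be passed to the model's tokenizer/generate call; the exact template that works best varies by model.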
Brief Details: A French text-to-speech model built on Tortoise-TTS, fine-tuned on extensive French datasets with multiple versions offering improved pronunciation and voice cloning capabilities.
Brief Details: OpenVINO-optimized Stable Diffusion v1.5 model for CPU/GPU acceleration, featuring FP16 precision and CreativeML OpenRAIL-M license.
Brief Details: 20B parameter open-source chat model fine-tuned from GPT-NeoX, trained on 40M+ instructions with carbon-negative compute. Excels at QA and classification.
Brief-details: HorrorLora is a specialized LoRA model trained on horror-themed images, featuring built-in noise offset and using the token 'hrrsks' for generating creepy and dark artistic outputs.
Brief Details: OpenVINO-optimized Stable Diffusion model for efficient text-to-image generation on Intel hardware, featuring FP16 precision and static shape support.
Brief-details: Multilingual sentence similarity model supporting 12 Indian languages with cross-lingual capabilities, built on SBERT architecture.
Brief Details: Compact Russian sentiment analysis model (11.8M params) for 3-class text classification. Achieves up to 0.98 F1 score on specific datasets.
Brief Details: 4-bit quantized version of FLUX.1-dev optimized for 16GB GPUs, using hybrid quantization for improved efficiency (8.5-11GB VRAM usage).
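To illustrate why 4-bit storage roughly quarters VRAM versus FP16, here is a minimal sketch of symmetric per-tensor 4-bit quantization. This is a generic illustration, not the hybrid scheme actually used for FLUX.1-dev:

```python
# Generic symmetric 4-bit quantization sketch (NOT FLUX's actual scheme):
# floats are mapped to signed 4-bit integers in [-8, 7] plus one FP scale,
# so each weight needs 4 bits instead of 16.
def quantize_4bit(weights):
    scale = max(abs(w) for w in weights) / 7.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.12, -0.5, 0.33, 0.07]
q, s = quantize_4bit(w)
w_hat = dequantize(q, s)   # approximate reconstruction of w
```

Real schemes typically quantize per-block rather than per-tensor, and "hybrid" usually means keeping sensitive layers at higher precision.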
Brief Details: InternLM2-1.8B is a powerful 1.8B parameter language model supporting 200K context length, with strong capabilities in reasoning, math, and coding. Available in base and chat variants.
Brief Details: A sentiment analysis model built on ALBERT, offering 7-level classification from very positive to very negative. 11.7M params, F32 tensor type.
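A 7-level sentiment classifier ultimately maps a 7-way logit vector to an ordinal label via argmax. A sketch of that decoding step, with label names that are assumptions rather than the model's actual id2label mapping:

```python
# Hypothetical 7-level sentiment label decoding; the label names and their
# ordering are assumptions, not taken from the model's config.
LABELS = ["very negative", "negative", "somewhat negative", "neutral",
          "somewhat positive", "positive", "very positive"]

def decode(logits):
    """Return the label with the highest logit."""
    return LABELS[max(range(len(logits)), key=lambda i: logits[i])]

label = decode([0.1, 0.2, 0.0, 0.3, 0.9, 2.1, 0.4])  # -> "positive"
```

Checking the checkpoint's `id2label` field is the reliable way to recover the true ordering before decoding real outputs.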
Brief-details: Multilingual text embedding model (118M params) supporting 100+ languages; uses prefix-based encoding and is optimized for semantic search and retrieval tasks.
Brief-details: Identity-preserving text-to-video generation model using frequency decomposition techniques. Apache 2.0 licensed, supports image-to-video conversion with ONNX runtime.
Brief Details: Compact 1.1B parameter LLaMA-architecture model trained on 105B tokens, optimized for efficient deployment with Apache 2.0 license.
Brief Details: Qwen1.5-14B is a powerful 14.2B parameter transformer-based language model with 32K context length, featuring multilingual support and improved architecture.