Brief-details: A comprehensive GGUF quantization suite of OpenThinker2-7B offering multiple compression levels from 2.78GB to 15.24GB with various quality-size tradeoffs.
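For any of the GGUF quants in a suite like this, llama-cpp-python is enough to try the model locally. A minimal sketch, assuming a hypothetical Q4_K_M file from the collection (the filename and settings below are illustrative, not from the model card):

```python
# Minimal sketch: running a GGUF quant with llama-cpp-python.
# The filename (quant level) is an assumption; pick whichever
# size/quality tradeoff from the suite fits your hardware.
from llama_cpp import Llama

llm = Llama(
    model_path="OpenThinker2-7B-Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```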
BRIEF-DETAILS: Dream-v0-Base-7B is an open-source 7B parameter diffusion language model that generates text through iterative denoising rather than left-to-right decoding.
BRIEF DETAILS: NarrowMaid-8B is an 8B parameter LLM merge optimized for roleplaying and storytelling, combining 30+ models with exceptional character consistency and context retention.
Brief-details: llama.cpp quantized versions of GemmaCoder3-12B offering various compression levels (2-23GB), with formats tailored to different hardware setups.
BRIEF DETAILS: A 32B parameter LLM specialized in creating detailed character cards for SillyTavern, combining advanced reasoning with creative writing, based on the QwQ architecture.
Brief-details: AReaL-boba-SFT-32B is a 32B parameter model for mathematical reasoning that matches QwQ-32B performance while being fine-tuned (SFT) on only 200 data samples.
Brief-details: A comprehensive collection of GGUF quantizations of Tessa-T1-3B, offering various compression levels from 1.14GB to 6.18GB with different quality-size tradeoffs.
Brief Details: Creative-writing-focused 12B parameter Gemma-3 model that merges instruct and base fine-tunes for enhanced storytelling with novel-like prose.
Brief-details: TRELLIS-normal-v0-1 is an enhanced, normal-conditioned variant of the TRELLIS 3D generation model developed by Stable-X, taking normal maps as conditioning input for 3D asset generation.
Brief Details: MotionPro is an image-to-video generation model offering precise motion control, supporting both object and camera movements through a simple brush-based interface.
Brief Details: DSO-finetuned-TRELLIS is the TRELLIS 3D generation model fine-tuned with DSO (Direct Simulation Optimization), which uses simulation feedback to improve the physical soundness of generated assets.
Brief Details: A LoRA model trained with Replicate's Flux trainer for image generation, using the trigger word "TOK". Compatible with the diffusers library and the Flux-Super-Realism base model.
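A hedged sketch of how such a LoRA could be loaded with diffusers; the repo ids below are placeholders (FLUX.1-dev is used as an assumed-compatible base, while the card lists Flux-Super-Realism), and the sampler settings are illustrative:

```python
# Hedged sketch: loading a Flux LoRA with diffusers and prompting with
# the "TOK" trigger word. Repo ids below are placeholders, not verified.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",      # assumed-compatible base model
    torch_dtype=torch.bfloat16,
).to("cuda")

pipe.load_lora_weights("user/flux-lora-repo")  # placeholder LoRA repo id

image = pipe(
    prompt="a portrait photo of TOK, studio lighting",  # trigger word in prompt
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("tok.png")
```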
Brief-details: Derm Foundation is a specialized healthcare AI model by Google for dermatological applications; access requires accepting the Health AI Developer Foundations terms.
Brief Details: Llama 4 Scout variant optimized with Unsloth's dynamic 4-bit quantization: 17B active parameters across 16 experts, multilingual text/image input, and a 10M-token context length.
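A hedged sketch of loading such a dynamic 4-bit checkpoint with Unsloth; the repo id and sequence length are assumptions, and Llama 4 support may require a recent Unsloth release (possibly its FastModel loader rather than FastLanguageModel):

```python
# Hedged sketch: loading a dynamic 4-bit Unsloth checkpoint.
# The repo id and max_seq_length are assumptions; Llama 4 support may
# require a recent Unsloth release.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-4-Scout-17B-16E-Instruct-bnb-4bit",  # assumed repo id
    max_seq_length=8192,   # far below the advertised 10M context; fits typical GPUs
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Summarize the benefits of 4-bit quantization.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```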
BRIEF-DETAILS: A LoRA model for FLUX.1-dev text-to-image generation, trained for 3,500 steps at rank 16, with "TOK" as its trigger word.
Brief-details: UNO is a multi-image-conditioned subject-to-image model built on diffusion transformers, featuring high-consistency data synthesis and universal rotary position embedding.
Brief Details: Advanced style-adaptable AI art model optimized for cartoon/anime with expanded versatility in brushwork, color manipulation, and artistic techniques.
Brief Details: GGUF conversion of the Wan2.1-Fun-14B-Control model, optimized for ComfyUI integration. 14B parameters, focused on controllable video generation.
Brief-details: A 3B parameter LLM specialized in function calling with added chat capabilities, featuring natural follow-up questions and context management for enhanced API interactions.
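A hedged sketch of what a function-calling interaction could look like via transformers' chat-template tool support; the model id is a placeholder, and it is an assumption that this model's chat template accepts a `tools` argument:

```python
# Hedged sketch: passing a tool definition through a chat template.
# The model id is a placeholder and its template is assumed to support
# the `tools` argument (transformers >= 4.42).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/function-calling-3b"  # placeholder
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def get_weather(city: str) -> str:
    """
    Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return "sunny, 22C"  # stub; a real tool would call an API

messages = [{"role": "user", "content": "What's the weather in Lisbon?"}]
prompt = tok.apply_chat_template(
    messages, tools=[get_weather], add_generation_prompt=True, tokenize=False
)

inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```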
BRIEF-DETAILS: State-of-the-art 3B parameter multimodal embedding model for visual document retrieval, achieving 58.8 NDCG@5 on Vidore-v2, with unified text-image encoding capabilities.
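For context on the quoted score, NDCG@5 rewards placing relevant documents near the top of the ranked list. A minimal, generic implementation of the metric (illustrative only, not the Vidore-v2 evaluation code):

```python
# Illustrative NDCG@k, the metric behind the 58.8 NDCG@5 figure.
# `relevances` are graded relevance scores of retrieved docs, in ranked order.
import math

def dcg(relevances, k):
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=5):
    ideal_dcg = dcg(sorted(relevances, reverse=True), k)
    return dcg(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Example: the relevant page is retrieved at rank 2 among five results.
print(round(ndcg_at_k([0, 1, 0, 0, 0], k=5), 3))  # 0.631
```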
Brief-details: ReSearch-Qwen-32B-Instruct is an LLM trained via reinforcement learning to reason with search, built on the Qwen2.5 architecture.