Brief-details: FanFic-Illustrator - A 3B parameter AI model that analyzes creative stories and generates optimal illustration scene prompts for image generation, specialized in anime/manga content.
Brief-details: Qwen 2.5 7B model fine-tuned with RLHF for creative writing, using the Erebus dataset and a custom reward model for improved narrative generation.
Brief-details: A fine-tuned version of Phi-3-mini optimized for character-based chat with strict prompt formatting requirements and multiple precision options for various hardware specs.
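A minimal usage sketch, assuming a standard transformers chat template; the repo id and persona below are placeholders, and the model card's exact prompt format takes precedence:

```python
# Hedged sketch: load a Phi-3-style character-chat model and format the
# conversation with the tokenizer's built-in chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example/phi-3-mini-character-chat"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are Aria, a cheerful ship navigator."},  # persona
    {"role": "user", "content": "Aria, what's our heading?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```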
Brief-details: RWKV7 2.9B parameter language model using flash-linear attention, trained on 3.119T tokens. Features efficient architecture and World tokenizer with 65k vocabulary.
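A loading sketch, assuming the checkpoint ships its architecture code with the repo (hence trust_remote_code); the repo id is a placeholder:

```python
# Hedged sketch: RWKV7 / flash-linear-attention checkpoints typically load
# through transformers with trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example/rwkv7-2.9b-world"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Linear-attention models scale to long contexts because"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```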
Brief-details: A 32B parameter Japanese-focused instruction-tuned LLM built on Qwen2.5, enhanced with Chat Vector and ORPO optimization, showing strong reasoning capabilities.
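For context, a conceptual sketch of the Chat Vector step: the instruction-tuning weight delta is added to a continually pretrained Japanese base, with ORPO applied afterwards as a separate preference-optimization stage. The Japanese repo id is hypothetical and matching tensor shapes across the three checkpoints are assumed:

```python
# Hedged sketch of Chat Vector merging:
#   theta_ja_chat = theta_ja + (theta_instruct - theta_base)
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-32B", torch_dtype=torch.bfloat16)
inst = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-32B-Instruct", torch_dtype=torch.bfloat16)
ja = AutoModelForCausalLM.from_pretrained("example/qwen2.5-32b-ja-base", torch_dtype=torch.bfloat16)  # hypothetical

inst_sd, ja_sd = inst.state_dict(), ja.state_dict()
for name, p in base.state_dict().items():
    ja_sd[name] += inst_sd[name] - p  # add the "chat vector" in place

ja.save_pretrained("qwen2.5-32b-ja-chat-vector")
```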
Brief-details: EasyControl: A flexible conditional control framework for DiT (diffusion transformer) models, adding efficient control mechanisms and multi-condition support.
Brief-details: LHM_Runtime converts a single image into an animatable 3D human model in seconds, using a feed-forward architecture and video-based training.
Brief-details: A LoRA model trained on Replicate using flux-dev-lora-trainer, designed for image generation with TOK trigger word support and UltraRealism capabilities.
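A minimal generation sketch with diffusers; the LoRA repo id is a placeholder, and "TOK" is the trigger word the card mentions:

```python
# Hedged sketch: load the flux-dev base, attach the LoRA, and include the
# TOK trigger word in the prompt.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("example/flux-ultrarealism-lora")  # hypothetical repo id

image = pipe(
    "photo of TOK, ultra-realistic portrait, natural window light",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("portrait.png")
```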
Brief-details: A 32B parameter RWKV-based model converted from Qwen 2.5, claiming up to a 1000x inference cost reduction while maintaining competitive performance across benchmarks.
Brief-details: Specialized Japanese speech recognition model optimized for anime content, featuring reduced hallucination and improved domain-specific accuracy with beam search support.
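A transcription sketch, assuming a Whisper-style checkpoint behind the standard ASR pipeline; the repo id and audio file are placeholders:

```python
# Hedged sketch: run Japanese ASR with beam search enabled.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="example/anime-whisper-ja",  # hypothetical repo id
)
result = asr(
    "episode_clip.wav",  # placeholder audio file
    generate_kwargs={"language": "ja", "num_beams": 5},  # beam search decoding
)
print(result["text"])
```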
Brief-details: Specialized 7B parameter Japanese-to-Chinese translation model optimized for visual novels (Galgame), with strong handling of script elements and formatting.
Brief-details: Quantized 3B parameter text-to-speech model supporting 8 distinct voices and emotions, optimized for efficient inference at 24 kHz audio output.
Brief-details: A specialized 7B-parameter visual-language-action model for Minecraft gameplay, enabling natural language control of in-game actions using keyboard and mouse interactions.
Brief-details: A modified 12B parameter variant of Google's Gemma model, tuned for adversarial responses and unique perspectives. Features vision capabilities and alternative personality traits.
Brief-details: Specialized 4B parameter variant of Google's Gemma optimized for neutral information retrieval, featuring reduced moral constraints and bias dampening.
Brief-details: A 3B parameter Vietnamese reasoning model focused on analytical tasks, developed by 5CD-AI. Currently in beta, specializing in detailed multi-step reasoning.
Brief-details: A 24B parameter LLaMA-based model offered in a comprehensive set of GGUF quantizations, optimized for performance and efficiency, with file sizes ranging from 7 GB to 47 GB.
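A local-inference sketch with llama-cpp-python; the GGUF file name follows a typical quant naming scheme and is an assumption. Pick the quantization that fits your memory budget (7 GB to 47 GB here):

```python
# Hedged sketch: run one of the GGUF quantizations locally.
from llama_cpp import Llama

llm = Llama(
    model_path="model-24b.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=8192,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU when available
)
out = llm("Explain GGUF quantization in one paragraph.", max_tokens=256)
print(out["choices"][0]["text"])
```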
Brief-details: Mistral Small 3.1 24B Instruct converted to the Hugging Face format, optimized for text-only tasks.
Brief-details: Experimental uncensored version of Gemma 3 4B using a layerwise abliteration technique, optimized for reduced refusals while maintaining coherent outputs.
Brief-details: A 70B parameter LLM fine-tuned from Llama-3.3, specialized in selecting the highest-quality response among candidates. Achieves 93.4% accuracy on Arena Hard with Feedback-Edit inference-time scaling (ITS).
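To illustrate the selection idea generically, a best-of-N sketch: sample several candidates, score each with the selector, keep the best. The model ids and the sequence-classification scoring interface are assumptions, not the model's documented API:

```python
# Hedged best-of-N sketch: generate N candidates, score each with a
# selector/reward model, return the top-scoring response.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          pipeline)

gen = pipeline("text-generation", model="example/policy-llm")  # hypothetical
tok = AutoTokenizer.from_pretrained("example/response-selector")  # hypothetical
scorer = AutoModelForSequenceClassification.from_pretrained("example/response-selector")

prompt = "Summarize the causes of the French Revolution."
candidates = [
    o["generated_text"]
    for o in gen(prompt, num_return_sequences=4, do_sample=True, max_new_tokens=200)
]

def score(response: str) -> float:
    inputs = tok(prompt, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return scorer(**inputs).logits.squeeze().item()

print(max(candidates, key=score))  # highest-scoring candidate
```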
Brief-details: Sonata is a Facebook-developed model available on Hugging Face, focused on audio and music processing.