Brief-details: An open-source 1.18B parameter language model from Allen AI, trained on 3T tokens. Delivers strong performance for its size and is released under the Apache 2.0 license.
Brief-details: Neural machine translation model for Galician-to-Portuguese by Helsinki-NLP, achieving a 57.9 BLEU score with the transformer-align architecture.
Brief-details: Helsinki-NLP's Bulgarian-to-English translation model using Marian architecture. BLEU score of 59.4 on Tatoeba dataset, Apache 2.0 licensed.
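A minimal usage sketch for OPUS-MT checkpoints like the two translation entries above, using Hugging Face transformers; the Bulgarian-to-English repo id follows Helsinki-NLP's standard naming and should be confirmed on the Hub before use.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-bg-en"  # assumed repo id (Bulgarian -> English)
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Tokenize a Bulgarian sentence and decode the generated English translation
batch = tokenizer(["Здравей, свят!"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```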
Brief-details: A 3.09B parameter GGUF-formatted language model optimized for text generation with multiple quantization options (2-8 bit precision), based on Mistral architecture.
Brief-details: A 3.09B parameter GGUF-formatted language model optimized for text generation with multiple quantization options (2-8 bit precision).
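A hedged sketch of loading GGUF quantizations like the entries above and below with llama-cpp-python; the repo id and file pattern are placeholders, and the quant file (for example Q4_K_M) should be chosen from the actual model page.

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

llm = Llama.from_pretrained(
    repo_id="someuser/some-3b-model-GGUF",  # hypothetical repo id
    filename="*Q4_K_M.gguf",                # glob matching the 4-bit quant file
    n_ctx=4096,                             # context window to allocate
)
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```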
Brief-details: A learned sparse retrieval model with 133M parameters, optimized for search relevance with OpenSearch. Encodes queries and documents into 30,522-dimension sparse vectors.
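A rough sketch, under stated assumptions, of how learned sparse encoders of this kind map text onto a vocabulary-sized (30,522-dimension) sparse vector: run a masked-LM head and pool per-token logits into per-term weights. The repo id is assumed, and the exact pooling recipe should be taken from the model card.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "opensearch-project/opensearch-neural-sparse-encoding-v1"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

def encode_sparse(text: str) -> torch.Tensor:
    feats = tokenizer(text, return_tensors="pt")
    logits = model(**feats).logits                                   # [1, seq_len, 30522]
    # Max-pool log(1 + ReLU(logit)) over the sequence, masking padding positions
    weights, _ = torch.max(
        torch.log1p(torch.relu(logits)) * feats["attention_mask"].unsqueeze(-1),
        dim=1,
    )
    return weights.squeeze(0)                                        # [30522] term weights

vec = encode_sparse("What is learned sparse retrieval?")
print((vec > 0).sum().item(), "non-zero dimensions")
```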
Brief-details: A Vision Transformer model fine-tuned for fashion image classification, achieving 99.6% accuracy on gender and age detection from fashion images.
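A minimal sketch of querying an image classifier like this one through the transformers pipeline API; the model id and image URL are placeholders.

```python
from transformers import pipeline

# Hypothetical repo id; substitute the actual fashion-classification checkpoint
classifier = pipeline("image-classification", model="some-org/fashion-vit-classifier")
preds = classifier("https://example.com/fashion-item.jpg")  # URL, local path, or PIL image
print(preds[:3])  # top predicted labels with confidence scores
```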
Brief-details: Qwen2's 0.5B instruction-tuned model in GGUF format, a compact yet capable language model with multiple quantization options for efficient deployment.
Brief-details: Versatile text-to-image model fine-tuned from VectorArtz, specializing in detailed art, photography, and vector-style outputs with simple prompts.
Brief-details: 3.21B parameter GGUF-formatted language model optimized for efficient local deployment with multiple quantization options (2-8 bit precision).
Brief-details: OpenFLUX.1 is an Apache 2.0 licensed text-to-image model that removes distillation from FLUX.1-schnell, enabling fine-tuning while maintaining fast generation capabilities.
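A hedged sketch for text-to-image entries such as the VectorArtz fine-tune and OpenFLUX.1 above, using diffusers' auto-detecting pipeline loader; the model id is a placeholder, and FLUX-scale checkpoints typically require a recent diffusers release and a large GPU.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "some-org/some-text-to-image-model",  # hypothetical repo id
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a minimalist vector-style fox logo, flat colors").images[0]
image.save("fox.png")
```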
Brief-details: A 3.09B parameter GGUF-optimized language model with multiple quantization options (2-8 bit), designed for efficient text generation and conversation tasks.
Brief-details: Qwen2.5-72B-Instruct-GGUF is a powerful 72.7B parameter LLM optimized for instruction following, supporting 29+ languages and 128K context length.
Brief-details: Long-context Mistral-7B variant supporting 524,288 tokens, achieving an 88.7% average score on the RULER benchmark and 100% on NIAH tests. Built for extended context processing.
Brief-details: 7B parameter Mistral-based language model fine-tuned with DPO, excelling in reasoning tasks with strong benchmark performance and 8k context window.
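A sketch of chatting with an instruction-tuned causal LM such as the DPO-tuned Mistral variant above, using transformers chat templates; the repo id is a placeholder for the actual checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/mistral-7b-dpo"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```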
Brief-details: A versatile English text embedding model optimized for retrieval and semantic similarity tasks, achieving strong performance on MTEB benchmarks with 768-dimension vectors and a 512-token maximum sequence length.
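A minimal sketch covering the text-embedding entries in this list with sentence-transformers; the model id is a placeholder for this or the smaller embedding entry below.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("some-org/text-embedding-model")  # hypothetical repo id
embeddings = model.encode(
    ["How do I reset my password?", "Steps to recover account access"],
    normalize_embeddings=True,  # unit-length vectors for cosine similarity
)
print(util.cos_sim(embeddings[0], embeddings[1]))  # semantic similarity score
```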
Brief-details: A 3.2B parameter GGUF-formatted language model optimized for text generation with multiple quantization options (2-8 bit precision), built for efficient local deployment.
Brief-details: A 4.52B parameter LLaMA-based merged model created with mergekit's passthrough method, optimized for text generation and conversational tasks, using FP16 precision.
Brief-details: Compact text embedding model (33.4M params) optimized for semantic similarity and retrieval tasks with strong MTEB benchmark performance.
Brief-details: RoBERTa-large model (355M params) fine-tuned on the WANLI dataset, achieving superior performance on NLI tasks with improvements of 11% on HANS and 9% on ANLI.
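A sketch of scoring a premise/hypothesis pair with an NLI classifier like the WANLI fine-tune above; the repo id is assumed from the authors' naming and the label set is model-specific.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "alisawuffles/roberta-large-wanli"  # assumed repo id; confirm on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A man is playing a guitar.", "A person makes music.", return_tensors="pt")
probs = torch.softmax(model(**inputs).logits, dim=-1).squeeze()
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```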
Brief-details: A 3.09B parameter GGUF-formatted instruction model, quantized for efficient deployment with multiple precision options from 2-bit to 8-bit.