LibreFLUX

Maintained by: jimmycarter

Property          Value
----------------  ----------------------------
License           Apache 2.0
Training Compute  ~1,500 H100-equivalent hours
Library           Diffusers
Pipeline Type     Text-to-Image

What is LibreFLUX?

LibreFLUX is an Apache 2.0 licensed de-distilled version of FLUX.1-schnell that restores classifier-free guidance and the full T5 context length. The model was trained for approximately 1,500 H100-equivalent hours and implements attention masking for improved token utilization.
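Since the card lists Diffusers as the library, standard pipeline loading should apply. Below is a minimal sketch, assuming the repository id jimmycarter/LibreFLUX (taken from the maintainer name above) and default pipeline arguments; the official usage instructions may differ.

```python
# Minimal sketch: load LibreFLUX with Diffusers.
# The repository id is an assumption based on the maintainer name.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "jimmycarter/LibreFLUX",     # assumed repo id
    torch_dtype=torch.bfloat16,  # bf16 keeps VRAM usage moderate
)
pipe.to("cuda")

image = pipe(
    prompt="a watercolor painting of a lighthouse at dawn",
    num_inference_steps=28,
).images[0]
image.save("lighthouse.png")
```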

Implementation Details

The model was trained with LoKr parameter-efficient fine-tuning (about 3.2B trainable parameters), beta timestep scheduling, and multi-rank stratified sampling, using roughly 0.5 million high-resolution images with diverse captions.
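As a rough illustration of how beta timestep scheduling and multi-rank stratified sampling can work together, the sketch below draws per-rank uniforms from disjoint strata of [0, 1) and maps them through a Beta inverse CDF; the Beta parameters and stratification layout here are illustrative assumptions, not the values used in training.

```python
# Illustrative sketch: beta-distributed timesteps with per-rank
# stratified sampling. Alpha/beta values and the stratification
# layout are assumptions for illustration only.
import numpy as np
from scipy.stats import beta as beta_dist

def sample_timesteps(batch_size, rank, world_size, a=2.0, b=2.0, rng=None):
    rng = rng or np.random.default_rng()
    # Each rank draws uniforms from its own stratum of [0, 1),
    # so the ranks jointly cover the distribution evenly.
    lo, hi = rank / world_size, (rank + 1) / world_size
    u = rng.uniform(lo, hi, size=batch_size)
    # Map through the Beta inverse CDF to shape where samples fall
    # along the diffusion schedule.
    return beta_dist.ppf(u, a, b)

print(sample_timesteps(4, rank=0, world_size=8))
```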

  • Full 512 token context length (upgraded from original 256)
  • Attention masking implementation for better token utilization
  • Restored classifier-free guidance functionality
  • Quantization support for lower VRAM usage (see the int8 sketch after this list)
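One way to exercise the quantization support is int8 weight quantization of the transformer. The sketch below uses the optimum-quanto library; treating it as the intended quantization path for LibreFLUX is an assumption.

```python
# Sketch: int8 quantization of the transformer to cut VRAM usage.
# Using optimum-quanto here is an assumption about tooling, not a
# documented requirement of LibreFLUX.
import torch
from diffusers import DiffusionPipeline
from optimum.quanto import freeze, qint8, quantize

pipe = DiffusionPipeline.from_pretrained(
    "jimmycarter/LibreFLUX",     # assumed repo id
    torch_dtype=torch.bfloat16,
)

# Quantize the largest component (the transformer) to int8 weights,
# then freeze it so the quantized weights are used at inference.
quantize(pipe.transformer, weights=qint8)
freeze(pipe.transformer)
pipe.to("cuda")
```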

Core Capabilities

  • High-quality text-to-image generation
  • Support for both short and long text prompts
  • Efficient inference with int8 quantization option
  • Easy integration with fine-tuning frameworks

Frequently Asked Questions

Q: What makes this model unique?

LibreFLUX offers a fully open-source, Apache 2.0 licensed version of FLUX with restored classifier-free guidance and improved token handling, making it suitable for both commercial use and further fine-tuning.

Q: What are the recommended use cases?

The model excels at general text-to-image generation and works best with a CFG scale between 2.0 and 5.0. It is particularly suitable for commercial applications that require an open-source license and for scenarios that need long prompt processing.
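In Diffusers terms, that range would map to the guidance_scale argument, as in the hedged sketch below; it reuses the pipe object from the loading sketch above and assumes the pipeline accepts guidance_scale and negative_prompt like other Diffusers pipelines.

```python
# Sketch: generation with CFG in the recommended 2.0-5.0 range.
# `pipe` is the pipeline loaded in the earlier sketch. Assumes the
# pipeline exposes standard `guidance_scale` and `negative_prompt`
# arguments like other Diffusers pipelines.
image = pipe(
    prompt=(
        "an extremely detailed oil painting of a fox in a snowy "
        "forest, soft morning light, visible brush strokes"
    ),
    negative_prompt="blurry, low quality",
    guidance_scale=3.5,       # within the recommended 2.0-5.0 range
    num_inference_steps=28,
).images[0]
image.save("fox.png")
```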
