F Lite: Freepik & Fal.ai unveil an open-source image model trained on licensed data

Generative AI is moving at blistering speed, driven by powerful open-source collaboration. Yet developing high-quality, large-scale models trained solely on licensed data remains a significant challenge. Today, Freepik and Fal.ai are proud to present F Lite, a powerful text-to-image model and a milestone in open, responsible AI.

Our AI research teams joined forces to build F Lite from scratch. Trained exclusively on high-quality, legally compliant, copyright-safe images from Freepik’s stock library, F Lite explores what’s possible with a much smaller dataset: just 80 million images, compared to the billion-plus typically used. This makes it potentially the largest publicly available text-to-image model trained entirely on legally sound content.

Meet F Lite

F Lite is a 10-billion-parameter model built on the DiT architecture, incorporating numerous enhancements. Although it was trained with less compute (64 H100 GPUs over two months) and less data than typical flagship models, it remains highly capable and ready for further innovation by the community.

Impressive performance with room to grow

F Lite excels at generating diverse, high-fidelity images and is especially strong in the illustrative and vector styles that reflect its training data. As a first release, it has some known limitations:

  • Fine-grained detail: Photorealistic images occasionally miss ultra-fine textures.
  • Complex scenes: Intricate compositions or anatomy may produce defects.
  • Prompt sensitivity: Optimal results require descriptive prompts; shorter prompts with less detail may underperform.
  • Text rendering: Accurate text in images remains a known challenge.

After rigorous testing and scrutiny, we believe F Lite’s core architecture and training methodology are sound. These limitations primarily reflect the bounds of the compute and data used.

Two flavors, tailored to your needs

We’re releasing two variants of F Lite. F Lite Regular is ideal for general-purpose use, while F Lite Textured offers enhanced aesthetic quality and richer textures and is best suited to more detailed prompts (it is less effective with vector styles and short prompts).

Try these demos now:

  • F Lite Regular: Hugging Face space and Fal
  • F Lite Textured: Hugging Face space and Fal

Both models are openly licensed, with the Regular and Textured weights available on Hugging Face. The model’s code is also open source, allowing you to use F Lite in ComfyUI, integrate it into your Python workflows via diffusers, or fine-tune it and train custom LoRAs.
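As a rough sketch of the diffusers route, loading and sampling the model could look something like the snippet below. The repo id, sampling settings, and the use of trust_remote_code are assumptions for illustration; check the official model cards for the exact identifiers and recommended parameters.

```python
import torch
from diffusers import DiffusionPipeline

# Assumed repo id for the Regular variant; see the Hugging Face model card for the real one.
pipe = DiffusionPipeline.from_pretrained(
    "Freepik/F-Lite",
    torch_dtype=torch.bfloat16,    # 10B parameters: half precision keeps memory manageable
    trust_remote_code=True,        # assumes the custom F Lite pipeline code ships with the repo
)
pipe = pipe.to("cuda")

image = pipe(
    prompt="A flat vector illustration of a lighthouse at sunset, warm palette",
    num_inference_steps=28,        # illustrative value, not an official recommendation
    guidance_scale=5.0,            # illustrative value
).images[0]
image.save("f_lite_sample.png")
```

The call follows the usual diffusers pattern; swapping in the Textured weights should only require changing the repo id.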

Deep dive into technical details

For AI enthusiasts and researchers, we’ve published a detailed F Lite Technical Report explaining the innovative methods used during training, including µ-Parameterization, WSD scheduling, Register Tokens, Residual Value Connections, Sequence Dropout, MaPO, and GRPO, among others.
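For readers unfamiliar with WSD scheduling, here is a minimal, generic sketch of the idea (linear warmup, a long stable plateau, then a short final decay). The hyperparameters below are purely illustrative and do not reflect F Lite’s actual training configuration; see the technical report for the real details.

```python
def wsd_lr(step: int, total_steps: int, peak_lr: float = 1e-4,
           warmup_steps: int = 1000, decay_frac: float = 0.1,
           min_lr: float = 1e-6) -> float:
    """Warmup-Stable-Decay (WSD) learning-rate schedule (generic sketch).

    Warmup: lr rises linearly from 0 to peak_lr.
    Stable: lr stays at peak_lr for most of training.
    Decay:  lr drops linearly to min_lr over the final fraction of steps.
    """
    decay_start = int(total_steps * (1.0 - decay_frac))
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    if step < decay_start:
        return peak_lr
    progress = (step - decay_start) / max(1, total_steps - decay_start)
    return peak_lr + (min_lr - peak_lr) * progress
```

One appeal of the long stable phase is that training can be extended, or decay checkpoints branched off, without restarting the run from scratch.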

Let’s build together

We’re excited to see how the community responds to F Lite! Whether fine-tuning for specific art styles, creating IP-Adapters or ControlNets, or optimizing quantized versions, we’re here to support your creativity.

A smaller, GPU-friendly “micro version” is also in the pipeline, aiming to bring F Lite’s power to even more creators.

Join the journey

The release of F Lite demonstrates that even without unlimited resources, focused innovation and collaboration within the open-source community can produce remarkable foundation models. Let’s shape the future of generative AI together!