<!-- index-menu -->
In this article, we will guide you step-by-step on how to use FLUX.1.
The open-source community has welcomed a new text-to-image model, FLUX.1, following the releases of SD 3 Medium and Kolors. Developed by former core members of Stability AI, FLUX.1 significantly surpasses SD 3 in image quality and even rivals the closed-source Midjourney v6.1. This makes FLUX.1 a new benchmark for AI-generated imagery and gives fresh momentum to open-source AI art.
The company behind FLUX.1 is Black Forest Labs, founded by the original Stable Diffusion team along with several former Stability AI researchers. Like Stability AI, Black Forest Labs is dedicated to building high-quality multimodal models and releasing them as open source. The company has already closed a $31 million seed funding round.
Black Forest Labs website: https://blackforestlabs.ai/
FLUX.1 comes in three versions: FLUX.1 [pro], FLUX.1 [dev], and FLUX.1 [schnell]. The first two outperform mainstream models such as SD3-Ultra, while even the smaller FLUX.1 [schnell] surpasses larger models such as Midjourney v6.0 and DALL·E 3.
Comparison of FLUX.1 ELO Scores with Mainstream Models
FLUX.1 excels at rendering text inside images, following complex prompts, and drawing human hands. Below are examples generated by its most powerful version, FLUX.1 [pro]. Even in images with large blocks of text or multiple characters, details such as lettering and hands come out without errors.
Examples of Images Generated by FLUX.1 [pro]
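If you have a suitable GPU and would rather run FLUX.1 yourself than rely on an online demo, the sketch below shows one common way to do so with Hugging Face diffusers. This is a minimal example, not a definitive setup: it assumes a diffusers version with Flux support (0.30 or later), a recent PyTorch install with the accelerate package, and enough GPU memory; "black-forest-labs/FLUX.1-schnell" is the public Hugging Face repository for the [schnell] version.

```python
# Minimal sketch: generating an image with FLUX.1 [schnell] via Hugging Face diffusers.
# Assumes diffusers >= 0.30 (Flux support), PyTorch, accelerate, and a GPU;
# CPU offload is enabled below to reduce the VRAM requirement.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",  # open-weight, speed-optimized version
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # move submodules to the GPU only when needed

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    guidance_scale=0.0,         # [schnell] is guidance-distilled, so no CFG
    num_inference_steps=4,      # [schnell] is tuned for very few steps
    max_sequence_length=256,
    generator=torch.Generator("cpu").manual_seed(0),  # fixed seed for reproducibility
).images[0]
image.save("flux-schnell.png")
```

For FLUX.1 [dev], you would swap the model ID for "black-forest-labs/FLUX.1-dev" (which requires accepting its non-commercial license on Hugging Face) and typically use a non-zero guidance scale with more inference steps.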
Here are some websites where you can try FLUX.1 online. If you just want to explore this powerful text-to-image model without installing anything, these free online demos are the easiest way to start.
FLUX.1-dev: