Flux.1
Flux.1 is the image generation model that reshaped the open-source AI art landscape. Created by Black Forest Labs, a company founded by key researchers behind the original Stable Diffusion, it delivers hyper-realistic photos, highly accurate text rendering, and a level of prompt adherence that makes earlier models feel primitive. The [dev] and [schnell] variants are open-weight, which has spawned one of the most active fine-tuning communities in AI imagery.
Overview
Launched in August 2024, Flux.1 arrived as a dramatic step up from the Stable Diffusion family. In nearly every benchmark that matters to practical users — text accuracy, photorealism, prompt adherence, and anatomical consistency — Flux.1 outperformed models that had dominated the space for years.
The model comes in three variants optimized for different needs: [pro] for maximum quality via API, [dev] for community fine-tuning and non-commercial local use, and [schnell] for real-time, 4-step generation. By April 2026, thousands of community LoRAs (lightweight, specialized fine-tunes) have been built on the [dev] weights, covering everything from specific artistic styles to corporate brand kits.
Key Features
- Best-in-Class Text Rendering: Flux.1 renders text within images with near-perfect spelling and typography — a capability where previous models consistently failed. Signs, labels, watermarks, and typographic art are now reliable.
- Photorealistic Precision: Ultra-high-frequency detail in skin, fabric, architecture, and natural environments that rivals professional photography.
- Exceptional Prompt Adherence: Places objects, colors, lighting, and compositional elements where you ask for them, with a precision that makes prompting feel like directing a real photographer.
- Three Purpose-Built Variants: Choose between maximum quality ([pro]), community fine-tuning ([dev]), or 4-step instant generation ([schnell]).
- Massive LoRA Ecosystem: The [dev] open weights have generated thousands of community fine-tunes for specific styles, characters, products, and aesthetics.
- ComfyUI Integration: First-class support in ComfyUI for complex, node-based workflows, ControlNet conditioning, and chained generation pipelines.
- FLUX.1 Tools: Variants for specific tasks — FLUX.1 Fill (inpainting), FLUX.1 Depth (depth-conditioned generation), FLUX.1 Canny (edge-conditioned generation).
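The Tools variants all take a prompt plus some form of conditioning input (a mask, a depth map, or an edge map). A minimal sketch of how they might slot behind one request-building helper; the endpoint IDs below are illustrative assumptions, not confirmed API routes:

```python
# Map each FLUX.1 Tools task to an API endpoint. These endpoint strings
# are placeholders for illustration; check your provider's docs for the
# real route names.
TOOL_ENDPOINTS = {
    "fill": "fal-ai/flux/fill",    # inpainting: prompt + image + mask
    "depth": "fal-ai/flux/depth",  # depth-conditioned generation
    "canny": "fal-ai/flux/canny",  # edge-conditioned generation
}

def build_request(task: str, prompt: str, **conditioning) -> dict:
    """Assemble a request payload for one of the FLUX.1 Tools variants."""
    if task not in TOOL_ENDPOINTS:
        raise ValueError(f"unknown task: {task}")
    return {
        "endpoint": TOOL_ENDPOINTS[task],
        "arguments": {"prompt": prompt, **conditioning},
    }

req = build_request("fill", "a red armchair",
                    image_url="room.png", mask_url="mask.png")
print(req["endpoint"])
```

The point is the shape of the call, not the exact strings: every Tools variant is still "prompt in, image out", with one extra conditioning input per task.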
Model Variants Explained
| Variant | Use Case | License | Speed | Quality |
|---|---|---|---|---|
| [pro] | Production, API | Commercial (API only) | Fast (API) | Highest |
| [dev] | Local, fine-tuning | Non-commercial | Moderate | Very High |
| [schnell] | Real-time, prototyping | Apache 2.0 | 4 steps (~2s) | Good |
Use Cases
Professional Design
- Logo Exploration: Generate typographic logos with accurate letterforms and complex compositions.
- Product Photography: Create realistic product shots with custom lighting and environments.
- Brand Identity: Maintain visual consistency across campaigns with fine-tuned LoRAs.
Creative & Art
- AI Photography: Hyper-realistic human portraits, street photography, and documentary-style imagery.
- Concept Art: Rapid ideation with high visual quality for games, film, and publishing.
- Typography Art: Posters, signs, and typographic compositions with reliable text accuracy.
Development & Applications
- App Asset Generation: Creating UI mockups, icons, and illustrations programmatically.
- E-commerce: Automated product visualization in different settings and colorways.
Getting Started
Option A: Use via API (Easiest)
```python
import fal_client

# Requires `pip install fal-client` and a FAL API key
# (set FAL_KEY in your environment): https://fal.ai/dashboard
result = fal_client.run(
    "fal-ai/flux/dev",
    arguments={
        "prompt": "A neon sign reading 'OPEN 24/7' glowing in a rainy Tokyo alley, photorealistic, f/1.8",
        "image_size": "landscape_4_3",
        "num_inference_steps": 28,
        "guidance_scale": 3.5,
    },
)
print(result["images"][0]["url"])
```
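The API returns a hosted URL rather than raw image bytes, so a script usually ends by downloading the file. A small stdlib-only helper, assuming the result shape shown above:

```python
import urllib.request

def save_image(url: str, path: str) -> str:
    """Download a generated image from the URL returned by the API."""
    urllib.request.urlretrieve(url, path)
    return path

# e.g. save_image(result["images"][0]["url"], "flux_output.png")
```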
Option B: Use in ComfyUI (Local)
- Install ComfyUI.
- Download the FLUX.1 [dev] weights from Hugging Face (24GB).
- Place the file in ComfyUI/models/unet/ (or use the ComfyUI Manager to install automatically).
- Load a Flux workflow from openart.ai/workflows and start generating.
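The weights are split across several files in ComfyUI's model layout. One possible download sequence, assuming the standard `huggingface-cli` tool (from `pip install huggingface_hub`), an accepted license on the FLUX.1 [dev] model page, and ComfyUI's default folder names (your install may differ):

```shell
# Diffusion weights (the 24GB file) -> ComfyUI/models/unet
huggingface-cli download black-forest-labs/FLUX.1-dev flux1-dev.safetensors \
    --local-dir ComfyUI/models/unet

# VAE -> ComfyUI/models/vae
huggingface-cli download black-forest-labs/FLUX.1-dev ae.safetensors \
    --local-dir ComfyUI/models/vae

# Text encoders (clip_l + t5xxl) go in ComfyUI/models/clip
```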
Option C: Online Platforms (No Setup)
- Fal.ai: fal.ai/models/fal-ai/flux — easy web UI + API.
- Replicate: replicate.com/black-forest-labs — API with free tier.
- Hugging Face Spaces: Free FLUX.1 [schnell] demo.
Prompting Best Practices
- Be descriptive about lighting: "golden hour light", "dramatic side lighting", "soft overcast diffusion."
- Specify photography parameters: "f/1.8 bokeh", "35mm lens", "shot on Hasselblad."
- For text: Put the desired text in quotes directly in the prompt, e.g. "OPEN 24/7".
- Use [schnell] for iteration: Fast and free (Apache 2.0) for rapid prompt testing, then switch to [dev] for final quality.
Pricing & Access
- FLUX.1 [schnell]: Free, Apache 2.0 — run locally or via free Hugging Face Spaces.
- FLUX.1 [dev]: Free weights (non-commercial). API access via Fal.ai (~$0.025/image) or Replicate.
- FLUX.1 [pro]: API only, commercial license.
  - Via Fal.ai: ~$0.05 per image.
  - Via Replicate: ~$0.055 per image.
- BFL API (Direct): api.bfl.ml — direct from Black Forest Labs, enterprise pricing.
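At these per-image rates, batch costs are simple multiplication. A back-of-the-envelope comparison using the approximate prices listed above (rates change, so treat the numbers as a snapshot):

```python
# Approximate per-image rates from the list above (USD).
RATES = {
    "dev (fal.ai)": 0.025,
    "pro (fal.ai)": 0.05,
    "pro (replicate)": 0.055,
}

def batch_cost(n_images: int) -> dict:
    """Estimated total cost for a batch at each listed rate."""
    return {name: round(n_images * rate, 2) for name, rate in RATES.items()}

# 1000 images: ~$25 on [dev] via fal.ai vs ~$50-55 on [pro].
print(batch_cost(1000))
```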
Limitations
- VRAM Requirements: [dev] requires 24GB VRAM for local inference (or CPU with 48GB+ RAM, very slowly).
- Non-Commercial [dev]: The dev weights cannot be used commercially without licensing from BFL.
- No Video: Flux.1 is image-only — use Luma or Runway for video.
- Slower than [schnell]: [dev] quality requires 20-50 inference steps, which takes 1-3 minutes locally.
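The 24GB figure follows from the model size: FLUX.1 is roughly a 12-billion-parameter model, and at 16-bit precision each parameter occupies 2 bytes. A quick worked calculation (weights only; activations and text encoders add overhead on top):

```python
def weight_memory_gib(params_billion: float, bytes_per_param: float) -> float:
    """GiB needed just to hold the weights (activations add more)."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

# ~12B parameters at bf16 (2 bytes each) -> ~22.4 GiB, hence the
# "24GB card" requirement once activations are included.
print(round(weight_memory_gib(12, 2), 1))

# 8-bit quantization (1 byte each) roughly halves that, which is why
# quantized community builds fit on smaller cards.
print(round(weight_memory_gib(12, 1), 1))
```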
Community & Support
- Black Forest Labs: blackforestlabs.ai
- Hugging Face: huggingface.co/black-forest-labs
- GitHub: github.com/black-forest-labs/flux
- Reddit: r/StableDiffusion — largest community for Flux workflows and LoRAs.
- Civitai: civitai.com — thousands of community Flux LoRAs.