FLUX Tools now available via Together APIs: Get greater control over image generation with Canny, Depth and Redux models
Starting today on Together AI, you can access three powerful new image generation models from Black Forest Labs (BFL)'s FLUX Tools suite, giving you fine-grained control over composition, depth, and style. These models enable you to iterate on existing images with image → image and image + text → image workflows, which unlocks lots of exciting use cases for AI developers. We're excited to bring you these new models on day 1, continuing our mission to give developers fast, simple API access to the best new models.
TL;DR:
- Three new FLUX Tools image generation models from BFL now available on Together AI: Canny for precise composition control, Depth for accurate spatial relationships, and Redux for instant image variations.
- FLUX.1 [dev] now also available on Together AI, offering higher quality image generation than schnell. Try it now.
- Reminder: We’re offering a free endpoint for FLUX schnell until the end of the year
{{custom-cta-1}}
What's New?
We're launching support for three brand-new FLUX Tools that are part of the FLUX [dev] model family:
FLUX.1 [dev] Canny: Takes an image + text prompt, and creates variations of that image based on the prompt, while preserving the structural composition using edge detection.
FLUX.1 [dev] Depth: Takes an image + text prompt, and generates images that maintain the spatial relationships of the reference image using depth mapping.
FLUX.1 [dev] Redux: Takes an image input and produces slight image variations without requiring a text prompt.
FLUX.1 [dev]: A powerful open-source image generation model for creating high-quality images from text prompts.
How these new FLUX Tools work
At the heart of these new models is ControlNet - a method developed by our friends at Stanford that gives you precise control over image generation by using additional input images as guides. Think of it as providing a structural blueprint that the model follows while generating new images. This means more predictable outputs and fewer attempts to get the exact image you want.
FLUX Canny
Canny edge detection is a computer vision technique that identifies the boundaries and important details in an image. When you provide a reference image and a text prompt, the FLUX Canny model automatically creates an edge map that looks like a detailed sketch of your image.
The model then uses these edges as a framework, allowing you to generate new images that maintain the same structural composition while changing style, content, or artistic elements based on your prompt.
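To make the edge-map idea concrete, here is a toy sketch in numpy. The real Canny detector adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding; this simplified version keeps only the core intuition that edges are where pixel intensity changes sharply:

```python
import numpy as np

def edge_map(gray: np.ndarray, threshold: float = 0.25) -> np.ndarray:
    """Toy edge map via gradient magnitude (a simplification of Canny).

    Pixels whose intensity gradient exceeds a fraction of the maximum
    gradient are marked as edges (1); everything else is background (0).
    """
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold * magnitude.max()).astype(np.uint8)

# A synthetic image: dark background with a bright square in the middle.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

edges = edge_map(img)
# Edges fire along the square's boundary; the flat interior and the
# flat background produce no gradient, so they stay 0.
```

The edge map FLUX Canny produces plays the same structural role: a sparse sketch of where the image's boundaries are, which the generator then fills in according to your prompt.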
FLUX Depth
Depth maps are like 3D blueprints of an image - they show how far each object is from the viewer, usually represented in grayscale where lighter areas are closer and darker areas are further away.
The FLUX Depth model automatically generates these maps from your reference images and uses them to ensure new generations maintain proper spatial relationships. This is particularly powerful for scenarios where maintaining accurate perspective and object positioning is crucial.
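The grayscale convention above is easy to illustrate. This minimal numpy sketch converts per-pixel distances into a depth map where closer objects are lighter and farther objects are darker:

```python
import numpy as np

def to_depth_map(distances: np.ndarray) -> np.ndarray:
    """Convert per-pixel distances (in arbitrary units) to an 8-bit
    grayscale depth map: nearest pixel -> 255 (white),
    farthest pixel -> 0 (black)."""
    d = distances.astype(float)
    normalized = (d - d.min()) / (d.max() - d.min())
    return ((1.0 - normalized) * 255).astype(np.uint8)

# A toy scene: background 10 m away, one object at 1 m, one at 5 m.
scene = np.full((4, 4), 10.0)
scene[0, 0] = 1.0   # nearest object -> lightest pixel
scene[3, 3] = 5.0   # mid-distance object -> mid gray

depth = to_depth_map(scene)
```

FLUX Depth estimates a map like this directly from your reference photo, then uses it as a spatial scaffold so that generated objects keep the same relative distances from the viewer.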
FLUX Redux
Redux takes a different approach - just feed it an image, and it reproduces the image with slight variations, allowing you to easily refine a given image.
For example, it could be used to generate different angles for product imagery:
Getting Started with FLUX
Try the new FLUX models today in our playground or connect via our APIs:
Experiment with our open-source example apps built on FLUX:
- Blinkshot: Real-time image playground
- LogoCreator: Professional logo generation tool
Contact us to discuss an enterprise deployment of FLUX or dedicated GPU endpoints.
We’re also providing free access to FLUX.1 [schnell] via a free model endpoint until the end of this year.
Use our Python SDK to quickly integrate the new FLUX models into your applications:
Happy building!