October 27th, 2025

ActiveFence Partners with Black Forest Labs to Strengthen Safety for FLUX


With GenAI systems facing growing challenges related to content safety, misuse, and policy compliance, ActiveFence continues to stand at the forefront of responsible AI deployment.

Recently, Black Forest Labs, the developer behind FLUX, one of the world’s most advanced image generation models, partnered with ActiveFence to reinforce safety and trust across multiple model releases. FLUX is widely used across creative, entertainment, and digital media industries, delivering high-fidelity text-to-image and image-to-image generation.

To uncover safety vulnerabilities, ActiveFence conducted a series of expert-led red teaming exercises, crafting hundreds of adversarial prompts designed to expose edge cases, stress-test safeguards, and identify areas where malicious users might attempt to bypass protections. The insights gathered led to targeted retraining, policy refinement, and stronger enforcement mechanisms.

ActiveFence’s work was critical in surfacing issues related to non-consensual intimate imagery (NCII), child safety, inappropriate deepfakes, and broader misuse patterns: risks that can be difficult to detect through internal testing alone. The team helped Black Forest Labs meet tight release deadlines while ensuring the model could launch with stronger safeguards and greater resilience against emerging abuse tactics.

The result: safer outputs, preserved creative fidelity, and a new standard for responsible model development in generative AI.

Read the full case study here.
