AI is transforming law enforcement: it offers new tools for policing, but it also enables advanced criminal tactics that challenge traditional investigative methods. The global nature of crime, including cyber threats, trafficking, and terrorism, calls for innovative solutions as law enforcement agencies (LEAs) face vast data volumes and increasingly sophisticated criminal activity.

A particular concern is deepfakes: highly realistic but fabricated audio, video, or text that can depict individuals saying or doing things they never did. Deepfakes pose serious risks to politics, the economy, and social trust. Examples include fabricated videos of political figures and voice-cloned audio used for financial fraud, often spread through social networks to deceive and defraud at scale. Forensic institutes and courts struggle to distinguish authentic evidence from AI fabrications, especially in cases involving national security. Despite promising detection research, existing methods fall short: current models rely on limited, non-diverse datasets and produce results with limited legal admissibility.
The DETECTOR initiative aims to address these challenges by supporting LEAs and forensic experts in analysing altered media. It offers an integrated solution through cross-border collaboration among AI researchers, LEAs, forensic scientists, legal experts, and ethicists. DETECTOR’s goals include:
MKLab participates in DETECTOR by contributing to the generation, curation, and multilingual annotation of synthetic and manipulated text datasets. It also supports the development, evaluation, and explainable integration of AI-based methods for detecting, characterising, and attributing AI-generated or stylistically manipulated textual content.
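To give a flavour of what characterising AI-generated text can involve, the toy sketch below computes a few simple stylometric surface features (vocabulary diversity, sentence length, word length) that are sometimes used as weak signals in text-authorship analysis. This is a generic illustration only, not DETECTOR's or MKLab's actual method; all function and feature names here are hypothetical.

```python
# Illustrative stylometric baseline; NOT the DETECTOR project's method.
import re
from statistics import mean

def stylometric_features(text: str) -> dict:
    """Compute simple surface-level features of a text sample."""
    # Split into rough sentences on terminal punctuation.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Lowercased word tokens.
    tokens = re.findall(r"\b\w+\b", text.lower())
    if not tokens or not sentences:
        return {"type_token_ratio": 0.0, "avg_sentence_len": 0.0, "avg_word_len": 0.0}
    return {
        # Vocabulary diversity: unique tokens / total tokens.
        "type_token_ratio": len(set(tokens)) / len(tokens),
        # Mean number of tokens per sentence.
        "avg_sentence_len": len(tokens) / len(sentences),
        # Mean characters per token.
        "avg_word_len": mean(len(t) for t in tokens),
    }

sample = "The quick brown fox jumps over the lazy dog. It barked."
feats = stylometric_features(sample)
```

In practice, such hand-crafted features are far too weak on their own; modern detectors rely on trained language-model classifiers, and a key point in the text above is that their outputs must also be explainable and legally admissible.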
HORIZON-CL3-2024-FCT-01