OSPREY

Public-facing professionals (PFPs) form the backbone of modern democratic institutions. Yet in recent years, the erosion of democratic values has exposed journalists, judges, elected officials, and others who operate in the public eye to growing levels of hostility and risk. The visible nature of their work makes them vulnerable to online harms, understood as behaviours in digital environments that may inflict physical, emotional, or reputational damage. With the rapid expansion of AI-driven content generation and manipulation, these threats now extend not only to PFPs themselves but also to their families, whose personal lives can be targeted, amplified, or fabricated at scale.

Despite the urgency of the issue, efforts to understand and mitigate online harms against PFPs remain fragmented. Conceptually, online risks often arise from dynamics that differ from traditional hate-driven behaviour, yet current frameworks rarely capture this. A knowledge gap further limits the early identification of weak signals and precursors of escalation, while law enforcement agencies (LEAs) face a capabilities gap, as technological tools and operational guidance for threat detection, assessment, digital forensics, and deterrence remain insufficient. PFPs and their families experience a support and awareness gap, with few resources to help them understand or respond to targeted harms, including those directed at children. Finally, policy gaps across regulation, harmonisation, and institutional coordination continue to limit a coherent response.

To address these challenges, OSPREY will develop novel multi-faceted, multidisciplinary and operationally focused approaches, tools and capabilities for the protection of PFPs and, specifically, of police officers and elected officials at the local, national and EU levels. The main objectives of the OSPREY project include:

  • To establish the link between online harms against PFPs, online safety and democratic resilience
  • To build a knowledge base and taxonomical classifications of PFP-specific online harms and attack vectors
  • To create an integrated triage model for risk assessment and risk mitigation
  • To develop modular AI-driven risk and threat analytics capabilities for LEAs
  • To develop novel digital forensics tools for evidence collection, selection and reporting
  • To develop training curricula for law enforcement agencies, PFPs and public-facing organisations (PFOs)
  • To develop an operationally focused, sustainable and transferable toolbox for PFPs, PFP-family members and PFOs to aid harm prevention, mitigation and support
  • To raise awareness, support evidence-informed policy, and build capacity for addressing online harms against PFPs

MKLAB is the Technical Manager of the project and also leads a Work Package on the co-development of online harms and privacy exposure detection and analysis capabilities. Within this Work Package, MKLAB leads the development of explainable linguistic analysis solutions that detect privacy-sensitive information and online harms against PFPs, as well as social network analysis methods for identifying perpetrators and potential victims, and it also contributes to the discovery, collection, and acquisition of PFP-related online data. Moreover, MKLAB leads the co-development of solutions for impact assessment and for the mitigation of harm and privacy exposure, and contributes both to the co-development of risk assessment solutions for harm and privacy exposure and to the advanced data pipeline scheduling and orchestration for intelligence filtering and enhancement.

Website

cordis.europa.eu

Program

HORIZON-CL3-2024-FCT-01-06

Contact

  • Tsikrika Theodora
  • Vrochidis Stefanos