• SocialSensor

    Sensing User Generated Input for Improved Media Discovery and Experience

    Project Description:
    SocialSensor will develop a new framework for enabling real-time multimedia indexing and search in the Social Web. The goal is to mine and aggregate user inputs and content over multiple social networking sites. Social Indexing will incorporate information about the structure and activity of the users’ social network directly into the multimedia analysis and search process.

    Furthermore, it will enhance the multimedia consumption experience by developing novel user-centric media visualization and browsing paradigms. For example, SocialSensor will analyse massive and dynamic user contributions in order to extract unbiased trending topics and events, and will use social connections for improved recommendations.

    Relevance for MULTISENSOR:
    Topic detection based on multimodal representation, content integration (WP4)
    Reuse social topic detection approaches


    > SocialSensor Website

  • TrendMiner

    Large-scale, Cross-lingual Trend Mining and Summarisation of Real-Time Media Streams

    Project Description:
    The goal of this project is to deliver innovative, portable, open-source, real-time methods for cross-lingual mining and summarisation of large-scale media streams. TrendMiner will achieve this through an interdisciplinary approach, combining deep linguistic methods from text processing, knowledge-based reasoning from web science, machine learning, economics, and political science. No expensive human-annotated data will be required, because time-series data (e.g. financial markets, political polls) is used as a proxy supervision signal; a minimal sketch of this idea is given at the end of this entry. A key novelty will be weakly supervised machine learning algorithms for the automatic discovery of new trends and correlations. Scalability and affordability will be addressed through a cloud-based infrastructure for real-time text mining from media streams.

    Relevance for MULTISENSOR:
    Event detection (WP2, WP4), emotion (WP3), hidden meaning extraction (WP5)
    Follow the updates in real-time data collection (WP3).


    > TrendMiner Website
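
    The proxy-supervision idea mentioned above can be illustrated with a small, hypothetical sketch (illustrative only, not TrendMiner code; all names and data are assumptions): daily frequencies of candidate terms in a text stream are correlated with an external time series such as a daily poll figure, and the terms whose usage best tracks that series are surfaced as candidate trend indicators.

    # Hypothetical sketch of weak supervision via a time-series proxy:
    # correlate daily term frequencies in a text stream with an external
    # signal (e.g. a daily poll) and rank terms by that correlation.
    from collections import Counter
    from datetime import date
    from math import sqrt

    def pearson(x, y):
        """Pearson correlation between two equal-length number sequences."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy) if sx and sy else 0.0

    def trend_candidates(docs, proxy, top_k=10):
        """Rank terms whose daily frequency correlates with the proxy series.

        docs  : list of (date, [tokens]) pairs from the media stream
        proxy : dict mapping date -> proxy value (e.g. a daily poll figure)
        """
        days = sorted(proxy)
        counts = {d: Counter() for d in days}
        for d, tokens in docs:
            if d in counts:
                counts[d].update(tokens)
        vocab = set().union(*(c.keys() for c in counts.values()))
        signal = [proxy[d] for d in days]
        scored = [(pearson([counts[d][t] for d in days], signal), t) for t in vocab]
        return sorted(scored, reverse=True)[:top_k]

    # Toy usage: the term whose daily counts track the poll comes out on top.
    docs = [(date(2013, 1, 1), ["tax", "budget"]), (date(2013, 1, 2), ["tax", "tax"])]
    proxy = {date(2013, 1, 1): 40.0, date(2013, 1, 2): 42.5}
    print(trend_candidates(docs, proxy, top_k=3))

    This is only a crude stand-in: in the project's actual pipeline the correlation step corresponds to weakly supervised learning over far richer features, but the principle of letting an external time series stand in for manual labels is the same.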

  • XLike

    Cross-lingual Knowledge Extraction

    Project Description:
    The goal of the XLike project is to develop technology to monitor and aggregate knowledge that is currently spread across mainstream and social media, and to enable cross-lingual services for publishers, media monitoring and business intelligence.

    The aim is to combine insights from several scientific areas to advance cross-lingual text understanding. By combining modern computational linguistics, machine learning, text mining and semantic technologies, the project plans to deal with the following two key open research problems:

    • to extract and integrate formal knowledge from multilingual texts with cross-lingual knowledge bases
    • to adapt linguistic techniques and crowdsourcing to deal with irregularities in informal language used primarily in social media.

    Relevance for MULTISENSOR:
    Event detection (WP2), sentiment extraction (WP3), semantic integration
    Take into account XLike work (cross-lingual media monitoring).


    > XLike Website

  • NewsReader

    Building Structured Event Indexes of Large Volumes of Financial and Economic Data for Decision Making

    Project Description:
    The goal of the NewsReader project is to process news in four different languages to extract what happened to whom, when and where, thereby removing duplication, complementing information, registering inconsistencies and keeping track of the original sources. Any new information is integrated with the past, distinguishing the new from the old in an unfolding story line and providing constant access to all original sources and details (like a “History Recorder”).

    A decision-support tool will be developed that allows professional decision makers to explore these story lines through visual interfaces and interactions, exploiting their explanatory power and their systematic structural implications. In this way, NewsReader will help to predict future events from the past, or to explain new events and developments through the past.

    Relevance for MULTISENSOR:
    Extract information from multimedia (WP2)
    Follow the work on indexing big financial data (WP2, WP4) and decision making (WP6)


    > NewsReader Website

  • EUMSSI

    Event Understanding through Multimodal Social Stream Interpretation

    Project Description:
    The main objective of EUMSSI is to develop technologies for identifying and aggregating data presented as unstructured information in sources of very different nature (video, image, audio, speech, text and social context), including both online (e.g. YouTube) and traditional media (e.g. audiovisual repositories). Furthermore, it aims to deal with information of very different degrees of granularity.

    A core idea is that the process of integrating content from different media sources is carried out in an interactive manner, so that the data resulting from one medium helps reinforce the aggregation of information from the others, within a cross-modal, interoperable semantic representation framework. All of this will be integrated in a multimodal platform of state-of-the-art information extraction and analysis techniques from the different fields involved.

    Relevance for MULTISENSOR:
    Summarisation techniques
    Follow the advancements through common workshops, data exchange and use-case discussions.


    > EUMSSI Website

  • REVEAL

    Social Media Verification

    Project Description:
    The world of media and communication is currently experiencing enormous disruptions: from one-way communication and word-of-mouth exchanges, we have moved to bi- or multidirectional communication patterns. No longer can a select few act as gatekeepers, deciding what is communicated to whom and what is not. Individuals now have the opportunity to access information directly from primary sources, through a channel we label ‘e-word of mouth’, or what we commonly call ‘Social Media’.

    A key problem is that it takes considerable effort to distinguish useful information from ‘noise’ (i.e. useless or misleading information), and finding relevant information is often tedious. REVEAL aims to discover the higher-level concepts hidden within this information. In social media we do not only have bare content; we also have interconnected sources, the interactions between them, and many indicators about the context in which content is used and interactions take place. A core challenge is to decipher the interactions of individuals in permanently changing constellations, and to do so in real time.

    Relevance for MULTISENSOR:
    Extract information from multimedia (WP2)
    Community detection (WP3)
    Content integration (WP4)


    > REVEAL Website

  • PERICLES

    Promoting and Enhancing Reuse of Information throughout the Content Lifecycle taking account of Evolving Semantics

    Project Description:
    PERICLES aims to address the challenge of ensuring that digital content remains accessible in an environment that is subject to continual change. This can encompass not only technological change, but also changes in semantics, academic or professional practice, or society itself, which can affect the attitudes and interests of the various stakeholders that interact with the content. PERICLES will take a ‘preservation by design’ approach that involves modelling, capturing and maintaining detailed and complex information about digital content, the environment in which it exists, and the processes and policies to which it is subject.

    Relevance for MULTISENSOR:
    Content integration (WP4)
    Semantic representation (WP5)


    > PERICLES Website

  • KRISTINA

    Knowledge-Based Information Agent with Social Competence and Human Interaction Capabilities

    Project Description:
    KRISTINA’s overall objective is to research and develop technologies for a human-like, socially competent and communicative agent that runs on mobile communication devices and serves migrants facing language and cultural barriers in their host country as a trusted provider of information and as a mediator in questions related to basic care and healthcare.
    To develop such an agent, KRISTINA will advance the state of the art in dialogue management, multimodal (vocal, facial and gestural) communication analysis and multimodal communication. The technologies will be validated in two use cases, with prolonged trials carried out for each prototype and a representative number of users recruited from the migrant communities identified as especially in need: elderly Turkish migrants and their relatives as well as short-term Polish caregiving personnel in Germany, and North African migrants in Spain.

    Relevance for MULTISENSOR:
    Automatic speech recognition (WP2)
    Content integration (WP4)
    Semantic reasoning (WP5)


    > KRISTINA Website

  • TENSOR

    Retrieval and Analysis of Heterogeneous Online Content for Terrorist Activity Recognition

    Project Description:
    Law Enforcement Agencies (LEAs) across Europe today face important challenges in how they identify, gather and interpret terrorist-generated content online. The Dark Web presents additional challenges due to its inaccessibility and the fact that undetected material can contribute to the advancement of terrorist violence and radicalisation. LEAs also face the challenge of extracting and summarising meaningful and relevant content hidden in huge amounts of online data to inform their resource deployment and investigations.
    The main objective of the TENSOR project is to provide a powerful terrorism intelligence platform offering LEAs fast and reliable planning and prevention functionalities for the early detection of organised terrorist activities, radicalisation and recruitment. The platform integrates a set of automated and semi-automated tools for:

    • efficient and effective searching, crawling, monitoring and gathering of online terrorist-generated content from the Surface and the Dark Web;
    • Internet penetration through intelligent dialogue-empowered bots;
    • information extraction from multimedia (e.g. video, images, audio) and multilingual content;
    • content categorisation, filtering and analysis;
    • real-time summarisation and visualisation of relevant content;
    • creation of automated audit trails;
    • privacy-by-design and data protection.

    Relevance for MULTISENSOR:
    Extract information from multimedia, automatic speech recognition, machine translation (WP2)
    Classification and topic and event detection approaches (WP4)
    Semantic reasoning (WP5)
    Summarisation techniques (WP6)


    > TENSOR Website

  • InVID

    In Video Veritas – Verification of Social Media Video Content for the News Industry

    Project Description:
    The digital media revolution is bringing breaking news to online video platforms, and news organisations delivering information via web streams and TV broadcasts often rely on user-generated recordings of breaking and developing news events shared on social media to illustrate a story. However, in video there is also deception. Access to increasingly sophisticated editing and content-management tools, and the ease with which fake information spreads in electronic networks, require reputable news outlets to carefully verify third-party content before publishing it.
    InVID will build a platform providing services to detect, authenticate and check the reliability and accuracy of newsworthy video files and video content spread via social media.

    Relevance for MULTISENSOR:
    Extract information from multimedia (WP2)
    Social media analysis techniques (WP3)


    > InVID Website