Nowadays, devices and systems are becoming increasingly powerful, including software platforms that integrate multimodal sensors and components, as well as agent-based systems. At the same time, multimedia content analytics and retrieval technologies are advancing rapidly, delivering very promising results in real-time audiovisual analysis and retrieval, speech recognition and natural language processing, as well as the fusion of multimodal information, to name a few. However, the interface between humans and computers often lags behind and constitutes a bottleneck for the seamless and efficient use of these technologies in real-world applications.

Therefore, there is an important need to develop new technologies that offer interactions closer to the communication patterns of human beings and allow for intuitive and hence more “natural” communication with systems, by combining knowledge from research in multimedia analysis and retrieval with the multimodal interaction domain. Such technologies will enable the development of autonomous intelligent systems capable of processing audiovisual information, including audio, spoken language and visual context, in order to interact with humans in a natural way. Such systems have many real-world applications, for instance avatars acting as tele-operators to support caregiving for elderly people, customer support services, tourist agents, etc. Other applications include computer, tablet and mobile phone interfaces for disabled people and for professional users who need to interact with complex and large-scale multimedia information.


Registration is open at the ICMR website.

The MARMI2016 program has been announced!

A keynote on “Event-based MultiMedia Search and Retrieval for Question Answering” by Benoit Huet has been announced!

The MARMI2016 accepted papers have been announced!

The MARMI2016 deadline has been extended! The new submission deadline is March 8th, 2016!


Supported by:
KRISTINA, MULTISENSOR