This special session presents recent work and applications in multimedia analysis and multimodal interaction in the context of health and basic care. While devices and systems are becoming increasingly powerful and content analytics and retrieval technologies are advancing rapidly, the interface between human and computer often lags behind and constitutes a bottleneck for efficient use in real-world applications. This is especially important in health and basic care applications, where interaction with humans is even more critical due to the special needs and urgent situations involved. By leveraging multidisciplinary expertise that combines knowledge from research in multimedia analysis and multimodal interaction, new technologies are required to offer interactions that are closer to the communication patterns of human beings and allow for more “natural” communication with systems in the context of health and basic care. This is currently envisioned by recent research aiming to develop knowledge-based, autonomous, human-like social agents that can analyze, retrieve information, and learn from conversational spoken and multimodal interaction in order to support caregiving scenarios.
In parallel, over the last few years there has been an increasing need for video content processing in health applications. A characteristic example is video from endoscopic procedures and surgeries, as endoscopists and surgeons increasingly archive the videos they record while performing endoscopic interventions. These endoscopic videos contain valuable information that can be used for later inspection, for explanations to patients, for case investigations, and for training purposes.
Therefore, there is a pressing need for powerful multimedia systems that can effectively process huge amounts of video data with highly similar content and make them available for content exploration and retrieval.
The research topics of the special session include but are not limited to:
- Multimedia analysis and retrieval for multimodal interaction in the health domain
- Multimedia indexing and retrieval with video recordings from medical endoscopy
- Multimodal conversation and dialogue systems for social companion agents
- Exploration and retrieval in endoscopic video
- Speech and audio analysis and retrieval for health applications
- Facial analysis and gesture recognition in social agents
- Fusion of multimedia information for health and care-giving applications
- Semantic web approaches for multimedia health applications
- Multimodal analytics for human machine interaction in the health domain
- Web and social media retrieval for knowledge-based social companion applications
Organizers
Stefanos Vrochidis (Centre for Research and Technology Hellas, Information Technologies Institute)
Leo Wanner (ICREA – Universitat Pompeu Fabra)
Elisabeth André (University of Augsburg)
Klaus Schoeffmann (Klagenfurt University)
Reviewers
Georgios Meditskos (ITI-CERTH)
Bernd Muenzer (Klagenfurt University, Austria)
Adrian Muscat (University of Malta, Malta)
Eleni Kamateri (ITI-CERTH)
Stamatia Dasiopoulou (UPF)
Francois Bremond (INRIA, France)
Monika Dominguez (UPF)
Wolfgang Minker (University of Ulm)
Manfred Juergen Primus (Klagenfurt University, Austria)
Mariana Damova (Mozaika, Bulgaria)
Andreas Leibetseder (Klagenfurt University, Austria)
Jenny Benois-Pineau (LABRI, France)
Stefan Petscharnig (Klagenfurt University, Austria)
Important dates
– Deadline for paper submission: August 31, 2016 (extended from August 1, 2016)
– Notification of acceptance: October 22, 2016 (extended from October 10, 2016)
– Camera-ready paper and registration: November 4, 2016 (extended from October 30, 2016)
– Conference starts: January 4th, 2017
Accepted Papers
– Deep Learning of Shot Classification in Gynecologic Surgery Videos
– Classification of sMRI for AD diagnosis with Convolutional Neuronal Networks: a pilot 2-D+e study on ADNI
– Description Logics and Rules for Multimodal Situational Awareness in Healthcare
– Speech Synchronized Tongue Animation by Combining Physiology Modeling and X-ray Image Fitting
– Boredom Recognition based on Users’ Spontaneous Behaviors in Multiparty Human-Robot Interactions
Program
Thursday, January 5, 2017, 10:50 – 12:30
Room: V102
10:50 – 11:20: Paper advertisement: Each author will have 6 minutes to give a short presentation of his/her paper.
- Deep Learning of Shot Classification in Gynecologic Surgery Videos
- Classification of sMRI for AD diagnosis with Convolutional Neuronal Networks: a pilot 2-D+e study on ADNI
- Description Logics and Rules for Multimodal Situational Awareness in Healthcare
- Speech Synchronized Tongue Animation by Combining Physiology Modeling and X-ray Image Fitting
- Boredom Recognition based on Users’ Spontaneous Behaviors in Multiparty Human-Robot Interactions
11:20 – 12:00: Poster session: The session includes posters for all accepted papers.
- Deep Learning of Shot Classification in Gynecologic Surgery Videos
- Classification of sMRI for AD diagnosis with Convolutional Neuronal Networks: a pilot 2-D+e study on ADNI
- Description Logics and Rules for Multimodal Situational Awareness in Healthcare
- Speech Synchronized Tongue Animation by Combining Physiology Modeling and X-ray Image Fitting
- Boredom Recognition based on Users’ Spontaneous Behaviors in Multiparty Human-Robot Interactions
12:00 – 12:30: Panel discussion: The panel will discuss in more depth the research topics addressed by this special session.
Panel Members:
Kotaro Funakoshi (Honda Research Institute Japan)
Wolfgang Minker (University of Ulm)
Jenny Benois-Pineau (Universite de Bordeaux)
Klaus Schoeffmann (Klagenfurt University)
Stefanos Vrochidis (Centre for Research and Technology Hellas, Information Technologies Institute)
Conference
The 23rd International Conference on Multimedia Modeling (MMM) will be held on January 4–6, 2017 in Reykjavik, Iceland. MMM is a leading international conference for researchers and industry practitioners to share new ideas, original research results, and practical development experiences from all MMM-related areas. The conference calls for full research papers reporting original research and investigation results, as well as demonstration proposals, in all areas related to multimedia modeling technologies and applications. In addition, five special sessions will be held at MMM 2017, and the conference will also host the CrowdMM workshop and the popular Video Browser Showdown. For more details, please see http://mmm2017.ru.is/.
Paper formatting instructions
Papers must be formatted according to the MMM 2017 guidelines.
Submission website
Special session papers can be submitted here.