The Lab has extensive experience and expertise in semantic multimedia analysis, indexing and retrieval; social media and big data analytics; knowledge structures, reasoning and personalization for multimedia applications; and eHealth and environmental applications.

MKLab participation in the TRECVID 2016 AVS task

MKLab successfully participated in the Ad-hoc Video Search (AVS) task of TRECVID 2016. The AVS task attempts to model the end-user video search use case, in which the user is looking for video segments containing persons, objects, activities, locations, etc., and combinations thereof. To express this information need, the user provides the retrieval system with a short query in natural language, e.g., “Find shots of a person playing guitar outdoors”. This year's experiments were performed on a set of Internet Archive videos totalling about 600 hours of video, using 30 different queries.

Our FastAR Prototype short-listed for the Innovation Radar Prize 2016

The FastAR prototype, developed by our lab in the context of the Live+Gov EU-funded project, has been short-listed for the Innovation Radar Prize 2016. Our innovation is one of 10 that have been selected to compete for the prize in the "ICT for Society" category. Once the information on all innovations is published on the Innovation Radar webpage, a public vote will be open for one month, starting at the end of July.

1st Int. Workshop on Semantic Change & Evolving Semantics - SuCCESS’16

1st Int. Workshop on “Semantic Change & Evolving Semantics” (SuCCESS’16), Leipzig, Germany, 12 September 2016, in conjunction with the 12th International Conference on Semantic Systems (SEMANTiCS 2016). The proceedings of the workshop will be published in the digital library of the ACM International Conference Proceeding Series (ICPS).

Workshop objectives: This half-day workshop aims to explore emerging research in the areas of semantic change and evolving semantics. Semantics differ across contexts, domains, communities and time.

Should we expect drones to change the way we experience culture?

In MKLab we are definitely trying to answer yes! By collaborating with drone experts, 3D specialists and archeological sites, we have set ourselves the goal of creating new 3D cultural worlds. The Palace of Philip II at Aigai (Vergina) is one of the most prominent archeological treasures that we are trying to capture and transform into a 3D cultural world. During April 2016, a team of drone experts from Liverpool John Moores University flew over the Palace of Philip II at Aigai to capture a huge volume of images that will enable the full 3D reconstruction of the site.

Pottery gestures style comparison by exploiting Myo sensor and forearm anatomy

MKLab performs research on comparing the style of pottery gestures by exploiting the Myo sensor and forearm anatomy. In particular, a set of Electromyogram (EMG)-based features, such as total muscle pressure, flexors pressure, extensors pressure, and gesture stiffness, has been used to identify differences in performing the same gesture across three pottery constructions, namely a bowl, a cylindrical vase, and a spherical vase. To identify these EMG-based features, we have developed a tool that visualizes, in real time, the signals generated by a Myo sensor along with the muscle activation level in 3D space.
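The general idea behind such features can be sketched as follows. This is a minimal illustrative example, not the lab's actual pipeline: the function names, the channel groupings (which Myo channels sit over flexors vs. extensors), and the stiffness proxy are all assumptions made for the sketch. It computes a moving RMS envelope per EMG channel (a common muscle-activation proxy) and aggregates it into coarse pressure-style features.

```python
import numpy as np

def rms_envelope(emg, window=50):
    """Moving RMS envelope of a 1-D EMG signal (a common activation proxy)."""
    squared = np.square(emg.astype(float))
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(squared, kernel, mode="same"))

def gesture_features(emg_channels, flexor_idx, extensor_idx):
    """Aggregate per-channel envelopes into coarse pressure-style features.

    emg_channels: array of shape (n_channels, n_samples), e.g. the 8 Myo channels.
    flexor_idx / extensor_idx: hypothetical channel groupings over the forearm.
    """
    env = np.vstack([rms_envelope(ch) for ch in emg_channels])
    return {
        "total_pressure": env.mean(),            # overall activation
        "flexors_pressure": env[flexor_idx].mean(),
        "extensors_pressure": env[extensor_idx].mean(),
        "stiffness": env.std(),                  # envelope variability as a crude stiffness proxy
    }
```

Feature vectors like these can then be compared across potters or across vessel types (bowl vs. cylindrical vs. spherical vase) with any standard distance or classifier.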

Efficient dimensionality reduction and classification software released

We just released software (executables and sources) for our accelerated Kernel Subclass Discriminant Analysis (AKSDA) method and its combination with SVM classifiers. This is an efficient dimensionality reduction and classification method for very high-dimensional data. AKSDA is a new GPU-accelerated, state-of-the-art C++ library (also provided as a command-line executable) for supervised dimensionality reduction and classification using multiple kernels. It greatly reduces the dimensionality of the input data while at the same time increasing its linear separability.
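To give a feel for this kind of pipeline: the sketch below is an illustrative stand-in only, since AKSDA itself is a GPU-accelerated C++ library and scikit-learn has no kernel *subclass* discriminant analysis. It reproduces the same overall shape, a kernel feature map, a supervised discriminant reduction to a few dimensions, and a linear SVM on the reduced features, using off-the-shelf scikit-learn components.

```python
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.kernel_approximation import Nystroem
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = make_pipeline(
    StandardScaler(),
    Nystroem(kernel="rbf", n_components=100, random_state=0),  # approximate RBF kernel map
    LinearDiscriminantAnalysis(n_components=9),                # supervised reduction (<= n_classes - 1 dims)
    LinearSVC(max_iter=5000),                                  # linear SVM on the reduced features
)
pipe.fit(X_tr, y_tr)
print(f"test accuracy: {pipe.score(X_te, y_te):.2f}")
```

The point mirrored here is the one made above: after a kernel mapping and a discriminant projection, the data become far more linearly separable, so a simple linear SVM suffices.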

Joint Dem@Care Paper accepted for Publication in TPAMI

A joint paper by INRIA, UBX and CERTH on "Semantic Event Fusion of Different Visual Modality Concepts for Activity Recognition" has been accepted for publication in the special issue on Multimodal Human Pose Recovery and Behavior Analysis of IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). With an impact factor of 5.781, TPAMI is one of the most prestigious journals in the areas of computer vision and image understanding, pattern analysis and recognition, with a particular emphasis on machine learning for pattern analysis.

1st International Workshop on Multimedia Analysis and Retrieval for Multimodal Interaction (MARMI2016)

The 1st International Workshop on Multimedia Analysis and Retrieval for Multimodal Interaction (MARMI2016) is organized in conjunction with the ACM International Conference on Multimedia Retrieval (ICMR) 2016, New York, USA, June 6-9, 2016.

IMPORTANT DATES
- Submission deadline: March 1, 2016
- Notifications: March 26, 2016
- Camera-ready version: April 15, 2016
- Workshop: June 6, 2016

MKLab participates in the just launched hackAIR project

The hackAIR project was launched in January 2016 in Thessaloniki by a consortium of six partners from five European countries. hackAIR will develop an open technology toolkit for citizens' observatories on air quality. This toolkit aims to complement official air quality data with a number of community-driven data sources, including an easy-to-build open hardware sensor module that transmits regular air quality measurements via Bluetooth, air quality information derived from mobile phone pictures of the sky and social media, as well as a low-tech measuring setup involving vacuum cleaners and coffee filters.

Dem@Care Context Descriptor Pattern

The Dem@Care Context Descriptor ontology has been integrated in the Linked Open Vocabularies (LOV) dataset, enabling its sharing and reuse by other datasets in the Linked Data Cloud (a human-readable description of the vocabulary is available here). The ontology has been developed in the framework of the Dem@Care project and provides the vocabulary to annotate complex (high-level) activity classes with low-level observations for complex activity recognition. For more details, please refer to the relevant paper: G.

BDV associate member

Improve My City Mobile

Improve My City Mobile allows citizens to report local problems and suggest solutions for improving their neighbourhood. Learn more...

Motorola collaborations

The Multimedia Group has been collaborating with Motorola in several Motorola-funded R&D projects.


NVIDIA links to the Multimedia Group's GPU-LIBSVM implementation

HR Excellence in Research