Social Media and Web Multimedia Mining

The pervasive use of Online Social Networks (OSNs) for networking, communication and search, together with the ubiquitous availability of smartphones that enable real-time multimedia capture and sharing, has led to massive amounts of user-generated content and activity being amassed online and made publicly available for analysis and mining. Leveraging this content creates exciting new possibilities for applications and services such as search and forecasting.

Image and video analysis and understanding

For humans, assessing visual information is easy: just by viewing an image or a video, we can identify the objects, actions and events depicted in it, form an impression of its quality and aesthetics, tell whether we have seen it before, and meaningfully group our photos or select a few of them to share with friends. For computers, these and similar tasks are by no means straightforward. To change this, we develop image and video analysis and understanding techniques, along with the machine learning methods that support them.
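One of the tasks mentioned above, telling whether we have seen an image before, is often approached with perceptual hashing. A minimal sketch of the average-hash idea in Python, using a toy pixel grid in place of a real (downscaled) image:

```python
def average_hash(pixels):
    """Perceptual (average) hash of a grayscale image.

    pixels: 2D list of grayscale values, assumed already downscaled
    (real systems typically resize to 8x8 first).
    Returns an integer whose bits mark pixels brighter than the mean.
    """
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for v in flat:
        bits = (bits << 1) | (1 if v > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; a small distance suggests near-duplicates."""
    return bin(a ^ b).count("1")

# Toy 4x4 "images": the second is a uniformly brightened copy of the first.
img1 = [[10, 200, 10, 200],
        [200, 10, 200, 10],
        [10, 200, 10, 200],
        [200, 10, 200, 10]]
img2 = [[v + 5 for v in row] for row in img1]

h1, h2 = average_hash(img1), average_hash(img2)
print(hamming(h1, h2))  # -> 0: the brightened copy hashes identically
```

Because the hash compares each pixel against the image's own mean, uniform brightness changes leave the bit pattern untouched, which is what makes the scheme useful for near-duplicate detection.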

Interactive multimedia search and knowledge retrieval

Nowadays, the proliferation of content driven by growing internet penetration and the ease of creating user-generated content has raised the need for research and development of efficient search and retrieval techniques. Depending on the application, this covers information retrieval techniques for browsing and search of heterogeneous multimedia, web search and crawling, multimodal fusion of information, and knowledge extraction.
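One common approach to the multimodal fusion mentioned above is late fusion: each modality produces its own relevance scores, which are then combined with a weighted sum. A minimal sketch, where the document names, scores and weights are purely illustrative:

```python
def late_fusion(score_lists, weights):
    """Weighted late fusion: combine per-modality relevance scores per document."""
    fused = {}
    for scores, w in zip(score_lists, weights):
        for doc, s in scores.items():
            fused[doc] = fused.get(doc, 0.0) + w * s
    # Return documents ranked by fused score, best first.
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical relevance scores from a text ranker and a visual ranker.
text_scores   = {"doc1": 0.9, "doc2": 0.4, "doc3": 0.1}
visual_scores = {"doc1": 0.2, "doc2": 0.8, "doc3": 0.7}

ranking = late_fusion([text_scores, visual_scores], weights=[0.6, 0.4])
print(ranking[0][0])  # -> "doc1" (0.6*0.9 + 0.4*0.2 = 0.62 beats doc2's 0.56)
```

The weights encode how much each modality is trusted; in practice they are tuned on validation data rather than fixed by hand.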

Virtual and Augmented Reality

Virtual and augmented reality are currently considered among the technologies with the highest expected impact on user experience. Alongside low-cost, high-performance VR and AR hardware, the 3D digitization of objects and scenes is becoming mainstream, as is the ability to interlink content. Together, these form the perfect setting for offering remote and on-site visitors of museums and cultural sites a drastically different experience. Following this trend, MKLab has set the goal of changing the way we experience culture through novel paradigms of virtual and augmented reality.

Semantic Knowledge Representation and Management

Semantic Web technologies provide the means and tools to unambiguously represent the meaning of Web entities and their relations to one another in a machine-interpretable manner. While modelling and managing knowledge about web entities has long been established, Semantic Web technologies also show great potential in other domains, in both the online and offline worlds.
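The machine-interpretable representation at the heart of the Semantic Web is the (subject, predicate, object) triple. A minimal in-memory sketch in Python, with `None` acting like a SPARQL variable in pattern queries (the `ex:` entity names are illustrative):

```python
class TripleStore:
    """Minimal in-memory store of (subject, predicate, object) triples."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Pattern match; None acts as a wildcard (like a SPARQL variable)."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()
store.add("ex:Louvre", "rdf:type", "ex:Museum")
store.add("ex:Louvre", "ex:locatedIn", "ex:Paris")
store.add("ex:MonaLisa", "ex:exhibitedAt", "ex:Louvre")

# Everything known about ex:Louvre, i.e. the pattern (ex:Louvre, ?p, ?o).
print(store.query(s="ex:Louvre"))
```

Real systems use RDF stores with SPARQL endpoints, inference and ontologies (RDFS/OWL); the sketch only illustrates why triples make relations queryable by machines.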

Brain Research and BCI

The use of electroencephalography (EEG) for conducting brain research has become one of the most exciting research fields, gaining more and more attention. The potential to tap into the human brain and understand its secrets carries the promise of improving our lives in many different ways. With the aim of fulfilling this promise, MKLab has set up a laboratory with highly sophisticated equipment (ranging from a high-density EEG system (EGI Geodesic EEG System 300 – GES300) and the SMI RED500 eye tracker with a high refresh rate, to the lightweight Emotiv EPOC+ and Emotiv Insight EEG headsets) in order to perform research along two related fronts: a) Brain Computer Interfaces and b) brain cognitive processes.
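A typical first step in EEG analysis, for both BCI and cognitive studies, is estimating spectral band power, e.g. in the alpha band (roughly 8–12 Hz). A self-contained sketch using a naive DFT and a synthetic signal (the sampling rate and band limits are illustrative):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` in [f_lo, f_hi] Hz via a naive DFT.

    Fine for short windows; real pipelines use FFT-based methods (e.g. Welch).
    """
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):           # skip DC; positive frequencies only
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(-signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

fs = 128  # sampling rate in Hz (illustrative)
# Synthetic 1-second "EEG": a 10 Hz (alpha) tone plus a weak 40 Hz component.
sig = [math.sin(2 * math.pi * 10 * t / fs) + 0.2 * math.sin(2 * math.pi * 40 * t / fs)
       for t in range(fs)]

alpha = band_power(sig, fs, 8.0, 12.0)
gamma = band_power(sig, fs, 35.0, 45.0)
print(alpha > gamma)  # the alpha rhythm dominates this synthetic signal
```

Band-power features like these, computed per electrode over sliding windows, are a common input to the classifiers used in BCI systems.
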

Collective Awareness Platforms for Sustainability and Social Innovation

Collective Awareness Platforms for Sustainability and Social Innovation (CAPS) are ICT systems that leverage the emerging “network effect” by combining open data, information gleaned from social media, distributed knowledge creation and data from real environments (the “Internet of Things”) in order to raise awareness of problems and possible solutions that require collective effort, enabling new forms of social innovation. CAPS is among the research priorities of the European Commission’s Horizon 2020 programme and was also part of the last FP7 research programme.

Large-scale multimedia indexing and search

We develop a wide range of multimedia analysis techniques: scalable approaches able to handle massive image collections, and knowledge structures, languages and tools for multimedia analysis that extend Semantic Web languages with multimedia description rules and relations. We apply these techniques to diverse applications: medical imaging and e-health, the environment, security and remote sensing.
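Scaling to massive image collections typically relies on approximate nearest-neighbour search over feature vectors, for instance locality-sensitive hashing with random hyperplanes. A minimal sketch, with toy 4-dimensional "feature vectors" standing in for real image descriptors:

```python
import random

def make_hyperplanes(dim, n_bits, seed=0):
    """One random hyperplane per hash bit, with Gaussian-distributed normals."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def lsh_signature(vec, planes):
    """Sign of the dot product with each hyperplane gives one hash bit."""
    bits = 0
    for plane in planes:
        dot = sum(v * p for v, p in zip(vec, plane))
        bits = (bits << 1) | (1 if dot >= 0 else 0)
    return bits

def hamming(x, y):
    """Number of differing signature bits."""
    return bin(x ^ y).count("1")

planes = make_hyperplanes(dim=4, n_bits=16)
a = [1.0, 0.9, 0.1, 0.0]    # toy feature vectors
b = [1.0, 1.0, 0.0, 0.0]    # similar to a
c = [-1.0, 0.1, -0.9, 0.2]  # dissimilar to a

# Similar vectors tend to agree on more signature bits than dissimilar ones,
# so search can be restricted to items whose signatures collide.
print(hamming(lsh_signature(a, planes), lsh_signature(b, planes)))
print(hamming(lsh_signature(a, planes), lsh_signature(c, planes)))
```

The guarantee is probabilistic: the expected fraction of differing bits is proportional to the angle between the vectors, so short signatures act as cheap proxies for expensive distance computations over the full collection.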

Computer Vision Methods for Video Content Analysis

The widespread proliferation of digital media, and in particular video data, in recent years has made automated analysis of its content necessary for retrieval, storage, transmission, security and commercial purposes. Video content is analysed for the detection and recognition of human activities in a variety of settings, ranging from constrained indoor environments to outdoor locations and videos “in the wild” (such as YouTube content). Applications include health monitoring for ambient assisted living, analysis of online content, and effective content classification.
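A basic building block behind such activity detection is frame differencing: pixels that change significantly between consecutive frames indicate motion. A minimal sketch on toy 4x4 grayscale frames (the threshold value is illustrative):

```python
def motion_mask(prev, curr, threshold=20):
    """Mark pixels whose grayscale value changed by more than `threshold`."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def motion_ratio(mask):
    """Fraction of pixels flagged as moving; a crude activity indicator."""
    flat = [v for row in mask for v in row]
    return sum(flat) / len(flat)

# Two toy 4x4 grayscale frames: a bright "object" moves one pixel right.
frame1 = [[0, 255, 0, 0],
          [0, 255, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
frame2 = [[0, 0, 255, 0],
          [0, 0, 255, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]

mask = motion_mask(frame1, frame2)
print(motion_ratio(mask))  # -> 0.25: 4 of 16 pixels changed
```

Real activity-recognition pipelines go much further (optical flow, tracking, learned spatio-temporal features), but thresholded frame differences remain a cheap first cue for deciding where and when something is happening in a video.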