Tools and Resources

  • SemaPlorer is an easy-to-use application that allows end users to interactively explore and visualize a very large, mixed-quality and semantically heterogeneous distributed semantic data set in real time. Its purpose is to let users acquaint themselves with a city, touristic area, or other area of interest. By visualizing the data using a map, media, and different context views, we clearly go beyond simple storage and retrieval of large numbers of triples.

  • With the WeKnowIt Visual Retrieval and Localization web-based tool
    (ViRaL) you can make queries with city photos, find similar ones and
    identify them on the map. Using a large database of geo-tagged images
    from Flickr™, ViRaL can match a given query and return a ranked list of
    images according to visual similarity. The geo-tags of the returned
    images are used to provide an estimate of the location of the query
    photo and display it using Google Maps™. The most common tags of these
    images are also shown, giving a hint of e.g. what landmark might be
    depicted in the query.

    The tool is available online at:
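    The location estimate described above can be sketched as a similarity-weighted average over the geo-tags of the top-ranked matches. The following is a minimal illustration, not the actual ViRaL algorithm; the function name and the weighting scheme are assumptions:

```python
# Hypothetical sketch: estimate a query photo's location from the
# geo-tags of visually similar images, weighted by similarity score.
# This is an illustration, not the actual ViRaL implementation.

def estimate_location(matches):
    """matches: list of (similarity, lat, lon) for top-ranked images."""
    total = sum(sim for sim, _, _ in matches)
    if total == 0:
        return None
    lat = sum(sim * la for sim, la, _ in matches) / total
    lon = sum(sim * lo for sim, _, lo in matches) / total
    return lat, lon

# Example: three visually similar photos near the same landmark.
estimate = estimate_location([(0.9, 41.4036, 2.1744),
                              (0.7, 41.4040, 2.1750),
                              (0.5, 41.4030, 2.1738)])
```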

  • ClustTour is an online city exploration application that helps users identify interesting spots in a city by means of photo clusters corresponding to landmarks and events. ClustTour is based on an efficient landmark and event detection scheme for tagged photo collections. The proposed scheme relies on the combination of a graph-based photo clustering algorithm, making use of both visual and tag information of photos, with a cluster classification and merging module. ClustTour creates a map-based visualization of the identified photo clusters, which are classified in prominent categories and are filterable by time and tag. Such an application can greatly facilitate the task of getting to know a city through its landmarks and events. So far, the demo has been based on a large photo dataset focused on Barcelona, and we are in the process of expanding it to 20 major cities of Europe. Furthermore, we intend to provide an Android application that complements the current web-based version of ClustTour.
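    The graph-based clustering idea can be illustrated with a stripped-down sketch: photos become nodes, an edge links photos whose tag overlap exceeds a threshold, and clusters are the connected components. This is an assumption-laden illustration; the real ClustTour scheme also uses visual similarity and a separate classification and merging module.

```python
# Hypothetical sketch of graph-based photo clustering by tag overlap.
# Not the actual ClustTour algorithm; threshold and similarity are assumed.

def cluster_photos(photos, min_shared_tags=2):
    """photos: dict of photo_id -> set of tags; returns list of clusters."""
    ids = list(photos)
    # Build adjacency: an edge when two photos share enough tags.
    adj = {i: set() for i in ids}
    for a in ids:
        for b in ids:
            if a < b and len(photos[a] & photos[b]) >= min_shared_tags:
                adj[a].add(b)
                adj[b].add(a)
    # Clusters are the connected components (depth-first search).
    seen, clusters = set(), []
    for start in ids:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters
```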

  • Managing and reviewing the log files that are created by the Emergency Response (ER) personnel in the course of an emergency incident is a critical task for understanding and improving the implemented ER actions. A major challenge arising in this task is the fact that multiple log files are created by different members of the ER personnel. As a result, extensive manual effort is necessary in order to merge and align the incoming log entries in order to make them suitable for review. In addition, critical information for the incident under study, such as person names and locations, is also manually extracted. The WeKnowIt ER Log (WERL) Manager is a web-based application that facilitates the task of ER log merging and management by automatically merging and aligning multiple log files and extracting ER-relevant semantic information from log entry text. WERL makes use of the representation patterns of Event Model F in order to facilitate information sharing and reuse. Furthermore, WERL enables interactive exploration of the collected log files by means of temporal, location and semantic filters.
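    The core merge-and-align step can be sketched as interleaving several time-sorted log files into one timeline. The entry format below is an assumption for illustration, not WERL's actual data model:

```python
# Hypothetical sketch: interleave entries from several ER log files
# into a single timeline ordered by timestamp.
import heapq

def merge_logs(*logs):
    """Each log is a time-sorted list of (timestamp, author, text)."""
    return list(heapq.merge(*logs, key=lambda entry: entry[0]))

team_a = [("09:01", "A", "Arrived on scene"), ("09:07", "A", "Fire contained")]
team_b = [("09:03", "B", "Road closed at junction")]
merged = merge_logs(team_a, team_b)
```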

  • Attention Streams (AS) can be described as a semantic real-time attention tracker. Contrary to usual interest-extraction approaches, AS analyses your interests as you navigate between pages. This process is completely transparent to the user, since the tracking engine runs as a page background process. Attention Streams provides a simple application that uses the extracted tags to query different online services in real time, given your most important attention streams. The application enables users to discover new information based on their interests. In contrast to existing recommendation services, AS can discover content that fits the full user context, based on the user's location and attention. AS does not rely on generic interests for finding recommendations but on the evolving interests of the user. As a consequence, AS provides highly contextual and ambient recommendations that can be used to support the user's activity. The passive recommendations minimize explicit user interaction with the system, thus avoiding user distraction: the user can simply check the system when specific information is needed, since it is always updated with the current activity. Attention Streams was awarded the 3rd Prize in the AI Mashup Challenge at ESWC 2010.

  • The CURIO Vocabulary is a general-purpose ontology designed for managing the collaborative discussion of user-generated resources in a unified and collaborative fashion. Despite being developed in the context of WeKnowIt, the vocabulary can easily be adapted to particular needs thanks to its flexible resource model. CURIO can be considered an aggregative ontology, since it aggregates and connects many concepts from different standard ontologies in a formalised way. The vocabulary reuses most of its concepts from the SIOC vocabulary but incorporates many classes and properties from other ontologies such as DCTerms, GeoOWL, CommonTag and OPO. The ontology is based around the concept of a Resource, a class that holds a piece of user-generated content. Several Resource classes currently exist; in particular, the Document and Event classes enable the combination of virtual events with real ones.

  • The World Wide Web has evolved into a distributed network of interactive web applications facilitating the publication of information on a large scale. Judging whether such information can be trusted is a difficult task for humans, often leading to blind trust. The veracity ontology allows trust to be placed in web content by web users and agents. Moreover, the approach differs from current work by allowing the trustworthiness of web content to be asserted through the provision of machine-readable proofs (e.g. by citing another piece of information, or stating the credentials of the user/agent).

  • The novel mobile application STEVIE (collaborative, semantic, and context-aware points-of-interest) enables its users to collaboratively create, share, and modify semantic points of interest (POI). Semantic POIs describe geographic places with explicit semantic properties of a collaboratively created ontology. As the ontology includes multiple subclassifications and instantiations, and as it links to DBpedia, the richness of annotation goes far beyond mere textual annotations. With the intuitive interface of STEVIE, users can easily create, delete, and modify their POIs and those shared by others. Thereby, the users adapt the structure of the ontology underlying the semantic annotations of the POIs. Data mining techniques are employed to cluster and thus improve the quality of the collaboratively created POIs. The semantic POIs and the collaborative POI ontology are published as Linked Open Data.

    The tool is available online at:
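    The clustering step mentioned above can be illustrated with a naive sketch that groups POIs sharing a name and lying within a small distance of one another, so that duplicates can be merged. STEVIE's actual data mining is more elaborate; the 50 m threshold and the grouping rule here are assumptions:

```python
# Hypothetical sketch of POI clustering: nearby POIs with the same name
# are grouped so duplicate submissions can be merged. Not STEVIE's
# actual algorithm; the 50 m threshold is an assumption.
import math

def close(p, q, max_m=50.0):
    # Equirectangular approximation, adequate for small distances.
    lat = math.radians((p[0] + q[0]) / 2)
    dx = math.radians(q[1] - p[1]) * math.cos(lat)
    dy = math.radians(q[0] - p[0])
    return 6371000 * math.hypot(dx, dy) <= max_m

def cluster_pois(pois):
    """pois: list of (name, lat, lon); returns list of clusters."""
    clusters = []
    for poi in pois:
        for c in clusters:
            if any(p[0] == poi[0] and close(p[1:], poi[1:]) for p in c):
                c.append(poi)
                break
        else:
            clusters.append([poi])
    return clusters
```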

  • Managing one's memberships in different online communities increasingly becomes a cumbersome task. This is due to the increasing number of communities in which users participate and in which they share information with different groups of people like colleagues, sports clubs, groups with specific interests, family, friends, and others.
    These groups use different platforms to perform their tasks such as collaborative creation of documents, sharing of documents and media, conducting polls, and others. Thus, the groups are scattered and distributed over multiple community platforms that each require a distinct user account and management of the group. 
    dgFOAF is an approach for distributed group management based on the well-known Friend-of-a-Friend (FOAF) vocabulary. Our dgFOAF approach is independent of the concrete community platforms we find today and needs no central server. It allows for defining communities across multiple systems and alleviates the community administration task. Applications of dgFOAF range from access restriction to trust support based on community membership.


  • Existing metadata models and metadata standards are not sufficient to describe the semantics of rich, structured multimedia content in formats such as SMIL, SVG, and Flash. They are either conceptually too narrow, focus on a specific media type only, cannot be used and combined together, or are not practically applicable. This hampers the retrieval of such presentations by search engines today and makes their archival and management a difficult task. Existing W3C standards for rich multimedia presentations like SMIL and SVG foresee the use of the Resource Description Framework to describe the semantics of the multimedia content. However, until today there is no appropriate metadata model or best practice available that explains how to describe and annotate rich, structured multimedia content. To fill this gap, we have developed the Multimedia Metadata Ontology (M3O) for annotating rich, structured multimedia presentations. The M3O provides a generic modeling framework for representing sophisticated multimedia metadata. It allows for integrating the features provided by the existing metadata models and metadata standards. With the M3O, we make the semantics of rich multimedia presentations machine-readable and machine-understandable. The M3O is used with our SemanticMM4U framework for the multi-channel generation of semantically-rich multimedia presentations.

  • This application allows mobile phone users to quickly discover the location and name of photographed objects.
    It was developed by Software Mind S.A. The aim of the application is to provide the user with detailed information on the location and name of a POI that she has photographed. The application uses services developed by WeKnowIt project partners to recognize the object (POI, point of interest) in a picture, determine its geolocation, and determine the tags associated with this POI. Image recognition is limited to the database of pictures held by the ViRaL tool (Visual Image Retrieval and Localization), which currently contains more than 1 million Flickr® images from 30 European cities. POIs are retrieved from the Yahoo! database. Additionally, the application contacts Wikipedia and gathers information on the recognized POI.


  • The lack of a formal event model hinders interoperability in
    distributed event-based systems. Consequently, we developed a formal
    model of events, called F. The model is based on an upper-level
    ontology and provides comprehensive support for all aspects of events
    such as time and space, objects and persons involved, as well as the
    structural aspects, namely mereological, causal, and correlational
    relationships. The event model provides a flexible means for event
    composition, modeling of event causality and correlation, and allows
    for representing different interpretations of the same event. The
    foundational event model F is developed in a pattern-oriented
    approach, modularized in different ontologies, and can be easily
    extended by domain-specific ontologies.
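    The structural relationships named above (mereology, causality, correlation) can be rendered as a plain data structure for illustration. This is only a sketch; the actual model F is defined as ontology design patterns, not as Python classes:

```python
# Illustrative sketch of Event Model F's structural aspects as plain
# data: part-of composition, causality, and correlation between events.
# The real model F is an OWL ontology; names here are assumptions.
from dataclasses import dataclass, field

@dataclass
class Event:
    name: str
    parts: list = field(default_factory=list)       # mereological composition
    caused_by: list = field(default_factory=list)   # causal relationship
    correlated: list = field(default_factory=list)  # correlational relationship

# A composite "rescue operation" event made of sub-events,
# one of which is caused by another.
alarm = Event("alarm raised")
dispatch = Event("units dispatched", caused_by=[alarm])
operation = Event("rescue operation", parts=[alarm, dispatch])
```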

  • The ER website is part of the full Emergency Response demonstrator. Data arrives from a number of different sources: from Emergency Response workers at the scene of an emergency, from the general public observing an emergency, or through other parties publishing information on the World Wide Web. This information is then collected by the WKI system, and various processes are carried out to make the information more palatable for presentation. The information is geo-located, either from metadata provided with the images or through an analysis of textual or visual data, and tags are generated for the information. Emergency Response workers have the ability to define an incident; automatic processing then connects individual uploads to an overall document in order to provide some categorisation of the information.

    The information is then displayed in the explorer application, showing the data as red points on a map. These can be selected and viewed in the interface. Various filters are also present: you can filter the information by time, by the tags applied to the data, or geographically (by moving the map). The information list will update according to these filters (as will the map). In addition, you can view the incident information by selecting an incident from the list to see the information provided about the incident itself.

    The data behind the interface is currently test data - you can use the username 'test' and password 'test' to log in.

  • Semantic Web technologies have been designed to enable computers to understand and process the information distributed on the Web. OWL/RDF representations clearly target programs, whereas HTML is designed for human beings. On the other hand, Microformats and, more recently, RDFa have been proposed for integrating ontological information into standard XHTML documents. This integration is currently used by third-party systems such as screen scrapers and search engines. The ontological information of a document is not only useful for programs but can also be used to make the underlying knowledge explicit by visually augmenting the document's look and format. The Sparks 03 Browser is a practical application of some of the concepts developed throughout the Sparks framework. Specifically, it uses the overlay concept to visually augment standard documents by adding new mechanisms of interaction with the underlying knowledge. The Ozone Browser won the second prize at SFSW 2009.

  • Fannr – Flickr Annotator – is a prototype application for assisting users in annotating their Flickr photos. It integrates several services from WeKnowIt project partners. Two types of annotation support are provided when annotating a single photo: 1) tag recommendation, where the user is presented with a list of tags that they could consider adding to their photo; and 2) location recommendation, where the user is presented with several potential locations to help them remember where the photo was taken. Both the tag recommendation and the location recommendation are based on data from different social levels: the user's own photos (personal intelligence); the photos of the user's contacts (social intelligence); everybody's photos (mass intelligence); and visually similar photos (media intelligence). Furthermore, for photos that have been localized, the tag recommendation is also based on geographically close-by photos. When the user has identified tags that they want to add to their photos or identified the location where the photo was taken, the Fannr prototype updates the annotation on the Flickr website using the Flickr API.
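    Combining tag candidates from the different social levels can be sketched as a weighted vote. The per-level weights below are assumptions for illustration, not Fannr's actual values:

```python
# Hypothetical sketch of multi-level tag recommendation: each social
# level contributes candidate tags with a weight, and the top-scoring
# tags are suggested. Weights are assumed, not taken from Fannr.
from collections import Counter

def recommend_tags(levels, top_n=5):
    """levels: list of (weight, tag_list) pairs, e.g. personal,
    social, mass and media intelligence."""
    scores = Counter()
    for weight, tags in levels:
        for tag in tags:
            scores[tag] += weight
    return [tag for tag, _ in scores.most_common(top_n)]

suggested = recommend_tags([
    (3.0, ["holiday", "rome"]),        # user's own photos (personal)
    (2.0, ["rome", "colosseum"]),      # contacts' photos (social)
    (1.0, ["italy", "colosseum"]),     # everybody's photos (mass)
])
```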


  • Mobile Facets is an intuitive and easy to use mobile application to access a large, distributed data set of different social media sources on a touchscreen mobile phone. As social media data sources we use DBpedia, geo-located Flickr photos, and the event directories Eventful and Upcoming. In addition, we use professional content from the event directories as well as places described in GeoNames. The users can search and explore the dataset by means of facets and retrieve resources such as places, persons, organizations, and events. To visualize the results, the Mobile Facets application provides a map view, result list view, and photo view. As the resources are queried live from the different data sources, we cannot make any assumptions about which facets are provided in a specific user's contextual situation and how many resources Mobile Facets receives. Thus, in contrast to existing approaches, the user interface had to be designed such that it is flexible with respect to the amount and kind of facets and resources retrieved live from the social media data.

    The tool is available online at:
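    Because the resources arrive live, the available facets cannot be fixed in advance; they have to be derived from whatever properties the retrieved items happen to carry. A minimal sketch of that idea (property names and data shape are assumptions, not the Mobile Facets data model):

```python
# Hypothetical sketch of dynamic facet construction: facets and their
# value counts are computed from the properties of the resources that
# were actually retrieved, so sparse or missing properties are handled.
from collections import defaultdict, Counter

def build_facets(resources):
    """resources: list of dicts mapping property -> value."""
    facets = defaultdict(Counter)
    for res in resources:
        for prop, value in res.items():
            facets[prop][value] += 1
    return facets

facets = build_facets([
    {"type": "event", "city": "Koblenz"},
    {"type": "place", "city": "Koblenz"},
    {"type": "event"},  # sparse resource: no city property
])
```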


Seventh Framework Programme