Deep learning and neural networks have recently introduced a paradigm shift in Earth Observation (EO), supporting understanding, interpretability and explainability in Artificial Intelligence. Copernicus data, Very-High-Resolution (VHR) commercial satellite images and other georeferenced data sources are often highly heterogeneous, distributed and semantically fragmented. Handling satellite images with diverse attributes and resolutions often demands downscaling techniques for more accurate estimations at local scales and effective visualisations on GIS platforms. Addressing these challenges increases the value of the original data and encourages the development of deep learning applications of higher value.

Image analysis with novel supervised, semi-supervised or unsupervised learning is already part of our lives and is rapidly entering the space sector, offering value-added Earth Observation products and services in various domains, such as journalism, tourism, security, drinking water quality, public administration, and crisis management. Large volumes of satellite data are continuously transmitted to Earth by the Sentinel constellation, providing a basis for value-added products that go beyond the space sector. The visual analysis and fusion of all these data streams need to take advantage of the existing Data and Information Access Services (DIAS) and High Performance Computing (HPC) infrastructures, when required by the involved end users, to deliver fully automated processes in decision support systems. Most importantly, interpretable machine learning techniques should be deployed to unlock the knowledge hidden in big Copernicus data.

Furthermore, addressing the big data management challenges of integrating, processing and analysing Earth Observation data together with other distributed data sources from industrial domains beyond the space sector (e.g., ICT, media, security) will open new directions in EO applications and technologies, as well as new market opportunities. Satellite data can be fused at different levels and scales with geo-referenced non-EO data, such as videos from UAV missions, Galileo GNSS data, crowdsourced information from social media and street-view images. The multimodal combination of heterogeneous data can enhance existing remote sensing analyses and enable entirely novel applications.

This special session of MMM’2022 includes presentations of novel research on:

  • Change and event detection over satellite image time series
  • Fusion of satellite images and other georeferenced data (street-view images, social media, UAV videos, Galileo GNSS data, etc.)
  • Downscaling satellite images at multiple spatial resolutions
  • Multimodal image retrieval in georeferenced data
  • Location-based image retrieval
  • Location-based social networks
  • Multimodal hashing in satellite images
  • Multimodal understanding and context enrichment
  • Geo-localization of multimedia content
  • Retrieving, processing and interpreting social geo-located data
  • Reinforcement learning and active learning on multispectral images
  • 3D-models and animations of multispectral content
  • Semantic analysis on Copernicus data for multispectral image retrieval
  • Linked Earth Observation data for semantic multispectral image retrieval
  • Deep learning architectures (CNN, RNN, LSTM, etc.) on satellite images
  • Graph Neural Networks and Graph Convolutional Neural Networks on satellite data
  • Generative Adversarial Networks (GANs) on satellite imagery
  • Semantics through word embeddings on satellite image metadata
  • Multimodal indexing and retrieval of Copernicus data
  • Data cube analytics on time-series of satellite imagery
  • Knowledge extraction and data mining on big Copernicus datasets
  • Machine learning techniques for unsupervised and semi-supervised learning on satellite imagery
  • Distributed machine learning techniques on High Performance Computing environments
  • Cloud computing on DIAS platforms for satellite images
  • Data augmentation and pseudo-labeling
  • Novel georeferenced datasets
  • Explainable Artificial Intelligence (XAI), feature selection and feature engineering
  • Causality analysis to infer cause-and-effect relationships, beyond statistical associations, in observed datasets
  • Emerging applications of multimodal satellite image retrieval and fusion with other georeferenced multimedia (crisis management, journalism, tourism, security, drinking water quality, public administration, etc.)

Supported by: