REPEAT Project Overview

This project concerns autonomous perception and understanding of the environment, as well as image-based localization. These are key challenges for any autonomous or augmented reality system, especially when they must be addressed with low-quality sensor data. The project combines machine learning, computer vision, and computer engineering to develop new methods for data-driven 3D visual computing and image-based localization, while accounting for the computational resource limitations of autonomous systems. We place particular emphasis on advancing the state of the art in vision and sensor fusion methods, and on developing methods capable of inferring location, pose, and semantics from visual data. This is a cross-layer approach spanning the fields of computer vision, machine learning, and computer engineering, with the aim of renewing the view on how these can be combined.

Figure: Snapshot of large-scale LIDAR-based visual localization dataset from the University of Vaasa campus. The dataset is expected to be released in 2021.

Project Consortium

The REPEAT project is an Academy of Finland consortium project of

Tampere University (Prof. Esa Rahtu, esa.rahtu@tuni.fi; Google Scholar page, consortium lead)

Aalto University (Prof. Juho Kannala, juho.kannala@aalto.fi; Google Scholar page)

University of Vaasa (Prof. Jani Boutellier, jani.boutellier@univaasa.fi; Google Scholar page).

Timeline

REPEAT starts on January 1, 2020

Recent and Related Publications

  1. Khan M, Huttunen H, Boutellier J (2018) Binarized convolutional neural networks for efficient inference on GPUs, European Signal Processing Conference (EUSIPCO).
  2. Boutellier J, Wu J, Huttunen H, Bhattacharyya SS (2018) PRUNE: Dynamic and decidable dataflow for signal processing on heterogeneous platforms, IEEE Transactions on Signal Processing.
  3. Meirhaeghe A, Boutellier J, Collin J (2019) The direction cosine matrix algorithm in fixed-point: Implementation and analysis, International Conference on Acoustics, Speech, and Signal Processing (ICASSP).
  4. Ma Y, Wu J, Bhattacharyya S, Boutellier J (2020) Decidable Variable-Rate Dataflow for Heterogeneous Signal Processing Systems, International Conference on Acoustics, Speech, and Signal Processing (ICASSP).
  5. Vainio M, Ruotsalainen L, Valdez Banda O, Röning J, Laitinen J, Boutellier J, Koskinen S, Peussa P, Shamsuzzoha A, Bahoo Toroody A, Kramar V, Visala A, Ghabcheloo R, Huhtala K, Alagirisamy R (2020) Safety Challenges of Autonomous Mobile Systems in Dynamic Unstructured Environments: Situational awareness, decision-making, autonomous navigation & human-machine interface, RAAS White Paper.
  6. Ferranti L, Li X, Boutellier J, Kannala J (2020) Can You Trust Your Pose? Confidence Estimation in Visual Localization, International Conference on Pattern Recognition (ICPR).
  7. Ferranti L, Åström K, Oskarsson M, Boutellier J, Kannala J (2021) Sensor Networks TDOA Self-Calibration: 2D Complexity Analysis and Solutions, International Conference on Acoustics, Speech, and Signal Processing (ICASSP).
  8. Ferranti L, Åström K, Oskarsson M, Boutellier J, Kannala J (2021) Homotopy Continuation for Sensor Networks Self-Calibration, European Signal Processing Conference (EUSIPCO), accepted.


More REPEAT papers can be found on the webpage of Juho Kannala (Aalto University).

Presentations

News

Oct 1, 2019

Project page opened.

April-May, 2020

Luca Ferranti visits Kalle Åström's team at Lund University.

August, 2020

Technobothnia visual localization dataset work starts.

January, 2021

ICASSP 2021 paper accepted.

May, 2021

EUSIPCO 2021 paper accepted.