Details

Type: Bachelor thesis, Master thesis, Student project

Persons: 1-3


Description

Vision is one of the most potent sources of information about the world we interact with. That is especially true for precise actions such as reaching for, touching, and grasping an object. Visual cues precede most of our actions and can be used by automated systems such as robots to anticipate them. In our setup, a mobile robot presents a haptic object to users when they want to touch it. For a safe and realistic experience, the intention to touch should be predicted as early and as accurately as possible.

You are expected to rely on the most relevant and widely cited sources, such as books and international research publications, to make a well-argued suggestion of a good, practical, or optimal solution for gaze-based action prediction in our setup.

Different sub-topics are available:

  • saliency evaluation of the VR environment: predicting which objects in the environment will be most interesting to the user,
  • gaze-to-action prediction: exploring the connection between gaze behavior and the actions performed,
  • gaze manipulation: how to steer the user’s attention and subtly provoke the desired behavior.

The chosen algorithm(s) will then be implemented for testing in Unity 3D (C#).
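
As an illustration of what such a Unity implementation could look like, below is a minimal C# sketch of a simple dwell-time heuristic for gaze-based touch-intention prediction. It is not the project's actual method: the gaze ray is approximated by the camera's forward vector (a real setup would query the eye tracker's SDK instead), and the class name GazeIntentPredictor, the "Touchable" tag, and all thresholds are illustrative assumptions.

    using UnityEngine;

    public class GazeIntentPredictor : MonoBehaviour
    {
        [SerializeField] private Camera gazeCamera;            // camera approximating the gaze origin
        [SerializeField] private float dwellThreshold = 0.8f;  // seconds of continuous fixation before intent is flagged
        [SerializeField] private float maxGazeDistance = 5f;   // ignore hits farther away than this

        private GameObject currentTarget;
        private float dwellTime;
        private bool intentReported;

        private void Update()
        {
            // Cast a ray along the (approximated) gaze direction.
            Ray gazeRay = new Ray(gazeCamera.transform.position, gazeCamera.transform.forward);

            if (Physics.Raycast(gazeRay, out RaycastHit hit, maxGazeDistance) &&
                hit.collider.CompareTag("Touchable"))
            {
                if (hit.collider.gameObject == currentTarget)
                {
                    // Same object still fixated: accumulate dwell time.
                    dwellTime += Time.deltaTime;
                    if (!intentReported && dwellTime >= dwellThreshold)
                    {
                        intentReported = true;
                        OnTouchIntentPredicted(currentTarget);
                    }
                }
                else
                {
                    // Gaze moved to a new candidate object: restart the timer.
                    currentTarget = hit.collider.gameObject;
                    dwellTime = 0f;
                    intentReported = false;
                }
            }
            else
            {
                // Nothing touchable under the gaze: reset the state.
                currentTarget = null;
                dwellTime = 0f;
                intentReported = false;
            }
        }

        private void OnTouchIntentPredicted(GameObject target)
        {
            // Placeholder: here the mobile robot could be signalled to present the haptic proxy.
            Debug.Log("Predicted touch intention for " + target.name);
        }
    }

A dwell-time heuristic is only a baseline; the thesis would compare it against approaches from the literature chosen for the selected sub-topic.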


Tasks

The student will survey the relevant literature on gaze-based action prediction, develop a well-argued proposal for a suitable approach to the chosen sub-topic, and implement the selected algorithm(s) for testing in Unity 3D (C#) within our VR setup with the mobile robot.


Requirements

  • Knowledge of the English language (source code comments and the final report should be in English)
  • Familiarity with Unity 3D is advantageous
  • Programming languages: C#, C++


Environment

Game engine: Unity 3D.


Responsible


For more information, please contact Khrystyna Vasylevska or Soroosh Mortezapoor.