The Project

The project will build upon the results of the previous DACH project, which successfully integrated blind users into teamwork around a horizontal interactive surface by taking into account both the artifacts on the table and deictic gestures above it. Motivated by the positive feedback of blind users testing our tabletop system, the goal of this proposed project is to extend this single-surface collaboration space to a multi-dimensional and multi-modal work environment that fully integrates blind users into teamwork. To this end, new algorithms are required for sophisticated sensor fusion, new metaphors need to be found for representing spatially distributed information and clustered information spaces to blind users, new reasoning procedures need to be developed to avoid false alerts, and a new sensor system needs to be researched to avoid occlusion during in-air gesturing. For the verification and validation of the research results, a so-called project room will be set up, which employs multiple interactive surfaces to support a team's ideation process.
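
As one illustration of the kind of sensor fusion involved, the following sketch fuses two noisy estimates of the same fingertip position by inverse-variance weighting, a classic fusion building block. The sensors, values, and function names are purely illustrative assumptions, not the project's actual fusion algorithm.

```python
import numpy as np

# Minimal sketch (hypothetical sensors): a depth camera and a
# capacitive touch frame both estimate the same fingertip position;
# inverse-variance weighting combines them into one estimate.

def fuse(estimates):
    """Fuse (position, variance) pairs into one weighted estimate."""
    positions = np.array([p for p, _ in estimates], dtype=float)
    weights = np.array([1.0 / v for _, v in estimates])
    fused = (weights[:, None] * positions).sum(axis=0) / weights.sum()
    fused_var = 1.0 / weights.sum()   # fused estimate is less uncertain
    return fused, fused_var

camera = ([0.31, 0.22, 0.80], 0.004)  # noisier, but full 3D
touch = ([0.30, 0.20, 0.75], 0.001)   # precise, only at contact
print(fuse([camera, touch]))
```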

Since typical team meetings involve more than one interactive surface, both horizontal and vertical, this project will address the integration of blind users into a three-dimensional working environment. This poses several fundamental technical, perceptual and structural challenges, which this project aims to solve. In such an environment, many relevant non-verbal communication (NVC) elements occur that need to be captured and processed.
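
To make the notion of such a three-dimensional working environment concrete, the following sketch maps artifact positions from surface-local coordinates on a horizontal and a vertical surface into one shared room frame. All surface poses and names are hypothetical assumptions for illustration.

```python
import numpy as np

# Minimal sketch: each interactive surface is modeled by a rigid
# transform that maps its local 2D coordinates (u, v) into a shared
# 3D "project room" frame, so artifacts on horizontal and vertical
# surfaces live in one common spatial data space.

def make_surface(origin, x_axis, y_axis):
    """Return a function mapping local (u, v) in meters to room coordinates."""
    origin, x_axis, y_axis = map(np.asarray, (origin, x_axis, y_axis))
    def to_room(u, v):
        return origin + u * x_axis + v * y_axis
    return to_room

# Hypothetical setup: a horizontal tabletop and a vertical whiteboard.
tabletop = make_surface(origin=[0.0, 0.0, 0.75],   # table height 0.75 m
                        x_axis=[1.0, 0.0, 0.0],
                        y_axis=[0.0, 1.0, 0.0])
whiteboard = make_surface(origin=[0.0, 2.0, 1.0],  # wall 2 m away
                          x_axis=[1.0, 0.0, 0.0],
                          y_axis=[0.0, 0.0, 1.0])  # v runs upward

# An artifact (e.g. a Metaplan card) is stored with its surface and
# local position, but can always be resolved to a room-frame point.
card_on_wall = whiteboard(0.4, 0.3)
print(card_on_wall)  # -> [0.4, 2.0, 1.3]
```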

The chosen scenario of the Metaplan method is representative of typical work behavior in ideation processes, and it also contains many of the basic working elements that are relevant to any other kind of teamwork that is to integrate blind users. Due to the orientation of the interactive surfaces, the information is spatially distributed across the project room, and thus new ways must be found to present this spatially distributed data to blind users. Further, research is required on how the corresponding NVC elements in this spatially distributed data space can be captured, assigned to the information on the interactive surfaces, and conveyed to blind users. In particular, sophisticated algorithms are needed to reliably assign in-air gestures to artifacts, so that false alerts for blind users are avoided.
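
The following sketch shows one plausible form such an assignment could take: a pointing gesture is modeled as a ray in the room frame, intersected with the surface planes, and assigned to the nearest artifact only within a distance threshold; otherwise no alert is raised rather than guessing. All names, poses, and thresholds are illustrative assumptions, not the project's actual algorithm.

```python
import numpy as np

def intersect_plane(ray_origin, ray_dir, plane_point, plane_normal):
    """Return the ray/plane intersection point, or None if parallel or behind."""
    denom = np.dot(ray_dir, plane_normal)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_point - ray_origin, plane_normal) / denom
    return ray_origin + t * ray_dir if t > 0 else None

def assign_gesture(ray_origin, ray_dir, surfaces, artifacts, max_dist=0.15):
    """Assign a pointing ray to the closest artifact within max_dist meters."""
    best = None
    for plane_point, plane_normal in surfaces:
        hit = intersect_plane(np.asarray(ray_origin, float),
                              np.asarray(ray_dir, float),
                              np.asarray(plane_point, float),
                              np.asarray(plane_normal, float))
        if hit is None:
            continue
        for name, pos in artifacts.items():
            d = np.linalg.norm(hit - np.asarray(pos, float))
            if d <= max_dist and (best is None or d < best[1]):
                best = (name, d)
    return best  # None means: suppress the alert rather than guess

# Hypothetical room: one tabletop plane (z = 0.75 m) and two cards on it.
surfaces = [([0, 0, 0.75], [0, 0, 1])]
artifacts = {"card_A": [0.3, 0.2, 0.75], "card_B": [0.9, 0.6, 0.75]}
hit = assign_gesture(ray_origin=[0.3, -0.5, 1.4], ray_dir=[0.0, 0.5, -0.45],
                     surfaces=surfaces, artifacts=artifacts)
print(hit)  # -> ('card_A', 0.02...) since the ray lands near card_A
```

Rejecting hits beyond the threshold, instead of always reporting the nearest artifact, is what keeps false alerts for blind users low in this sketch.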