Communications

The communication presentations (keynote talks and posters) will be published continuously in May and June.

Jean Rouat
Robust Auditory Scene Analysis in interaction with sensory-motor modalities for Human Language Understanding

Dr. Jean Rouat is Professor at the Université de Sherbrooke in Canada. His research interests include Intelligent Systems, Artificial Intelligence, Machine Learning, Speech and Audio Signal Processing, Visual Processing, Computational Neuroscience, Neurophysiological Signal Analysis, Human-System Interfaces, and Sensory Substitution.

Abstract

Most Human Language Understanding systems are based on statistical and machine learning pattern matching techniques, implemented either as graphical models (HMMs, language models, etc.) or as formal neural networks that encode the firing rate of neurons (convolutional neural networks, deep neural networks, Boltzmann machines, etc.). Impressive practical classification and pattern matching results can now be reached thanks to recent developments in computing power and hardware implementations, notably on GPUs (Graphics Processing Units).

Human communication through language is essential to our survival and interacts strongly with motor control, vision, emotion, etc. Abstract interpretation of the acoustic signal (semantics, emotion, etc.) engages most areas of our brain (motor, visual, planning, and other areas). One of the striking capabilities of the brain is auditory scene analysis: the capacity to decompose auditory scenes into auditory streams and objects. Its best-known manifestation is the cocktail party effect, which is in practice only a side effect of a more complex and general process involving our multisensory brain. In fact, auditory scene analysis is also fundamental to the acquisition of a new language and to the understanding of speech and sounds. Our ability to analyse auditory scenes by integrating visual and motor feedback is fundamental to how we build up human language understanding. Taking this feedback and these multisensory interactions into account in human language understanding and acquisition systems cannot be reduced to pattern matching or classification algorithms. Dynamic feedback, the active cochlea, attentional processes, anticipation, intention, planning, and other mechanisms at work in the multisensory brain have to be modelled and implemented to obtain better auditory scene analysis modules within human language understanding systems.

The poster discusses potential research directions and solutions for the design of better human language understanding systems that comprise robust auditory scene analysis modules interacting with the other sensory-motor modalities of the brain. It also discusses software and hardware implementations in relation to state-of-the-art machine learning and NPUs (Neural Processing Units).

Human Language Understanding
June 18

Poster

Laurence Devillers
Affective and social dimensions in spoken interaction

Dr. Laurence Devillers is Professor at Paris-Sorbonne University (Paris IV) in France. Her research interests include Human-Human and Human-Machine Interaction, Emotion Detection (from audio and multimodal signals), Social Interaction, Social Simulation, and Social Signals.

Abstract

In order to better understand spoken language and to design social interaction with machines, experimental grounding is required to study expressions of emotion, attention and intention cues during spoken interaction. Robotics is a relevant framework for designing such applications, owing to robots' learning capabilities and skills. Many research topics linked to spoken language understanding will be presented, along with some new challenges related to Multimodal, Multi-Party, Real-World Human-Robot Interaction.

Human Language Understanding
June 18

Poster

Laurent Besacier
Presentation of the Laboratory of Informatics of Grenoble

Dr. Laurent Besacier is Professor at Université Joseph Fourier and a member of the Laboratory of Informatics of Grenoble (LIG). His research interests include Speech and Language Understanding, Automatic Speech Transcription and Translation, Computer-Aided Translation, Processing of Under-Resourced Languages, Speech Processing/Analysis and Interactions in Ambient Environments, Modeling of Social Affects, and Automatic and Interactive Meaning Clarification Processes.

Abstract

This poster presents the LIG laboratory as a potential partner for a CHIST-ERA project on the topic 'Human Language Understanding', as well as a list of potential project ideas that LIG would be interested in participating in or coordinating.

Human Language Understanding
June 18

Poster

Mark Cieliebak
Sentiment Analysis for free – Can you detect positive texts in a language that you don't understand?

Dr. Mark Cieliebak is a researcher and lecturer at Zurich University of Applied Sciences (ZHAW). His expertise includes Efficient Algorithms, Software Engineering, and Data Analysis.

Abstract

Sentiment Analysis Systems (SAS) are typically built with substantial human effort: lexica are assembled, documents are tagged manually, POS tagging requires a thorough understanding of the language, and so on. Such language resources exist for common languages such as English, German or Chinese. But what if you want to build an SAS for a "new" language, one with poor or no language resources? We want to use large sets of opinionated documents (e.g. Amazon reviews or tweets) to fully automatically create an SAS for any language, or even a dialect.
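
The abstract gives no implementation details, but the core idea, distant supervision from ratings, admits a compact sketch: the star ratings that already accompany reviews serve as sentiment labels, and character n-grams keep the features language-agnostic, so no lexicon, manual tagging or POS tagger is needed. The following Python snippet (using scikit-learn, with a four-line toy corpus standing in for millions of reviews) is one plausible realisation under these assumptions, not the authors' actual system.

# Distant supervision sketch: review star ratings act as automatic
# sentiment labels; character n-grams avoid any language-specific
# tokenisation, lexicon or POS tagging.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus of (review text, star rating) pairs; in practice this would
# be a large set of Amazon reviews or tweets in the target language.
corpus = [
    ("ottimo prodotto, lo consiglio", 5),
    ("funziona benissimo, sono soddisfatto", 5),
    ("pessima qualità, soldi sprecati", 1),
    ("non funziona, da evitare", 1),
]

# 4-5 stars -> positive (1), 1-2 stars -> negative (0); ambiguous
# 3-star reviews would simply be discarded.
texts = [text for text, _ in corpus]
labels = [1 if stars >= 4 else 0 for _, stars in corpus]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(texts, labels)

# Classify unseen text: 1 = positive, 0 = negative.
print(model.predict(["prodotto pessimo, non lo consiglio"]))

With a realistic corpus the same pipeline transfers unchanged to any language or dialect for which rated opinionated text exists, which is exactly the "for free" setting the abstract describes.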

Human Language Understanding
June 18

Poster