SmartWeb handheld—multimodal interaction with ontological knowledge bases and semantic web services

D Sonntag, R Engel, G Herzog, A Pfalzgraf… - Artificial Intelligence for …, 2007 - Springer
SmartWeb aims to provide intuitive multimodal access to a rich selection of Web-based
information services. We report on the current prototype with a smartphone client interface to …

3D head pose estimation without feature tracking

Q Chen, H Wu, T Fukumoto… - Proceedings Third IEEE …, 1998 - ieeexplore.ieee.org
We present a robust approach to estimate the 3D pose of human heads in a single image. In
contrast with other research, this method only makes use of the information about the skin …

Dynamic product interfaces: A key element for ambient shopping environments

W Maass, S Janzen - BLED 2007 Proceedings, 2007 - aisel.aisnet.org
By embedding information technologies into tangible products, a new class of products is
created that we call smart products. Smart products use product information in product …

Dialogue systems go multimodal: The smartkom experience

W Wahlster - SmartKom: foundations of multimodal dialogue …, 2006 - Springer
Multimodal dialogue systems exploit one of the major characteristics of human-human
interaction: the coordinated use of different modalities. Allowing all of the modalities to refer …

Building multimodal applications with EMMA

M Johnston - Proceedings of the 2009 international conference on …, 2009 - dl.acm.org
Multimodal interfaces combining natural modalities such as speech and touch with dynamic
graphical user interfaces can make it easier and more effective for users to interact with …

[BOOK][B] Ontologies and adaptivity in dialogue for question answering

D Sonntag - 2010 - books.google.com
Question answering (QA) has become one of the fastest growing topics in computational
linguistics and information access. To advance research in the area of dialogue-based …

To talk or not to talk with a computer: Taking into account the user's focus of attention

A Batliner, C Hacker, E Nöth - Journal on multimodal user interfaces, 2008 - Springer
If no specific precautions are taken, people talking to a computer can—the same way as
while talking to another human—speak aside, either to themselves or to another person. On …

Multisensor-pipeline: a lightweight, flexible, and extensible framework for building multimodal-multisensor interfaces

M Barz, OS Bhatti, B Lüers, A Prange… - … Publication of the 2021 …, 2021 - dl.acm.org
We present the multisensor-pipeline (MSP), a lightweight, flexible, and extensible framework
for prototyping multimodal-multisensor interfaces based on real-time sensor input. Our open …
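
The snippet only names MSP at a high level; as a rough illustration of the source-processor-sink pattern such pipeline frameworks are built around, here is a minimal, self-contained Python sketch. All names (MouseSource, smoothing_processor, console_sink) are hypothetical and do not reflect the actual multisensor-pipeline API.

    # Illustrative source -> processor -> sink pipeline; not the MSP API.
    import queue
    import threading
    import time

    class MouseSource(threading.Thread):
        """Hypothetical sensor source emitting synthetic (x, y) samples."""
        def __init__(self, out_queue, n_samples=5):
            super().__init__()
            self.out_queue = out_queue
            self.n_samples = n_samples

        def run(self):
            for i in range(self.n_samples):
                self.out_queue.put({"x": i, "y": i * 2, "t": time.time()})
            self.out_queue.put(None)  # sentinel: stream finished

    def smoothing_processor(in_queue, out_queue, alpha=0.5):
        """Exponentially smooths the x coordinate before passing samples on."""
        smoothed = None
        while True:
            sample = in_queue.get()
            if sample is None:
                out_queue.put(None)
                break
            smoothed = sample["x"] if smoothed is None else alpha * sample["x"] + (1 - alpha) * smoothed
            sample["x_smoothed"] = smoothed
            out_queue.put(sample)

    def console_sink(in_queue):
        """Consumes processed samples and prints them."""
        while True:
            sample = in_queue.get()
            if sample is None:
                break
            print(sample)

    if __name__ == "__main__":
        q1, q2 = queue.Queue(), queue.Queue()
        source = MouseSource(q1)
        processor = threading.Thread(target=smoothing_processor, args=(q1, q2))
        sink = threading.Thread(target=console_sink, args=(q2,))
        for t in (source, processor, sink):
            t.start()
        for t in (source, processor, sink):
            t.join()

Each stage runs in its own thread and communicates over queues, which is one common way such real-time multimodal-multisensor pipelines decouple sensor input from processing and output.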

[PDF][PDF] Towards a separation of pragmatic knowledge and contextual information

R Porzel, HP Zorn, B Loos… - Contexts and Ontologies …, 2006 - ftp-serv.inrialpes.fr
In this paper we address the question of how traditional approaches to modeling world
knowledge, i.e., to model shared conceptualizations of specific domains of interest via formal …

[PDF][PDF] To talk or not to talk with a computer: On-Talk vs. Off-Talk

A Batliner, C Hacker, E Nöth - 2006 - opus.bibliothek.uni-augsburg.de
In this paper, we present a database with emotional children's speech in a human-robot
scenario: the children were giving instructions to Sony's pet robot dog AIBO, with AIBO …