June 2002
Dear Readers,
This month’s focus is on multimodal interaction and how it relates
to VoiceXML. In February this year, the World Wide Web Consortium
(W3C) formed a new
Multimodal Interaction Working Group. The charter
of this group is to extend the user interface of web applications
to support multimodal interactions by defining specifications to
synchronize multiple modalities and devices. Here at the VoiceXML
Forum we are
supportive of this effort and believe that building
upon VoiceXML is the logical path for multimodal development.
Deborah Dahl authors our first feature article, entitled
“W3C Natural Language Semantics Markup”. Deborah’s article
provides us with an update on the progress made so far in
the new W3C MMI Working Group, as well as an overview of
what to expect in the future. Deborah serves as chair of the
MMI Working Group, and is with Unisys.
T.V. Raman of IBM Research authors our second feature article.
This article introduces us to XHTML+Voice, which is a collection
of mature WWW technologies such as XHTML, VoiceXML, SSML, SRGS, and
XML-Events, all integrated via XHTML Modularization to bring
multimodal interaction to web applications. The XHTML+Voice
specification was formally submitted to the W3C by IBM, Motorola
and Opera Software.
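For readers curious what that integration looks like in practice, here is a rough sketch of the X+V pattern: a VoiceXML form declared in the XHTML head and attached to a visual control through XML Events. The element names, identifiers, and URIs below are illustrative only; see T.V. Raman's article for the real details.

<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:vxml="http://www.w3.org/2001/vxml"
      xmlns:ev="http://www.w3.org/2001/xml-events">
  <head>
    <title>X+V sketch</title>
    <!-- A VoiceXML dialog declared alongside the visual markup -->
    <vxml:form id="ask_city">
      <vxml:field name="city">
        <vxml:prompt>Which city are you flying to?</vxml:prompt>
        <vxml:grammar src="city.grxml" type="application/srgs+xml"/>
      </vxml:field>
    </vxml:form>
  </head>
  <body>
    <form action="search.cgi">
      <!-- XML Events wires the voice dialog to the focus event on this field -->
      <input type="text" name="city" ev:event="focus" ev:handler="#ask_city"/>
    </form>
  </body>
</html>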
In the First Words column this month, Rob Marchand introduces
us to the shadowy world of shadow variables in VoiceXML.
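If you have not bumped into shadow variables before, here is a tiny, hypothetical fragment showing the idea: when a field is filled, the interpreter also sets a read-only variable named fieldname$ whose properties (such as utterance and confidence) describe how the value was collected. Rob's column covers the full set.

<field name="drink">
  <prompt>Would you like coffee or tea?</prompt>
  <grammar src="drinks.grxml" type="application/srgs+xml"/>
  <filled>
    <!-- drink$ is the shadow variable the interpreter fills in -->
    <if cond="drink$.confidence &lt; 0.5">
      <prompt>Did you say <value expr="drink$.utterance"/>?</prompt>
      <clear namelist="drink"/>
    <else/>
      <prompt>One <value expr="drink"/>, coming right up.</prompt>
    </if>
  </filled>
</field>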
Don’t hard-code those audio URIs in your VoiceXML applications!
Learn how to do it right by reading Matt Oshry’s Speak-n-Listen
column. Be sure to keep sending Matt your tough VoiceXML
questions by emailing
speak.and.listen@voicexmlreview.org, and look for the answers
in future issues of the VoiceXML Review.
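In the meantime, one common alternative to a hard-coded src attribute is to build the URI at runtime with the expr attribute of <audio>, keeping the base location in a single variable. The variable name and URL here are made up for illustration; Matt's column has the full story.

<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <!-- Keep the prompt location in one place instead of scattering literal URIs -->
  <var name="audio_base" expr="'http://media.example.com/prompts/'"/>
  <form id="welcome">
    <block>
      <!-- expr is evaluated at runtime; the text inside is the TTS fallback -->
      <audio expr="audio_base + 'welcome.wav'">Welcome to our demo application.</audio>
    </block>
  </form>
</vxml>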
Sincerely,
Jonathan Engelsma
Editor-in-Chief, VoiceXML Review
Jonathan.Engelsma@voicexmlreview.org