PISCATAWAY, N.J., Feb. 25, 2002 — The VoiceXML Forum today announced support for the World Wide Web Consortium's (W3C) Multimodal Interaction Activity (http://www.w3.org/2002/mmi/), a newly formed W3C effort that will develop standards and software for accessing web applications and services by voice, keyboard, and keypad, across mobile phones and other devices. VoiceXML Forum members, including AT&T, IBM, Lucent, Motorola, Nuance, and VoiceGenie, will participate in the W3C's efforts.
Multimodal applications are set to grow in importance in the coming years, bringing benefits to businesses, developers, and end-users. Today, web applications can generally be accessed through only one channel at a time; a user cannot, for example, combine voice and keypad input on a wireless handheld device. With upcoming technology, devices from the desktop computer to the handheld PDA, and from the automobile to the cellular phone, will support multiple modes of access and communication, giving users true anywhere, anytime access.
Developers will then need an open, standards-based way to write applications that accept multiple types of input and output simultaneously, and that let devices on different platforms interoperate. For instance, a business traveler will be able to call an automated call center, ask for flight information by speech, and have that information appear as text on a handheld device.
"Multimodal applications are the next step in the growth of voice technology. A key component in making anyplace, anywhere access more convenient and real, it allows end-users to use the most suitable form of input and output no matter what situation they’re in," said Bill Dykas Chairman of the VoiceXML Forum, "Standards that use existing languages that developers are already familiar with are key to this."
This also means that companies will not need to hire separate groups of developers with different skill sets, saving resources, time, and money. These new applications will interoperate with other multimodal applications, and they can be extended easily from existing web or voice applications, since many of those are already written in XHTML or VoiceXML.
Examples of multimodal applications:
- Mobile stock trading: using voice on a handheld device to request a stock quote, having the quote appear as a chart, and submitting a trade by voice (a minimal voice-dialog sketch follows below).
- Web-based auctions: using a handheld device to view an item, then bidding via voice commands.
- Car navigation systems: voice-enabled navigation devices.
- Web browsers in automobiles: while the vehicle is moving, the device automatically shuts off the graphical browser and switches to voice so the driver is not distracted.

The VoiceXML Forum, through its more than 600 members, is developing products and deploying applications built on the VoiceXML standard, now in Version 2.0 review. Extending VoiceXML to support multimodal applications is a natural step that will speed the expansion of combined voice and data applications. VoiceXML Forum members and supporters of VoiceXML will contribute to the W3C's efforts to develop multimodal standards; the Forum believes extending VoiceXML is the logical path for multimodal development.
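For developers, the voice side of the stock-trading example above might begin as an ordinary VoiceXML 2.0 form. The following is a minimal illustrative sketch only; the field name, grammar file, and submit URL are hypothetical assumptions, not part of any Forum specification.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal sketch of a voice dialog for the stock-quote example above.
     The tickers.grxml grammar and the quote.jsp target are hypothetical
     placeholders, not real Forum or W3C artifacts. -->
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="quote">
    <field name="ticker">
      <prompt>Which stock would you like a quote for?</prompt>
      <grammar src="tickers.grxml" type="application/srgs+xml"/>
      <filled>
        <!-- Hand the recognized ticker symbol to a server page, which
             could also render the quote as a chart on the handheld. -->
        <submit next="http://example.com/quote.jsp" namelist="ticker"/>
      </filled>
    </field>
  </form>
</vxml>

In a multimodal setting, the same server page that receives the spoken ticker symbol could return the chart to the device's visual display, which is the kind of combined voice and data interaction the W3C activity is chartered to standardize.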
More information on the VoiceXML Forum can be found at: https://voicexml.org.
# # #