The W3C Voice Browser Activity has published the Candidate Recommendation of Semantic Interpretation for Speech Recognition (SISR) Version 1.0. According to the document,
This document defines the process of Semantic Interpretation for Speech Recognition and the syntax and semantics of semantic interpretation tags that can be added to speech recognition grammars to compute information to return to an application on the basis of rules and tokens that were matched by the speech recognizer. In particular, it defines the syntax and semantics of the contents of Tags in the Speech Recognition Grammar Specification [SRGS].
The results of semantic interpretation describe the meaning of a natural language utterance. The current specification represents this information as an ECMAScript object, and defines a mechanism to serialize the result into XML. The W3C Multimodal Interaction Activity [MMI] is defining an XML data format [EMMA] for containing and annotating the information in user utterances. It is expected that the EMMA language will be able to integrate results generated by Semantic Interpretation for Speech Recognition.
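For illustration, a minimal SRGS grammar using SISR tags might look like the following sketch. The grammar content (phrases and result values) is hypothetical; the `tag-format` value `semantics/1.0` is the identifier the SISR specification defines for its ECMAScript tag content.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative SRGS grammar: the <tag> elements contain ECMAScript
     that assigns a normalized value to the rule's result. -->
<grammar xmlns="http://www.w3.org/2001/06/grammar" version="1.0"
         xml:lang="en-US" root="drink" tag-format="semantics/1.0">
  <rule id="drink">
    <one-of>
      <item>coke <tag>out = "coca-cola";</tag></item>
      <item>coca cola <tag>out = "coca-cola";</tag></item>
    </one-of>
  </rule>
</grammar>
```

Whichever phrase the recognizer matches, the semantic interpretation result is the single ECMAScript value `"coca-cola"`, which the application receives instead of the raw utterance text.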
Comments are due by February 20.
The W3C Voice Browser Activity has also published the last call working draft of Pronunciation Lexicon Specification (PLS) Version 1.0. This is an XML syntax for specifying pronunciation lexicons for Automatic Speech Recognition and Speech Synthesis engines in voice browser applications. This draft makes support for the IPA phonetic alphabet mandatory and adds RELAX NG and W3C XML Schema Language schemas. "There is also a new section on multiple pronunciations, clarifying the use of the 'prefer' attribute. A lot of the previous text has been corrected or clarified, and a glossary of terms has been added." Comments are due by March 15.
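As a sketch of the draft syntax, a PLS lexicon entry with multiple pronunciations might look like this. The word and the IPA transcriptions are illustrative examples, not taken from the specification:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative PLS lexicon: one grapheme with two IPA pronunciations;
     the 'prefer' attribute marks the preferred one, per the draft. -->
<lexicon version="1.0"
         xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
         alphabet="ipa" xml:lang="en-US">
  <lexeme>
    <grapheme>tomato</grapheme>
    <phoneme prefer="true">təˈmeɪtoʊ</phoneme>
    <phoneme>təˈmɑːtoʊ</phoneme>
  </lexeme>
</lexicon>
```

A synthesis engine consulting this lexicon would use the preferred transcription when speaking the word, while a recognition engine could accept either pronunciation.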