
Multilanguage speech recognition in Windows Phone 8

Tags: WP8, Windows Phone 8, Speech API


I decided to have a look at the Speech API of Windows Phone 8. To be more precise, I wanted to try speech recognition.

It is quite straightforward…when everything is in place.


First, turn on the speech recognition service on your phone:


Don't forget to add the language you want to use and/or test:

Then, you have to set the Microphone and Speech recognition capabilities in your app:
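In practice this means checking (or adding) the two capabilities in WMAppManifest.xml; a minimal fragment looks like this:

```xml
<Capabilities>
  <Capability Name="ID_CAP_SPEECH_RECOGNITION" />
  <Capability Name="ID_CAP_MICROPHONE" />
</Capabilities>
```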

We're ready now!


Basic recognition


There are two ways to add speech recognition to your app: using the built-in WP8 UI with SpeechRecognizerUI, which looks like this:

or using your own UI with SpeechRecognizer.

I’ll use my own UI, but the principles are the same with the basic UI.
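For reference, the built-in UI route is only a few lines; here is a sketch (the ListenText/ExampleText strings are placeholders of my own):

```csharp
var recognizerUI = new SpeechRecognizerUI();
recognizerUI.Settings.ListenText = "Say a command...";        // prompt shown on screen
recognizerUI.Settings.ExampleText = "Up, Down, Left, Right";  // example shown under the prompt

SpeechRecognitionUIResult uiResult = await recognizerUI.RecognizeWithUIAsync();
if (uiResult.ResultStatus == SpeechRecognitionUIStatus.Succeeded)
    MessageBox.Show(uiResult.RecognitionResult.Text);
```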

var speechrecognizer = new SpeechRecognizer();

var result = await speechrecognizer.RecognizeAsync();

// check if not rejected
if (result.TextConfidence != SpeechRecognitionConfidence.Rejected)
    MessageBox.Show(string.Format("You said {0} (With a confidence of {1})", result.Text, result.TextConfidence));

So I instantiate a SpeechRecognizer, then I (asynchronously) wait for it to recognize something.

Then, if the confidence is not too low, I show a MessageBox with the recognized text and the confidence value. It can't get simpler than that.

There are not many settings you can change, besides the grammars (I'll talk about them later) and some timeouts in the Settings property.
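For example, the timeouts can be tweaked like this (the values below are just illustrative):

```csharp
// All three are TimeSpan properties on SpeechRecognizerSettings
_speechRecognizer.Settings.InitialSilenceTimeout = TimeSpan.FromSeconds(5); // max silence before speech starts
_speechRecognizer.Settings.BabbleTimeout = TimeSpan.FromSeconds(3);         // max background noise/chatter
_speechRecognizer.Settings.EndSilenceTimeout = TimeSpan.FromSeconds(1);     // silence that ends the utterance
```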


Grammars

To improve recognition, you can use grammars. To make it short, a grammar is a list of words/sentences. You can have several grammars, and you can enable/disable them individually.

There are two pre-defined grammars: one for dictation (the default) and one for web searches.

To activate them:
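A sketch of how this looks in code (the keys "dictation" and "search" are arbitrary names of my choosing):

```csharp
// Each predefined grammar is added to the set under a key of your choice
_speechRecognizer.Grammars.AddGrammarFromPredefinedType("dictation", SpeechPredefinedGrammar.Dictation);
_speechRecognizer.Grammars.AddGrammarFromPredefinedType("search", SpeechPredefinedGrammar.WebSearch);
```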


To optimize recognition, you can create your own, with words/sentences that make sense in the context of your application.

To use your own, you can add an IEnumerable<string>:

_speechRecognizer.Grammars.AddGrammarFromList("CustomList", new List<string> {"Hello World!", "Bye"});

Or you can add an SRGS file, which is a standard XML format for speech recognition grammars.

Here is an example:

<?xml version="1.0" encoding="utf-8" ?>

<grammar version="1.0" xml:lang="en-US" root="commands"
         tag-format="semantics/1.0"
         xmlns="http://www.w3.org/2001/06/grammar">

  <!--Sample SRGS grammar to show syntax.
      The rule element defines a grammar rule. A rule element
      contains text or XML elements that define what speakers can
      say, and the order in which they can say it.-->

  <rule id="commands" scope="public">
    <one-of>
      <item> Up </item>
      <item> Down </item>
      <item> Right </item>
      <item> Left </item>
      <item> Stop </item>
      <item> Restart </item>
    </one-of>
  </rule>
</grammar>


Then, to load it :

_speechRecognizer.Grammars.AddGrammarFromUri("GrammarEN", new Uri("ms-appx:///SRGSGrammar-en.xml", UriKind.Absolute));

To disable a grammar, you can do it this way:

_speechRecognizer.Grammars["GrammarEN"].Enabled = false;


Multilanguage recognition

If you want to make a multilanguage speech recognition app, there are two steps.

First, set the recognizer to the language you need (be sure you have installed the language, see the introduction):

var _englishRecognizer = InstalledSpeechRecognizers.All.FirstOrDefault(d => d.Language.ToUpper() == "EN-US");

var _speechRecognizer = new SpeechRecognizer();
_speechRecognizer.SetRecognizer(_englishRecognizer);


Now that it is set, you can add the grammar for that language.

Create an SRGS file, and don't forget to set the language (the xml:lang attribute):


_speechRecognizer.Grammars.AddGrammarFromUri("GrammarEN", new Uri("ms-appx:///SRGSGrammar-en.xml", UriKind.Absolute));
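For the second language, the same two steps apply. Here is a sketch for French, assuming a SRGSGrammar-fr.xml file with xml:lang="fr-FR" exists in the project:

```csharp
// Pick the installed French recognizer, then load the matching grammar
var _frenchRecognizer = InstalledSpeechRecognizers.All
    .FirstOrDefault(d => d.Language.Equals("fr-FR", StringComparison.OrdinalIgnoreCase));

if (_frenchRecognizer != null)
{
    _speechRecognizer = new SpeechRecognizer();
    _speechRecognizer.SetRecognizer(_frenchRecognizer);
    _speechRecognizer.Grammars.AddGrammarFromUri("GrammarFR",
        new Uri("ms-appx:///SRGSGrammar-fr.xml", UriKind.Absolute));
}
```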


For a list of error codes of the Speech Recognition API, check the MSDN documentation.

Sometimes the exception message I got was something like "no exception detail available". It seems you can get more information by going to the DEBUG > Exceptions menu and checking "Thrown" for Common Language Runtime Exceptions:
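Wrapping the recognition call in a try/catch also helps; Exception.HResult (public since .NET 4.5) carries the speech error code:

```csharp
try
{
    var result = await _speechRecognizer.RecognizeAsync();
    MessageBox.Show(result.Text);
}
catch (Exception ex)
{
    // HResult maps to the error codes listed in the Speech API documentation
    MessageBox.Show(string.Format("Speech error 0x{0:X}: {1}", ex.HResult, ex.Message));
}
```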



I created a sample app where you can control a square using commands in English (Up, Down, Left, Right, Stop, Restart) or French (Haut, Bas, Droite, Gauche, Stop, Recommencer). You can change the language in the menu.
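Dispatching the recognized text could look like this sketch (MoveSquare, StopSquare, and ResetSquare are hypothetical helpers, not from the sample):

```csharp
// Hypothetical dispatcher: the English and French command words trigger the same action
switch (result.Text.Trim().ToLowerInvariant())
{
    case "up":      case "haut":        MoveSquare(0, -10); break;
    case "down":    case "bas":         MoveSquare(0, 10);  break;
    case "left":    case "gauche":      MoveSquare(-10, 0); break;
    case "right":   case "droite":      MoveSquare(10, 0);  break;
    case "stop":                        StopSquare();       break; // same word in both languages
    case "restart": case "recommencer": ResetSquare();      break;
}
```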



