Microsoft

Speech Service

  1. Retrain a previously trained model in the Custom Speech portal

    I had previously trained a custom speech model and am trying to retrain it, but I am not seeing an option to do so; the portal only gives me an option to train the baseline model.

    3 votes
    1 comment  ·  Custom Speech
  2. Properties to define : Max Audio Recognition Time from Microphone OR Stop Recognition on silence

    Hello,

    I am doing speech recognition using the Android SDK and plan to move to containers in the future. Stopping on silence is the default, per the documentation. How do I define a maximum audio recognition time, as below:

    If the user has spoken for more than 15 seconds, the SDK should automatically stop the recognizer on the Android end. If the user has spoken for less than 15 seconds and was silent in between, it should be based on silence detection. The Speech SDK should stop the microphone on Android either on silence detection (when spoken…
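    Until such a property exists, one client-side workaround is to run a watchdog timer alongside the recognizer and stop it when the cap is reached. The sketch below uses a stand-in recognizer object for illustration; the real SDK's start/stop calls would take its place.

    ```python
    import threading

    class RecognitionWatchdog:
        """Stops a recognizer after max_seconds, regardless of silence detection."""

        def __init__(self, recognizer, max_seconds=15.0):
            self.recognizer = recognizer
            self.timer = threading.Timer(
                max_seconds, recognizer.stop_continuous_recognition
            )

        def start(self):
            self.recognizer.start_continuous_recognition()
            self.timer.start()

        def cancel(self):
            # Call this from the recognizer's session-stopped callback so the
            # timer does not fire after silence detection already ended things.
            self.timer.cancel()

    class FakeRecognizer:
        """Stand-in for the real SDK recognizer, for demonstration only."""
        def __init__(self):
            self.running = False
        def start_continuous_recognition(self):
            self.running = True
        def stop_continuous_recognition(self):
            self.running = False

    rec = FakeRecognizer()
    dog = RecognitionWatchdog(rec, max_seconds=0.1)  # short cap for the demo
    dog.start()
    threading.Event().wait(0.3)  # simulate the user still talking past the cap
    print(rec.running)           # False: the watchdog stopped recognition
    ```

    If silence detection fires first, cancelling the timer from the session-stopped callback keeps the two stop paths from racing.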

    1 vote
    1 comment  ·  Speech to Text
  3. Segmentation length config for recognized result

    from https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/610#event-3282436941

    The reason for this request is that the recognized output text is sometimes too long to render cleanly, e.g. in a mobile app limited to 2 display lines of at most 20 characters each (or less).

    E.g.

    Utterance: I will go to bookstore this afternoon to check if any new arrivals. After that Jack will pick up me there to gym for practice. We need to prepare a match in two weeks. Dinner will be taken in gym to save commute overhead. I will arrive home around 8:30 in the evening.

    Current result from speech sdk:
    RECOGNIZED: Text=I…
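    Until a segmentation-length setting exists in the SDK, a client-side fallback is to wrap each recognized result into display-width lines before rendering; a minimal sketch:

    ```python
    import textwrap

    def render_lines(recognized_text, width=20):
        """Client-side fallback: wrap a long recognized result into
        display lines of at most `width` characters, breaking on spaces."""
        return textwrap.wrap(recognized_text, width=width)

    lines = render_lines(
        "I will go to bookstore this afternoon to check if any new arrivals."
    )
    for line in lines:
        print(line)  # each line is at most 20 characters wide
    ```

    This only helps with display; a true segmentation config in the service would also keep timestamps aligned with the shorter results.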

    2 votes
    0 comments  ·  Speech to Text
  4. Fluency format of recognized result

    from https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/598#event-3275944556

    Suggest adding a fluency format for scenarios like formal meeting transcription, translation, etc., which do not expect spoken (disfluent) text forms.

    E.g. :

    Utterance: "i want to ah, to book a flight to Denver, i mean, to Boston, the day, the day after, after Monday. "

    RECOGNIZED: "I want to are to book a flight to Denver. I mean to Boston the day, the day after after Monday."

    Expected: " I want to book a flight to Boston the day after Monday."

    Thank you.

    1 vote
    0 comments  ·  Speech to Text
  5. Adding custom headers to speech to text Websocket requests

    Add the ability to attach custom headers to the speech-to-text SDK's requests so that intermediate servers can verify them to authenticate and authorize clients. This is required for container versions.
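    As a sketch of the desired capability: when talking to a container directly, the websocket could be opened with the third-party `websocket-client` package, which does accept extra headers. The endpoint URL and the `X-Proxy-Auth` header name below are illustrative assumptions, not part of the SDK.

    ```python
    # Build the header list an intermediate proxy would inspect. The
    # X-Proxy-Auth name is a made-up example; use whatever the proxy expects.
    def build_headers(subscription_key, proxy_token):
        return [
            f"Ocp-Apim-Subscription-Key: {subscription_key}",
            f"X-Proxy-Auth: {proxy_token}",
        ]

    # With websocket-client installed, the connection would look like:
    # import websocket
    # ws = websocket.create_connection(
    #     "wss://speech-container.example.com/speech/recognition/"
    #     "conversation/cognitiveservices/v1?language=en-US",
    #     header=build_headers("key123", "token456"),
    # )
    print(build_headers("key123", "token456"))
    ```

    Doing this bypasses the SDK's reconnection and audio handling, which is exactly why first-class header support in the SDK itself is being requested.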

    1 vote
    0 comments  ·  Speech to Text
  6. School

    Teacher

    1 vote
    0 comments  ·  Text to Speech
  7. Possibility to speak English in different foreign accents

    Like a German/French/Spanish/Italian person speaking English, all have their own accent. Perfect for applications like Air Traffic Control etc.

    6 votes
    2 comments  ·  Text to Speech
  8. Units pronounced incorrectly (e.g. mW read as megawatt)

    Some of the units are not pronounced correctly.
    In my case I am using "9 megawatt" written as "9 MW", and it is spoken correctly, but "9 milliwatt" (9 mW) is spoken incorrectly as "megawatt". Can you fix this, or provide separate access for customizing acronyms?

    2 votes
    1 comment  ·  Text to Speech
  9. Azure AD Authentication

    Support for authenticating to the Service using Azure AD to allow an alternative to keys.

    1 vote
    0 comments  ·  Speech to Text
  10. Language Support for Greek

    Is Greek on the roadmap? Please let me know when it is planned. If not, please add it.

    2 votes
    0 comments  ·  Speech to Text
  11. IVR Access to DirectLine Speech API

    How can an IVR system call this API? More details about the protocol used, and a sample, would be highly appreciated.

    1 vote
    2 comments  ·  Sample Requests
  12. About the display of the character acquired by SpeechToText (SpeechSDK)

    The results delivered by the SpeechRecognizer's Recognized event are not broken at punctuation marks, and sentences are concatenated even when the speaker changes.
    Therefore it is not possible to tell when the speaker changes.
    Please improve this so that an event is raised for each punctuation mark.
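    Until per-punctuation events exist, one client-side approximation is to split each Recognized result at sentence-ending punctuation and handle each chunk separately; a sketch that covers Western and CJK punctuation:

    ```python
    import re

    def split_at_punctuation(recognized_text):
        """Client-side workaround: break one long Recognized result into
        sentence-sized chunks after ., !, ? and their CJK equivalents."""
        parts = re.split(r'(?<=[.!?\u3002\uff01\uff1f])\s*', recognized_text)
        return [p for p in parts if p]

    chunks = split_at_punctuation("Hello there. How are you? Fine.")
    print(chunks)  # ['Hello there.', 'How are you?', 'Fine.']
    ```

    This only splits the text; it cannot recover speaker changes, which would still require a service-side feature such as diarization.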

    38 votes
    2 comments  ·  Speech to Text
  13. Show multilanguage translations on a single screen during a presentation

    Hello,
    I have the following use case (international wedding): The presenter speaks in French. I would like to show the German and Catalan live translations on a single screen for the audience. Is this possible? I know that there is the conversation feature readily available, but not everybody in the audience has a smartphone.

    1 vote
    2 comments  ·  Speech to Text
  14. REST API support for custom phrase lists

    REST API support for custom phrase lists

    3 votes
    0 comments  ·  Speech to Text
  15. ARM32 Support for Microsoft.CognitiveServices.Speech API (on Raspbian)

    I'd like to see the Speech API come to the Raspberry Pi, meaning FULL ARM32 support for the Linux implementation of this SDK. This would enable the Raspberry Pi maker community to use Azure Speech seamlessly in their devices.

    2 votes
    0 comments  ·  Text to Speech
  16. GovCloud - Cognitive Services Endpoints in overview

    It would be beneficial for GovCloud users to have secure access to a list of available service endpoints under the overview, instead of just the token-issuing endpoint, which forces one to comb through out-of-date documentation trying to find the secured GovCloud endpoints.

    1 vote
    0 comments  ·  Text to Speech
  17. Azure TTS Cognitive Service Voice Limit Issue

    I am very new to the Text-to-Speech (TTS) cognitive services of Microsoft Azure. I was able to successfully convert given text into an audio file using the Azure TTS service. It works fine when I have a single voice element in my SSML document. A working SSML example is:

    <speak version="1.0" xml:lang="en-US">
    
    <voice xml:lang="en-US" xml:gender="Male" name="en-US-Jessa24kRUS">
    Hello, this is my sample text to convert into audio?
    </voice>
    </speak>

    But when I have multiple voice tags (one per gender), it causes an error. The SSML for that is:

    <speak version="1.0" xml:lang="en-US">
    
    <voice xml:lang="en-US" xml:gender="Male" name="en-US-Guy24kRUS"> What’s your
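    For reference, several `<voice>` elements are allowed as siblings directly under `<speak>`; a common cause of errors is stray text sitting outside any `<voice>` element. A minimal well-formed two-voice sketch, checked for well-formedness before sending (voice names reused from the snippets above; the spoken lines are placeholders):

    ```python
    import xml.etree.ElementTree as ET

    # Two <voice> elements as direct siblings under <speak>; no text is
    # placed between them outside a voice element.
    ssml = (
        '<speak version="1.0" xml:lang="en-US">'
        '<voice xml:lang="en-US" xml:gender="Male" name="en-US-Guy24kRUS">'
        "What's your name?"
        '</voice>'
        '<voice xml:lang="en-US" xml:gender="Female" name="en-US-Jessa24kRUS">'
        'My name is Jessa.'
        '</voice>'
        '</speak>'
    )

    # Parse locally to catch malformed markup before calling the service.
    root = ET.fromstring(ssml)
    print(len(root.findall("voice")))  # 2
    ```

    Running the document through a local XML parse like this surfaces unclosed tags and misplaced text immediately, rather than as an opaque service error.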

    2 votes
    0 comments  ·  Text to Speech
  18. Voice has changed

    Something changed with the zh-CN-XiaoxiaoNeural voice - the 'sentiment' expression no longer works, as of a few days ago. Using exactly the same code, but the output is quite different (no longer sounds sentimental). Was an update posted? If so, where do we find a list of changes?

    1 vote
    0 comments  ·  Text to Speech
  19. LUIS Reference Grammar ID fails West Europe when included

    For West Europe region, the service is returning "Specified grammar type is not supported!" when we pass in LUIS reference grammar ID (a.k.a. IntentRecognizer).

    This causes the speech service to fail with a "WebSocket is already in CLOSING or CLOSED state." error when the LUIS reference grammar ID is passed in. If it is not included, the service works correctly.

    May be related to this issue in the Cognitive Services Speech SDK repo: https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/127

    3 votes
    1 comment  ·  Speech to Text
  20. Pronunciation support for words in Portuguese (Brazil)

    Custom Speech has good accuracy compared to other solutions, but it doesn't support pronunciation data for Brazilian Portuguese, so words important for business aren't recognized. It's a very important feature for business usage. The UI only offers support for the English language.

    2 votes
    2 comments  ·  Custom Speech

Feedback and Knowledge Base