longrunningrecognize(body, x__xgafv=None)
Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface.
recognize(body, x__xgafv=None)
Performs synchronous speech recognition: receive results after all audio has been sent and processed.
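A minimal construction sketch for the service object used in the examples
further down, assuming google-api-python-client is installed and
application-default credentials are configured. The version string is an
assumption based on the fields documented on this page:

    # Build a Speech-to-Text service object via API discovery.
    from googleapiclient.discovery import build

    # 'v1p1beta1' is assumed: fields such as alternativeLanguageCodes are
    # exposed on the beta surface; substitute the version you actually use.
    service = build('speech', 'v1p1beta1')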
longrunningrecognize(body, x__xgafv=None)
Performs asynchronous speech recognition: receive results via the
google.longrunning.Operations interface. Returns either an
`Operation.error` or an `Operation.response` which contains
a `LongRunningRecognizeResponse` message.
For more information on asynchronous speech recognition, see the
[how-to](https://cloud.google.com/speech-to-text/docs/async-recognize).
Args:
body: object, The request body. (required)
The object takes the form of:
{ # The top-level message sent by the client for the `LongRunningRecognize`
# method.
"audio": { # Contains audio data in the encoding specified in the `RecognitionConfig`. # *Required* The audio data to be recognized.
# Either `content` or `uri` must be supplied. Supplying both or neither
# returns google.rpc.Code.INVALID_ARGUMENT. See
# [content limits](/speech-to-text/quotas#content).
"content": "A String", # The audio data bytes encoded as specified in
# `RecognitionConfig`. Note: as with all bytes fields, proto buffers use a
# pure binary representation, whereas JSON representations use base64.
"uri": "A String", # URI that points to a file that contains audio data bytes as specified in
# `RecognitionConfig`. The file must not be compressed (for example, gzip).
# Currently, only Google Cloud Storage URIs are
# supported, which must be specified in the following format:
# `gs://bucket_name/object_name` (other URI formats return
# google.rpc.Code.INVALID_ARGUMENT). For more information, see
# [Request URIs](https://cloud.google.com/storage/docs/reference-uris).
},
"config": { # Provides information to the recognizer that specifies how to process the # *Required* Provides information to the recognizer that specifies how to
# process the request.
# request.
"languageCode": "A String", # *Required* The language of the supplied audio as a
# [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag.
# Example: "en-US".
# See [Language Support](/speech-to-text/docs/languages)
# for a list of the currently supported language codes.
"audioChannelCount": 42, # *Optional* The number of channels in the input audio data.
# ONLY set this for MULTI-CHANNEL recognition.
# Valid values for LINEAR16 and FLAC are `1`-`8`.
# Valid values for OGG_OPUS are `1`-`254`.
# Valid value for MULAW, AMR, AMR_WB and SPEEX_WITH_HEADER_BYTE is only `1`.
# If `0` or omitted, defaults to one channel (mono).
# Note: We only recognize the first channel by default.
# To perform independent recognition on each channel set
# `enable_separate_recognition_per_channel` to `true`.
"encoding": "A String", # Encoding of audio data sent in all `RecognitionAudio` messages.
# This field is optional for `FLAC` and `WAV` audio files and required
# for all other audio formats. For details, see AudioEncoding.
"enableAutomaticPunctuation": True or False, # *Optional* If 'true', adds punctuation to recognition result hypotheses.
# This feature is only available in select languages. Setting this for
# requests in other languages has no effect at all.
# The default 'false' value does not add punctuation to result hypotheses.
# Note: This is currently offered as an experimental service, complimentary
# to all users. In the future this may be exclusively available as a
# premium feature.
"alternativeLanguageCodes": [ # *Optional* A list of up to 3 additional
# [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags,
# listing possible alternative languages of the supplied audio.
# See [Language Support](/speech-to-text/docs/languages)
# for a list of the currently supported language codes.
# If alternative languages are listed, the recognition result will contain
# recognition in the most likely language detected, including the main
# language_code. The recognition result will include the language tag
# of the language detected in the audio.
# Note: This feature is only supported for Voice Command and Voice Search
# use cases and performance may vary for other use cases (e.g., phone call
# transcription).
"A String",
],
"enableSeparateRecognitionPerChannel": True or False, # This needs to be set to `true` explicitly and `audio_channel_count` > 1
# to get each channel recognized separately. The recognition result will
# contain a `channel_tag` field to state which channel that result belongs
# to. If this is not true, we will only recognize the first channel. The
# request is billed cumulatively for all channels recognized:
# `audio_channel_count` multiplied by the length of the audio.
"enableWordTimeOffsets": True or False, # *Optional* If `true`, the top result includes a list of words and
# the start and end time offsets (timestamps) for those words. If
# `false`, no word-level time offset information is returned. The default is
# `false`.
"enableSpeakerDiarization": True or False, # *Optional* If 'true', enables speaker detection for each recognized word in
# the top alternative of the recognition result using a speaker_tag provided
# in the WordInfo.
# Note: Use diarization_config instead. This field will be DEPRECATED soon.
"maxAlternatives": 42, # *Optional* Maximum number of recognition hypotheses to be returned.
# Specifically, the maximum number of `SpeechRecognitionAlternative` messages
# within each `SpeechRecognitionResult`.
# The server may return fewer than `max_alternatives`.
# Valid values are `0`-`30`. A value of `0` or `1` will return a maximum of
# one alternative. If omitted, the server returns a maximum of one.
"profanityFilter": True or False, # *Optional* If set to `true`, the server will attempt to filter out
# profanities, replacing all but the initial character in each filtered word
# with asterisks, e.g. "f***". If set to `false` or omitted, profanities
# won't be filtered out.
"useEnhanced": True or False, # *Optional* Set to true to use an enhanced model for speech recognition.
# If `use_enhanced` is set to true and the `model` field is not set, then
# an appropriate enhanced model is chosen if:
# 1. project is eligible for requesting enhanced models
# 2. an enhanced model exists for the audio
#
# If `use_enhanced` is true and an enhanced version of the specified model
# does not exist, then the speech is recognized using the standard version
# of the specified model.
#
# Enhanced speech models require that you opt in to data logging using
# instructions in the
# [documentation](/speech-to-text/docs/enable-data-logging). If you set
# `use_enhanced` to true and you have not enabled audio logging, then you
# will receive an error.
"sampleRateHertz": 42, # Sample rate in Hertz of the audio data sent in all
# `RecognitionAudio` messages. Valid values are: 8000-48000.
# 16000 is optimal. For best results, set the sampling rate of the audio
# source to 16000 Hz. If that's not possible, use the native sample rate of
# the audio source (instead of re-sampling).
# This field is optional for FLAC and WAV audio files, but is
# required for all other audio formats. For details, see AudioEncoding.
"diarizationSpeakerCount": 42, # *Optional*
# If set, specifies the estimated number of speakers in the conversation.
# If not set, defaults to `2`.
# Ignored unless `enable_speaker_diarization` is set to `true`.
# Note: Use diarization_config instead. This field will be DEPRECATED soon.
"enableWordConfidence": True or False, # *Optional* If `true`, the top result includes a list of words and the
# confidence for those words. If `false`, no word-level confidence
# information is returned. The default is `false`.
"model": "A String", # *Optional* Which model to select for the given request. Select the model
# best suited to your domain to get best results. If a model is not
# explicitly specified, then we auto-select a model based on the parameters
# in the RecognitionConfig.
# | Model | Description |
# | command_and_search | Best for short queries such as voice commands or voice search. |
# | phone_call | Best for audio that originated from a phone call (typically recorded at an 8khz sampling rate). |
# | video | Best for audio that originated from video or includes multiple speakers. Ideally the audio is recorded at a 16khz or greater sampling rate. This is a premium model that costs more than the standard rate. |
# | default | Best for audio that is not one of the specific audio models. For example, long-form audio. Ideally the audio is high-fidelity, recorded at a 16khz or greater sampling rate. |
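A hedged usage sketch for this method, using the `service` object built
above: submit audio by Cloud Storage URI, then poll the returned
long-running operation. The bucket/object name is illustrative, and the
polling call assumes the standard google.longrunning.Operations surface
(operations().get) mentioned above:

    import time

    # Async request: supply `uri` (a hypothetical GCS object) and omit
    # `content`; supplying both returns INVALID_ARGUMENT.
    body = {
        'config': {
            'languageCode': 'en-US',
            'encoding': 'LINEAR16',
            'sampleRateHertz': 16000,
        },
        'audio': {'uri': 'gs://bucket_name/object_name'},
    }
    operation = service.speech().longrunningrecognize(body=body).execute()

    # Poll until the Operation carries either `response` or `error`.
    while True:
        op = service.operations().get(name=operation['name']).execute()
        if op.get('done'):
            break
        time.sleep(5)
    result = op.get('response', op.get('error'))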
recognize(body, x__xgafv=None)
Performs synchronous speech recognition: receive results after all audio
has been sent and processed.
Args:
body: object, The request body. (required)
The object takes the form of:
{ # The top-level message sent by the client for the `Recognize` method.
"audio": { # Contains audio data in the encoding specified in the `RecognitionConfig`. # *Required* The audio data to be recognized.
# Either `content` or `uri` must be supplied. Supplying both or neither
# returns google.rpc.Code.INVALID_ARGUMENT. See
# [content limits](/speech-to-text/quotas#content).
"content": "A String", # The audio data bytes encoded as specified in
# `RecognitionConfig`. Note: as with all bytes fields, proto buffers use a
# pure binary representation, whereas JSON representations use base64.
"uri": "A String", # URI that points to a file that contains audio data bytes as specified in
# `RecognitionConfig`. The file must not be compressed (for example, gzip).
# Currently, only Google Cloud Storage URIs are
# supported, which must be specified in the following format:
# `gs://bucket_name/object_name` (other URI formats return
# google.rpc.Code.INVALID_ARGUMENT). For more information, see
# [Request URIs](https://cloud.google.com/storage/docs/reference-uris).
},
"config": { # Provides information to the recognizer that specifies how to process the # *Required* Provides information to the recognizer that specifies how to
# process the request.
# request.
"languageCode": "A String", # *Required* The language of the supplied audio as a
# [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag.
# Example: "en-US".
# See [Language Support](/speech-to-text/docs/languages)
# for a list of the currently supported language codes.
"audioChannelCount": 42, # *Optional* The number of channels in the input audio data.
# ONLY set this for MULTI-CHANNEL recognition.
# Valid values for LINEAR16 and FLAC are `1`-`8`.
# Valid values for OGG_OPUS are `1`-`254`.
# Valid value for MULAW, AMR, AMR_WB and SPEEX_WITH_HEADER_BYTE is only `1`.
# If `0` or omitted, defaults to one channel (mono).
# Note: We only recognize the first channel by default.
# To perform independent recognition on each channel set
# `enable_separate_recognition_per_channel` to `true`.
"encoding": "A String", # Encoding of audio data sent in all `RecognitionAudio` messages.
# This field is optional for `FLAC` and `WAV` audio files and required
# for all other audio formats. For details, see AudioEncoding.
"enableAutomaticPunctuation": True or False, # *Optional* If 'true', adds punctuation to recognition result hypotheses.
# This feature is only available in select languages. Setting this for
# requests in other languages has no effect at all.
# The default 'false' value does not add punctuation to result hypotheses.
# Note: This is currently offered as an experimental service, complimentary
# to all users. In the future this may be exclusively available as a
# premium feature.
"alternativeLanguageCodes": [ # *Optional* A list of up to 3 additional
# [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags,
# listing possible alternative languages of the supplied audio.
# See [Language Support](/speech-to-text/docs/languages)
# for a list of the currently supported language codes.
# If alternative languages are listed, the recognition result will contain
# recognition in the most likely language detected, including the main
# language_code. The recognition result will include the language tag
# of the language detected in the audio.
# Note: This feature is only supported for Voice Command and Voice Search
# use cases and performance may vary for other use cases (e.g., phone call
# transcription).
"A String",
],
"enableSeparateRecognitionPerChannel": True or False, # This needs to be set to `true` explicitly and `audio_channel_count` > 1
# to get each channel recognized separately. The recognition result will
# contain a `channel_tag` field to state which channel that result belongs
# to. If this is not true, we will only recognize the first channel. The
# request is billed cumulatively for all channels recognized:
# `audio_channel_count` multiplied by the length of the audio.
"enableWordTimeOffsets": True or False, # *Optional* If `true`, the top result includes a list of words and
# the start and end time offsets (timestamps) for those words. If
# `false`, no word-level time offset information is returned. The default is
# `false`.
"enableSpeakerDiarization": True or False, # *Optional* If 'true', enables speaker detection for each recognized word in
# the top alternative of the recognition result using a speaker_tag provided
# in the WordInfo.
# Note: Use diarization_config instead. This field will be DEPRECATED soon.
"maxAlternatives": 42, # *Optional* Maximum number of recognition hypotheses to be returned.
# Specifically, the maximum number of `SpeechRecognitionAlternative` messages
# within each `SpeechRecognitionResult`.
# The server may return fewer than `max_alternatives`.
# Valid values are `0`-`30`. A value of `0` or `1` will return a maximum of
# one alternative. If omitted, the server returns a maximum of one.
"profanityFilter": True or False, # *Optional* If set to `true`, the server will attempt to filter out
# profanities, replacing all but the initial character in each filtered word
# with asterisks, e.g. "f***". If set to `false` or omitted, profanities
# won't be filtered out.
"useEnhanced": True or False, # *Optional* Set to true to use an enhanced model for speech recognition.
# If `use_enhanced` is set to true and the `model` field is not set, then
# an appropriate enhanced model is chosen if:
# 1. project is eligible for requesting enhanced models
# 2. an enhanced model exists for the audio
#
# If `use_enhanced` is true and an enhanced version of the specified model
# does not exist, then the speech is recognized using the standard version
# of the specified model.
#
# Enhanced speech models require that you opt in to data logging using
# instructions in the
# [documentation](/speech-to-text/docs/enable-data-logging). If you set
# `use_enhanced` to true and you have not enabled audio logging, then you
# will receive an error.
"sampleRateHertz": 42, # Sample rate in Hertz of the audio data sent in all
# `RecognitionAudio` messages. Valid values are: 8000-48000.
# 16000 is optimal. For best results, set the sampling rate of the audio
# source to 16000 Hz. If that's not possible, use the native sample rate of
# the audio source (instead of re-sampling).
# This field is optional for FLAC and WAV audio files, but is
# required for all other audio formats. For details, see AudioEncoding.
"diarizationSpeakerCount": 42, # *Optional*
# If set, specifies the estimated number of speakers in the conversation.
# If not set, defaults to `2`.
# Ignored unless `enable_speaker_diarization` is set to `true`.
# Note: Use diarization_config instead. This field will be DEPRECATED soon.
"enableWordConfidence": True or False, # *Optional* If `true`, the top result includes a list of words and the
# confidence for those words. If `false`, no word-level confidence
# information is returned. The default is `false`.
"model": "A String", # *Optional* Which model to select for the given request. Select the model
# best suited to your domain to get best results. If a model is not
# explicitly specified, then we auto-select a model based on the parameters
# in the RecognitionConfig.
# | Model | Description |
# | command_and_search | Best for short queries such as voice commands or voice search. |
# | phone_call | Best for audio that originated from a phone call (typically recorded at an 8khz sampling rate). |
# | video | Best for audio that originated from video or includes multiple speakers. Ideally the audio is recorded at a 16khz or greater sampling rate. This is a premium model that costs more than the standard rate. |
# | default | Best for audio that is not one of the specific audio models. For example, long-form audio. Ideally the audio is high-fidelity, recorded at a 16khz or greater sampling rate. |
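A minimal synchronous sketch using the `service` object built above; it
illustrates the base64 note on the `content` field (JSON requests carry
bytes fields as base64). The local file path is hypothetical:

    import base64

    # Encode raw audio bytes as base64 for the JSON `content` field.
    with open('audio.raw', 'rb') as f:  # hypothetical 16 kHz LINEAR16 capture
        audio_b64 = base64.b64encode(f.read()).decode('utf-8')

    body = {
        'config': {
            'languageCode': 'en-US',
            'encoding': 'LINEAR16',
            'sampleRateHertz': 16000,
            'maxAlternatives': 1,
        },
        'audio': {'content': audio_b64},
    }
    response = service.speech().recognize(body=body).execute()
    for result in response.get('results', []):
        print(result['alternatives'][0]['transcript'])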