Struct google_cognitive_apis::api::grpc::google::cloud::dialogflow::cx::v3::InputAudioConfig

pub struct InputAudioConfig {
pub audio_encoding: i32,
pub sample_rate_hertz: i32,
pub enable_word_info: bool,
pub phrase_hints: Vec<String>,
pub model: String,
pub model_variant: i32,
pub single_utterance: bool,
}
Instructs the speech recognizer on how to process the audio content.
Fields
audio_encoding: i32
Required. Audio encoding of the audio content to process.
sample_rate_hertz: i32
Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.
enable_word_info: bool
Optional. If true, Dialogflow returns [SpeechWordInfo][google.cloud.dialogflow.cx.v3.SpeechWordInfo] in [StreamingRecognitionResult][google.cloud.dialogflow.cx.v3.StreamingRecognitionResult] with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn’t return any word-level information.
phrase_hints: Vec<String>
Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.
See the Cloud Speech documentation for more details.
model: String
Optional. Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details.
model_variant: i32
Optional. Which variant of the [Speech model][google.cloud.dialogflow.cx.v3.InputAudioConfig.model] to use.
single_utterance: bool
Optional. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio’s voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed.
Note: This setting is relevant only for streaming methods.
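A minimal construction sketch of the fields above, using a local stand-in struct rather than the crate type (the real struct derives Default via prost; the numeric value chosen for audio_encoding is an assumption):

```rust
// Stand-in with the same fields as InputAudioConfig, shown here to
// illustrate proto3 defaults and a typical configuration.
#[derive(Debug, Default, PartialEq)]
struct InputAudioConfig {
    audio_encoding: i32,
    sample_rate_hertz: i32,
    enable_word_info: bool,
    phrase_hints: Vec<String>,
    model: String,
    model_variant: i32,
    single_utterance: bool,
}

fn main() {
    // Proto3 defaults: zero for numeric fields, false for bools,
    // empty for strings and repeated fields.
    let default_cfg = InputAudioConfig::default();
    assert_eq!(default_cfg.sample_rate_hertz, 0);
    assert!(!default_cfg.single_utterance);

    // A plausible configuration for 16 kHz linear PCM input.
    let cfg = InputAudioConfig {
        audio_encoding: 1, // assumed: AudioEncoding::Linear16 as i32
        sample_rate_hertz: 16_000,
        enable_word_info: true,
        phrase_hints: vec!["Dialogflow".to_string()],
        single_utterance: true,
        ..Default::default()
    };
    assert_eq!(cfg.phrase_hints.len(), 1);
    println!("{:?}", cfg);
}
```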
Implementations
impl InputAudioConfig

pub fn audio_encoding(&self) -> AudioEncoding
Returns the enum value of audio_encoding, or the default if the field is set to an invalid enum value.

pub fn set_audio_encoding(&mut self, value: AudioEncoding)
Sets audio_encoding to the provided enum value.

pub fn model_variant(&self) -> SpeechModelVariant
Returns the enum value of model_variant, or the default if the field is set to an invalid enum value.

pub fn set_model_variant(&mut self, value: SpeechModelVariant)
Sets model_variant to the provided enum value.
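The enum accessors store the field as a raw i32 and fall back to the default variant when the stored value is out of range. A self-contained sketch of that pattern, with a stand-in enum in place of the generated AudioEncoding (variant names and values are assumptions):

```rust
// Stand-in for the generated AudioEncoding enum.
#[derive(Debug, Clone, Copy, PartialEq)]
enum AudioEncoding {
    Unspecified = 0,
    Linear16 = 1,
    Flac = 2,
}

impl AudioEncoding {
    fn from_i32(value: i32) -> Option<AudioEncoding> {
        match value {
            0 => Some(AudioEncoding::Unspecified),
            1 => Some(AudioEncoding::Linear16),
            2 => Some(AudioEncoding::Flac),
            _ => None, // unknown wire value
        }
    }
}

struct InputAudioConfig {
    audio_encoding: i32,
}

impl InputAudioConfig {
    // Mirrors the generated getter: returns the default variant when the
    // stored i32 does not name a known variant.
    fn audio_encoding(&self) -> AudioEncoding {
        AudioEncoding::from_i32(self.audio_encoding)
            .unwrap_or(AudioEncoding::Unspecified)
    }

    // Mirrors the generated setter: stores the variant as its i32 value.
    fn set_audio_encoding(&mut self, value: AudioEncoding) {
        self.audio_encoding = value as i32;
    }
}

fn main() {
    let mut cfg = InputAudioConfig { audio_encoding: 999 }; // invalid value
    assert_eq!(cfg.audio_encoding(), AudioEncoding::Unspecified);
    cfg.set_audio_encoding(AudioEncoding::Linear16);
    assert_eq!(cfg.audio_encoding(), AudioEncoding::Linear16);
}
```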
Trait Implementations
impl Clone for InputAudioConfig

fn clone(&self) -> InputAudioConfig
Returns a copy of the value.

fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for InputAudioConfig

impl Default for InputAudioConfig
impl Message for InputAudioConfig

fn encoded_len(&self) -> usize
Returns the encoded length of the message without a length delimiter.

fn encode<B>(&self, buf: &mut B) -> Result<(), EncodeError> where B: BufMut,
Encodes the message to a buffer.

fn encode_to_vec(&self) -> Vec<u8>
Encodes the message to a newly allocated buffer.

fn encode_length_delimited<B>(&self, buf: &mut B) -> Result<(), EncodeError> where B: BufMut,
Encodes the message with a length-delimiter to a buffer.

fn encode_length_delimited_to_vec(&self) -> Vec<u8>
Encodes the message with a length-delimiter to a newly allocated buffer.

fn decode<B>(buf: B) -> Result<Self, DecodeError> where B: Buf, Self: Default,
Decodes an instance of the message from a buffer.

fn decode_length_delimited<B>(buf: B) -> Result<Self, DecodeError> where B: Buf, Self: Default,
Decodes a length-delimited instance of the message from the buffer.

fn merge<B>(&mut self, buf: B) -> Result<(), DecodeError> where B: Buf,
Decodes an instance of the message from a buffer, and merges it into self.

fn merge_length_delimited<B>(&mut self, buf: B) -> Result<(), DecodeError> where B: Buf,
Decodes a length-delimited instance of the message from the buffer, and merges it into self.
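The length-delimited variants prefix the encoded message with its byte length as a varint, so multiple messages can be framed on one stream. A hand-rolled teaching sketch of that framing (not the prost implementation; the payload bytes are arbitrary):

```rust
// Encode an unsigned integer as a protobuf-style varint: 7 bits per byte,
// high bit set on every byte except the last.
fn encode_varint(mut n: u64, buf: &mut Vec<u8>) {
    loop {
        let byte = (n & 0x7f) as u8;
        n >>= 7;
        if n == 0 {
            buf.push(byte);
            break;
        }
        buf.push(byte | 0x80);
    }
}

// Decode a varint from the front of a slice; returns (value, bytes consumed).
fn decode_varint(buf: &[u8]) -> (u64, usize) {
    let (mut value, mut shift, mut used) = (0u64, 0u32, 0usize);
    for &b in buf {
        value |= u64::from(b & 0x7f) << shift;
        used += 1;
        if b & 0x80 == 0 {
            break;
        }
        shift += 7;
    }
    (value, used)
}

fn main() {
    // Pretend these bytes came from encode_to_vec() for some message.
    let payload: Vec<u8> = vec![0x08, 0x01, 0x10, 0x7d];

    // encode_length_delimited: varint length prefix, then the payload.
    let mut framed = Vec::new();
    encode_varint(payload.len() as u64, &mut framed);
    framed.extend_from_slice(&payload);

    // decode_length_delimited: read the prefix, then exactly that many bytes.
    let (len, used) = decode_varint(&framed);
    let decoded = &framed[used..used + len as usize];
    assert_eq!(decoded, payload.as_slice());
}
```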
impl PartialEq<InputAudioConfig> for InputAudioConfig

fn eq(&self, other: &InputAudioConfig) -> bool
This method tests for self and other values to be equal, and is used by ==.
impl StructuralPartialEq for InputAudioConfig
Auto Trait Implementations
impl RefUnwindSafe for InputAudioConfig
impl Send for InputAudioConfig
impl Sync for InputAudioConfig
impl Unpin for InputAudioConfig
impl UnwindSafe for InputAudioConfig
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current Span, returning an Instrumented wrapper.
impl<T> IntoRequest<T> for T

fn into_request(self) -> Request<T>
Wrap the input message T in a tonic::Request