pub enum PropertyId {
SpeechServiceConnectionKey,
SpeechServiceConnectionEndpoint,
SpeechServiceConnectionRegion,
SpeechServiceAuthorizationToken,
SpeechServiceAuthorizationType,
SpeechServiceConnectionEndpointId,
SpeechServiceConnectionHost,
SpeechServiceConnectionProxyHostName,
SpeechServiceConnectionProxyPort,
SpeechServiceConnectionProxyUserName,
SpeechServiceConnectionProxyPassword,
SpeechServiceConnectionURL,
SpeechServiceConnectionTranslationToLanguages,
SpeechServiceConnectionTranslationVoice,
SpeechServiceConnectionTranslationFeatures,
SpeechServiceConnectionIntentRegion,
SpeechServiceConnectionRecoMode,
SpeechServiceConnectionRecoLanguage,
SpeechSessionId,
SpeechServiceConnectionUserDefinedQueryParameters,
SpeechServiceConnectionRecoModelName,
SpeechServiceConnectionRecoModelKey,
SpeechServiceConnectionSynthLanguage,
SpeechServiceConnectionSynthVoice,
SpeechServiceConnectionSynthOutputFormat,
SpeechServiceConnectionSynthEnableCompressedAudioTransmission,
SpeechServiceConnectionSynthOfflineVoice,
SpeechServiceConnectionSynthModelKey,
SpeechServiceConnectionInitialSilenceTimeoutMs,
SpeechServiceConnectionEndSilenceTimeoutMs,
SpeechServiceConnectionEnableAudioLogging,
SpeechServiceResponseRequestDetailedResultTrueFalse,
SpeechServiceResponseRequestProfanityFilterTrueFalse,
SpeechServiceResponseProfanityOption,
SpeechServiceResponsePostProcessingOption,
SpeechServiceResponseRequestWordLevelTimestamps,
SpeechServiceResponseStablePartialResultThreshold,
SpeechServiceResponseOutputFormatOption,
SpeechServiceResponseTranslationRequestStablePartialResult,
SpeechServiceResponseJsonResult,
SpeechServiceResponseJsonErrorDetails,
SpeechServiceResponseRecognitionLatencyMs,
SpeechServiceResponseSynthesisFirstByteLatencyMs,
SpeechServiceResponseSynthesisFinishLatencyMs,
SpeechServiceResponseSynthesisUnderrunTimeMs,
SpeechServiceResponseSynthesisBackend,
CancellationDetailsReason,
CancellationDetailsReasonText,
CancellationDetailsReasonDetailedText,
LanguageUnderstandingServiceResponseJsonResult,
AudioConfigDeviceNameForCapture,
AudioConfigNumberOfChannelsForCapture,
AudioConfigSampleRateForCapture,
AudioConfigBitsPerSampleForCapture,
AudioConfigAudioSource,
AudioConfigDeviceNameForRender,
AudioConfigPlaybackBufferLengthInMs,
SpeechLogFilename,
ConversationApplicationID,
ConversationDialogType,
ConversationInitialSilenceTimeout,
ConversationFromID,
ConversationConversationID,
ConversationCustomVoiceDeploymentIDs,
ConversationSpeechActivityTemplate,
DataBufferTimeStamp,
DataBufferUserID,
}
PropertyId defines speech property ids.
Variants
SpeechServiceConnectionKey
SpeechServiceConnectionKey is the Cognitive Services Speech Service subscription key. If you are using an intent recognizer, you need to specify the LUIS endpoint key for your particular LUIS app. Under normal circumstances, you shouldn’t have to use this property directly. Instead, use NewSpeechConfigFromSubscription.
SpeechServiceConnectionEndpoint
SpeechServiceConnectionEndpoint is the Cognitive Services Speech Service endpoint (url). Under normal circumstances, you shouldn’t have to use this property directly. Instead, use NewSpeechConfigFromEndpoint. NOTE: This endpoint is not the same as the endpoint used to obtain an access token.
SpeechServiceConnectionRegion
SpeechServiceConnectionRegion is the Cognitive Services Speech Service region. Under normal circumstances, you shouldn’t have to use this property directly. Instead, use NewSpeechConfigFromSubscription, NewSpeechConfigFromEndpoint, NewSpeechConfigFromHost, NewSpeechConfigFromAuthorizationToken.
SpeechServiceAuthorizationToken
SpeechServiceAuthorizationToken is the Cognitive Services Speech Service authorization token (aka access token). Under normal circumstances, you shouldn’t have to use this property directly. Instead, use NewSpeechConfigFromAuthorizationToken, Recognizer.SetAuthorizationToken
SpeechServiceAuthorizationType
SpeechServiceAuthorizationType is the Cognitive Services Speech Service authorization type. Currently unused.
SpeechServiceConnectionEndpointId
SpeechServiceConnectionEndpointID is the Cognitive Services Custom Speech Service endpoint id. Under normal circumstances, you shouldn’t have to use this property directly. Instead use SpeechConfig.SetEndpointId. NOTE: The endpoint id is available in the Custom Speech Portal, listed under Endpoint Details.
SpeechServiceConnectionHost
SpeechServiceConnectionHost is the Cognitive Services Speech Service host (url). Under normal circumstances, you shouldn’t have to use this property directly. Instead, use NewSpeechConfigFromHost.
SpeechServiceConnectionProxyHostName
SpeechServiceConnectionProxyHostName is the host name of the proxy server used to connect to the Cognitive Services Speech Service. Under normal circumstances, you shouldn’t have to use this property directly. Instead, use SpeechConfig.SetProxy.
SpeechServiceConnectionProxyPort
SpeechServiceConnectionProxyPort is the port of the proxy server used to connect to the Cognitive Services Speech Service. Under normal circumstances, you shouldn’t have to use this property directly. Instead, use SpeechConfig.SetProxy.
SpeechServiceConnectionProxyUserName
SpeechServiceConnectionProxyUserName is the user name of the proxy server used to connect to the Cognitive Services Speech Service. Under normal circumstances, you shouldn’t have to use this property directly. Instead, use SpeechConfig.SetProxy.
SpeechServiceConnectionProxyPassword
SpeechServiceConnectionProxyPassword is the password of the proxy server used to connect to the Cognitive Services Speech Service. Under normal circumstances, you shouldn’t have to use this property directly. Instead, use SpeechConfig.SetProxy.
SpeechServiceConnectionURL
SpeechServiceConnectionURL is the URL string built from the speech configuration. This property is intended to be read-only. The SDK uses it internally.
SpeechServiceConnectionTranslationToLanguages
SpeechServiceConnectionTranslationToLanguages is the list of comma separated languages used as target translation languages. Under normal circumstances, you shouldn’t have to use this property directly. Instead use SpeechTranslationConfig.AddTargetLanguage and SpeechTranslationConfig.GetTargetLanguages.
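If you do set this property directly rather than via AddTargetLanguage, its value is a plain comma-separated list of language codes. A minimal, SDK-independent sketch of building and splitting such a value (the helper names here are illustrative, not SDK functions):

```rust
// Build and parse the comma-separated target-language list carried by
// SpeechServiceConnectionTranslationToLanguages. Illustration only;
// the SDK normally manages this value via AddTargetLanguage.
fn join_target_languages(langs: &[&str]) -> String {
    langs.join(",")
}

fn split_target_languages(value: &str) -> Vec<String> {
    value
        .split(',')
        .map(|s| s.trim().to_string())
        .filter(|s| !s.is_empty())
        .collect()
}

fn main() {
    let value = join_target_languages(&["de-DE", "fr-FR", "es-ES"]);
    assert_eq!(value, "de-DE,fr-FR,es-ES");
    assert_eq!(split_target_languages(&value), vec!["de-DE", "fr-FR", "es-ES"]);
}
```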
SpeechServiceConnectionTranslationVoice
SpeechServiceConnectionTranslationVoice is the name of the Cognitive Service Text to Speech Service voice. Under normal circumstances, you shouldn’t have to use this property directly. Instead use SpeechTranslationConfig.SetVoiceName. NOTE: Valid voice names can be found at https://aka.ms/csspeech/voicenames.
SpeechServiceConnectionTranslationFeatures
SpeechServiceConnectionTranslationFeatures specifies the translation features. For internal use.
SpeechServiceConnectionIntentRegion
SpeechServiceConnectionIntentRegion is the Language Understanding Service region. Under normal circumstances, you shouldn’t have to use this property directly. Instead use LanguageUnderstandingModel.
SpeechServiceConnectionRecoMode
SpeechServiceConnectionRecoMode is the Cognitive Services Speech Service recognition mode. Can be “INTERACTIVE”, “CONVERSATION”, or “DICTATION”. This property is intended to be read-only. The SDK uses it internally.
SpeechServiceConnectionRecoLanguage
SpeechServiceConnectionRecoLanguage is the spoken language to be recognized (in BCP-47 format). Under normal circumstances, you shouldn’t have to use this property directly. Instead, use SpeechConfig.SetSpeechRecognitionLanguage.
SpeechSessionId
SpeechSessionID is the session id. This id is a universally unique identifier (aka UUID) representing a specific binding of an audio input stream and the underlying speech recognition instance to which it is bound. Under normal circumstances, you shouldn’t have to use this property directly.
SpeechServiceConnectionUserDefinedQueryParameters
SpeechServiceConnectionUserDefinedQueryParameters are the query parameters provided by users. They will be passed to the service as URL query parameters.
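Since the value is forwarded as ordinary URL query parameters, it should already be in key=value&amp;key=value form. A crude, SDK-independent sketch of assembling such a string (a production implementation would need full percent-encoding per RFC 3986; only spaces are escaped here, and the helper name is illustrative):

```rust
// Assemble a query-parameter string of the kind passed through
// SpeechServiceConnectionUserDefinedQueryParameters. Minimal sketch:
// only spaces are escaped; real code needs full percent-encoding.
fn build_query(params: &[(&str, &str)]) -> String {
    params
        .iter()
        .map(|(k, v)| format!("{}={}", k, v.replace(' ', "%20")))
        .collect::<Vec<_>>()
        .join("&")
}

fn main() {
    let q = build_query(&[("format", "detailed"), ("profanity", "masked")]);
    assert_eq!(q, "format=detailed&profanity=masked");
}
```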
SpeechServiceConnectionRecoModelName
The name of the model to be used for speech recognition. Under normal circumstances, you shouldn’t use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used. Added in version 1.19.0
SpeechServiceConnectionRecoModelKey
The decryption key of the model to be used for speech recognition. Under normal circumstances, you shouldn’t use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used. Added in version 1.19.0
SpeechServiceConnectionSynthLanguage
SpeechServiceConnectionSynthLanguage is the spoken language to be synthesized (e.g. en-US).
SpeechServiceConnectionSynthVoice
SpeechServiceConnectionSynthVoice is the name of the TTS voice to be used for speech synthesis.
SpeechServiceConnectionSynthOutputFormat
SpeechServiceConnectionSynthOutputFormat is the string to specify TTS output audio format.
SpeechServiceConnectionSynthEnableCompressedAudioTransmission
SpeechServiceConnectionSynthEnableCompressedAudioTransmission indicates whether to use a compressed audio format for speech synthesis audio transmission. This property takes effect only when SpeechServiceConnectionSynthOutputFormat is set to a pcm format. If this property is not set and GStreamer is available, the SDK uses a compressed format for synthesized audio transmission and decodes it. You can set this property to “false” to use raw pcm format for transmission on the wire.
SpeechServiceConnectionSynthOfflineVoice
The name of the offline TTS voice to be used for speech synthesis. Under normal circumstances, you shouldn’t use this property directly. Added in version 1.19.0
SpeechServiceConnectionSynthModelKey
The decryption key of the voice to be used for speech synthesis. Under normal circumstances, you shouldn’t use this property directly. Added in version 1.19.0
SpeechServiceConnectionInitialSilenceTimeoutMs
SpeechServiceConnectionInitialSilenceTimeoutMs is the initial silence timeout value (in milliseconds) used by the service.
SpeechServiceConnectionEndSilenceTimeoutMs
SpeechServiceConnectionEndSilenceTimeoutMs is the end silence timeout value (in milliseconds) used by the service.
SpeechServiceConnectionEnableAudioLogging
SpeechServiceConnectionEnableAudioLogging is a boolean value specifying whether audio logging is enabled in the service or not.
SpeechServiceResponseRequestDetailedResultTrueFalse
SpeechServiceResponseRequestDetailedResultTrueFalse is the requested Cognitive Services Speech Service response output format (simple or detailed). Under normal circumstances, you shouldn’t have to use this property directly. Instead use SpeechConfig.SetOutputFormat.
SpeechServiceResponseRequestProfanityFilterTrueFalse
SpeechServiceResponseRequestProfanityFilterTrueFalse is the requested Cognitive Services Speech Service response output profanity level. Currently unused.
SpeechServiceResponseProfanityOption
SpeechServiceResponseProfanityOption is the requested Cognitive Services Speech Service response output profanity setting. Allowed values are “masked”, “removed”, and “raw”.
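One common way to keep such a closed set of string values well-typed in application code is a small enum-to-wire-string mapping. The enum below is our own illustration, not an SDK type:

```rust
// Map a profanity setting to the wire value accepted by
// SpeechServiceResponseProfanityOption ("masked", "removed", "raw").
#[derive(Clone, Copy)]
enum Profanity {
    Masked,
    Removed,
    Raw,
}

fn profanity_value(p: Profanity) -> &'static str {
    match p {
        Profanity::Masked => "masked",
        Profanity::Removed => "removed",
        Profanity::Raw => "raw",
    }
}

fn main() {
    assert_eq!(profanity_value(Profanity::Masked), "masked");
    assert_eq!(profanity_value(Profanity::Raw), "raw");
}
```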
SpeechServiceResponsePostProcessingOption
SpeechServiceResponsePostProcessingOption is a string value specifying which post-processing option should be used by the service. The only allowed value is “TrueText”.
SpeechServiceResponseRequestWordLevelTimestamps
SpeechServiceResponseRequestWordLevelTimestamps is a boolean value specifying whether to include word-level timestamps in the response result.
SpeechServiceResponseStablePartialResultThreshold
SpeechServiceResponseStablePartialResultThreshold is the number of times a word has to be in partial results to be returned.
SpeechServiceResponseOutputFormatOption
SpeechServiceResponseOutputFormatOption is a string value specifying the output format option in the response result. Internal use only.
SpeechServiceResponseTranslationRequestStablePartialResult
SpeechServiceResponseTranslationRequestStablePartialResult is a boolean value requesting stabilized translation partial results by omitting words at the end.
SpeechServiceResponseJsonResult
SpeechServiceResponseJSONResult is the Cognitive Services Speech Service response output (in JSON format). This property is available on recognition result objects only.
SpeechServiceResponseJsonErrorDetails
SpeechServiceResponseJSONErrorDetails is the Cognitive Services Speech Service error details (in JSON format). Under normal circumstances, you shouldn’t have to use this property directly. Instead, use CancellationDetails.ErrorDetails.
SpeechServiceResponseRecognitionLatencyMs
SpeechServiceResponseRecognitionLatencyMs is the recognition latency in milliseconds. Read-only, available on final speech/translation/intent results. This measures the latency between when an audio input is received by the SDK, and the moment the final result is received from the service. The SDK computes the time difference between the last audio fragment from the audio input that is contributing to the final result, and the time the final result is received from the speech service.
SpeechServiceResponseSynthesisFirstByteLatencyMs
SpeechServiceResponseSynthesisFirstByteLatencyMs is the speech synthesis first byte latency in milliseconds. Read-only, available on final speech synthesis results. This measures the latency between when the synthesis is started to be processed, and the moment the first byte audio is available. Added in version 1.17.0.
SpeechServiceResponseSynthesisFinishLatencyMs
SpeechServiceResponseSynthesisFinishLatencyMs is the speech synthesis all bytes latency in milliseconds. Read-only, available on final speech synthesis results. This measures the latency between when the synthesis is started to be processed, and the moment the whole audio is synthesized. Added in version 1.17.0.
SpeechServiceResponseSynthesisUnderrunTimeMs
SpeechServiceResponseSynthesisUnderrunTimeMs is the underrun time for speech synthesis in milliseconds. Read-only, available on results in SynthesisCompleted events. This measures the total underrun time from the point the playback buffer (of length AudioConfigPlaybackBufferLengthInMs) is filled until synthesis completes. Added in version 1.17.0.
SpeechServiceResponseSynthesisBackend
SpeechServiceResponseSynthesisBackend indicates which backend the synthesis was finished by. Read-only, available on speech synthesis results, except for the result in the SynthesisStarted event. Added in version 1.17.0.
CancellationDetailsReason
CancellationDetailsReason is the cancellation reason. Currently unused.
CancellationDetailsReasonText
CancellationDetailsReasonText is the cancellation text. Currently unused.
CancellationDetailsReasonDetailedText
CancellationDetailsReasonDetailedText is the cancellation detailed text. Currently unused.
LanguageUnderstandingServiceResponseJsonResult
LanguageUnderstandingServiceResponseJSONResult is the Language Understanding Service response output (in JSON format). Available via IntentRecognitionResult.Properties.
AudioConfigDeviceNameForCapture
AudioConfigDeviceNameForCapture is the device name for audio capture. Under normal circumstances, you shouldn’t have to use this property directly. Instead, use AudioConfig.FromMicrophoneInput.
AudioConfigNumberOfChannelsForCapture
AudioConfigNumberOfChannelsForCapture is the number of channels for audio capture. Internal use only.
AudioConfigSampleRateForCapture
AudioConfigSampleRateForCapture is the sample rate (in Hz) for audio capture. Internal use only.
AudioConfigBitsPerSampleForCapture
AudioConfigBitsPerSampleForCapture is the number of bits of each sample for audio capture. Internal use only.
AudioConfigAudioSource
AudioConfigAudioSource is the audio source. Allowed values are “Microphones”, “File”, and “Stream”.
AudioConfigDeviceNameForRender
AudioConfigDeviceNameForRender is the device name for audio render. Under normal circumstances, you shouldn’t have to use this property directly. Instead, use NewAudioConfigFromDefaultSpeakerOutput. Added in version 1.17.0
AudioConfigPlaybackBufferLengthInMs
AudioConfigPlaybackBufferLengthInMs indicates the playback buffer length in milliseconds, default is 50 milliseconds.
SpeechLogFilename
SpeechLogFilename is the name of the file to write logs to.
ConversationApplicationID
ConversationApplicationID is the identifier used to connect to the backend service.
ConversationDialogType
ConversationDialogType is the type of dialog backend to connect to.
ConversationInitialSilenceTimeout
ConversationInitialSilenceTimeout is the silence timeout for listening.
ConversationFromID
ConversationFromID is the FromId to be used on speech recognition activities.
ConversationConversationID
ConversationConversationID is the ConversationId for the session.
ConversationCustomVoiceDeploymentIDs
ConversationCustomVoiceDeploymentIDs is a comma separated list of custom voice deployment ids.
ConversationSpeechActivityTemplate
ConversationSpeechActivityTemplate is used to stamp properties from the template on the activity generated by the service for speech.
DataBufferTimeStamp
DataBufferTimeStamp is the time stamp associated with the data buffer written by the client when using Pull/Push audio input streams. The time stamp is a 64-bit value with a resolution of 90 kHz. It is the same as the presentation timestamp in an MPEG transport stream. See https://en.wikipedia.org/wiki/Presentation_timestamp
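At 90 kHz resolution there are exactly 90 ticks per millisecond, so converting between wall-clock milliseconds and DataBufferTimeStamp ticks is simple integer arithmetic. A small sketch (helper names are illustrative, not SDK functions):

```rust
// Convert between milliseconds and the 90 kHz ticks used by
// DataBufferTimeStamp (same resolution as an MPEG-TS presentation
// timestamp: 90 ticks per millisecond).
const TICKS_PER_MS: u64 = 90;

fn ms_to_ticks(ms: u64) -> u64 {
    ms * TICKS_PER_MS
}

fn ticks_to_ms(ticks: u64) -> u64 {
    ticks / TICKS_PER_MS
}

fn main() {
    assert_eq!(ms_to_ticks(1_000), 90_000); // one second = 90 000 ticks
    assert_eq!(ticks_to_ms(90_000), 1_000);
}
```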
DataBufferUserID
DataBufferUserID is the user id associated with the data buffer written by the client when using Pull/Push audio input streams.
Implementations
impl PropertyId
Auto Trait Implementations
impl RefUnwindSafe for PropertyId
impl Send for PropertyId
impl Sync for PropertyId
impl Unpin for PropertyId
impl UnwindSafe for PropertyId
Blanket Implementations
impl<T> BorrowMut<T> for T where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.