Modules

Nested message and enum types in Agent.

Generated client implementations.

Nested message and enum types in BatchUpdateEntityTypesRequest.

Nested message and enum types in BatchUpdateIntentsRequest.

Generated client implementations.

Nested message and enum types in EntityType.

Generated client implementations.

Nested message and enum types in Environment.

Generated client implementations.

Nested message and enum types in ExportAgentResponse.

Nested message and enum types in ImportAgentRequest.

Nested message and enum types in Intent.

Generated client implementations.

Nested message and enum types in KnowledgeAnswers.

Nested message and enum types in QueryInput.

Nested message and enum types in RestoreAgentRequest.

Nested message and enum types in SessionEntityType.

Generated client implementations.

Generated client implementations.

Nested message and enum types in StreamingRecognitionResult.

Nested message and enum types in ValidationError.

Structs

A Dialogflow agent is a virtual agent that handles conversations with your end-users. It is a natural language understanding module that understands the nuances of human language. Dialogflow translates end-user text or audio during a conversation to structured data that your apps and services can understand. You design and build a Dialogflow agent to handle the types of conversations required for your system.

The request message for [EntityTypes.BatchCreateEntities][google.cloud.dialogflow.v2beta1.EntityTypes.BatchCreateEntities].

The request message for [EntityTypes.BatchDeleteEntities][google.cloud.dialogflow.v2beta1.EntityTypes.BatchDeleteEntities].

The request message for [EntityTypes.BatchDeleteEntityTypes][google.cloud.dialogflow.v2beta1.EntityTypes.BatchDeleteEntityTypes].

The request message for [Intents.BatchDeleteIntents][google.cloud.dialogflow.v2beta1.Intents.BatchDeleteIntents].

The request message for [EntityTypes.BatchUpdateEntities][google.cloud.dialogflow.v2beta1.EntityTypes.BatchUpdateEntities].

The request message for [EntityTypes.BatchUpdateEntityTypes][google.cloud.dialogflow.v2beta1.EntityTypes.BatchUpdateEntityTypes].

The response message for [EntityTypes.BatchUpdateEntityTypes][google.cloud.dialogflow.v2beta1.EntityTypes.BatchUpdateEntityTypes].

The request message for [Intents.BatchUpdateIntents][google.cloud.dialogflow.v2beta1.Intents.BatchUpdateIntents].

The response message for [Intents.BatchUpdateIntents][google.cloud.dialogflow.v2beta1.Intents.BatchUpdateIntents].

Dialogflow contexts are similar to natural language context. If a person says to you “they are orange”, you need context in order to understand what “they” is referring to. Similarly, for Dialogflow to handle an end-user expression like that, it needs to be provided with context in order to correctly match an intent.
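
As a rough sketch, a context for the current session might be built with the prost-generated types along these lines; the module path, project, session, and context names are placeholders, and the exact field set depends on your generated crate.

```rust
// Path assumes the v2beta1 protos are compiled (e.g. with tonic-build)
// into a `google::cloud::dialogflow::v2beta1` module tree.
use crate::google::cloud::dialogflow::v2beta1::{Context, CreateContextRequest};

/// Build a request that creates a context lasting five conversational turns.
fn build_followup_context(session: &str) -> CreateContextRequest {
    CreateContextRequest {
        parent: session.to_string(),
        context: Some(Context {
            // `projects/<project>/agent/sessions/<session>/contexts/<context id>`
            name: format!("{session}/contexts/order-followup"),
            lifespan_count: 5,
            ..Default::default()
        }),
    }
}
```

The resulting request would then be sent with the generated ContextsClient's create_context call.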

The request message for [Contexts.CreateContext][google.cloud.dialogflow.v2beta1.Contexts.CreateContext].

The request message for [EntityTypes.CreateEntityType][google.cloud.dialogflow.v2beta1.EntityTypes.CreateEntityType].

The request message for [Intents.CreateIntent][google.cloud.dialogflow.v2beta1.Intents.CreateIntent].

The request message for [SessionEntityTypes.CreateSessionEntityType][google.cloud.dialogflow.v2beta1.SessionEntityTypes.CreateSessionEntityType].

The request message for [Agents.DeleteAgent][google.cloud.dialogflow.v2beta1.Agents.DeleteAgent].

The request message for [Contexts.DeleteAllContexts][google.cloud.dialogflow.v2beta1.Contexts.DeleteAllContexts].

The request message for [Contexts.DeleteContext][google.cloud.dialogflow.v2beta1.Contexts.DeleteContext].

The request message for [EntityTypes.DeleteEntityType][google.cloud.dialogflow.v2beta1.EntityTypes.DeleteEntityType].

The request message for [Intents.DeleteIntent][google.cloud.dialogflow.v2beta1.Intents.DeleteIntent].

The request message for [SessionEntityTypes.DeleteSessionEntityType][google.cloud.dialogflow.v2beta1.SessionEntityTypes.DeleteSessionEntityType].

The request to detect the user’s intent.

The message returned from the DetectIntent method.
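
As an illustration, a text-only request could be assembled roughly as follows; the module path and session name are assumptions about how the protos are compiled, and the finished request would be passed to the generated SessionsClient's detect_intent call.

```rust
use crate::google::cloud::dialogflow::v2beta1::{
    query_input, DetectIntentRequest, QueryInput, TextInput,
};

/// Build a DetectIntent request for a plain text utterance.
fn build_text_request(session: &str, utterance: &str) -> DetectIntentRequest {
    DetectIntentRequest {
        // `projects/<project>/agent/sessions/<session id>`
        session: session.to_string(),
        query_input: Some(QueryInput {
            input: Some(query_input::Input::Text(TextInput {
                text: utterance.to_string(),
                language_code: "en-US".to_string(),
            })),
        }),
        ..Default::default()
    }
}
```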

Each intent parameter has a type, called the entity type, which dictates exactly how data from an end-user expression is extracted.
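
A small, hedged sketch of a map-style entity type with two entries follows; the display name and values are made up, and note that enum fields are carried as `i32` in the prost-generated structs.

```rust
use crate::google::cloud::dialogflow::v2beta1::{entity_type, EntityType};

/// A custom "size" entity with one synonym list per canonical value.
fn build_size_entity_type() -> EntityType {
    EntityType {
        display_name: "size".to_string(),
        kind: entity_type::Kind::Map as i32,
        entities: vec![
            entity_type::Entity {
                value: "small".to_string(),
                synonyms: vec!["small".into(), "little".into(), "petite".into()],
            },
            entity_type::Entity {
                value: "large".to_string(),
                synonyms: vec!["large".into(), "big".into()],
            },
        ],
        ..Default::default()
    }
}
```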

This message is a wrapper around a collection of entity types.

You can create multiple versions of your agent and publish them to separate environments.

Events allow for matching intents by event name instead of the natural language input. For instance, input <event: { name: "welcome_event", parameters: { name: "Sam" } }> can trigger a personalized welcome response. The parameter name may be used by the agent in the response: "Hello #welcome_event.name! What can I do for you today?".
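
The welcome example above can be expressed with the generated types roughly like this; the module path is an assumption, and the `parameters` struct is built by hand with `prost-types` purely for illustration.

```rust
use std::collections::BTreeMap;

use crate::google::cloud::dialogflow::v2beta1::{query_input, EventInput, QueryInput};

/// Query input that triggers `welcome_event` with a `name` parameter.
fn welcome_event_input(user_name: &str) -> QueryInput {
    let mut fields = BTreeMap::new();
    fields.insert(
        "name".to_string(),
        prost_types::Value {
            kind: Some(prost_types::value::Kind::StringValue(user_name.to_string())),
        },
    );
    QueryInput {
        input: Some(query_input::Input::Event(EventInput {
            name: "welcome_event".to_string(),
            parameters: Some(prost_types::Struct { fields }),
            language_code: "en-US".to_string(),
        })),
    }
}
```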

The request message for [Agents.ExportAgent][google.cloud.dialogflow.v2beta1.Agents.ExportAgent].

The response message for [Agents.ExportAgent][google.cloud.dialogflow.v2beta1.Agents.ExportAgent].

Google Cloud Storage location for a single input.

Google Cloud Storage locations for the inputs.

The request message for [Agents.GetAgent][google.cloud.dialogflow.v2beta1.Agents.GetAgent].

The request message for [Contexts.GetContext][google.cloud.dialogflow.v2beta1.Contexts.GetContext].

The request message for [EntityTypes.GetEntityType][google.cloud.dialogflow.v2beta1.EntityTypes.GetEntityType].

The request message for [Intents.GetIntent][google.cloud.dialogflow.v2beta1.Intents.GetIntent].

The request message for [SessionEntityTypes.GetSessionEntityType][google.cloud.dialogflow.v2beta1.SessionEntityTypes.GetSessionEntityType].

The request message for [Agents.GetValidationResult][google.cloud.dialogflow.v2beta1.Agents.GetValidationResult].

The request message for [Agents.ImportAgent][google.cloud.dialogflow.v2beta1.Agents.ImportAgent].

Instructs the speech recognizer on how to process the audio content.
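
For example, a configuration for 16 kHz linear PCM audio might look roughly like the sketch below; field names follow the v2beta1 proto, but verify them against the generated struct, since enum values are carried as `i32`.

```rust
use crate::google::cloud::dialogflow::v2beta1::{AudioEncoding, InputAudioConfig};

/// Recognizer settings for 16 kHz LINEAR16 audio, stopping after one utterance.
fn linear16_audio_config() -> InputAudioConfig {
    InputAudioConfig {
        audio_encoding: AudioEncoding::Linear16 as i32,
        sample_rate_hertz: 16_000,
        language_code: "en-US".to_string(),
        single_utterance: true,
        ..Default::default()
    }
}
```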

An intent categorizes an end-user’s intention for one conversation turn. For each agent, you define many intents, where your combined intents can handle a complete conversation. When an end-user writes or says something, referred to as an end-user expression or end-user input, Dialogflow matches the end-user input to the best intent in your agent. Matching an intent is also known as intent classification.
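
A minimal intent with a single unannotated training phrase could be sketched as follows; the display name and phrase are invented, and a production intent would usually carry many phrases, parameters, and response messages.

```rust
use crate::google::cloud::dialogflow::v2beta1::{intent, Intent};

/// An intent matched by one plain-text training phrase.
fn build_order_intent() -> Intent {
    Intent {
        display_name: "order.pizza".to_string(),
        training_phrases: vec![intent::TrainingPhrase {
            // A phrase is a list of parts; one plain part is enough here.
            parts: vec![intent::training_phrase::Part {
                text: "I want to order a pizza".to_string(),
                ..Default::default()
            }],
            ..Default::default()
        }],
        ..Default::default()
    }
}
```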

This message is a wrapper around a collection of intents.

Represents the result of querying a Knowledge base.

The request message for [Contexts.ListContexts][google.cloud.dialogflow.v2beta1.Contexts.ListContexts].

The response message for [Contexts.ListContexts][google.cloud.dialogflow.v2beta1.Contexts.ListContexts].

The request message for [EntityTypes.ListEntityTypes][google.cloud.dialogflow.v2beta1.EntityTypes.ListEntityTypes].

The response message for [EntityTypes.ListEntityTypes][google.cloud.dialogflow.v2beta1.EntityTypes.ListEntityTypes].

The request message for [Environments.ListEnvironments][google.cloud.dialogflow.v2beta1.Environments.ListEnvironments].

The response message for [Environments.ListEnvironments][google.cloud.dialogflow.v2beta1.Environments.ListEnvironments].

The request message for [Intents.ListIntents][google.cloud.dialogflow.v2beta1.Intents.ListIntents].

The response message for [Intents.ListIntents][google.cloud.dialogflow.v2beta1.Intents.ListIntents].

The request message for [SessionEntityTypes.ListSessionEntityTypes][google.cloud.dialogflow.v2beta1.SessionEntityTypes.ListSessionEntityTypes].

The response message for [SessionEntityTypes.ListSessionEntityTypes][google.cloud.dialogflow.v2beta1.SessionEntityTypes.ListSessionEntityTypes].

Instructs the speech synthesizer how to generate the output audio content. If this audio config is supplied in a request, it overrides all existing text-to-speech settings applied to the agent.

Represents the query input. It can contain one of the following: an audio config that instructs the speech recognizer how to process the speech audio, a conversational query in the form of text, or an event that specifies which intent to trigger.

Represents the parameters of the conversational query.

Represents the result of a conversational query or event processing.

The request message for [Agents.RestoreAgent][google.cloud.dialogflow.v2beta1.Agents.RestoreAgent].

The request message for [Agents.SearchAgents][google.cloud.dialogflow.v2beta1.Agents.SearchAgents].

The response message for [Agents.SearchAgents][google.cloud.dialogflow.v2beta1.Agents.SearchAgents].

The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text.

Configures the types of sentiment analysis to perform.

The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user’s attitude as positive, negative, or neutral. For [Participants.DetectIntent][], it needs to be configured in [DetectIntentRequest.query_params][google.cloud.dialogflow.v2beta1.DetectIntentRequest.query_params]. For [Participants.StreamingDetectIntent][], it needs to be configured in [StreamingDetectIntentRequest.query_params][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.query_params]. For [Participants.AnalyzeContent][google.cloud.dialogflow.v2beta1.Participants.AnalyzeContent] and [Participants.StreamingAnalyzeContent][google.cloud.dialogflow.v2beta1.Participants.StreamingAnalyzeContent], it needs to be configured in [ConversationProfile.human_agent_assistant_config][google.cloud.dialogflow.v2beta1.ConversationProfile.human_agent_assistant_config].

A session represents a conversation between a Dialogflow agent and an end-user. You can create special entities, called session entities, during a session. Session entities can extend or replace custom entity types and only exist during the session that they were created for. All session data, including session entities, is stored by Dialogflow for 20 minutes.
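
As a hedged sketch, a session entity that overrides the custom "size" entity for one session might be built like this; the resource-name pattern and field names follow the v2beta1 proto, and the module path is assumed.

```rust
use crate::google::cloud::dialogflow::v2beta1::{
    entity_type, session_entity_type, SessionEntityType,
};

/// Override the "size" entity's values for the lifetime of one session.
fn session_size_override(session: &str) -> SessionEntityType {
    SessionEntityType {
        // `projects/<project>/agent/sessions/<session>/entityTypes/<display name>`
        name: format!("{session}/entityTypes/size"),
        entity_override_mode: session_entity_type::EntityOverrideMode::Override as i32,
        entities: vec![entity_type::Entity {
            value: "extra large".to_string(),
            synonyms: vec!["extra large".into(), "xl".into()],
        }],
        ..Default::default()
    }
}
```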

The request message for [Agents.SetAgent][google.cloud.dialogflow.v2beta1.Agents.SetAgent].

Hints for the speech recognizer to help with recognition in a specific conversation state.

Configures speech transcription for [ConversationProfile][google.cloud.dialogflow.v2beta1.ConversationProfile].

Information for a word recognized by the speech recognizer.

The top-level message sent by the client to the [Sessions.StreamingDetectIntent][google.cloud.dialogflow.v2beta1.Sessions.StreamingDetectIntent] method.

The top-level message returned from the StreamingDetectIntent method.

Contains a speech recognition result corresponding to a portion of the audio that is currently being processed or an indication that this is the end of the single requested utterance.
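
A sketch of consuming these results from the tonic streaming response follows; it assumes the generated streaming_detect_intent call has already returned a `tonic::Streaming<StreamingDetectIntentResponse>`, and field names follow the v2beta1 proto.

```rust
use crate::google::cloud::dialogflow::v2beta1::StreamingDetectIntentResponse;

/// Print interim and final transcripts, then the matched query result.
async fn print_transcripts(
    mut stream: tonic::Streaming<StreamingDetectIntentResponse>,
) -> Result<(), tonic::Status> {
    while let Some(response) = stream.message().await? {
        if let Some(result) = response.recognition_result {
            // Interim transcripts keep arriving until `is_final` is set.
            println!("{} (final: {})", result.transcript, result.is_final);
        }
        if let Some(query_result) = response.query_result {
            println!("fulfillment: {}", query_result.fulfillment_text);
        }
    }
    Ok(())
}
```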

Contains basic configuration for a sub-agent.

Configuration of how speech should be synthesized.

A wrapper of repeated TelephonyDtmf digits.

Represents the natural language text to be processed.

The request message for [Agents.TrainAgent][google.cloud.dialogflow.v2beta1.Agents.TrainAgent].

The request message for [Contexts.UpdateContext][google.cloud.dialogflow.v2beta1.Contexts.UpdateContext].

The request message for [EntityTypes.UpdateEntityType][google.cloud.dialogflow.v2beta1.EntityTypes.UpdateEntityType].

The request message for [Intents.UpdateIntent][google.cloud.dialogflow.v2beta1.Intents.UpdateIntent].

The request message for [SessionEntityTypes.UpdateSessionEntityType][google.cloud.dialogflow.v2beta1.SessionEntityTypes.UpdateSessionEntityType].

Represents a single validation error.

Represents the output of agent validation.

Description of which voice to use for speech synthesis.

Enums

Audio encoding of the audio content sent in the conversational query request. Refer to the Cloud Speech API documentation for more details.

Represents the options for views of an intent. An intent can be a sizable object. Therefore, we provide a resource view that does not return training phrases in the response by default.

Audio encoding of the output audio format in Text-To-Speech.

Variant of the specified [Speech model][google.cloud.dialogflow.v2beta1.InputAudioConfig.model] to use.

Gender of the voice as described in the SSML voice element.