Class GeminiUtil
java.lang.Object
com.google.adk.models.GeminiUtil
Field Summary

Fields
- CONTINUE_OUTPUT_MESSAGE
Method Summary
- static Optional<com.google.genai.types.Part> getPart0FromLlmResponse(LlmResponse llmResponse)
  Extracts the first part of an LlmResponse, if available.
- static String getTextFromLlmResponse(LlmResponse llmResponse)
  Extracts text content from the first part of an LlmResponse, if available.
- static LlmRequest prepareGenenerateContentRequest(LlmRequest llmRequest, boolean sanitize)
  Prepares an LlmRequest for the GenerateContent API.
- static LlmRequest prepareGenenerateContentRequest(LlmRequest llmRequest, boolean sanitize, boolean stripThoughts)
  Prepares an LlmRequest for the GenerateContent API.
- static LlmRequest removeClientFunctionCallId(LlmRequest llmRequest)
  Removes client-side function call IDs from the request.
- static LlmRequest sanitizeRequestForGeminiApi(LlmRequest llmRequest)
  Sanitizes the request to ensure it is compatible with the Gemini API backend.
- static boolean shouldEmitAccumulatedText(LlmResponse currentLlmResponse)
  Determines if accumulated text should be emitted based on the current LlmResponse.
- static com.google.common.collect.ImmutableList<com.google.genai.types.Content> stripThoughts(List<com.google.genai.types.Content> originalContents)
  Removes any `Part` that contains only a `thought` from the content list.
- static com.google.genai.types.GenerateContentResponseUsageMetadata toGenerateContentResponseUsageMetadata(com.google.genai.types.UsageMetadata usageMetadata)
Field Details
- CONTINUE_OUTPUT_MESSAGE
Method Details
prepareGenenerateContentRequest

public static LlmRequest prepareGenenerateContentRequest(LlmRequest llmRequest, boolean sanitize)

Prepares an LlmRequest for the GenerateContent API. This method can optionally sanitize the request and ensures that the last content part is from the user to prompt a model response.

Parameters:
  llmRequest - The original LlmRequest.
  sanitize - Whether to sanitize the request to be compatible with the Gemini API backend.
Returns:
  The prepared LlmRequest.
prepareGenenerateContentRequest

public static LlmRequest prepareGenenerateContentRequest(LlmRequest llmRequest, boolean sanitize, boolean stripThoughts)

Prepares an LlmRequest for the GenerateContent API. This method can optionally sanitize the request and ensures that the last content part is from the user to prompt a model response. It can also strip out any parts marked as "thoughts" and remove client-side function call IDs, since some LLM APIs reject requests that contain them.

Parameters:
  llmRequest - The original LlmRequest.
  sanitize - Whether to sanitize the request to be compatible with the Gemini API backend.
  stripThoughts - Whether to strip out parts marked as "thoughts" from the request contents.
Returns:
  The prepared LlmRequest.
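The "last content part must come from the user" guarantee can be illustrated with a small, self-contained sketch. Everything here is hypothetical: the Content record, the role strings, and the placeholder continuation value stand in for the real com.google.genai.types.Content and the CONTINUE_OUTPUT_MESSAGE constant listed above, whose actual value is not shown in this doc.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only, not the ADK implementation.
public class EnsureUserTurn {
  // Placeholder for GeminiUtil.CONTINUE_OUTPUT_MESSAGE (real value unknown here).
  static final String CONTINUE_OUTPUT_MESSAGE = "Continue output.";

  // Stand-in for com.google.genai.types.Content.
  record Content(String role, String text) {}

  // If the history is empty or does not end with a user turn, append a
  // synthetic user message so the model is prompted to respond.
  static List<Content> ensureLastContentIsFromUser(List<Content> contents) {
    List<Content> result = new ArrayList<>(contents);
    if (result.isEmpty() || !result.get(result.size() - 1).role().equals("user")) {
      result.add(new Content("user", CONTINUE_OUTPUT_MESSAGE));
    }
    return result;
  }
}
```

Appending a synthetic user turn is what lets the API produce a response even when the conversation history ends with a model or tool message.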
sanitizeRequestForGeminiApi

public static LlmRequest sanitizeRequestForGeminiApi(LlmRequest llmRequest)

Sanitizes the request to ensure it is compatible with the Gemini API backend. This is required because some parameters raise a runtime error when sent to the wrong backend (e.g., image names only work on Vertex AI).

Parameters:
  llmRequest - The request to sanitize.
Returns:
  The sanitized request.
removeClientFunctionCallId

public static LlmRequest removeClientFunctionCallId(LlmRequest llmRequest)

Removes client-side function call IDs from the request. Client-side function call IDs are internal to the ADK and should not be sent to the model. This method iterates through the contents and parts, removing the ID from any FunctionCall or FunctionResponse parts.

Parameters:
  llmRequest - The request to process.
Returns:
  A new LlmRequest with function call IDs removed.
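The ID-stripping step might look roughly like this sketch, where FunctionCall is a hypothetical stand-in for com.google.genai.types.FunctionCall (the real method walks every content and part in the LlmRequest and handles FunctionResponse parts the same way):

```java
import java.util.Optional;

// Illustrative sketch only, not the ADK implementation.
public class StripCallIds {
  // Stand-in for com.google.genai.types.FunctionCall.
  record FunctionCall(Optional<String> id, String name) {}

  // Rebuild the part without its ADK-internal client-side id,
  // keeping everything else intact.
  static FunctionCall removeClientFunctionCallId(FunctionCall call) {
    return new FunctionCall(Optional.empty(), call.name());
  }
}
```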
getPart0FromLlmResponse

public static Optional<com.google.genai.types.Part> getPart0FromLlmResponse(LlmResponse llmResponse)

Extracts the first part of an LlmResponse, if available.

Parameters:
  llmResponse - The LlmResponse to extract the first part from.
Returns:
  The first part, or an empty Optional if not found.
getTextFromLlmResponse

public static String getTextFromLlmResponse(LlmResponse llmResponse)

Extracts text content from the first part of an LlmResponse, if available.

Parameters:
  llmResponse - The LlmResponse to extract text from.
Returns:
  The text content, or an empty string if not found.
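The fallback behavior can be illustrated with a small sketch using stand-in Part and Content records (the real method operates on LlmResponse and the genai types): any missing link in the content-to-parts-to-text chain yields the empty string.

```java
import java.util.List;
import java.util.Optional;

// Illustrative sketch only, not the ADK implementation.
public class FirstPartText {
  // Stand-ins for com.google.genai.types.Part and Content.
  record Part(Optional<String> text) {}
  record Content(List<Part> parts) {}

  // Chain through content -> first part -> text, defaulting to "".
  static String getTextFromFirstPart(Optional<Content> content) {
    return content
        .flatMap(c -> c.parts().stream().findFirst())
        .flatMap(Part::text)
        .orElse("");
  }
}
```

Optional chaining keeps the "if available" semantics explicit: no null checks, and the empty-string default appears exactly once at the end.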
shouldEmitAccumulatedText

public static boolean shouldEmitAccumulatedText(LlmResponse currentLlmResponse)

Determines if accumulated text should be emitted based on the current LlmResponse. Accumulated text is flushed when the current response is not a text continuation, e.g., it has no content, no parts, or its first part is not inline_data (meaning it is something else or simply empty), thereby warranting a flush of the preceding text.

Parameters:
  currentLlmResponse - The current LlmResponse being processed.
Returns:
  True if accumulated text should be emitted, false otherwise.
stripThoughts

public static com.google.common.collect.ImmutableList<com.google.genai.types.Content> stripThoughts(List<com.google.genai.types.Content> originalContents)

Removes any `Part` that contains only a `thought` from the content list.
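A simplified sketch of the filtering, with a hypothetical boolean thought flag standing in for the genai Part's optional thought marker (the real method returns a Guava ImmutableList of com.google.genai.types.Content):

```java
import java.util.List;

// Illustrative sketch only, not the ADK implementation.
public class StripThoughts {
  // Stand-ins for com.google.genai.types.Part and Content;
  // the real Part carries an Optional<Boolean> thought marker.
  record Part(boolean thought, String text) {}
  record Content(List<Part> parts) {}

  // Rebuild each Content, dropping parts flagged as thoughts.
  static List<Content> stripThoughts(List<Content> originalContents) {
    return originalContents.stream()
        .map(c -> new Content(
            c.parts().stream().filter(p -> !p.thought()).toList()))
        .toList();
  }
}
```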
toGenerateContentResponseUsageMetadata

public static com.google.genai.types.GenerateContentResponseUsageMetadata toGenerateContentResponseUsageMetadata(com.google.genai.types.UsageMetadata usageMetadata)

Converts a com.google.genai.types.UsageMetadata into a GenerateContentResponseUsageMetadata.