feat: add otel structured logging #552
Conversation
Summary of Changes

Hello @pawel-maciejczek, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request introduces comprehensive OpenTelemetry structured logging into the application. The primary goal is to enhance observability by capturing detailed information about LLM interactions, including system messages, user inputs, and inference outcomes. This integration provides a standardized way to emit logs, which can then be processed and exported to various backend systems, such as Google Cloud, for analysis and monitoring. The changes involve adding new logging utilities, updating core LLM interaction logic to use these utilities, and extending the telemetry setup to configure and manage OpenTelemetry log providers and processors.
Code Review
This pull request introduces OpenTelemetry structured logging, which is a great addition for observability. The changes are extensive, touching configuration, setup, and internal logic to emit logs. I've identified a few critical issues that need to be addressed: a resource leak due to an incorrect shutdown sequence, a potential panic from a nil pointer dereference, a syntax error that will prevent compilation, and redundant logging within a loop. Addressing these will significantly improve the robustness and correctness of the new logging functionality.
```go
useStream := runconfig.FromContext(ctx).StreamingMode == runconfig.StreamingModeSSE

// Log request before calling the model.
telemetry.LogRequest(ctx, req)
```
nit: Is it possible to hide this and telemetry.LogResponse in the generateContent function?
In Python we keep a pretty small code footprint for instrumentation in the business logic:
https://github.com/google/adk-python/blob/ede925b5025972cffcfaf178b2f81679fabbe90f/src/google/adk/flows/llm_flows/base_llm_flow.py#L1166-L1170
```go
// guessedGenAISystem is the AI system we are using.
var guessedGenAISystem = guessAISystem()

var logger = global.GetLoggerProvider().Logger(
```
nit: rename to otelLogger.
It's a bit confusing, because most languages already have some logger, and the name usually signals stdout logging.
dpasiukevich left a comment
LGTM, let's resolve the small nits and we can merge this.
Thanks!
```go
// Message content is not logged by default. Set the following env variable
// to enable logging of prompt/response content:
// OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
var elideMessageContent = !isEnvVarTrue("OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT")
```
@pawel-maciejczek @RKest folks, do you know if this is mentioned anywhere in the public docs for ADK? This env var looks useful, but right now it's hidden in the internal code, and users won't know about it unless they read the code.
```go
	return val == "true" || val == "1"
}

func guessAISystem() string {
```
Could you use this func?
VertexAI or GeminiAPI can be configured not only by this env var but also by the genai.Client.Backend field.
```go
// guessedGenAISystem is the AI system we are using.
var guessedGenAISystem = guessAISystem()

var logger = global.GetLoggerProvider().Logger(
```
Just to double check -- logger provider instantiation will be done in a separate PR?
adk-go/telemetry/setup_otel.go, line 80 in 147b757
General question on this: will it enable me to add labels to various groupings in ADK, so that I have good filtering and pivot variables for Grafana?
This PR introduces structured logging for GenAI requests and responses.
Logging is based on the OTel GenAI semantic conventions (linked from the code), but aims for consistency with the ADK Python implementation. Deviations between the current implementation and semconv are documented in the code.
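As a rough illustration of what such a structured record might carry, here is a dependency-free sketch using attribute names from the OTel GenAI semantic conventions (gen_ai.system, gen_ai.request.model, token usage). The helper and its values are illustrative only, not the PR's actual emit path:

```go
package main

import "fmt"

// buildInferenceAttributes sketches the attribute set a GenAI inference
// log record might carry; names follow the OTel GenAI semconv, values
// are made up for the example.
func buildInferenceAttributes(system, model string, inTok, outTok int) map[string]any {
	return map[string]any{
		"gen_ai.operation.name":      "generate_content",
		"gen_ai.system":              system,
		"gen_ai.request.model":       model,
		"gen_ai.usage.input_tokens":  inTok,
		"gen_ai.usage.output_tokens": outTok,
	}
}

func main() {
	attrs := buildInferenceAttributes("gcp.gemini", "gemini-2.0-flash", 12, 34)
	fmt.Println(attrs["gen_ai.request.model"], attrs["gen_ai.usage.output_tokens"])
}
```

In the real implementation these attributes would be attached to an OTel log record emitted through the configured logger provider rather than returned as a plain map.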