By default, OpenLLMetry logs prompts, completions, and embeddings to span attributes. This gives you clear visibility into how your LLM application is working and makes it easy to debug and evaluate the quality of its outputs. However, you may want to disable this logging for privacy reasons, since these attributes may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
Disabling logging globally
To disable logging, set the TRACELOOP_TRACE_CONTENT environment variable to false.
In TypeScript/JavaScript, you can also pass the traceContent option when initializing the SDK.
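In Python, the variable can also be set in code before the SDK initializes. Here is a minimal sketch; the helper function below is illustrative of the check instrumentations perform, not a documented SDK function:

```python
import os

# Set before initializing the SDK so instrumentations pick it up.
os.environ["TRACELOOP_TRACE_CONTENT"] = "false"

# Illustrative sketch of the check performed before recording content:
def should_send_prompts() -> bool:
    return os.environ.get("TRACELOOP_TRACE_CONTENT", "true").lower() == "true"

print(should_send_prompts())  # False
```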
Enabling logging selectively in specific workflows / tasks
You can selectively enable or disable prompt logging for specific workflows, tasks, agents, or tools using the annotations API. If you don't specify a traceContent option, the global setting is used.
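The mechanics of a per-workflow override can be sketched with a context variable. The decorator and helper names below (workflow, trace_content, should_log_content) are hypothetical stand-ins, not the SDK's actual annotations API:

```python
import os
from contextvars import ContextVar
from functools import wraps

# Hypothetical per-workflow override; None means "use the global setting".
_content_override: ContextVar = ContextVar("content_override", default=None)

def workflow(name: str, trace_content=None):
    """Illustrative decorator: scopes a traceContent override to one workflow."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            token = _content_override.set(trace_content)
            try:
                return fn(*args, **kwargs)
            finally:
                _content_override.reset(token)
        return wrapper
    return decorator

def should_log_content() -> bool:
    override = _content_override.get()
    if override is not None:
        return override  # the per-workflow setting wins
    # Fall back to the global TRACELOOP_TRACE_CONTENT setting.
    return os.environ.get("TRACELOOP_TRACE_CONTENT", "true").lower() == "true"

@workflow(name="summarize", trace_content=True)
def summarize():
    return should_log_content()

print(summarize())  # True, even if the global setting is false
```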
Enabling logging selectively for specific users
You can selectively enable or disable prompt logging for specific users or workflows.
Using the Traceloop Platform
We provide an API to enable content tracing for specific users, as defined by association entities. See the Traceloop API documentation for more information.
Without the Traceloop Platform
Set a key called override_enable_content_tracing in the OpenTelemetry context to True right before making the LLM call you want to trace with prompts.
This creates a new context that instructs instrumentations to log prompts and completions as span attributes.