AgentBackendEventTokenUsage
Token usage statistics for a model invocation.
Properties
input_tokens (integer, optional): Number of input tokens consumed by the model.
output_tokens (integer, optional): Number of output tokens generated by the model.
cache_creation_input_tokens (integer, optional): Number of input tokens written to the prompt cache.
cache_read_input_tokens (integer, optional): Number of input tokens read from the prompt cache.
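The properties above can be mirrored as a small Python sketch. This is an illustration only, not part of any Pulumi SDK: the class and the `total_tokens` helper are hypothetical, assuming each counter may be absent and should be treated as zero when aggregating.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AgentBackendEventTokenUsage:
    """Token usage statistics for a model invocation; every field is optional."""
    input_tokens: Optional[int] = None
    output_tokens: Optional[int] = None
    cache_creation_input_tokens: Optional[int] = None
    cache_read_input_tokens: Optional[int] = None

    def total_tokens(self) -> int:
        # Hypothetical helper: sum all counters, treating missing values as 0.
        return sum(v or 0 for v in (
            self.input_tokens,
            self.output_tokens,
            self.cache_creation_input_tokens,
            self.cache_read_input_tokens,
        ))


usage = AgentBackendEventTokenUsage(
    input_tokens=1200,
    output_tokens=300,
    cache_read_input_tokens=800,
)
print(usage.total_tokens())  # 2300
```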