
🐛 Bug Report: incompatibilities with LLM semantics #1455

@codefromthecrypt

Description


Which component is this bug for?

LLM Semantic Conventions

📜 Description

As a first-timer, I tried the Ollama instrumentation and sent a trace to a local collector. Then I compared the output with the LLM semantic conventions defined by OTel. I noticed roughly as many compatible attributes as incompatible ones, which made me concerned that other instrumentations may have similarly large glitches.

👟 Reproduction steps

Use ollama-python with the instrumentation here. It doesn't matter whether you use the traceloop-sdk or plain OTel to initialize the instrumentation (I checked both, just in case).

👍 Expected behavior

The OTel spec should be a subset of the OpenLLMetry semantics, so there should be no incompatible attributes.

👎 Actual Behavior with Screenshots

Compatible:

  • kind=client
  • name=ollama.chat
  • attributes['gen_ai.system']='Ollama'
  • attributes['gen_ai.response.model']='codegemma:2b-code'
  • attributes['gen_ai.usage.completion_tokens']=11

Incompatible:

  • attributes['gen_ai.prompt.0.content']='prompt_text' — the OTel semantic conventions define this as the non-indexed attribute 'gen_ai.prompt'
  • attributes['gen_ai.completion.0.role']='assistant' — the OTel semantic conventions define this as the non-indexed attribute 'gen_ai.request.model.role'

Not yet defined in the standard:

  • attributes['llm.request.type']='chat'
  • attributes['llm.is_streaming']=false
  • attributes['llm.usage.total_tokens']=11
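The mismatch can be sketched as a simple key check. The observed attribute names are copied from the span dump above; the spec-defined set is my assumption of the relevant non-indexed names the OTel GenAI conventions register, so treat it as illustrative rather than authoritative:

```python
# Attributes observed on the ollama.chat span (values from the trace above).
observed = {
    "gen_ai.system": "Ollama",
    "gen_ai.response.model": "codegemma:2b-code",
    "gen_ai.usage.completion_tokens": 11,
    "gen_ai.prompt.0.content": "prompt_text",
    "gen_ai.completion.0.role": "assistant",
    "llm.request.type": "chat",
    "llm.is_streaming": False,
    "llm.usage.total_tokens": 11,
}

# Assumed subset of names defined by the OTel GenAI semantic conventions;
# note the indexed ".0." forms and the "llm.*" names are not among them.
spec_defined = {
    "gen_ai.system",
    "gen_ai.response.model",
    "gen_ai.usage.completion_tokens",
    "gen_ai.prompt",
    "gen_ai.completion",
}

compatible = sorted(k for k in observed if k in spec_defined)
unrecognized = sorted(k for k in observed if k not in spec_defined)
print("compatible:", compatible)
print("unrecognized:", unrecognized)
```

Running a check like this against any of the other instrumentations in the repo would show whether they have the same class of drift.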

🤖 Python Version

3.12

📃 Provide any additional context for the Bug.

Partially addressed by @gyliu513 in #884.

👀 Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find a similar issue

Are you willing to submit PR?

None
