trace logging for scores per pod #1395
Conversation
✅ Deploy Preview for gateway-api-inference-extension ready!
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: nirrozenbaum. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
fix #1170
```diff
 func (p *SchedulerProfile) runScorerPlugins(ctx context.Context, request *types.LLMRequest, cycleState *types.CycleState, pods []types.Pod) map[types.Pod]float64 {
-	loggerDebug := log.FromContext(ctx).V(logutil.DEBUG)
+	logger := log.FromContext(ctx)
+	loggerTrace := logger.V(logutil.TRACE)
```
Nit/nonblocking: I keep noticing this pattern, and I'm honestly not sure it provides greater value than just explicitly setting the level per log.
Not necessarily a crusade I'll start on this PR, but it's worth mentioning: why can't we just set the log level per log?
Yeah, sure, we can set the log level per log.
I don't see any advantage to the current approach.
Sounds like a new good-first-issue is on the way. A sketch of the two patterns is below.
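For context, a minimal sketch of the two patterns being discussed, using controller-runtime's logr-based logger. The verbosity constants here are stand-ins for the repo's `logutil.DEBUG`/`logutil.TRACE` values, not the real ones:

```go
package example

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/log"
)

const (
	debugLevel = 1 // stand-in for logutil.DEBUG (assumption)
	traceLevel = 2 // stand-in for logutil.TRACE (assumption)
)

func logPatterns(ctx context.Context) {
	// Pattern in the current code: derive leveled loggers up front
	// and reuse them throughout the function.
	logger := log.FromContext(ctx)
	loggerTrace := logger.V(traceLevel)
	loggerTrace.Info("scored pod")

	// Pattern suggested in the review: set the verbosity per log call.
	log.FromContext(ctx).V(traceLevel).Info("scored pod")
}
```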
Fixed it, but only in the scope of this PR.
I can do a quick pass over the code and change it, or we can open a good-first-issue.
I would try to change the logging package by package rather than the whole repo at once, which would touch too many files in one PR.
This is also yet another practice newcomers would pick up when they push PRs.
```diff
 	scores := scorer.Score(ctx, cycleState, request, pods)
 	metrics.RecordPluginProcessingLatency(ScorerExtensionPoint, scorer.TypedName().Type, scorer.TypedName().Name, time.Since(before))
 	for pod, score := range scores { // weight is relative to the sum of weights
+		if loggerTrace.Enabled() {
```
Is this conditional necessary? I think this check is done internally.
Right, not necessary.
Fixing.
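For anyone following along, a small sketch of why the guard is redundant under standard logr semantics: `V(n).Info` already returns immediately when the verbosity is disabled, so an explicit `Enabled()` check only pays off when building the log payload is itself expensive. `buildSummary` below is a hypothetical helper, not code from this repo:

```go
package example

import (
	"fmt"

	"github.com/go-logr/logr"
)

func logScores(loggerTrace logr.Logger, scores map[string]float64) {
	// No guard needed: Info checks enablement internally and is a
	// no-op when trace verbosity is off.
	for pod, score := range scores {
		loggerTrace.Info("pod score", "pod", pod, "score", score)
	}

	// An explicit Enabled check is only worthwhile when assembling
	// the log payload is itself expensive.
	if loggerTrace.Enabled() {
		loggerTrace.Info("score summary", "summary", buildSummary(scores))
	}
}

// buildSummary stands in for costly formatting work (hypothetical).
func buildSummary(scores map[string]float64) string {
	return fmt.Sprintf("%d pods scored", len(scores))
}
```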
Couple of small comments. One thing to note: this should only be used for light debugging; our trace logs are breathtakingly verbose and will introduce a high amount of lock contention if used with higher concurrency. I'll have a public doc to share later today about this topic, but just wanted to share. /lgtm
Addressed your comments. /unhold
Signed-off-by: Nir Rozenbaum <[email protected]>
Force-pushed from 7181def to cc9903e
/lgtm
I've been asked several times about the ability to debug specific scorers when running the EPP.
To help with that, this PR adds trace-level logging of each scorer's score per pod.
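A minimal, self-contained sketch of the behavior this PR adds. The scorer name and pod key are simplified stand-ins for the repo's real types, and `traceLevel` stands in for `logutil.TRACE`:

```go
package example

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/log"
)

const traceLevel = 2 // stand-in for logutil.TRACE (assumption)

// logScorerResults emits one trace-level line per pod with that
// scorer's score, mirroring the loop shown in the diff above.
func logScorerResults(ctx context.Context, scorerName string, scores map[string]float64) {
	loggerTrace := log.FromContext(ctx).V(traceLevel)
	for pod, score := range scores {
		loggerTrace.Info("pod score", "scorer", scorerName, "pod", pod, "score", score)
	}
}
```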