Conversation

@Christopher-Chianelli
Contributor
Profiling constraints is notoriously difficult, since each component of a constraint is converted to nodes, some of which are shared. As such, a JVM method profile is basically unreadable and does not represent how much time is actually spent in each constraint.

To aid in profiling, an optional constraintStreamProfilingMode configuration was added. If set to a value other than NONE, it wraps each tuple lifecycle node inside a ProfilingTupleLifecycle, which measures how long each lifecycle call executes. The ProfilingTupleLifecycle finds out which constraint is responsible for creating that lifecycle by taking a snapshot of the stack trace from its constraint stream's creator (when a constraint stream is shared, the stack traces are merged into the same set).
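For illustration, a minimal sketch of the wrapping idea, assuming a simplified TupleLifecycle interface (Timefold's real internals differ):

interface TupleLifecycle<T> {
    void insert(T tuple);
    void update(T tuple);
    void retract(T tuple);
}

final class ProfilingTupleLifecycle<T> implements TupleLifecycle<T> {

    private final TupleLifecycle<T> delegate;
    private long totalNanos; // time attributed to this node across all calls

    ProfilingTupleLifecycle(TupleLifecycle<T> delegate) {
        this.delegate = delegate;
    }

    private void timed(Runnable call) {
        long start = System.nanoTime();
        try {
            call.run();
        } finally {
            totalNanos += System.nanoTime() - start;
        }
    }

    @Override
    public void insert(T tuple) {
        timed(() -> delegate.insert(tuple));
    }

    @Override
    public void update(T tuple) {
        timed(() -> delegate.update(tuple));
    }

    @Override
    public void retract(T tuple) {
        timed(() -> delegate.retract(tuple));
    }

    public long getTotalNanos() {
        return totalNanos;
    }
}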

At the end of solving, a profiling summary is then produced in the INFO log. The details differ depending on the profiling mode:

  • In the BY_METHOD profiling mode, (className, methodName) is used as the key.

  • In the BY_LINE profiling mode, (className, methodName, lineNumber) is used as the key.

The methods/lines are printed in descending order of the percentage of time spent. The sum of these percentages may exceed 100%, since a shared node's time is attributed to every method/line that created it.
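To illustrate the aggregation, a sketch under assumed types (Frame and NodeProfile are illustrative, not Timefold's internals):

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

class ProfilingSummary {

    record Frame(String className, String methodName, int lineNumber) {}

    // One profiled node: the creator frames captured for it and its measured time.
    record NodeProfile(Set<Frame> creatorFrames, long nanos) {}

    static Map<String, Long> summarize(List<NodeProfile> nodes, boolean byLine) {
        Map<String, Long> totalByKey = new HashMap<>();
        for (NodeProfile node : nodes) {
            for (Frame frame : node.creatorFrames()) {
                String key = byLine
                        ? frame.className() + "." + frame.methodName() + ":" + frame.lineNumber()
                        : frame.className() + "." + frame.methodName();
                // A shared node's time counts once per creator frame, which is
                // why the reported percentages can sum to more than 100%.
                totalByKey.merge(key, node.nanos(), Long::sum);
            }
        }
        return totalByKey;
    }
}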

timefold.solver.constraint-stream-profiling-mode was added as a property to Quarkus and Spring Boot to configure profiling (defaults to NONE).
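For example, in application.properties (property name and values as described in this PR):

# Quarkus or Spring Boot application.properties
# Valid values: NONE (default), BY_METHOD, BY_LINE
timefold.solver.constraint-stream-profiling-mode=BY_METHOD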

@sonarqubecloud

Quality Gate failed

Failed conditions:
C Reliability Rating on New Code (required ≥ A)

@triceo left a comment
Collaborator

I like this in principle! When it comes to the implementation, I have concerns. See specifically the comment regarding profiling mode.

I also see this hooks itself into tuple lifecycle. But I don't see it tracking propagation overhead anywhere. Arguably it should.

The logging aspect is OK, but I am missing any methods to consume this information at runtime. How about Micrometer metrics? Or custom JFR events? Thinking out loud here: if it triggered an event (micrometer / JFR) every time it finished a single measurement, these could be aggregated at the end as well as queried in-flight. More often than not, we will be run in the platform - people will want a way of monitoring their solvers.
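For illustration, a minimal sketch of such a JFR event (the event name and fields are hypothetical, not an agreed design):

import jdk.jfr.Category;
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

@Name("timefold.ConstraintNodeTiming")
@Label("Constraint Node Timing")
@Category("Timefold")
class ConstraintNodeTimingEvent extends Event {
    @Label("Constraint")
    String constraintName;
}

// At the measurement site, roughly:
//   var event = new ConstraintNodeTimingEvent();
//   event.begin();
//   delegate.insert(tuple); // the measured work
//   event.constraintName = constraintName;
//   event.commit(); // recorded only when JFR is enabled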

I question the name "profiling"; this doesn't use a profiler, and doesn't work in a way in which profilers work. The name should be changed in order to not invite confusing comparisons with actual profiling.

* Controls how constraints are profiled.
* Enabling profiling has a minor performance impact.
*/
public enum ConstraintProfilingMode {
@triceo Nov 18, 2025
Collaborator
IMO this is needless complexity. Do not profile by method or line - profile by constraint. That is what people actually want to see; the list of constraints and how much time they take to run.

Also, the names of constraints typically do not change, while method names and lines do as code is refactored over time. People who use bytecode obfuscation would get absolutely no information out of this, and I wonder what happens with automatic node sharing etc., where we mutate code heavily without giving access to that new source code.

All in all, this information is too unreliable to present it to the user. The only reliable source is the constraint name; the rest, if it needs to exist, should remain hidden.

@Christopher-Chianelli
Contributor Author

This does not look at lambdas, it looks at the `.filter`/`.join`/etc. code, which is not modified by node sharing (i.e. it looks at the `filter` call, not what is within the `filter` call).
Arguably, having method name + line number is useful; it tells you what part of a constraint is taking the most runtime (i.e. whether it is a filter, join, or groupBy).

@triceo
Collaborator

The line number still changes with automatic node sharing - new fields are added at the top of the class, moving others down. (Although how does it even work with bytecode generation, when there is no source code? It seems that this metadata has to be fundamentally unreliable.)

Even with the changes that the Java compiler does when it produces bytecode, line numbers are still unreliable. Code can be added here and there. The idea of trying to do this per building block is IMO noble, but falls apart too easily. I don't think it's necessary for this concept to succeed - when the users see the expensive constraint, that'll already be plenty information to proceed.

@Christopher-Chianelli
Contributor Author

Line numbers are recorded in bytecode, so the line numbers in stack traces do not change if that bytecode is modified (provided you don't delete/change the metadata instruction that records the line number).

@Christopher-Chianelli
Contributor Author

As for how line numbers look in bytecode:

// Some previous instructions
IADD
INVOKE_VIRTUAL
Label LINE_START
LINENUMBER 10 LINE_START
// Some instructions after
NOP
INVOKE_INTERFACE
FADD
Label ANOTHER_LINE
LINENUMBER 11 ANOTHER_LINE

In this example, line 10 corresponds to the NOP, INVOKE_INTERFACE, and FADD instructions.

  var constraintProvider = new TestingConstraintProvider(firstEntityClass);
  var scoreDirectorFactory = new BavetConstraintStreamScoreDirectorFactory<>(solutionDescriptor, constraintProvider,
-         EnvironmentMode.TRACKED_FULL_ASSERT);
+         EnvironmentMode.TRACKED_FULL_ASSERT, ConstraintProfilingMode.NONE);
@triceo
Collaborator

Let's have an overload which doesn't require this to be specified.
It is boilerplate.

@Christopher-Chianelli
Contributor Author

Regarding tracking propagation overhead: it did in a previous implementation (by wrapping all the Propagators). However, this is a form of double counting; the Propagators fundamentally call lifecycles, which are profiled. Additionally, Propagators correspond to parents of nodes instead of the nodes themselves, which can cause absurdities such as forEach taking a large percentage of a profile (since it is a parent to a join/filter).

@triceo commented Nov 18, 2025
Collaborator

> However, this is a form of double counting; the Propagators fundamentally call lifecycles, which are profiled.

Not sure about that. These lifecycles count for a lot, true. But propagation also deals with the queues, and iterates a lot. IMO it's not necessarily a double count.

> Additionally, Propagators correspond to parents of nodes instead of the nodes themselves, which can cause absurdities such as forEach taking a large percentage of a profile (since it is a parent to a join/filter).

Which is another good argument that this should not profile per-method or per-line, but per-constraint. It doesn't matter which part takes so long - it's the constraint that matters.
