
Making pyhf differentiable #882

@phinate

Description


Through the discussions that have taken place in IRIS-HEP and gradhep, it's become clear to me that it would be awesome to have pyhf as part of the modelling step in a differentiable analysis workflow.

Having backends that support autograd does most of the work, but @lukasheinrich and I have found through experiments while developing neos that there are still operations that don't play nicely with the differentiability of pyhf -- pyhf.Model construction, for instance. This is also only in the situation of using pyhf.tensor.jax_backend(); I haven't tried this with other backends.
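To make the current state concrete, here's a minimal sketch of what already works with the jax backend: differentiating the likelihood with respect to the model parameters for a *fixed* model, built with pyhf.simplemodels.hepdata_like (the yields and data below are made-up numbers, just for illustration). The part this issue is about is moving the model construction itself inside the differentiated function, e.g. taking gradients with respect to the signal yields, which is where things currently break.

```python
import jax
import jax.numpy as jnp
import pyhf

pyhf.set_backend("jax")

# A fixed two-bin model; all yields/uncertainties are made-up numbers.
model = pyhf.simplemodels.hepdata_like(
    signal_data=[5.0, 10.0],
    bkg_data=[50.0, 52.0],
    bkg_uncerts=[3.0, 7.0],
)
data = jnp.asarray([55.0, 62.0] + model.config.auxdata)

def nll(pars):
    # model.logpdf returns a length-1 tensor, so index into it
    return -model.logpdf(pars, data)[0]

# Gradient of the negative log-likelihood w.r.t. the model parameters --
# this part works today because the jax backend supports autograd.
grad_nll = jax.grad(nll)
print(grad_nll(jnp.asarray(model.config.suggested_init())))
```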

Pull requests/issues

There's already been an attempt to make part of this differentiable in #742; I made some small modifications to it in my fork of that branch. (I make an explicit call to override the backend at one point, which isn't pretty...)

Scope

So far, this idea has only been applied to the model construction and likelihood evaluation steps, but I wonder whether it could also extend to inference in pyhf.infer. This could influence design decisions for a library providing differentiable HEP operations, which could be imported into pyhf.infer down the line if the underlying functionality doesn't change -- e.g. wrapping a scipy optimizer with the two-phase method in the fax library, as sketched below.
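For illustration, here's a rough sketch of what a differentiable fit could look like under the hood: differentiating through the minimum of a likelihood via the implicit function theorem, which is in the same spirit as the two-phase method in fax. Everything below (the toy nll, the fit wrapper) is hypothetical and not pyhf.infer API; it just shows that the gradient of a fit result can be obtained without unrolling the optimizer.

```python
import jax
import jax.numpy as jnp
from jax.scipy.optimize import minimize

def nll(pars, data):
    # toy negative log-likelihood: Gaussian with unknown mean
    return 0.5 * jnp.sum((data - pars[0]) ** 2)

def _fit(data):
    # the forward pass is an ordinary (non-differentiable) minimisation
    return minimize(nll, jnp.zeros(1), args=(data,), method="BFGS").x

@jax.custom_vjp
def fit(data):
    return _fit(data)

def fit_fwd(data):
    pars = _fit(data)
    return pars, (pars, data)

def fit_bwd(residuals, g):
    pars, data = residuals
    # implicit function theorem at the minimum:
    # d(pars)/d(data) = -H^{-1} @ d^2 nll / (d pars d data)
    hess = jax.hessian(nll, argnums=0)(pars, data)
    cross = jax.jacobian(jax.grad(nll, argnums=0), argnums=1)(pars, data)
    return (-(jnp.linalg.solve(hess, cross)).T @ g,)

fit.defvjp(fit_fwd, fit_bwd)

# gradient of the fitted mean w.r.t. the data: 1/len(data) per entry
print(jax.grad(lambda d: fit(d)[0])(jnp.array([1.0, 2.0, 3.0])))
```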

All this seems like a big enough task that I felt it warranted an issue as a documented way forward :)
