Description
In #24 I removed the (very rudimentary) caching mechanism from the code. So that this knowledge is not lost, here's some background and code.
Reasoning for removing the code (for now)
For caching to become useful, it would have to be smarter than what we have now. Further, because we have the low-res slices, the need for caching is lower.
After we've implemented more features that affect how the data is managed (like contrast limits), we should revisit whether the additional complexity of a caching mechanism is worthwhile.
Outline
To perform caching, we'd keep a per-slicer client-side dictionary (e.g. on the `slicer_state` object that #24 introduced). At the moment the server callback that serves slices uses the `index.data` as an input. We'd need an extra store `req-index` that copies over `index.data`, except when the cache already has data for that index. Something like this:
```py
self._clientside_callback(
    """
    function update_req_index(index) {
        let slicer_state;  // filled in
        slicer_state.cache = slicer_state.cache || {};
        // Only forward the index (i.e. request it from the server) if that
        // slice is not in the cache yet.
        return slicer_state.cache[index] ? dash_clientside.no_update : index;
    }
    """,
    Output(self._req_index.id, "data"),
    [Input(self._index.id, "data")],
)
```
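The server-side callback that uploads the slice data would then presumably take the `req-index` store as its input instead of `index`, so that requests for already-cached indices never reach the server. Roughly like the sketch below, where the attribute names (`self._app`, `self._server_data`) and the `get_encoded_slice` helper are hypothetical placeholders, not existing code:

```py
# Hypothetical rewiring of the server-side slice callback: it listens to
# req-index, so indices that are already cached client-side are skipped.
@self._app.callback(
    Output(self._server_data.id, "data"),
    [Input(self._req_index.id, "data")],
)
def upload_requested_slice(index):
    # get_encoded_slice is a placeholder for however the slice is
    # extracted and encoded (e.g. as a base64 image URI).
    return {"index": index, "slice": get_encoded_slice(index)}
```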
Then when a new slice is received from the server, store the incoming slice:
```py
self._clientside_callback(
    """
    function update_image_traces(index, server_data, overlays, lowres, info, current_traces) {
        let slicer_state;  // filled in
        slicer_state.cache = slicer_state.cache || {};
        // If this call was triggered by new data from the server, cache that slice.
        for (let trigger of dash_clientside.callback_context.triggered) {
            if (trigger.prop_id.indexOf('server-data') >= 0) {
                slicer_state.cache[server_data.index] = server_data;
                break;
            }
        }
        ...
```
This would more or less restore the caching mechanism that we had earlier. But more features should be added to make it useful, e.g. obtaining neighboring slices in advance, and purging slices from the cache when needed to preserve memory.
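For the purging part, one simple option would be to keep the cache bounded and drop the entries furthest from the currently shown index. Below is a minimal sketch of a JS fragment that could be appended to the `update_image_traces` body right after the slice is stored; the `MAX_CACHE_ENTRIES` bound, its value, and the distance-based eviction policy are assumptions, not existing code:

```py
# Hypothetical purge step for the JS body of update_image_traces, to run
# right after a new slice is stored in slicer_state.cache.
PURGE_JS = """
    const MAX_CACHE_ENTRIES = 32;  // assumed bound, to be tuned
    let keys = Object.keys(slicer_state.cache);
    if (keys.length > MAX_CACHE_ENTRIES) {
        // Drop the cached slices that are furthest from the current index.
        keys.sort((a, b) => Math.abs(b - index) - Math.abs(a - index));
        for (let key of keys.slice(0, keys.length - MAX_CACHE_ENTRIES)) {
            delete slicer_state.cache[key];
        }
    }
"""
```

Prefetching neighboring slices could be handled similarly, e.g. by letting `update_req_index` also emit the indices adjacent to the current one when they are not cached, though that would probably require the `req-index` store to hold a list of indices rather than a single one.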