
Conversation

abarker-launchdarkly (Contributor)

Requirements

  • I have added test coverage for new or changed functionality
  • I have followed the repository's pull request submission guidelines
  • I have validated my changes against all supported platform versions

Describe the solution you've provided
Improve KV data caching and deserialization

abarker-launchdarkly requested a review from a team as a code owner July 31, 2025 15:25
abarker-launchdarkly requested review from kinyoklion and ldhenry and removed request for a team July 31, 2025 15:25

@launchdarkly/browser size report
This is the brotli compressed size of the ESM build.
Compressed size: 168910 bytes
Compressed size limit: 200000
Uncompressed size: 789063 bytes


@launchdarkly/js-sdk-common size report
This is the brotli compressed size of the ESM build.
Compressed size: 24988 bytes
Compressed size limit: 26000
Uncompressed size: 122411 bytes


@launchdarkly/js-client-sdk-common size report
This is the brotli compressed size of the ESM build.
Compressed size: 17636 bytes
Compressed size limit: 20000
Uncompressed size: 90259 bytes


@launchdarkly/js-client-sdk size report
This is the brotli compressed size of the ESM build.
Compressed size: 21719 bytes
Compressed size limit: 25000
Uncompressed size: 74696 bytes

  }

  init(allData: LDFeatureStoreDataStorage, callback: () => void): void {
    this._getKVData();
abarker-launchdarkly (Contributor, Author):

For reviewers: I'm not entirely sure if this is necessary but wanted to play it safe.

kinyoklion (Member):

I think this is probably good.

ldhenry (Contributor):

@abarker-launchdarkly, I ran this branch locally and I saw these logs when evaluating two flags:

Started EventProcessor.
LD Client initialized
Requesting example-flag from LD-Env-local.flags
No cached data found, loading from KV store
Deserializing KV store data
Successfully cached deserialized data
example-flag evaluation: 1.000250ms
Requesting example-flag from LD-Env-local.flags
Using cached deserialized data
example-flag evaluation 2: 0.212166ms

I'm not sure how much it matters, but I was expecting the first flag eval to use cached data, since you are calling this._getKVData() in init. FWIW, I am calling await ldClient.waitForInitialization(); before evaluating the flag. Is this expected behavior?

Great to see the caching is working for the second flag eval though!
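
For readers following the thread: the log lines above suggest a lazy-caching path in which the first lookup misses the in-memory cache, loads and deserializes the KV payload, and later lookups reuse the deserialized object. The sketch below only illustrates that behavior; apart from the log messages and the _getKVData name, every interface and field in it is an assumption, not the SDK's actual code.

```typescript
// Illustrative sketch of lazy KV caching (assumed shapes, not the real EdgeFeatureStore).
interface EdgeKVStore {
  get(rootKey: string): Promise<string | null>;
}

interface Logger {
  debug(message: string): void;
}

class LazyKVCacheSketch {
  // Deserialized payload, populated on the first successful load.
  private _cache?: Record<string, unknown>;

  constructor(
    private readonly _kv: EdgeKVStore,
    private readonly _rootKey: string,
    private readonly _logger: Logger,
  ) {}

  // Plays the role of _getKVData(): load and deserialize once, then reuse.
  async getKVData(): Promise<Record<string, unknown> | undefined> {
    if (this._cache !== undefined) {
      this._logger.debug('Using cached deserialized data');
      return this._cache;
    }
    this._logger.debug('No cached data found, loading from KV store');
    const raw = await this._kv.get(this._rootKey);
    if (raw === null) {
      return undefined;
    }
    this._logger.debug('Deserializing KV store data');
    this._cache = JSON.parse(raw) as Record<string, unknown>;
    this._logger.debug('Successfully cached deserialized data');
    return this._cache;
  }
}
```

With a shape like this, init() calling _getKVData() only warms the cache if something actually invokes init() during startup, which is exactly the question taken up in the next reply.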

abarker-launchdarkly (Contributor, Author):

@ldhenry From what I can tell, nothing is actually calling either EdgeFeatureStore.init() or EdgeFeatureStore.initialized() during client initialization, so caching only happens on the first flag eval.

In your branch, you were calling EdgeFeatureStore.initialized() in your overridden LDClient.waitForInitialization(), which had the side effect of pre-caching the data. @kinyoklion, is this reason enough to attempt to override LDClient.waitForInitialization()? If so, we'll need to discuss some of the issues I brought up in the Slack thread.
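
As a rough illustration of the pre-caching option being weighed here (an assumption for discussion, not the SDK's real API), an overridden waitForInitialization() could call the store's initialized() check so the KV payload is deserialized and cached before the first evaluation:

```typescript
// Hypothetical pre-warming sketch; the store and client shapes are assumed, not real SDK types.
interface InitCheckingStore {
  initialized(callback: (isInitialized: boolean) => void): void;
}

class PreWarmingClientSketch {
  constructor(private readonly _store: InitCheckingStore) {}

  async waitForInitialization(): Promise<void> {
    // Calling initialized() would, as a side effect, load and cache the KV data,
    // so the first flag evaluation afterwards hits the warm cache.
    await new Promise<void>((resolve) => {
      this._store.initialized(() => resolve());
    });
  }
}
```

The trade-off is that initialization semantics then depend on a caching side effect, which is what the daemon-mode comparison below speaks to.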

ldhenry (Contributor):

Oh, that makes sense. Thanks for the clarification.

kinyoklion (Member):

I would follow what we are doing with daemon mode, which I think is immediately initialized. waitForInitialization would generally apply to cases where the evaluation result would be different if you waited versus not waiting. We are initialized from that perspective, but we haven't cached, which is the same as daemon mode.

Though it may be easier to explain to customers.


ldhenry (Contributor) left a comment:

This looks good to me but I'll defer to @kinyoklion for final approval.

abarker-launchdarkly merged commit ae47860 into main Aug 1, 2025
27 checks passed
abarker-launchdarkly deleted the abarker/sdk-1404/improve-kv-caching-deserialization branch August 1, 2025 18:36
github-actions bot mentioned this pull request Jul 30, 2025
abarker-launchdarkly pushed a commit that referenced this pull request Aug 1, 2025
🤖 I have created a release *beep* *boop*
---


<details><summary>fastly-server-sdk: 0.2.0</summary>

## [0.2.0](fastly-server-sdk-v0.1.9...fastly-server-sdk-v0.2.0) (2025-08-01)


### Features

* Improve KV data caching and deserialization ([#903](#903)) ([ae47860](ae47860))
</details>

<details><summary>server-sdk-ai: 0.11.0</summary>

## [0.11.0](server-sdk-ai-v0.10.1...server-sdk-ai-v0.11.0) (2025-08-01)


### Features

* Adding agent support for AI Configs ([#893](#893)) ([bf95b92](bf95b92))
* Update AI tracker to include model & provider name for metrics generation ([#901](#901)) ([9474862](9474862))


### Bug Fixes

* Remove deprecated track generation event ([#902](#902)) ([40f8593](40f8593))
</details>

---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>