
Conversation

joostjager
Contributor

@joostjager joostjager commented Jul 15, 2025

Async filesystem store with eventually consistent writes. It just uses tokio's spawn_blocking, because that is what tokio::fs would do internally anyway, and using tokio::fs directly would make it complicated to reuse the sync code.

ldk-node try-out: lightningdevkit/ldk-node@main...joostjager:ldk-node:async-fsstore
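
For illustration, a minimal sketch of the approach described above (not the PR's exact code): the existing synchronous write path is reused and simply moved onto Tokio's blocking pool. Type and method names are simplified here.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;

struct FilesystemStoreInner;

impl FilesystemStoreInner {
	fn write_sync(&self, _key: &str, _buf: Vec<u8>) -> std::io::Result<()> {
		// ... the existing synchronous, atomic file write ...
		Ok(())
	}
}

struct FilesystemStore {
	inner: Arc<FilesystemStoreInner>,
}

impl FilesystemStore {
	fn write_async(
		&self, key: String, buf: Vec<u8>,
	) -> Pin<Box<dyn Future<Output = std::io::Result<()>> + Send>> {
		let this = Arc::clone(&self.inner);
		Box::pin(async move {
			// spawn_blocking is what tokio::fs uses internally as well.
			tokio::task::spawn_blocking(move || this.write_sync(&key, buf))
				.await
				.expect("blocking task panicked")
		})
	}
}
```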

@ldk-reviews-bot

ldk-reviews-bot commented Jul 15, 2025

👋 Thanks for assigning @tnull as a reviewer!
I'll wait for their review and will help manage the review process.
Once they submit their review, I'll check if a second reviewer would be helpful.

@joostjager joostjager changed the title Async fsstore Async FilesystemStore Jul 15, 2025
@joostjager joostjager force-pushed the async-fsstore branch 4 times, most recently from 29b8bcf to 81ad668 Compare July 15, 2025 13:40
let this = Arc::clone(&self.inner);

Box::pin(async move {
tokio::task::spawn_blocking(move || {
Contributor

Mhh, so I'm not sure if spawning blocking tasks for every IO call is the way to go (see for example https://docs.rs/tokio/latest/tokio/fs/index.html#tuning-your-file-io: "To get good performance with file IO on Tokio, it is recommended to batch your operations into as few spawn_blocking calls as possible."). Maybe there are other designs that we should at least consider before moving forward with this approach. For example, we could create a dedicated pool of longer-lived worker task(s) that process a queue?

If we use spawn_blocking, can we give the user control over exactly which runtime this will be spawned on? Also, rather than just wrapping, should we be using tokio::fs?

Contributor Author

> Mhh, so I'm not sure if spawning blocking tasks for every IO call is the way to go (see for example https://docs.rs/tokio/latest/tokio/fs/index.html#tuning-your-file-io: "To get good performance with file IO on Tokio, it is recommended to batch your operations into as few spawn_blocking calls as possible.").

If we should batch operations, I think the current approach is better than using tokio::fs, because it already batches the various operations inside KVStoreSync::write into a single spawn_blocking call.

Further batching probably needs to happen at a higher level in LDK, and might be a bigger change. Not sure if that is worth it just for FilesystemStore, especially when that store is not the preferred store for real-world usage?

> For example, we could create a dedicated pool of longer-lived worker task(s) that process a queue?

Isn't Tokio doing that already when a task is spawned?

> If we use spawn_blocking, can we give the user control over exactly which runtime this will be spawned on? Also, rather than just wrapping, should we be using tokio::fs?

With tokio::fs, the current runtime is used. I'd think that is also sufficient if we spawn ourselves, without a need to specify exactly which runtime?

More generally, I think the main purpose of this PR is to show how an async KVStore could be implemented, and to have something for testing potentially. Additionally, if there are users who really want to use this type of store in production, they could. But I don't think it is something to spend too much time on. A remote database is probably the more important target to design for.
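
For reference, one possible shape of the "dedicated pool of longer-lived worker task(s) processing a queue" alternative discussed above — purely hypothetical, not what this PR implements, and all names are illustrative:

```rust
use tokio::sync::{mpsc, oneshot};

enum StoreOp {
	Write { key: String, buf: Vec<u8>, done: oneshot::Sender<std::io::Result<()>> },
}

fn spawn_store_worker(mut rx: mpsc::Receiver<StoreOp>) {
	// A single long-lived blocking task drains the queue, so individual writes
	// no longer need their own spawn_blocking call. Must be called from within
	// a Tokio runtime context.
	tokio::task::spawn_blocking(move || {
		while let Some(op) = rx.blocking_recv() {
			match op {
				StoreOp::Write { key, buf, done } => {
					// Placeholder for the real write path.
					let res = std::fs::write(&key, &buf);
					let _ = done.send(res);
				},
			}
		}
	});
}
```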

Contributor

> With tokio::fs, the current runtime is used. I'd think that is also sufficient if we spawn ourselves, without a need to specify exactly which runtime?

Hmm, I'm not entirely sure. Especially for users that have multiple runtime contexts floating around, it might be important to make sure the store uses a particular one (cc @domZippilli ?). I'll also have to think this through for LDK Node when we make the switch to async KVStore there, but happy to leave it as-is for now.

}

/// Provides additional interface methods that are required for [`KVStore`]-to-[`KVStore`]
/// data migration.
pub trait MigratableKVStore: KVStore {
pub trait MigratableKVStore: KVStoreSync {
Contributor

How will we solve this for an async KVStore?

Contributor Author

I think this comment belongs in #3905?

We might not need to solve it now, as long as we still require a sync implementation alongside an async one? If we support async-only kvstores, then we can create an async version of this trait?
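
For illustration, an async variant of the migration trait could hypothetically look like the following; the method signature is illustrative and not part of this PR:

```rust
use std::future::Future;
use std::pin::Pin;

type AsyncResult<T> = Pin<Box<dyn Future<Output = Result<T, std::io::Error>> + Send>>;

pub trait MigratableKVStoreAsync {
	/// Returns all (primary_namespace, secondary_namespace, key) tuples known to the store.
	fn list_all_keys(&self) -> AsyncResult<Vec<(String, String, String)>>;
}
```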

@joostjager
Contributor Author

Removed garbage collector, because we need to keep the last written version.

@joostjager joostjager self-assigned this Jul 17, 2025
@joostjager joostjager mentioned this pull request Jul 17, 2025
@joostjager joostjager force-pushed the async-fsstore branch 2 times, most recently from 97d6b3f to 02dce94 Compare July 23, 2025 18:11

codecov bot commented Jul 23, 2025

Codecov Report

❌ Patch coverage is 91.32231% with 21 lines in your changes missing coverage. Please review.
✅ Project coverage is 88.78%. Comparing base (c2d9b97) to head (96091ec).
⚠️ Report is 60 commits behind head on main.

Files with missing lines | Patch % | Lines
lightning-persister/src/fs_store.rs | 91.32% | 10 Missing and 11 partials ⚠️
Additional details and impacted files
@@           Coverage Diff            @@
##             main    #3931    +/-   ##
========================================
  Coverage   88.77%   88.78%            
========================================
  Files         175      176     +1     
  Lines      127760   128709   +949     
  Branches   127760   128709   +949     
========================================
+ Hits       113425   114276   +851     
- Misses      11780    11836    +56     
- Partials     2555     2597    +42     
Flag | Coverage Δ
fuzzing | 22.38% <47.08%> (+0.28%) ⬆️
tests | 88.61% <91.32%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.

@joostjager joostjager force-pushed the async-fsstore branch 2 times, most recently from c061fcd to 2492508 Compare July 24, 2025 08:31
@joostjager joostjager marked this pull request as ready for review July 24, 2025 08:32
@ldk-reviews-bot ldk-reviews-bot requested a review from tankyleo July 24, 2025 08:32
@joostjager joostjager force-pushed the async-fsstore branch 2 times, most recently from 9938dfe to 7d98528 Compare July 24, 2025 09:39
@joostjager joostjager force-pushed the async-fsstore branch 5 times, most recently from 38ab949 to dd9e1b5 Compare July 25, 2025 13:39
@joostjager
Contributor Author

joostjager commented Jul 25, 2025

Updated the code to not use an async wrapper, but to conditionally expose the async KVStore trait on FilesystemStore.

I haven't updated the ldk-node branch using this PR yet, because it seems many other things broke in main again.
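
For illustration, the general shape of conditionally exposing an async trait impl behind a feature flag — the trait name, feature name, and signature below are assumptions; the real trait is LDK's async KVStore:

```rust
use std::future::Future;
use std::pin::Pin;

pub struct FilesystemStore;

pub trait AsyncKVStore {
	fn write(
		&self, key: String, buf: Vec<u8>,
	) -> Pin<Box<dyn Future<Output = std::io::Result<()>> + Send>>;
}

// The async impl only exists when the (assumed) `tokio` feature is enabled, so
// the crate keeps no async dependency by default.
#[cfg(feature = "tokio")]
impl AsyncKVStore for FilesystemStore {
	fn write(
		&self, _key: String, _buf: Vec<u8>,
	) -> Pin<Box<dyn Future<Output = std::io::Result<()>> + Send>> {
		// Placeholder body; the real implementation delegates to the same code
		// path as the sync trait, run via spawn_blocking.
		Box::pin(async move { Ok(()) })
	}
}
```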

@joostjager joostjager requested a review from tnull July 25, 2025 13:51
@joostjager joostjager force-pushed the async-fsstore branch 2 times, most recently from 6f24148 to c96aaff Compare August 22, 2025 11:13
@joostjager
Contributor Author

Fuzzer found an issue, fixup commit "f: fix remove clean up"

Contributor

@tnull tnull left a comment

Took a first look at the fuzzer parts. I wonder if we would get any notable performance benefit from running the FilesystemStore fuzzer on a ramdisk? Or would we even lose some coverage going this way as it's exactly the IO latency that increases the chances of running into race conditions etc?

fuzz/Cargo.toml Outdated
bech32 = "0.11.0"
bitcoin = { version = "0.32.2", features = ["secp-lowmemory"] }
tokio = { version = "1.35.*", default-features = false, features = ["rt-multi-thread"] }
Contributor

nit: This is more common (note not 100% equivalent, but probably preferable):

Suggested change
tokio = { version = "1.35.*", default-features = false, features = ["rt-multi-thread"] }
tokio = { version = "1.35", default-features = false, features = ["rt-multi-thread"] }

Or is there any reason we don't want any API-compatible version 1.36 and above?

Contributor Author

Yes, it doesn't work with rust 1.63

Contributor

> Yes, it doesn't work with rust 1.63

Huh, but why can we get away with 1.35 below in the actual lightning-persister dependency then? Also, while the * works, you'd usually rather see ~1.35 used.

Contributor Author

@joostjager joostjager Aug 22, 2025

For some reason, the compiler decided that 1.35 could safely be bumped to 1.47. Also happened in CI.

error: package `tokio v1.47.1` cannot be built because it requires rustc 1.70 or newer, while the currently active rustc version is 1.63.0

~/repo/rust-lightning/fuzz (async-fsstore ✗) cargo tree -i tokio
tokio v1.47.1
├── lightning-fuzz v0.0.1 (/Users/joost/repo/rust-lightning/fuzz)
└── lightning-persister v0.2.0+git (/Users/joost/repo/rust-lightning/lightning-persister)
    └── lightning-fuzz v0.0.1 (/Users/joost/repo/rust-lightning/fuzz)

Contributor Author

Made it ~1.35, reads nicer indeed.

use lightning_fuzz::utils::test_logger::StringBuffer;

use std::sync::{atomic, Arc};
// {
Contributor

nit: Remove commented-out code.

Contributor Author

Ah yes. I was still wondering what that code was for. Some default fuzz string sanity check?

let secondary_namespace = "secondary";
let key = "key";

// Remove the key in case something was left over from a previous run.
Contributor

Hmm, rather than doing this, do we want to add a random suffix to temp_path above, so that we're sure to start with a clean directory every time? Also, do we want to clean up the filesystem store directory at the end of the run, similar to what we do in lightning-persister tests?

Contributor Author

@joostjager joostjager Aug 22, 2025

Added random suffixes. It is also necessary because fuzzing runs in parallel. I used uuid for simplicity, but can also generate names differently if preferred.

Also added cleanup. I couldn't just implement the Drop trait on FilesystemStore directly, because it isn't in the same crate, so I created a wrapper for it. Maybe there is a better way to do it.
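
For illustration, a sketch of such a wrapper; the real fuzz code uses a uuid suffix, replaced here by a std-only stand-in to keep the example self-contained:

```rust
use std::path::PathBuf;
use std::time::{SystemTime, UNIX_EPOCH};
use std::{fs, process};

struct TempFilesystemStore {
	temp_path: PathBuf,
	// store: FilesystemStore, // the wrapped store lives here in the real fuzz target
}

impl TempFilesystemStore {
	fn new(base: &str) -> Self {
		// Unique per-run directory so parallel fuzz workers don't collide.
		let nanos = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_nanos();
		let temp_path = PathBuf::from(format!("{}/fs_store_{}_{}", base, process::id(), nanos));
		fs::create_dir_all(&temp_path).unwrap();
		TempFilesystemStore { temp_path }
	}
}

impl Drop for TempFilesystemStore {
	fn drop(&mut self) {
		// Best-effort cleanup; see the later discussion about waiting for all
		// spawned tasks to finish before this runs.
		let _ = fs::remove_dir_all(&self.temp_path);
	}
}
```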

let fut = futures.remove(fut_idx);

fut.await.unwrap();
},
Contributor

It shouldn't change anything, but do we want to throw in some coverage for KVStore::list for good measure?

Contributor Author

@joostjager joostjager Aug 22, 2025

Added. Though I don't think we can assert anything, because things may be in flight. It does add some extra variation to the test to also call list during async ops.

Contributor Author

Also added read. Same story, nothing to assert, but we do cover read execution during writes.

@joostjager
Contributor Author

Considered the RAM disk, but it is platform-specific. @TheBlueMatt suggested an alternative option, which is to allow injecting the actual disk write handler into FilesystemStore and supplying an in-memory implementation for fuzzing. But perhaps that stretches the scope of this PR too much, so I wanted to see if we can keep it to what it is currently?
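
For reference, a hypothetical sketch of that injection idea (explicitly not implemented in this PR): abstract the raw write so fuzzing could substitute an in-memory backend instead of touching the disk. All names are illustrative:

```rust
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::sync::Mutex;

trait WriteBackend: Send + Sync {
	fn write_file(&self, path: &Path, data: &[u8]) -> std::io::Result<()>;
}

// Production backend: real disk IO.
struct DiskBackend;
impl WriteBackend for DiskBackend {
	fn write_file(&self, path: &Path, data: &[u8]) -> std::io::Result<()> {
		std::fs::write(path, data)
	}
}

// Fuzzing backend: in-memory map, no IO latency, fully deterministic.
struct MemBackend(Mutex<HashMap<PathBuf, Vec<u8>>>);
impl WriteBackend for MemBackend {
	fn write_file(&self, path: &Path, data: &[u8]) -> std::io::Result<()> {
		self.0.lock().unwrap().insert(path.to_path_buf(), data.to_vec());
		Ok(())
	}
}
```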

@joostjager joostjager force-pushed the async-fsstore branch 2 times, most recently from 5f7008b to 1a3631c Compare August 25, 2025 07:38
@joostjager
Contributor Author

Fuzz passes, but some deviating log lines show up:

Sz:16 Tm:78,882us (i/b/h/e/p/c) New:0/0/0/0/0/9, Cur:0/0/0/1108/49/48567
Sz:5 Tm:17,602us (i/b/h/e/p/c) New:0/0/0/0/0/16, Cur:0/0/0/1108/49/48583
Sz:1 Tm:17,008us (i/b/h/e/p/c) New:0/0/0/0/0/5, Cur:0/0/0/1108/49/48588
Sz:22 Tm:75,985us (i/b/h/e/p/c) New:0/0/0/1/0/27, Cur:0/0/0/1109/49/48615
[2025-08-25T13:56:42+0000][W][28702] subproc_checkTimeLimit():532 pid=28711 took too much time (limit 1 s). Killing it with SIGKILL
[2025-08-25T13:56:42+0000][W][28703] subproc_checkTimeLimit():532 pid=28715 took too much time (limit 1 s). Killing it with SIGKILL
Sz:44 Tm:137,892us (i/b/h/e/p/c) New:0/0/0/1/0/32, Cur:0/0/0/1110/49/48647
Sz:7 Tm:71,188us (i/b/h/e/p/c) New:0/0/0/0/0/5, Cur:0/0/0/1110/49/48652
[2025-08-25T13:56:42+0000][W][28703] arch_checkWait():237 Persistent mode: pid=28715 exited with status: SIGNALED, signal: 9 (Killed)
Sz:3138 Tm:1,007,070us (i/b/h/e/p/c) New:0/0/0/263/3/13664, Cur:0/0/0/1373/52/62316
Sz:7 Tm:38,881us (i/b/h/e/p/c) New:0/0/0/3/0/55, Cur:0/0/0/1376/52/62371
[2025-08-25T13:56:42+0000][W][28702] arch_checkWait():237 Persistent mode: pid=28711 exited with status: SIGNALED, signal: 9 (Killed)
Sz:5662 Tm:1,013,168us (i/b/h/e/p/c) New:0/0/0/2/0/41, Cur:0/0/0/1378/52/62412
Persistent mode: Launched new persistent pid=30858
Persistent mode: Launched new persistent pid=30878

@joostjager
Contributor Author

Using /dev/shm as a ramdisk, if present, fixed the timeouts.
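
For illustration, a sketch of such a path selection; the exact handling in the fuzz target may differ:

```rust
use std::path::PathBuf;

fn fuzz_base_dir() -> PathBuf {
	// Prefer the tmpfs-backed /dev/shm when it exists (Linux), otherwise fall
	// back to a regular on-disk temp directory.
	let shm = PathBuf::from("/dev/shm");
	if shm.is_dir() {
		shm
	} else {
		std::env::temp_dir()
	}
}
```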

@joostjager
Contributor Author

Tested with a RAM disk on macOS using the tool https://github.com/conorarmstrong/macOS-ramdisk, to see if it isn't now too fast to catch problems. I think it is ok. On my machine, the RAM disk is about 10x faster than disk. Also, when removing the is_stale_version check, it is caught by the fuzzer.

@joostjager
Contributor Author

fs_store fuzz stats now:

Summary iterations:100008 time:969 speed:103 crashes_count:0 timeout_count:339 new_units_added:1085 slowest_unit_ms:1114 guard_nb:686400 branch_coverage_percent:0 peak_rss_mb:67

969 secs, maybe reduce iterations somewhat?

@joostjager
Contributor Author

Squashed the commits. With the number of fixup commits that were there, I don't think it would be helpful anymore to have them.

@joostjager joostjager requested a review from TheBlueMatt August 26, 2025 15:31
TheBlueMatt
TheBlueMatt previously approved these changes Aug 26, 2025
Collaborator

@TheBlueMatt TheBlueMatt left a comment

A few minor notes, but all of them can reasonably be addressed in a followup.


impl Drop for TempFilesystemStore {
fn drop(&mut self) {
_ = fs::remove_dir_all(&self.temp_path)
Collaborator

We need to make sure all the spawned tasks have finished before we do this. Otherwise cleanup won't work, because the async task will re-create the directory as part of its write.

Contributor Author

Good point. At first I thought to do something smart in drop, but at that point we don't have the list of handles, and I don't think we can await in drop either. So I just added the final wait at the end of the test fn and avoided early returns.
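
For illustration, a simplified sketch of that fix: collect the JoinHandles and await them all before the temp store (and its Drop cleanup) goes out of scope, avoiding early returns that would skip the wait:

```rust
async fn run_fuzz_case() {
	// In the real fuzz target, a TempFilesystemStore owning the temp directory
	// is alive for the duration of this function and removes it on drop.
	let mut handles: Vec<tokio::task::JoinHandle<()>> = Vec::new();

	// ... spawn async writes/removes and push their JoinHandles ...

	// Await every outstanding task before the store's Drop cleanup runs, so no
	// in-flight write can re-create the directory after remove_dir_all.
	for handle in handles.drain(..) {
		handle.await.expect("task panicked");
	}
}
```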

@joostjager
Contributor Author

I will address comments in this PR. I definitely want to merge, but there are no conflicts with other PRs, so an early merge isn't really justified.

@TheBlueMatt
Collaborator

Oh, I guess this needs squashing, but feel free.

@joostjager joostjager requested a review from tnull August 27, 2025 13:56
@joostjager
Contributor Author

Yes, kept the fixup because in this case I thought it would make re-review easier. Will let Elias take a look next week and then squash.

@ldk-reviews-bot

🔔 1st Reminder

Hey @tnull! This PR has been waiting for your review.
Please take a look when you have a chance. If you're unable to review, please let us know so we can find another reviewer.

@ldk-reviews-bot

🔔 2nd Reminder

Hey @tnull! This PR has been waiting for your review.
Please take a look when you have a chance. If you're unable to review, please let us know so we can find another reviewer.
