
Conversation

Matroskine

This is an eigensync prototype


coderabbitai bot commented Jul 24, 2025

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


@Matroskine Matroskine self-assigned this Jul 24, 2025
}

/// Load an Automerge document
pub async fn load_document(&self, document_id: &str) -> Result<Option<Vec<u8>>> {


This should return an already parsed AutoCommit (see AutoCommit::load)
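Something like this, roughly (a sketch only; load_document_bytes is a hypothetical helper standing in for the existing byte lookup):

/// Load an Automerge document, already parsed
pub async fn load_document(&self, document_id: &str) -> Result<Option<AutoCommit>> {
    // hypothetical helper returning the stored bytes, if any
    let bytes: Option<Vec<u8>> = self.load_document_bytes(document_id).await?;
    bytes
        .map(|b| AutoCommit::load(&b).context("failed to parse stored Automerge document"))
        .transpose()
}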

self.documents.insert(document_id.to_string(), doc);
}

Ok(self.documents.get_mut(document_id).unwrap())

@Einliterflasche Einliterflasche Jul 25, 2025


for testing unwrap is ok, but when we merge this should be .expect("doc to exist because we just inserted it")


let doc = self.get_or_create_document(document_id)?;

for change in changes {


AutoCommit::apply_changes does this for us
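i.e. roughly (sketch; assumes changes is an iterator of automerge::Change):

let doc = self.get_or_create_document(document_id)?;
doc.apply_changes(changes).context("failed to apply changes to document")?;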


/// Manager for Automerge documents
pub struct DocumentManager {
documents: HashMap<String, AutoCommit>,


We probably want to use a more specific type as our document key, pub struct DocumentId(pub String) for example. This ensures we don't accidentally mistake a value that isn't a document id for one. Any reason we wouldn't use Uuid?
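Sketch of such a newtype (whether it wraps a Uuid or a String is still open; Uuid shown here as one option):

#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct DocumentId(pub Uuid);

pub struct DocumentManager {
    documents: HashMap<DocumentId, AutoCommit>,
}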

}

/// Get document heads
pub fn get_heads(&mut self, document_id: &str) -> Vec<ChangeHash> {


Should return Option<Vec<ChangeHash>>: None rather than an empty vector if there's no such document
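i.e. something like (sketch):

pub fn get_heads(&mut self, document_id: &str) -> Option<Vec<ChangeHash>> {
    self.documents.get_mut(document_id).map(|doc| doc.get_heads())
}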

&self,
peer_id: PeerId,
actor_id: ActorId,
document_id: &str,


Same here, we want a more specific Id type, possibly uuid

peer_id: PeerId,
actor_id: ActorId,
document_id: &str,
patch_data: &[u8],


parse the patch before passing it to this function -> this function should take a Change type (I think; I don't believe we're supposed to use Patch)

peer_id: PeerId,
document_id: &str,
since_sequence: Option<u64>,
) -> Result<Vec<(u64, Vec<u8>)>> {


Here too, parse the bytes before returning them from the function


binarybaron commented Jul 25, 2025

I haven't looked at this thoroughly yet, but in case this wasn't clear: the data storage flow should be like this:

Server <> Eigensync Client <> GUI

The Eigensync Client stores the entire document in a database. The GUI also stores all the states (but not the entire document) in its own database.

The Eigensync Client should function entirely without depending on the GUI (to avoid circular dependencies).

The GUI then polls states from the Eigensync Client and pushes new states into the document. The Eigensync Client takes care of syncing that with the server.


we should ignore this for now


@Einliterflasche Einliterflasche Jul 25, 2025


Most of this is not necessary. Have a look at our implementation of the cooperative redeem protocol (link). Basically, only keep the Request and Response structs.


@Einliterflasche Einliterflasche Jul 25, 2025


Probably you should delete everything except the request and response structs (and of those, only keep the GetChanges and SubmitChanges variants, plus Error for the Response). Then use the same pattern we use for cooperative redeem.
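Sketch of what would remain (the payload types here are placeholders, not taken from the diff):

#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum Request {
    GetChanges { since_sequence: Option<u64> },
    SubmitChanges { changes: Vec<Vec<u8>> },
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum Response {
    Changes { changes: Vec<Vec<u8>> },
    Error { reason: String },
}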

@binarybaron changed the title from Feat/eigensync to feat: eigensync Jul 25, 2025

Einliterflasche commented Aug 8, 2025

Still to-do:

  • Get basic scenario (Alice adds states, Alice syncs, Bob syncs, Bob adds state, Bob syncs, Alice syncs => assert that both have the same state) working with libp2p:
    • In the event loop, listen for events sent via the channel by adding a channel_request = channel.recv() case (not sure about the exact syntax) to the select! (see the event-loop sketch after the handle sketch below)
    • When we receive a sync request via the channel, initiate the sync via swarm.behaviour_mut().send_request(...)
    • When sending a channel event from the handle to the event loop, create a oneshot channel. Add the Sender to the channel request, and once the request has been handled in the event loop, send an event via the oneshot channel. This way we can just await the oneshot receiver in the handle and have the function return only once we have actually completed the request.
    • Probably update the protocol to just have a single enum variant for now, Sync, where we send the client's current changes to the server and the server adds those it doesn't already have to the database, sending back the ones that the client doesn't already have.
    • Depending on the channel event, we might send some data back from the event loop to the handle. For example, when we sync, the server might have new changes. If we just have a single Sync event and don't distinguish between upload/download, then we can send the new changes that we don't yet know about (which the server sends back) from the event loop to the handle. Then, in the handle, we can apply the changes to the document and hydrate an updated state, which we can return to the caller.

Rough sketch for the sync function in the handle:

enum ChannelRequest {
    Sync {
        current_changes: Vec<SerializedChange>,
        response_channel: oneshot::Sender<Result<Vec<SerializedChange>, String>>,
    },
}

struct Handle<T> {
    channel: UnboundedSender<ChannelRequest>,
    document: AutoCommit,
    _marker: PhantomData<T>,
}

impl<T> Handle<T> {
    pub async fn sync(&mut self, state: &T) -> Result<T> {
        // (reconcile `state` into self.document first; omitted in this sketch)
        let (sender, receiver) = oneshot::channel();
        self.channel
            .send(ChannelRequest::Sync {
                current_changes: self.document.changes(),
                response_channel: sender,
            })
            .map_err(|_| anyhow!("event loop is no longer running"))?;

        // the event loop answers via the oneshot once it has handled the request
        let new_changes = receiver
            .await
            .context("channel didn't respond")?
            .map_err(|reason| anyhow!("server returned an error: {reason}"))?;

        // assuming SerializedChange converts into automerge Changes here
        self.document
            .apply_changes(new_changes)
            .context("couldn't apply changes we got from server - are they corrupted?")?;

        hydrate(&self.document).context("couldn't hydrate specified type from document after updating - did we get corrupted?")
    }
}
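A matching sketch for the event-loop side (names like Request::Sync, channel_receiver, response_map and server_peer_id are illustrative assumptions, and the request-response behaviour may be nested differently in the real code):

// inside the event loop's select! (sketch only):
Some(channel_request) = channel_receiver.recv() => match channel_request {
    ChannelRequest::Sync { current_changes, response_channel } => {
        // forward the client's changes to the server via the request-response behaviour
        let request_id = swarm
            .behaviour_mut()
            .send_request(&server_peer_id, Request::Sync { changes: current_changes });
        // remember the oneshot sender so the response handler can answer the handle later
        response_map.insert(request_id, response_channel);
    }
},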
  • Also test a scenario where instead of adding to a vector/hashmap, we actually mutate something, for example a settings struct, e.g. struct Settings { monero_node: String, use_tor: bool }. Make sure this works as expected, too: when two clients change something before either syncs with the server, we should choose the latest change when there's a conflict.

  • Note: since we'll encrypt the changes end-to-end, avoid doing anything with the changes on the server (except storing them) - the server won't be able to read them in the final implementation.

After that, there's of course the encryption we should implement:

  • Encrypt the changes client-side using a secret key that all devices of a single person have access to (goal: derived from monero seed, for now you just hardcode it)
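A rough sketch of what the client-side encryption could look like (assuming the chacha20poly1305 crate, which the PR does not currently use; the hardcoded key stands in for the seed-derived one):

use chacha20poly1305::{aead::{Aead, AeadCore, KeyInit, OsRng}, ChaCha20Poly1305, Key};

fn encrypt_change(key_bytes: &[u8; 32], change: &[u8]) -> anyhow::Result<Vec<u8>> {
    let cipher = ChaCha20Poly1305::new(Key::from_slice(key_bytes));
    // fresh random nonce per change, prepended to the ciphertext
    let nonce = ChaCha20Poly1305::generate_nonce(&mut OsRng);
    let mut out = nonce.to_vec();
    out.extend(cipher.encrypt(&nonce, change).map_err(|e| anyhow::anyhow!("encryption failed: {e}"))?);
    Ok(out)
}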

dispatch(setEigensyncServer(newServer));
};

const isValidMultiaddr = (addr: string) => {

@binarybaron binarybaron Sep 14, 2025


Do not implement this validation logic here like this.

We have the isValidMultiAddressWithPeerId function in src-gui/src/utils/parseUtils.ts, which properly parses and checks the validity of multiaddresses.

}

let db_path = data_dir.join("changes");
let database_url = format!("sqlite:{}?mode=rwc", db_path.display());


Use SqlitePoolOptions instead of encoding the options into the connection string
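For example (sketch; assumes sqlx with the sqlite feature):

use sqlx::sqlite::{SqliteConnectOptions, SqlitePoolOptions};

let options = SqliteConnectOptions::new()
    .filename(&db_path)
    .create_if_missing(true); // replaces the "?mode=rwc" suffix

let pool = SqlitePoolOptions::new()
    .max_connections(5)
    .connect_with(options)
    .await?;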

pub punish_timelock: PunishTimelock,
pub timelock: Option<ExpiredTimelocks>,
pub monero_receive_pool: MoneroAddressPool,
pub monero_receive_pool: Option<MoneroAddressPool>,


Convert this back to MoneroAddressPool from Option<MoneroAddressPool>

We only changed this for testing purposes.


pub fn sync_with_server(&mut self, request: ChannelRequest, server_id: PeerId) {
let server_request = ServerRequest::UploadChangesToServer { encrypted_changes: request.encrypted_changes };
match &server_request {


This match isn't necessary, you just created the server_request.

/// Whether to route Monero wallet traffic through Tor
pub enable_monero_tor: bool,
/// Eigensync server Multiaddr
pub eigensync_server_multiaddr: String,


Change to Option<String>, which translates to string | null in TypeScript (auto-generated by typeshare)

tor: false,
enable_monero_tor: false,
tauri_handle: None,
eigensync_server_multiaddr: "/ip4/127.0.0.1/tcp/3333".to_string(),


Default to None

pub async fn run(&mut self, server_id: PeerId) -> anyhow::Result<()> {
loop {
tokio::select! {
event = self.swarm.select_next_some() => handle_event(event, server_id, &mut self.swarm, &mut self.response_map, self.connection_established.take()).await?,


Don't propagate the error here. Instead, match on the result, log the error if there is one, and then just continue the loop; we want the loop to keep running. This also enables much more succinct error handling in handle_event.
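i.e. roughly (sketch, reusing the handle_event call from the diff above):

event = self.swarm.select_next_some() => {
    if let Err(e) = handle_event(event, server_id, &mut self.swarm, &mut self.response_map, self.connection_established.take()).await {
        // log and keep the event loop alive instead of returning the error
        tracing::error!("Failed to handle swarm event: {e:#}");
    }
},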


/// Configures the Eigensync server multiaddr
pub fn with_eigensync_server(mut self, eigensync_server: impl Into<Option<String>>) -> Self {
self.eigensync_server_multiaddr = eigensync_server.into().unwrap().to_string();


Never call .unwrap unless we can be entirely sure the String will be a valid Multiaddress

response,
} => match response {
Response::NewChanges { changes } => {
let sender = match response_map.remove(&request_id) {


Start using the ? operator in handle_event. This whole match statement could be: .context(format!("No sender for request id {:?}", request_id))?

);

let multiaddr = Multiaddr::from_str(self.eigensync_server_multiaddr.as_ref()).context("Failed to parse Eigensync server multiaddr")?;
let server_peer_id = PeerId::from_str("12D3KooWQsAFHUm32ThqfQRJhtcc57qqkYckSu8JkMsbGKkwTS6p")?;


The PeerId has to be part of the multiaddr, not hardcoded
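Sketch of extracting it from the multiaddr parsed above (assumes a recent libp2p where Protocol::P2p carries a PeerId; older versions carry a Multihash instead):

use libp2p::multiaddr::Protocol;

let server_peer_id = multiaddr
    .iter()
    .find_map(|protocol| match protocol {
        Protocol::P2p(peer_id) => Some(peer_id),
        _ => None,
    })
    .context("Eigensync server multiaddr must contain a /p2p/<peer-id> component")?;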

let _ = sender.send(Ok(changes));
},
Response::Error { reason } => {
let sender = match response_map.remove(&request_id) {


Here, too

}
};

let _ = sender.send(Ok(changes));


Don't ignore the result


let mut db_adapter = EigensyncDatabaseAdapter::new(eigensync_handle.clone(), db.clone());

tracing::info!("opened db");


ambiguous log message, better to remove

channel,
request_id,
} => {
tracing::error!("Received request of id {:?}", request_id);


change this to something like "Received the request when we're the client"

Comment on lines 111 to 119
let sender = match response_map.remove(&request_id) {
Some(sender) => sender,
None => {
tracing::error!("No sender for request id {:?}", request_id);
return Ok(());
}
};

let _ = sender.send(Err(error.to_string()));


Apply the better error handling here too

#[derive(Debug, Clone, Reconcile, Hydrate, PartialEq, Default)]
struct EigensyncWire {
states: HashMap<StateKey, String>,
// encode (peer_id, addr) -> "peer_id|addr", unit value as bool true


these comments are not up to date anymore?

We are not serializing as a string anymore

peer_addresses: HashMap<PeerAddressesKey, bool>,
// swap_id -> peer_id
peers: HashMap<String, String>,
// encode (swap_id, address?) -> "swap_id|address_or_-"; store (Decimal, String) as (String, String)


these comments are not up to date anymore?

We are not serializing as a string anymore

let _ = sender.send(());
}
},
other => tracing::error!("Received event: {:?}", other),


Change this to debug

let timestamp = k.0.1;
let swap: Swap = serde_json::from_str(&v)?;
let state: State = swap.into();
// convert to utc date time from string like "2025-07-28 15:23:12.0 +00"


these comments are not up to date anymore?

we aren't using timestamps like this anymore

use eigensync::EigensyncHandle;
use libp2p::{Multiaddr, PeerId};
use serde::{de::DeserializeOwned, Deserialize, Serialize};
// serde kept via Cargo features; no direct derives used here


remove Claude / GPT comments

pub async fn run(&mut self) -> anyhow::Result<()> {
loop {
tokio::time::sleep(Duration::from_secs(1)).await;
tracing::info!("running eigensync sync");


nitpick: capitalize

let mut new_peer_addresses = HashMap::new();

let mut document_lock = self.eigensync_handle.write().await;
let document_state = document_lock.get_document_state().expect("Eigensync document should be present");


fail properly, do not unwrap


// Initial pull from server. If it fails, continue (we may be offline).
if let Err(e) = handle.sync_with_server().await {
tracing::error!("Initial eigensync pull failed: {:?}", e);


also say that we're continuing anyway


// Seed default only if the document is still empty after the pull.
if handle.document.get_changes(&[]).is_empty() {
let state = T::default();


This means that changing the default value of a type may break compatibility with the automerge history. That's not good. We will have to come up with a better solution to this.

}

#[derive(Clone, Debug, Serialize, Deserialize, PartialEq, Eq, Hash)]
pub struct SignedEncryptedSerializedChange(Vec<u8>);


Shorten this name to EncryptedChange.


@Einliterflasche Einliterflasche left a comment


Split the eigensync crate into the core protocol and a crate for the server/client each:

  • eigensync-protocol which contains the protocol / ServerRequest/Response types and other common types/functions when they're used by server + client
  • eigensync-client which contains the SyncLoop, ChannelRequest types and EigensyncHandle and everything that goes along with that. This crate uses eigensync-protocol as a dependency.
  • eigensync-server which also uses eigensync-protocol and contains mostly what's currently in the bin/server.rs file.
