
Commit 9c4442c

Update the unique index example
1 parent e23d513 commit 9c4442c

File tree

3 files changed: +99 −3 lines changed


src/current/v24.3/set-up-logical-data-replication.md

Lines changed: 33 additions & 1 deletion
@@ -60,7 +60,39 @@ LDR cannot guarantee that the [_dead letter queue_ (DLQ)]({% link {{ page.versio
 
 If the application modifies the same row in both clusters, LDR resolves the conflict using _last write wins_ (LWW) conflict resolution. [`UNIQUE` constraints]({% link {{ page.version.version }}/unique.md %}) are validated locally in each cluster; therefore, if a replicated write violates a `UNIQUE` constraint on the destination cluster (possibly because a conflicting write was already applied to the row), the replicated row will be sent to the DLQ.
 
-For example, consider a table with a unique `email` column. If an application attempts to insert (`gen_random_uuid()`, `[email protected]`) into both clusters simultaneously, the insert will succeed in both clusters, but the records will have different [primary keys]({% link {{ page.version.version }}/primary-key.md %}) and the same email address, which violates the `UNIQUE` constraint. When the rows are replicated, LDR will DLQ the row in the peer cluster.
+For example, consider a table with a unique `name` column where the following operations occur in this order on a source and destination cluster running LDR:
+
+On the **source cluster**:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+INSERT INTO city VALUES (1, 'nyc'); -- timestamp 1
+UPDATE city SET name = 'philly' WHERE id = 1; -- timestamp 2
+INSERT INTO city VALUES (100, 'nyc'); -- timestamp 3
+~~~
+
+LDR replicates the write to the **destination cluster**:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+INSERT INTO city VALUES (100, 'nyc'); -- timestamp 4
+~~~
+
+_Timestamp 5:_ The range containing primary key `1` on the destination cluster is unavailable for a few minutes due to a network partition.
+
+_Timestamp 6:_ On the destination cluster, LDR attempts to replicate the row `(1, nyc)`, but the write enters the retry queue because the range is unavailable. After retrying for 1 minute and observing the `UNIQUE` constraint violation, LDR adds `(1, nyc)` to the DLQ:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+INSERT INTO city VALUES (1, 'nyc'); -- timestamp 6
+~~~
+
+_Timestamp 7:_ LDR continues replicating writes:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+INSERT INTO city VALUES (1, 'philly'); -- timestamp 7
+~~~
 
 To prevent expected DLQ entries and allow LDR to be eventually consistent, we recommend:
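The LWW and DLQ behavior in the walkthrough above can be sketched as a toy simulation. This is a hypothetical Python model, not CockroachDB code: the `Cluster` class, the in-memory `UNIQUE(name)` check, and the timestamp bookkeeping are assumptions made for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    """Toy model of one LDR cluster: rows keyed by primary key,
    each value tagged with the timestamp of its last write."""
    rows: dict = field(default_factory=dict)   # pk -> (name, ts)
    dlq: list = field(default_factory=list)    # replicated rows that were DLQ'd

    def local_write(self, pk, name, ts):
        # Local writes are validated against the UNIQUE(name) constraint.
        assert all(n != name for p, (n, _) in self.rows.items() if p != pk)
        self.rows[pk] = (name, ts)

    def apply_replicated(self, pk, name, ts):
        # Last write wins: an incoming write older than local state is discarded.
        if pk in self.rows and self.rows[pk][1] >= ts:
            return
        # UNIQUE(name) is checked locally; a violating replicated row goes to the DLQ.
        if any(n == name for p, (n, _) in self.rows.items() if p != pk):
            self.dlq.append((pk, name, ts))
            return
        self.rows[pk] = (name, ts)

src, dst = Cluster(), Cluster()
src.local_write(1, "nyc", 1)        # timestamp 1
src.local_write(1, "philly", 2)     # timestamp 2
src.local_write(100, "nyc", 3)      # timestamp 3
dst.apply_replicated(100, "nyc", 4) # timestamp 4: (100, nyc) replicates first
dst.apply_replicated(1, "nyc", 6)   # timestamp 6: delayed write now violates UNIQUE -> DLQ
dst.apply_replicated(1, "philly", 7)# timestamp 7: later update still applies

print(dst.rows)  # {100: ('nyc', 4), 1: ('philly', 7)}
print(dst.dlq)   # [(1, 'nyc', 6)]
```

The clusters converge on the same rows, while the out-of-order `(1, nyc)` write lands in the DLQ rather than blocking replication.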

src/current/v25.1/set-up-logical-data-replication.md

Lines changed: 33 additions & 1 deletion
@@ -66,7 +66,39 @@ LDR cannot guarantee that the [_dead letter queue_ (DLQ)]({% link {{ page.versio
 
 If the application modifies the same row in both clusters, LDR resolves the conflict using _last write wins_ (LWW) conflict resolution. [`UNIQUE` constraints]({% link {{ page.version.version }}/unique.md %}) are validated locally in each cluster; therefore, if a replicated write violates a `UNIQUE` constraint on the destination cluster (possibly because a conflicting write was already applied to the row), the replicated row will be sent to the DLQ.
 
-For example, consider a table with a unique `email` column. If an application attempts to insert (`gen_random_uuid()`, `[email protected]`) into both clusters simultaneously, the insert will succeed in both clusters, but the records will have different [primary keys]({% link {{ page.version.version }}/primary-key.md %}) and the same email address, which violates the `UNIQUE` constraint. When the rows are replicated, LDR will DLQ the row in the peer cluster.
+For example, consider a table with a unique `name` column where the following operations occur in this order on a source and destination cluster running LDR:
+
+On the **source cluster**:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+INSERT INTO city VALUES (1, 'nyc'); -- timestamp 1
+UPDATE city SET name = 'philly' WHERE id = 1; -- timestamp 2
+INSERT INTO city VALUES (100, 'nyc'); -- timestamp 3
+~~~
+
+LDR replicates the write to the **destination cluster**:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+INSERT INTO city VALUES (100, 'nyc'); -- timestamp 4
+~~~
+
+_Timestamp 5:_ The range containing primary key `1` on the destination cluster is unavailable for a few minutes due to a network partition.
+
+_Timestamp 6:_ On the destination cluster, LDR attempts to replicate the row `(1, nyc)`, but the write enters the retry queue because the range is unavailable. After retrying for 1 minute and observing the `UNIQUE` constraint violation, LDR adds `(1, nyc)` to the DLQ:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+INSERT INTO city VALUES (1, 'nyc'); -- timestamp 6
+~~~
+
+_Timestamp 7:_ LDR continues replicating writes:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+INSERT INTO city VALUES (1, 'philly'); -- timestamp 7
+~~~
 
 To prevent expected DLQ entries and allow LDR to be eventually consistent, we recommend:

src/current/v25.2/set-up-logical-data-replication.md

Lines changed: 33 additions & 1 deletion
@@ -74,7 +74,39 @@ LDR cannot guarantee that the [_dead letter queue_ (DLQ)]({% link {{ page.versio
 
 If the application modifies the same row in both clusters, LDR resolves the conflict using _last write wins_ (LWW) conflict resolution. [`UNIQUE` constraints]({% link {{ page.version.version }}/unique.md %}) are validated locally in each cluster; therefore, if a replicated write violates a `UNIQUE` constraint on the destination cluster (possibly because a conflicting write was already applied to the row), the replicated row will be sent to the DLQ.
 
-For example, consider a table with a unique `email` column. If an application attempts to insert (`gen_random_uuid()`, `[email protected]`) into both clusters simultaneously, the insert will succeed in both clusters, but the records will have different [primary keys]({% link {{ page.version.version }}/primary-key.md %}) and the same email address, which violates the `UNIQUE` constraint. When the rows are replicated, LDR will DLQ the row in the peer cluster.
+For example, consider a table with a unique `name` column where the following operations occur in this order on a source and destination cluster running LDR:
+
+On the **source cluster**:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+INSERT INTO city VALUES (1, 'nyc'); -- timestamp 1
+UPDATE city SET name = 'philly' WHERE id = 1; -- timestamp 2
+INSERT INTO city VALUES (100, 'nyc'); -- timestamp 3
+~~~
+
+LDR replicates the write to the **destination cluster**:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+INSERT INTO city VALUES (100, 'nyc'); -- timestamp 4
+~~~
+
+_Timestamp 5:_ The range containing primary key `1` on the destination cluster is unavailable for a few minutes due to a network partition.
+
+_Timestamp 6:_ On the destination cluster, LDR attempts to replicate the row `(1, nyc)`, but the write enters the retry queue because the range is unavailable. After retrying for 1 minute and observing the `UNIQUE` constraint violation, LDR adds `(1, nyc)` to the DLQ:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+INSERT INTO city VALUES (1, 'nyc'); -- timestamp 6
+~~~
+
+_Timestamp 7:_ LDR continues replicating writes:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+INSERT INTO city VALUES (1, 'philly'); -- timestamp 7
+~~~
 
 To prevent expected DLQ entries and allow LDR to be eventually consistent, we recommend:
