From c556f9a3a2a2ae2e53cb7f25f9681535e2ed8e19 Mon Sep 17 00:00:00 2001 From: andreyaksenov Date: Wed, 3 Jul 2024 11:46:41 +0300 Subject: [PATCH 1/6] Update sample to allow using TCM --- .../instances.enabled/sharded_cluster_crud/config.yaml | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/config.yaml b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/config.yaml index 9135ca916e..4c505e1a29 100644 --- a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/config.yaml +++ b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/config.yaml @@ -34,10 +34,14 @@ groups: iproto: listen: - uri: '127.0.0.1:3302' + advertise: + client: '127.0.0.1:3302' storage-a-002: iproto: listen: - uri: '127.0.0.1:3303' + advertise: + client: '127.0.0.1:3303' storage-b: leader: storage-b-001 instances: @@ -45,10 +49,14 @@ groups: iproto: listen: - uri: '127.0.0.1:3304' + advertise: + client: '127.0.0.1:3304' storage-b-002: iproto: listen: - uri: '127.0.0.1:3305' + advertise: + client: '127.0.0.1:3305' routers: roles: [ roles.crud-router ] roles_cfg: @@ -70,3 +78,5 @@ groups: iproto: listen: - uri: '127.0.0.1:3301' + advertise: + client: '127.0.0.1:3301' From bf918add2ec83bf2ab023edb044ab11467260e97 Mon Sep 17 00:00:00 2001 From: andreyaksenov Date: Wed, 3 Jul 2024 12:38:38 +0300 Subject: [PATCH 2/6] Update Get Started --- doc/how-to/vshard_quick.rst | 255 +++++++++++++++++------------------- 1 file changed, 118 insertions(+), 137 deletions(-) diff --git a/doc/how-to/vshard_quick.rst b/doc/how-to/vshard_quick.rst index 4b711f51a6..c062b6edbc 100644 --- a/doc/how-to/vshard_quick.rst +++ b/doc/how-to/vshard_quick.rst @@ -3,10 +3,13 @@ Creating a sharded cluster ========================== -**Example on GitHub**: `sharded_cluster `_ +**Example on GitHub**: `sharded_cluster_crud `_ In this tutorial, you get a sharded cluster up and running on your local machine and learn how to manage the cluster using the tt utility. -To enable sharding in the cluster, the :ref:`vshard ` module is used. +In this tutorial, the following external modules are used: + +- :ref:`vshard ` enables sharding in the cluster. +- `crud `__ allows you to perform CRUD operations in the sharded cluster. The cluster created in this tutorial includes 5 instances: one router and 4 storages, which constitute two replica sets. @@ -43,15 +46,15 @@ In this tutorial, the application layout is prepared manually: 1. Create a tt environment in the current directory by executing the :ref:`tt init ` command. -2. Inside the empty ``instances.enabled`` directory of the created tt environment, create the ``sharded_cluster`` directory. +2. Inside the empty ``instances.enabled`` directory of the created tt environment, create the ``sharded_cluster_crud`` directory. -3. Inside ``instances.enabled/sharded_cluster``, create the following files: +3. Inside ``instances.enabled/sharded_cluster_crud``, create the following files: - ``instances.yml`` specifies instances to run in the current environment. - ``config.yaml`` specifies the cluster's :ref:`configuration `. - ``storage.lua`` contains code specific for :ref:`storages `. - ``router.lua`` contains code specific for a :ref:`router `. - - ``sharded_cluster-scm-1.rockspec`` specifies external dependencies required by the application. + - ``sharded_cluster_crud-scm-1.rockspec`` specifies external dependencies required by the application. 
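
   Once created, the application directory might look like this (a sketch based only on the file names listed above; the exact layout depends on your tt environment):

   .. code-block:: console

      instances.enabled/sharded_cluster_crud/
      ├── config.yaml
      ├── instances.yml
      ├── router.lua
      ├── sharded_cluster_crud-scm-1.rockspec
      └── storage.lua
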
The next :ref:`vshard-quick-start-developing-app` section shows how to configure the cluster and write code for routing read and write requests to different storages. @@ -68,7 +71,7 @@ Configuring instances to run Open the ``instances.yml`` file and add the following content: -.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/instances.yaml +.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/instances.yaml :language: yaml :dedent: @@ -89,10 +92,10 @@ Step 1: Configuring credentials Add the :ref:`credentials ` configuration section: -.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml +.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/config.yaml :language: yaml :start-at: credentials: - :end-at: roles: [sharding] + :end-at: roles: [ sharding ] :dedent: In this section, two users with the specified passwords are created: @@ -116,9 +119,9 @@ Step 2: Specifying advertise URIs Add the :ref:`iproto.advertise ` section: -.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml +.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/config.yaml :language: yaml - :start-after: roles: [sharding] + :start-after: roles: [ sharding ] :end-at: login: storage :dedent: @@ -128,6 +131,9 @@ In this section, the following options are configured: In particular, this option informs other replica set members that the ``replicator`` user should be used to connect to the current instance. * ``iproto.advertise.sharding`` specifies how to advertise the current instance to a router and rebalancer. +The cluster topology defined in the :ref:`following section ` also specifies the ``iproto.advertise.client`` option for each instance. +This option accepts a URI used to advertise the instance to clients. + .. _vshard-quick-start-configuring-cluster-bucket-count: @@ -136,7 +142,7 @@ Step 3: Configuring bucket count Specify the total number of :ref:`buckets ` in a sharded cluster using the :ref:`sharding.bucket_count ` option: -.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml +.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/config.yaml :language: yaml :start-after: login: storage :end-at: bucket_count @@ -172,14 +178,15 @@ Here is a schematic view of the cluster's topology: 1. To configure storages, add the following code inside the ``groups`` section: - .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml + .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/config.yaml :language: yaml :start-at: storages: - :end-before: routers: + :end-at: client: '127.0.0.1:3305' :dedent: The main group-level options here are: + * ``roles``: This option enables the ``roles.crud-storage`` :ref:`role ` provided by the CRUD module for all storage instances. * ``app``: The ``app.module`` option specifies that code specific to storages should be loaded from the ``storage`` module. This is explained below in the :ref:`vshard-quick-start-storage-code` section. * ``sharding``: The :ref:`sharding.roles ` option specifies that all instances inside this group act as storages. A rebalancer is selected automatically from two master instances. @@ -189,17 +196,19 @@ Here is a schematic view of the cluster's topology: 2. 
To configure a router, add the following code inside the ``groups`` section: - .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml + .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/config.yaml :language: yaml :start-at: routers: - :end-at: 127.0.0.1:3301 + :end-at: client: '127.0.0.1:3301' :dedent: The main group-level options here are: + * ``roles``: This option enables the ``roles.crud-router`` :ref:`role ` provided by the CRUD module for a router instance. + * ``roles_cfg``: This section enables and configures statistics on called operations for a router with the enabled ``roles.crud-router`` role. * ``app``: The ``app.module`` option specifies that code specific to a router should be loaded from the ``router`` module. This is explained below in the :ref:`vshard-quick-start-router-code` section. * ``sharding``: The :ref:`sharding.roles ` option specifies that an instance inside this group acts as a router. - * ``replicasets``: This section configures one replica set with one router instance. + * ``replicasets``: This section configures a replica set with one router instance. Resulting configuration @@ -207,7 +216,7 @@ Resulting configuration The resulting ``config.yaml`` file should look as follows: -.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml +.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/config.yaml :language: yaml :dedent: @@ -217,88 +226,30 @@ The resulting ``config.yaml`` file should look as follows: Adding storage code ~~~~~~~~~~~~~~~~~~~ -1. Open the ``storage.lua`` file and define a space and indexes inside :ref:`box.once() `: - - .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/storage.lua - :language: lua - :start-at: box.once - :end-before: function insert_band - :dedent: - - * The :ref:`box.schema.create_space() ` function is used to create a space. - Note that the created ``bands`` spaces includes the ``bucket_id`` field. - This field represents a sharding key used to partition a dataset across different storage instances. - * :ref:`space_object:create_index() ` is used to create two indexes based on the ``id`` and ``bucket_id`` fields. - -2. Define the ``insert_band`` function that inserts a tuple into the created space: - - .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/storage.lua - :language: lua - :start-at: function insert_band - :end-before: function get_band - :dedent: - -3. Define the ``get_band`` function that returns data without the ``bucket_id`` value: +Open the ``storage.lua`` file and define a space and indexes inside :ref:`box.watch() ` as follows: - .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/storage.lua - :language: lua - :start-at: function get_band - :dedent: - -The resulting ``storage.lua`` file should look as follows: - -.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/storage.lua +.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/storage.lua :language: lua :dedent: +* The :ref:`box.schema.create_space() ` function creates a space. + Note that the created ``bands`` space includes the ``bucket_id`` field. + This field represents a sharding key used to partition a dataset across different storage instances. 
+* :ref:`space_object:create_index() ` creates two indexes based on the ``id`` and ``bucket_id`` fields. + + .. _vshard-quick-start-router-code: Adding router code ~~~~~~~~~~~~~~~~~~ -1. Open the ``router.lua`` file and load the ``vshard`` module as follows: - - .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/router.lua - :language: lua - :start-at: local vshard - :end-at: local vshard - :dedent: - -2. Define the ``put`` function that specifies how the router selects the storage to write data: - - .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/router.lua - :language: lua - :start-at: function put - :end-before: function get - :dedent: - - The following ``vshard`` router functions are used: - - * :ref:`vshard.router.bucket_id_mpcrc32() `: Calculates a bucket ID value using a hash function. - * :ref:`vshard.router.callrw() `: Inserts a tuple to a storage identified the generated bucket ID. - -3. Create the ``get`` function for getting data: - - .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/router.lua - :language: lua - :start-at: function get - :end-before: function insert_data - :dedent: - - Inside this function, :ref:`vshard.router.callro() ` is called to get data from a storage identified the generated bucket ID. - -4. Finally, create the ``insert_data()`` function that inserts sample data into the created space: - - .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/router.lua - :language: lua - :start-at: function insert_data - :dedent: - -The resulting ``router.lua`` file should look as follows: +Open the ``router.lua`` file and load the ``vshard`` module as follows: -.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/router.lua +.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/router.lua :language: lua + :start-at: local vshard + :end-at: local vshard :dedent: @@ -308,13 +259,13 @@ The resulting ``router.lua`` file should look as follows: Configuring build settings ~~~~~~~~~~~~~~~~~~~~~~~~~~ -Open the ``sharded_cluster-scm-1.rockspec`` file and add the following content: +Open the ``sharded_cluster_crud-scm-1.rockspec`` file and add the following content: -.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/sharded_cluster-scm-1.rockspec +.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/sharded_cluster_crud-scm-1.rockspec :language: none :dedent: -The ``dependencies`` section includes the specified version of the ``vshard`` module. +The ``dependencies`` section includes the specified versions of the ``vshard`` and ``crud`` modules. To install dependencies, you need to :ref:`build the application `. @@ -328,12 +279,12 @@ Then, execute the ``tt build`` command: .. code-block:: console - $ tt build sharded_cluster + $ tt build sharded_cluster_crud • Running rocks make No existing manifest. Attempting to rebuild... • Application was successfully built -This installs the ``vshard`` dependency defined in the :ref:`*.rockspec ` file to the ``.rocks`` directory. +This installs the ``vshard`` and ``crud`` modules defined in the :ref:`*.rockspec ` file to the ``.rocks`` directory. @@ -351,12 +302,12 @@ To start all instances in the cluster, execute the ``tt start`` command: .. code-block:: console - $ tt start sharded_cluster - • Starting an instance [sharded_cluster:storage-a-001]... 
- • Starting an instance [sharded_cluster:storage-a-002]... - • Starting an instance [sharded_cluster:storage-b-001]... - • Starting an instance [sharded_cluster:storage-b-002]... - • Starting an instance [sharded_cluster:router-a-001]... + $ tt start sharded_cluster_crud + • Starting an instance [sharded_cluster_crud:storage-a-001]... + • Starting an instance [sharded_cluster_crud:storage-a-002]... + • Starting an instance [sharded_cluster_crud:storage-b-001]... + • Starting an instance [sharded_cluster_crud:storage-b-002]... + • Starting an instance [sharded_cluster_crud:router-a-001]... .. _vshard-quick-start-working-bootstrap: @@ -370,15 +321,15 @@ After starting instances, you need to bootstrap the cluster as follows: .. code-block:: console - $ tt connect sharded_cluster:router-a-001 + $ tt connect sharded_cluster_crud:router-a-001 • Connecting to the instance... - • Connected to sharded_cluster:router-a-001 + • Connected to sharded_cluster_crud:router-a-001 2. Call :ref:`vshard.router.bootstrap() ` to perform the initial cluster bootstrap: - .. code-block:: console + .. code-block:: tarantoolsession - sharded_cluster:router-a-001> vshard.router.bootstrap() + sharded_cluster_crud:router-a-001> vshard.router.bootstrap() --- - true ... @@ -386,14 +337,14 @@ After starting instances, you need to bootstrap the cluster as follows: .. _vshard-quick-start-working-status: -Checking status -~~~~~~~~~~~~~~~ +Checking the cluster's status +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To check the cluster's status, execute :ref:`vshard.router.info() ` on the router: -.. code-block:: console +.. code-block:: tarantoolsession - sharded_cluster:router-a-001> vshard.router.info() + sharded_cluster_crud::router-a-001> vshard.router.info() --- - replicasets: storage-b: @@ -447,33 +398,63 @@ The output includes the following sections: Writing and selecting data ~~~~~~~~~~~~~~~~~~~~~~~~~~ -1. To insert sample data, call the :ref:`insert_data() ` function on the router: +1. To insert sample data, call ``crud.insert_many()`` on the router: - .. code-block:: console + .. code-block:: lua - sharded_cluster:router-a-001> insert_data() - --- - ... + crud.insert_many('bands', { + { 1, box.NULL, 'Roxette', 1986 }, + { 2, box.NULL, 'Scorpions', 1965 }, + { 3, box.NULL, 'Ace of Base', 1987 }, + { 4, box.NULL, 'The Beatles', 1960 }, + { 5, box.NULL, 'Pink Floyd', 1965 }, + { 6, box.NULL, 'The Rolling Stones', 1962 }, + { 7, box.NULL, 'The Doors', 1965 }, + { 8, box.NULL, 'Nirvana', 1987 }, + { 9, box.NULL, 'Led Zeppelin', 1968 }, + { 10, box.NULL, 'Queen', 1970 } + }) Calling this function :ref:`distributes data ` evenly across the cluster's nodes. -2. To get a tuple by the specified ID, call the ``get()`` function: +2. To get a tuple by the specified ID, call the ``crud.get()`` function: - .. code-block:: console + .. code-block:: tarantoolsession - sharded_cluster:router-a-001> get(4) + sharded_cluster_crud:router-a-001> crud.get('bands', 4) --- - - [4, 'The Beatles', 1960] + - rows: + - [4, 161, 'The Beatles', 1960] + metadata: [{'name': 'id', 'type': 'unsigned'}, {'name': 'bucket_id', 'type': 'unsigned'}, + {'name': 'band_name', 'type': 'string'}, {'name': 'year', 'type': 'unsigned'}] + - null ... -3. To insert a new tuple, call the ``put()`` function: +3. To insert a new tuple, call ``crud.insert()``: - .. code-block:: console + .. 
code-block:: tarantoolsession - sharded_cluster:router-a-001> put(11, 'The Who', 1962) + sharded_cluster_crud:router-a-001> crud.insert('bands', {11, box.NULL, 'The Who', 1962}) --- + - rows: + - [11, 652, 'The Who', 1962] + metadata: [{'name': 'id', 'type': 'unsigned'}, {'name': 'bucket_id', 'type': 'unsigned'}, + {'name': 'band_name', 'type': 'string'}, {'name': 'year', 'type': 'unsigned'}] + - null ... +4. To get statistics on called operations, pass the space name to ``crud.stats()``: + + .. code-block:: tarantoolsession + + sharded_cluster_crud:router-a-001> crud.stats('bands') + --- + - get: + ok: + latency: 0.00069199999961711 + count: 1 + time: 0.00069199999961711 + latency_average: 0.00069199999961711 @@ -488,22 +469,22 @@ To check how data is distributed across the cluster's nodes, follow the steps be .. code-block:: console - $ tt connect sharded_cluster:storage-a-001 + $ tt connect sharded_cluster_crud:storage-a-001 • Connecting to the instance... - • Connected to sharded_cluster:storage-a-001 + • Connected to sharded_cluster_crud:storage-a-001 Then, select all tuples in the ``bands`` space: - .. code-block:: console + .. code-block:: tarantoolsession - sharded_cluster:storage-a-001> box.space.bands:select() + sharded_cluster_crud:storage-a-001> box.space.bands:select() --- - - - [3, 11, 'Ace of Base', 1987] - - [4, 42, 'The Beatles', 1960] - - [6, 55, 'The Rolling Stones', 1962] - - [9, 299, 'Led Zeppelin', 1968] - - [10, 167, 'Queen', 1970] - - [11, 70, 'The Who', 1962] + - - [1, 477, 'Roxette', 1986] + - [2, 401, 'Scorpions', 1965] + - [4, 161, 'The Beatles', 1960] + - [5, 172, 'Pink Floyd', 1965] + - [6, 64, 'The Rolling Stones', 1962] + - [8, 185, 'Nirvana', 1987] ... @@ -511,19 +492,19 @@ To check how data is distributed across the cluster's nodes, follow the steps be .. code-block:: console - $ tt connect sharded_cluster:storage-b-001 + $ tt connect sharded_cluster_crud:storage-b-001 • Connecting to the instance... - • Connected to sharded_cluster:storage-b-001 + • Connected to sharded_cluster_crud:storage-b-001 Select all tuples in the ``bands`` space to make sure it contains another subset of data: - .. code-block:: console + .. code-block:: tarantoolsession - sharded_cluster:storage-b-001> box.space.bands:select() + sharded_cluster_crud:storage-b-001> box.space.bands:select() --- - - - [1, 614, 'Roxette', 1986] - - [2, 986, 'Scorpions', 1965] - - [5, 755, 'Pink Floyd', 1965] - - [7, 998, 'The Doors', 1965] - - [8, 762, 'Nirvana', 1987] + - - [3, 804, 'Ace of Base', 1987] + - [7, 693, 'The Doors', 1965] + - [9, 644, 'Led Zeppelin', 1968] + - [10, 569, 'Queen', 1970] + - [11, 652, 'The Who', 1962] ... From a5fefa307b39971402c82e3046d9beaa567da0a7 Mon Sep 17 00:00:00 2001 From: andreyaksenov Date: Wed, 3 Jul 2024 13:02:29 +0300 Subject: [PATCH 3/6] Update READMEs --- .../sharded_cluster/README.md | 68 +++++++++++++++++-- .../sharded_cluster_crud/README.md | 54 +-------------- 2 files changed, 63 insertions(+), 59 deletions(-) diff --git a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/README.md b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/README.md index bde60f127f..bc0b5dba63 100644 --- a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/README.md +++ b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/README.md @@ -1,16 +1,70 @@ # Sharded cluster -A sample application created in the [Creating a sharded cluster](https://www.tarantool.io/en/doc/latest/how-to/vshard_quick/) tutorial. 
+A sample application demonstrating how to configure a [sharded](https://www.tarantool.io/en/doc/latest/concepts/sharding/) cluster. ## Running -To learn how to run the cluster, see the [Working with the cluster](https://www.tarantool.io/en/doc/latest/how-to/vshard_quick/#working-with-the-cluster) section. +To run the cluster, go to the `sharding` directory in the terminal and perform the following steps: +1. Install dependencies defined in the `*.rockspec` file: -## Packaging + ```console + $ tt build sharded_cluster + ``` + +2. Run the cluster: -To package an application into a `.tgz` archive, use the `tt pack` command: + ```console + $ tt start sharded_cluster + ``` -```console -$ tt pack tgz --app-list sharded_cluster -``` +3. Connect to the router: + + ```console + $ tt connect sharded_cluster:router-a-001 + ``` + +4. Call `vshard.router.bootstrap()` to perform the initial cluster bootstrap: + + ```console + sharded_cluster:router-a-001> vshard.router.bootstrap() + --- + - true + ... + ``` + +5. Insert test data: + + ```console + sharded_cluster:router-a-001> insert_data() + --- + ... + ``` + +6. Connect to storages in different replica sets to see how data is distributed across nodes: + + a. `storage-a-001`: + + ```console + sharded_cluster:storage-a-001> box.space.bands:select() + --- + - - [1, 614, 'Roxette', 1986] + - [2, 986, 'Scorpions', 1965] + - [5, 755, 'Pink Floyd', 1965] + - [7, 998, 'The Doors', 1965] + - [8, 762, 'Nirvana', 1987] + ... + ``` + + b. `storage-b-001`: + + ```console + sharded_cluster:storage-b-001> box.space.bands:select() + --- + - - [3, 11, 'Ace of Base', 1987] + - [4, 42, 'The Beatles', 1960] + - [6, 55, 'The Rolling Stones', 1962] + - [9, 299, 'Led Zeppelin', 1968] + - [10, 167, 'Queen', 1970] + ... + ``` diff --git a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/README.md b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/README.md index 309e7348c0..29ba109ecc 100644 --- a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/README.md +++ b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/README.md @@ -1,57 +1,7 @@ # Sharded cluster with CRUD -A sample application demonstrating how to set up a sharded cluster with the [crud](https://github.com/tarantool/crud) module. +A sample application created in the [Creating a sharded cluster](https://www.tarantool.io/en/doc/latest/how-to/vshard_quick/) tutorial. ## Running -Before running the cluster, execute the `tt build` command in the [sharding](../../../sharding) directory: - -```shell -$ tt build sharded_cluster_crud -``` - -Then, start all instances in the cluster using `tt start`: - -```shell -$ tt start sharded_cluster_crud -``` - -## Bootstrapping a cluster - -After starting instances, you need to bootstrap the cluster. -Connect to the router instance using `tt connect`: - -```shell -$ tt connect sharded_cluster_crud:router-a-001 - • Connecting to the instance... - • Connected to sharded_cluster_crud:router-a-001 -``` - -Call `vshard.router.bootstrap()` to perform the initial cluster bootstrap: - -```shell -sharded_cluster_crud:router-a-001> vshard.router.bootstrap() ---- -- true -... 
-``` - - -## Inserting data - -To insert sample data, call `crud.insert_many()` on the router: - -```lua -crud.insert_many('bands', { - { 1, box.NULL, 'Roxette', 1986 }, - { 2, box.NULL, 'Scorpions', 1965 }, - { 3, box.NULL, 'Ace of Base', 1987 }, - { 4, box.NULL, 'The Beatles', 1960 }, - { 5, box.NULL, 'Pink Floyd', 1965 }, - { 6, box.NULL, 'The Rolling Stones', 1962 }, - { 7, box.NULL, 'The Doors', 1965 }, - { 8, box.NULL, 'Nirvana', 1987 }, - { 9, box.NULL, 'Led Zeppelin', 1968 }, - { 10, box.NULL, 'Queen', 1970 } -}) -``` +To learn how to run the cluster, see the [Working with the cluster](https://www.tarantool.io/en/doc/latest/how-to/vshard_quick/#working-with-the-cluster) section. From f42fe0216be4d45d08433e2852e099de0ef0c137 Mon Sep 17 00:00:00 2001 From: andreyaksenov Date: Wed, 3 Jul 2024 13:14:00 +0300 Subject: [PATCH 4/6] Update related topics --- doc/book/admin/instance_config.rst | 25 ++++---- doc/book/admin/start_stop_instance.rst | 86 +++++++++++++------------- 2 files changed, 54 insertions(+), 57 deletions(-) diff --git a/doc/book/admin/instance_config.rst b/doc/book/admin/instance_config.rst index 64cb813802..8a3e4b7f6b 100644 --- a/doc/book/admin/instance_config.rst +++ b/doc/book/admin/instance_config.rst @@ -17,7 +17,7 @@ The main steps of creating and preparing the application for deployment are: 3. :ref:`admin-instance_config-package-app`. -In this section, a `sharded_cluster `_ application is used as an example. +In this section, a `sharded_cluster_crud `_ application is used as an example. This cluster includes 5 instances: one router and 4 storages, which constitute two replica sets. .. image:: /book/admin/admin_instances_dev.png @@ -82,27 +82,27 @@ In this example, the application's layout is prepared manually and looks as foll ├── distfiles ├── include ├── instances.enabled - │ └── sharded_cluster + │ └── sharded_cluster_crud │ ├── config.yaml │ ├── instances.yaml │ ├── router.lua - │ ├── sharded_cluster-scm-1.rockspec + │ ├── sharded_cluster_crud-scm-1.rockspec │ └── storage.lua ├── modules ├── templates └── tt.yaml -The ``sharded_cluster`` directory contains the following files: +The ``sharded_cluster_crud`` directory contains the following files: - ``config.yaml``: contains the :ref:`configuration ` of the cluster. This file might include the entire cluster topology or provide connection settings to a centralized configuration storage. - ``instances.yml``: specifies instances to run in the current environment. For example, on the developer’s machine, this file might include all the instances defined in the cluster configuration. In the production environment, this file includes :ref:`instances to run on the specific machine `. - ``router.lua``: includes code specific for a :ref:`router `. -- ``sharded_cluster-scm-1.rockspec``: specifies the required external dependencies (for example, ``vshard``). +- ``sharded_cluster_crud-scm-1.rockspec``: specifies the required external dependencies (for example, ``vshard`` and ``crud``). - ``storage.lua``: includes code specific for :ref:`storages `. You can find the full example here: -`sharded_cluster `_. +`sharded_cluster_crud `_. @@ -116,7 +116,7 @@ Packaging the application To package the ready application, use the :ref:`tt pack ` command. This command can create an installable DEB/RPM package or generate ``.tgz`` archive. 
-The structure below reflects the content of the packed ``.tgz`` archive for the `sharded_cluster `_ application: +The structure below reflects the content of the packed ``.tgz`` archive for the `sharded_cluster_crud `_ application: .. code-block:: console @@ -125,18 +125,15 @@ The structure below reflects the content of the packed ``.tgz`` archive for the ├── bin │ ├── tarantool │ └── tt - ├── include ├── instances.enabled - │ └── sharded_cluster -> ../sharded_cluster - ├── modules - ├── sharded_cluster + │ └── sharded_cluster_crud -> ../sharded_cluster_crud + ├── sharded_cluster_crud │ ├── .rocks │ │ └── share │ │ └── ... │ ├── config.yaml │ ├── instances.yaml │ ├── router.lua - │ ├── sharded_cluster-scm-1.rockspec │ └── storage.lua └── tt.yaml @@ -147,7 +144,7 @@ The application's layout looks similar to the one defined when :ref:`developing - ``instances.enabled``: contains a symlink to the packed ``sharded_cluster`` application. -- ``sharded_cluster``: a packed application. In addition to files created during the application development, includes the ``.rocks`` directory containing application dependencies (for example, ``vshard``). +- ``sharded_cluster_crud``: a packed application. In addition to files created during the application development, includes the ``.rocks`` directory containing application dependencies (for example, ``vshard`` and ``crud``). - ``tt.yaml``: a ``tt`` configuration file. @@ -178,7 +175,7 @@ define instances to run on each machine by changing the content of the ``instanc ``instances.yaml``: - .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/instances.yaml + .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/instances.yaml :language: yaml :dedent: diff --git a/doc/book/admin/start_stop_instance.rst b/doc/book/admin/start_stop_instance.rst index 9186fc1515..e91966b623 100644 --- a/doc/book/admin/start_stop_instance.rst +++ b/doc/book/admin/start_stop_instance.rst @@ -17,7 +17,7 @@ To get more context on how the application's environment might look, refer to :r .. NOTE:: - In this section, a `sharded_cluster `_ application is used to demonstrate how to start, stop, and manage instances in a cluster. + In this section, a `sharded_cluster_crud `_ application is used to demonstrate how to start, stop, and manage instances in a cluster. .. _configuration_run_instance: @@ -30,20 +30,20 @@ To start Tarantool instances use the :ref:`tt start ` command: .. code-block:: console - $ tt start sharded_cluster - • Starting an instance [sharded_cluster:storage-a-001]... - • Starting an instance [sharded_cluster:storage-a-002]... - • Starting an instance [sharded_cluster:storage-b-001]... - • Starting an instance [sharded_cluster:storage-b-002]... - • Starting an instance [sharded_cluster:router-a-001]... + $ tt start sharded_cluster_crud + • Starting an instance [sharded_cluster_crud:storage-a-001]... + • Starting an instance [sharded_cluster_crud:storage-a-002]... + • Starting an instance [sharded_cluster_crud:storage-b-001]... + • Starting an instance [sharded_cluster_crud:storage-b-002]... + • Starting an instance [sharded_cluster_crud:router-a-001]... After the cluster has started and worked for some time, you can find its artifacts in the directories specified in the ``tt`` configuration. These are the default locations in the local :ref:`launch mode `: -* ``sharded_cluster/var/log//`` -- instance :ref:`logs `. -* ``sharded_cluster/var/lib//`` -- :ref:`snapshots and write-ahead logs `. 
-* ``sharded_cluster/var/run//`` -- control sockets and PID files. +* ``sharded_cluster_crud/var/log//`` -- instance :ref:`logs `. +* ``sharded_cluster_crud/var/lib//`` -- :ref:`snapshots and write-ahead logs `. +* ``sharded_cluster_crud/var/run//`` -- control sockets and PID files. In the system launch mode, artifacts are created in these locations: @@ -72,21 +72,21 @@ To check the status of instances, execute :ref:`tt status `: .. code-block:: console - $ tt status sharded_cluster + $ tt status sharded_cluster_crud INSTANCE STATUS PID MODE - sharded_cluster:storage-a-001 RUNNING 2023 RW - sharded_cluster:storage-a-002 RUNNING 2026 RO - sharded_cluster:storage-b-001 RUNNING 2020 RW - sharded_cluster:storage-b-002 RUNNING 2021 RO - sharded_cluster:router-a-001 RUNNING 2022 RW + sharded_cluster_crud:storage-a-001 RUNNING 2023 RW + sharded_cluster_crud:storage-a-002 RUNNING 2026 RO + sharded_cluster_crud:storage-b-001 RUNNING 2020 RW + sharded_cluster_crud:storage-b-002 RUNNING 2021 RO + sharded_cluster_crud:router-a-001 RUNNING 2022 RW To check the status of a specific instance, you need to specify its name: .. code-block:: console - $ tt status sharded_cluster:storage-a-001 + $ tt status sharded_cluster_crud:storage-a-001 INSTANCE STATUS PID MODE - sharded_cluster:storage-a-001 RUNNING 2023 RW + sharded_cluster_crud:storage-a-001 RUNNING 2023 RW .. _admin-start_stop_instance_connect: @@ -98,18 +98,18 @@ To connect to the instance, use the :ref:`tt connect ` command: .. code-block:: console - $ tt connect sharded_cluster:storage-a-001 + $ tt connect sharded_cluster_crud:storage-a-001 • Connecting to the instance... - • Connected to sharded_cluster:storage-a-001 + • Connected to sharded_cluster_crud:storage-a-001 - sharded_cluster:storage-a-001> + sharded_cluster_crud:storage-a-001> In the instance's console, you can execute commands provided by the :ref:`box ` module. For example, :ref:`box.info ` can be used to get various information about a running instance: -.. code-block:: console +.. code-block:: tarantoolsession - sharded_cluster:storage-a-001> box.info.ro + sharded_cluster_crud:storage-a-001> box.info.ro --- - false ... @@ -125,15 +125,15 @@ To restart an instance, use :ref:`tt restart `: .. code-block:: console - $ tt restart sharded_cluster:storage-a-002 + $ tt restart sharded_cluster_crud:storage-a-002 After executing ``tt restart``, you need to confirm this operation: .. code-block:: console - Confirm restart of 'sharded_cluster:storage-a-002' [y/n]: y - • The Instance sharded_cluster:storage-a-002 (PID = 2026) has been terminated. - • Starting an instance [sharded_cluster:storage-a-002]... + Confirm restart of 'sharded_cluster_crud:storage-a-002' [y/n]: y + • The Instance sharded_cluster_crud:storage-a-002 (PID = 2026) has been terminated. + • Starting an instance [sharded_cluster_crud:storage-a-002]... .. _admin-start_stop_instance_stop: @@ -145,18 +145,18 @@ To stop the specific instance, use :ref:`tt stop ` as follows: .. code-block:: console - $ tt stop sharded_cluster:storage-a-002 + $ tt stop sharded_cluster_crud:storage-a-002 You can also stop all the instances at once as follows: .. code-block:: console - $ tt stop sharded_cluster - • The Instance sharded_cluster:storage-b-001 (PID = 2020) has been terminated. - • The Instance sharded_cluster:storage-b-002 (PID = 2021) has been terminated. - • The Instance sharded_cluster:router-a-001 (PID = 2022) has been terminated. - • The Instance sharded_cluster:storage-a-001 (PID = 2023) has been terminated. 
- • can't "stat" the PID file. Error: "stat /home/testuser/myapp/instances.enabled/sharded_cluster/var/run/storage-a-002/tt.pid: no such file or directory" + $ tt stop sharded_cluster_crud + • The Instance sharded_cluster_crud:storage-b-001 (PID = 2020) has been terminated. + • The Instance sharded_cluster_crud:storage-b-002 (PID = 2021) has been terminated. + • The Instance sharded_cluster_crud:router-a-001 (PID = 2022) has been terminated. + • The Instance sharded_cluster_crud:storage-a-001 (PID = 2023) has been terminated. + • can't "stat" the PID file. Error: "stat /home/testuser/myapp/instances.enabled/sharded_cluster_crud/var/run/storage-a-002/tt.pid: no such file or directory" .. note:: @@ -172,12 +172,12 @@ The :ref:`tt clean ` command removes instance artifacts (such as logs .. code-block:: console - $ tt clean sharded_cluster + $ tt clean sharded_cluster_crud • List of files to delete: - • /home/testuser/myapp/instances.enabled/sharded_cluster/var/log/storage-a-001/tt.log - • /home/testuser/myapp/instances.enabled/sharded_cluster/var/lib/storage-a-001/00000000000000001062.snap - • /home/testuser/myapp/instances.enabled/sharded_cluster/var/lib/storage-a-001/00000000000000001062.xlog + • /home/testuser/myapp/instances.enabled/sharded_cluster_crud/var/log/storage-a-001/tt.log + • /home/testuser/myapp/instances.enabled/sharded_cluster_crud/var/lib/storage-a-001/00000000000000001062.snap + • /home/testuser/myapp/instances.enabled/sharded_cluster_crud/var/lib/storage-a-001/00000000000000001062.xlog • ... Confirm [y/n]: @@ -201,20 +201,20 @@ Tarantool supports loading and running chunks of Lua code before starting instan To load or run Lua code immediately upon Tarantool startup, specify the ``TT_PRELOAD`` environment variable. Its value can be either a path to a Lua script or a Lua module name: -* To run the Lua script ``preload_script.lua`` from the ``sharded_cluster`` directory, set ``TT_PRELOAD`` as follows: +* To run the Lua script ``preload_script.lua`` from the ``sharded_cluster_crud`` directory, set ``TT_PRELOAD`` as follows: .. code-block:: console - $ TT_PRELOAD=preload_script.lua tt start sharded_cluster + $ TT_PRELOAD=preload_script.lua tt start sharded_cluster_crud Tarantool runs the ``preload_script.lua`` code, waits for it to complete, and then starts instances. -* To load the ``preload_module`` from the ``sharded_cluster`` directory, set ``TT_PRELOAD`` as follows: +* To load the ``preload_module`` from the ``sharded_cluster_crud`` directory, set ``TT_PRELOAD`` as follows: .. code-block:: console - $ TT_PRELOAD=preload_module tt start sharded_cluster + $ TT_PRELOAD=preload_module tt start sharded_cluster_crud .. note:: @@ -226,7 +226,7 @@ by semicolons: .. code-block:: console - $ TT_PRELOAD="preload_script.lua;preload_module" tt start sharded_cluster + $ TT_PRELOAD="preload_script.lua;preload_module" tt start sharded_cluster_crud If an error happens during the execution of the preload script or module, Tarantool reports the problem and exits. 
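
For illustration, a minimal ``preload_script.lua`` could look like the sketch below. Its contents are hypothetical and only demonstrate that arbitrary Lua code runs before the instances start:

.. code-block:: lua

   -- preload_script.lua: a hypothetical script executed via TT_PRELOAD
   -- before any cluster instance starts.
   local log = require('log')

   -- Put one-time environment checks or setup here.
   log.info('preload: environment is ready, starting instances')
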
From 98eb4644b31b1ddf9ea7059ab864671ecdc7df34 Mon Sep 17 00:00:00 2001 From: andreyaksenov Date: Thu, 4 Jul 2024 16:43:41 +0300 Subject: [PATCH 5/6] Mention TCM --- doc/how-to/vshard_quick.rst | 1 + 1 file changed, 1 insertion(+) diff --git a/doc/how-to/vshard_quick.rst b/doc/how-to/vshard_quick.rst index c062b6edbc..9404aabc19 100644 --- a/doc/how-to/vshard_quick.rst +++ b/doc/how-to/vshard_quick.rst @@ -133,6 +133,7 @@ In this section, the following options are configured: The cluster topology defined in the :ref:`following section ` also specifies the ``iproto.advertise.client`` option for each instance. This option accepts a URI used to advertise the instance to clients. +For example, |tcm_full_name| uses these URIs to :ref:`connect ` to cluster instances. .. _vshard-quick-start-configuring-cluster-bucket-count: From 98306a507c7530234922b42ea1221075b2e54a0c Mon Sep 17 00:00:00 2001 From: andreyaksenov Date: Mon, 8 Jul 2024 10:54:23 +0300 Subject: [PATCH 6/6] Update per TW review --- doc/how-to/vshard_quick.rst | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/doc/how-to/vshard_quick.rst b/doc/how-to/vshard_quick.rst index 9404aabc19..8b1265a1e7 100644 --- a/doc/how-to/vshard_quick.rst +++ b/doc/how-to/vshard_quick.rst @@ -6,10 +6,10 @@ Creating a sharded cluster **Example on GitHub**: `sharded_cluster_crud `_ In this tutorial, you get a sharded cluster up and running on your local machine and learn how to manage the cluster using the tt utility. -In this tutorial, the following external modules are used: +This cluster uses the following external modules: - :ref:`vshard ` enables sharding in the cluster. -- `crud `__ allows you to perform CRUD operations in the sharded cluster. +- `crud `__ allows you to manipulate data in the sharded cluster. The cluster created in this tutorial includes 5 instances: one router and 4 storages, which constitute two replica sets. @@ -51,7 +51,7 @@ In this tutorial, the application layout is prepared manually: 3. Inside ``instances.enabled/sharded_cluster_crud``, create the following files: - ``instances.yml`` specifies instances to run in the current environment. - - ``config.yaml`` specifies the cluster's :ref:`configuration `. + - ``config.yaml`` specifies the cluster :ref:`configuration `. - ``storage.lua`` contains code specific for :ref:`storages `. - ``router.lua`` contains code specific for a :ref:`router `. - ``sharded_cluster_crud-scm-1.rockspec`` specifies external dependencies required by the application. @@ -133,7 +133,7 @@ In this section, the following options are configured: The cluster topology defined in the :ref:`following section ` also specifies the ``iproto.advertise.client`` option for each instance. This option accepts a URI used to advertise the instance to clients. -For example, |tcm_full_name| uses these URIs to :ref:`connect ` to cluster instances. +For example, :ref:`Tarantool Cluster Manager ` uses these URIs to :ref:`connect ` to cluster instances. .. _vshard-quick-start-configuring-cluster-bucket-count: @@ -155,13 +155,13 @@ Specify the total number of :ref:`buckets ` in a sharded cluste Step 4: Defining the cluster topology ************************************* -Define the cluster's topology inside the :ref:`groups ` section. +Define the cluster topology inside the :ref:`groups ` section. The cluster includes two groups: * ``storages`` includes two replica sets. Each replica set contains two instances. * ``routers`` includes one router instance. 
-Here is a schematic view of the cluster's topology: +Here is a schematic view of the cluster topology: .. code-block:: yaml @@ -326,7 +326,7 @@ After starting instances, you need to bootstrap the cluster as follows: • Connecting to the instance... • Connected to sharded_cluster_crud:router-a-001 -2. Call :ref:`vshard.router.bootstrap() ` to perform the initial cluster bootstrap: +2. Call :ref:`vshard.router.bootstrap() ` to perform the initial cluster bootstrap and distribute all buckets across the replica sets: .. code-block:: tarantoolsession @@ -338,10 +338,10 @@ After starting instances, you need to bootstrap the cluster as follows: .. _vshard-quick-start-working-status: -Checking the cluster's status -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Checking the cluster status +~~~~~~~~~~~~~~~~~~~~~~~~~~~ -To check the cluster's status, execute :ref:`vshard.router.info() ` on the router: +To check the cluster status, execute :ref:`vshard.router.info() ` on the router: .. code-block:: tarantoolsession @@ -416,7 +416,7 @@ Writing and selecting data { 10, box.NULL, 'Queen', 1970 } }) - Calling this function :ref:`distributes data ` evenly across the cluster's nodes. + Calling this function :ref:`distributes data ` evenly across the cluster nodes. 2. To get a tuple by the specified ID, call the ``crud.get()`` function: @@ -464,7 +464,7 @@ Writing and selecting data Checking data distribution ~~~~~~~~~~~~~~~~~~~~~~~~~~ -To check how data is distributed across the cluster's nodes, follow the steps below: +To check how data is distributed across the replica sets, follow the steps below: 1. Connect to any storage in the ``storage-a`` replica set: