Use the CRUD module in the 'Creating a sharded cluster' tutorial #4338


Merged · 6 commits · Jul 8, 2024
25 changes: 11 additions & 14 deletions doc/book/admin/instance_config.rst
@@ -17,7 +17,7 @@ The main steps of creating and preparing the application for deployment are:

3. :ref:`admin-instance_config-package-app`.

In this section, a `sharded_cluster <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster>`_ application is used as an example.
In this section, a `sharded_cluster_crud <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud>`_ application is used as an example.
This cluster includes five instances: one router and four storages, which form two replica sets.

.. image:: /book/admin/admin_instances_dev.png
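
For orientation, the topology part of such a ``config.yaml`` might look roughly like the sketch below (a trimmed, hypothetical fragment: only the instance layout is shown; credentials, sharding roles, and listen URIs are omitted):

.. code-block:: yaml

    groups:
      storages:
        replicasets:
          storage-a:
            instances:
              storage-a-001: {}
              storage-a-002: {}
          storage-b:
            instances:
              storage-b-001: {}
              storage-b-002: {}
      routers:
        replicasets:
          router-a:
            instances:
              router-a-001: {}
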
@@ -82,27 +82,27 @@ In this example, the application's layout is prepared manually and looks as follows:
├── distfiles
├── include
├── instances.enabled
│ └── sharded_cluster
│ └── sharded_cluster_crud
│ ├── config.yaml
│ ├── instances.yaml
│ ├── router.lua
│ ├── sharded_cluster-scm-1.rockspec
│ ├── sharded_cluster_crud-scm-1.rockspec
│ └── storage.lua
├── modules
├── templates
└── tt.yaml


The ``sharded_cluster`` directory contains the following files:
The ``sharded_cluster_crud`` directory contains the following files:

- ``config.yaml``: contains the :ref:`configuration <configuration>` of the cluster. This file might include the entire cluster topology or provide connection settings to a centralized configuration storage.
- ``instances.yaml``: specifies instances to run in the current environment. For example, on the developer’s machine, this file might include all the instances defined in the cluster configuration. In the production environment, this file includes :ref:`instances to run on the specific machine <admin-instances_to_run>`.
- ``router.lua``: includes code specific to a :ref:`router <vshard-architecture-router>`.
- ``sharded_cluster-scm-1.rockspec``: specifies the required external dependencies (for example, ``vshard``).
- ``sharded_cluster_crud-scm-1.rockspec``: specifies the required external dependencies (for example, ``vshard`` and ``crud``).
- ``storage.lua``: includes code specific to :ref:`storages <vshard-architecture-storage>`.
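
For illustration, an ``instances.yaml`` that runs all five instances on one machine could be as simple as the following sketch (instance names assumed to match the configuration above):

.. code-block:: yaml

    storage-a-001:
    storage-a-002:
    storage-b-001:
    storage-b-002:
    router-a-001: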

You can find the full example here:
`sharded_cluster <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster>`_.
`sharded_cluster_crud <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud>`_.
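
As an illustration of the rockspec format, a minimal ``sharded_cluster_crud-scm-1.rockspec`` might look like the following (a hypothetical sketch; the dependency list is assumed from the description above):

.. code-block:: lua

    -- hypothetical rockspec sketch: declares external dependencies only
    package = 'sharded_cluster_crud'
    version = 'scm-1'
    source  = { url = '/dev/null' }
    dependencies = {
        'vshard',
        'crud',
    }
    build = { type = 'none' }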



@@ -116,7 +116,7 @@ Packaging the application
To package the ready application, use the :ref:`tt pack <tt-pack>` command.
This command can create an installable DEB/RPM package or generate a ``.tgz`` archive.
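
For example, a ``.tgz`` archive for this application could be built like so (assuming the ``tt`` environment described above):

.. code-block:: console

    $ tt pack tgz --app-list sharded_cluster_crud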

The structure below reflects the content of the packed ``.tgz`` archive for the `sharded_cluster <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster>`_ application:
The structure below reflects the content of the packed ``.tgz`` archive for the `sharded_cluster_crud <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud>`_ application:

.. code-block:: console

@@ -125,18 +125,15 @@
├── bin
│ ├── tarantool
│ └── tt
├── include
├── instances.enabled
│ └── sharded_cluster -> ../sharded_cluster
├── modules
├── sharded_cluster
│ └── sharded_cluster_crud -> ../sharded_cluster_crud
├── sharded_cluster_crud
│ ├── .rocks
│ │ └── share
│ │ └── ...
│ ├── config.yaml
│ ├── instances.yaml
│ ├── router.lua
│ ├── sharded_cluster-scm-1.rockspec
│ └── storage.lua
└── tt.yaml

@@ -147,7 +144,7 @@ The application's layout looks similar to the one defined when :ref:`developing

- ``instances.enabled``: contains a symlink to the packed ``sharded_cluster_crud`` application.

- ``sharded_cluster``: a packed application. In addition to files created during the application development, includes the ``.rocks`` directory containing application dependencies (for example, ``vshard``).
- ``sharded_cluster_crud``: a packed application. In addition to files created during the application development, includes the ``.rocks`` directory containing application dependencies (for example, ``vshard`` and ``crud``).

- ``tt.yaml``: a ``tt`` configuration file.
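
For reference, a minimal ``tt.yaml`` might look roughly like the following (a hypothetical sketch; the exact keys depend on the ``tt`` version):

.. code-block:: yaml

    env:
      instances_enabled: instances.enabled
    app:
      run_dir: var/run
      log_dir: var/log
      wal_dir: var/lib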

@@ -178,7 +175,7 @@ define instances to run on each machine by changing the content of the ``instances.yaml`` file.

``instances.yaml``:

.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/instances.yaml
.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/instances.yaml
:language: yaml
:dedent:
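
For example, a machine that should host only the ``storage-a`` replica set might use a reduced file like this (hypothetical split):

.. code-block:: yaml

    storage-a-001:
    storage-a-002: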

86 changes: 43 additions & 43 deletions doc/book/admin/start_stop_instance.rst
@@ -17,7 +17,7 @@ To get more context on how the application's environment might look, refer to :r

.. NOTE::

In this section, a `sharded_cluster <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster>`_ application is used to demonstrate how to start, stop, and manage instances in a cluster.
In this section, a `sharded_cluster_crud <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud>`_ application is used to demonstrate how to start, stop, and manage instances in a cluster.


.. _configuration_run_instance:
@@ -30,20 +30,20 @@ To start Tarantool instances, use the :ref:`tt start <tt-start>` command:

.. code-block:: console

$ tt start sharded_cluster
• Starting an instance [sharded_cluster:storage-a-001]...
• Starting an instance [sharded_cluster:storage-a-002]...
• Starting an instance [sharded_cluster:storage-b-001]...
• Starting an instance [sharded_cluster:storage-b-002]...
• Starting an instance [sharded_cluster:router-a-001]...
$ tt start sharded_cluster_crud
• Starting an instance [sharded_cluster_crud:storage-a-001]...
• Starting an instance [sharded_cluster_crud:storage-a-002]...
• Starting an instance [sharded_cluster_crud:storage-b-001]...
• Starting an instance [sharded_cluster_crud:storage-b-002]...
• Starting an instance [sharded_cluster_crud:router-a-001]...

After the cluster has started and worked for some time, you can find its artifacts
in the directories specified in the ``tt`` configuration. These are the default
locations in the local :ref:`launch mode <tt-config_modes>`:

* ``sharded_cluster/var/log/<instance_name>/`` -- instance :ref:`logs <admin-logs>`.
* ``sharded_cluster/var/lib/<instance_name>/`` -- :ref:`snapshots and write-ahead logs <concepts-data_model-persistence>`.
* ``sharded_cluster/var/run/<instance_name>/`` -- control sockets and PID files.
* ``sharded_cluster_crud/var/log/<instance_name>/`` -- instance :ref:`logs <admin-logs>`.
* ``sharded_cluster_crud/var/lib/<instance_name>/`` -- :ref:`snapshots and write-ahead logs <concepts-data_model-persistence>`.
* ``sharded_cluster_crud/var/run/<instance_name>/`` -- control sockets and PID files.
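
For example, to peek at a storage instance's log in the local layout shown above:

.. code-block:: console

    $ tail sharded_cluster_crud/var/log/storage-a-001/tt.log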

In the system launch mode, artifacts are created in these locations:

@@ -72,21 +72,21 @@ To check the status of instances, execute :ref:`tt status <tt-status>`:

.. code-block:: console

$ tt status sharded_cluster
$ tt status sharded_cluster_crud
INSTANCE STATUS PID MODE
sharded_cluster:storage-a-001 RUNNING 2023 RW
sharded_cluster:storage-a-002 RUNNING 2026 RO
sharded_cluster:storage-b-001 RUNNING 2020 RW
sharded_cluster:storage-b-002 RUNNING 2021 RO
sharded_cluster:router-a-001 RUNNING 2022 RW
sharded_cluster_crud:storage-a-001 RUNNING 2023 RW
sharded_cluster_crud:storage-a-002 RUNNING 2026 RO
sharded_cluster_crud:storage-b-001 RUNNING 2020 RW
sharded_cluster_crud:storage-b-002 RUNNING 2021 RO
sharded_cluster_crud:router-a-001 RUNNING 2022 RW

To check the status of a specific instance, you need to specify its name:

.. code-block:: console

$ tt status sharded_cluster:storage-a-001
$ tt status sharded_cluster_crud:storage-a-001
INSTANCE STATUS PID MODE
sharded_cluster:storage-a-001 RUNNING 2023 RW
sharded_cluster_crud:storage-a-001 RUNNING 2023 RW


.. _admin-start_stop_instance_connect:
@@ -98,18 +98,18 @@ To connect to the instance, use the :ref:`tt connect <tt-connect>` command:

.. code-block:: console

$ tt connect sharded_cluster:storage-a-001
$ tt connect sharded_cluster_crud:storage-a-001
• Connecting to the instance...
• Connected to sharded_cluster:storage-a-001
• Connected to sharded_cluster_crud:storage-a-001

sharded_cluster:storage-a-001>
sharded_cluster_crud:storage-a-001>

In the instance's console, you can execute commands provided by the :ref:`box <box-module>` module.
For example, :ref:`box.info <box_introspection-box_info>` can be used to get various information about a running instance:

.. code-block:: console
.. code-block:: tarantoolsession

sharded_cluster:storage-a-001> box.info.ro
sharded_cluster_crud:storage-a-001> box.info.ro
---
- false
...
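
Because the application bundles the ``crud`` module, the router can also serve space queries through it. A hypothetical snippet, assuming a sharded ``bands`` space created by the application:

.. code-block:: lua

    -- hypothetical router-side query through the crud module
    local crud = require('crud')

    -- select up to three records from the sharded 'bands' space;
    -- crud routes the request to the proper storages transparently
    local result, err = crud.select('bands', {{'<=', 'id', 3}}, {first = 3})
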
@@ -125,15 +125,15 @@ To restart an instance, use :ref:`tt restart <tt-restart>`:

.. code-block:: console

$ tt restart sharded_cluster:storage-a-002
$ tt restart sharded_cluster_crud:storage-a-002

After executing ``tt restart``, you need to confirm this operation:

.. code-block:: console

Confirm restart of 'sharded_cluster:storage-a-002' [y/n]: y
• The Instance sharded_cluster:storage-a-002 (PID = 2026) has been terminated.
• Starting an instance [sharded_cluster:storage-a-002]...
Confirm restart of 'sharded_cluster_crud:storage-a-002' [y/n]: y
• The Instance sharded_cluster_crud:storage-a-002 (PID = 2026) has been terminated.
• Starting an instance [sharded_cluster_crud:storage-a-002]...


.. _admin-start_stop_instance_stop:
@@ -145,18 +145,18 @@ To stop a specific instance, use :ref:`tt stop <tt-stop>` as follows:

.. code-block:: console

$ tt stop sharded_cluster:storage-a-002
$ tt stop sharded_cluster_crud:storage-a-002

You can also stop all the instances at once as follows:

.. code-block:: console

$ tt stop sharded_cluster
• The Instance sharded_cluster:storage-b-001 (PID = 2020) has been terminated.
• The Instance sharded_cluster:storage-b-002 (PID = 2021) has been terminated.
• The Instance sharded_cluster:router-a-001 (PID = 2022) has been terminated.
• The Instance sharded_cluster:storage-a-001 (PID = 2023) has been terminated.
• can't "stat" the PID file. Error: "stat /home/testuser/myapp/instances.enabled/sharded_cluster/var/run/storage-a-002/tt.pid: no such file or directory"
$ tt stop sharded_cluster_crud
• The Instance sharded_cluster_crud:storage-b-001 (PID = 2020) has been terminated.
• The Instance sharded_cluster_crud:storage-b-002 (PID = 2021) has been terminated.
• The Instance sharded_cluster_crud:router-a-001 (PID = 2022) has been terminated.
• The Instance sharded_cluster_crud:storage-a-001 (PID = 2023) has been terminated.
• can't "stat" the PID file. Error: "stat /home/testuser/myapp/instances.enabled/sharded_cluster_crud/var/run/storage-a-002/tt.pid: no such file or directory"

.. note::

@@ -172,12 +172,12 @@ The :ref:`tt clean <tt-clean>` command removes instance artifacts (such as logs

.. code-block:: console

$ tt clean sharded_cluster
$ tt clean sharded_cluster_crud
• List of files to delete:

• /home/testuser/myapp/instances.enabled/sharded_cluster/var/log/storage-a-001/tt.log
• /home/testuser/myapp/instances.enabled/sharded_cluster/var/lib/storage-a-001/00000000000000001062.snap
• /home/testuser/myapp/instances.enabled/sharded_cluster/var/lib/storage-a-001/00000000000000001062.xlog
• /home/testuser/myapp/instances.enabled/sharded_cluster_crud/var/log/storage-a-001/tt.log
• /home/testuser/myapp/instances.enabled/sharded_cluster_crud/var/lib/storage-a-001/00000000000000001062.snap
• /home/testuser/myapp/instances.enabled/sharded_cluster_crud/var/lib/storage-a-001/00000000000000001062.xlog
• ...

Confirm [y/n]:
@@ -201,20 +201,20 @@ Tarantool supports loading and running chunks of Lua code before starting instances.
To load or run Lua code immediately upon Tarantool startup, specify the ``TT_PRELOAD``
environment variable. Its value can be either a path to a Lua script or a Lua module name:

* To run the Lua script ``preload_script.lua`` from the ``sharded_cluster`` directory, set ``TT_PRELOAD`` as follows:
* To run the Lua script ``preload_script.lua`` from the ``sharded_cluster_crud`` directory, set ``TT_PRELOAD`` as follows:

.. code-block:: console

$ TT_PRELOAD=preload_script.lua tt start sharded_cluster
$ TT_PRELOAD=preload_script.lua tt start sharded_cluster_crud

Tarantool runs the ``preload_script.lua`` code, waits for it to complete, and
then starts instances.

* To load the ``preload_module`` from the ``sharded_cluster`` directory, set ``TT_PRELOAD`` as follows:
* To load the ``preload_module`` from the ``sharded_cluster_crud`` directory, set ``TT_PRELOAD`` as follows:

.. code-block:: console

$ TT_PRELOAD=preload_module tt start sharded_cluster
$ TT_PRELOAD=preload_module tt start sharded_cluster_crud

.. note::

@@ -226,7 +226,7 @@ by semicolons:

.. code-block:: console

$ TT_PRELOAD="preload_script.lua;preload_module" tt start sharded_cluster
$ TT_PRELOAD="preload_script.lua;preload_module" tt start sharded_cluster_crud

If an error happens during the execution of the preload script or module, Tarantool
reports the problem and exits.
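
A preload script is an ordinary Lua chunk. For illustration, a minimal hypothetical ``preload_script.lua`` might be:

.. code-block:: lua

    -- hypothetical preload_script.lua: executed once before instances start
    print('preload: preparing the environment')
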
@@ -1,16 +1,70 @@
# Sharded cluster

A sample application created in the [Creating a sharded cluster](https://www.tarantool.io/en/doc/latest/how-to/vshard_quick/) tutorial.
A sample application demonstrating how to configure a [sharded](https://www.tarantool.io/en/doc/latest/concepts/sharding/) cluster.

## Running

To learn how to run the cluster, see the [Working with the cluster](https://www.tarantool.io/en/doc/latest/how-to/vshard_quick/#working-with-the-cluster) section.
To run the cluster, go to the `sharding` directory in the terminal and perform the following steps:

1. Install dependencies defined in the `*.rockspec` file:

## Packaging
```console
$ tt build sharded_cluster
```

2. Run the cluster:

To package an application into a `.tgz` archive, use the `tt pack` command:
```console
$ tt start sharded_cluster
```

```console
$ tt pack tgz --app-list sharded_cluster
```
3. Connect to the router:

```console
$ tt connect sharded_cluster:router-a-001
```

4. Call `vshard.router.bootstrap()` to perform the initial cluster bootstrap:

```console
sharded_cluster:router-a-001> vshard.router.bootstrap()
---
- true
...
```

5. Insert test data (a hypothetical sketch of `insert_data()` is shown after this list):

```console
sharded_cluster:router-a-001> insert_data()
---
...
```

6. Connect to storages in different replica sets to see how data is distributed across nodes:

a. `storage-a-001`:

```console
sharded_cluster:storage-a-001> box.space.bands:select()
---
- - [1, 614, 'Roxette', 1986]
- [2, 986, 'Scorpions', 1965]
- [5, 755, 'Pink Floyd', 1965]
- [7, 998, 'The Doors', 1965]
- [8, 762, 'Nirvana', 1987]
...
```

b. `storage-b-001`:

```console
sharded_cluster:storage-b-001> box.space.bands:select()
---
- - [3, 11, 'Ace of Base', 1987]
- [4, 42, 'The Beatles', 1960]
- [6, 55, 'The Rolling Stones', 1962]
- [9, 299, 'Led Zeppelin', 1968]
- [10, 167, 'Queen', 1970]
...
```
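
For reference, `insert_data()` is defined by the application's `router.lua`. A rough, hypothetical sketch of such a function (assuming a `bands` space and a `put` helper defined on the storages) could be:

```lua
-- hypothetical sketch of insert_data() on the router
local vshard = require('vshard')

local bands = {
    { 1, 'Roxette', 1986 },
    { 2, 'Scorpions', 1965 },
}

function insert_data()
    for _, band in ipairs(bands) do
        local id, name, year = band[1], band[2], band[3]
        -- map the record to a bucket and write it through the router;
        -- 'put' is assumed to be a helper defined on the storages
        local bucket_id = vshard.router.bucket_id_strcrc32(id)
        vshard.router.callrw(bucket_id, 'put', { id, bucket_id, name, year },
            { timeout = 5 })
    end
end
```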