diff --git a/api-reference/pgai/vectorizer-api-reference.mdx b/api-reference/pgai/vectorizer-api-reference.mdx
deleted file mode 100644
index c955c0c..0000000
--- a/api-reference/pgai/vectorizer-api-reference.mdx
+++ /dev/null
@@ -1,1827 +0,0 @@
----
-title: Vectorizer API reference
-description: API reference for the pgai Vectorizer functions that create, configure, and manage automated embedding generation for PostgreSQL data.
----
-
-import { CLOUD_LONG } from '/snippets/vars.mdx';
-
-This page provides an API reference for Vectorizer functions. For an overview
-of Vectorizer and how it works, see the [Vectorizer Guide](/docs/vectorizer/overview.md).
-
-A vectorizer provides you with a powerful and automated way to generate and
-manage LLM embeddings for your PostgreSQL data. Here's a summary of what you
-gain from Vectorizers:
-
-- **Automated embedding generation**: you can create a vectorizer for a specified
-  table, which automatically generates embeddings for the data in that table and
-  keeps them in sync with the source data.
-
-- **Automatic synchronization**: a vectorizer creates triggers on the source table,
-  ensuring that embeddings are automatically updated when the source data
-  changes.
-
-- **Background processing**: the process to create embeddings runs
-  asynchronously in the background. This minimizes the impact on regular database
-  operations such as INSERT, UPDATE, and DELETE.
-
-- **Scalability**: a vectorizer processes data in batches and can run concurrently.
-  This enables vectorizers to handle large datasets efficiently.
-
-- **Configurable embedding process**: a vectorizer is highly configurable,
-  allowing you to specify:
-  - The embedding model and dimensions. For example, the `nomic-embed-text` model in Ollama.
-  - Chunking strategies for text data.
-  - Formatting templates for combining multiple fields.
-  - Indexing options for efficient similarity searches.
-  - Scheduling for background processing.
-
-- **Integration with multiple AI providers**: a vectorizer supports different
-  embedding providers, initially including OpenAI, with more planned for the
-  future.
-
-- **Efficient storage and retrieval**: embeddings are stored in a separate table
-  with appropriate indexing, optimizing for vector similarity searches.
-
-- **View creation**: a view is automatically created to join the original data with
-  its embeddings, making it easy to query and use the embedded data.
-
-- **Fine-grained access control**: you can specify the roles that have
-  access to a vectorizer and its related objects.
-
-- **Monitoring and management**: monitor the vectorizer's queue, enable/disable scheduling, and manage the vectorizer
-  lifecycle.
-
-Vectorizer significantly simplifies the process of incorporating AI-powered
-semantic search and analysis capabilities into existing PostgreSQL databases,
-making it easier for you to leverage the power of LLMs in your data workflows.
-
-Vectorizer offers the following APIs:
-
-**Install or upgrade database dependencies**
-- [Install or upgrade](#install-or-upgrade-the-database-objects-necessary-for-vectorizer) the database objects necessary for vectorizer.
-
-**Create and configure vectorizers**
-- [Create vectorizers](#create-vectorizers): automate the process of creating embeddings for table data.
-- [Loading configuration](#loading-configuration): define the source of the data to embed. You can load data from a column in the source table, or from a file referenced in a column of the source table.
-- [Parsing configuration](#parsing-configuration): for documents, define the way the data is parsed after it is loaded.
-- [Chunking configuration](#chunking-configuration): define the way text data is split into smaller, manageable pieces
-  before being processed for embeddings.
-- [Formatting configuration](#formatting-configuration): configure the way data from the source table is formatted
-  before it is sent for embedding.
-
-- [Embedding configuration](#embedding-configuration): specify the LLM provider, model, and the parameters to be
-  used when generating the embeddings.
-- [Indexing configuration](#indexing-configuration): specify the way generated embeddings should be indexed for
-  efficient similarity searches.
-- [Scheduling configuration](#scheduling-configuration): configure when and how often the vectorizer should run in order
-  to process new or updated data.
-- [Processing configuration](#processing-configuration): specify the way the vectorizer should process data when
-  generating embeddings.
-
-**Manage vectorizers**
-- [Enable and disable vectorizer schedules](#enable-and-disable-vectorizer-schedules): temporarily pause or resume the
-  automatic processing of embeddings, without having to delete or recreate the vectorizer configuration.
-- [Drop a vectorizer](#drop-a-vectorizer): remove a vectorizer that you created previously, and clean up the associated
-  resources.
-
-**Monitor vectorizers**
-- [View vectorizer status](#view-vectorizer-status): monitoring tools in pgai that provide insights into the state and
-  performance of vectorizers.
-
-
-## Install or upgrade the database objects necessary for vectorizer
-
-You can install or upgrade the database objects necessary for vectorizer by running the following CLI command:
-
-```bash
-pgai install -d DB_URL
-```
-
-or by running the following Python code:
-
-```python
-import pgai
-
-pgai.install(DB_URL)
-```
-
-This creates the necessary catalog tables and functions in your database. All of the
-database objects are installed in the `ai` schema.
-
-The version of the database objects corresponds to the version of the `pgai` Python package you have installed. To upgrade, first upgrade the Python package with `pip install -U pgai` and then run `pgai.install(DB_URL)` again.
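After installation, you can confirm that the catalog objects landed in the `ai` schema with a standard `information_schema` query (a quick sanity check; any Postgres client works):

```sql
-- Returns one row named "ai" if the pgai catalog is installed
SELECT schema_name
FROM information_schema.schemata
WHERE schema_name = 'ai';
```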
- -## Create vectorizers - -You use the `ai.create_vectorizer` function in pgai to set up and configure an automated system -for generating and managing embeddings for a specific table in your database. - -The purpose of `ai.create_vectorizer` is to: -- Automate the process of creating embeddings for table data. -- Set up necessary infrastructure such as tables, views, triggers, or columns for embedding management. -- Configure the embedding generation process according to user specifications. -- Integrate with AI providers for embedding creation. -- Set up scheduling for background processing of embeddings. - -### Example usage - -By using `ai.create_vectorizer`, you can quickly set up a sophisticated -embedding system tailored to your specific needs, without having to manually -create and manage all the necessary database objects and processes. - -#### Example 1: Table destination (default) - -This approach creates a separate table to store embeddings and a view that joins with the source table: - -```sql -SELECT ai.create_vectorizer( - 'website.blog'::regclass, - name => 'website_blog_vectorizer', - loading => ai.loading_column('contents'), - embedding => ai.embedding_ollama('nomic-embed-text', 768), - chunking => ai.chunking_character_text_splitter(128, 10), - formatting => ai.formatting_python_template('title: $title published: $published $chunk'), - grant_to => ai.grant_to('bob', 'alice'), - destination => ai.destination_table( - target_schema => 'website', - target_table => 'blog_embeddings_store', - view_name => 'blog_embeddings' - ) -); -``` - -This function call: -1. Sets up a vectorizer named 'website_blog_vectorizer' for the `website.blog` table. -2. Creates a separate table `website.blog_embeddings_store` to store embeddings. -3. Creates a view `website.blog_embeddings` joining the source and embeddings. -4. Loads the `contents` column. -5. Uses the Ollama `nomic-embed-text` model to create 768 dimensional embeddings. -6. 
Chunks the content into 128-character pieces with a 10-character overlap.
-7. Formats each chunk with a `title` and a `published` date.
-8. Grants necessary permissions to the roles `bob` and `alice`.
-
-#### Example 2: Column destination
-
-Column destination places the embedding in a separate column in the source table. It can only be used when the vectorizer does not perform chunking because it requires a one-to-one relationship between the source data and the embedding. This is useful in cases where you know the source text is short (as is common if the chunking has already been done upstream in your data pipeline).
-
-The workflow is that your application inserts data into the table with a NULL in the embedding column. The vectorizer then reads the row, generates the embedding, and updates the row with the correct value in the embedding column.
-
-```sql
-SELECT ai.create_vectorizer(
-    'website.product_descriptions'::regclass,
-    name => 'product_descriptions_vectorizer',
-    loading => ai.loading_column('description'),
-    embedding => ai.embedding_openai('text-embedding-3-small', 768),
-    chunking => ai.chunking_none(), -- Required for column destination
-    grant_to => ai.grant_to('marketing_team'),
-    destination => ai.destination_column('description_embedding')
-);
-```
-
-This function call:
-1. Sets up a vectorizer named 'product_descriptions_vectorizer' for the `website.product_descriptions` table.
-2. Adds a column called `description_embedding` directly to the source table.
-3. Loads the `description` column.
-4. Uses OpenAI's embedding model to create 768-dimensional embeddings.
-5. Doesn't chunk the content (required for column destination).
-6. Grants necessary permissions to the role `marketing_team`.
-
-The function returns an integer identifier for the vectorizer created, but you can also reference it by name
-in other management functions.
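Once a vectorizer like the one in Example 1 has populated its embeddings, the generated view supports ordinary similarity queries. The following is a sketch, assuming the pgvector `<=>` (cosine distance) operator and pgai's `ai.ollama_embed` helper to embed the query text; the query string and column names follow Example 1, and you would adapt the embedding call to your provider:

```sql
SELECT title, chunk
FROM website.blog_embeddings
ORDER BY embedding <=> ai.ollama_embed('nomic-embed-text', 'postgres performance tips')
LIMIT 5;
```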
-
-### Parameters
-
-`ai.create_vectorizer` takes the following parameters:
-
-| Name | Type | Default | Required | Description |
-|------------------|---------------------------------------------------------|-----------------------------------|----------|------------------------------------------------------------------------------------------------------|
-| source | regclass | - | ✔ | The source table that embeddings are generated for. |
-| name | text | Auto-generated | ✖ | A unique name for the vectorizer. If not provided, it's auto-generated from the destination type: `[target_schema]_[target_table]` for a table destination, or `[source_schema]_[source_table]_[embedding_column]` for a column destination. Must follow the snake_case pattern `^[a-z][a-z_0-9]*$`. |
-| destination | [Destination configuration](#destination-configuration) | `ai.destination_table()` | ✖ | Configure how the embeddings are stored. Two options are available: `ai.destination_table()` (default) creates a separate table to store embeddings; `ai.destination_column()` adds an embedding column directly to the source table. |
-| embedding | [Embedding configuration](#embedding-configuration) | - | ✔ | Set how to embed the data. |
-| loading | [Loading configuration](#loading-configuration) | - | ✔ | Set the way to load the data from the source table, using functions like `ai.loading_column()`. |
-| parsing | [Parsing configuration](#parsing-configuration) | `ai.parsing_auto()` | ✖ | Set the way to parse the data, using functions like `ai.parsing_auto()`. |
-| chunking | [Chunking configuration](#chunking-configuration) | `ai.chunking_recursive_character_text_splitter()` | ✖ | Set the way to split text data, using functions like `ai.chunking_character_text_splitter()`. |
-| indexing | [Indexing configuration](#indexing-configuration) | `ai.indexing_default()` | ✖ | Specify how to index the embeddings. For example, `ai.indexing_diskann()` or `ai.indexing_hnsw()`. |
-| formatting | [Formatting configuration](#formatting-configuration) | `ai.formatting_python_template()` | ✖ | Define the data format before embedding, using `ai.formatting_python_template()`. |
-| scheduling | [Scheduling configuration](#scheduling-configuration) | `ai.scheduling_default()` | ✖ | Set how often to run the vectorizer. For example, `ai.scheduling_timescaledb()`. |
-| processing | [Processing configuration](#processing-configuration) | `ai.processing_default()` | ✖ | Configure the way to process the embeddings. |
-| queue_schema | name | - | ✖ | Specify the schema where the work queue table is created. |
-| queue_table | name | - | ✖ | Specify the name of the work queue table. |
-| grant_to | [Grant To configuration](#grant-to-configuration) | `ai.grant_to_default()` | ✖ | Specify which users should be able to use objects created by the vectorizer. |
-| enqueue_existing | bool | `true` | ✖ | Set to `true` if existing rows should be immediately queued for embedding. |
-| if_not_exists | bool | `false` | ✖ | Set to `true` to avoid an error if the vectorizer already exists. |
-
-
-#### Returns
-
-The `int` id of the vectorizer that you created. You can also reference the vectorizer by its name in management functions.
-
-## Destination configuration
-
-You use the destination configuration functions to define how and where the embeddings are stored. There are two options available:
-
-- [ai.destination_table](#aidestination_table): Creates a separate table to store embeddings (default behavior)
-- [ai.destination_column](#aidestination_column): Adds an embedding column directly to the source table
-
-### ai.destination_table
-
-You use `ai.destination_table` to store embeddings in a separate table.
This is the default behavior, where:
-- A new table is created to store the embeddings
-- A view is created that joins the source table with the embeddings
-- Multiple chunks can be created per row (using chunking)
-
-#### Example usage
-
-```sql
-SELECT ai.create_vectorizer(
-    'my_table'::regclass,
-    destination => ai.destination_table(
-        target_schema => 'public',
-        target_table => 'my_table_embeddings_store',
-        view_schema => 'public',
-        view_name => 'my_table_embeddings'
-    ),
-    -- other parameters...
-);
-```
-
-For simpler configuration with defaults:
-
-```sql
-SELECT ai.create_vectorizer(
-    'my_table'::regclass,
-    destination => ai.destination_table('my_table_embeddings'),
-    -- other parameters...
-);
-```
-
-#### Parameters
-
-`ai.destination_table` takes the following parameters:
-
-| Name | Type | Default | Required | Description |
-|------|------|---------|----------|-------------|
-| destination | name | - | ✖ | The base name for the view and table. The view is named `<destination>`, the embedding table is named `<destination>_store`. |
-| target_schema | name | Source table schema | ✖ | The schema where the embeddings table will be created. |
-| target_table | name | `<source_table>_embedding_store` or `<destination>_store` | ✖ | The name of the table where embeddings will be stored. |
-| view_schema | name | Source table schema | ✖ | The schema where the view will be created. |
-| view_name | name | `<source_table>_embedding` or `<destination>` | ✖ | The name of the view that joins source and embeddings tables. |
-
-#### Returns
-
-A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
-
-### ai.destination_column
-
-You use `ai.destination_column` to store embeddings directly in the source table as a new column. This approach can only be used when the vectorizer does not perform chunking because it requires a one-to-one relationship between the source data and the embedding.
This is useful in cases where you know the source text is short (as is common if the chunking has already been done upstream in your data pipeline). - -This approach: -- Adds a vector column directly to the source table -- Does not create a separate view -- Requires chunking to be set to `ai.chunking_none()` (no chunking) -- Stores exactly one embedding per row - -The workflow is that your application inserts data into the table with a NULL in the embedding column. The vectorizer will then read the row, generate the embedding and update the row with the correct value in the embedding column. - -#### Example usage - -```sql -SELECT ai.create_vectorizer( - 'my_table'::regclass, - destination => ai.destination_column('content_embedding'), - chunking => ai.chunking_none(), - -- other parameters... -); -``` - -#### Parameters - -`ai.destination_column` takes the following parameters: - -| Name | Type | Default | Required | Description | -|------|------|---------|----------|-------------| -| embedding_column | name | - | ✔ | The name of the column to be added to the source table for storing embeddings. | - -#### Returns - -A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers). - -## Loading configuration - -You use the loading configuration functions in `pgai` to define the way data is loaded from the source table. - -The loading functions are: - -- [ai.loading_column](#ailoading_column) -- [ai.loading_uri](#ailoading_uri) - -### ai.loading_column - -You use `ai.loading_column` to load the data to embed directly from a column in the source table. - -#### Example usage - -```sql -SELECT ai.create_vectorizer( - 'my_table'::regclass, - loading => ai.loading_column('contents'), - -- other parameters... 
-); -``` - -#### Parameters - -`ai.loading_column` takes the following parameters: - -| Name | Type | Default | Required | Description | -|------|------|---------|----------|-------------| -| column_name | text | - | ✔ | The name of the column containing the data to load. | - -#### Returns - -A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers). - -### ai.loading_uri - -You use `ai.loading_uri` to load the data to embed from a file that is referenced in a column of the source table. -This file path is internally passed to [smart_open](https://github.com/piskvorky/smart_open), so it supports any protocol that smart_open supports, including: - -- Local files -- Amazon S3 -- Google Cloud Storage -- Azure Blob Storage -- HTTP/HTTPS -- SFTP -- and [many more](https://github.com/piskvorky/smart_open/blob/master/help.txt) - - -#### Environment configuration - -You just need to ensure the vectorizer worker has the correct credentials to access the file, such as in environment variables. Here is an example for AWS S3: - -```bash -export AWS_ACCESS_KEY_ID='your_access_key' -export AWS_SECRET_ACCESS_KEY='your_secret_key' -export AWS_REGION='your_region' # optional -``` - -Make sure these environment variables are properly set in the environment where the PGAI vectorizer worker runs. - -#### Example usage - -```sql -SELECT ai.create_vectorizer( - 'my_table'::regclass, - loading => ai.loading_uri('file_uri_column_name'), - -- other parameters... -); -``` - -#### Parameters - -`ai.loading_uri` takes the following parameters: - -| Name | Type | Default | Required | Description | -|------|------|---------|----------|-------------| -| column_name | text | - | ✔ | The name of the column containing the file path. | - -#### Returns - -A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers). 
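Putting the loading pieces together, a vectorizer over externally stored documents might look like the following sketch. The table name `document`, its `uri` column, the example S3 path, and the Ollama embedding settings are illustrative assumptions, not part of the API:

```sql
-- Hypothetical table whose rows point at files in object storage
CREATE TABLE document (
    id  SERIAL PRIMARY KEY,
    uri TEXT NOT NULL  -- for example, an S3 URI to a PDF report
);

SELECT ai.create_vectorizer(
    'document'::regclass,
    loading => ai.loading_uri('uri'),
    parsing => ai.parsing_auto(),
    embedding => ai.embedding_ollama('nomic-embed-text', 768)
);
```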
-
-## Parsing configuration
-
-You use the parsing configuration functions in `pgai` to define how data is parsed after document loading. This is useful for non-textual formats such as PDF documents.
-
-The parsing functions are:
-
-- [ai.parsing_auto](#aiparsing_auto): Automatically selects the appropriate parser based on file type.
-- [ai.parsing_none](#aiparsing_none): Skips the parsing step. Only appropriate for textual data.
-- [ai.parsing_docling](#aiparsing_docling): Converts various formats to Markdown. A more powerful alternative to PyMuPDF. See [Docling](https://docling-project.github.io/docling/usage/supported_formats/) for supported formats.
-- [ai.parsing_pymupdf](#aiparsing_pymupdf): See [PyMuPDF](https://pymupdf.readthedocs.io/en/latest/) for supported formats.
-
-### ai.parsing_auto
-
-You use `ai.parsing_auto` to automatically select an appropriate parser based on detected file types.
-Documents with unrecognizable formats aren't processed and generate an error in the `ai.vectorizer_errors` table.
-
-The parser selection works by examining file extensions and content types:
-- For PDF files, images, Office documents (DOCX, XLSX, etc.): Uses docling
-- For EPUB and MOBI (e-book formats): Uses pymupdf
-- For text formats (TXT, MD, etc.): No parser is used (content is read directly)
-
-#### Example usage
-
-```sql
-SELECT ai.create_vectorizer(
-    'my_table'::regclass,
-    parsing => ai.parsing_auto(),
-    -- other parameters...
-);
-```
-
-#### Parameters
-
-`ai.parsing_auto` takes the following parameters:
-
-| Name | Type | Default | Required | Description |
-|------|------|---------|----------|-------------|
-| None | - | - | - | - |
-
-#### Returns
-
-A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
-
-
-### ai.parsing_none
-
-You use `ai.parsing_none` to skip the parsing step. Only appropriate for textual data.
-
-#### Example usage
- -```sql -SELECT ai.create_vectorizer( - 'my_table'::regclass, - parsing => ai.parsing_none(), - -- other parameters... -); -``` - -#### Parameters - -`ai.parsing_none` takes the following parameters: - -| Name | Type | Default | Required | Description | -|------|------|---------|----------|-------------| -| None | - | - | - | - | - -#### Returns - -A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers). - -### ai.parsing_docling - -You use `ai.parsing_docling` to parse the data provided by the loader using [docling](https://docling-project.github.io/docling/). - -Docling is a more robust and thorough document parsing library that: -- Uses OCR capabilities to extract text from images -- Can parse complex documents with tables and multi-column layouts -- Supports Office formats (DOCX, XLSX, etc.) -- Preserves document structure better than other parsers -- Converts documents to markdown format - -Note that docling uses ML models for improved parsing, which makes it slower than simpler parsers like pymupdf. - -#### Example usage - -```sql -SELECT ai.create_vectorizer( - 'my_table'::regclass, - parsing => ai.parsing_docling(), - -- other parameters... -); -``` - -#### Parameters - -`ai.parsing_docling` takes the following parameters: - -| Name | Type | Default | Required | Description | -|------|------|---------|----------|-------------| -| None | - | - | - | - | - -#### Returns - -A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers). - -### ai.parsing_pymupdf - -You use `ai.parsing_pymupdf` to parse the data provided by the loader using [pymupdf](https://pymupdf.readthedocs.io/en/latest/). 
- -PyMuPDF is a faster, simpler document parser that: -- Processes PDF documents with basic structure preservation -- Supports e-book formats like EPUB and MOBI -- Is generally faster than docling for simpler documents -- Works well for documents with straightforward layouts - -Choose pymupdf when processing speed is more important than perfect structure preservation. - -#### Example usage - -```sql -SELECT ai.create_vectorizer( - 'my_table'::regclass, - parsing => ai.parsing_pymupdf(), - -- other parameters... -); -``` - -#### Parameters - -`ai.parsing_pymupdf` takes the following parameters: - -| Name | Type | Default | Required | Description | -|------|------|---------|----------|-------------| -| None | - | - | - | - | - -#### Returns - -A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers). - -## Chunking configuration - -You use the chunking configuration functions in `pgai` to define the way text data is split into smaller, -manageable pieces before being processed for embeddings. This is crucial because many embedding models have input size -limitations, and chunking allows for processing of larger text documents while maintaining context. - -By using chunking functions, you can fine-tune how your text data is -prepared for embedding, ensuring that the chunks are appropriately sized and -maintain necessary context for their specific use case. This is particularly -important for maintaining the quality and relevance of the generated embeddings, -especially when dealing with long-form content or documents with specific -structural elements. 
-
-The chunking functions are:
-
-- [ai.chunking_character_text_splitter](#aichunking_character_text_splitter)
-- [ai.chunking_recursive_character_text_splitter](#aichunking_recursive_character_text_splitter)
-
-The key difference between these functions is that `chunking_recursive_character_text_splitter`
-allows for a more sophisticated splitting strategy, potentially preserving more
-semantic meaning in the chunks.
-
-### ai.chunking_character_text_splitter
-
-You use `ai.chunking_character_text_splitter` to:
-- Split text into chunks based on a specified separator.
-- Control the chunk size and the amount of overlap between chunks.
-
-#### Example usage
-
-- Split the content into chunks of 128 characters, with a 10-character
-  overlap, using '\n' as the separator:
-
-  ```sql
-  SELECT ai.create_vectorizer(
-      'my_table'::regclass,
-      chunking => ai.chunking_character_text_splitter(128, 10, E'\n'),
-      -- other parameters...
-  );
-  ```
-
-#### Parameters
-
-`ai.chunking_character_text_splitter` takes the following parameters:
-
-| Name | Type | Default | Required | Description |
-|--------------------|------|---------|----------|--------------------------------------------------------|
-| chunk_size | int | 800 | ✖ | The maximum number of characters in a chunk |
-| chunk_overlap | int | 400 | ✖ | The number of characters to overlap between chunks |
-| separator | text | E'\n\n' | ✖ | The string or character used to split the text |
-| is_separator_regex | bool | false | ✖ | Set to `true` if `separator` is a regular expression. |
-
-#### Returns
-
-A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
-
-### ai.chunking_recursive_character_text_splitter
-
-`ai.chunking_recursive_character_text_splitter` provides more fine-grained control over the chunking process.
-You use it to recursively split text into chunks using multiple separators.
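To make the recursive strategy concrete, here is a simplified Python sketch of the idea. This is illustrative only, not pgai's implementation: the real splitter also merges adjacent pieces back together and applies `chunk_overlap`, which this sketch omits.

```python
def recursive_split(text, chunk_size, separators):
    """Split on the first separator; recurse with the remaining
    separators for any piece still larger than chunk_size."""
    if len(text) <= chunk_size or not separators:
        return [text]
    sep, rest = separators[0], separators[1:]
    # An empty separator means "split into individual characters"
    pieces = text.split(sep) if sep else list(text)
    chunks = []
    for piece in pieces:
        if len(piece) <= chunk_size:
            chunks.append(piece)
        else:
            chunks.extend(recursive_split(piece, chunk_size, rest))
    return chunks

# Line breaks are tried first, then spaces, then single characters
print(recursive_split("first line\nsecond line\nthird", 12, ["\n", " ", ""]))
# → ['first line', 'second line', 'third']
```

The default `separators` array in the SQL function follows the same ordering logic: coarser boundaries (paragraphs, lines, sentence punctuation) are preferred, with character-level splitting as a last resort.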
-
-#### Example usage
-
-- Recursively split content into chunks of 256 characters, with a 20-character
-  overlap, first trying to split on '\n;', then on spaces:
-
-  ```sql
-  SELECT ai.create_vectorizer(
-      'my_table'::regclass,
-      chunking => ai.chunking_recursive_character_text_splitter(
-          256,
-          20,
-          separators => array[E'\n;', ' ']
-      ),
-      -- other parameters...
-  );
-  ```
-
-#### Parameters
-
-`ai.chunking_recursive_character_text_splitter` takes the following parameters:
-
-| Name | Type | Default | Required | Description |
-|--------------------|------|---------|----------|----------------------------------------------------------|
-| chunk_size | int | 800 | ✖ | The maximum number of characters per chunk |
-| chunk_overlap | int | 400 | ✖ | The number of characters to overlap between chunks |
-| separators | text[] | array[E'\n\n', E'\n', '.', '?', '!', ' ', ''] | ✖ | The ordered array of strings used to recursively split the text |
-| is_separator_regex | bool | false | ✖ | Set to `true` if the separators are regular expressions. |
-
-#### Returns
-
-A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
-
-## Embedding configuration
-
-You use the embedding configuration functions to specify how embeddings are
-generated for your data.
-
-The embedding functions are:
-
-- [ai.embedding_litellm](#aiembedding_litellm)
-- [ai.embedding_openai](#aiembedding_openai)
-- [ai.embedding_ollama](#aiembedding_ollama)
-- [ai.embedding_voyageai](#aiembedding_voyageai)
-
-### ai.embedding_litellm
-
-You call the `ai.embedding_litellm` function to use LiteLLM to generate embeddings for models from multiple providers.
-
-The purpose of `ai.embedding_litellm` is to:
-- Define the embedding model to use.
-- Specify the dimensionality of the embeddings.
-- Configure optional, provider-specific parameters.
-- Set the name of the environment variable that holds the value of your API key.
- -#### Example usage - -Use `ai.embedding_litellm` to create an embedding configuration object that is passed as an argument to [ai.create_vectorizer](#create-vectorizers): - -1. Set the required API key for your provider. - - The API key should be set as an environment variable which is available to either the Vectorizer worker, or the - Postgres process. - -2. Create a vectorizer using LiteLLM to access the 'microsoft/codebert-base' embedding model on huggingface: - - ```sql - SELECT ai.create_vectorizer( - 'my_table'::regclass, - embedding => ai.embedding_litellm( - 'huggingface/microsoft/codebert-base', - 768, - api_key_name => 'HUGGINGFACE_API_KEY', - extra_options => '{"wait_for_model": true}'::jsonb - ), - -- other parameters... - ); - ``` - -#### Parameters - -The function takes several parameters to customize the LiteLLM embedding configuration: - -| Name | Type | Default | Required | Description | -|---------------|-------|---------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------| -| model | text | - | ✔ | Specify the name of the embedding model to use. Refer to the [LiteLLM embedding documentation] for an overview of the available providers and models. | -| dimensions | int | - | ✔ | Define the number of dimensions for the embedding vectors. This should match the output dimensions of the chosen model. | -| api_key_name | text | - | ✖ | Set the name of the environment variable that contains the API key. This allows for flexible API key management without hardcoding keys in the database. | -| extra_options | jsonb | - | ✖ | Set provider-specific configuration options. | - - - -#### Returns - -A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers). - -#### Provider-specific configuration examples - -The following subsections show how to configure the vectorizer for all supported providers. 
-
-##### Cohere
-
-```sql
-  SELECT ai.create_vectorizer(
-      'my_table'::regclass,
-      embedding => ai.embedding_litellm(
-          'cohere/embed-english-v3.0',
-          1024,
-          api_key_name => 'COHERE_API_KEY'
-      ),
-      -- other parameters...
-  );
-```
-
-Note: The [Cohere documentation on input_type] specifies that the `input_type` parameter is required.
-By default, LiteLLM sets this to `search_document`. The input type can be provided
-via `extra_options`, i.e. `extra_options => '{"input_type": "search_document"}'::jsonb`.
-
-
-##### Mistral
-
-```sql
-  SELECT ai.create_vectorizer(
-      'my_table'::regclass,
-      embedding => ai.embedding_litellm(
-          'mistral/mistral-embed',
-          1024,
-          api_key_name => 'MISTRAL_API_KEY'
-      ),
-      -- other parameters...
-  );
-```
-
-Note: Mistral limits the maximum input per batch to 16384 tokens.
-
-##### Azure OpenAI
-
-To set up a vectorizer with Azure OpenAI you require these values from the Azure AI Foundry console:
-- deployment name
-- base URL
-- version
-- API key
-
-The deployment name is visible in the "Deployment info" section. The base URL and version are
-extracted from the "Target URI" field in the "Endpoint" section. The Target URI has the form:
-`https://your-resource-name.openai.azure.com/openai/deployments/your-deployment-name/embeddings?api-version=2023-05-15`.
-In this example, the base URL is `https://your-resource-name.openai.azure.com` and the version is `2023-05-15`.
-
-![Azure AI Foundry console example](/docs/images/azure_openai.png)
-
-Configure the vectorizer; note that the base URL and version are configured through `extra_options`:
-
-```sql
-  SELECT ai.create_vectorizer(
-      'my_table'::regclass,
-      embedding => ai.embedding_litellm(
-          'azure/<deployment-name>',
-          1024,
-          api_key_name => 'AZURE_API_KEY',
-          extra_options => '{"api_base": "<base-url>", "api_version": "<version>"}'::jsonb
-      ),
-      -- other parameters...
-  );
-```
-
-##### Huggingface inference models
-
-You can use [Huggingface inference] to obtain vector embeddings.
Note that
-Huggingface has two categories of inference: "serverless inference", and
-"inference endpoints". Serverless inference is free, but is limited to models
-under 10GB in size, and the model may not be immediately available to serve
-requests. Inference endpoints are a paid service and provide always-on APIs
-for production use-cases.
-
-Note: We recommend using the `wait_for_model` parameter when using vectorizer
-with serverless inference to force the call to block until the model has been
-loaded. If you do not use `wait_for_model`, it's likely that vectorization will
-never succeed.
-
-```sql
-  SELECT ai.create_vectorizer(
-      'my_table'::regclass,
-      embedding => ai.embedding_litellm(
-          'huggingface/BAAI/bge-small-en-v1.5',
-          384,
-          extra_options => '{"wait_for_model": true}'::jsonb
-      ),
-      -- other parameters...
-  );
-```
-
-
-##### AWS Bedrock
-
-To set up a vectorizer with AWS Bedrock, you must ensure that the vectorizer
-is authenticated to make API calls to the AWS Bedrock endpoint. The vectorizer
-worker uses boto3 under the hood, so there are multiple ways to achieve this.
-
-The simplest method is to provide the `AWS_ACCESS_KEY_ID`,
-`AWS_SECRET_ACCESS_KEY`, and `AWS_REGION_NAME` environment variables to the
-vectorizer worker. Consult the [boto3 credentials documentation] for more
-options.
-
-
-```sql
-  SELECT ai.create_vectorizer(
-      'my_table'::regclass,
-      embedding => ai.embedding_litellm(
-          'bedrock/amazon.titan-embed-text-v2:0',
-          1024,
-          api_key_name => 'AWS_SECRET_ACCESS_KEY', -- optional
-          extra_options => '{"aws_access_key_id": "<access-key-id>", "aws_region_name": "<region-name>"}'::jsonb -- optional
-      ),
-      -- other parameters...
-  );
-```
-
-You can also configure the secret only in the database, and provide the
-`api_key_name` parameter to prompt the vectorizer worker to load the API key
-from the database.
When you do this, you may need to pass `aws_access_key_id`
-and `aws_region_name` through the `extra_options` parameter:
-
-```sql
-    SELECT ai.create_vectorizer(
-        'my_table'::regclass,
-        embedding => ai.embedding_litellm(
-            'bedrock/amazon.titan-embed-text-v2:0',
-            1024,
-            api_key_name => 'AWS_SECRET_ACCESS_KEY', -- optional
-            extra_options => '{"aws_access_key_id": "", "aws_region_name": ""}'::jsonb -- optional
-        ),
-        -- other parameters...
-    );
-```
-
-#### Vertex AI
-
-To set up a vectorizer with Vertex AI, you must ensure that the vectorizer
-can make API calls to the Vertex AI endpoint. The vectorizer worker uses
-GCP's authentication under the hood, so there are multiple ways to achieve
-this.
-
-The simplest method is to provide the `VERTEX_PROJECT` and
-`VERTEX_CREDENTIALS` environment variables to the vectorizer worker. These
-correspond to the project ID and the path to a file containing credentials for
-a service account. Consult the [Authentication methods at Google] for more
-options.
-
-```sql
-    SELECT ai.create_vectorizer(
-        'my_table'::regclass,
-        embedding => ai.embedding_litellm(
-            'vertex_ai/text-embedding-005',
-            768
-        ),
-        -- other parameters...
-    );
-```
-
-You can also configure the secret only in the database, and provide the
-`api_key_name` parameter to prompt the vectorizer worker to load the API key
-from the database. When you do this, you may need to pass `vertex_project` and
-`vertex_location` through the `extra_options` parameter.
-
-Note: `VERTEX_CREDENTIALS` should contain the path to a file containing the
-API key; the vectorizer worker must have access to this file in order to load
-the credentials.
-
-```sql
-    SELECT ai.create_vectorizer(
-        'my_table'::regclass,
-        embedding => ai.embedding_litellm(
-            'vertex_ai/text-embedding-005',
-            768,
-            api_key_name => 'VERTEX_CREDENTIALS', -- optional
-            extra_options => '{"vertex_project": "", "vertex_location": ""}'::jsonb -- optional
-        ),
-        -- other parameters...
- ); -``` - -### ai.embedding_openai - -You call the `ai.embedding_openai` function to use an OpenAI model to generate embeddings. - -The purpose of `ai.embedding_openai` is to: -- Define which OpenAI embedding model to use. -- Specify the dimensionality of the embeddings. -- Configure optional parameters like the user identifier for API calls. -- Set the name of the [environment variable that holds the value of your OpenAI API key][openai-use-env-var]. - -#### Example usage - -Use `ai.embedding_openai` to create an embedding configuration object that is passed as an argument to [ai.create_vectorizer](#create-vectorizers): - -1. Set the value of your OpenAI API key. - - For example, [in an environment variable][openai-set-key] or in a [Docker configuration][docker configuration]. - -2. Create a vectorizer with OpenAI as the embedding provider: - - ```sql - SELECT ai.create_vectorizer( - 'my_table'::regclass, - embedding => ai.embedding_openai( - 'text-embedding-3-small', - 768, - chat_user => 'bob', - api_key_name => 'MY_OPENAI_API_KEY_NAME' - ), - -- other parameters... - ); - ``` - -#### Parameters - -The function takes several parameters to customize the OpenAI embedding configuration: - -| Name | Type | Default | Required | Description | -|--------------|------|------------------|----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| model | text | - | ✔ | Specify the name of the OpenAI embedding model to use. For example, `text-embedding-3-small`. | -| dimensions | int | - | ✔ | Define the number of dimensions for the embedding vectors. This should match the output dimensions of the chosen model. | -| chat_user | text | - | ✖ | The identifier for the user making the API call. 
This can be useful for tracking API usage or for OpenAI's monitoring purposes. |
-| api_key_name | text | `OPENAI_API_KEY` | ✖ | Set [the name of the environment variable that contains the OpenAI API key][openai-use-env-var]. This allows for flexible API key management without hardcoding keys in the database. On {CLOUD_LONG}, you should set this to the name of the secret that contains the OpenAI API key. |
-| base_url | text | - | ✖ | Set the base URL of the OpenAI API. Note: no default is configured here, so that the vectorizer worker can be configured through the `OPENAI_BASE_URL` environment variable. |
-
-#### Returns
-
-A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
-
-### ai.embedding_ollama
-
-You use the `ai.embedding_ollama` function to use an Ollama model to generate embeddings.
-
-The purpose of `ai.embedding_ollama` is to:
-- Define which Ollama model to use.
-- Specify the dimensionality of the embeddings.
-- Configure how the Ollama API is accessed.
-- Configure the model's truncation behaviour and keep-alive.
-- Configure optional, model-specific parameters such as `temperature`.
-
-#### Example usage
-
-This function is used to create an embedding configuration object that is passed as an argument to [ai.create_vectorizer](#create-vectorizers):
-
-```sql
-SELECT ai.create_vectorizer(
-    'my_table'::regclass,
-    embedding => ai.embedding_ollama(
-        'nomic-embed-text',
-        768,
-        base_url => 'http://my.ollama.server:443',
-        options => '{ "num_ctx": 1024 }',
-        keep_alive => '10m'
-    ),
-    -- other parameters...
-);
-```
-
-#### Parameters
-
-The function takes several parameters to customize the Ollama embedding configuration:
-
-| Name       | Type  | Default | Required | Description |
-|------------|-------|---------|----------|-------------|
-| model      | text  | -       | ✔        | Specify the name of the Ollama model to use. For example, `nomic-embed-text`. Note: the model must already be available (pulled) in your Ollama server. |
-| dimensions | int   | -       | ✔        | Define the number of dimensions for the embedding vectors. This should match the output dimensions of the chosen model. |
-| base_url   | text  | -       | ✖        | Set the base URL of the Ollama API. Note: no default is configured here, so that the vectorizer worker can be configured through the `OLLAMA_HOST` environment variable. |
-| options    | jsonb | -       | ✖        | Configure additional model parameters listed in the documentation for the Modelfile, such as `temperature` or `num_ctx`. |
-| keep_alive | text  | -       | ✖        | Control how long the model stays loaded in memory following the request. Note: no default is configured here, so that this can be configured at the Ollama level. |
-
-#### Returns
-
-A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
-
-### ai.embedding_voyageai
-
-You use the `ai.embedding_voyageai` function to use a Voyage AI model to generate embeddings.
-
-The purpose of `ai.embedding_voyageai` is to:
-- Define which Voyage AI model to use.
-- Specify the dimensionality of the embeddings.
-- Configure the model's truncation behaviour and API key name.
-- Configure the input type. 
-
-#### Example usage
-
-This function is used to create an embedding configuration object that is passed as an argument to [ai.create_vectorizer](#create-vectorizers):
-
-```sql
-SELECT ai.create_vectorizer(
-    'my_table'::regclass,
-    embedding => ai.embedding_voyageai(
-        'voyage-3-lite',
-        512,
-        api_key_name => 'TEST_API_KEY'
-    ),
-    -- other parameters...
-);
-```
-
-#### Parameters
-
-The function takes several parameters to customize the Voyage AI embedding configuration:
-
-| Name         | Type | Default          | Required | Description |
-|--------------|------|------------------|----------|-------------|
-| model        | text | -                | ✔        | Specify the name of the [Voyage AI model](https://docs.voyageai.com/docs/embeddings#model-choices) to use. |
-| dimensions   | int  | -                | ✔        | Define the number of dimensions for the embedding vectors. This should match the output dimensions of the chosen model. |
-| input_type   | text | 'document'       | ✖        | The type of the input text: `null`, `'query'`, or `'document'`. |
-| api_key_name | text | `VOYAGE_API_KEY` | ✖        | Set the name of the environment variable that contains the Voyage AI API key. This allows for flexible API key management without hardcoding keys in the database. On {CLOUD_LONG}, you should set this to the name of the secret that contains the Voyage AI API key. |
-
-#### Returns
-
-A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
-
-## Formatting configuration
-
-You use the `ai.formatting_python_template` function in `pgai` to
-configure the way data from the source table is formatted before it is sent
-for embedding.
-
-`ai.formatting_python_template` provides a flexible way to structure the input
-for embedding models. 
This enables you to incorporate relevant metadata and additional
-text. This can significantly enhance the quality and usefulness of the generated
-embeddings, especially in scenarios where context from multiple fields is
-important for understanding or searching the content.
-
-The purpose of `ai.formatting_python_template` is to:
-- Define a template for formatting the data before embedding.
-- Allow the combination of multiple fields from the source table.
-- Add consistent context or structure to the text being embedded.
-- Customize the input for the embedding model to improve relevance and searchability.
-
-Formatting happens after chunking, and the special `$chunk` variable contains the chunked text.
-
-### Example usage
-
-- Default formatting:
-
-  The default formatter uses the `$chunk` template, which outputs the chunk text as-is.
-
-  ```sql
-  SELECT ai.create_vectorizer(
-    'blog_posts'::regclass,
-    formatting => ai.formatting_python_template('$chunk'),
-    -- other parameters...
-  );
-  ```
-
-- Add context from other columns:
-
-  Add the title and publication date to each chunk, providing more context for the embedding.
-
-  ```sql
-  SELECT ai.create_vectorizer(
-    'blog_posts'::regclass,
-    formatting => ai.formatting_python_template('Title: $title\nDate: $published\nContent: $chunk'),
-    -- other parameters...
-  );
-  ```
-
-- Combine multiple fields:
-
-  Prepend author and category information to each chunk.
-
-  ```sql
-  SELECT ai.create_vectorizer(
-    'blog_posts'::regclass,
-    formatting => ai.formatting_python_template('Author: $author\nCategory: $category\n$chunk'),
-    -- other parameters...
-  );
-  ```
-
-- Add consistent structure:
-
-  Add start and end markers to each chunk, which could be useful for certain
-  types of embeddings or retrieval tasks.
-
-  ```sql
-  SELECT ai.create_vectorizer(
-    'blog_posts'::regclass,
-    formatting => ai.formatting_python_template('BEGIN DOCUMENT\n$chunk\nEND DOCUMENT'),
-    -- other parameters...
- ); - ``` - -### Parameters - -`ai.formatting_python_template` takes the following parameter: - -|Name| Type | Default | Required | Description | -|-|--------|-|-|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -|template| string |`$chunk`|✔| A string using [Python template strings](https://docs.python.org/3/library/string.html#template-strings) with $-prefixed variables that defines how the data should be formatted. | - - - The $chunk placeholder is required and represents the text chunk that will be embedded. - - Other placeholders can be used to reference columns from the source table. - - The template allows for adding static text or structuring the input in a specific way. - -### Returns - -A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers). - -## Indexing configuration - -You use indexing configuration functions in pgai to -specify the way generated embeddings should be indexed for efficient similarity -searches. These functions enable you to choose and configure the indexing -method that best suits your needs in terms of performance, accuracy, and -resource usage. - -By providing these indexing options, pgai allows you to optimize your -embedding storage and retrieval based on their specific use case and performance -requirements. This flexibility is crucial for scaling AI-powered search and -analysis capabilities within a PostgreSQL database. - -Key points about indexing: - -- The choice of indexing method depends on your dataset size, query performance requirements, and available resources. - -- [ai.indexing_none](#aiindexing_none) is better suited for small datasets, or when you want to perform index creation manually. -- [ai.indexing_diskann](#aiindexing_diskann) is generally recommended for larger datasets that require an index. 
-
-- The `min_rows` parameter enables you to delay index creation until you have enough data to justify the overhead.
-
-- These indexing methods are designed for approximate nearest neighbor search, which trades a small amount of accuracy for significant speed improvements in similarity searches.
-
-The available functions are:
-
-- [ai.indexing_default](#aiindexing_default): use the platform-specific default indexing configuration.
-- [ai.indexing_none](#aiindexing_none): when you do not want indexes created automatically.
-- [ai.indexing_diskann](#aiindexing_diskann): configure indexing using the [DiskANN algorithm](https://github.com/timescale/pgvectorscale).
-- [ai.indexing_hnsw](#aiindexing_hnsw): configure indexing using the [Hierarchical Navigable Small World (HNSW) algorithm](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world).
-
-### ai.indexing_default
-
-You use `ai.indexing_default` to use the platform-specific default value for indexing.
-
-On {CLOUD_LONG}, the default is `ai.indexing_diskann()`. On self-hosted, the default is `ai.indexing_none()`.
-A timescaledb background job is used for automatic index creation. Since timescaledb may not be installed
-in a self-hosted environment, we default to `ai.indexing_none()`.
-
-#### Example usage
-
-```sql
-  SELECT ai.create_vectorizer(
-    'blog_posts'::regclass,
-    indexing => ai.indexing_default(),
-    -- other parameters...
-  );
-```
-
-#### Parameters
-
-This function takes no parameters.
-
-#### Returns
-
-A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
-
-### ai.indexing_none
-
-You use `ai.indexing_none` to specify that no special indexing should be used for the embeddings.
-
-This is useful when you don't need fast similarity searches or when you're dealing with a small amount of data.
-
-#### Example usage
-
-```sql
-  SELECT ai.create_vectorizer(
-    'blog_posts'::regclass,
-    indexing => ai.indexing_none(),
-    -- other parameters...
- ); -``` - -#### Parameters - -This function takes no parameters. - -#### Returns - -A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers). - -### ai.indexing_diskann - -You use `ai.indexing_diskann` to configure indexing using the DiskANN algorithm, which is designed for high-performance -approximate nearest neighbor search on large-scale datasets. This is suitable for very large datasets that need to be -stored on disk. - -#### Example usage - -```sql - SELECT ai.create_vectorizer( - 'blog_posts'::regclass, - indexing => ai.indexing_diskann(min_rows => 500000, storage_layout => 'memory_optimized'), - -- other parameters... - ); -``` - -#### Parameters - -`ai.indexing_diskann` takes the following parameters: - -| Name | Type | Default | Required | Description | -|------|------|---------|-|--------------------------------------------------| -|min_rows| int | 100000 |✖| The minimum number of rows before creating the index | -| storage_layout | text | - |✖| Set to either `memory_optimized` or `plain` | -| num_neighbors | int | - |✖| Advanced [DiskANN](https://github.com/microsoft/DiskANN/tree/main) parameter. | -| search_list_size | int | - |✖| Advanced [DiskANN](https://github.com/microsoft/DiskANN/tree/main) parameter.| -| max_alpha | float8 | - |✖| Advanced [DiskANN](https://github.com/microsoft/DiskANN/tree/main) parameter.| -| num_dimensions | int | - |✖|Advanced [DiskANN](https://github.com/microsoft/DiskANN/tree/main) parameter.| -| num_bits_per_dimension | int | - |✖| Advanced [DiskANN](https://github.com/microsoft/DiskANN/tree/main) parameter.| -| create_when_queue_empty | boolean | true |✖| Create the index only after all of the embeddings have been generated. | - - -#### Returns - -A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers). 
- -### ai.indexing_hnsw - -You use `ai.indexing_hnsw` to configure indexing using the [Hierarchical Navigable Small World (HNSW) algorithm](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world), -which is known for fast and accurate approximate nearest neighbor search. - -HNSW is suitable for in-memory datasets and scenarios where query speed is crucial. - -#### Example usage - -```sql - SELECT ai.create_vectorizer( - 'blog_posts'::regclass, - indexing => ai.indexing_hnsw(min_rows => 50000, opclass => 'vector_l1_ops'), - -- other parameters... - ); -``` - -#### Parameters - -`ai.indexing_hnsw` takes the following parameters: - -| Name | Type | Default | Required | Description | -|------|------|---------------------|-|----------------------------------------------------------------------------------------------------------------| -|min_rows| int | 100000 |✖| The minimum number of rows before creating the index | -|opclass| text | `vector_cosine_ops` |✖| The operator class for the index. Possible values are:`vector_cosine_ops`, `vector_l1_ops`, or `vector_ip_ops` | -|m| int | - |✖| Advanced [HNSW parameters](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world) | -|ef_construction| int | - |✖| Advanced [HNSW parameters](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world) | -| create_when_queue_empty| boolean | true |✖| Create the index only after all of the embeddings have been generated. | - - -#### Returns - -A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers). - -## Scheduling configuration - -You use scheduling functions in pgai to configure when and how often the vectorizer should run to process new or -updated data. These functions allow you to set up automated, periodic execution of the embedding -generation process. These are advanced options and most users should use the default. 
- -By providing these scheduling options, pgai enables you to automate the process -of keeping your embeddings up-to-date with minimal manual intervention. This is -crucial for maintaining the relevance and accuracy of AI-powered search and -analysis capabilities, especially in systems where data is frequently updated or -added. The flexibility in scheduling also allows users to balance the freshness -of embeddings against system resource usage and other operational -considerations. - -The available functions are: - -- [ai.scheduling_default](#aischeduling_default): uses the platform-specific default scheduling configuration. On {CLOUD_LONG} this is equivalent to `ai.scheduling_timescaledb()`. On self-hosted deployments, this is equivalent to `ai.scheduling_none()`. -- [ai.scheduling_none](#aischeduling_none): when you want manual control over when the vectorizer runs. Use this when you're using an external scheduling system, as is the case with self-hosted deployments. -- [ai.scheduling_timescaledb](#aischeduling_timescaledb): leverages TimescaleDB's robust job scheduling system, which is designed for reliability and scalability. Use this when you're using {CLOUD_LONG}. - - -### ai.scheduling_default - -You use `ai.scheduling_default` to use the platform-specific default scheduling configuration. - -On {CLOUD_LONG}, the default is `ai.scheduling_timescaledb()`. On self-hosted, the default is `ai.scheduling_none()`. -A timescaledb background job is used to periodically trigger a cloud vectorizer on {CLOUD_LONG}. -This is not available in a self-hosted environment. - -#### Example usage - -```sql -SELECT ai.create_vectorizer( - 'my_table'::regclass, - scheduling => ai.scheduling_default(), - -- other parameters... -); -``` - -#### Parameters - -This function takes no parameters. - -#### Returns - -A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers). 
- -### ai.scheduling_none - -You use `ai.scheduling_none` to -- Specify that no automatic scheduling should be set up for the vectorizer. -- Manually control when the vectorizer runs or when you're using an external scheduling system. - -You should use this for self-hosted deployments. - -#### Example usage - -```sql -SELECT ai.create_vectorizer( - 'my_table'::regclass, - scheduling => ai.scheduling_none(), - -- other parameters... -); -``` - -#### Parameters - -This function takes no parameters. - -#### Returns - -A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers). - -### ai.scheduling_timescaledb - -You use `ai.scheduling_timescaledb` to: - -- Configure automated scheduling using TimescaleDB's job scheduling system. -- Allow periodic execution of the vectorizer to process new or updated data. -- Provide fine-grained control over when and how often the vectorizer runs. - -#### Example usage - -- Basic usage (run every 5 minutes). This is the default: - - ```sql - SELECT ai.create_vectorizer( - 'my_table'::regclass, - scheduling => ai.scheduling_timescaledb(), - -- other parameters... - ); - ``` - -- Custom interval (run every hour): - ```sql - SELECT ai.create_vectorizer( - 'my_table'::regclass, - scheduling => ai.scheduling_timescaledb(interval '1 hour'), - -- other parameters... - ); - ``` - -- Specific start time and timezone: - ```sql - SELECT ai.create_vectorizer( - 'my_table'::regclass, - scheduling => ai.scheduling_timescaledb( - interval '30 minutes', - initial_start => '2024-01-01 00:00:00'::timestamptz, - timezone => 'America/New_York' - ), - -- other parameters... - ); - ``` - -- Fixed schedule: - ```sql - SELECT ai.create_vectorizer( - 'my_table'::regclass, - scheduling => ai.scheduling_timescaledb( - interval '1 day', - fixed_schedule => true, - timezone => 'UTC' - ), - -- other parameters... 
-  );
-  ```
-
-#### Parameters
-
-`ai.scheduling_timescaledb` takes the following parameters:
-
-|Name|Type| Default | Required | Description |
-|-|-|---------|-|--------------------------------------------------------------------------------------------------------------------|
-|schedule_interval|interval| '5m' |✖| Set how frequently the vectorizer checks for new or updated data to process. |
-|initial_start|timestamptz| - |✖| Delay the start of scheduling. This is useful for coordinating with other system processes or maintenance windows. |
-|fixed_schedule|bool| - |✖|Set to `true` to use a fixed schedule such as every day at midnight. Set to `false` for a sliding window such as every 24 hours from the last run.|
-|timezone|text| - |✖| Set the timezone this schedule operates in. This ensures that schedules are interpreted correctly, which is especially important for fixed schedules or when coordinating with business hours. |
-
-#### Returns
-
-A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
-
-## Processing configuration
-
-You use the processing configuration functions in pgai to specify
-the way the vectorizer should process data when generating embeddings,
-such as the batch size and concurrency. These are advanced options and most
-users should use the default.
-
-### ai.processing_default
-
-You use `ai.processing_default` to specify the concurrency and batch size for the vectorizer.
-
-#### Example usage
-
-```sql
-  SELECT ai.create_vectorizer(
-    'my_table'::regclass,
-    processing => ai.processing_default(batch_size => 200, concurrency => 5),
-    -- other parameters...
-  );
-```
-
-#### Parameters
-
-`ai.processing_default` takes the following parameters:
-
-|Name| Type | Default | Required | Description |
-|-|------|------------------------------|-|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-|batch_size| int | Determined by the vectorizer |✖| The number of items to process in each batch. The optimal batch size depends on your data and cloud function configuration; larger batch sizes can improve efficiency but may increase memory usage. The default is 1 for vectorizers that use document loading (`ai.loading_uri`) and 50 otherwise. |
-|concurrency| int | Determined by the vectorizer |✖| The number of concurrent processing tasks to run. The optimal concurrency depends on your cloud infrastructure and rate limits; higher concurrency can speed up processing but may increase costs and resource usage. |
-
-#### Returns
-
-A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
-
-## Grant To configuration
-
-You use the grant-to configuration function in pgai to specify which users should be able to use
-objects created by the vectorizer.
-
-### ai.grant_to
-
-Grant permissions to a comma-separated list of users.
-
-Includes the users specified in the `ai.grant_to_default` setting.
-
-#### Example usage
-
-```sql
-  SELECT ai.create_vectorizer(
-    'my_table'::regclass,
-    grant_to => ai.grant_to('bob', 'alice'),
-    -- other parameters...
-  );
-```
-
-#### Parameters
-
-This function takes a comma-separated list of usernames to grant permissions to.
-
-#### Returns
-
-An array of name values that you can use as an argument for [ai.create_vectorizer](#create-vectorizers). 
- -## Enable and disable vectorizer schedules - -You use `ai.enable_vectorizer_schedule` and `ai.disable_vectorizer_schedule` to control -the execution of [scheduled vectorizer jobs](#scheduling-configuration). These functions -provide a way to temporarily pause or resume the automatic processing of embeddings, without -having to delete or recreate the vectorizer configuration. - -These functions provide an important layer of operational control for managing -pgai vectorizers in production environments. They allow database administrators -and application developers to balance the need for up-to-date embeddings with -other system priorities and constraints, enhancing the overall flexibility and -manageability of pgai. - -Key points about schedule enable and disable: - -- These functions provide fine-grained control over individual vectorizer schedules without affecting other - vectorizers, or the overall system configuration. - -- Disabling a schedule does not delete the vectorizer or its configuration; it simply stops scheduling future - executions of the job. - -- These functions are particularly useful in scenarios such as: - - System maintenance windows where you want to reduce database load. - - Temporarily pausing processing during data migrations or large bulk updates. - - Debugging or troubleshooting issues related to the vectorizer. - - Implementing manual control over when embeddings are updated. - -- When a schedule is disabled, new or updated data is not automatically processed. However, the data is still - queued, and will be processed when the schedule is re-enabled, or when the vectorizer is run manually. - -- After re-enabling a schedule, for a vectorizer configured with - [ai.scheduling_timescaledb](#aischeduling_timescaledb), the next run is based - on the original scheduling configuration. For example, if the vectorizer was - set to run every hour, it will run at the next hour mark after being enabled. 
-
-- You can reference vectorizers either by their ID or their name.
-
-Usage example in a maintenance scenario:
-
-```sql
--- Before starting system maintenance using IDs
-SELECT ai.disable_vectorizer_schedule(1);
-SELECT ai.disable_vectorizer_schedule(2);
-
--- Or using names (more human-readable)
-SELECT ai.disable_vectorizer_schedule('public_blog_embeddings');
-SELECT ai.disable_vectorizer_schedule('public_products_embeddings');
-
--- Perform maintenance tasks...
-
--- After maintenance is complete
-SELECT ai.enable_vectorizer_schedule('public_blog_embeddings');
-SELECT ai.enable_vectorizer_schedule('public_products_embeddings');
-```
-
-The available functions are:
-- [ai.enable_vectorizer_schedule](#aienable_vectorizer_schedule): activate, reactivate, or resume a scheduled job.
-- [ai.disable_vectorizer_schedule](#aidisable_vectorizer_schedule): deactivate or temporarily stop a scheduled job.
-
-### ai.enable_vectorizer_schedule
-
-You use `ai.enable_vectorizer_schedule` to:
-- Activate or reactivate the scheduled job for a specific vectorizer.
-- Allow the vectorizer to resume automatic processing of new or updated data.
-
-#### Example usage
-
-To resume the automatic scheduling for a vectorizer:
-
-```sql
--- Using vectorizer name (recommended)
-SELECT ai.enable_vectorizer_schedule('public_blog_embeddings');
-
--- Using ID
-SELECT ai.enable_vectorizer_schedule(1);
-```
-
-#### Parameters
-
-`ai.enable_vectorizer_schedule` can be called in two ways:
-1. With a vectorizer name (recommended for better readability)
-2. With a vectorizer ID
-
-`ai.enable_vectorizer_schedule(name text)`:
-
-|Name| Type | Default | Required | Description |
-|-|------|---------|-|-----------------------------------------------------------|
-|name| text | - |✔| The name of the vectorizer whose schedule you want to enable. 
| - -`ai.enable_vectorizer_schedule(vectorizer_id int)`: - -|Name| Type | Default | Required | Description | -|-|------|---------|-|-----------------------------------------------------------| -|vectorizer_id| int | - |✔| The identifier of the vectorizer whose schedule you want to enable. | - - -#### Returns - -`ai.enable_vectorizer_schedule` does not return a value. - -### ai.disable_vectorizer_schedule - -You use `ai.disable_vectorizer_schedule` to: -- Deactivate the scheduled job for a specific vectorizer. -- Temporarily stop the automatic processing of new or updated data. - - -#### Example usage - -To stop the automatic scheduling for a vectorizer: - -```sql --- Using name (recommended) -SELECT ai.disable_vectorizer_schedule('public_blog_embeddings'); - --- Using ID -SELECT ai.disable_vectorizer_schedule(1); -``` - -#### Parameters - -`ai.disable_vectorizer_schedule` can be called in two ways: -1. With a vectorizer name (recommended for better readability) -2. With a vectorizer ID - -`ai.disable_vectorizer_schedule(name text)`: - -|Name| Type | Default | Required | Description | -|-|------|---------|-|----------------------------------------------------------------------| -|name| text | - |✔| The name of the vectorizer whose schedule you want to disable. | - - -`ai.disable_vectorizer_schedule(vectorizer_id int)`: - -|Name| Type | Default | Required | Description | -|-|------|---------|-|----------------------------------------------------------------------| -|vectorizer_id| int | - |✔| The identifier of the vectorizer whose schedule you want to disable. | - -#### Returns - -`ai.disable_vectorizer_schedule` does not return a value. - - -## Drop a vectorizer - -`ai.drop_vectorizer` is a management tool that you use to remove a vectorizer that you -[created previously](#create-vectorizers), and clean up the associated -resources. 
Its primary purpose is to provide a controlled way to delete a -vectorizer when it's no longer needed, or when you want to reconfigure it from -scratch. - -You use `ai.drop_vectorizer` to: -- Remove a specific vectorizer configuration from the system. -- Clean up associated database objects and scheduled jobs. -- Safely undo the creation of a vectorizer. - -`ai.drop_vectorizer` performs the following on the vectorizer to drop: - -- Deletes the scheduled job associated with the vectorizer if one exists. -- Drops the trigger from the source table used to queue changes. -- Drops the trigger function that backed the source table trigger. -- Drops the queue table used to manage the updates to be processed. -- Deletes the vectorizer row from the `ai.vectorizer` table. - -By default, `ai.drop_vectorizer` does not: - -- Drop the target table containing the embeddings. -- Drop the view joining the target and source tables. - -There is an optional parameter named `drop_all` which is `false` by default. If you -explicitly pass `true`, the function WILL drop the target table and view. - -This design allows you to keep the generated embeddings and the convenient view -even after dropping the vectorizer. This is useful if you want to stop -automatic updates but still use the existing embeddings. - -#### Example usage - -Best practices are: - -- Before dropping a vectorizer, ensure that you will not need the automatic embedding updates it provides. -- After dropping a vectorizer, you may want to manually clean up the target table and view if they're no longer needed. -- You can reference vectorizers either by their ID or their name (recommended). 
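-
-For example, after dropping a vectorizer without `drop_all => true`, a manual
-cleanup might look like the following sketch. The object names here are
-hypothetical; check the `ai.vectorizer` table for the actual target table and
-view names of your vectorizer:
-
-```sql
--- Hypothetical object names for a vectorizer on a public.blog source table
-DROP VIEW IF EXISTS public.blog_contents_embeddings;
-DROP TABLE IF EXISTS public.blog_contents_embedding_store;
-```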
Examples:
- Remove a vectorizer by name (recommended):

  ```sql
  SELECT ai.drop_vectorizer('public_blog_embeddings');
  ```

- Remove a vectorizer by ID:

  ```sql
  SELECT ai.drop_vectorizer(1);
  ```

- Remove a vectorizer and drop the target table and view as well:

  ```sql
  SELECT ai.drop_vectorizer('public_blog_embeddings', drop_all=>true);
  ```

#### Parameters

`ai.drop_vectorizer` can be called in two ways:
1. With a vectorizer name (recommended for better readability)
2. With a vectorizer ID

`ai.drop_vectorizer(name text, drop_all bool)`:

|Name| Type | Default | Required | Description |
|-|------|-|-|-|
|name| text | - |✔|The name of the vectorizer you want to drop|
|drop_all| bool | false |✖|`true` to drop the target table and view as well|

`ai.drop_vectorizer(vectorizer_id int, drop_all bool)`:

|Name| Type | Default | Required | Description |
|-|------|-|-|-|
|vectorizer_id| int | - |✔|The identifier of the vectorizer you want to drop|
|drop_all| bool | false |✖|`true` to drop the target table and view as well|

#### Returns

`ai.drop_vectorizer` does not return a value, but it performs several cleanup operations.

## View vectorizer status

[ai.vectorizer_status view](#aivectorizer_status-view) and
[ai.vectorizer_queue_pending function](#aivectorizer_queue_pending-function) are
monitoring tools in pgai that provide insights into the state and performance of vectorizers.

These monitoring tools are crucial for maintaining the health and performance of
your pgai-enhanced database. They allow you to proactively manage your
vectorizers, ensure timely processing of embeddings, and quickly identify and
address any issues that may arise in your AI-powered data pipelines.

For effective monitoring, you use `ai.vectorizer_status`.
For example:

```sql
-- Get an overview of all vectorizers
SELECT * FROM ai.vectorizer_status;
```

Sample output:

| id | source_table | target_table | view | pending_items |
|----|--------------|--------------|------|---------------|
| 1 | public.blog | public.blog_contents_embedding_store | public.blog_contents_embeddings | 1 |

The `pending_items` column indicates the number of items still awaiting embedding creation. The pending items count helps you to:
- Identify bottlenecks in processing.
- Determine if you need to adjust scheduling or processing configurations.
- Monitor the impact of large data imports or updates on your vectorizers.

Regular monitoring using these tools helps ensure that your vectorizers are keeping up with data changes, and that
embeddings remain up-to-date.

Available views are:
- [ai.vectorizer_status](#aivectorizer_status-view): monitor and display information about a vectorizer.

Available functions are:
- [ai.vectorizer_queue_pending](#aivectorizer_queue_pending-function): retrieve just the queue count for a vectorizer.

### ai.vectorizer_status view

You use `ai.vectorizer_status` to:
- Get a high-level overview of all vectorizers in the system.
- Regularly monitor and check the health of the entire system.
- Display key information about each vectorizer's configuration and current state.
- Use the `pending_items` column to get a quick indication of processing backlogs.
#### Example usage

- Retrieve all vectorizers that have items waiting to be processed:

  ```sql
  SELECT * FROM ai.vectorizer_status WHERE pending_items > 0;
  ```

- System health monitoring:

  ```sql
  -- Alert if any vectorizer has more than 1000 pending items
  SELECT id, source_table, pending_items
  FROM ai.vectorizer_status
  WHERE pending_items > 1000;
  ```

#### Returns

`ai.vectorizer_status` returns the following:

| Column name | Description |
|---------------|-----------------------------------------------------------------------|
| id | The unique identifier of this vectorizer |
| source_table | The fully qualified name of the source table |
| target_table | The fully qualified name of the table storing the embeddings |
| view | The fully qualified name of the view joining source and target tables |
| pending_items | The number of items waiting to be processed by the vectorizer |

### ai.vectorizer_queue_pending function

`ai.vectorizer_queue_pending` enables you to retrieve the number of items in a vectorizer queue
when you need to focus on a particular vectorizer or troubleshoot issues.

You use `ai.vectorizer_queue_pending` to:
- Retrieve the number of pending items for a specific vectorizer.
- Allow for more granular monitoring of individual vectorizer queues.

#### Example usage

Return the number of pending items for a vectorizer:

```sql
-- Using name (recommended)
SELECT ai.vectorizer_queue_pending('public_blog_embeddings');

-- Using ID
SELECT ai.vectorizer_queue_pending(1);
```

A queue with a very large number of items can be slow to count. The optional
`exact_count` parameter defaults to `false`. When `false`, the count is capped:
an exact count is returned if the queue has 10,000 or fewer items, and
9223372036854775807 (the maximum bigint value) is returned if the queue has
more than 10,000 items.
To get an exact count, regardless of queue size, set the optional parameter to
`true` like this:

```sql
-- Using name (recommended)
SELECT ai.vectorizer_queue_pending('public_blog_embeddings', exact_count=>true);

-- Using ID
SELECT ai.vectorizer_queue_pending(1, exact_count=>true);
```

#### Parameters

`ai.vectorizer_queue_pending` can be called in two ways:
1. With a vectorizer name (recommended for better readability)
2. With a vectorizer ID

`ai.vectorizer_queue_pending(name text, exact_count bool)`:

| Name | Type | Default | Required | Description |
|---------------|------|---------|----------|---------------------------------------------------------|
| name | text | - | ✔ | The name of the vectorizer you want to check |
| exact_count | bool | false | ✖ | If true, return exact count. If false, capped at 10,000 |

`ai.vectorizer_queue_pending(vectorizer_id int, exact_count bool)`:

| Name | Type | Default | Required | Description |
|---------------|------|---------|----------|---------------------------------------------------------|
| vectorizer_id | int | - | ✔ | The identifier of the vectorizer you want to check |
| exact_count | bool | false | ✖ | If true, return exact count. If false, capped at 10,000 |

#### Returns

The number of items in the queue for the specified vectorizer.

[LiteLLM embedding documentation]: https://docs.litellm.ai/docs/embedding/supported_embedding
[Cohere documentation on input_type]: https://docs.cohere.com/v2/docs/embeddings#the-input_type-parameter
[Huggingface inference]: https://huggingface.co/docs/huggingface_hub/en/guides/inference
[boto3 credentials documentation]: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
[Authentication methods at Google]: https://cloud.google.com/docs/authentication
[timescale-cloud]: https://console.cloud.timescale.com/
[openai-use-env-var]: https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety#h_a1ab3ba7b2
[openai-set-key]: https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety#h_a1ab3ba7b2
[docker configuration]: /docs/vectorizer/worker.md#install-and-configure-vectorizer-worker
diff --git a/api-reference/tiger-cloud-rest-api/introduction.mdx b/api-reference/tiger-cloud-rest-api/introduction.mdx
index cabc06d..f8d2867 100644
--- a/api-reference/tiger-cloud-rest-api/introduction.mdx
+++ b/api-reference/tiger-cloud-rest-api/introduction.mdx
@@ -9,23 +9,78 @@ description: A comprehensive RESTful API for managing Tiger Cloud resources incl
 
 ## Authentication
 
-The Tiger Cloud REST API uses HTTP Basic Authentication. Include your access key and secret key in the Authorization header.
+The Tiger Cloud REST API uses HTTP Basic Authentication. Include your public key and secret key in the Authorization header.
+
+To run the samples in this API reference, add your [public key and secret key](/integrations/integrate/find-connection-details)
+as the `Authorization.username` and `Authorization.password` when you run the calls.
### Basic Authentication ```http -Authorization: Basic +Authorization: Basic ``` -### Example +## Samples + +### List all services + ```bash -# Using cURL curl -X GET "https://console.cloud.timescale.com/public/api/v1/projects/{project_id}/services" \ - -H "Authorization: Basic $(echo -n 'your_access_key:your_secret_key' | base64)" + -H "Authorization: Basic $(echo -n 'your_public_key:your_secret_key' | base64)" +``` + +### Create a new service + +```bash +curl -X POST "https://console.cloud.timescale.com/public/api/v1/projects/{project_id}/services" \ + -H "Authorization: Basic $(echo -n 'your_public_key:your_secret_key' | base64)" \ + -H "Content-Type: application/json" \ + -d '{ + "name": "my-production-db", + "region_code": "us-east-1", + "compute": { + "cpu": "0.5", + "memory_gb": 2 + } + }' +``` + +### Create a VPC + +```bash +curl -X POST "https://console.cloud.timescale.com/public/api/v1/projects/{project_id}/vpcs" \ + -H "Authorization: Basic $(echo -n 'your_public_key:your_secret_key' | base64)" \ + -H "Content-Type: application/json" \ + -d '{ + "name": "my-vpc", + "region_code": "us-east-1", + "cidr": "10.0.0.0/16" + }' +``` + +### Create a read replica set + +```bash +curl -X POST "https://console.cloud.timescale.com/public/api/v1/projects/{project_id}/services/{service_id}/replicaSets" \ + -H "Authorization: Basic $(echo -n 'your_public_key:your_secret_key' | base64)" \ + -H "Content-Type: application/json" \ + -d '{ + "region_code": "us-west-2", + "compute": { + "cpu": "0.5", + "memory_gb": 2 + } + }' ``` ## API Endpoints -The REST API is organized around three main resource types: +The REST API is organized around the following resource types: + +### Authentication + +Get information about your API credentials: + +- [**Get Authentication Info**](/api-reference/auth/get-authentication-info) - `GET /auth/info` ### Service Management @@ -35,6 +90,8 @@ Manage Tiger Cloud database services: - [**Create a 
Service**](/api-reference/services/create-a-service) - `POST /projects/{project_id}/services` - [**Get a Service**](/api-reference/services/get-a-service) - `GET /projects/{project_id}/services/{service_id}` - [**Delete a Service**](/api-reference/services/delete-a-service) - `DELETE /projects/{project_id}/services/{service_id}` +- [**Start a Service**](/api-reference/services/start-a-service) - `POST /projects/{project_id}/services/{service_id}/start` +- [**Stop a Service**](/api-reference/services/stop-a-service) - `POST /projects/{project_id}/services/{service_id}/stop` - [**Resize a Service**](/api-reference/services/resize-a-service) - `POST /projects/{project_id}/services/{service_id}/resize` - [**Update Service Password**](/api-reference/services/update-service-password) - `POST /projects/{project_id}/services/{service_id}/updatePassword` - [**Set Environment for a Service**](/api-reference/services/set-environment-for-a-service) - `POST /projects/{project_id}/services/{service_id}/setEnvironment` diff --git a/api-reference/tiger-cloud-rest-api/openapi.yaml b/api-reference/tiger-cloud-rest-api/openapi.yaml index 0da0641..a4a30e3 100644 --- a/api-reference/tiger-cloud-rest-api/openapi.yaml +++ b/api-reference/tiger-cloud-rest-api/openapi.yaml @@ -1,33 +1,8 @@ openapi: 3.0.3 info: - title: Tiger Cloud REST API + title: Tiger Cloud API description: | - A comprehensive RESTful API for managing Tiger Cloud resources including VPCs, services, and read replicas. - - ## Authentication - - The Tiger Cloud REST API uses HTTP Basic Authentication. Include your access key and secret key in the Authorization header. 
- - ### Basic Authentication - ```http - Authorization: Basic - ``` - - ### Example - ```bash - # Using cURL - curl -X GET "https://console.cloud.timescale.com/public/api/v1/projects/{project_id}/services" \ - -H "Authorization: Basic $(echo -n 'your_access_key:your_secret_key' | base64)" - ``` - - ## Service Management - - You use this endpoint to create a Tiger Cloud service with one or more of the following addons: - - - `time-series`: a Tiger Cloud service optimized for real-time analytics. For time-stamped data like events, prices, metrics, sensor readings, or any information that changes over time. - - `ai`: a Tiger Cloud service instance with vector extensions. - - To have multiple addons when you create a new service, set `"addons": ["time-series", "ai"]`. To create a vanilla Postgres instance, set `addons` to an empty list `[]`. + A RESTful API for Tiger Cloud. version: 1.0.0 license: name: Proprietary @@ -37,293 +12,392 @@ info: url: https://www.tigerdata.com/contact servers: - url: https://console.cloud.timescale.com/public/api/v1 - description: Tiger Cloud API server + description: API server for Tiger Cloud - url: http://localhost:8080 description: Local development server +security: + - basicAuth: [] + tags: - - name: Services - description: Manage services, read replicas, and their associated actions. + - name: Auth + description: Authentication and authorization information. - name: VPCs description: Manage VPCs and their peering connections. + - name: Services + description: Manage services, read replicas, and their associated actions. - name: Analytics description: Track analytics events. - paths: - /projects/{project_id}/services: + /auth/info: get: tags: - - Services - summary: List All Services - description: Retrieves a list of all services within a specific project. 
+ - Auth + summary: Get Authentication Info + description: Returns information about the authentication credentials being used to access the API + responses: + '200': + description: Authentication information retrieved successfully. + content: + application/json: + schema: + $ref: '#/components/schemas/AuthInfo' + '4XX': + $ref: '#/components/responses/ClientError' + + /analytics/identify: + post: + tags: + - Analytics + summary: Identify a user + description: Identifies a user with optional properties for analytics tracking. + requestBody: + required: true + content: + application/json: + schema: + type: object + properties: + properties: + type: object + additionalProperties: true + description: Optional map of arbitrary properties associated with the user + example: + email: "user@example.com" + name: "John Doe" + responses: + '200': + $ref: '#/components/responses/AnalyticsResponse' + '4XX': + $ref: '#/components/responses/ClientError' + + /analytics/track: + post: + tags: + - Analytics + summary: Track an analytics event + description: Tracks an analytics event with optional properties. + requestBody: + required: true + content: + application/json: + schema: + type: object + required: + - event + properties: + event: + type: string + description: The name of the event to track + example: service_created + properties: + type: object + additionalProperties: true + description: Optional map of arbitrary properties associated with the event + example: + region: "us-east-1" + responses: + '200': + $ref: '#/components/responses/AnalyticsResponse' + '4XX': + $ref: '#/components/responses/ClientError' + + /projects/{project_id}/vpcs: + get: + tags: + - VPCs parameters: - $ref: '#/components/parameters/ProjectId' + summary: List All VPCs + description: Retrieves a list of all Virtual Private Clouds (VPCs). responses: '200': - description: A list of services. + description: A list of VPCs. 
content: application/json: schema: type: array items: - $ref: '#/components/schemas/Service' + $ref: '#/components/schemas/VPC' '4XX': $ref: '#/components/responses/ClientError' post: tags: - - Services - summary: Create a Service - description: Creates a new database service within a project. This is an asynchronous operation. + - VPCs parameters: - $ref: '#/components/parameters/ProjectId' + summary: Create a VPC + description: Creates a new Virtual Private Cloud (VPC). requestBody: required: true content: application/json: schema: - $ref: '#/components/schemas/ServiceCreate' + $ref: '#/components/schemas/VPCCreate' responses: - '202': - description: Service creation request has been accepted. + '201': + description: VPC created successfully. content: application/json: schema: - $ref: '#/components/schemas/Service' + $ref: '#/components/schemas/VPC' '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/services/{service_id}: + + /projects/{project_id}/vpcs/{vpc_id}: get: tags: - - Services - summary: Get a Service - description: Retrieves the details of a specific service by its ID. + - VPCs parameters: - $ref: '#/components/parameters/ProjectId' - - $ref: '#/components/parameters/ServiceId' + - $ref: '#/components/parameters/VPCId' + summary: Get a VPC + description: Retrieves the details of a specific VPC by its ID. responses: '200': - description: Service details. + description: VPC details. content: application/json: schema: - $ref: '#/components/schemas/Service' + $ref: '#/components/schemas/VPC' '4XX': $ref: '#/components/responses/ClientError' delete: tags: - - Services - summary: Delete a Service - description: Deletes a specific service. This is an asynchronous operation. + - VPCs + summary: Delete a VPC + description: Deletes a specific VPC. 
parameters: - $ref: '#/components/parameters/ProjectId' - - $ref: '#/components/parameters/ServiceId' + - $ref: '#/components/parameters/VPCId' responses: - '202': - description: Deletion request has been accepted. + '204': + description: VPC deleted successfully. '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/services/{service_id}/attachToVPC: + + /projects/{project_id}/vpcs/{vpc_id}/rename: post: tags: - - Services - summary: Attach Service to VPC - description: Associates a service with a VPC. + - VPCs + summary: Rename a VPC + description: Updates the name of a specific VPC. parameters: - $ref: '#/components/parameters/ProjectId' - - $ref: '#/components/parameters/ServiceId' + - $ref: '#/components/parameters/VPCId' requestBody: required: true content: application/json: schema: - $ref: '#/components/schemas/ServiceVPCInput' + $ref: '#/components/schemas/VPCRename' responses: - '202': - $ref: '#/components/responses/SuccessMessage' + '200': + description: VPC renamed successfully. + content: + application/json: + schema: + $ref: '#/components/schemas/VPC' '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/services/{service_id}/detachFromVPC: - post: + + /projects/{project_id}/vpcs/{vpc_id}/peerings: + get: tags: - - Services - summary: Detach Service from VPC - description: Disassociates a service from its VPC. + - VPCs + summary: List VPC Peerings + description: Retrieves a list of all VPC peering connections for a given VPC. parameters: - $ref: '#/components/parameters/ProjectId' - - $ref: '#/components/parameters/ServiceId' - requestBody: - required: true - content: - application/json: - schema: - $ref: '#/components/schemas/ServiceVPCInput' + - $ref: '#/components/parameters/VPCId' responses: - '202': - $ref: '#/components/responses/SuccessMessage' + '200': + description: A list of VPC peering connections. 
+ content: + application/json: + schema: + type: array + items: + $ref: '#/components/schemas/Peering' '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/services/{service_id}/resize: post: tags: - - Services - summary: Resize a Service - description: Changes the CPU and memory allocation for a specific service within a project. + - VPCs + summary: Create a VPC Peering + description: Creates a new VPC peering connection. parameters: - $ref: '#/components/parameters/ProjectId' - - $ref: '#/components/parameters/ServiceId' + - $ref: '#/components/parameters/VPCId' requestBody: required: true content: application/json: schema: - $ref: '#/components/schemas/ResizeInput' + $ref: '#/components/schemas/PeeringCreate' responses: - '202': - description: Resize request has been accepted and is in progress. + '201': + description: VPC peering created successfully. + content: + application/json: + schema: + $ref: '#/components/schemas/Peering' '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/services/{service_id}/enablePooler: - post: + + /projects/{project_id}/vpcs/{vpc_id}/peerings/{peering_id}: + get: tags: - - Services - summary: Enable Connection Pooler for a Service - description: Activates the connection pooler for a specific service within a project. + - VPCs + summary: Get a VPC Peering + description: Retrieves the details of a specific VPC peering connection. parameters: - $ref: '#/components/parameters/ProjectId' - - $ref: '#/components/parameters/ServiceId' + - $ref: '#/components/parameters/VPCId' + - $ref: '#/components/parameters/PeeringId' responses: '200': - $ref: '#/components/responses/SuccessMessage' + description: VPC peering details. 
+ content: + application/json: + schema: + $ref: '#/components/schemas/Peering' '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/services/{service_id}/disablePooler: - post: + delete: + tags: + - VPCs + summary: Delete a VPC Peering + description: Deletes a specific VPC peering connection. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/VPCId' + - $ref: '#/components/parameters/PeeringId' + responses: + '204': + description: VPC peering deleted successfully. + '4XX': + $ref: '#/components/responses/ClientError' + + /projects/{project_id}/services: + get: tags: - Services - summary: Disable Connection Pooler for a Service - description: Deactivates the connection pooler for a specific service within a project. + summary: List All Services + description: Retrieves a list of all services within a specific project. parameters: - $ref: '#/components/parameters/ProjectId' - - $ref: '#/components/parameters/ServiceId' responses: '200': - $ref: '#/components/responses/SuccessMessage' + description: A list of services. + content: + application/json: + schema: + type: array + items: + $ref: '#/components/schemas/Service' '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/services/{service_id}/forkService: post: tags: - Services - summary: Fork a Service - description: Creates a new, independent service within a project by taking a snapshot of an existing one. + summary: Create a Service + description: Creates a new database service within a project. This is an asynchronous operation. parameters: - $ref: '#/components/parameters/ProjectId' - - $ref: '#/components/parameters/ServiceId' requestBody: required: true content: application/json: schema: - $ref: '#/components/schemas/ForkServiceCreate' + $ref: '#/components/schemas/ServiceCreate' responses: '202': - description: Fork request accepted. The response contains the details of the new service being created. 
+ description: Service creation request has been accepted. content: application/json: schema: $ref: '#/components/schemas/Service' '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/services/{service_id}/updatePassword: - post: + + /projects/{project_id}/services/{service_id}: + get: tags: - Services - summary: Update Service Password - description: Sets a new master password for the service within a project. + summary: Get a Service + description: Retrieves the details of a specific service by its ID. parameters: - $ref: '#/components/parameters/ProjectId' - $ref: '#/components/parameters/ServiceId' - requestBody: - required: true - content: - application/json: - schema: - $ref: '#/components/schemas/UpdatePasswordInput' responses: - '204': - description: Password updated successfully. No content returned. + '200': + description: Service details. + content: + application/json: + schema: + $ref: '#/components/schemas/Service' '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/services/{service_id}/setEnvironment: - post: + delete: tags: - Services - summary: Set Environment for a Service - description: Sets the environment type for the service. + summary: Delete a Service + description: Deletes a specific service. This is an asynchronous operation. parameters: - $ref: '#/components/parameters/ProjectId' - $ref: '#/components/parameters/ServiceId' - requestBody: - required: true - content: - application/json: - schema: - $ref: '#/components/schemas/SetEnvironmentInput' responses: - '200': - $ref: '#/components/responses/SuccessMessage' + '202': + description: Deletion request has been accepted. '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/services/{service_id}/setHA: + + /projects/{project_id}/services/{service_id}/start: post: tags: - Services - summary: Change HA configuration for a Service - description: Changes the HA configuration for a specific service. 
This is an asynchronous operation. + summary: Start a Service + description: Starts a stopped service within a project. This is an asynchronous operation. parameters: - $ref: '#/components/parameters/ProjectId' - $ref: '#/components/parameters/ServiceId' - requestBody: - required: true - content: - application/json: - schema: - $ref: '#/components/schemas/SetHAReplicaInput' responses: '202': - description: HA replica configuration updated + description: Service start request has been accepted. content: application/json: schema: $ref: '#/components/schemas/Service' '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/services/{service_id}/replicaSets: - get: + + /projects/{project_id}/services/{service_id}/stop: + post: tags: - - Read Replica Sets - summary: Get Read Replica Sets - description: Retrieves a list of all read replica sets associated with a primary service within a project. + - Services + summary: Stop a Service + description: Stops a running service within a project. This is an asynchronous operation. parameters: - $ref: '#/components/parameters/ProjectId' - $ref: '#/components/parameters/ServiceId' responses: - '200': - description: A list of read replica sets. + '202': + description: Service stop request has been accepted. content: application/json: schema: - type: array - items: - $ref: '#/components/schemas/ReadReplicaSet' + $ref: '#/components/schemas/Service' '4XX': $ref: '#/components/responses/ClientError' + + /projects/{project_id}/services/{service_id}/attachToVPC: post: tags: - - Read Replica Sets - summary: Create a Read Replica Set - description: Creates a new read replica set for a service. This is an asynchronous operation. + - Services + summary: Attach Service to VPC + description: Associates a service with a VPC. 
parameters: - $ref: '#/components/parameters/ProjectId' - $ref: '#/components/parameters/ServiceId' @@ -332,41 +406,43 @@ paths: content: application/json: schema: - $ref: '#/components/schemas/ReadReplicaSetCreate' + $ref: '#/components/schemas/ServiceVPCInput' responses: '202': - description: Read replica set creation request has been accepted. - content: - application/json: - schema: - $ref: '#/components/schemas/ReadReplicaSet' + $ref: '#/components/responses/SuccessMessage' '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}: - delete: + + /projects/{project_id}/services/{service_id}/detachFromVPC: + post: tags: - - Read Replica Sets - summary: Delete a Read Replica Set - description: Deletes a specific read replica set. This is an asynchronous operation. + - Services + summary: Detach Service from VPC + description: Disassociates a service from its VPC. parameters: - $ref: '#/components/parameters/ProjectId' - $ref: '#/components/parameters/ServiceId' - - $ref: '#/components/parameters/ReplicaSetId' + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/ServiceVPCInput' responses: '202': - description: Deletion request has been accepted. + $ref: '#/components/responses/SuccessMessage' '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}/resize: + + /projects/{project_id}/services/{service_id}/resize: post: tags: - - Read Replica Sets - summary: Resize a Read Replica Set - description: Changes the resource allocation for a specific read replica set. + - Services + summary: Resize a Service + description: Changes the CPU and memory allocation for a specific service within a project. 
parameters: - $ref: '#/components/parameters/ProjectId' - $ref: '#/components/parameters/ServiceId' - - $ref: '#/components/parameters/ReplicaSetId' requestBody: required: true content: @@ -378,46 +454,92 @@ paths: description: Resize request has been accepted and is in progress. '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}/enablePooler: + + /projects/{project_id}/services/{service_id}/enablePooler: post: tags: - - Read Replica Sets - summary: Enable Connection Pooler for a Read Replica - description: Activates the connection pooler for a specific read replica set. + - Services + summary: Enable Connection Pooler for a Service + description: Activates the connection pooler for a specific service within a project. parameters: - $ref: '#/components/parameters/ProjectId' - $ref: '#/components/parameters/ServiceId' - - $ref: '#/components/parameters/ReplicaSetId' responses: '200': $ref: '#/components/responses/SuccessMessage' '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}/disablePooler: + + /projects/{project_id}/services/{service_id}/disablePooler: post: tags: - - Read Replica Sets - summary: Disable Connection Pooler for a Read Replica - description: Deactivates the connection pooler for a specific read replica set. + - Services + summary: Disable Connection Pooler for a Service + description: Deactivates the connection pooler for a specific service within a project. 
parameters: - $ref: '#/components/parameters/ProjectId' - $ref: '#/components/parameters/ServiceId' - - $ref: '#/components/parameters/ReplicaSetId' responses: '200': $ref: '#/components/responses/SuccessMessage' '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}/setEnvironment: + + /projects/{project_id}/services/{service_id}/forkService: + post: + tags: + - Services + summary: Fork a Service + description: Creates a new, independent service within a project by taking a snapshot of an existing one. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/ForkServiceCreate' + responses: + '202': + description: Fork request accepted. The response contains the details of the new service being created. + content: + application/json: + schema: + $ref: '#/components/schemas/Service' + '4XX': + $ref: '#/components/responses/ClientError' + + /projects/{project_id}/services/{service_id}/updatePassword: + post: + tags: + - Services + summary: Update Service Password + description: Sets a new master password for the service within a project. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/UpdatePasswordInput' + responses: + '204': + description: Password updated successfully. No content returned. + '4XX': + $ref: '#/components/responses/ClientError' + + /projects/{project_id}/services/{service_id}/setEnvironment: post: tags: - - Read Replica Sets - summary: Set Environment for a Read Replica - description: Sets the environment type for the read replica set. + - Services + summary: Set Environment for a Service + description: Sets the environment type for the service. 
parameters: - $ref: '#/components/parameters/ProjectId' - $ref: '#/components/parameters/ServiceId' - - $ref: '#/components/parameters/ReplicaSetId' requestBody: required: true content: @@ -429,235 +551,184 @@ paths: $ref: '#/components/responses/SuccessMessage' '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/vpcs: - get: - tags: - - VPCs - parameters: - - $ref: '#/components/parameters/ProjectId' - summary: List All VPCs - description: Retrieves a list of all Virtual Private Clouds (VPCs). - responses: - '200': - description: A list of VPCs. - content: - application/json: - schema: - type: array - items: - $ref: '#/components/schemas/VPC' - '4XX': - $ref: '#/components/responses/ClientError' + + /projects/{project_id}/services/{service_id}/setHA: post: tags: - - VPCs + - Services + summary: Change HA configuration for a Service + description: Changes the HA configuration for a specific service. This is an asynchronous operation. parameters: - $ref: '#/components/parameters/ProjectId' - summary: Create a VPC - description: Creates a new Virtual Private Cloud (VPC). + - $ref: '#/components/parameters/ServiceId' requestBody: required: true content: application/json: schema: - $ref: '#/components/schemas/VPCCreate' + $ref: '#/components/schemas/SetHAReplicaInput' responses: - '201': - description: VPC created successfully. + '202': + description: HA replica configuration updated content: application/json: schema: - $ref: '#/components/schemas/VPC' + $ref: '#/components/schemas/Service' '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/vpcs/{vpc_id}: + + /projects/{project_id}/services/{service_id}/replicaSets: get: tags: - - VPCs + - Read Replica Sets + summary: Get Read Replica Sets + description: Retrieves a list of all read replica sets associated with a primary service within a project. 
parameters: - $ref: '#/components/parameters/ProjectId' - - $ref: '#/components/parameters/VPCId' - summary: Get a VPC - description: Retrieves the details of a specific VPC by its ID. + - $ref: '#/components/parameters/ServiceId' responses: '200': - description: VPC details. + description: A list of read replica sets. content: application/json: schema: - $ref: '#/components/schemas/VPC' - '4XX': - $ref: '#/components/responses/ClientError' - delete: - tags: - - VPCs - summary: Delete a VPC - description: Deletes a specific VPC. - parameters: - - $ref: '#/components/parameters/ProjectId' - - $ref: '#/components/parameters/VPCId' - responses: - '204': - description: VPC deleted successfully. + type: array + items: + $ref: '#/components/schemas/ReadReplicaSet' '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/vpcs/{vpc_id}/rename: post: tags: - - VPCs - summary: Rename a VPC - description: Updates the name of a specific VPC. + - Read Replica Sets + summary: Create a Read Replica Set + description: Creates a new read replica set for a service. This is an asynchronous operation. parameters: - $ref: '#/components/parameters/ProjectId' - - $ref: '#/components/parameters/VPCId' + - $ref: '#/components/parameters/ServiceId' requestBody: required: true content: application/json: schema: - $ref: '#/components/schemas/VPCRename' + $ref: '#/components/schemas/ReadReplicaSetCreate' responses: - '200': - description: VPC renamed successfully. + '202': + description: Read replica set creation request has been accepted. content: application/json: schema: - $ref: '#/components/schemas/VPC' + $ref: '#/components/schemas/ReadReplicaSet' '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/vpcs/{vpc_id}/peerings: - get: + + /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}: + delete: tags: - - VPCs - summary: List VPC Peerings - description: Retrieves a list of all VPC peering connections for a given VPC. 
+ - Read Replica Sets + summary: Delete a Read Replica Set + description: Deletes a specific read replica set. This is an asynchronous operation. parameters: - $ref: '#/components/parameters/ProjectId' - - $ref: '#/components/parameters/VPCId' + - $ref: '#/components/parameters/ServiceId' + - $ref: '#/components/parameters/ReplicaSetId' responses: - '200': - description: A list of VPC peering connections. - content: - application/json: - schema: - type: array - items: - $ref: '#/components/schemas/Peering' + '202': + description: Deletion request has been accepted. '4XX': $ref: '#/components/responses/ClientError' + + /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}/resize: post: tags: - - VPCs - summary: Create a VPC Peering - description: Creates a new VPC peering connection. + - Read Replica Sets + summary: Resize a Read Replica Set + description: Changes the resource allocation for a specific read replica set. parameters: - $ref: '#/components/parameters/ProjectId' - - $ref: '#/components/parameters/VPCId' + - $ref: '#/components/parameters/ServiceId' + - $ref: '#/components/parameters/ReplicaSetId' requestBody: required: true content: application/json: schema: - $ref: '#/components/schemas/PeeringCreate' + $ref: '#/components/schemas/ResizeInput' responses: - '201': - description: VPC peering created successfully. - content: - application/json: - schema: - $ref: '#/components/schemas/Peering' + '202': + description: Resize request has been accepted and is in progress. '4XX': $ref: '#/components/responses/ClientError' - /projects/{project_id}/vpcs/{vpc_id}/peerings/{peering_id}: - get: + + /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}/enablePooler: + post: tags: - - VPCs - summary: Get a VPC Peering - description: Retrieves the details of a specific VPC peering connection. 
+ - Read Replica Sets + summary: Enable Connection Pooler for a Read Replica + description: Activates the connection pooler for a specific read replica set. parameters: - $ref: '#/components/parameters/ProjectId' - - $ref: '#/components/parameters/VPCId' - - $ref: '#/components/parameters/PeeringId' + - $ref: '#/components/parameters/ServiceId' + - $ref: '#/components/parameters/ReplicaSetId' responses: '200': - description: VPC peering details. - content: - application/json: - schema: - $ref: '#/components/schemas/Peering' + $ref: '#/components/responses/SuccessMessage' '4XX': $ref: '#/components/responses/ClientError' - delete: + + /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}/disablePooler: + post: tags: - - VPCs - summary: Delete a VPC Peering - description: Deletes a specific VPC peering connection. + - Read Replica Sets + summary: Disable Connection Pooler for a Read Replica + description: Deactivates the connection pooler for a specific read replica set. parameters: - $ref: '#/components/parameters/ProjectId' - - $ref: '#/components/parameters/VPCId' - - $ref: '#/components/parameters/PeeringId' - responses: - '204': - description: VPC peering deleted successfully. - '4XX': - $ref: '#/components/responses/ClientError' - /analytics/identify: - post: - tags: - - Analytics - summary: Identify a user - description: Identifies a user with optional properties for analytics tracking. 
- requestBody: - required: true - content: - application/json: - schema: - type: object - properties: - properties: - type: object - additionalProperties: true - description: Optional map of arbitrary properties associated with the user - example: - email: "user@example.com" - name: "John Doe" + - $ref: '#/components/parameters/ServiceId' + - $ref: '#/components/parameters/ReplicaSetId' responses: '200': - $ref: '#/components/responses/AnalyticsResponse' + $ref: '#/components/responses/SuccessMessage' '4XX': $ref: '#/components/responses/ClientError' - /analytics/track: + + /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}/setEnvironment: post: tags: - - Analytics - summary: Track an analytics event - description: Tracks an analytics event with optional properties. + - Read Replica Sets + summary: Set Environment for a Read Replica + description: Sets the environment type for the read replica set. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + - $ref: '#/components/parameters/ReplicaSetId' requestBody: required: true content: application/json: schema: - type: object - required: - - event - properties: - event: - type: string - description: The name of the event to track - example: service_created - properties: - type: object - additionalProperties: true - description: Optional map of arbitrary properties associated with the event - example: - region: "us-east-1" + $ref: '#/components/schemas/SetEnvironmentInput' responses: '200': - $ref: '#/components/responses/AnalyticsResponse' + $ref: '#/components/responses/SuccessMessage' '4XX': $ref: '#/components/responses/ClientError' components: + securitySchemes: + basicAuth: + type: http + scheme: basic + description: | + HTTP Basic Authentication using your Tiger Cloud public key and secret key. 
+ + Format: `Authorization: Basic ` + + Example: + ```bash + curl -X GET "https://console.cloud.timescale.com/public/api/v1/projects/{project_id}/services" \ + -H "Authorization: Basic $(echo -n 'your_public_key:your_secret_key' | base64)" + ``` + parameters: ProjectId: name: project_id @@ -701,6 +772,81 @@ components: example: "1234567890" schemas: + AuthInfo: + type: object + required: + - type + - apiKey + properties: + type: + type: string + description: The type of authentication being used + enum: ["apiKey"] + example: "apiKey" + apiKey: + type: object + description: Information about the API key credentials + required: + - public_key + - name + - created + - project + - issuing_user + properties: + public_key: + type: string + description: The public key of the client credentials + example: "tskey_abc123" + name: + type: string + description: The name of the credential + example: "my-production-token" + created: + type: string + format: date-time + description: When the client credentials were created + example: "2024-01-15T10:30:00Z" + project: + type: object + description: Information about the project + required: + - id + - name + - plan_type + properties: + id: + type: string + description: The project ID + example: "rp1pz7uyae" + name: + type: string + description: The name of the project + example: "My Production Project" + plan_type: + type: string + description: The plan type for the project + example: "FREE" + issuing_user: + type: object + description: Information about the user who created the credentials + required: + - id + - name + - email + properties: + id: + type: string + description: The user ID + example: "user123" + name: + type: string + description: The user's name + example: "John Doe" + email: + type: string + format: email + description: The user's email + example: "john.doe@example.com" VPC: type: object properties: diff --git a/api-reference/tiger-cloud-rest-api/openapi.yaml.old 
b/api-reference/tiger-cloud-rest-api/openapi.yaml.old new file mode 100644 index 0000000..5a502f4 --- /dev/null +++ b/api-reference/tiger-cloud-rest-api/openapi.yaml.old @@ -0,0 +1,1186 @@ +openapi: 3.0.3 +info: + title: Tiger Cloud REST API + description: | + A comprehensive RESTful API for managing Tiger Cloud resources including VPCs, services, and read replicas. + + ## Authentication + + The Tiger Cloud REST API uses HTTP Basic Authentication. Include your public key and secret key in the Authorization header. + + ### Basic Authentication + ```http + Authorization: Basic + ``` + + ### Example + ```bash + # Using cURL + curl -X GET "https://console.cloud.timescale.com/public/api/v1/projects/{project_id}/services" \ + -H "Authorization: Basic $(echo -n 'your_public_key:your_secret_key' | base64)" + ``` + + ## Service Management + + You use this endpoint to create a Tiger Cloud service with one or more of the following addons: + + - `time-series`: a Tiger Cloud service optimized for real-time analytics. For time-stamped data like events, prices, metrics, sensor readings, or any information that changes over time. + - `ai`: a Tiger Cloud service instance with vector extensions. + + To have multiple addons when you create a new service, set `"addons": ["time-series", "ai"]`. To create a vanilla Postgres instance, set `addons` to an empty list `[]`. + version: 1.0.0 + license: + name: Proprietary + url: https://www.tigerdata.com/legal/terms + contact: + name: Tiger Data Support + url: https://www.tigerdata.com/contact +servers: + - url: https://console.cloud.timescale.com/public/api/v1 + description: Tiger Cloud API server + - url: http://localhost:8080 + description: Local development server + +security: + - basicAuth: [] + +tags: + - name: Services + description: Manage services, read replicas, and their associated actions. + - name: VPCs + description: Manage VPCs and their peering connections. + - name: Analytics + description: Track analytics events. 
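The Basic Authentication scheme described in the `info` block above can be sketched as follows. The key values are placeholders, and the header is printed rather than sent to the API:

```shell
# Build the Basic auth header from placeholder credentials.
# Replace these with your real Tiger Cloud public and secret keys.
PUBLIC_KEY="your_public_key"
SECRET_KEY="your_secret_key"

# base64-encode "public:secret" with no trailing newline (printf, not echo).
AUTH=$(printf '%s' "${PUBLIC_KEY}:${SECRET_KEY}" | base64)

# The resulting header, as used by every endpoint in this spec:
echo "Authorization: Basic ${AUTH}"
```

This matches the `$(echo -n '…' | base64)` pattern in the spec's own curl example; `printf '%s'` is used here because `echo -n` is not portable across shells.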
+ + +paths: + /projects/{project_id}/services: + get: + tags: + - Services + summary: List All Services + description: Retrieves a list of all services within a specific project. + parameters: + - $ref: '#/components/parameters/ProjectId' + responses: + '200': + description: A list of services. + content: + application/json: + schema: + type: array + items: + $ref: '#/components/schemas/Service' + '4XX': + $ref: '#/components/responses/ClientError' + post: + tags: + - Services + summary: Create a Service + description: Creates a new database service within a project. This is an asynchronous operation. + parameters: + - $ref: '#/components/parameters/ProjectId' + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/ServiceCreate' + responses: + '202': + description: Service creation request has been accepted. + content: + application/json: + schema: + $ref: '#/components/schemas/Service' + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/services/{service_id}: + get: + tags: + - Services + summary: Get a Service + description: Retrieves the details of a specific service by its ID. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + responses: + '200': + description: Service details. + content: + application/json: + schema: + $ref: '#/components/schemas/Service' + '4XX': + $ref: '#/components/responses/ClientError' + delete: + tags: + - Services + summary: Delete a Service + description: Deletes a specific service. This is an asynchronous operation. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + responses: + '202': + description: Deletion request has been accepted. 
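A request body for the create-service endpoint above can be sketched from the `ServiceCreate` schema (`name` is required; `addons` selects `time-series` and/or `ai`). The project ID is the spec's example value, and the request is printed rather than sent:

```shell
BASE_URL="https://console.cloud.timescale.com/public/api/v1"
PROJECT_ID="rp1pz7uyae"   # example project ID from this spec

# ServiceCreate body: both addons enabled; use "addons": [] for vanilla Postgres.
PAYLOAD='{
  "name": "my-production-db",
  "addons": ["time-series", "ai"],
  "region_code": "us-east-1"
}'

# Print the request instead of sending it (credentials are placeholders).
echo "POST ${BASE_URL}/projects/${PROJECT_ID}/services"
echo "${PAYLOAD}"
```

A `202` response means the creation request was accepted; the service is provisioned asynchronously, so poll `GET /projects/{project_id}/services/{service_id}` until `status` reaches `READY`.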
+ '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/services/{service_id}/attachToVPC: + post: + tags: + - Services + summary: Attach Service to VPC + description: Associates a service with a VPC. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/ServiceVPCInput' + responses: + '202': + $ref: '#/components/responses/SuccessMessage' + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/services/{service_id}/detachFromVPC: + post: + tags: + - Services + summary: Detach Service from VPC + description: Disassociates a service from its VPC. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/ServiceVPCInput' + responses: + '202': + $ref: '#/components/responses/SuccessMessage' + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/services/{service_id}/resize: + post: + tags: + - Services + summary: Resize a Service + description: Changes the CPU and memory allocation for a specific service within a project. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/ResizeInput' + responses: + '202': + description: Resize request has been accepted and is in progress. + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/services/{service_id}/enablePooler: + post: + tags: + - Services + summary: Enable Connection Pooler for a Service + description: Activates the connection pooler for a specific service within a project. 
+ parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + responses: + '200': + $ref: '#/components/responses/SuccessMessage' + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/services/{service_id}/disablePooler: + post: + tags: + - Services + summary: Disable Connection Pooler for a Service + description: Deactivates the connection pooler for a specific service within a project. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + responses: + '200': + $ref: '#/components/responses/SuccessMessage' + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/services/{service_id}/forkService: + post: + tags: + - Services + summary: Fork a Service + description: Creates a new, independent service within a project by taking a snapshot of an existing one. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/ForkServiceCreate' + responses: + '202': + description: Fork request accepted. The response contains the details of the new service being created. + content: + application/json: + schema: + $ref: '#/components/schemas/Service' + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/services/{service_id}/updatePassword: + post: + tags: + - Services + summary: Update Service Password + description: Sets a new master password for the service within a project. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/UpdatePasswordInput' + responses: + '204': + description: Password updated successfully. No content returned. 
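The fork endpoint above accepts a `ForkServiceCreate` body, whose fields are not shown in this excerpt; the field names below are therefore assumptions. The strategy values, however, come directly from the `ForkStrategy` enum defined later in the spec:

```shell
# Hypothetical fork body -- the real ForkServiceCreate field names may differ.
# Valid strategies per the ForkStrategy enum: LAST_SNAPSHOT, NOW, PITR.
FORK_STRATEGY="LAST_SNAPSHOT"

case "$FORK_STRATEGY" in
  LAST_SNAPSHOT|NOW|PITR) echo "strategy ok: $FORK_STRATEGY" ;;
  *) echo "invalid fork strategy" >&2; exit 1 ;;
esac

echo "POST /projects/{project_id}/services/{service_id}/forkService"
```

Per the enum's own documentation, `LAST_SNAPSHOT` reuses an existing snapshot for a fast fork, `NOW` snapshots first for an up-to-date fork, and `PITR` recovers to a target time.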
+ '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/services/{service_id}/setEnvironment: + post: + tags: + - Services + summary: Set Environment for a Service + description: Sets the environment type for the service. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/SetEnvironmentInput' + responses: + '200': + $ref: '#/components/responses/SuccessMessage' + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/services/{service_id}/setHA: + post: + tags: + - Services + summary: Change HA configuration for a Service + description: Changes the HA configuration for a specific service. This is an asynchronous operation. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/SetHAReplicaInput' + responses: + '202': + description: HA replica configuration updated + content: + application/json: + schema: + $ref: '#/components/schemas/Service' + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/services/{service_id}/replicaSets: + get: + tags: + - Read Replica Sets + summary: Get Read Replica Sets + description: Retrieves a list of all read replica sets associated with a primary service within a project. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + responses: + '200': + description: A list of read replica sets. + content: + application/json: + schema: + type: array + items: + $ref: '#/components/schemas/ReadReplicaSet' + '4XX': + $ref: '#/components/responses/ClientError' + post: + tags: + - Read Replica Sets + summary: Create a Read Replica Set + description: Creates a new read replica set for a service. 
This is an asynchronous operation. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/ReadReplicaSetCreate' + responses: + '202': + description: Read replica set creation request has been accepted. + content: + application/json: + schema: + $ref: '#/components/schemas/ReadReplicaSet' + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}: + delete: + tags: + - Read Replica Sets + summary: Delete a Read Replica Set + description: Deletes a specific read replica set. This is an asynchronous operation. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + - $ref: '#/components/parameters/ReplicaSetId' + responses: + '202': + description: Deletion request has been accepted. + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}/resize: + post: + tags: + - Read Replica Sets + summary: Resize a Read Replica Set + description: Changes the resource allocation for a specific read replica set. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + - $ref: '#/components/parameters/ReplicaSetId' + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/ResizeInput' + responses: + '202': + description: Resize request has been accepted and is in progress. + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}/enablePooler: + post: + tags: + - Read Replica Sets + summary: Enable Connection Pooler for a Read Replica + description: Activates the connection pooler for a specific read replica set. 
+ parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + - $ref: '#/components/parameters/ReplicaSetId' + responses: + '200': + $ref: '#/components/responses/SuccessMessage' + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}/disablePooler: + post: + tags: + - Read Replica Sets + summary: Disable Connection Pooler for a Read Replica + description: Deactivates the connection pooler for a specific read replica set. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + - $ref: '#/components/parameters/ReplicaSetId' + responses: + '200': + $ref: '#/components/responses/SuccessMessage' + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}/setEnvironment: + post: + tags: + - Read Replica Sets + summary: Set Environment for a Read Replica + description: Sets the environment type for the read replica set. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/ServiceId' + - $ref: '#/components/parameters/ReplicaSetId' + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/SetEnvironmentInput' + responses: + '200': + $ref: '#/components/responses/SuccessMessage' + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/vpcs: + get: + tags: + - VPCs + parameters: + - $ref: '#/components/parameters/ProjectId' + summary: List All VPCs + description: Retrieves a list of all Virtual Private Clouds (VPCs). + responses: + '200': + description: A list of VPCs. 
+ content: + application/json: + schema: + type: array + items: + $ref: '#/components/schemas/VPC' + '4XX': + $ref: '#/components/responses/ClientError' + post: + tags: + - VPCs + parameters: + - $ref: '#/components/parameters/ProjectId' + summary: Create a VPC + description: Creates a new Virtual Private Cloud (VPC). + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/VPCCreate' + responses: + '201': + description: VPC created successfully. + content: + application/json: + schema: + $ref: '#/components/schemas/VPC' + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/vpcs/{vpc_id}: + get: + tags: + - VPCs + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/VPCId' + summary: Get a VPC + description: Retrieves the details of a specific VPC by its ID. + responses: + '200': + description: VPC details. + content: + application/json: + schema: + $ref: '#/components/schemas/VPC' + '4XX': + $ref: '#/components/responses/ClientError' + delete: + tags: + - VPCs + summary: Delete a VPC + description: Deletes a specific VPC. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/VPCId' + responses: + '204': + description: VPC deleted successfully. + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/vpcs/{vpc_id}/rename: + post: + tags: + - VPCs + summary: Rename a VPC + description: Updates the name of a specific VPC. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/VPCId' + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/VPCRename' + responses: + '200': + description: VPC renamed successfully. 
+ content: + application/json: + schema: + $ref: '#/components/schemas/VPC' + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/vpcs/{vpc_id}/peerings: + get: + tags: + - VPCs + summary: List VPC Peerings + description: Retrieves a list of all VPC peering connections for a given VPC. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/VPCId' + responses: + '200': + description: A list of VPC peering connections. + content: + application/json: + schema: + type: array + items: + $ref: '#/components/schemas/Peering' + '4XX': + $ref: '#/components/responses/ClientError' + post: + tags: + - VPCs + summary: Create a VPC Peering + description: Creates a new VPC peering connection. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/VPCId' + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/PeeringCreate' + responses: + '201': + description: VPC peering created successfully. + content: + application/json: + schema: + $ref: '#/components/schemas/Peering' + '4XX': + $ref: '#/components/responses/ClientError' + /projects/{project_id}/vpcs/{vpc_id}/peerings/{peering_id}: + get: + tags: + - VPCs + summary: Get a VPC Peering + description: Retrieves the details of a specific VPC peering connection. + parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/VPCId' + - $ref: '#/components/parameters/PeeringId' + responses: + '200': + description: VPC peering details. + content: + application/json: + schema: + $ref: '#/components/schemas/Peering' + '4XX': + $ref: '#/components/responses/ClientError' + delete: + tags: + - VPCs + summary: Delete a VPC Peering + description: Deletes a specific VPC peering connection. 
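A body for the create-peering endpoint above can be sketched from the `PeeringCreate` schema, which requires `peer_account_id`, `peer_region_code`, and `peer_vpc_id` (values below are the schema's own examples; the request is printed, not sent):

```shell
# PeeringCreate: all three fields are required.
PEERING_PAYLOAD='{
  "peer_account_id": "acc-12345",
  "peer_region_code": "aws-us-east-1",
  "peer_vpc_id": "1234567890"
}'

echo "POST /projects/{project_id}/vpcs/{vpc_id}/peerings"
echo "${PEERING_PAYLOAD}"
```

A `201` returns the new `Peering` object; its `status` and `error_message` fields indicate whether the peering handshake with the remote cloud account succeeded.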
+ parameters: + - $ref: '#/components/parameters/ProjectId' + - $ref: '#/components/parameters/VPCId' + - $ref: '#/components/parameters/PeeringId' + responses: + '204': + description: VPC peering deleted successfully. + '4XX': + $ref: '#/components/responses/ClientError' + /analytics/identify: + post: + tags: + - Analytics + summary: Identify a user + description: Identifies a user with optional properties for analytics tracking. + requestBody: + required: true + content: + application/json: + schema: + type: object + properties: + properties: + type: object + additionalProperties: true + description: Optional map of arbitrary properties associated with the user + example: + email: "user@example.com" + name: "John Doe" + responses: + '200': + $ref: '#/components/responses/AnalyticsResponse' + '4XX': + $ref: '#/components/responses/ClientError' + /analytics/track: + post: + tags: + - Analytics + summary: Track an analytics event + description: Tracks an analytics event with optional properties. + requestBody: + required: true + content: + application/json: + schema: + type: object + required: + - event + properties: + event: + type: string + description: The name of the event to track + example: service_created + properties: + type: object + additionalProperties: true + description: Optional map of arbitrary properties associated with the event + example: + region: "us-east-1" + responses: + '200': + $ref: '#/components/responses/AnalyticsResponse' + '4XX': + $ref: '#/components/responses/ClientError' + +components: + securitySchemes: + basicAuth: + type: http + scheme: basic + description: | + HTTP Basic Authentication using your Tiger Cloud public key and secret key. 
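The `/analytics/track` endpoint above takes an inline schema where only `event` is required and `properties` is an arbitrary map. A minimal body, using the schema's example values and printed rather than sent:

```shell
# /analytics/track body: `event` is required; `properties` is free-form.
EVENT_PAYLOAD='{
  "event": "service_created",
  "properties": {"region": "us-east-1"}
}'

echo "POST /analytics/track"
echo "${EVENT_PAYLOAD}"
```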
+ + Format: `Authorization: Basic ` + + Example: + ```bash + curl -X GET "https://console.cloud.timescale.com/public/api/v1/projects/{project_id}/services" \ + -H "Authorization: Basic $(echo -n 'your_public_key:your_secret_key' | base64)" + ``` + x-basic-info-func: | + basicAuth: + username: + label: Public Key + password: + label: Secret Key + + parameters: + ProjectId: + name: project_id + in: path + required: true + description: The unique identifier of the project. + schema: + type: string + example: "rp1pz7uyae" + ServiceId: + name: service_id + in: path + required: true + description: The unique identifier of the service. + schema: + type: string + example: "d1k5vk7hf2" + ReplicaSetId: + name: replica_set_id + in: path + required: true + description: The unique identifier of the read replica set. + schema: + type: string + example: "alb8jicdpr" + VPCId: + name: vpc_id + in: path + required: true + description: The unique identifier of the VPC. + schema: + type: string + example: "1234567890" + PeeringId: + name: peering_id + in: path + required: true + description: The unique identifier of the VPC peering connection. + schema: + type: string + example: "1234567890" + + schemas: + VPC: + type: object + properties: + id: + type: string + readOnly: true + example: "1234567890" + name: + type: string + example: "my-production-vpc" + cidr: + type: string + example: "10.0.0.0/16" + region_code: + type: string + example: "us-east-1" + VPCCreate: + type: object + required: + - name + - cidr + - region_code + properties: + name: + type: string + example: "my-production-vpc" + cidr: + type: string + example: "10.0.0.0/16" + region_code: + type: string + example: "us-east-1" + VPCRename: + type: object + required: + - name + properties: + name: + type: string + description: The new name for the VPC. 
+ example: "my-renamed-vpc" + Peering: + type: object + properties: + id: + type: string + readOnly: true + example: "1234567890" + peer_account_id: + type: string + example: "acc-12345" + peer_region_code: + type: string + example: "aws-us-east-1" + peer_vpc_id: + type: string + example: "1234567890" + provisioned_id: + type: string + example: "1234567890" + status: + type: string + example: "active" + error_message: + type: string + example: "VPC not found" + PeeringCreate: + type: object + required: + - peer_account_id + - peer_region_code + - peer_vpc_id + properties: + peer_account_id: + type: string + example: "acc-12345" + peer_region_code: + type: string + example: "aws-us-east-1" + peer_vpc_id: + type: string + example: "1234567890" + Endpoint: + type: object + properties: + host: + type: string + example: "my-service.com" + port: + type: integer + example: 8080 + ConnectionPooler: + type: object + properties: + endpoint: + $ref: '#/components/schemas/Endpoint' + Service: + type: object + properties: + service_id: + type: string + description: The unique identifier for the service. + project_id: + type: string + description: The project this service belongs to. + name: + type: string + description: The name of the service. + region_code: + type: string + description: The cloud region where the service is hosted. + example: "us-east-1" + service_type: + $ref: '#/components/schemas/ServiceType' + description: The type of the service. + created: + type: string + format: date-time + description: Creation timestamp + initial_password: + type: string + description: The initial password for the service. 
+ format: password + example: "a-very-secure-initial-password" + paused: + type: boolean + description: Whether the service is paused + status: + $ref: '#/components/schemas/DeployStatus' + description: Current status of the service + resources: + type: array + description: List of resources allocated to the service + items: + type: object + properties: + id: + type: string + description: Resource identifier + spec: + type: object + description: Resource specification + properties: + cpu_millis: + type: integer + description: CPU allocation in millicores + memory_gbs: + type: integer + description: Memory allocation in gigabytes + volume_type: + type: string + description: Type of storage volume + metadata: + type: object + description: Additional metadata for the service + properties: + environment: + type: string + description: Environment tag for the service + endpoint: + $ref: '#/components/schemas/Endpoint' + vpcEndpoint: + type: object + nullable: true + description: VPC endpoint configuration if available + forked_from: + $ref: '#/components/schemas/ForkSpec' + ha_replicas: + $ref: '#/components/schemas/HAReplica' + connection_pooler: + $ref: '#/components/schemas/ConnectionPooler' + read_replica_sets: + type: array + items: + $ref: '#/components/schemas/ReadReplicaSet' + ServiceType: + type: string + enum: + - TIMESCALEDB + - POSTGRES + - VECTOR + EnvironmentTag: + type: string + enum: + - DEV + - PROD + description: The environment tag for the service. 
+ ForkStrategy: + type: string + enum: + - LAST_SNAPSHOT + - NOW + - PITR + description: | + Strategy for creating the fork: + - LAST_SNAPSHOT: Use existing snapshot for fast fork + - NOW: Create new snapshot for up-to-date fork + - PITR: Point-in-time recovery using target_time + DeployStatus: + type: string + enum: + - QUEUED + - DELETING + - CONFIGURING + - READY + - DELETED + - UNSTABLE + - PAUSING + - PAUSED + - RESUMING + - UPGRADING + - OPTIMIZING + ForkSpec: + type: object + properties: + project_id: + type: string + example: "asda1b2c3" + service_id: + type: string + example: "bbss422fg" + is_standby: + type: boolean + example: false + ReadReplicaSet: + type: object + properties: + id: + type: string + example: "alb8jicdpr" + name: + type: string + example: "reporting-replica-1" + status: + type: string + enum: [creating, active, resizing, deleting, error] + example: "active" + nodes: + type: integer + description: Number of nodes in the replica set. + example: 2 + cpu_millis: + type: integer + description: CPU allocation in milli-cores. + example: 250 + memory_gbs: + type: integer + description: Memory allocation in gigabytes. + example: 1 + metadata: + type: object + description: Additional metadata for the read replica set + properties: + environment: + type: string + description: Environment tag for the read replica set + endpoint: + $ref: '#/components/schemas/Endpoint' + connection_pooler: + $ref: '#/components/schemas/ConnectionPooler' + ServiceCreate: + type: object + required: + - name + properties: + name: + type: string + description: A human-readable name for the service. + example: "my-production-db" + addons: + type: array + items: + type: string + enum: ["time-series", "ai"] + description: List of addons to enable for the service. 'time-series' enables TimescaleDB, 'ai' enables AI/vector extensions. + example: ["time-series", "ai"] + region_code: + type: string + description: The region where the service will be created. 
If not provided, we'll choose the best region for you. + example: "us-east-1" + replica_count: + type: integer + description: Number of high-availability replicas to create (all replicas are asynchronous by default). + example: 2 + cpu_millis: + type: string + description: The initial CPU allocation in milli-cores, or 'shared' for a shared-resource service. + example: "1000" + memory_gbs: + type: string + description: The initial memory allocation in gigabytes, or 'shared' for a shared-resource service. + example: "4" + environment_tag: + $ref: '#/components/schemas/EnvironmentTag' + description: The environment tag for the service, 'DEV' by default. + default: DEV + ForkServiceCreate: + type: object + required: + - fork_strategy + properties: + name: + type: string + description: A human-readable name for the forked service. If not provided, will use parent service name with "-fork" suffix. + example: "my-production-db-fork" + cpu_millis: + type: string + description: The initial CPU allocation in milli-cores, or 'shared' for a shared-resource service. If not provided, will inherit from parent service. + example: "1000" + memory_gbs: + type: string + description: The initial memory allocation in gigabytes, or 'shared' for a shared-resource service. If not provided, will inherit from parent service. + example: "4" + fork_strategy: + $ref: '#/components/schemas/ForkStrategy' + description: Strategy for creating the fork. This field is required. + target_time: + type: string + format: date-time + description: Target time for point-in-time recovery. Required when fork_strategy is PITR. + example: "2024-01-01T00:00:00Z" + environment_tag: + $ref: '#/components/schemas/EnvironmentTag' + description: The environment tag for the forked service, 'DEV' by default. + default: DEV + description: | + Create a fork of an existing service. Service type, region code, and storage are always inherited from the parent service. 
+ HA replica count is always set to 0 for forked services. + HAReplica: + type: object + properties: + sync_replica_count: + type: integer + description: Number of synchronous high-availability replicas. + example: 1 + replica_count: + type: integer + description: Number of high-availability replicas (all replicas are asynchronous by default). + example: 1 + SetHAReplicaInput: + type: object + properties: + sync_replica_count: + type: integer + description: Number of synchronous high-availability replicas. + example: 1 + replica_count: + type: integer + description: Number of high-availability replicas (all replicas are asynchronous by default). + example: 1 + description: At least one of sync_replica_count or replica_count must be provided. + ReadReplicaSetCreate: + type: object + required: + - name + - nodes + - cpu_millis + - memory_gbs + properties: + name: + type: string + description: A human-readable name for the read replica. + example: "my-reporting-replica" + nodes: + type: integer + description: Number of nodes to create in the replica set. + example: 2 + cpu_millis: + type: integer + description: The initial CPU allocation in milli-cores. + example: 250 + memory_gbs: + type: integer + description: The initial memory allocation in gigabytes. + example: 1 + ResizeInput: + type: object + required: + - cpu_millis + - memory_gbs + properties: + cpu_millis: + type: integer + description: The new CPU allocation in milli-cores (e.g., 1000 for 1 vCPU). + example: 1000 + memory_gbs: + type: integer + description: The new memory allocation in gigabytes. + example: 4 + nodes: + type: integer + description: The new number of nodes in the replica set. + example: 2 + UpdatePasswordInput: + type: object + required: + - password + properties: + password: + type: string + description: The new password. 
+ format: password + example: "a-very-secure-new-password" + SetEnvironmentInput: + type: object + required: + - environment + properties: + environment: + type: string + description: The target environment for the service. + enum: [PROD, DEV] + example: "PROD" + ServiceVPCInput: + type: object + required: + - vpc_id + properties: + vpc_id: + type: string + description: The ID of the VPC to attach the service to. + example: "1234567890" + Error: + type: object + properties: + code: + type: string + message: + type: string + + responses: + AnalyticsResponse: + description: Analytics action completed successfully. + content: + application/json: + schema: + type: object + properties: + status: + type: string + description: Status of the analytics operation + example: "success" + SuccessMessage: + description: The action was completed successfully. + content: + application/json: + schema: + type: object + properties: + message: + type: string + example: "Action completed successfully." + ClientError: + description: Client error response (4xx status codes).
+ content: + application/json: + schema: + $ref: '#/components/schemas/Error' diff --git a/api-reference/timescaledb-toolkit/candlestick_agg/index.mdx b/api-reference/timescaledb-toolkit/candlestick_agg/index.mdx index 67a46eb..29f16a4 100644 --- a/api-reference/timescaledb-toolkit/candlestick_agg/index.mdx +++ b/api-reference/timescaledb-toolkit/candlestick_agg/index.mdx @@ -263,16 +263,16 @@ GROUP BY weekly_bucket, symbol - [`rollup()`][rollup]: roll up multiple candlestick aggregates [two-step-aggregation]: #two-step-aggregation -[candlestick_agg]: /api-reference/timescaledb/hyperfunctions/candlestick_agg/candlestick_agg -[candlestick]: /api-reference/timescaledb/hyperfunctions/candlestick_agg/candlestick -[open]: /api-reference/timescaledb/hyperfunctions/candlestick_agg/open -[open_time]: /api-reference/timescaledb/hyperfunctions/candlestick_agg/open_time -[high]: /api-reference/timescaledb/hyperfunctions/candlestick_agg/high -[high_time]: /api-reference/timescaledb/hyperfunctions/candlestick_agg/high_time -[low]: /api-reference/timescaledb/hyperfunctions/candlestick_agg/low -[low_time]: /api-reference/timescaledb/hyperfunctions/candlestick_agg/low_time -[close]: /api-reference/timescaledb/hyperfunctions/candlestick_agg/close -[close_time]: /api-reference/timescaledb/hyperfunctions/candlestick_agg/close_time -[volume]: /api-reference/timescaledb/hyperfunctions/candlestick_agg/volume -[vwap]: /api-reference/timescaledb/hyperfunctions/candlestick_agg/vwap -[rollup]: /api-reference/timescaledb/hyperfunctions/candlestick_agg/rollup +[candlestick_agg]: /api-reference/timescaledb-toolkit/candlestick_agg/candlestick_agg +[candlestick]: /api-reference/timescaledb-toolkit/candlestick_agg/candlestick +[open]: /api-reference/timescaledb-toolkit/candlestick_agg/open +[open_time]: /api-reference/timescaledb-toolkit/candlestick_agg/open_time +[high]: /api-reference/timescaledb-toolkit/candlestick_agg/high +[high_time]: 
/api-reference/timescaledb-toolkit/candlestick_agg/high_time +[low]: /api-reference/timescaledb-toolkit/candlestick_agg/low +[low_time]: /api-reference/timescaledb-toolkit/candlestick_agg/low_time +[close]: /api-reference/timescaledb-toolkit/candlestick_agg/close +[close_time]: /api-reference/timescaledb-toolkit/candlestick_agg/close_time +[volume]: /api-reference/timescaledb-toolkit/candlestick_agg/volume +[vwap]: /api-reference/timescaledb-toolkit/candlestick_agg/vwap +[rollup]: /api-reference/timescaledb-toolkit/candlestick_agg/rollup diff --git a/api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/index.mdx b/api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/index.mdx index d0afc91..7d9f9f6 100644 --- a/api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/index.mdx +++ b/api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/index.mdx @@ -109,29 +109,29 @@ FROM t; - [`with_bounds()`][with_bounds]: add time bounds to a counter aggregate for extrapolation [two-step-aggregation]: #two-step-aggregation -[gauge_agg]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/index -[counter_agg]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/counter_agg -[corr]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/corr -[counter_zero_time]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/counter_zero_time -[delta]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/delta -[extrapolated_delta]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/extrapolated_delta -[extrapolated_rate]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/extrapolated_rate -[first_time]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/first_time -[first_val]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/first_val 
-[idelta_left]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/idelta_left -[idelta_right]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/idelta_right -[intercept]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/intercept -[interpolated_delta]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/interpolated_delta -[interpolated_rate]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/interpolated_rate -[irate_left]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/irate_left -[irate_right]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/irate_right -[last_time]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/last_time -[last_val]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/last_val -[num_changes]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/num_changes -[num_elements]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/num_elements -[num_resets]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/num_resets -[rate]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/rate -[slope]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/slope -[time_delta]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/time_delta -[rollup]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/rollup -[with_bounds]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/with_bounds \ No newline at end of file +[gauge_agg]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg +[counter_agg]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/counter_agg +[corr]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/corr 
+[counter_zero_time]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/counter_zero_time +[delta]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/delta +[extrapolated_delta]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/extrapolated_delta +[extrapolated_rate]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/extrapolated_rate +[first_time]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/first_time +[first_val]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/first_val +[idelta_left]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/idelta_left +[idelta_right]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/idelta_right +[intercept]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/intercept +[interpolated_delta]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/interpolated_delta +[interpolated_rate]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/interpolated_rate +[irate_left]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/irate_left +[irate_right]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/irate_right +[last_time]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/last_time +[last_val]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/last_val +[num_changes]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/num_changes +[num_elements]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/num_elements +[num_resets]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/num_resets +[rate]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/rate +[slope]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/slope +[time_delta]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/time_delta 
+[rollup]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/rollup +[with_bounds]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg/with_bounds \ No newline at end of file diff --git a/api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/index.mdx b/api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/index.mdx index 53a053a..222cf08 100644 --- a/api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/index.mdx +++ b/api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/index.mdx @@ -85,24 +85,24 @@ ORDER BY hour; - [`with_bounds()`][with_bounds]: add time bounds to a gauge aggregate for extrapolation [two-step-aggregation]: #two-step-aggregation -[counter_agg]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/index -[gauge_agg]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/gauge_agg -[corr]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/corr -[delta]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/delta -[extrapolated_delta]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/extrapolated_delta -[extrapolated_rate]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/extrapolated_rate -[gauge_zero_time]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/gauge_zero_time -[idelta_left]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/idelta_left -[idelta_right]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/idelta_right -[intercept]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/intercept -[interpolated_delta]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/interpolated_delta -[interpolated_rate]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/interpolated_rate -[irate_left]: 
/api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/irate_left -[irate_right]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/irate_right -[num_changes]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/num_changes -[num_elements]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/num_elements -[rate]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/rate -[slope]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/slope -[time_delta]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/time_delta -[rollup]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/rollup -[with_bounds]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/with_bounds \ No newline at end of file +[counter_agg]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg +[gauge_agg]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/gauge_agg +[corr]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/corr +[delta]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/delta +[extrapolated_delta]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/extrapolated_delta +[extrapolated_rate]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/extrapolated_rate +[gauge_zero_time]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/gauge_zero_time +[idelta_left]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/idelta_left +[idelta_right]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/idelta_right +[intercept]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/intercept +[interpolated_delta]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/interpolated_delta +[interpolated_rate]: 
/api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/interpolated_rate +[irate_left]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/irate_left +[irate_right]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/irate_right +[num_changes]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/num_changes +[num_elements]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/num_elements +[rate]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/rate +[slope]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/slope +[time_delta]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/time_delta +[rollup]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/rollup +[with_bounds]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg/with_bounds \ No newline at end of file diff --git a/api-reference/timescaledb-toolkit/counters-and-gauges/index.mdx b/api-reference/timescaledb-toolkit/counters-and-gauges/index.mdx index 55e9a14..15718e2 100644 --- a/api-reference/timescaledb-toolkit/counters-and-gauges/index.mdx +++ b/api-reference/timescaledb-toolkit/counters-and-gauges/index.mdx @@ -102,5 +102,5 @@ FROM daily; - [`gauge_agg()`][gauge_agg]: analyze gauge metrics that can increase or decrease [two-step-aggregation]: #two-step-aggregation -[counter_agg]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/counter_agg/index -[gauge_agg]: /api-reference/timescaledb/hyperfunctions/counters-and-gauges/gauge_agg/index +[counter_agg]: /api-reference/timescaledb-toolkit/counters-and-gauges/counter_agg +[gauge_agg]: /api-reference/timescaledb-toolkit/counters-and-gauges/gauge_agg diff --git a/api-reference/timescaledb-toolkit/downsampling/index.mdx b/api-reference/timescaledb-toolkit/downsampling/index.mdx index ef8e3aa..1059172 100644 --- a/api-reference/timescaledb-toolkit/downsampling/index.mdx +++ 
b/api-reference/timescaledb-toolkit/downsampling/index.mdx @@ -74,6 +74,6 @@ FROM unnest(( ### ASAP smoothing - [`asap_smooth()`][asap_smooth]: downsample using the ASAP smoothing algorithm -[lttb]: /api-reference/timescaledb/hyperfunctions/downsampling/lttb -[gp_lttb]: /api-reference/timescaledb/hyperfunctions/downsampling/gp_lttb -[asap_smooth]: /api-reference/timescaledb/hyperfunctions/downsampling/asap_smooth +[lttb]: /api-reference/timescaledb-toolkit/downsampling/lttb +[gp_lttb]: /api-reference/timescaledb-toolkit/downsampling/gp_lttb +[asap_smooth]: /api-reference/timescaledb-toolkit/downsampling/asap_smooth diff --git a/api-reference/timescaledb-toolkit/frequency-analysis/count_min_sketch/index.mdx b/api-reference/timescaledb-toolkit/frequency-analysis/count_min_sketch/index.mdx index 44d93ef..2c8ae9d 100644 --- a/api-reference/timescaledb-toolkit/frequency-analysis/count_min_sketch/index.mdx +++ b/api-reference/timescaledb-toolkit/frequency-analysis/count_min_sketch/index.mdx @@ -53,5 +53,5 @@ FROM sketch; - [`approx_count()`][approx_count]: estimate the number of times a value appears in a count-min sketch [two-step-aggregation]: #two-step-aggregation -[count_min_sketch]: /api-reference/timescaledb/hyperfunctions/frequency-analysis/count_min_sketch/count_min_sketch -[approx_count]: /api-reference/timescaledb/hyperfunctions/frequency-analysis/count_min_sketch/approx_count +[count_min_sketch]: /api-reference/timescaledb-toolkit/frequency-analysis/count_min_sketch/count_min_sketch +[approx_count]: /api-reference/timescaledb-toolkit/frequency-analysis/count_min_sketch/approx_count diff --git a/api-reference/timescaledb-toolkit/frequency-analysis/freq_agg/index.mdx b/api-reference/timescaledb-toolkit/frequency-analysis/freq_agg/index.mdx index 2b93ad9..0d2d0b4 100644 --- a/api-reference/timescaledb-toolkit/frequency-analysis/freq_agg/index.mdx +++ b/api-reference/timescaledb-toolkit/frequency-analysis/freq_agg/index.mdx @@ -98,11 +98,11 @@ The output for this 
query looks like this, with some variation due to randomness - [`rollup()`][rollup]: combine multiple frequency aggregates [two-step-aggregation]: #two-step-aggregation -[count_min_sketch]: /api-reference/timescaledb/hyperfunctions/frequency-analysis/count_min_sketch/index -[freq_agg]: /api-reference/timescaledb/hyperfunctions/frequency-analysis/freq_agg/freq_agg -[mcv_agg]: /api-reference/timescaledb/hyperfunctions/frequency-analysis/freq_agg/mcv_agg -[into_values]: /api-reference/timescaledb/hyperfunctions/frequency-analysis/freq_agg/into_values -[max_frequency]: /api-reference/timescaledb/hyperfunctions/frequency-analysis/freq_agg/max_frequency -[min_frequency]: /api-reference/timescaledb/hyperfunctions/frequency-analysis/freq_agg/min_frequency -[topn]: /api-reference/timescaledb/hyperfunctions/frequency-analysis/freq_agg/topn -[rollup]: /api-reference/timescaledb/hyperfunctions/frequency-analysis/freq_agg/rollup +[count_min_sketch]: /api-reference/timescaledb-toolkit/frequency-analysis/count_min_sketch +[freq_agg]: /api-reference/timescaledb-toolkit/frequency-analysis/freq_agg/freq_agg +[mcv_agg]: /api-reference/timescaledb-toolkit/frequency-analysis/freq_agg/mcv_agg +[into_values]: /api-reference/timescaledb-toolkit/frequency-analysis/freq_agg/into_values +[max_frequency]: /api-reference/timescaledb-toolkit/frequency-analysis/freq_agg/max_frequency +[min_frequency]: /api-reference/timescaledb-toolkit/frequency-analysis/freq_agg/min_frequency +[topn]: /api-reference/timescaledb-toolkit/frequency-analysis/freq_agg/topn +[rollup]: /api-reference/timescaledb-toolkit/frequency-analysis/freq_agg/rollup diff --git a/api-reference/timescaledb-toolkit/frequency-analysis/index.mdx b/api-reference/timescaledb-toolkit/frequency-analysis/index.mdx index ab94ce6..c061350 100644 --- a/api-reference/timescaledb-toolkit/frequency-analysis/index.mdx +++ b/api-reference/timescaledb-toolkit/frequency-analysis/index.mdx @@ -73,5 +73,5 @@ FROM sketch; - 
[`count_min_sketch()`][count_min_sketch]: estimate absolute counts using the count-min sketch data structure [two-step-aggregation]: #two-step-aggregation -[freq_agg]: /api-reference/timescaledb/hyperfunctions/frequency-analysis/freq_agg/index -[count_min_sketch]: /api-reference/timescaledb/hyperfunctions/frequency-analysis/count_min_sketch/index +[freq_agg]: /api-reference/timescaledb-toolkit/frequency-analysis/freq_agg +[count_min_sketch]: /api-reference/timescaledb-toolkit/frequency-analysis/count_min_sketch diff --git a/api-reference/timescaledb-toolkit/hyperloglog/index.mdx b/api-reference/timescaledb-toolkit/hyperloglog/index.mdx index ed78a91..f59b1eb 100644 --- a/api-reference/timescaledb-toolkit/hyperloglog/index.mdx +++ b/api-reference/timescaledb-toolkit/hyperloglog/index.mdx @@ -94,8 +94,8 @@ These are the approximate errors for each bucket size: [two-step-aggregation]: #two-step-aggregation [hyperloglog-wiki]: https://en.wikipedia.org/wiki/HyperLogLog -[hyperloglog]: /api-reference/timescaledb/hyperfunctions/hyperloglog/hyperloglog -[approx_count_distinct]: /api-reference/timescaledb/hyperfunctions/hyperloglog/approx_count_distinct -[distinct_count]: /api-reference/timescaledb/hyperfunctions/hyperloglog/distinct_count -[stderror]: /api-reference/timescaledb/hyperfunctions/hyperloglog/stderror -[rollup]: /api-reference/timescaledb/hyperfunctions/hyperloglog/rollup +[hyperloglog]: /api-reference/timescaledb-toolkit/hyperloglog/hyperloglog +[approx_count_distinct]: /api-reference/timescaledb-toolkit/hyperloglog/approx_count_distinct +[distinct_count]: /api-reference/timescaledb-toolkit/hyperloglog/distinct_count +[stderror]: /api-reference/timescaledb-toolkit/hyperloglog/stderror +[rollup]: /api-reference/timescaledb-toolkit/hyperloglog/rollup diff --git a/api-reference/timescaledb-toolkit/index.mdx b/api-reference/timescaledb-toolkit/index.mdx index 15aedec..af5701a 100644 --- a/api-reference/timescaledb-toolkit/index.mdx +++ 
b/api-reference/timescaledb-toolkit/index.mdx @@ -100,4 +100,4 @@ import { TOOLKIT_LONG, TIMESCALE_DB, HYPERFUNC } from '/snippets/vars.mdx'; {TOOLKIT_LONG} extends {TIMESCALE_DB} with additional {HYPERFUNC} for advanced time-series analysis. For -{HYPERFUNC} included by default in {TIMESCALE_DB}, see the [{TIMESCALE_DB} {HYPERFUNC} documentation](/api-reference/timescaledb/hyperfunctions/index). +{HYPERFUNC} included by default in {TIMESCALE_DB}, see the [{TIMESCALE_DB} {HYPERFUNC} documentation](/api-reference/timescaledb/hyperfunctions). diff --git a/api-reference/timescaledb-toolkit/minimum-and-maximum/index.mdx b/api-reference/timescaledb-toolkit/minimum-and-maximum/index.mdx index 4aa7575..f06f06b 100644 --- a/api-reference/timescaledb-toolkit/minimum-and-maximum/index.mdx +++ b/api-reference/timescaledb-toolkit/minimum-and-maximum/index.mdx @@ -150,7 +150,7 @@ FROM daily_max, - [`max_n_by()`][max_n_by]: get the N largest values with accompanying data [two-step-aggregation]: #two-step-aggregation -[min_n]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/min_n/index -[max_n]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/max_n/index -[min_n_by]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/min_n_by/index -[max_n_by]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/max_n_by/index +[min_n]: /api-reference/timescaledb-toolkit/minimum-and-maximum/min_n +[max_n]: /api-reference/timescaledb-toolkit/minimum-and-maximum/max_n +[min_n_by]: /api-reference/timescaledb-toolkit/minimum-and-maximum/min_n_by +[max_n_by]: /api-reference/timescaledb-toolkit/minimum-and-maximum/max_n_by diff --git a/api-reference/timescaledb-toolkit/minimum-and-maximum/max_n/index.mdx b/api-reference/timescaledb-toolkit/minimum-and-maximum/max_n/index.mdx index 900ad94..627cdfe 100644 --- a/api-reference/timescaledb-toolkit/minimum-and-maximum/max_n/index.mdx +++ b/api-reference/timescaledb-toolkit/minimum-and-maximum/max_n/index.mdx
@@ -66,9 +66,9 @@ FROM t; - [`rollup()`][rollup]: combine multiple MaxN aggregates [two-step-aggregation]: #two-step-aggregation -[max_n]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/max_n/max_n -[min_n]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/min_n/min_n -[max_n_by]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/max_n_by/max_n_by -[into_values]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/max_n/into_values -[into_array]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/max_n/into_array -[rollup]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/max_n/rollup +[max_n]: /api-reference/timescaledb-toolkit/minimum-and-maximum/max_n/max_n +[min_n]: /api-reference/timescaledb-toolkit/minimum-and-maximum/min_n/min_n +[max_n_by]: /api-reference/timescaledb-toolkit/minimum-and-maximum/max_n_by/max_n_by +[into_values]: /api-reference/timescaledb-toolkit/minimum-and-maximum/max_n/into_values +[into_array]: /api-reference/timescaledb-toolkit/minimum-and-maximum/max_n/into_array +[rollup]: /api-reference/timescaledb-toolkit/minimum-and-maximum/max_n/rollup diff --git a/api-reference/timescaledb-toolkit/minimum-and-maximum/max_n/into_values.mdx b/api-reference/timescaledb-toolkit/minimum-and-maximum/max_n/into_values.mdx index c317e4d..28b7d92 100644 --- a/api-reference/timescaledb-toolkit/minimum-and-maximum/max_n/into_values.mdx +++ b/api-reference/timescaledb-toolkit/minimum-and-maximum/max_n/into_values.mdx @@ -32,7 +32,8 @@ FROM ( ``` Output: -``` + +```sql into_values ------------- 10006 @@ -42,6 +43,7 @@ into_values 10002 ``` + ## Arguments The syntax is: diff --git a/api-reference/timescaledb-toolkit/minimum-and-maximum/max_n_by/index.mdx b/api-reference/timescaledb-toolkit/minimum-and-maximum/max_n_by/index.mdx index 6cf59ff..3ae887f 100644 --- a/api-reference/timescaledb-toolkit/minimum-and-maximum/max_n_by/index.mdx +++ 
b/api-reference/timescaledb-toolkit/minimum-and-maximum/max_n_by/index.mdx @@ -66,8 +66,8 @@ FROM - [`rollup()`][rollup]: combine multiple MaxNBy aggregates [two-step-aggregation]: #two-step-aggregation -[max_n_by]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/max_n_by/max_n_by -[min_n_by]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/min_n_by/min_n_by -[max_n]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/max_n/max_n -[into_values]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/max_n_by/into_values -[rollup]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/max_n_by/rollup +[max_n_by]: /api-reference/timescaledb-toolkit/minimum-and-maximum/max_n_by/max_n_by +[min_n_by]: /api-reference/timescaledb-toolkit/minimum-and-maximum/min_n_by/min_n_by +[max_n]: /api-reference/timescaledb-toolkit/minimum-and-maximum/max_n/max_n +[into_values]: /api-reference/timescaledb-toolkit/minimum-and-maximum/max_n_by/into_values +[rollup]: /api-reference/timescaledb-toolkit/minimum-and-maximum/max_n_by/rollup diff --git a/api-reference/timescaledb-toolkit/minimum-and-maximum/min_n/index.mdx b/api-reference/timescaledb-toolkit/minimum-and-maximum/min_n/index.mdx index 70e5ec7..f52e547 100644 --- a/api-reference/timescaledb-toolkit/minimum-and-maximum/min_n/index.mdx +++ b/api-reference/timescaledb-toolkit/minimum-and-maximum/min_n/index.mdx @@ -64,9 +64,9 @@ FROM t; - [`rollup()`][rollup]: combine multiple MinN aggregates [two-step-aggregation]: #two-step-aggregation -[min_n]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/min_n/min_n -[max_n]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/max_n/max_n -[min_n_by]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/min_n_by/min_n_by -[into_values]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/min_n/into_values -[into_array]: 
/api-reference/timescaledb/hyperfunctions/minimum-and-maximum/min_n/into_array -[rollup]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/min_n/rollup +[min_n]: /api-reference/timescaledb-toolkit/minimum-and-maximum/min_n/min_n +[max_n]: /api-reference/timescaledb-toolkit/minimum-and-maximum/max_n/max_n +[min_n_by]: /api-reference/timescaledb-toolkit/minimum-and-maximum/min_n_by/min_n_by +[into_values]: /api-reference/timescaledb-toolkit/minimum-and-maximum/min_n/into_values +[into_array]: /api-reference/timescaledb-toolkit/minimum-and-maximum/min_n/into_array +[rollup]: /api-reference/timescaledb-toolkit/minimum-and-maximum/min_n/rollup diff --git a/api-reference/timescaledb-toolkit/minimum-and-maximum/min_n_by/index.mdx b/api-reference/timescaledb-toolkit/minimum-and-maximum/min_n_by/index.mdx index ffd802f..631323e 100644 --- a/api-reference/timescaledb-toolkit/minimum-and-maximum/min_n_by/index.mdx +++ b/api-reference/timescaledb-toolkit/minimum-and-maximum/min_n_by/index.mdx @@ -66,8 +66,8 @@ FROM - [`rollup()`][rollup]: combine multiple MinNBy aggregates [two-step-aggregation]: #two-step-aggregation -[min_n_by]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/min_n_by/min_n_by -[max_n_by]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/max_n_by/max_n_by -[min_n]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/min_n/min_n -[into_values]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/min_n_by/into_values -[rollup]: /api-reference/timescaledb/hyperfunctions/minimum-and-maximum/min_n_by/rollup \ No newline at end of file +[min_n_by]: /api-reference/timescaledb-toolkit/minimum-and-maximum/min_n_by/min_n_by +[max_n_by]: /api-reference/timescaledb-toolkit/minimum-and-maximum/max_n_by/max_n_by +[min_n]: /api-reference/timescaledb-toolkit/minimum-and-maximum/min_n/min_n +[into_values]: /api-reference/timescaledb-toolkit/minimum-and-maximum/min_n_by/into_values +[rollup]: 
/api-reference/timescaledb-toolkit/minimum-and-maximum/min_n_by/rollup \ No newline at end of file diff --git a/api-reference/timescaledb-toolkit/percentile-approximation/index.mdx b/api-reference/timescaledb-toolkit/percentile-approximation/index.mdx index 2eada94..46d22e3 100644 --- a/api-reference/timescaledb-toolkit/percentile-approximation/index.mdx +++ b/api-reference/timescaledb-toolkit/percentile-approximation/index.mdx @@ -89,5 +89,5 @@ FROM response_times_hourly; - [`tdigest()`][tdigest]: estimate percentiles using the t-digest algorithm, optimized for extreme quantiles [two-step-aggregation]: #two-step-aggregation -[uddsketch]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/uddsketch/index -[tdigest]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/tdigest/index +[uddsketch]: /api-reference/timescaledb-toolkit/percentile-approximation/uddsketch +[tdigest]: /api-reference/timescaledb-toolkit/percentile-approximation/tdigest diff --git a/api-reference/timescaledb-toolkit/percentile-approximation/tdigest/index.mdx b/api-reference/timescaledb-toolkit/percentile-approximation/tdigest/index.mdx index 485f496..501fe0e 100644 --- a/api-reference/timescaledb-toolkit/percentile-approximation/tdigest/index.mdx +++ b/api-reference/timescaledb-toolkit/percentile-approximation/tdigest/index.mdx @@ -75,13 +75,13 @@ GROUP BY 1; - [`rollup()`][rollup]: combine multiple t-digest aggregates [two-step-aggregation]: #two-step-aggregation -[uddsketch]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/uddsketch/index -[percentile_agg]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/uddsketch/percentile_agg -[tdigest]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/tdigest/tdigest -[approx_percentile]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/tdigest/approx_percentile -[approx_percentile_rank]: 
/api-reference/timescaledb/hyperfunctions/percentile-approximation/tdigest/approx_percentile_rank -[max_val]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/tdigest/max_val -[mean]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/tdigest/mean -[min_val]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/tdigest/min_val -[num_vals]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/tdigest/num_vals -[rollup]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/tdigest/rollup \ No newline at end of file +[uddsketch]: /api-reference/timescaledb-toolkit/percentile-approximation/uddsketch +[percentile_agg]: /api-reference/timescaledb-toolkit/percentile-approximation/uddsketch/percentile_agg +[tdigest]: /api-reference/timescaledb-toolkit/percentile-approximation/tdigest/tdigest +[approx_percentile]: /api-reference/timescaledb-toolkit/percentile-approximation/tdigest/approx_percentile +[approx_percentile_rank]: /api-reference/timescaledb-toolkit/percentile-approximation/tdigest/approx_percentile_rank +[max_val]: /api-reference/timescaledb-toolkit/percentile-approximation/tdigest/max_val +[mean]: /api-reference/timescaledb-toolkit/percentile-approximation/tdigest/mean +[min_val]: /api-reference/timescaledb-toolkit/percentile-approximation/tdigest/min_val +[num_vals]: /api-reference/timescaledb-toolkit/percentile-approximation/tdigest/num_vals +[rollup]: /api-reference/timescaledb-toolkit/percentile-approximation/tdigest/rollup \ No newline at end of file diff --git a/api-reference/timescaledb-toolkit/percentile-approximation/uddsketch/index.mdx b/api-reference/timescaledb-toolkit/percentile-approximation/uddsketch/index.mdx index f309026..5df51ef 100644 --- a/api-reference/timescaledb-toolkit/percentile-approximation/uddsketch/index.mdx +++ b/api-reference/timescaledb-toolkit/percentile-approximation/uddsketch/index.mdx @@ -103,13 +103,13 @@ GROUP BY 1; - [`rollup()`][rollup]: combine 
multiple uddsketch aggregates [two-step-aggregation]: #two-step-aggregation -[tdigest]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/tdigest/index -[uddsketch]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/uddsketch/uddsketch -[percentile_agg]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/uddsketch/percentile_agg -[approx_percentile]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/uddsketch/approx_percentile -[approx_percentile_array]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/uddsketch/approx_percentile_array -[approx_percentile_rank]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/uddsketch/approx_percentile_rank -[error]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/uddsketch/error -[mean]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/uddsketch/mean -[num_vals]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/uddsketch/num_vals -[rollup]: /api-reference/timescaledb/hyperfunctions/percentile-approximation/uddsketch/rollup \ No newline at end of file +[tdigest]: /api-reference/timescaledb-toolkit/percentile-approximation/tdigest +[uddsketch]: /api-reference/timescaledb-toolkit/percentile-approximation/uddsketch/uddsketch +[percentile_agg]: /api-reference/timescaledb-toolkit/percentile-approximation/uddsketch/percentile_agg +[approx_percentile]: /api-reference/timescaledb-toolkit/percentile-approximation/uddsketch/approx_percentile +[approx_percentile_array]: /api-reference/timescaledb-toolkit/percentile-approximation/uddsketch/approx_percentile_array +[approx_percentile_rank]: /api-reference/timescaledb-toolkit/percentile-approximation/uddsketch/approx_percentile_rank +[error]: /api-reference/timescaledb-toolkit/percentile-approximation/uddsketch/error +[mean]: /api-reference/timescaledb-toolkit/percentile-approximation/uddsketch/mean +[num_vals]: 
/api-reference/timescaledb-toolkit/percentile-approximation/uddsketch/num_vals +[rollup]: /api-reference/timescaledb-toolkit/percentile-approximation/uddsketch/rollup \ No newline at end of file diff --git a/api-reference/timescaledb-toolkit/saturating-math/index.mdx b/api-reference/timescaledb-toolkit/saturating-math/index.mdx index 9edc742..4edd6db 100644 --- a/api-reference/timescaledb-toolkit/saturating-math/index.mdx +++ b/api-reference/timescaledb-toolkit/saturating-math/index.mdx @@ -33,8 +33,8 @@ to be greater than or equal to zero. - [`saturating_mul()`][saturating_mul]: multiply two numbers, saturating at the 32-bit integer bounds instead of overflowing -[saturating_add]: /api-reference/timescaledb/hyperfunctions/saturating-math/saturating_add -[saturating_add_pos]: /api-reference/timescaledb/hyperfunctions/saturating-math/saturating_add_pos -[saturating_sub]: /api-reference/timescaledb/hyperfunctions/saturating-math/saturating_sub -[saturating_sub_pos]: /api-reference/timescaledb/hyperfunctions/saturating-math/saturating_sub_pos -[saturating_mul]: /api-reference/timescaledb/hyperfunctions/saturating-math/saturating_mul \ No newline at end of file +[saturating_add]: /api-reference/timescaledb-toolkit/saturating-math/saturating_add +[saturating_add_pos]: /api-reference/timescaledb-toolkit/saturating-math/saturating_add_pos +[saturating_sub]: /api-reference/timescaledb-toolkit/saturating-math/saturating_sub +[saturating_sub_pos]: /api-reference/timescaledb-toolkit/saturating-math/saturating_sub_pos +[saturating_mul]: /api-reference/timescaledb-toolkit/saturating-math/saturating_mul \ No newline at end of file diff --git a/api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/duration_in.mdx b/api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/duration_in.mdx index ba1fa21..c0e4e57 100644 --- a/api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/duration_in.mdx +++ 
b/api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/duration_in.mdx @@ -73,5 +73,5 @@ duration_in( |--------|------|-------------| | duration_in | INTERVAL | The time spent in the given state. Displayed in `days`, `hh:mm:ss`, or a combination of the two. | -[compact_state_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/compact_state_agg/compact_state_agg -[interpolated_duration_in]: /api-reference/timescaledb/hyperfunctions/state-tracking/compact_state_agg/interpolated_duration_in## Arguments +[compact_state_agg]: /api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/compact_state_agg +[interpolated_duration_in]: /api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/interpolated_duration_in#arguments diff --git a/api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/index.mdx b/api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/index.mdx index 3b7b814..d6e8c44 100644 --- a/api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/index.mdx +++ b/api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/index.mdx @@ -54,10 +54,10 @@ To learn more, see the [blog post on two-step aggregates][blog-two-step-aggregat [two-step-aggregation]: #two-step-aggregation [blog-two-step-aggregates]: https://www.timescale.com/blog/how-postgresql-aggregation-works-and-how-it-inspired-our-hyperfunctions-design [caggs]: /use-timescale/continuous-aggregates/about-continuous-aggregates/ -[heartbeat_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/heartbeat_agg/ -[state_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/ -[compact_state_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/compact_state_agg/compact_state_agg -[duration_in]: /api-reference/timescaledb/hyperfunctions/state-tracking/compact_state_agg/duration_in -[interpolated_duration_in]: 
/api-reference/timescaledb/hyperfunctions/state-tracking/compact_state_agg/interpolated_duration_in -[into_values]: /api-reference/timescaledb/hyperfunctions/state-tracking/compact_state_agg/into_values -[rollup]: /api-reference/timescaledb/hyperfunctions/state-tracking/compact_state_agg/rollup \ No newline at end of file +[heartbeat_agg]: /api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg +[state_agg]: /api-reference/timescaledb-toolkit/state-tracking/state_agg +[compact_state_agg]: /api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/compact_state_agg +[duration_in]: /api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/duration_in +[interpolated_duration_in]: /api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/interpolated_duration_in +[into_values]: /api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/into_values +[rollup]: /api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/rollup \ No newline at end of file diff --git a/api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/interpolated_duration_in.mdx b/api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/interpolated_duration_in.mdx index 20c00bf..0061213 100644 --- a/api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/interpolated_duration_in.mdx +++ b/api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/interpolated_duration_in.mdx @@ -86,5 +86,5 @@ interpolated_duration_in( [extract]:https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT -[compact_state_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/compact_state_agg/compact_state_agg -[duration_in]: /api-reference/timescaledb/hyperfunctions/state-tracking/compact_state_agg/duration_in \ No newline at end of file +[compact_state_agg]: /api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/compact_state_agg +[duration_in]: 
/api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/duration_in \ No newline at end of file diff --git a/api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/into_values.mdx b/api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/into_values.mdx index 873431b..28501bd 100644 --- a/api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/into_values.mdx +++ b/api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/into_values.mdx @@ -62,4 +62,4 @@ into_int_values( | state | TEXT \| BIGINT | A state found in the state aggregate | | duration | INTERVAL | The total time spent in that state | -[compact_state_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/compact_state_agg/compact_state_agg## Arguments +[compact_state_agg]: /api-reference/timescaledb-toolkit/state-tracking/compact_state_agg/compact_state_agg#arguments diff --git a/api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/downtime.mdx b/api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/downtime.mdx index fa414b4..79dc992 100644 --- a/api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/downtime.mdx +++ b/api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/downtime.mdx @@ -60,4 +60,4 @@ downtime( |--------|------|-------------| | downtime | INTERVAL | The sum of all the dead ranges in the aggregate. 
| -[interpolated_downtime]: /api-reference/timescaledb/hyperfunctions/state-tracking/heartbeat_agg/interpolated_downtime +[interpolated_downtime]: /api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/interpolated_downtime diff --git a/api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/index.mdx b/api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/index.mdx index 7162cae..feb452a 100644 --- a/api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/index.mdx +++ b/api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/index.mdx @@ -61,16 +61,16 @@ To learn more, see the [blog post on two-step aggregates][blog-two-step-aggregat [two-step-aggregation]: #two-step-aggregation [blog-two-step-aggregates]: https://www.timescale.com/blog/how-postgresql-aggregation-works-and-how-it-inspired-our-hyperfunctions-design [caggs]: /use-timescale/continuous-aggregates/about-continuous-aggregates/ -[heartbeat_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/heartbeat_agg/heartbeat_agg -[uptime]: /api-reference/timescaledb/hyperfunctions/state-tracking/heartbeat_agg/uptime -[downtime]: /api-reference/timescaledb/hyperfunctions/state-tracking/heartbeat_agg/downtime -[interpolated_uptime]: /api-reference/timescaledb/hyperfunctions/state-tracking/heartbeat_agg/interpolated_uptime -[interpolated_downtime]: /api-reference/timescaledb/hyperfunctions/state-tracking/heartbeat_agg/interpolated_downtime -[live_at]: /api-reference/timescaledb/hyperfunctions/state-tracking/heartbeat_agg/live_at -[live_ranges]: /api-reference/timescaledb/hyperfunctions/state-tracking/heartbeat_agg/live_ranges -[dead_ranges]: /api-reference/timescaledb/hyperfunctions/state-tracking/heartbeat_agg/dead_ranges -[num_live_ranges]: /api-reference/timescaledb/hyperfunctions/state-tracking/heartbeat_agg/num_live_ranges -[num_gaps]: /api-reference/timescaledb/hyperfunctions/state-tracking/heartbeat_agg/num_gaps -[trim_to]: 
/api-reference/timescaledb/hyperfunctions/state-tracking/heartbeat_agg/trim_to -[interpolate]: /api-reference/timescaledb/hyperfunctions/state-tracking/heartbeat_agg/interpolate -[rollup]: /api-reference/timescaledb/hyperfunctions/state-tracking/heartbeat_agg/rollup +[heartbeat_agg]: /api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/heartbeat_agg +[uptime]: /api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/uptime +[downtime]: /api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/downtime +[interpolated_uptime]: /api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/interpolated_uptime +[interpolated_downtime]: /api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/interpolated_downtime +[live_at]: /api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/live_at +[live_ranges]: /api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/live_ranges +[dead_ranges]: /api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/dead_ranges +[num_live_ranges]: /api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/num_live_ranges +[num_gaps]: /api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/num_gaps +[trim_to]: /api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/trim_to +[interpolate]: /api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/interpolate +[rollup]: /api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/rollup diff --git a/api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/uptime.mdx b/api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/uptime.mdx index 7feb6e4..c003bb6 100644 --- a/api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/uptime.mdx +++ b/api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/uptime.mdx @@ -58,4 +58,4 @@ uptime( |--------|------|-------------| | uptime | INTERVAL | The sum of all the live ranges in the aggregate. 
| -[interpolated_uptime]: /api-reference/timescaledb/hyperfunctions/state-tracking/heartbeat_agg/interpolated_uptime +[interpolated_uptime]: /api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg/interpolated_uptime diff --git a/api-reference/timescaledb-toolkit/state-tracking/index.mdx b/api-reference/timescaledb-toolkit/state-tracking/index.mdx index 61baafb..d66a993 100644 --- a/api-reference/timescaledb-toolkit/state-tracking/index.mdx +++ b/api-reference/timescaledb-toolkit/state-tracking/index.mdx @@ -101,6 +101,6 @@ FROM heartbeats; - [`heartbeat_agg()`][heartbeat_agg]: monitor system liveness based on heartbeat signals [two-step-aggregation]: #two-step-aggregation -[compact_state_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/compact_state_agg/index -[state_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/index -[heartbeat_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/heartbeat_agg/index +[compact_state_agg]: /api-reference/timescaledb-toolkit/state-tracking/compact_state_agg +[state_agg]: /api-reference/timescaledb-toolkit/state-tracking/state_agg +[heartbeat_agg]: /api-reference/timescaledb-toolkit/state-tracking/heartbeat_agg diff --git a/api-reference/timescaledb-toolkit/state-tracking/state_agg/duration_in.mdx b/api-reference/timescaledb-toolkit/state-tracking/state_agg/duration_in.mdx index a2174fe..90b5d6b 100644 --- a/api-reference/timescaledb-toolkit/state-tracking/state_agg/duration_in.mdx +++ b/api-reference/timescaledb-toolkit/state-tracking/state_agg/duration_in.mdx @@ -74,5 +74,5 @@ duration_in( |--------|------|-------------| | duration_in | INTERVAL | The time spent in the given state. Displayed in `days`, `hh:mm:ss`, or a combination of the two. 
| -[state_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/state_agg -[interpolated_duration_in]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/interpolated_duration_in +[state_agg]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/state_agg +[interpolated_duration_in]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/interpolated_duration_in diff --git a/api-reference/timescaledb-toolkit/state-tracking/state_agg/index.mdx b/api-reference/timescaledb-toolkit/state-tracking/state_agg/index.mdx index 5a16704..8f995c5 100644 --- a/api-reference/timescaledb-toolkit/state-tracking/state_agg/index.mdx +++ b/api-reference/timescaledb-toolkit/state-tracking/state_agg/index.mdx @@ -62,14 +62,14 @@ To learn more, see the [blog post on two-step aggregates][blog-two-step-aggregat [two-step-aggregation]: #two-step-aggregation [blog-two-step-aggregates]: https://www.timescale.com/blog/how-postgresql-aggregation-works-and-how-it-inspired-our-hyperfunctions-design [caggs]: /use-timescale/continuous-aggregates/about-continuous-aggregates/ -[compact_state_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/compact_state_agg/ -[state_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/state_agg -[state_at]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/state_at -[duration_in]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/duration_in -[interpolated_duration_in]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/interpolated_duration_in -[state_periods]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/state_periods -[state_timeline]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/state_timeline -[interpolated_state_periods]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/interpolated_state_periods -[interpolated_state_timeline]: 
/api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/interpolated_state_timeline -[into_values]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/into_values -[rollup]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/rollup +[compact_state_agg]: /api-reference/timescaledb-toolkit/state-tracking/compact_state_agg +[state_agg]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/state_agg +[state_at]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/state_at +[duration_in]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/duration_in +[interpolated_duration_in]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/interpolated_duration_in +[state_periods]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/state_periods +[state_timeline]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/state_timeline +[interpolated_state_periods]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/interpolated_state_periods +[interpolated_state_timeline]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/interpolated_state_timeline +[into_values]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/into_values +[rollup]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/rollup diff --git a/api-reference/timescaledb-toolkit/state-tracking/state_agg/interpolated_duration_in.mdx b/api-reference/timescaledb-toolkit/state-tracking/state_agg/interpolated_duration_in.mdx index 9832ffa..ef5401d 100644 --- a/api-reference/timescaledb-toolkit/state-tracking/state_agg/interpolated_duration_in.mdx +++ b/api-reference/timescaledb-toolkit/state-tracking/state_agg/interpolated_duration_in.mdx @@ -83,5 +83,5 @@ interpolated_duration_in( |--------|------|-------------| | interpolated_duration_in | INTERVAL | The total time spent in the queried state. Displayed as `days`, `hh:mm:ss`, or a combination of the two. 
| -[state_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/state_agg -[duration_in]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/duration_in +[state_agg]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/state_agg +[duration_in]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/duration_in diff --git a/api-reference/timescaledb-toolkit/state-tracking/state_agg/interpolated_state_periods.mdx b/api-reference/timescaledb-toolkit/state-tracking/state_agg/interpolated_state_periods.mdx index 5a28dfd..a9a1c65 100644 --- a/api-reference/timescaledb-toolkit/state-tracking/state_agg/interpolated_state_periods.mdx +++ b/api-reference/timescaledb-toolkit/state-tracking/state_agg/interpolated_state_periods.mdx @@ -83,5 +83,5 @@ interpolated_state_periods( | start_time | TIMESTAMPTZ | The time when the state started (inclusive) | | end_time | TIMESTAMPTZ | The time when the state ended (exclusive) | -[state_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/state_agg -[state_periods]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/state_periods +[state_agg]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/state_agg +[state_periods]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/state_periods diff --git a/api-reference/timescaledb-toolkit/state-tracking/state_agg/interpolated_state_timeline.mdx b/api-reference/timescaledb-toolkit/state-tracking/state_agg/interpolated_state_timeline.mdx index 9dd2c1f..6218a46 100644 --- a/api-reference/timescaledb-toolkit/state-tracking/state_agg/interpolated_state_timeline.mdx +++ b/api-reference/timescaledb-toolkit/state-tracking/state_agg/interpolated_state_timeline.mdx @@ -91,5 +91,5 @@ interpolated_state_int_timeline( | start_time | TIMESTAMPTZ | The time when the state started (inclusive) | | end_time | TIMESTAMPTZ | The time when the state ended (exclusive) | -[state_agg]: 
/api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/state_agg -[state_timeline]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/state_timeline +[state_agg]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/state_agg +[state_timeline]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/state_timeline diff --git a/api-reference/timescaledb-toolkit/state-tracking/state_agg/into_values.mdx b/api-reference/timescaledb-toolkit/state-tracking/state_agg/into_values.mdx index 442e5ba..e59c5c8 100644 --- a/api-reference/timescaledb-toolkit/state-tracking/state_agg/into_values.mdx +++ b/api-reference/timescaledb-toolkit/state-tracking/state_agg/into_values.mdx @@ -62,4 +62,4 @@ into_int_values( | state | TEXT | BIGINT | A state found in the state aggregate | | duration | INTERVAL | The total time spent in that state | -[state_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/state_agg## Arguments +[state_agg]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/state_agg#arguments diff --git a/api-reference/timescaledb-toolkit/state-tracking/state_agg/state_agg.mdx b/api-reference/timescaledb-toolkit/state-tracking/state_agg/state_agg.mdx index 5612be4..357a155 100644 --- a/api-reference/timescaledb-toolkit/state-tracking/state_agg/state_agg.mdx +++ b/api-reference/timescaledb-toolkit/state-tracking/state_agg/state_agg.mdx @@ -46,4 +46,4 @@ state_agg( |--------|------|-------------| | agg | StateAgg | An object storing the periods spent in each state, including timestamps of state transitions | -[compact_state_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/compact_state_agg/ +[compact_state_agg]: /api-reference/timescaledb-toolkit/state-tracking/compact_state_agg diff --git a/api-reference/timescaledb-toolkit/state-tracking/state_agg/state_at.mdx b/api-reference/timescaledb-toolkit/state-tracking/state_agg/state_at.mdx index 19bfb4a..5b2a786 100644 --- 
a/api-reference/timescaledb-toolkit/state-tracking/state_agg/state_at.mdx +++ b/api-reference/timescaledb-toolkit/state-tracking/state_agg/state_at.mdx @@ -61,4 +61,4 @@ state_at_int( |--------|------|-------------| | state | TEXT \| BIGINT | The state at the given time. | -[state_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/state_agg +[state_agg]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/state_agg diff --git a/api-reference/timescaledb-toolkit/state-tracking/state_agg/state_periods.mdx b/api-reference/timescaledb-toolkit/state-tracking/state_agg/state_periods.mdx index 4c3ff08..b8aa679 100644 --- a/api-reference/timescaledb-toolkit/state-tracking/state_agg/state_periods.mdx +++ b/api-reference/timescaledb-toolkit/state-tracking/state_agg/state_periods.mdx @@ -62,5 +62,5 @@ state_periods( | start_time | TIMESTAMPTZ | The time when the state started (inclusive) | | end_time | TIMESTAMPTZ | The time when the state ended (exclusive) | -[state_agg]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/state_agg -[interpolated_state_periods]: /api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/interpolated_state_periods +[state_agg]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/state_agg +[interpolated_state_periods]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/interpolated_state_periods diff --git a/api-reference/timescaledb-toolkit/state-tracking/state_agg/state_timeline.mdx b/api-reference/timescaledb-toolkit/state-tracking/state_agg/state_timeline.mdx index 111dcf0..3aec4d2 100644 --- a/api-reference/timescaledb-toolkit/state-tracking/state_agg/state_timeline.mdx +++ b/api-reference/timescaledb-toolkit/state-tracking/state_agg/state_timeline.mdx @@ -68,4 +68,4 @@ state_int_timeline( | start_time | TIMESTAMPTZ | The time when the state started (inclusive) | | end_time | TIMESTAMPTZ | The time when the state ended (exclusive) | -[interpolated_state_timeline]: 
/api-reference/timescaledb/hyperfunctions/state-tracking/state_agg/interpolated_state_timeline +[interpolated_state_timeline]: /api-reference/timescaledb-toolkit/state-tracking/state_agg/interpolated_state_timeline diff --git a/api-reference/timescaledb-toolkit/statistical-and-regression-analysis/index.mdx b/api-reference/timescaledb-toolkit/statistical-and-regression-analysis/index.mdx index 1e11074..2ed57c5 100644 --- a/api-reference/timescaledb-toolkit/statistical-and-regression-analysis/index.mdx +++ b/api-reference/timescaledb-toolkit/statistical-and-regression-analysis/index.mdx @@ -89,5 +89,5 @@ ORDER BY day; variables [two-step-aggregation]: #two-step-aggregation -[stats_agg-one-variable]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-one-variable/index -[stats_agg-two-variables]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-two-variables/index \ No newline at end of file +[stats_agg-one-variable]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-one-variable +[stats_agg-two-variables]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables \ No newline at end of file diff --git a/api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-one-variable/index.mdx b/api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-one-variable/index.mdx index 8aefb05..1e830e1 100644 --- a/api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-one-variable/index.mdx +++ b/api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-one-variable/index.mdx @@ -67,14 +67,14 @@ FROM t; - [`rolling()`][rolling]: create a rolling window aggregate for use in window functions [two-step-aggregation]: #two-step-aggregation -[stats_agg-2d]: 
/api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-two-variables/index -[stats_agg]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-one-variable/stats_agg -[average]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-one-variable/average -[stddev]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-one-variable/stddev -[variance]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-one-variable/variance -[skewness]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-one-variable/skewness -[kurtosis]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-one-variable/kurtosis -[sum]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-one-variable/sum -[num_vals]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-one-variable/num_vals -[rollup]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-one-variable/rollup -[rolling]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-one-variable/rolling +[stats_agg-2d]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables +[stats_agg]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-one-variable/stats_agg +[average]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-one-variable/average +[stddev]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-one-variable/stddev +[variance]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-one-variable/variance +[skewness]: 
/api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-one-variable/skewness +[kurtosis]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-one-variable/kurtosis +[sum]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-one-variable/sum +[num_vals]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-one-variable/num_vals +[rollup]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-one-variable/rollup +[rolling]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-one-variable/rolling diff --git a/api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/index.mdx b/api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/index.mdx index 431d19f..c098f50 100644 --- a/api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/index.mdx +++ b/api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/index.mdx @@ -80,20 +80,20 @@ FROM t; - [`rolling()`][rolling]: create a rolling window aggregate for use in window functions [two-step-aggregation]: #two-step-aggregation -[stats_agg-1d]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-one-variable/index -[stats_agg]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-two-variables/stats_agg -[average_y_x]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-two-variables/average_y_x -[stddev_y_x]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-two-variables/stddev_y_x -[variance_y_x]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-two-variables/variance_y_x -[skewness_y_x]: 
/api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-two-variables/skewness_y_x -[kurtosis_y_x]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-two-variables/kurtosis_y_x -[sum_y_x]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-two-variables/sum_y_x -[corr]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-two-variables/corr -[covariance]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-two-variables/covariance -[determination_coeff]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-two-variables/determination_coeff -[slope]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-two-variables/slope -[intercept]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-two-variables/intercept -[x_intercept]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-two-variables/x_intercept -[num_vals]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-two-variables/num_vals -[rollup]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-two-variables/rollup -[rolling]: /api-reference/timescaledb/hyperfunctions/statistical-and-regression-analysis/stats_agg-two-variables/rolling \ No newline at end of file +[stats_agg-1d]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-one-variable +[stats_agg]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/stats_agg +[average_y_x]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/average_y_x +[stddev_y_x]: 
/api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/stddev_y_x +[variance_y_x]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/variance_y_x +[skewness_y_x]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/skewness_y_x +[kurtosis_y_x]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/kurtosis_y_x +[sum_y_x]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/sum_y_x +[corr]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/corr +[covariance]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/covariance +[determination_coeff]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/determination_coeff +[slope]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/slope +[intercept]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/intercept +[x_intercept]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/x_intercept +[num_vals]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/num_vals +[rollup]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/rollup +[rolling]: /api-reference/timescaledb-toolkit/statistical-and-regression-analysis/stats_agg-two-variables/rolling \ No newline at end of file diff --git a/api-reference/timescaledb-toolkit/time_weight/average.mdx b/api-reference/timescaledb-toolkit/time_weight/average.mdx index 18a14a8..d25778b 100644 --- a/api-reference/timescaledb-toolkit/time_weight/average.mdx +++ b/api-reference/timescaledb-toolkit/time_weight/average.mdx @@ -57,4 +57,4 @@ 
average( |--------|------|-------------| | average | DOUBLE PRECISION | The time-weighted average. | -[integral]: /api-reference/timescaledb/hyperfunctions/time_weight/integral \ No newline at end of file +[integral]: /api-reference/timescaledb-toolkit/time_weight/integral \ No newline at end of file diff --git a/api-reference/timescaledb-toolkit/time_weight/index.mdx b/api-reference/timescaledb-toolkit/time_weight/index.mdx index 6c61941..4e0bcb4 100644 --- a/api-reference/timescaledb-toolkit/time_weight/index.mdx +++ b/api-reference/timescaledb-toolkit/time_weight/index.mdx @@ -127,13 +127,13 @@ GROUP BY measure_id; [two-step-aggregation]: #two-step-aggregation [blog-two-step-aggregates]: https://www.timescale.com/blog/how-postgresql-aggregation-works-and-how-it-inspired-our-hyperfunctions-design [caggs]: /use-timescale/continuous-aggregates/about-continuous-aggregates/ -[time_weight]: /api-reference/timescaledb/hyperfunctions/time_weight/time_weight -[average]: /api-reference/timescaledb/hyperfunctions/time_weight/average -[first_time]: /api-reference/timescaledb/hyperfunctions/time_weight/first_time -[first_val]: /api-reference/timescaledb/hyperfunctions/time_weight/first_val -[integral]: /api-reference/timescaledb/hyperfunctions/time_weight/integral -[interpolated_average]: /api-reference/timescaledb/hyperfunctions/time_weight/interpolated_average -[interpolated_integral]: /api-reference/timescaledb/hyperfunctions/time_weight/interpolated_integral -[last_time]: /api-reference/timescaledb/hyperfunctions/time_weight/last_time -[last_val]: /api-reference/timescaledb/hyperfunctions/time_weight/last_val -[rollup]: /api-reference/timescaledb/hyperfunctions/time_weight/rollup +[time_weight]: /api-reference/timescaledb-toolkit/time_weight/time_weight +[average]: /api-reference/timescaledb-toolkit/time_weight/average +[first_time]: /api-reference/timescaledb-toolkit/time_weight/first_time +[first_val]: /api-reference/timescaledb-toolkit/time_weight/first_val 
+[integral]: /api-reference/timescaledb-toolkit/time_weight/integral +[interpolated_average]: /api-reference/timescaledb-toolkit/time_weight/interpolated_average +[interpolated_integral]: /api-reference/timescaledb-toolkit/time_weight/interpolated_integral +[last_time]: /api-reference/timescaledb-toolkit/time_weight/last_time +[last_val]: /api-reference/timescaledb-toolkit/time_weight/last_val +[rollup]: /api-reference/timescaledb-toolkit/time_weight/rollup diff --git a/api-reference/timescaledb-toolkit/time_weight/integral.mdx b/api-reference/timescaledb-toolkit/time_weight/integral.mdx index e828e7e..a26133e 100644 --- a/api-reference/timescaledb-toolkit/time_weight/integral.mdx +++ b/api-reference/timescaledb-toolkit/time_weight/integral.mdx @@ -63,4 +63,4 @@ integral( |--------|------|-------------| | integral | DOUBLE PRECISION | The time-weighted integral. | -[average]: /api-reference/timescaledb/hyperfunctions/time_weight/average +[average]: /api-reference/timescaledb-toolkit/time_weight/average diff --git a/api-reference/timescaledb-toolkit/time_weight/interpolated_average.mdx b/api-reference/timescaledb-toolkit/time_weight/interpolated_average.mdx index 227f8e0..b4da7cb 100644 --- a/api-reference/timescaledb-toolkit/time_weight/interpolated_average.mdx +++ b/api-reference/timescaledb-toolkit/time_weight/interpolated_average.mdx @@ -81,5 +81,5 @@ interpolated_average( |--------|------|-------------| | average | DOUBLE PRECISION | The time-weighted average for the interval (`start`, `start` + `interval`), computed from the `TimeWeightSummary` plus end points interpolated from `prev` and `next` | -[average]: /api-reference/timescaledb/hyperfunctions/time_weight/average -[interpolated_integral]: /api-reference/timescaledb/hyperfunctions/time_weight/interpolated_integral \ No newline at end of file +[average]: /api-reference/timescaledb-toolkit/time_weight/average +[interpolated_integral]: /api-reference/timescaledb-toolkit/time_weight/interpolated_integral \ 
No newline at end of file diff --git a/api-reference/timescaledb-toolkit/time_weight/interpolated_integral.mdx b/api-reference/timescaledb-toolkit/time_weight/interpolated_integral.mdx index 6f506e9..64794d6 100644 --- a/api-reference/timescaledb-toolkit/time_weight/interpolated_integral.mdx +++ b/api-reference/timescaledb-toolkit/time_weight/interpolated_integral.mdx @@ -87,5 +87,5 @@ interpolated_integral( | integral | DOUBLE PRECISION | The time-weighted integral for the interval (`start`, `start` + `interval`), computed from the `TimeWeightSummary` plus end points interpolated from `prev` and `next` | -[integral]: /api-reference/timescaledb/hyperfunctions/time_weight/integral -[interpolated_average]: /api-reference/timescaledb/hyperfunctions/time_weight/interpolated_average +[integral]: /api-reference/timescaledb-toolkit/time_weight/integral +[interpolated_average]: /api-reference/timescaledb-toolkit/time_weight/interpolated_average diff --git a/api-reference/timescaledb/administration/get_telemetry_report.mdx b/api-reference/timescaledb/administration/get_telemetry_report.mdx index d1274a5..0d7dc3c 100644 --- a/api-reference/timescaledb/administration/get_telemetry_report.mdx +++ b/api-reference/timescaledb/administration/get_telemetry_report.mdx @@ -8,6 +8,8 @@ license: apache type: function --- + Since 0.6.0 + Returns the background [telemetry][telemetry] string sent to Timescale. If telemetry is turned off, it sends the string that would be sent if telemetry were enabled. 
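The `get_telemetry_report()` page above gains a `Since 0.6.0` badge. For context, the function takes no arguments; a minimal call (assuming the `timescaledb` extension is installed) looks like:

```sql
-- Inspect the telemetry payload that would be sent to Timescale.
-- Works even when telemetry is disabled: it returns the string that
-- *would* be sent if telemetry were enabled.
SELECT get_telemetry_report();
```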
diff --git a/api-reference/timescaledb/administration/timescaledb_post_restore.mdx b/api-reference/timescaledb/administration/timescaledb_post_restore.mdx index 83c8824..1659a53 100644 --- a/api-reference/timescaledb/administration/timescaledb_post_restore.mdx +++ b/api-reference/timescaledb/administration/timescaledb_post_restore.mdx @@ -10,6 +10,8 @@ type: function import { TIMESCALE_DB } from '/snippets/vars.mdx'; + Since 0.9.0 + Perform the required operations after you have finished restoring the database using `pg_restore`. Specifically, this resets the `timescaledb.restoring` GUC and restarts any background workers. diff --git a/api-reference/timescaledb/administration/timescaledb_pre_restore.mdx b/api-reference/timescaledb/administration/timescaledb_pre_restore.mdx index e4b3759..3940ad3 100644 --- a/api-reference/timescaledb/administration/timescaledb_pre_restore.mdx +++ b/api-reference/timescaledb/administration/timescaledb_pre_restore.mdx @@ -10,6 +10,8 @@ type: function import { TIMESCALE_DB } from '/snippets/vars.mdx'; + Since 0.9.0 + Perform the required operations so that you can restore the database using `pg_restore`. Specifically, this sets the `timescaledb.restoring` GUC to `on` and stops any background workers which could have been performing tasks. diff --git a/api-reference/timescaledb/compression/add_compression_policy.mdx b/api-reference/timescaledb/compression/add_compression_policy.mdx index 985bfe8..84cd905 100644 --- a/api-reference/timescaledb/compression/add_compression_policy.mdx +++ b/api-reference/timescaledb/compression/add_compression_policy.mdx @@ -11,6 +11,8 @@ products: [cloud, mst, self_hosted] import { CHUNK, HYPERTABLE, CAGG, TIMESCALE_DB } from '/snippets/vars.mdx'; + Since 1.5.0 + Old API since [{TIMESCALE_DB} v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0). Superseded by [add_columnstore_policy()][add-columnstore-policy]. 
However, compression APIs are still supported, you do not need to migrate to the hypercore APIs. diff --git a/api-reference/timescaledb/compression/alter_table_compression.mdx b/api-reference/timescaledb/compression/alter_table_compression.mdx index 2c305b4..1707d08 100644 --- a/api-reference/timescaledb/compression/alter_table_compression.mdx +++ b/api-reference/timescaledb/compression/alter_table_compression.mdx @@ -11,6 +11,8 @@ products: [cloud, mst, self_hosted] import { HYPERTABLE, CHUNK, TIMESCALE_DB } from '/snippets/vars.mdx'; + Since 1.5.0 + Old API since [{TIMESCALE_DB} v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0). Superseded by [ALTER TABLE (Hypercore)][alter-table-hypercore]. However, compression APIs are still supported, you do not need to migrate to the hypercore APIs. diff --git a/api-reference/timescaledb/compression/chunk_compression_stats.mdx b/api-reference/timescaledb/compression/chunk_compression_stats.mdx index 24a6f6d..1eaad1e 100644 --- a/api-reference/timescaledb/compression/chunk_compression_stats.mdx +++ b/api-reference/timescaledb/compression/chunk_compression_stats.mdx @@ -11,6 +11,8 @@ products: [cloud, mst, self_hosted] import { CHUNK, HYPERTABLE, TIMESCALE_DB } from '/snippets/vars.mdx'; + Since 1.5.0 + Old API since [{TIMESCALE_DB} v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0). Superseded by [chunk_columnstore_stats()][chunk-columnstore-stats]. However, compression APIs are still supported, you do not need to migrate to the hypercore APIs. 
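The compression pages above (`add_compression_policy`, `ALTER TABLE ... SET (timescaledb.compress)`) now carry `Since 1.5.0` badges. A minimal sketch of the old-API workflow they describe, using a hypothetical `conditions` hypertable:

```sql
-- Enable compression on the hypertable (old API; superseded by the
-- hypercore/columnstore API in TimescaleDB 2.18, but still supported).
ALTER TABLE conditions SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id'
);

-- Compress chunks older than seven days on a schedule.
SELECT add_compression_policy('conditions', INTERVAL '7 days');
```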
diff --git a/api-reference/timescaledb/compression/compress_chunk.mdx b/api-reference/timescaledb/compression/compress_chunk.mdx index f9cd0f6..48d8cc2 100644 --- a/api-reference/timescaledb/compression/compress_chunk.mdx +++ b/api-reference/timescaledb/compression/compress_chunk.mdx @@ -11,6 +11,8 @@ products: [cloud, mst, self_hosted] import { CHUNK, HYPERTABLE, TIMESCALE_DB } from '/snippets/vars.mdx'; + Since 1.5.0 + Old API since [{TIMESCALE_DB} v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0). Superseded by [convert_to_columnstore()][convert-to-columnstore]. However, compression APIs are still supported, you do not need to migrate to the hypercore APIs. diff --git a/api-reference/timescaledb/compression/decompress_chunk.mdx b/api-reference/timescaledb/compression/decompress_chunk.mdx index 6468462..ffffbaf 100644 --- a/api-reference/timescaledb/compression/decompress_chunk.mdx +++ b/api-reference/timescaledb/compression/decompress_chunk.mdx @@ -10,6 +10,8 @@ products: [cloud, mst, self_hosted] import { CHUNK, HYPERTABLE, TIMESCALE_DB } from '/snippets/vars.mdx'; + Since 1.5.0 + Old API since [{TIMESCALE_DB} v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0). Superseded by [convert_to_rowstore()][convert-to-rowstore]. However, compression APIs are still supported, you do not need to migrate to the hypercore APIs. diff --git a/api-reference/timescaledb/compression/hypertable_compression_stats.mdx b/api-reference/timescaledb/compression/hypertable_compression_stats.mdx index 2c29dd9..9c2cfcd 100644 --- a/api-reference/timescaledb/compression/hypertable_compression_stats.mdx +++ b/api-reference/timescaledb/compression/hypertable_compression_stats.mdx @@ -11,6 +11,8 @@ products: [cloud, mst, self_hosted] import { HYPERTABLE, CHUNK, TIMESCALE_DB } from '/snippets/vars.mdx'; + Since 1.5.0 + Old API since [{TIMESCALE_DB} v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0). 
Superseded by [hypertable_columnstore_stats()][hypertable-columnstore-stats]. However, compression APIs are still supported, you do not need to migrate to the hypercore APIs. diff --git a/api-reference/timescaledb/compression/recompress_chunk.mdx b/api-reference/timescaledb/compression/recompress_chunk.mdx index aacd14b..ce3398a 100644 --- a/api-reference/timescaledb/compression/recompress_chunk.mdx +++ b/api-reference/timescaledb/compression/recompress_chunk.mdx @@ -11,6 +11,8 @@ products: [cloud, mst, self_hosted] import { CHUNK, TIMESCALE_DB } from '/snippets/vars.mdx'; + Since 1.5.0 + Old API since [{TIMESCALE_DB} v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0). Superseded by [convert_to_columnstore()][convert-to-columnstore]. However, compression APIs are still supported, you do not need to migrate to the hypercore APIs. diff --git a/api-reference/timescaledb/compression/remove_compression_policy.mdx b/api-reference/timescaledb/compression/remove_compression_policy.mdx index 0d23d3d..73fbfd7 100644 --- a/api-reference/timescaledb/compression/remove_compression_policy.mdx +++ b/api-reference/timescaledb/compression/remove_compression_policy.mdx @@ -11,6 +11,8 @@ products: [cloud, mst, self_hosted] import { HYPERTABLE, CAGG, TIMESCALE_DB } from '/snippets/vars.mdx'; + Since 1.5.0 + Old API since [{TIMESCALE_DB} v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0). Superseded by [remove_columnstore_policy()][remove-columnstore-policy]. However, compression APIs are still supported, you do not need to migrate to the hypercore APIs. 
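`compress_chunk`, `decompress_chunk`, and `recompress_chunk` above receive the same `Since 1.5.0` badge. A sketch of compressing eligible chunks by hand rather than through a policy (the table and chunk names are hypothetical):

```sql
-- Compress every chunk of `conditions` older than seven days.
SELECT compress_chunk(c, if_not_compressed => true)
FROM show_chunks('conditions', older_than => INTERVAL '7 days') AS c;

-- Undo for a single chunk, for example before a large backfill.
SELECT decompress_chunk('_timescaledb_internal._hyper_1_2_chunk');
```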
diff --git a/api-reference/timescaledb/continuous-aggregates/add_continuous_aggregate_policy.mdx b/api-reference/timescaledb/continuous-aggregates/add_continuous_aggregate_policy.mdx index e498407..f4114bf 100644 --- a/api-reference/timescaledb/continuous-aggregates/add_continuous_aggregate_policy.mdx +++ b/api-reference/timescaledb/continuous-aggregates/add_continuous_aggregate_policy.mdx @@ -11,6 +11,8 @@ products: [cloud, self_hosted, mst] import { CAGG, HYPERTABLE, TIMESCALE_DB } from '/snippets/vars.mdx'; + Since 1.7.0 + Create a policy that automatically refreshes a {CAGG}. To view the policies that you set or the policies that already exist, see [informational views][informational-views]. diff --git a/api-reference/timescaledb/continuous-aggregates/add_policies.mdx b/api-reference/timescaledb/continuous-aggregates/add_policies.mdx index 5ba1198..9c65547 100644 --- a/api-reference/timescaledb/continuous-aggregates/add_policies.mdx +++ b/api-reference/timescaledb/continuous-aggregates/add_policies.mdx @@ -11,7 +11,7 @@ products: [cloud, self_hosted, mst] import { CAGG, HYPERTABLE, CHUNK } from '/snippets/vars.mdx'; - Early access + Early access Since 2.10.0 Add refresh, compression, and data retention policies to a {CAGG} in one step. The added compression and retention policies apply to the diff --git a/api-reference/timescaledb/continuous-aggregates/alter_materialized_view.mdx b/api-reference/timescaledb/continuous-aggregates/alter_materialized_view.mdx index c6b8743..173caa2 100644 --- a/api-reference/timescaledb/continuous-aggregates/alter_materialized_view.mdx +++ b/api-reference/timescaledb/continuous-aggregates/alter_materialized_view.mdx @@ -12,6 +12,8 @@ products: [cloud, self_hosted, mst] import { CAGG, HYPERTABLE, COLUMNSTORE, CHUNK, TIMESCALE_DB, PG } from '/snippets/vars.mdx'; + Since 1.3.0 + You use the `ALTER MATERIALIZED VIEW` statement to modify some of the `WITH` clause [options][create_materialized_view] for a {CAGG} view. 
You can only set the `continuous` and `create_group_indexes` options when you [create a {CAGG}][create_materialized_view]. `ALTER MATERIALIZED VIEW` also @@ -71,6 +73,10 @@ ALTER MATERIALIZED VIEW SET ( timescaledb. = [, . | `timescaledb.enable_cagg_window_functions` | BOOLEAN | `false` | - | EXPERIMENTAL: enable window functions on {CAGG}s. Support is experimental, as there is a risk of data inconsistency. For example, in backfill scenarios, buckets could be missed. | | `timescaledb.chunk_interval` (formerly `timescaledb.chunk_time_interval`) | INTERVAL | 10x the original {HYPERTABLE}. | - | Set the {CHUNK} interval. Renamed in {TIMESCALE_DB} V2.20. | +## Returns + +For standard `ALTER MATERIALIZED VIEW` return behavior, see the [PostgreSQL ALTER MATERIALIZED VIEW documentation][postgres-alterview]. + [create_materialized_view]: /api-reference/timescaledb/continuous-aggregates/create_materialized_view#arguments [postgres-alterview]: https://www.postgresql.org/docs/current/sql-alterview.html [create-cagg]: /use-timescale/latest/continuous-aggregates/create-a-continuous-aggregate/ diff --git a/api-reference/timescaledb/continuous-aggregates/alter_policies.mdx b/api-reference/timescaledb/continuous-aggregates/alter_policies.mdx index 44ebc3d..ebec033 100644 --- a/api-reference/timescaledb/continuous-aggregates/alter_policies.mdx +++ b/api-reference/timescaledb/continuous-aggregates/alter_policies.mdx @@ -12,7 +12,7 @@ products: [cloud, self_hosted, mst] import { CAGG, HYPERTABLE, COLUMNSTORE, CHUNK } from '/snippets/vars.mdx'; - Early access + Early access Since 2.10.0 Alter refresh, {COLUMNSTORE}, or data retention policies on a {CAGG}. The altered {COLUMNSTORE} and retention policies apply to the @@ -71,4 +71,4 @@ time bucket is based on integers. ## Returns -Returns true if successful. +Returns `true` if successful. 
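`add_continuous_aggregate_policy` (badged `Since 1.7.0` above) and the early-access `add_policies`/`alter_policies` helpers manage refresh behavior for a continuous aggregate. A sketch, assuming a continuous aggregate named `conditions_daily`:

```sql
-- Refresh the window from 1 month ago up to 1 hour ago, every hour.
SELECT add_continuous_aggregate_policy('conditions_daily',
    start_offset      => INTERVAL '1 month',
    end_offset        => INTERVAL '1 hour',
    schedule_interval => INTERVAL '1 hour'
);
```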
diff --git a/api-reference/timescaledb/continuous-aggregates/cagg_migrate.mdx b/api-reference/timescaledb/continuous-aggregates/cagg_migrate.mdx index 0ff1b02..f5aafea 100644 --- a/api-reference/timescaledb/continuous-aggregates/cagg_migrate.mdx +++ b/api-reference/timescaledb/continuous-aggregates/cagg_migrate.mdx @@ -9,8 +9,11 @@ type: procedure products: [cloud, self_hosted, mst] --- +import ReturnsVoid from '/snippets/api-reference/timescaledb/_returns-void.mdx'; import { CAGG, TIMESCALE_DB } from '/snippets/vars.mdx'; + Since 2.7.0 + Migrate a {CAGG} from the old format to the new format introduced in {TIMESCALE_DB} 2.7. @@ -58,4 +61,8 @@ CALL cagg_migrate( | `override` | `BOOLEAN` | `false` | - | If false, the old {CAGG} keeps its name. The new {CAGG} is named `_new`. If true, the new {CAGG} gets the old name. The old {CAGG} is renamed `_old`. | | `drop_old` | `BOOLEAN` | `false` | - | If true, the old {CAGG} is deleted. Must be used together with `override`. | +## Returns + + + [how-to-migrate]: /use-timescale/latest/continuous-aggregates/migrate/ diff --git a/api-reference/timescaledb/continuous-aggregates/create_materialized_view.mdx b/api-reference/timescaledb/continuous-aggregates/create_materialized_view.mdx index c6122d9..5bba36e 100644 --- a/api-reference/timescaledb/continuous-aggregates/create_materialized_view.mdx +++ b/api-reference/timescaledb/continuous-aggregates/create_materialized_view.mdx @@ -111,8 +111,14 @@ WITH (timescaledb.continuous) AS | `timescaledb.materialized_only` | BOOLEAN | `TRUE` | - | Return only materialized data when querying the {CAGG} view | | `timescaledb.invalidate_using` | TEXT | `trigger` | - | Set to `wal` to read changes from the WAL using logical decoding, then update the materialization invalidations for {CAGG}s using this information. This reduces the I/O and CPU needed to manage the {HYPERTABLE} invalidation log. 
Set to `trigger` to collect invalidations whenever there are inserts, updates, or deletes to a {HYPERTABLE}. This default behaviour uses more resources than `wal`. | +## Returns + +For standard `CREATE MATERIALIZED VIEW` return behavior, see the [PostgreSQL CREATE MATERIALIZED VIEW documentation][postgres-create-matview]. + For more information, see the [real-time aggregates][real-time-aggregates] section. +[postgres-create-matview]: https://www.postgresql.org/docs/current/sql-creatematerializedview.html + [cagg-how-tos]: /use-timescale/latest/continuous-aggregates/ [real-time-aggregates]: /use-timescale/latest/continuous-aggregates/real-time-aggregates/ [refresh-cagg]: /api-reference/timescaledb/continuous-aggregates/refresh_continuous_aggregate diff --git a/api-reference/timescaledb/continuous-aggregates/drop_materialized_view.mdx b/api-reference/timescaledb/continuous-aggregates/drop_materialized_view.mdx index 97d4d14..7cadf65 100644 --- a/api-reference/timescaledb/continuous-aggregates/drop_materialized_view.mdx +++ b/api-reference/timescaledb/continuous-aggregates/drop_materialized_view.mdx @@ -12,6 +12,8 @@ products: [cloud, self_hosted, mst] import { CAGG, HYPERTABLE } from '/snippets/vars.mdx'; + Since 1.3.0 + {CAGG_CAP} views can be dropped using the `DROP MATERIALIZED VIEW` statement. This statement deletes the {CAGG} and all its internal @@ -45,3 +47,9 @@ DROP MATERIALIZED VIEW [IF EXISTS] ; | Name | Type | Default | Required | Description | |-|-|-|-|-| | `` | TEXT | - | ✔ | Name (optionally schema-qualified) of {CAGG} view to be dropped. | + +## Returns + +For standard `DROP MATERIALIZED VIEW` return behavior, see the [PostgreSQL DROP MATERIALIZED VIEW documentation][postgres-drop-matview]. 
+ +[postgres-drop-matview]: https://www.postgresql.org/docs/current/sql-dropmaterializedview.html diff --git a/api-reference/timescaledb/continuous-aggregates/refresh_continuous_aggregate.mdx b/api-reference/timescaledb/continuous-aggregates/refresh_continuous_aggregate.mdx index 4429aab..7a47154 100644 --- a/api-reference/timescaledb/continuous-aggregates/refresh_continuous_aggregate.mdx +++ b/api-reference/timescaledb/continuous-aggregates/refresh_continuous_aggregate.mdx @@ -8,8 +8,11 @@ type: function products: [cloud, self_hosted, mst] --- +import ReturnsVoid from '/snippets/api-reference/timescaledb/_returns-void.mdx'; import { CAGG, HYPERTABLE } from '/snippets/vars.mdx'; + Since 1.3.0 + Refresh all buckets of a {CAGG} in the refresh window given by `window_start` and `window_end`. @@ -114,5 +117,9 @@ not take place when buckets are materialized with no data changes or with changes that only occurred in the secondary table used in the JOIN. +## Returns + + + [modify-parameters]: /use-timescale/latest/configuration/customize-configuration/ [create_materialized_view]: /api-reference/timescaledb/continuous-aggregates/create_materialized_view diff --git a/api-reference/timescaledb/continuous-aggregates/remove_all_policies.mdx b/api-reference/timescaledb/continuous-aggregates/remove_all_policies.mdx index 87f6bf7..c88c453 100644 --- a/api-reference/timescaledb/continuous-aggregates/remove_all_policies.mdx +++ b/api-reference/timescaledb/continuous-aggregates/remove_all_policies.mdx @@ -11,7 +11,7 @@ products: [cloud, self_hosted, mst] import { CAGG, HYPERTABLE, COLUMNSTORE } from '/snippets/vars.mdx'; - Early access + Early access Since 2.10.0 Remove all policies from a {CAGG}. The removed {COLUMNSTORE} and retention policies apply to the {CAGG}, _not_ to the original @@ -52,4 +52,4 @@ CALL remove_all_policies( ## Returns -Returns true if successful. +Returns `true` if successful. 
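The `create_materialized_view` and `refresh_continuous_aggregate` pages above now point readers to the standard PostgreSQL return behavior. The two calls work together; a sketch with a hypothetical `conditions` hypertable:

```sql
-- Define the continuous aggregate.
CREATE MATERIALIZED VIEW conditions_daily
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', time) AS day,
       device_id,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY day, device_id;

-- Manually refresh one window (usually a refresh policy does this).
CALL refresh_continuous_aggregate('conditions_daily',
    '2025-01-01', '2025-02-01');
```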
diff --git a/api-reference/timescaledb/continuous-aggregates/remove_continuous_aggregate_policy.mdx b/api-reference/timescaledb/continuous-aggregates/remove_continuous_aggregate_policy.mdx index c509e4b..d7d01dc 100644 --- a/api-reference/timescaledb/continuous-aggregates/remove_continuous_aggregate_policy.mdx +++ b/api-reference/timescaledb/continuous-aggregates/remove_continuous_aggregate_policy.mdx @@ -9,8 +9,11 @@ type: function products: [cloud, self_hosted, mst] --- +import ReturnsVoid from '/snippets/api-reference/timescaledb/_returns-void.mdx'; import { CAGG, TIMESCALE_DB } from '/snippets/vars.mdx'; + Since 1.7.0 + Remove all refresh policies from a {CAGG}. ```sql @@ -21,7 +24,7 @@ remove_continuous_aggregate_policy( ``` -To view the existing {CAGG} policies, see the [policies informational view](/api-reference/timescaledb/informational-views/policies/). +To view the existing {CAGG} policies, see the [policies informational view](/api-reference/timescaledb/informational-views/policies). ## Samples @@ -47,3 +50,7 @@ SELECT remove_continuous_aggregate_policy( |-|-|-|-|-| | `continuous_aggregate` | `REGCLASS` | - | ✔ | Name of the {CAGG} the policies should be removed from | | `if_exists` (formerly `if_not_exists`) | `BOOL` | false | - | When true, prints a warning instead of erroring if the policy doesn't exist. Renamed in {TIMESCALE_DB} 2.8. | + +## Returns + + diff --git a/api-reference/timescaledb/continuous-aggregates/remove_policies.mdx b/api-reference/timescaledb/continuous-aggregates/remove_policies.mdx index 94f715b..0931e32 100644 --- a/api-reference/timescaledb/continuous-aggregates/remove_policies.mdx +++ b/api-reference/timescaledb/continuous-aggregates/remove_policies.mdx @@ -11,7 +11,7 @@ products: [cloud, self_hosted, mst] import { CAGG, HYPERTABLE, COLUMNSTORE } from '/snippets/vars.mdx'; - Early access + Early access Since 2.10.0 Remove refresh, {COLUMNSTORE}, and data retention policies from a {CAGG}. 
The removed {COLUMNSTORE} and retention policies apply to the @@ -64,6 +64,6 @@ CALL remove_policies( ## Returns -Returns true if successful. +Returns `true` if successful. [remove-all-policies]: /api-reference/timescaledb/continuous-aggregates/remove_all_policies diff --git a/api-reference/timescaledb/continuous-aggregates/show_policies.mdx b/api-reference/timescaledb/continuous-aggregates/show_policies.mdx index 8679208..66de711 100644 --- a/api-reference/timescaledb/continuous-aggregates/show_policies.mdx +++ b/api-reference/timescaledb/continuous-aggregates/show_policies.mdx @@ -11,7 +11,7 @@ products: [cloud, self_hosted, mst] import { CAGG } from '/snippets/vars.mdx'; - Early access + Early access Since 2.10.0 Show all policies that are currently set on a {CAGG}. diff --git a/api-reference/timescaledb/data-retention/add_retention_policy.mdx b/api-reference/timescaledb/data-retention/add_retention_policy.mdx index 6c87029..bde9a8a 100644 --- a/api-reference/timescaledb/data-retention/add_retention_policy.mdx +++ b/api-reference/timescaledb/data-retention/add_retention_policy.mdx @@ -9,7 +9,7 @@ type: function products: [cloud, self_hosted, mst] --- - Community + Community Since 1.2.0 import {CAGG, CHUNK, HYPERTABLE, TIMESCALE_DB} from '/snippets/vars.mdx'; diff --git a/api-reference/timescaledb/data-retention/remove_retention_policy.mdx b/api-reference/timescaledb/data-retention/remove_retention_policy.mdx index 515c898..e4d4fde 100644 --- a/api-reference/timescaledb/data-retention/remove_retention_policy.mdx +++ b/api-reference/timescaledb/data-retention/remove_retention_policy.mdx @@ -9,9 +9,10 @@ type: function products: [cloud, self_hosted, mst] --- +import ReturnsVoid from '/snippets/api-reference/timescaledb/_returns-void.mdx'; import {CHUNK, HYPERTABLE} from '/snippets/vars.mdx'; - Community + Community Since 1.2.0 Remove a policy to drop {CHUNK}s of a particular {HYPERTABLE}. 
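A minimal sketch of the `remove_retention_policy` call documented in this hunk, assuming a hypothetical hypertable named `conditions`:

```sql
-- Drop the retention policy from a hypothetical hypertable;
-- the function returns void.
SELECT remove_retention_policy('conditions');

-- With if_exists => true, a missing policy raises a warning
-- instead of an error.
SELECT remove_retention_policy('conditions', if_exists => true);
```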
@@ -38,3 +39,7 @@ SELECT remove_retention_policy( |-|-|-|-|-| | `relation` | REGCLASS | - | ✔ | Name of the hypertable or continuous aggregate from which to remove the policy | | `if_exists` | BOOLEAN | `false` | - | Set to `true` to avoid throwing an error if the policy does not exist. | + +## Returns + + diff --git a/api-reference/timescaledb/hypercore/add_columnstore_policy.mdx b/api-reference/timescaledb/hypercore/add_columnstore_policy.mdx index ae91300..4c216e8 100644 --- a/api-reference/timescaledb/hypercore/add_columnstore_policy.mdx +++ b/api-reference/timescaledb/hypercore/add_columnstore_policy.mdx @@ -12,6 +12,7 @@ products: [cloud, mst, self_hosted] import OldCreateHypertable from '/snippets/api-reference/timescaledb/hypercore/_old-api-create-hypertable.mdx'; import CreateHypertablePolicyNote from '/snippets/api-reference/timescaledb/hypercore/_create-hypertable-columnstore-policy-note.mdx'; +import ReturnsVoid from '/snippets/api-reference/timescaledb/_returns-void.mdx'; import { COLUMNSTORE, ROWSTORE, TIMESCALE_DB, CHUNK, HYPERTABLE, CAGG } from '/snippets/vars.mdx'; Since 2.18.0 @@ -156,11 +157,15 @@ Calls to `add_columnstore_policy` require either `after` or `created_before`, bu | `timezone` |TEXT| UTC. However, daylight saving time (DST) changes may shift this alignment. | ✖ | Set to a valid time zone to mitigate DST shifting. If `initial_start` is set, subsequent executions of this policy are aligned on `initial_start`.
| | `if_not_exists` |BOOLEAN| `false` | ✖ | Set to `true` so this job fails with a warning rather than an error if a {COLUMNSTORE} policy already exists on `hypertable` | -[compression_alter-table]: /api-reference/timescaledb/hypercore/alter_table/ -[compression_continuous-aggregate]: /api-reference/timescaledb/continuous-aggregates/alter_materialized_view/ -[set_integer_now_func]: /api-reference/timescaledb/hypertable/set_integer_now_func -[informational-views]: /api-reference/timescaledb/informational-views/jobs/ -[chunk_time_interval]: /api-reference/timescaledb/hypertable/set_chunk_time_interval/ +## Returns + + + +[compression_alter-table]: /api-reference/timescaledb/hypercore/alter_table +[compression_continuous-aggregate]: /api-reference/timescaledb/continuous-aggregates/alter_materialized_view +[set_integer_now_func]: /api-reference/timescaledb/hypertables/set_integer_now_func +[informational-views]: /api-reference/timescaledb/informational-views/jobs +[chunk_time_interval]: /api-reference/timescaledb/hypertables/set_chunk_time_interval [next-start]: /api-reference/timescaledb/informational-views/jobs/#arguments [job]: /api-reference/timescaledb/jobs-automation/add_job/ [remove_columnstore_policy]: /api-reference/timescaledb/hypercore/remove_columnstore_policy/ @@ -169,7 +174,7 @@ Calls to `add_columnstore_policy` require either `after` or `created_before`, bu [hypercore]: /manage-data/data-management/hypercore/ [secondary-indexes]: /manage-data/data-management/hypercore/secondary-indexes/ [bloom-filters]: https://en.wikipedia.org/wiki/Bloom_filter -[create_table_arguments]: /api-reference/timescaledb/hypertable/create_table/#arguments +[create_table_arguments]: /api-reference/timescaledb/hypertables/create_table/#arguments [alter_job_samples]: /api-reference/timescaledb/jobs-automation/alter_job/#samples -[add_columnstore_policy]: /api-reference/timescaledb/hypercore/add_columnstore_policy/ +[add_columnstore_policy]: 
/api-reference/timescaledb/hypercore/add_columnstore_policy [tsdb-release-2-19-3]: https://github.com/timescale/timescaledb/releases/tag/2.19.3 diff --git a/api-reference/timescaledb/hypercore/alter_table.mdx b/api-reference/timescaledb/hypercore/alter_table.mdx index b60ecba..6f3a002 100644 --- a/api-reference/timescaledb/hypercore/alter_table.mdx +++ b/api-reference/timescaledb/hypercore/alter_table.mdx @@ -95,11 +95,16 @@ ALTER TABLE SET (timescaledb.enable_columnstore, | `ALTER` | TEXT | | ✖ | Set a specific column in the columnstore to be `NOT NULL`. | | `ADD CONSTRAINT` | TEXT | | ✖ | Add `UNIQUE` constraints to data in the columnstore. | -[chunk_time_interval]: /api-reference/timescaledb/hypertable/set_chunk_time_interval/ -[add_columnstore_policy]: /api-reference/timescaledb/hypercore/add_columnstore_policy/ -[convert_to_columnstore]: /api-reference/timescaledb/hypercore/convert_to_columnstore/ -[convert_to_rowstore]: /api-reference/timescaledb/hypercore/convert_to_rowstore/ -[job]: /api-reference/timescaledb/jobs-automation/add_job/ +## Returns + +For standard `ALTER TABLE` return behavior, see the [PostgreSQL ALTER TABLE documentation][postgres-alter-table]. 
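The `ALTER TABLE ... SET` options listed in this hunk can be combined as in the following sketch; the table and column names are hypothetical:

```sql
-- Enable the columnstore on a hypothetical hypertable, segmenting
-- by device and ordering by time within each segment.
ALTER TABLE metrics SET (
  timescaledb.enable_columnstore = true,
  timescaledb.segmentby = 'device_id',
  timescaledb.orderby = '"time" DESC'
);
```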
+ +[postgres-alter-table]: https://www.postgresql.org/docs/current/sql-altertable.html +[chunk_time_interval]: /api-reference/timescaledb/hypertables/set_chunk_time_interval +[add_columnstore_policy]: /api-reference/timescaledb/hypercore/add_columnstore_policy +[convert_to_columnstore]: /api-reference/timescaledb/hypercore/convert_to_columnstore +[convert_to_rowstore]: /api-reference/timescaledb/hypercore/convert_to_rowstore +[job]: /api-reference/timescaledb/jobs-automation/add_job [default_table_access_method]: https://www.postgresql.org/docs/17/runtime-config-client.html#GUC-DEFAULT-TABLE-ACCESS-METHOD -[create-hypertable]: /api-reference/timescaledb/hypertable/create_hypertable +[create-hypertable]: /api-reference/timescaledb/hypertables/create_hypertable [bloom-filters]: https://en.wikipedia.org/wiki/Bloom_filter diff --git a/api-reference/timescaledb/hypercore/chunk_columnstore_stats.mdx b/api-reference/timescaledb/hypercore/chunk_columnstore_stats.mdx index ce6fa3d..e966aab 100644 --- a/api-reference/timescaledb/hypercore/chunk_columnstore_stats.mdx +++ b/api-reference/timescaledb/hypercore/chunk_columnstore_stats.mdx @@ -116,8 +116,8 @@ SELECT * FROM chunk_columnstore_stats( |`node_name`|TEXT| **DEPRECATED**: nodes the {CHUNK} is located on, applicable only to distributed {HYPERTABLE}s. 
| -[add_columnstore_policy]: /api-reference/timescaledb/hypercore/add_columnstore_policy/ -[convert_to_columnstore]: /api-reference/timescaledb/hypercore/convert_to_columnstore/ -[job]: /api-reference/timescaledb/jobs-automation/add_job/ -[chunks_detailed_size]: /api-reference/timescaledb/hypertable/chunks_detailed_size/ -[hypertable-create-table]: /api-reference/timescaledb/hypertable/create_table/ +[add_columnstore_policy]: /api-reference/timescaledb/hypercore/add_columnstore_policy +[convert_to_columnstore]: /api-reference/timescaledb/hypercore/convert_to_columnstore +[job]: /api-reference/timescaledb/jobs-automation/add_job +[chunks_detailed_size]: /api-reference/timescaledb/hypertables/chunks_detailed_size +[hypertable-create-table]: /api-reference/timescaledb/hypertables/create_table diff --git a/api-reference/timescaledb/hypercore/convert_to_columnstore.mdx b/api-reference/timescaledb/hypercore/convert_to_columnstore.mdx index 5b1b6ad..cb75139 100644 --- a/api-reference/timescaledb/hypercore/convert_to_columnstore.mdx +++ b/api-reference/timescaledb/hypercore/convert_to_columnstore.mdx @@ -58,6 +58,6 @@ Calls to `convert_to_columnstore` return: | `chunk name` or `table` | REGCLASS or String | The name of the {CHUNK} added to the {COLUMNSTORE}, or a table-like result set with zero or more rows. 
| -[add_columnstore_policy]: /api-reference/timescaledb/hypercore/add_columnstore_policy/ -[run-job]: /api-reference/timescaledb/jobs-automation/run_job/ -[convert_to_rowstore]: /api-reference/timescaledb/hypercore/convert_to_rowstore/ +[add_columnstore_policy]: /api-reference/timescaledb/hypercore/add_columnstore_policy +[run-job]: /api-reference/timescaledb/jobs-automation/run_job +[convert_to_rowstore]: /api-reference/timescaledb/hypercore/convert_to_rowstore diff --git a/api-reference/timescaledb/hypercore/convert_to_rowstore.mdx b/api-reference/timescaledb/hypercore/convert_to_rowstore.mdx index d2ca044..ad4aaad 100644 --- a/api-reference/timescaledb/hypercore/convert_to_rowstore.mdx +++ b/api-reference/timescaledb/hypercore/convert_to_rowstore.mdx @@ -9,6 +9,7 @@ products: [cloud, mst, self_hosted] --- import HypercoreManualWorkflow from '/snippets/api-reference/timescaledb/hypercore/_hypercore-manual-workflow.mdx'; +import ReturnsVoid from '/snippets/api-reference/timescaledb/_returns-void.mdx'; import { COLUMNSTORE, ROWSTORE, TIMESCALE_DB, CHUNK, HYPERTABLE } from '/snippets/vars.mdx'; Since 2.18.0 @@ -44,6 +45,10 @@ CALL convert_to_rowstore( |`chunk`| REGCLASS | - | ✖ | Name of the {CHUNK} to be moved to the {ROWSTORE}. 
| |`if_compressed`| BOOLEAN | `true` | ✔ | Set to `false` so this job fails with an error rather than a warning if `chunk` is not in the {COLUMNSTORE} | +## Returns + + + [job]: /api-reference/timescaledb/jobs-automation/ [alter_job]: /api-reference/timescaledb/jobs-automation/alter_job/ [convert_to_columnstore]: /api-reference/timescaledb/hypercore/convert_to_columnstore/ diff --git a/api-reference/timescaledb/hypercore/remove_columnstore_policy.mdx b/api-reference/timescaledb/hypercore/remove_columnstore_policy.mdx index 0e34415..fe00bf1 100644 --- a/api-reference/timescaledb/hypercore/remove_columnstore_policy.mdx +++ b/api-reference/timescaledb/hypercore/remove_columnstore_policy.mdx @@ -9,6 +9,7 @@ tags: [delete, drop] products: [cloud, mst, self_hosted] --- +import ReturnsVoid from '/snippets/api-reference/timescaledb/_returns-void.mdx'; import { COLUMNSTORE, HYPERTABLE, CAGG } from '/snippets/vars.mdx'; Since 2.18.0 @@ -52,6 +53,9 @@ CALL remove_columnstore_policy( |`hypertable`|REGCLASS|-|✔| Name of the {HYPERTABLE} or {CAGG} to remove the policy from| | `if_exists` | BOOLEAN | `false` |✖| Set to `true` so this job fails with a warning rather than an error if a {COLUMNSTORE} policy does not exist on `hypertable` | +## Returns -[informational-views]: /api-reference/timescaledb/informational-views/jobs/ -[add_columnstore_policy]: /api-reference/timescaledb/hypercore/add_columnstore_policy/ + + +[informational-views]: /api-reference/timescaledb/informational-views/jobs +[add_columnstore_policy]: /api-reference/timescaledb/hypercore/add_columnstore_policy diff --git a/api-reference/timescaledb/hyperfunctions/distribution-analysis/approximate_row_count.mdx b/api-reference/timescaledb/hyperfunctions/distribution-analysis/approximate_row_count.mdx index c3f575e..568cdaa 100644 --- a/api-reference/timescaledb/hyperfunctions/distribution-analysis/approximate_row_count.mdx +++ b/api-reference/timescaledb/hyperfunctions/distribution-analysis/approximate_row_count.mdx
@@ -58,4 +58,6 @@ SELECT approximate_row_count( ## Returns -A numeric estimate of the number of rows in the specified table or {HYPERTABLE}. \ No newline at end of file +|Column|Type|Description| +|-|-|-| +| `approximate_row_count` | BIGINT | A numeric estimate of the number of rows in the specified table or {HYPERTABLE}. | \ No newline at end of file diff --git a/api-reference/timescaledb/hyperfunctions/distribution-analysis/histogram.mdx b/api-reference/timescaledb/hyperfunctions/distribution-analysis/histogram.mdx index 6af3e59..3ca216f 100644 --- a/api-reference/timescaledb/hyperfunctions/distribution-analysis/histogram.mdx +++ b/api-reference/timescaledb/hyperfunctions/distribution-analysis/histogram.mdx @@ -71,4 +71,10 @@ SELECT histogram( | `value` | ANY VALUE | - | ✔ | A set of values to partition into a histogram | | `min` | NUMERIC | - | ✔ | The histogram's lower bound used in bucketing (inclusive) | | `max` | NUMERIC | - | ✔ | The histogram's upper bound used in bucketing (exclusive) | -| `nbuckets` | INTEGER | - | ✔ | The integer value for the number of histogram buckets (partitions) | \ No newline at end of file +| `nbuckets` | INTEGER | - | ✔ | The integer value for the number of histogram buckets (partitions) | + +## Returns + +|Column|Type|Description| +|-|-|-| +| `histogram` | BIGINT[ ] | An array of counts, with `nbuckets` + 2 elements. The first element is the count of values less than `min`, the last element is the count of values greater than or equal to `max`, and the middle elements are counts for each bucket in the range. | \ No newline at end of file diff --git a/api-reference/timescaledb/hyperfunctions/index.mdx b/api-reference/timescaledb/hyperfunctions/index.mdx index 54acd85..b5bfaf6 100644 --- a/api-reference/timescaledb/hyperfunctions/index.mdx +++ b/api-reference/timescaledb/hyperfunctions/index.mdx @@ -88,9 +88,9 @@ see the [{TOOLKIT_LONG} API reference][toolkit]. 
- [`time_bucket_ng()`][time_bucket_ng]: next generation time bucketing with additional features -[toolkit]: /api-reference/timescaledb-toolkit/index +[toolkit]: /api-reference/timescaledb-toolkit -[time-series-utilities]: /api-reference/timescaledb/hyperfunctions/time-series-utilities/index +[time-series-utilities]: /api-reference/timescaledb/hyperfunctions/time-series-utilities [time_bucket]: /api-reference/timescaledb/hyperfunctions/time-series-utilities/time_bucket [time_bucket_ng]: /api-reference/timescaledb/hyperfunctions/legacy/time_bucket_ng [first]: /api-reference/timescaledb/hyperfunctions/time-series-utilities/first @@ -98,11 +98,11 @@ see the [{TOOLKIT_LONG} API reference][toolkit]. [days_in_month]: /api-reference/timescaledb/hyperfunctions/time-series-utilities/days_in_month [month_normalize]: /api-reference/timescaledb/hyperfunctions/time-series-utilities/month_normalize -[gapfilling]: /api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/index +[gapfilling]: /api-reference/timescaledb/hyperfunctions/time_bucket_gapfill [time_bucket_gapfill]: /api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/time_bucket_gapfill [locf]: /api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/locf [interpolate]: /api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/interpolate -[distribution-analysis]: /api-reference/timescaledb/hyperfunctions/distribution-analysis/index +[distribution-analysis]: /api-reference/timescaledb/hyperfunctions/distribution-analysis [histogram]: /api-reference/timescaledb/hyperfunctions/distribution-analysis/histogram [approximate_row_count]: /api-reference/timescaledb/hyperfunctions/distribution-analysis/approximate_row_count diff --git a/api-reference/timescaledb/hyperfunctions/legacy/time_bucket_ng.mdx b/api-reference/timescaledb/hyperfunctions/legacy/time_bucket_ng.mdx index 7cdaeea..4a3edc1 100644 --- a/api-reference/timescaledb/hyperfunctions/legacy/time_bucket_ng.mdx +++ 
b/api-reference/timescaledb/hyperfunctions/legacy/time_bucket_ng.mdx @@ -15,7 +15,7 @@ products: [cloud, mst, self_hosted] import { PG, TIMESCALE_DB, HYPERTABLE } from '/snippets/vars.mdx'; - Deprecated + Deprecated Since 2.0.0 The `time_bucket_ng()` function is an experimental version of the [`time_bucket()`][time_bucket] function. It introduced some new capabilities, @@ -218,7 +218,9 @@ can't be used with continuous aggregates. Best practice is to use ## Returns -The function returns the bucket's start time. The return value type is the same as `ts`. +|Column|Type|Description| +|-|-|-| +| `time_bucket_ng` | DATE, TIMESTAMP, or TIMESTAMPTZ | The bucket's start time. The return type matches the input `ts` type. | -[time_bucket]: /api-reference/timescaledb/hyperfunctions/time_bucket +[time_bucket]: /api-reference/timescaledb/hyperfunctions/time-series-utilities/time_bucket [caggs]: /manage-data/timescaledb/data-management/continuous-aggregates/ \ No newline at end of file diff --git a/api-reference/timescaledb/hyperfunctions/time-series-utilities/days_in_month.mdx b/api-reference/timescaledb/hyperfunctions/time-series-utilities/days_in_month.mdx index d1b9e86..f53c7b9 100644 --- a/api-reference/timescaledb/hyperfunctions/time-series-utilities/days_in_month.mdx +++ b/api-reference/timescaledb/hyperfunctions/time-series-utilities/days_in_month.mdx @@ -43,4 +43,10 @@ SELECT days_in_month( | Name | Type | Default | Required | Description | |--|--|--|--|--| -| `date` | TIMESTAMPTZ | - | ✔ | Timestamp to use to calculate how many days in the month | \ No newline at end of file +| `date` | TIMESTAMPTZ | - | ✔ | Timestamp to use to calculate how many days in the month | + +## Returns + +|Column|Type|Description| +|-|-|-| +| `days_in_month` | INTEGER | The number of days in the month of the input timestamp. 
| \ No newline at end of file diff --git a/api-reference/timescaledb/hyperfunctions/time-series-utilities/first.mdx b/api-reference/timescaledb/hyperfunctions/time-series-utilities/first.mdx index 7b75967..2a39b23 100644 --- a/api-reference/timescaledb/hyperfunctions/time-series-utilities/first.mdx +++ b/api-reference/timescaledb/hyperfunctions/time-series-utilities/first.mdx @@ -63,4 +63,10 @@ SELECT first( | Name | Type | Default | Required | Description | |--|--|--|--|--| | `value` | TEXT | - | ✔ | The value to return | -| `time` | TIMESTAMP or INTEGER | - | ✔ | The timestamp to use for comparison | \ No newline at end of file +| `time` | TIMESTAMP or INTEGER | - | ✔ | The timestamp to use for comparison | + +## Returns + +|Column|Type|Description| +|-|-|-| +| `first` | ANY ELEMENT | The value from the `value` column corresponding to the earliest `time` within the aggregate group. The return type matches the `value` input type. | \ No newline at end of file diff --git a/api-reference/timescaledb/hyperfunctions/time-series-utilities/last.mdx b/api-reference/timescaledb/hyperfunctions/time-series-utilities/last.mdx index 1bfeda2..78d20e8 100644 --- a/api-reference/timescaledb/hyperfunctions/time-series-utilities/last.mdx +++ b/api-reference/timescaledb/hyperfunctions/time-series-utilities/last.mdx @@ -66,4 +66,10 @@ SELECT last( | Name | Type | Default | Required | Description | |--|--|--|--|--| | `value` | ANY ELEMENT | - | ✔ | The value to return | -| `time` | TIMESTAMP or INTEGER | - | ✔ | The timestamp to use for comparison | \ No newline at end of file +| `time` | TIMESTAMP or INTEGER | - | ✔ | The timestamp to use for comparison | + +## Returns + +|Column|Type|Description| +|-|-|-| +| `last` | ANY ELEMENT | The value from the `value` column corresponding to the latest `time` within the aggregate group. The return type matches the `value` input type. 
| \ No newline at end of file diff --git a/api-reference/timescaledb/hyperfunctions/time-series-utilities/month_normalize.mdx b/api-reference/timescaledb/hyperfunctions/time-series-utilities/month_normalize.mdx index e68c893..8a603eb 100644 --- a/api-reference/timescaledb/hyperfunctions/time-series-utilities/month_normalize.mdx +++ b/api-reference/timescaledb/hyperfunctions/time-series-utilities/month_normalize.mdx @@ -68,4 +68,10 @@ SELECT month_normalize( |--|--|--|--|--| | `metric` | float8 | - | ✔ | The metric value to normalize | | `reference_date` | TIMESTAMPTZ | - | ✔ | Timestamp to normalize the metric with | -| `days` | float8 | 365.25/12 | ❌ | Number of days to use for normalization | \ No newline at end of file +| `days` | float8 | 365.25/12 | ❌ | Number of days to use for normalization | + +## Returns + +|Column|Type|Description| +|-|-|-| +| `month_normalize` | DOUBLE PRECISION | The normalized metric value, adjusted to a standard 30.4375-day month (365.25/12). | \ No newline at end of file diff --git a/api-reference/timescaledb/hyperfunctions/time-series-utilities/time_bucket.mdx b/api-reference/timescaledb/hyperfunctions/time-series-utilities/time_bucket.mdx index e0e44dd..e7cf11a 100644 --- a/api-reference/timescaledb/hyperfunctions/time-series-utilities/time_bucket.mdx +++ b/api-reference/timescaledb/hyperfunctions/time-series-utilities/time_bucket.mdx @@ -137,4 +137,10 @@ If you use months as an interval for `bucket_width`, you cannot combine it with a non-month component. For example, `1 month` and `3 months` are both valid bucket widths, but `1 month 1 day` and `3 months 2 weeks` are not. - \ No newline at end of file + + +## Returns + +|Column|Type|Description| +|-|-|-| +| `time_bucket` | TIMESTAMP, TIMESTAMPTZ, DATE, or INTEGER | The start time of the bucket that contains the input timestamp. The return type matches the input `ts` type. 
| \ No newline at end of file diff --git a/api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/index.mdx b/api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/index.mdx index 6f4cbca..c762314 100644 --- a/api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/index.mdx +++ b/api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/index.mdx @@ -157,7 +157,7 @@ day | value - [`locf()`][locf]: fill in missing values by carrying the last observed value forward - [`interpolate()`][interpolate]: fill in missing values by linear interpolation -[time_bucket]: /api-reference/timescaledb/hyperfunctions/time_bucket +[time_bucket]: /api-reference/timescaledb/hyperfunctions/time-series-utilities/time_bucket [time_bucket_gapfill]: /api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/time_bucket_gapfill [locf]: /api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/locf [interpolate]: /api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/interpolate diff --git a/api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/interpolate.mdx b/api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/interpolate.mdx index 3a4ab8f..68b5b5c 100644 --- a/api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/interpolate.mdx +++ b/api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/interpolate.mdx @@ -37,6 +37,8 @@ SELECT interpolate( ## Returns -The gapfilled value. The return type is the type of `value`. +|Column|Type|Description| +|-|-|-| +| `interpolate` | SMALLINT, INTEGER, BIGINT, REAL, or DOUBLE PRECISION | The gapfilled value. The return type matches the input `value` type. 
| [time_bucket_gapfill]: /api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/time_bucket_gapfill diff --git a/api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/locf.mdx b/api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/locf.mdx index 8c2a38a..855c5f4 100644 --- a/api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/locf.mdx +++ b/api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/locf.mdx @@ -37,7 +37,9 @@ SELECT locf( ## Returns -The gapfilled value. The return type is the type of `value`. +|Column|Type|Description| +|-|-|-| +| `locf` | ANY ELEMENT | The gapfilled value. The return type matches the input `value` type. | [time_bucket_gapfill]: /api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/time_bucket_gapfill [interpolate]: /api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/interpolate diff --git a/api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/time_bucket_gapfill.mdx b/api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/time_bucket_gapfill.mdx index b6cba92..be8ca87 100644 --- a/api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/time_bucket_gapfill.mdx +++ b/api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/time_bucket_gapfill.mdx @@ -43,4 +43,6 @@ SELECT time_bucket_gapfill( ## Returns -The start time of the time bucket. \ No newline at end of file +|Column|Type|Description| +|-|-|-| +| `time_bucket_gapfill` | TIMESTAMPTZ or INTEGER | The start time of the time bucket. The return type matches the input `time` type. 
| \ No newline at end of file diff --git a/api-reference/timescaledb/hypertables/add_dimension.mdx b/api-reference/timescaledb/hypertables/add_dimension.mdx index c382998..b1dfd11 100644 --- a/api-reference/timescaledb/hypertables/add_dimension.mdx +++ b/api-reference/timescaledb/hypertables/add_dimension.mdx @@ -10,8 +10,12 @@ products: [cloud, mst, self_hosted] --- import DimensionInfo from '/snippets/api-reference/timescaledb/hypertables/_dimensions-info.mdx'; +import AddDimensionErrors from '/snippets/api-reference/timescaledb/_add-dimension-errors.mdx'; + import { TIMESCALE_DB, CLOUD_LONG, HYPERTABLE, HYPERTABLE_CAP, CHUNK, PG } from '/snippets/vars.mdx'; + Since 0.1.0 + Add an additional partitioning dimension to a {TIMESCALE_DB} {HYPERTABLE}. You can only execute this `add_dimension` command on an empty {HYPERTABLE}. To convert a normal table to a {HYPERTABLE}, call [create hypertable][create_hypertable]. @@ -89,6 +93,8 @@ SELECT add_dimension( |`dimension_id`|INTEGER| ID of the dimension in the {TIMESCALE_DB} internal catalog | |`created`|BOOLEAN| `true` if the dimension was added, `false` when you set `if_not_exists` to `true` and no dimension was added. 
| + + [create_hypertable]: /api-reference/timescaledb/hypertables/create_hypertable [add-dimension-old]: /api-reference/timescaledb/hypertables/add_dimension_old diff --git a/api-reference/timescaledb/hypertables/add_dimension_old.mdx b/api-reference/timescaledb/hypertables/add_dimension_old.mdx index df97bee..9458634 100644 --- a/api-reference/timescaledb/hypertables/add_dimension_old.mdx +++ b/api-reference/timescaledb/hypertables/add_dimension_old.mdx @@ -10,7 +10,8 @@ deprecated: true products: [cloud, mst, self_hosted] --- -import { TIMESCALE_DB, HYPERTABLE, CHUNK } from '/snippets/vars.mdx'; +import AddDimensionErrors from '/snippets/api-reference/timescaledb/_add-dimension-errors.mdx'; +import { CHUNK, HYPERTABLE, HYPERTABLE_CAP, TIMESCALE_DB } from '/snippets/vars.mdx'; Deprecated since {TIMESCALE_DB} v2.13.0. Use [add_dimension()][add-dimension]. @@ -151,6 +152,8 @@ SELECT add_dimension( |`column_name`|TEXT|Column name of the column to partition by| |`created`|BOOLEAN|True if the dimension was added, false when `if_not_exists` is true and no dimension was added| + + When executing this function, either `number_partitions` or `chunk_time_interval` must be supplied, which dictates if the dimension uses hash or interval partitioning. diff --git a/api-reference/timescaledb/hypertables/add_reorder_policy.mdx b/api-reference/timescaledb/hypertables/add_reorder_policy.mdx index 88b0ae2..a13ffb5 100644 --- a/api-reference/timescaledb/hypertables/add_reorder_policy.mdx +++ b/api-reference/timescaledb/hypertables/add_reorder_policy.mdx @@ -9,10 +9,12 @@ type: function products: [cloud, mst, self_hosted] --- -import { HYPERTABLE, CHUNK } from '/snippets/vars.mdx'; +import { CHUNK, HYPERTABLE, HYPERTABLE_CAP, TIMESCALE_DB } from '/snippets/vars.mdx'; Community + Since 1.2.0 + Create a policy to reorder the rows of a {HYPERTABLE}'s {CHUNK}s on a specific index.
The policy reorders the rows for all {CHUNK}s except the two most recent ones, because these are still getting writes. By default, the policy runs every 24 hours. To change the schedule, call [alter_job][alter_job] and adjust `schedule_interval`. diff --git a/api-reference/timescaledb/hypertables/attach_chunk.mdx b/api-reference/timescaledb/hypertables/attach_chunk.mdx index 39644e2..df6eb6a 100644 --- a/api-reference/timescaledb/hypertables/attach_chunk.mdx +++ b/api-reference/timescaledb/hypertables/attach_chunk.mdx @@ -10,6 +10,8 @@ products: [cloud, mst, self_hosted] Since 2.21.0 +import ReturnsVoid from '/snippets/api-reference/timescaledb/_returns-void.mdx'; + import { HYPERTABLE, HYPERTABLE_CAP, CHUNK } from '/snippets/vars.mdx'; Community @@ -61,6 +63,6 @@ CALL attach_chunk( ## Returns -This function returns void. + [hypertable-detach-chunk]: /api-reference/timescaledb/hypertables/detach_chunk diff --git a/api-reference/timescaledb/hypertables/attach_tablespace.mdx b/api-reference/timescaledb/hypertables/attach_tablespace.mdx index c15c237..7ba52d3 100644 --- a/api-reference/timescaledb/hypertables/attach_tablespace.mdx +++ b/api-reference/timescaledb/hypertables/attach_tablespace.mdx @@ -11,6 +11,8 @@ products: [cloud, mst, self_hosted] import { HYPERTABLE, HYPERTABLE_CAP, CHUNK, TIMESCALE_DB, PG } from '/snippets/vars.mdx'; + Since 0.7.0 + Attach a tablespace to a {HYPERTABLE} and use it to store {CHUNK}s. A [tablespace][postgres-tablespaces] is a directory on the filesystem that allows control over where individual tables and indexes are @@ -63,5 +65,9 @@ using the `TABLESPACE` option to `CREATE TABLE`, prior to calling `create_hypertable`, has the same effect as calling `attach_tablespace` immediately following `create_hypertable`. +## Returns + +This function returns void. 
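A short sketch of the `attach_tablespace` workflow described in the hunk above; the tablespace name, path, and hypertable are hypothetical:

```sql
-- Create a tablespace, then store new chunks of a hypothetical
-- hypertable in it. attach_tablespace returns void.
CREATE TABLESPACE history LOCATION '/mnt/history';
SELECT attach_tablespace('history', 'conditions');
```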
+ [postgres-createtablespace]: https://www.postgresql.org/docs/current/sql-createtablespace.html [postgres-tablespaces]: https://www.postgresql.org/docs/current/manage-ag-tablespaces.html diff --git a/api-reference/timescaledb/hypertables/chunks_detailed_size.mdx b/api-reference/timescaledb/hypertables/chunks_detailed_size.mdx index ed4374e..29c0504 100644 --- a/api-reference/timescaledb/hypertables/chunks_detailed_size.mdx +++ b/api-reference/timescaledb/hypertables/chunks_detailed_size.mdx @@ -9,8 +9,11 @@ type: function products: [cloud, mst, self_hosted] --- +import ReturnsNullIfNotHypertable from '/snippets/api-reference/timescaledb/_returns-null-if-not-hypertable.mdx'; import { HYPERTABLE, HYPERTABLE_CAP, CHUNK, CHUNK_CAP } from '/snippets/vars.mdx'; + Since 0.2.0 + Get information about the disk space used by the {CHUNK}s belonging to a {HYPERTABLE}, returning size information for each {CHUNK} table, any indexes on the {CHUNK}, any toast tables, and the total size associated @@ -54,17 +57,12 @@ SELECT * FROM chunks_detailed_size( |Column|Type|Description| |---|---|---| -|chunk_schema| TEXT | Schema name of the chunk | -|chunk_name| TEXT | Name of the chunk| -|table_bytes|BIGINT | Disk space used by the chunk table| +|chunk_schema| TEXT | Schema name of the {CHUNK} | +|chunk_name| TEXT | Name of the {CHUNK}| +|table_bytes|BIGINT | Disk space used by the {CHUNK} table| |index_bytes|BIGINT | Disk space used by indexes| |toast_bytes|BIGINT | Disk space of toast tables| -|total_bytes|BIGINT | Total disk space used by the chunk, including all indexes and TOAST data| +|total_bytes|BIGINT | Total disk space used by the {CHUNK}, including all indexes and TOAST data| |node_name| TEXT | Node for which size is reported, applicable only to distributed {HYPERTABLE}s| - - -If executed on a relation that is not a {HYPERTABLE}, the function -returns `NULL`. 
- - + diff --git a/api-reference/timescaledb/hypertables/create_hypertable.mdx b/api-reference/timescaledb/hypertables/create_hypertable.mdx index 6ffe580..42ad853 100644 --- a/api-reference/timescaledb/hypertables/create_hypertable.mdx +++ b/api-reference/timescaledb/hypertables/create_hypertable.mdx @@ -12,6 +12,8 @@ import DimensionInfo from '/snippets/api-reference/timescaledb/hypertables/_dime import { HYPERTABLE, CHUNK, PG, TIMESCALE_DB } from '/snippets/vars.mdx'; + Since 0.1.0 + Replace a standard {PG} relational table with a [{HYPERTABLE}][hypertable-docs] that is partitioned on a single dimension. To create a new {HYPERTABLE}, best practice is to call CREATE TABLE. @@ -28,7 +30,7 @@ as `NOT NULL`. If this is not already specified on table creation, `create_hyper this constraint on the table when it is executed. This page describes the generalized {HYPERTABLE} API introduced in {TIMESCALE_DB} v2.13. -The [old interface for `create_hypertable` is also available](/api-reference/timescaledb/hypertables/create_hypertable_old/). +The [old interface for `create_hypertable` is also available](/api-reference/timescaledb/hypertables/create_hypertable_old). ## Samples @@ -188,9 +190,24 @@ SELECT create_hypertable( ## Returns |Column|Type| Description | -|-|-|-------------------------------------------------------------------------------------------------------------| -|`hypertable_id`|INTEGER| The ID of the hypertable you created. | -|`created`|BOOLEAN| `TRUE` when the hypertable is created. `FALSE` when `if_not_exists` is `true` and no hypertable was created. | +|------|----|------------------------------------------------------------------------------------------------------------| +|`hypertable_id`|INTEGER| The ID of the {HYPERTABLE} you created. | +|`created`|BOOLEAN| `TRUE` when the {HYPERTABLE} is created. `FALSE` when `if_not_exists` is `true` and no {HYPERTABLE} was created. 
| + +On failure, an error is returned: + +| Error | Description | +|-------|-------------| +| `table "" is already a hypertable` | The table is already a {HYPERTABLE}. Use `if_not_exists => TRUE` to suppress this error. | +| `column "" does not exist` | The specified partitioning column does not exist in the table. | +| `column "" is already a dimension` | The column is already used as a partitioning dimension. | +| `cannot create a unique index without the column "" (used in partitioning)` | Unique and primary key constraints must include all partitioning columns. | +| `cannot have FOREIGN KEY constraints to hypertable ""` | Foreign key constraints to {HYPERTABLE}s are not supported. | +| `cannot create hypertable for table "" because it is part of a publication` | Tables in publications cannot be converted to {HYPERTABLE}s. | +| `invalid number of partitions: must be between 1 and 32767` | The number of hash partitions specified is outside the valid range. | +| `cannot specify both the number of partitions and an interval` | When using `by_hash`, specify either the number of partitions or an interval, not both. | +| `invalid interval type for dimension` | The chunk interval type does not match the partitioning column type. | +| `must be owner of hypertable ""` | Only the table owner can convert it to a {HYPERTABLE}. 
| @@ -201,7 +218,7 @@ SELECT create_hypertable( [inheritance]: https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-USING-INHERITANCE [migrate-data]: /api-reference/timescaledb/hypertables/create_hypertable/#arguments [dimension-info]: /api-reference/timescaledb/hypertables/create_hypertable/#dimension-info -[set_chunk_time_interval]: /api-reference/timescaledb/hypertables/set_chunk_time_interval/ +[set_chunk_time_interval]: /api-reference/timescaledb/hypertables/set_chunk_time_interval [about-constraints]: /use-timescale/schema-management/about-constraints [share-row-exclusive]: https://www.postgresql.org/docs/current/sql-lock.html [by-range]: /api-reference/timescaledb/hypertables/create_hypertable/#by_range @@ -210,4 +227,4 @@ SELECT create_hypertable( [sample-composite-columns]: /api-reference/timescaledb/hypertables/create_hypertable/#time-partition-a-hypertable-using-composite-columns-and-immutable-functions [sample-iso-formatting]: /api-reference/timescaledb/hypertables/create_hypertable/#time-partition-a-hypertable-using-iso-formatting [sample-uuidv7]: /api-reference/timescaledb/hypertables/create_hypertable/#time-partition-a-hypertable-using-iso-formatting -[uuidv7_functions]: /api-reference/timescaledb/uuid-functions/ +[uuidv7_functions]: /api-reference/timescaledb/uuid-functions diff --git a/api-reference/timescaledb/hypertables/create_hypertable_old.mdx b/api-reference/timescaledb/hypertables/create_hypertable_old.mdx index 44c2fd0..c65501c 100644 --- a/api-reference/timescaledb/hypertables/create_hypertable_old.mdx +++ b/api-reference/timescaledb/hypertables/create_hypertable_old.mdx @@ -8,7 +8,7 @@ type: function products: [cloud, mst, self_hosted] --- -import { TIMESCALE_DB, HYPERTABLE, HYPERTABLE_CAP, PG } from '/snippets/vars.mdx'; +import { CHUNK, HYPERTABLE, HYPERTABLE_CAP, PG, TIMESCALE_DB } from '/snippets/vars.mdx'; Old API since {TIMESCALE_DB} v2.13.0. Use [`create_hypertable`][api-create-hypertable]. 
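As a minimal sketch of the legacy call shape (the `conditions` table and `time` column are hypothetical):

```SQL
-- Old interface: the table and its time column are passed as positional
-- arguments. `if_not_exists => TRUE` avoids an error when `conditions`
-- is already a hypertable.
SELECT create_hypertable('conditions', 'time', if_not_exists => TRUE);
```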
@@ -132,6 +132,18 @@ If you use `SELECT * FROM create_hypertable(...)` you get the return value formatted as a table with column headings. +On failure, an error is returned: + +| Error | Description | +|-|-| +| {HYPERTABLE} "{table_name}" not found | The specified table does not exist | +| permission denied for schema {schema_name} | Insufficient permissions to access the schema | +| must be owner of {HYPERTABLE} "{table_name}" | Only the table owner can convert it to a {HYPERTABLE} | +| permissions denied: cannot create {CHUNK}s in schema "{schema_name}" | Insufficient permissions on the associated schema for {CHUNK}s | +| table "{table_name}" is already a {HYPERTABLE} | The table has already been converted to a {HYPERTABLE} | +| table "{table_name}" is not empty | The table contains data and `migrate_data` is not set to true | +| invalid partitioning function | The specified partitioning function is not valid or has an incorrect signature | + The use of the `migrate_data` argument to convert a non-empty table can lock the table for a significant amount of time, depending on how much data is in the table. It can also run into deadlock if foreign key constraints exist to diff --git a/api-reference/timescaledb/hypertables/create_index.mdx b/api-reference/timescaledb/hypertables/create_index.mdx index e4f4b4b..dc7657a 100644 --- a/api-reference/timescaledb/hypertables/create_index.mdx +++ b/api-reference/timescaledb/hypertables/create_index.mdx @@ -11,6 +11,8 @@ products: [cloud, mst, self_hosted] import { HYPERTABLE, HYPERTABLE_CAP, CHUNK, CHUNK_CAP } from '/snippets/vars.mdx'; + Since 1.3.0 + ```SQL CREATE INDEX ... WITH (timescaledb.transaction_per_chunk, ...); ``` @@ -59,4 +61,8 @@ CREATE INDEX ON conditions USING brin(time, location) WITH (timescaledb.transaction_per_chunk); ``` +## Returns + +The `CREATE INDEX` command does not return a value. Upon successful completion, an index is created on the {HYPERTABLE} and all its {CHUNK}s. 
+ [postgres-createindex]: https://www.postgresql.org/docs/current/sql-createindex.html diff --git a/api-reference/timescaledb/hypertables/create_table.mdx b/api-reference/timescaledb/hypertables/create_table.mdx index 4cb3597..a952b1d 100644 --- a/api-reference/timescaledb/hypertables/create_table.mdx +++ b/api-reference/timescaledb/hypertables/create_table.mdx @@ -173,7 +173,21 @@ WITH ( ## Returns -{TIMESCALE_DB} returns a simple message indicating success or failure. +| Return Value | Type | Description | +|--------------|------|-------------| +| CREATE TABLE | Command tag | Command completed successfully | + +On failure, an error is returned: + +| Error | Description | +|-------|-------------| +| `partition column could not be determined` | No timestamp column found for automatic partitioning. Use `tsdb.partition_column` to specify the partitioning column. | +| `column "" does not exist` | The specified partition column does not exist in the table. | +| `timescaledb options requires hypertable option` | {TIMESCALE_DB} options used without setting `tsdb.hypertable=true`. | +| `invalid input syntax for type ` | Invalid value for `tsdb.chunk_interval` for the partition column type. | +| `invalid value for tsdb.create_default_indexes ''` | Value for `tsdb.create_default_indexes` must be a boolean. | +| `unrecognized parameter ""` | Invalid {TIMESCALE_DB} parameter specified. | +| `functionality not supported under the current "apache" license` | Feature requires a {TIMESCALE_DB} license with additional capabilities. 
| [pg-create-table]: https://www.postgresql.org/docs/current/sql-createtable.html diff --git a/api-reference/timescaledb/hypertables/detach_chunk.mdx b/api-reference/timescaledb/hypertables/detach_chunk.mdx index ac1537e..fed70b0 100644 --- a/api-reference/timescaledb/hypertables/detach_chunk.mdx +++ b/api-reference/timescaledb/hypertables/detach_chunk.mdx @@ -9,7 +9,8 @@ products: [cloud, mst, self_hosted] --- -import { HYPERTABLE, HYPERTABLE_CAP, CHUNK, CHUNK_CAP } from '/snippets/vars.mdx'; +import { CHUNK, CHUNK_CAP, COLUMNSTORE, HYPERTABLE, HYPERTABLE_CAP } from '/snippets/vars.mdx'; +import ReturnsVoid from '/snippets/api-reference/timescaledb/_returns-void.mdx'; Since 2.21.0 @@ -50,7 +51,7 @@ CALL detach_chunk( ## Returns -This function returns void. + [hypertables-section]: /use-timescale/hypertables/ [setup-hypercore]: /use-timescale/hypercore/real-time-analytics-in-hypercore/ diff --git a/api-reference/timescaledb/hypertables/detach_tablespace.mdx b/api-reference/timescaledb/hypertables/detach_tablespace.mdx index dce1c46..92ab557 100644 --- a/api-reference/timescaledb/hypertables/detach_tablespace.mdx +++ b/api-reference/timescaledb/hypertables/detach_tablespace.mdx @@ -11,6 +11,8 @@ type: function import { HYPERTABLE, HYPERTABLE_CAP, CHUNK, CHUNK_CAP } from '/snippets/vars.mdx'; + Since 0.7.0 + Detach a tablespace from one or more {HYPERTABLE}s. This _only_ means that _new_ {CHUNK}s are not placed on the detached tablespace. This is useful, for instance, when a tablespace is running low on disk @@ -54,7 +56,7 @@ SELECT detach_tablespace( |Name|Type| Default | Required | Description| |---|---|---|---|---| | `tablespace` | TEXT | - | ✔ | Tablespace to detach.| -| `hypertable` | REGCLASS | - | ✖ | Hypertable to detach a the tablespace from.| +| `hypertable` | REGCLASS | - | ✖ | Hypertable to detach the tablespace from.| | `if_attached` | BOOLEAN | `FALSE` | ✖ | Set to true to avoid throwing an error if the tablespace is not attached to the given table. 
A notice is issued instead. | When giving only the tablespace name as argument, the given tablespace @@ -66,3 +68,11 @@ is issued. When specifying a specific {HYPERTABLE}, the tablespace is only detached from the given {HYPERTABLE} and thus may remain attached to other {HYPERTABLE}s. + +## Returns + +| Name | Type | Description | +|-|-|-| +| detach_tablespace | INTEGER | The number of {HYPERTABLE}s from which the tablespace was detached. | + +When called with both `tablespace` and `hypertable` arguments, returns 1 if the tablespace was successfully detached from the specified {HYPERTABLE}. When called with only the `tablespace` argument, returns the total number of {HYPERTABLE}s from which the tablespace was detached. diff --git a/api-reference/timescaledb/hypertables/detach_tablespaces.mdx b/api-reference/timescaledb/hypertables/detach_tablespaces.mdx index b4d77f0..3627202 100644 --- a/api-reference/timescaledb/hypertables/detach_tablespaces.mdx +++ b/api-reference/timescaledb/hypertables/detach_tablespaces.mdx @@ -11,6 +11,8 @@ type: function import { HYPERTABLE, HYPERTABLE_CAP, CHUNK, CHUNK_CAP } from '/snippets/vars.mdx'; + Since 0.7.0 + Detach all tablespaces from a {HYPERTABLE}. After issuing this command on a {HYPERTABLE}, it no longer has any tablespaces attached to it. New {CHUNK}s are instead placed in the database's default @@ -36,4 +38,11 @@ SELECT detach_tablespaces( |Name|Type| Default | Required | Description| |---|---|---|---|---| -| `hypertable` | REGCLASS | - | ✔ | Hypertable to detach a the tablespace from.| +| `hypertable` | REGCLASS | - | ✔ | Hypertable to detach the tablespace from.| + +## Returns + +| Name | Type | Description | +|-|-|-| +| detach_tablespaces | INTEGER | The total number of tablespaces that were detached from the {HYPERTABLE}. 
| + diff --git a/api-reference/timescaledb/hypertables/disable_chunk_skipping.mdx b/api-reference/timescaledb/hypertables/disable_chunk_skipping.mdx index 1e60ac8..8d31b67 100644 --- a/api-reference/timescaledb/hypertables/disable_chunk_skipping.mdx +++ b/api-reference/timescaledb/hypertables/disable_chunk_skipping.mdx @@ -9,7 +9,9 @@ type: function products: [cloud, mst, self_hosted] --- -import { HYPERTABLE, HYPERTABLE_CAP, CHUNK, CHUNK_CAP, COLUMNSTORE } from '/snippets/vars.mdx'; +import { CHUNK, CHUNK_CAP, COLUMNSTORE, HYPERTABLE, HYPERTABLE_CAP, TIMESCALE_DB } from '/snippets/vars.mdx'; + + Since 2.11.0 Disable range tracking for a specific column in a {HYPERTABLE} **in the {COLUMNSTORE}**. @@ -57,7 +59,7 @@ SELECT disable_chunk_skipping( |Column|Type|Description| |-|-|-| -|`hypertable_id`|INTEGER|ID of the hypertable in TimescaleDB.| +|`hypertable_id`|INTEGER|ID of the {HYPERTABLE} in {TIMESCALE_DB}.| |`column_name`|TEXT|Name of the column range tracking is disabled for| |`disabled`|BOOLEAN|Returns `true` when tracking is disabled. `false` when `if_not_exists` is `true` and the entry was not removed| @@ -69,4 +71,4 @@ and enabled range tracking on a column in the hypertable. -[enable_chunk_skipping]: /api-reference/timescaledb/hypertables/enable_chunk_skipping/ +[enable_chunk_skipping]: /api-reference/timescaledb/hypertables/enable_chunk_skipping diff --git a/api-reference/timescaledb/hypertables/drop_chunks.mdx b/api-reference/timescaledb/hypertables/drop_chunks.mdx index 127a85e..9dade13 100644 --- a/api-reference/timescaledb/hypertables/drop_chunks.mdx +++ b/api-reference/timescaledb/hypertables/drop_chunks.mdx @@ -11,6 +11,8 @@ products: [cloud, mst, self_hosted] import { HYPERTABLE, CHUNK, CAGG } from '/snippets/vars.mdx'; + Since 0.1.0 + Removes data {CHUNK}s whose time range falls completely before (or after) a specified time. Shows a list of the {CHUNK}s that were dropped, in the same style as the `show_chunks` [function][show_chunks]. 
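A minimal usage sketch (the `conditions` hypertable is hypothetical):

```SQL
-- Drop every chunk whose data falls entirely before the cutoff;
-- the name of each dropped chunk is returned as one row.
SELECT drop_chunks('conditions', older_than => INTERVAL '3 months');
```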
@@ -172,4 +174,12 @@ The `created_before`/`created_after` parameters cannot be used together with `older_than`/`newer_than`. -[show_chunks]: /api-reference/timescaledb/hypertables/show_chunks/ +## Returns + +| Column | Type | Description | +|-|-|-| +| drop_chunks | TEXT | The name of each {CHUNK} that was dropped. Returns one row per dropped {CHUNK}. | + +The function returns a set of {CHUNK} names in the format `_timescaledb_internal._hyper_X_Y_chunk`, where each row represents a {CHUNK} that was successfully dropped. If no {CHUNK}s match the specified criteria, the function returns an empty result set. + +[show_chunks]: /api-reference/timescaledb/hypertables/show_chunks diff --git a/api-reference/timescaledb/hypertables/enable_chunk_skipping.mdx b/api-reference/timescaledb/hypertables/enable_chunk_skipping.mdx index 36f8f51..b2c9a15 100644 --- a/api-reference/timescaledb/hypertables/enable_chunk_skipping.mdx +++ b/api-reference/timescaledb/hypertables/enable_chunk_skipping.mdx @@ -12,9 +12,9 @@ products: [cloud, mst, self_hosted] import CreateHypertablePolicyNote from '/snippets/manage-data/create-hypertable-columnstore-policy-note.mdx'; -import { HYPERTABLE, HYPERTABLE_CAP, CHUNK, CHUNK_CAP } from '/snippets/vars.mdx'; +import { CHUNK, CHUNK_CAP, HYPERTABLE, HYPERTABLE_CAP, TIMESCALE_DB } from '/snippets/vars.mdx'; - Early access 2.17.1 + Since 2.11.0 Enable range statistics for a specific column in a **compressed** {HYPERTABLE}. This tracks a range of values for that column per {CHUNK}. 
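A minimal sketch, assuming `conditions` is a compressed {HYPERTABLE} with a `device_id` column (both names are hypothetical):

```SQL
-- Track the min/max range of `device_id` per chunk, so queries that
-- filter on `device_id` can skip chunks outside the tracked range.
SELECT enable_chunk_skipping('conditions', 'device_id');
```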
@@ -94,8 +94,8 @@ SELECT enable_chunk_skipping( |Column|Type|Description| |-|-|-| -|`column_stats_id`|INTEGER|ID of the entry in the TimescaleDB internal catalog| +|`column_stats_id`|INTEGER|ID of the entry in the {TIMESCALE_DB} internal catalog| |`enabled`|BOOLEAN|Returns `true` when tracking is enabled. `false` when `if_not_exists` is `true` and a new entry was not added| -[compress_chunk]: /api-reference/timescaledb/compression/compress_chunk/ -[decompress_chunk]: /api-reference/timescaledb/compression/decompress_chunk/ +[compress_chunk]: /api-reference/timescaledb/compression/compress_chunk +[decompress_chunk]: /api-reference/timescaledb/compression/decompress_chunk diff --git a/api-reference/timescaledb/hypertables/hypertable_approximate_detailed_size.mdx b/api-reference/timescaledb/hypertables/hypertable_approximate_detailed_size.mdx index bc8abc7..e178189 100644 --- a/api-reference/timescaledb/hypertables/hypertable_approximate_detailed_size.mdx +++ b/api-reference/timescaledb/hypertables/hypertable_approximate_detailed_size.mdx @@ -9,8 +9,11 @@ type: function products: [cloud, mst, self_hosted] --- +import ReturnsNullIfNotHypertable from '/snippets/api-reference/timescaledb/_returns-null-if-not-hypertable.mdx'; import { HYPERTABLE, HYPERTABLE_CAP, CHUNK, CHUNK_CAP, CAGG, PG } from '/snippets/vars.mdx'; + Since 2.13.0 + Get detailed information about approximate disk space used by a {HYPERTABLE} or {CAGG}, returning size information for the table itself, any indexes on the table, any toast tables, and the total @@ -70,10 +73,7 @@ SELECT hypertable_approximate_detailed_size( |toast_bytes|BIGINT|Approximate disk space of toast tables| |total_bytes|BIGINT|Approximate total disk space used by the specified table, including all indexes and TOAST data| - -If executed on a relation that is not a hypertable, the function -returns `NULL`. 
- + [hypertable-docs]: /use-timescale/hypertables/ diff --git a/api-reference/timescaledb/hypertables/hypertable_approximate_size.mdx b/api-reference/timescaledb/hypertables/hypertable_approximate_size.mdx index bf9d085..b889a85 100644 --- a/api-reference/timescaledb/hypertables/hypertable_approximate_size.mdx +++ b/api-reference/timescaledb/hypertables/hypertable_approximate_size.mdx @@ -11,6 +11,8 @@ products: [cloud, mst, self_hosted] import { HYPERTABLE, HYPERTABLE_CAP, CHUNK, CHUNK_CAP, CAGG, PG } from '/snippets/vars.mdx'; + Since 2.13.0 + Get the approximate total disk space used by a {HYPERTABLE} or {CAGG}, that is, the sum of the size for the table itself including {CHUNK}s, any indexes on the table, and any toast tables. The size is reported @@ -81,7 +83,7 @@ SELECT hypertable_approximate_size( ## Returns -|Name|Type|Description| +|Column|Type|Description| |-|-|-| |hypertable_approximate_size|BIGINT|Total approximate disk space used by the specified {HYPERTABLE}, including all indexes and TOAST data| diff --git a/api-reference/timescaledb/hypertables/hypertable_detailed_size.mdx b/api-reference/timescaledb/hypertables/hypertable_detailed_size.mdx index 6bde78b..d883554 100644 --- a/api-reference/timescaledb/hypertables/hypertable_detailed_size.mdx +++ b/api-reference/timescaledb/hypertables/hypertable_detailed_size.mdx @@ -11,4 +11,6 @@ products: [cloud, mst, self_hosted] import HypertableDetailedSize from '/snippets/api-reference/timescaledb/hypertables/_hypertable-detailed-size.mdx'; + Since 0.2.0 + diff --git a/api-reference/timescaledb/hypertables/hypertable_index_size.mdx b/api-reference/timescaledb/hypertables/hypertable_index_size.mdx index a042f35..5687b79 100644 --- a/api-reference/timescaledb/hypertables/hypertable_index_size.mdx +++ b/api-reference/timescaledb/hypertables/hypertable_index_size.mdx @@ -11,6 +11,8 @@ products: [cloud, mst, self_hosted] import { HYPERTABLE, HYPERTABLE_CAP, CHUNK, CHUNK_CAP } from '/snippets/vars.mdx'; + Since 
0.2.0 + Get the disk space used by an index on a {HYPERTABLE}, including the disk space needed to provide the index on all {CHUNK}s. The size is reported in bytes. diff --git a/api-reference/timescaledb/hypertables/hypertable_size.mdx b/api-reference/timescaledb/hypertables/hypertable_size.mdx index 2e8c210..63d1d96 100644 --- a/api-reference/timescaledb/hypertables/hypertable_size.mdx +++ b/api-reference/timescaledb/hypertables/hypertable_size.mdx @@ -11,4 +11,6 @@ products: [cloud, mst, self_hosted] import HypertableSize from '/snippets/api-reference/timescaledb/hypertables/_hypertable-size.mdx'; + Since 0.2.0 + diff --git a/api-reference/timescaledb/hypertables/index.mdx b/api-reference/timescaledb/hypertables/index.mdx index a0054e6..81845f7 100644 --- a/api-reference/timescaledb/hypertables/index.mdx +++ b/api-reference/timescaledb/hypertables/index.mdx @@ -136,7 +136,7 @@ SELECT add_dimension('conditions', 'location', number_partitions => 4); - [`enable_chunk_skipping()`][enable_chunk_skipping]: enable {CHUNK} skipping for a {HYPERTABLE} - [`disable_chunk_skipping()`][disable_chunk_skipping]: disable {CHUNK} skipping for a {HYPERTABLE} -## Legacy functions +### Legacy functions For backward compatibility, {TIMESCALE_DB} also provides [`create_hypertable()`][create_hypertable], which was the original function for creating {HYPERTABLE}s. Use [`CREATE TABLE`][create_table] for new {HYPERTABLE}s. diff --git a/api-reference/timescaledb/hypertables/merge_chunks.mdx b/api-reference/timescaledb/hypertables/merge_chunks.mdx index 67da118..8f1e283 100644 --- a/api-reference/timescaledb/hypertables/merge_chunks.mdx +++ b/api-reference/timescaledb/hypertables/merge_chunks.mdx @@ -68,3 +68,7 @@ arguments. 
|--------------------|-------------|--|--|------------------------------------------------| | `chunk1`, `chunk2` | REGCLASS | - | ✖ | The two {CHUNK}s to merge in partition order | | `chunks` | REGCLASS[] |- | ✖ | The array of {CHUNK}s to merge in partition order | + +## Returns + +This procedure does not return a value. Upon successful completion, the specified {CHUNK}s are merged into a single {CHUNK}. diff --git a/api-reference/timescaledb/hypertables/move_chunk.mdx b/api-reference/timescaledb/hypertables/move_chunk.mdx index d235641..bd111af 100644 --- a/api-reference/timescaledb/hypertables/move_chunk.mdx +++ b/api-reference/timescaledb/hypertables/move_chunk.mdx @@ -14,6 +14,8 @@ import { HYPERTABLE, HYPERTABLE_CAP, CHUNK, CHUNK_CAP, PG } from '/snippets/vars Community + Since 1.3.0 + TimescaleDB allows you to move data and indexes to different tablespaces. This allows you to move data to more cost-effective storage as it ages. @@ -65,6 +67,10 @@ SELECT move_chunk( |`reorder_index`|REGCLASS| - | ✖ |The name of the index (on either the {HYPERTABLE} or {CHUNK}) to order by| |`verbose`|BOOLEAN| `FALSE` | ✖ |Setting to true displays messages about the progress of the move_chunk command.| +## Returns + +This function returns void. + [manage-storage]: /use-timescale/schema-management/about-tablespaces/ [postgres-cluster]: https://www.postgresql.org/docs/current/sql-cluster.html [postgres-altertable]: https://www.postgresql.org/docs/13/sql-altertable.html diff --git a/api-reference/timescaledb/hypertables/remove_reorder_policy.mdx b/api-reference/timescaledb/hypertables/remove_reorder_policy.mdx index 67d93ef..5bca636 100644 --- a/api-reference/timescaledb/hypertables/remove_reorder_policy.mdx +++ b/api-reference/timescaledb/hypertables/remove_reorder_policy.mdx @@ -13,6 +13,8 @@ import { HYPERTABLE, HYPERTABLE_CAP, CHUNK, CHUNK_CAP } from '/snippets/vars.mdx Community + Since 1.2.0 + Remove a policy to reorder a particular {HYPERTABLE}. 
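A minimal usage sketch (the `conditions` hypertable is hypothetical):

```SQL
-- Remove the reorder policy from `conditions`; `if_exists => TRUE`
-- turns the error into a notice if no such policy exists.
SELECT remove_reorder_policy('conditions', if_exists => TRUE);
```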
## Samples @@ -38,3 +40,7 @@ SELECT remove_reorder_policy( |---|---|---|---|---| | `hypertable` | REGCLASS | - | ✔ | Name of the {HYPERTABLE} from which to remove the policy. | | `if_exists` | BOOLEAN | `FALSE` | ✖ | Set to true to avoid throwing an error if the reorder_policy does not exist. A notice is issued instead. | + +## Returns + +This function returns void. diff --git a/api-reference/timescaledb/hypertables/reorder_chunk.mdx b/api-reference/timescaledb/hypertables/reorder_chunk.mdx index f2c6cfe..1118d7a 100644 --- a/api-reference/timescaledb/hypertables/reorder_chunk.mdx +++ b/api-reference/timescaledb/hypertables/reorder_chunk.mdx @@ -10,9 +10,12 @@ products: [cloud, mst, self_hosted] import { HYPERTABLE, HYPERTABLE_CAP, CHUNK, CHUNK_CAP, PG } from '/snippets/vars.mdx'; +import ReturnsVoid from '/snippets/api-reference/timescaledb/_returns-void.mdx'; Community + Since 1.2.0 + Reorder a single {CHUNK}'s heap to follow the order of an index. This function acts similarly to the [PostgreSQL CLUSTER command][postgres-cluster] , however it uses lower lock levels so that, unlike with the CLUSTER command, the {CHUNK} @@ -58,8 +61,8 @@ SELECT reorder_chunk( ## Returns -This function returns void. + -[add_reorder_policy]: /api-reference/timescaledb/hypertables/add_reorder_policy/ +[add_reorder_policy]: /api-reference/timescaledb/hypertables/add_reorder_policy [postgres-cluster]: https://www.postgresql.org/docs/current/sql-cluster.html diff --git a/api-reference/timescaledb/hypertables/set_chunk_time_interval.mdx b/api-reference/timescaledb/hypertables/set_chunk_time_interval.mdx index b1d14d6..b439929 100644 --- a/api-reference/timescaledb/hypertables/set_chunk_time_interval.mdx +++ b/api-reference/timescaledb/hypertables/set_chunk_time_interval.mdx @@ -11,6 +11,8 @@ products: [cloud, mst, self_hosted] import { HYPERTABLE, HYPERTABLE_CAP, CHUNK, CHUNK_CAP, CAGG } from '/snippets/vars.mdx'; + Since 0.1.0 + Sets the `chunk_time_interval` on a {HYPERTABLE}. 
The new interval is used when new {CHUNK}s are created, and time intervals on existing {CHUNK}s are not changed. @@ -75,5 +77,8 @@ The valid types for the `chunk_time_interval` depend on the type used for the For more information, see [{HYPERTABLE} partitioning][hypertable-partitioning]. +## Returns + +This function returns void. [hypertable-partitioning]: /use-timescale/hypertables/#hypertable-partitioning diff --git a/api-reference/timescaledb/hypertables/set_integer_now_func.mdx b/api-reference/timescaledb/hypertables/set_integer_now_func.mdx index fd78f00..fa27fda 100644 --- a/api-reference/timescaledb/hypertables/set_integer_now_func.mdx +++ b/api-reference/timescaledb/hypertables/set_integer_now_func.mdx @@ -11,6 +11,8 @@ products: [cloud, mst, self_hosted] import { HYPERTABLE, HYPERTABLE_CAP, CHUNK, CHUNK_CAP } from '/snippets/vars.mdx'; + Since 1.5.0 + Override the [`now()`](https://www.postgresql.org/docs/16/functions-datetime.html) date/time function used to set the current time in the integer `time` column in a {HYPERTABLE}. Many policies only apply to [{CHUNK}s][chunks] of a certain age. `integer_now_func` determines the age of each {CHUNK}. @@ -64,5 +66,8 @@ SELECT set_integer_now_func( |`integer_now_func`|REGPROC| - | ✔ | A function that returns the current time set in each row in the `time` column in `main_table`.| |`replace_if_exists`|BOOLEAN| `FALSE` | ✖ | Set to `true` to override `integer_now_func` when you have previously set a custom function. | +## Returns + +This function returns void. 
[chunks]: /use-timescale/hypertables/#hypertable-partitioning diff --git a/api-reference/timescaledb/hypertables/show_chunks.mdx b/api-reference/timescaledb/hypertables/show_chunks.mdx index 3de8636..f992f53 100644 --- a/api-reference/timescaledb/hypertables/show_chunks.mdx +++ b/api-reference/timescaledb/hypertables/show_chunks.mdx @@ -11,10 +11,8 @@ products: [cloud, mst, self_hosted] import { CHUNK, HYPERTABLE, HYPERTABLE_CAP, CAGG } from '/snippets/vars.mdx'; -Get the list of {CHUNK}s associated with a {HYPERTABLE}. + Since 0.9.0 -Function accepts the following required and optional arguments. These arguments -have the same semantics as the `drop_chunks` [function][drop_chunks]. ## Samples @@ -129,4 +127,19 @@ The `created_before`/`created_after` parameters cannot be used together with `older_than`/`newer_than`. -[drop_chunks]: /api-reference/timescaledb/hypertable/drop_chunks +## Returns + +| Column | Type | Description | +|--------|------|-------------| +| show_chunks | REGCLASS | Name of the {CHUNK} matching the criteria | + +This function returns a set of rows, one for each {CHUNK} that matches the specified criteria. 
+ +On failure, an error is returned: + +| Error | Description | +|-------|-------------| +| `invalid hypertable or continuous aggregate` | The specified relation is not a valid {HYPERTABLE} or {CAGG} | +| `invalid time range` | The specified time range parameters do not result in a valid overlapping range | + +[drop_chunks]: /api-reference/timescaledb/hypertables/drop_chunks diff --git a/api-reference/timescaledb/hypertables/show_tablespaces.mdx b/api-reference/timescaledb/hypertables/show_tablespaces.mdx index 2ce5585..195afa8 100644 --- a/api-reference/timescaledb/hypertables/show_tablespaces.mdx +++ b/api-reference/timescaledb/hypertables/show_tablespaces.mdx @@ -11,6 +11,8 @@ products: [cloud, mst, self_hosted] import { HYPERTABLE, HYPERTABLE_CAP, CHUNK, CHUNK_CAP } from '/snippets/vars.mdx'; + Since 0.7.0 + Show the tablespaces attached to a {HYPERTABLE}. ## Samples @@ -37,3 +39,11 @@ SELECT show_tablespaces( |Name|Type| Default | Required | Description| |---|---|---|---|---| | `hypertable` | REGCLASS | - | ✔ | {HYPERTABLE} to show attached tablespaces for.| + +## Returns + +| Column | Type | Description | +|-|-|-| +| show_tablespaces | NAME | The name of each tablespace attached to the {HYPERTABLE}. Returns one row per attached tablespace. | + +The function returns a set of tablespace names. If no tablespaces are attached to the {HYPERTABLE}, the function returns an empty result set. diff --git a/api-reference/timescaledb/hypertables/split_chunk.mdx b/api-reference/timescaledb/hypertables/split_chunk.mdx index d93c819..f61cdc5 100644 --- a/api-reference/timescaledb/hypertables/split_chunk.mdx +++ b/api-reference/timescaledb/hypertables/split_chunk.mdx @@ -10,9 +10,12 @@ products: [cloud, mst, self_hosted] import { HYPERTABLE, HYPERTABLE_CAP, CHUNK, CHUNK_CAP } from '/snippets/vars.mdx'; +import ReturnsVoid from '/snippets/api-reference/timescaledb/_returns-void.mdx'; Community + Since 2.18.0 + Split a large {CHUNK} at a specific point in time. 
If you do not specify the timestamp to split at, the {CHUNK} is split equally. @@ -51,4 +54,4 @@ CALL split_chunk( ## Returns -This function returns void. + diff --git a/api-reference/timescaledb/index.mdx b/api-reference/timescaledb/index.mdx index 32a4b71..6bd278a 100644 --- a/api-reference/timescaledb/index.mdx +++ b/api-reference/timescaledb/index.mdx @@ -7,7 +7,7 @@ keywords: [API, reference, SQL, functions, hypertables] mode: "wide" --- -import { HYPERFUNC, HYPERTABLE_CAP, HYPERFUNC_CAP, TIMESCALE_DB } from '/snippets/vars.mdx'; +import { HYPERFUNC, TIMESCALE_DB, HYPERTABLE_CAP, HYPERFUNC_CAP } from '/snippets/vars.mdx'; Since 1.5.0 + Shows information about compression settings for each {CHUNK} that has compression enabled on it. ## Samples diff --git a/api-reference/timescaledb/informational-views/chunks.mdx b/api-reference/timescaledb/informational-views/chunks.mdx index d699536..021b40b 100644 --- a/api-reference/timescaledb/informational-views/chunks.mdx +++ b/api-reference/timescaledb/informational-views/chunks.mdx @@ -10,6 +10,8 @@ type: view import { CHUNK, HYPERTABLE, TIMESCALE_DB } from '/snippets/vars.mdx'; + Since 0.1.0 + Get metadata about the {CHUNK}s of {HYPERTABLE}s. This view shows metadata for the {CHUNK}'s primary time-based dimension. diff --git a/api-reference/timescaledb/informational-views/compression_settings.mdx b/api-reference/timescaledb/informational-views/compression_settings.mdx index c97838c..384164d 100644 --- a/api-reference/timescaledb/informational-views/compression_settings.mdx +++ b/api-reference/timescaledb/informational-views/compression_settings.mdx @@ -12,6 +12,8 @@ import { HYPERTABLE, TIMESCALE_DB } from '/snippets/vars.mdx'; Deprecated + Since 1.5.0 + This view exists for backwards compatibility. 
The supported views to retrieve information about compression are: - [timescaledb_information.hypertable_compression_settings][hypertable_compression_settings] diff --git a/api-reference/timescaledb/informational-views/continuous_aggregates.mdx b/api-reference/timescaledb/informational-views/continuous_aggregates.mdx index 64cf609..d8fe79f 100644 --- a/api-reference/timescaledb/informational-views/continuous_aggregates.mdx +++ b/api-reference/timescaledb/informational-views/continuous_aggregates.mdx @@ -10,6 +10,8 @@ type: view import { CAGG, HYPERTABLE, TIMESCALE_DB } from '/snippets/vars.mdx'; + Since 1.3.0 + Get metadata and settings information for {CAGG}s. ## Samples diff --git a/api-reference/timescaledb/informational-views/dimensions.mdx b/api-reference/timescaledb/informational-views/dimensions.mdx index ec21168..4f44f09 100644 --- a/api-reference/timescaledb/informational-views/dimensions.mdx +++ b/api-reference/timescaledb/informational-views/dimensions.mdx @@ -10,6 +10,8 @@ type: view import { HYPERTABLE, TIMESCALE_DB } from '/snippets/vars.mdx'; + Since 0.1.0 + Returns information about the dimensions of a {HYPERTABLE}. {HYPERTABLE}s can be partitioned on a range of different dimensions. By default, all {HYPERTABLE}s are partitioned on time, but it is also possible to partition on other dimensions in diff --git a/api-reference/timescaledb/informational-views/hypertable_compression_settings.mdx b/api-reference/timescaledb/informational-views/hypertable_compression_settings.mdx index 9e99fd4..3ca6f48 100644 --- a/api-reference/timescaledb/informational-views/hypertable_compression_settings.mdx +++ b/api-reference/timescaledb/informational-views/hypertable_compression_settings.mdx @@ -10,6 +10,8 @@ type: view import { HYPERTABLE, CHUNK } from '/snippets/vars.mdx'; + Since 1.5.0 + Shows information about compression settings for each {HYPERTABLE} {CHUNK} that has compression enabled on it. 
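A quick way to inspect these settings is to query the view directly; this sketch assumes the `hypertable`, `segmentby`, and `orderby` columns exposed by the view:

```SQL
-- List compression settings for every hypertable that has any.
SELECT hypertable, segmentby, orderby
FROM timescaledb_information.hypertable_compression_settings;
```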
## Samples diff --git a/api-reference/timescaledb/informational-views/hypertables.mdx b/api-reference/timescaledb/informational-views/hypertables.mdx index 627512c..8cc193d 100644 --- a/api-reference/timescaledb/informational-views/hypertables.mdx +++ b/api-reference/timescaledb/informational-views/hypertables.mdx @@ -10,6 +10,8 @@ type: view import { HYPERTABLE, CHUNK } from '/snippets/vars.mdx'; + Since 0.1.0 + Get metadata information about {HYPERTABLE}s. For more information about using {HYPERTABLE}s, including {CHUNK} size partitioning, diff --git a/api-reference/timescaledb/informational-views/job_errors.mdx b/api-reference/timescaledb/informational-views/job_errors.mdx index 8ac7ad4..4d5acda 100644 --- a/api-reference/timescaledb/informational-views/job_errors.mdx +++ b/api-reference/timescaledb/informational-views/job_errors.mdx @@ -10,6 +10,8 @@ type: view import { CAGG, COLUMNSTORE } from '/snippets/vars.mdx'; + Since 2.12.0 + Shows information about runtime errors encountered by jobs run by the automation framework. This includes custom jobs and jobs run by policies created to manage data retention, {CAGG}s, {COLUMNSTORE}, and @@ -84,4 +86,4 @@ For example, the owner can change the retention interval like this: SELECT alter_job(id,config:=jsonb_set(config,'{drop_after}', '"2 weeks"')) FROM _timescaledb_config.bgw_job WHERE id = 2; ``` -[jobs]: /api-reference/timescaledb/jobs-automation/ +[jobs]: /api-reference/timescaledb/jobs-automation diff --git a/api-reference/timescaledb/informational-views/job_history.mdx b/api-reference/timescaledb/informational-views/job_history.mdx index b5ee802..2088569 100644 --- a/api-reference/timescaledb/informational-views/job_history.mdx +++ b/api-reference/timescaledb/informational-views/job_history.mdx @@ -10,6 +10,8 @@ type: view import { CAGG, COLUMNSTORE } from '/snippets/vars.mdx'; + Since 2.12.0 + Shows information about the jobs run by the automation framework. 
This includes custom jobs and jobs run by policies created to manage data retention, {CAGG}s, {COLUMNSTORE}, and @@ -89,4 +91,4 @@ For example, the owner can change the retention interval like this: SELECT alter_job(id,config:=jsonb_set(config,'{drop_after}', '"2 weeks"')) FROM _timescaledb_config.bgw_job WHERE id = 3; ``` -[jobs]: /api-reference/timescaledb/jobs-automation/ +[jobs]: /api-reference/timescaledb/jobs-automation diff --git a/api-reference/timescaledb/informational-views/job_stats.mdx b/api-reference/timescaledb/informational-views/job_stats.mdx index a8acec8..1d6379a 100644 --- a/api-reference/timescaledb/informational-views/job_stats.mdx +++ b/api-reference/timescaledb/informational-views/job_stats.mdx @@ -10,6 +10,8 @@ type: view import { CAGG, COLUMNSTORE, HYPERTABLE } from '/snippets/vars.mdx'; + Since 1.2.0 + Shows information and statistics about jobs run by the automation framework. This includes jobs set up for user defined actions and jobs run by policies created to manage data retention, {CAGG}s, {COLUMNSTORE}, and @@ -76,4 +78,4 @@ total_failures | 0 | `total_successes` | BIGINT | The total number of times this job succeeded | | `total_failures` | BIGINT | The total number of times this job failed | -[actions]: /api-reference/timescaledb/jobs-automation/ +[actions]: /api-reference/timescaledb/jobs-automation diff --git a/api-reference/timescaledb/informational-views/jobs.mdx b/api-reference/timescaledb/informational-views/jobs.mdx index b099ac1..38a2b92 100644 --- a/api-reference/timescaledb/informational-views/jobs.mdx +++ b/api-reference/timescaledb/informational-views/jobs.mdx @@ -10,6 +10,8 @@ type: view import { CAGG, COLUMNSTORE, TIMESCALE_DB } from '/snippets/vars.mdx'; + Since 1.2.0 + Shows information about all jobs registered with the automation framework. 
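The `alter_job` samples above use `jsonb_set(config, '{drop_after}', '"2 weeks"')` to rewrite a single key inside a job's JSON config while leaving the rest of the document untouched. A plain-Python illustration of that transformation (the config keys shown are hypothetical examples, not TimescaleDB internals):

```python
import json

# A job config as it might look in _timescaledb_config.bgw_job
# (illustrative values only).
config = json.loads('{"drop_after": "1 month", "hypertable_id": 2}')

# Equivalent of jsonb_set(config, '{drop_after}', '"2 weeks"'):
# replace one key, keep every other key as-is.
updated = {**config, "drop_after": "2 weeks"}

print(updated["drop_after"])    # 2 weeks
print(updated["hypertable_id"])  # 2
```

Like `jsonb_set`, this produces a new document rather than mutating the original, which is why the SQL samples feed the result back through `alter_job(id, config := ...)`.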
## Samples diff --git a/api-reference/timescaledb/informational-views/policies.mdx b/api-reference/timescaledb/informational-views/policies.mdx index fb11372..356b538 100644 --- a/api-reference/timescaledb/informational-views/policies.mdx +++ b/api-reference/timescaledb/informational-views/policies.mdx @@ -9,7 +9,7 @@ type: view import { CAGG, HYPERTABLE } from '/snippets/vars.mdx'; - Early access + Early access Since 2.10.0 The `policies` view provides information on all policies set on {CAGG}s. diff --git a/api-reference/timescaledb/jobs-automation/add_job.mdx b/api-reference/timescaledb/jobs-automation/add_job.mdx index f13ed76..9236a5e 100644 --- a/api-reference/timescaledb/jobs-automation/add_job.mdx +++ b/api-reference/timescaledb/jobs-automation/add_job.mdx @@ -11,7 +11,7 @@ products: [cloud, mst, self_hosted] import { JOB_CAP, JOB } from '/snippets/vars.mdx'; - Community + Community Since 1.2.0 Register a {JOB_CAP} for scheduling by the automation framework. For more information about scheduling, including example {JOB}s, see the [jobs documentation section][using-jobs]. diff --git a/api-reference/timescaledb/jobs-automation/alter_job.mdx b/api-reference/timescaledb/jobs-automation/alter_job.mdx index dbed757..17a2840 100644 --- a/api-reference/timescaledb/jobs-automation/alter_job.mdx +++ b/api-reference/timescaledb/jobs-automation/alter_job.mdx @@ -11,7 +11,7 @@ products: [cloud, mst, self_hosted] import { JOB_CAP, JOB, HYPERTABLE, TIMESCALE_DB, CAGG, COLUMNSTORE, CHUNK } from '/snippets/vars.mdx'; - Community + Community Since 1.2.0 {JOB_CAP}s scheduled using the {TIMESCALE_DB} automation framework run periodically in a background worker. 
You can change the schedule of these {JOB}s with the diff --git a/api-reference/timescaledb/jobs-automation/delete_job.mdx b/api-reference/timescaledb/jobs-automation/delete_job.mdx index 9213c20..21d6ba5 100644 --- a/api-reference/timescaledb/jobs-automation/delete_job.mdx +++ b/api-reference/timescaledb/jobs-automation/delete_job.mdx @@ -9,9 +9,10 @@ tags: [background jobs, scheduled jobs, automation framework] products: [cloud, mst, self_hosted] --- +import ReturnsVoid from '/snippets/api-reference/timescaledb/_returns-void.mdx'; import { JOB_CAP, JOB } from '/snippets/vars.mdx'; - Community + Community Since 1.2.0 Delete a {JOB} registered with the automation framework. This works for {JOB}s as well as policies. @@ -39,3 +40,7 @@ SELECT delete_job( | Name | Type | Default | Required | Description | |---|---|---|---|---| | `job_id` | INTEGER | - | ✔ | TimescaleDB background {JOB} ID | + +## Returns + + diff --git a/api-reference/timescaledb/jobs-automation/run_job.mdx b/api-reference/timescaledb/jobs-automation/run_job.mdx index 60245fa..61e7a80 100644 --- a/api-reference/timescaledb/jobs-automation/run_job.mdx +++ b/api-reference/timescaledb/jobs-automation/run_job.mdx @@ -9,9 +9,10 @@ tags: [background jobs, scheduled jobs, automation framework] products: [cloud, mst, self_hosted] --- +import ReturnsVoid from '/snippets/api-reference/timescaledb/_returns-void.mdx'; import { JOB_CAP, JOB } from '/snippets/vars.mdx'; - Community + Community Since 1.2.0 Run a previously registered {JOB} in the current session. This works for {JOB}s as well as policies. 
@@ -47,3 +48,7 @@ CALL run_job( | Name | Type | Default | Required | Description | |---|---|---|---|---| | `job_id` | INTEGER | - | ✔ | TimescaleDB background {JOB} ID | + +## Returns + + diff --git a/api-reference/timescaledb/uuid-functions/generate_uuidv7.mdx b/api-reference/timescaledb/uuid-functions/generate_uuidv7.mdx index 0cbbc7e..ea3fe30 100644 --- a/api-reference/timescaledb/uuid-functions/generate_uuidv7.mdx +++ b/api-reference/timescaledb/uuid-functions/generate_uuidv7.mdx @@ -11,7 +11,7 @@ products: [cloud, mst, self_hosted] import { TIMESCALE_DB } from '/snippets/vars.mdx'; - Community + Since 2.13.0 Generate a UUIDv7 object based on the current time. @@ -39,3 +39,9 @@ suitable for use in a time-partitioned column in {TIMESCALE_DB}. ```sql INSERT INTO alerts VALUES (generate_uuidv7(), 'high CPU'); ``` + +## Returns + +|Column|Type|Description| +|-|-|-| +| `generate_uuidv7` | UUID | A UUIDv7 object based on the current time with random bits. | diff --git a/api-reference/timescaledb/uuid-functions/to_uuidv7.mdx b/api-reference/timescaledb/uuid-functions/to_uuidv7.mdx index 64ab351..6b67fa1 100644 --- a/api-reference/timescaledb/uuid-functions/to_uuidv7.mdx +++ b/api-reference/timescaledb/uuid-functions/to_uuidv7.mdx @@ -11,7 +11,7 @@ products: [cloud, mst, self_hosted] import { PG } from '/snippets/vars.mdx'; - Community + Since 2.13.0 Create a UUIDv7 object from a {PG} timestamp and random bits. @@ -39,3 +39,9 @@ SELECT to_uuidv7( | Name | Type | Default | Required | Description | | - | - | - | - | - | | `ts` | TIMESTAMPTZ | - | ✔ | The timestamp used to return a UUIDv7 object | + +## Returns + +|Column|Type|Description| +|-|-|-| +| `to_uuidv7` | UUID | A UUIDv7 object created from the input timestamp with random bits. 
| diff --git a/api-reference/timescaledb/uuid-functions/to_uuidv7_boundary.mdx b/api-reference/timescaledb/uuid-functions/to_uuidv7_boundary.mdx index 7418121..9322081 100644 --- a/api-reference/timescaledb/uuid-functions/to_uuidv7_boundary.mdx +++ b/api-reference/timescaledb/uuid-functions/to_uuidv7_boundary.mdx @@ -11,7 +11,7 @@ products: [cloud, mst, self_hosted] import { PG } from '/snippets/vars.mdx'; - Community + Since 2.13.0 Create a UUIDv7 object from a {PG} timestamp for use in range queries. @@ -58,3 +58,9 @@ SELECT to_uuidv7_boundary( | Name | Type | Default | Required | Description | | - | - | - | - | - | | `ts` | TIMESTAMPTZ | - | ✔ | The timestamp used to return a UUIDv7 object | + +## Returns + +|Column|Type|Description| +|-|-|-| +| `to_uuidv7_boundary` | UUID | A boundary UUIDv7 object with random bits set to zero, suitable for use in range queries. | diff --git a/api-reference/timescaledb/uuid-functions/uuid_timestamp.mdx b/api-reference/timescaledb/uuid-functions/uuid_timestamp.mdx index 91cee48..9feae1d 100644 --- a/api-reference/timescaledb/uuid-functions/uuid_timestamp.mdx +++ b/api-reference/timescaledb/uuid-functions/uuid_timestamp.mdx @@ -11,7 +11,7 @@ products: [cloud, mst, self_hosted] import { PG } from '/snippets/vars.mdx'; - Community + Since 2.13.0 Extract a {PG} timestamp with time zone from a UUIDv7 object. @@ -49,4 +49,10 @@ SELECT uuid_timestamp( | - | - | - | - | - | | `uuid` | UUID | - | ✔ | The UUID object to extract the timestamp from | +## Returns + +|Column|Type|Description| +|-|-|-| +| `uuid_timestamp` | TIMESTAMPTZ | The timestamp extracted from the UUIDv7 object with millisecond precision. 
| + [uuid_timestamp_micros]: /api-reference/timescaledb/uuid-functions/uuid_timestamp_micros diff --git a/api-reference/timescaledb/uuid-functions/uuid_timestamp_micros.mdx b/api-reference/timescaledb/uuid-functions/uuid_timestamp_micros.mdx index 4f12f7d..3ff3af1 100644 --- a/api-reference/timescaledb/uuid-functions/uuid_timestamp_micros.mdx +++ b/api-reference/timescaledb/uuid-functions/uuid_timestamp_micros.mdx @@ -11,7 +11,7 @@ products: [cloud, mst, self_hosted] import { PG } from '/snippets/vars.mdx'; - Community + Since 2.13.0 Extract a [{PG} timestamp with time zone][pg-timestamp-timezone] from a UUIDv7 object. `uuid` contains a millisecond unix timestamp and an optional sub-millisecond fraction. @@ -49,5 +49,11 @@ SELECT uuid_timestamp_micros( | - | - | - | - | - | | `uuid` | UUID | - | ✔ | The UUID object to extract the timestamp from | +## Returns + +|Column|Type|Description| +|-|-|-| +| `uuid_timestamp_micros` | TIMESTAMPTZ | The timestamp extracted from the UUIDv7 object with microsecond precision. | + [uuid_timestamp]: /api-reference/timescaledb/uuid-functions/uuid_timestamp [pg-timestamp-timezone]: https://www.postgresql.org/docs/current/datatype-datetime.html diff --git a/api-reference/timescaledb/uuid-functions/uuid_version.mdx b/api-reference/timescaledb/uuid-functions/uuid_version.mdx index f2c0deb..3877b0e 100644 --- a/api-reference/timescaledb/uuid-functions/uuid_version.mdx +++ b/api-reference/timescaledb/uuid-functions/uuid_version.mdx @@ -9,7 +9,7 @@ type: function products: [cloud, mst, self_hosted] --- - Community + Since 2.13.0 Extract the version number from a UUID object: @@ -40,3 +40,9 @@ SELECT uuid_version( | Name | Type | Default | Required | Description | | - | - | - | - | - | | `uuid` | UUID | - | ✔ | The UUID object to extract the version number from | + +## Returns + +|Column|Type|Description| +|-|-|-| +| `uuid_version` | INTEGER | The version number extracted from the UUID object (e.g., 7 for UUIDv7). 
| diff --git a/docs.json b/docs.json index fc30b15..ca69f0b 100644 --- a/docs.json +++ b/docs.json @@ -36,7 +36,8 @@ { "group": " ", "pages": [ - "deploy-and-operate/index" + "deploy-and-operate/index", + "deploy-and-operate/tiger-cloud/index" ] }, { @@ -975,7 +976,10 @@ { "group": "Semantic search", "pages": [ - "agentic-postgres/pgvectorscale/pgvectorscale-get-started" + "agentic-postgres/key-vector-database-concepts", + "agentic-postgres/pgvectorscale/pgvectorscale-get-started", + "agentic-postgres/interfaces/sql-interface", + "agentic-postgres/interfaces/python-interface" ] }, { @@ -1645,12 +1649,6 @@ { "item": "Configuration and deployment", "groups": [ - { - "group": " ", - "pages": [ - "api-reference/timescaledb/timescaledb-api-reference" - ] - }, { "group": "Troubleshoot", "pages": [ @@ -2113,6 +2111,17 @@ { "tab": "API", "menu": [ + { + "item": " ", + "groups": [ + { + "group": " ", + "pages": [ + "api-reference/overview" + ] + } + ] + }, { "item": "TimescaleDB API", "groups": [ @@ -2144,14 +2153,14 @@ "api-reference/timescaledb/hypertables/split_chunk", "api-reference/timescaledb/hypertables/attach_chunk", "api-reference/timescaledb/hypertables/detach_chunk", - "api-reference/timescaledb/hypertables/set_chunk_time_interval" + "api-reference/timescaledb/hypertables/set_chunk_time_interval", + "api-reference/timescaledb/hypertables/set_integer_now_func" ] }, { "group": "Dimension management", "pages": [ - "api-reference/timescaledb/hypertables/add_dimension", - "api-reference/timescaledb/hypertables/set_integer_now_func" + "api-reference/timescaledb/hypertables/add_dimension" ] }, { @@ -2307,6 +2316,12 @@ "api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/locf", "api-reference/timescaledb/hyperfunctions/time_bucket_gapfill/interpolate" ] + }, + { + "group": "Legacy functions", + "pages": [ + "api-reference/timescaledb/hyperfunctions/legacy/time_bucket_ng" + ] } ] }, @@ -2782,6 +2797,7 @@ }, { "item": "Tiger Cloud REST API", + "openapi": 
"api-reference/tiger-cloud-rest-api/openapi.yaml", "groups": [ { "group": " ", @@ -2790,8 +2806,68 @@ ] }, { - "group": " ", - "openapi": "api-reference/tiger-cloud-rest-api/openapi.yaml" + "group": "Auth", + "expanded": true, + "pages": [ + "GET /auth/info" + ] + }, + { + "group": "Services", + "expanded": true, + "pages": [ + "GET /projects/{project_id}/services", + "POST /projects/{project_id}/services", + "GET /projects/{project_id}/services/{service_id}", + "DELETE /projects/{project_id}/services/{service_id}", + "POST /projects/{project_id}/services/{service_id}/start", + "POST /projects/{project_id}/services/{service_id}/stop", + "POST /projects/{project_id}/services/{service_id}/attachToVPC", + "POST /projects/{project_id}/services/{service_id}/detachFromVPC", + "POST /projects/{project_id}/services/{service_id}/resize", + "POST /projects/{project_id}/services/{service_id}/enablePooler", + "POST /projects/{project_id}/services/{service_id}/disablePooler", + "POST /projects/{project_id}/services/{service_id}/forkService", + "POST /projects/{project_id}/services/{service_id}/updatePassword", + "POST /projects/{project_id}/services/{service_id}/setEnvironment", + "POST /projects/{project_id}/services/{service_id}/setHA" + ] + }, + { + "group": "Read Replica Sets", + "expanded": true, + "pages": [ + "GET /projects/{project_id}/services/{service_id}/replicaSets", + "POST /projects/{project_id}/services/{service_id}/replicaSets", + "DELETE /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}", + "POST /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}/resize", + "POST /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}/enablePooler", + "POST /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}/disablePooler", + "POST /projects/{project_id}/services/{service_id}/replicaSets/{replica_set_id}/setEnvironment" + ] + }, + { + "group": "VPCs", + "expanded": true, + "pages": [ + "GET 
/projects/{project_id}/vpcs", + "POST /projects/{project_id}/vpcs", + "GET /projects/{project_id}/vpcs/{vpc_id}", + "DELETE /projects/{project_id}/vpcs/{vpc_id}", + "POST /projects/{project_id}/vpcs/{vpc_id}/rename", + "GET /projects/{project_id}/vpcs/{vpc_id}/peerings", + "POST /projects/{project_id}/vpcs/{vpc_id}/peerings", + "GET /projects/{project_id}/vpcs/{vpc_id}/peerings/{peering_id}", + "DELETE /projects/{project_id}/vpcs/{vpc_id}/peerings/{peering_id}" + ] + }, + { + "group": "Analytics", + "expanded": true, + "pages": [ + "POST /analytics/identify", + "POST /analytics/track" + ] } ] }, diff --git a/integrations/index.mdx b/integrations/index.mdx index b7529d7..2e3f449 100644 --- a/integrations/index.mdx +++ b/integrations/index.mdx @@ -36,7 +36,7 @@ mode: "wide" Automate your infrastructure with tools like Terraform and Kubernetes. Manage Tiger Cloud services as code and deploy at scale with modern DevOps practices. diff --git a/integrations/integrate/terraform.mdx b/integrations/integrate/terraform.mdx deleted file mode 100644 index b937c76..0000000 --- a/integrations/integrate/terraform.mdx +++ /dev/null @@ -1,152 +0,0 @@ ---- -title: Integrate Terraform with Tiger Cloud -sidebarTitle: Terraform -description: Manage your Tiger Cloud services with a Terraform provider -products: [cloud, self_hosted] -keywords: [Terraform, configuration, deployment] ---- - -import { CLOUD_LONG, COMPANY, CONSOLE, PG, VPC } from '/snippets/vars.mdx'; -import IntegrationPrereqCloud from "/snippets/prerequisites/_integration-prereqs-cloud-only.mdx"; - -[Terraform][terraform] is an infrastructure-as-code tool that enables you to safely and predictably provision and manage infrastructure. - -This page explains how to configure Terraform to manage your {SERVICE_LONG} or {SELF_LONG}. - -## Prerequisites - - -* [Download and install][terraform-install] Terraform. 
- -## Configure Terraform - -Configure Terraform based on your deployment type: - - - - - - You use the [{COMPANY} Terraform provider][terraform-provider] to manage {SERVICE_LONG}s: - - - - 1. **Generate client credentials for programmatic use** - - 1. In [{CONSOLE}][console], click `Projects` and save your `Project ID`, then click `Project settings`. - - 1. Click `Create credentials`, then save `Public key` and `Secret key`. - - 1. **Configure {COMPANY} Terraform provider** - - 1. Create a `main.tf` configuration file with at least the following content. Change `x.y.z` to the [latest version][terraform-provider] of the provider. - - ```hcl - terraform { - required_providers { - timescale = { - source = "timescale/timescale" - version = "x.y.z" - } - } - } - - # Authenticate using client credentials generated in Tiger Console. - # When required, these credentials will change to a short-lived JWT to do the calls. - provider "timescale" { - project_id = var.ts_project_id - access_key = var.ts_access_key - secret_key = var.ts_secret_key - } - - variable "ts_project_id" { - type = string - } - - variable "ts_access_key" { - type = string - } - - variable "ts_secret_key" { - type = string - } - ``` - - 1. Create a `terraform.tfvars` file in the same directory as your `main.tf` to pass in the variable values: - - ```hcl - export TF_VAR_ts_project_id="" - export TF_VAR_ts_access_key="" - export TF_VAR_ts_secret_key="" - ``` - - 1. **Add your resources** - - Add your {SERVICE_LONG}s or {VPC} connections to the `main.tf` configuration file. For example: - - ```hcl - resource "timescale_service" "test" { - name = "test-service" - milli_cpu = 500 - memory_gb = 2 - region_code = "us-east-1" - enable_ha_replica = false - - timeouts = { - create = "30m" - } - } - - resource "timescale_vpc" "vpc" { - cidr = "10.10.0.0/16" - name = "test-vpc" - region_code = "us-east-1" - } - ``` - - You can now manage your resources with Terraform. 
See more about [available resources][terraform-resources] and [data sources][terraform-data-sources]. - - - - - - - - You use the [`cyrilgdn/postgresql`][pg-provider] {PG} provider to connect to your {SELF_LONG} instance. - - Create a `main.tf` configuration file with the following content, using your [connection details][connection-info]: - - ```hcl - terraform { - required_providers { - postgresql = { - source = "cyrilgdn/postgresql" - version = ">= 1.15.0" - } - } - } - - provider "postgresql" { - host = "your-timescaledb-host" - port = "your-timescaledb-port" - database = "your-database-name" - username = "your-username" - password = "your-password" - sslmode = "require" # Or "disable" if SSL isn't enabled - } - ``` - - You can now manage your database with Terraform. - - - - - -[terraform-install]: https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli -[terraform]: https://developer.hashicorp.com/terraform -[console]: https://console.cloud.timescale.com/dashboard/services -[terraform-provider]: https://registry.terraform.io/providers/timescale/timescale/latest/docs -[connection-info]: /integrations/find-connection-details/ -[terraform-resources]: https://registry.terraform.io/providers/timescale/timescale/latest/docs/resources/peering_connection -[terraform-data-sources]: https://registry.terraform.io/providers/timescale/timescale/latest/docs/data-sources/products -[pg-provider]: https://registry.terraform.io/providers/cyrilgdn/postgresql/latest - diff --git a/snippets/api-reference/timescaledb/_add-dimension-errors.mdx b/snippets/api-reference/timescaledb/_add-dimension-errors.mdx new file mode 100644 index 0000000..7d1d102 --- /dev/null +++ b/snippets/api-reference/timescaledb/_add-dimension-errors.mdx @@ -0,0 +1,11 @@ +import { HYPERTABLE, TIMESCALE_DB } from '/snippets/vars.mdx'; + +On failure, an error is returned: + +| Error | Description | +|-------|-------------| +| table "{table_name}" is not a {HYPERTABLE} | The specified 
table has not been converted to a {HYPERTABLE} | +| column "{column_name}" does not exist | The specified column does not exist in the {HYPERTABLE} | +| column "{column_name}" is already a dimension | A dimension already exists for this column | +| cannot specify both the number of partitions and an interval | Both `number_partitions` and `chunk_time_interval` were provided | +| invalid interval type for bigint dimension | An INTERVAL type was used for a BIGINT column instead of an integer value | diff --git a/snippets/api-reference/timescaledb/_returns-null-if-not-hypertable.mdx b/snippets/api-reference/timescaledb/_returns-null-if-not-hypertable.mdx new file mode 100644 index 0000000..0e3df33 --- /dev/null +++ b/snippets/api-reference/timescaledb/_returns-null-if-not-hypertable.mdx @@ -0,0 +1,5 @@ +import { HYPERTABLE } from '/snippets/vars.mdx'; + + +If executed on a relation that is not a {HYPERTABLE}, the function returns `NULL`. + diff --git a/snippets/api-reference/timescaledb/_returns-void.mdx b/snippets/api-reference/timescaledb/_returns-void.mdx new file mode 100644 index 0000000..1572711 --- /dev/null +++ b/snippets/api-reference/timescaledb/_returns-void.mdx @@ -0,0 +1 @@ +This function returns void.
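The UUIDv7 functions documented in the diff above (`generate_uuidv7`, `uuid_version`, `uuid_timestamp`) rely on the RFC 9562 layout: the top 48 bits carry a millisecond Unix timestamp, the version nibble is 7, and the remaining bits are random. A client-side Python sketch of that layout (illustrative only; the real functions run inside TimescaleDB):

```python
import datetime
import os
import uuid


def generate_uuidv7() -> uuid.UUID:
    """Build a UUIDv7: 48-bit unix-ms timestamp, version 7, random tail."""
    ms = int(datetime.datetime.now(datetime.timezone.utc).timestamp() * 1000)
    rand_a = int.from_bytes(os.urandom(2), "big") & 0xFFF            # 12 random bits
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)  # 62 random bits
    value = (ms << 80) | (0x7 << 76) | (rand_a << 64) | (0b10 << 62) | rand_b
    return uuid.UUID(int=value)


def uuid_timestamp(u: uuid.UUID) -> datetime.datetime:
    """Recover the millisecond-precision timestamp from the top 48 bits."""
    ms = u.int >> 80
    return datetime.datetime.fromtimestamp(ms / 1000, tz=datetime.timezone.utc)


u = generate_uuidv7()
print(u.version)  # 7
```

Because the timestamp sits in the most significant bits, UUIDv7 values sort by creation time, which is what makes them suitable for the time-partitioned columns mentioned in the `generate_uuidv7` page.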