
Conversation


@ianton-ru ianton-ru commented Jun 12, 2025

Changelog category (leave one):

  • Improvement

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Allow data and metadata with different paths.
Support S3 tables as a warehouse.

Documentation entry for user-facing changes

Solved #856

Exclude tests:

  • Fast test
  • Integration Tests
  • Stateless tests
  • Stateful tests
  • Performance tests
  • All with ASAN
  • All with TSAN
  • All with MSAN
  • All with UBSAN
  • All with Coverage
  • All with Aarch64
  • All Regression
  • Disable CI Cache

@ianton-ru
Author

Tested on a test installation in Altinity Cloud.
Before:

CREATE DATABASE "ice"
  ENGINE = DataLakeCatalog('https://iceberg-catalog.aws-us-west-2.dev.altinity.cloud/')
  SETTINGS
    catalog_type = 'rest',
    auth_header = 'Authorization: Bearer [CENSORED]',
    warehouse = 's3://aws-st-2-fs5vug37-iceberg'
    
select count() from ice."nyc.test"

Code: 36. DB::Exception: Expected to find 'nyc/test' in data path: 's3://aws-st-2-fs5vug37-iceberg/nyc/tripdata/data/200901/data.parquet'. (BAD_ARGUMENTS) (version 25.3.3.20139.altinityantalya.20139 (official build))

After:

clickhouse1 :) select count() from ice."nyc.test"

SELECT count()
FROM ice.`nyc.test`

Query id: 72153980-4f22-4522-ae34-8759c7ef1ad0

   ┌────count()─┐
1. │ 1310903963 │ -- 1.31 billion
   └────────────┘

clickhouse1 :) select * from ice."nyc.test" limit 1

SELECT *
FROM ice.`nyc.test`
LIMIT 1

Query id: 053796f4-b8da-4a18-b8dd-b83ea8e5ef1c

Row 1:
──────
pickup_date:           2009-01-30
id:                    0
vendor_id:             VTS
pickup_datetime:       2009-01-30 17:01:00.000000
dropoff_datetime:      2009-01-30 17:06:00.000000
passenger_count:       5
trip_distance:         1.3599999
pickup_longitude:      -74.00617
pickup_latitude:       40.733162
rate_code_id:
store_and_fwd_flag:
dropoff_longitude:     -73.994835
dropoff_latitude:      40.75027
payment_type:          CASH
fare_amount:           5.7
extra:                 1
mta_tax:               0
tip_amount:            0
tolls_amount:          0
improvement_surcharge: 0
total_amount:          6.7
pickup_location_id:    0
dropoff_location_id:   0
junk1:
junk2:

{
throw ::DB::Exception(DB::ErrorCodes::BAD_ARGUMENTS, "Expected to find '{}' in data path: '{}'", common_path, data_path);
/// Data files can have different path
pos = data_path.find("://");
Collaborator


The method docs say:

// For example, if the full blob path is s3://bucket/table_name/data/00000-1-1234567890.avro, the function will return table_name/data/00000-1-1234567890.avro

Which means the bucket shall not be included in the return value as far as I understand.

As far as I understand your exception handling, you skip the initial "://" and then copy everything after the next "/". But what if you are using path-style URIs (i.e., https://s3.region-code.amazonaws.com/bucket-name/key-name)?
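For illustration, the concern can be sketched with a minimal key-extraction helper — the `path_style` flag and the helper name are assumptions, not the PR's actual code:

```cpp
#include <cassert>
#include <string>
#include <string_view>

// Hypothetical sketch: return the object key from an S3 URI. In
// virtual-hosted-style URIs (s3://bucket/key, https://bucket.s3.region.amazonaws.com/key)
// the key starts right after the host, but in path-style URIs
// (https://s3.region-code.amazonaws.com/bucket-name/key-name) the first path
// segment is the bucket and must be skipped as well.
std::string extractKey(std::string_view uri, bool path_style)
{
    auto pos = uri.find("://");
    if (pos == std::string_view::npos)
        return std::string(uri); // already a bare key
    auto rest = uri.substr(pos + 3); // "<host-or-bucket>/<...>"
    auto slash = rest.find('/');
    if (slash == std::string_view::npos)
        return {}; // no key at all
    rest = rest.substr(slash + 1);
    if (path_style)
    {
        // the first segment after the host is the bucket, not part of the key
        auto slash2 = rest.find('/');
        if (slash2 == std::string_view::npos)
            return {};
        rest = rest.substr(slash2 + 1);
    }
    return std::string(rest);
}
```

Skipping only "://" and the next "/" is correct for virtual-hosted URIs but would return the bucket as part of the key for path-style ones.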

Author


Tried to remove the bucket.

Collaborator


Same S3/URI comment here

@ianton-ru
Author

Added code to support S3 tables
Before:

show tables from ice

2025.06.19 05:01:38.947571 [ 799 ] {85d354ee-ba11-4ac1-89e7-3d8684a7e449} <Error> RestCatalog(s3://aws-st-2-fs5vug37-iceberg): Code: 48. DB::Exception: Unexpected location format: s3://3b8aac22-bb5f-4548-y5ta9hipodcohbq9o6pwbbiht7dcgusw2b--table-s3. (NOT_IMPLEMENTED), Stack trace (when copying this message, always include the lines below):

After allowing an empty path in the location:

2025.06.19 11:39:14.915327 [ 85 ] {14de1b7e-5a8f-4b3a-af2d-463e0bda9ba8} <Warning> DNSResolver: Cannot resolve host (91f4f060-1a63-4ff0-oy49hiiqy45utsdypodf5p1j7awk4usw2b--table-s3.s3-us-west-2.us-west-2.amazonaws.com), error 0: Host not found.
2025.06.19 11:39:14.915513 [ 85 ] {14de1b7e-5a8f-4b3a-af2d-463e0bda9ba8} <Information> AWSClient: Failed to make request to: https://91f4f060-1a63-4ff0-oy49hiiqy45utsdypodf5p1j7awk4usw2b--table-s3.s3-us-west-2.us-west-2.amazonaws.com/metadata/00001-9c6d55c7-3a32-4bd8-a25d-dd2bbdad06ba.metadata.json: Code: 198. DB::NetException: Not found address of host: 91f4f060-1a63-4ff0-oy49hiiqy45utsdypodf5p1j7awk4usw2b--table-s3.s3-us-west-2.us-west-2.amazonaws.com. (DNS_ERROR), Stack trace (when copying this message, always include the lines below):

So I added code to avoid inserting the region into the URL if the region is already there.

Strange thing:
s3-us-west-2 is a legacy format for region names (see https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html#VirtualHostingBackwardsCompatibility)

But the host with s3-us-west-2 can be resolved:

$ host 91f4f060-1a63-4ff0-oy49hiiqy45utsdypodf5p1j7awk4usw2b--table-s3.s3-us-west-2.amazonaws.com
91f4f060-1a63-4ff0-oy49hiiqy45utsdypodf5p1j7awk4usw2b--table-s3.s3-us-west-2.amazonaws.com is an alias for s3-r-w.us-west-2.amazonaws.com.
s3-r-w.us-west-2.amazonaws.com has address 52.92.132.202
s3-r-w.us-west-2.amazonaws.com has address 3.5.80.209
s3-r-w.us-west-2.amazonaws.com has address 52.92.224.10
s3-r-w.us-west-2.amazonaws.com has address 3.5.78.129
s3-r-w.us-west-2.amazonaws.com has address 3.5.78.172
s3-r-w.us-west-2.amazonaws.com has address 3.5.81.130
s3-r-w.us-west-2.amazonaws.com has address 52.92.152.242
s3-r-w.us-west-2.amazonaws.com has address 3.5.87.17

while with us-west-2 it cannot:

$ host 91f4f060-1a63-4ff0-oy49hiiqy45utsdypodf5p1j7awk4usw2b--table-s3.us-west-2.amazonaws.com
Host 91f4f060-1a63-4ff0-oy49hiiqy45utsdypodf5p1j7awk4usw2b--table-s3.us-west-2.amazonaws.com not found: 3(NXDOMAIN)
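The region fix described above can be sketched roughly like this — the helper name and matching logic are assumptions for illustration, not the PR's exact code in URI::addRegionToURI:

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch: only insert the region into an endpoint if it is not
// already present. A plain substring check covers both the modern form
// ("s3.us-west-2.amazonaws.com") and the legacy form
// ("s3-us-west-2.amazonaws.com"), avoiding endpoints like
// "...s3-us-west-2.us-west-2.amazonaws.com" that fail DNS resolution.
std::string addRegionToURIIfNeeded(const std::string & uri, const std::string & region)
{
    if (uri.find(region) != std::string::npos)
        return uri; // region already there (legacy or modern form)
    auto pos = uri.find(".amazonaws.com");
    if (pos == std::string::npos)
        return uri; // not an AWS endpoint, leave as-is
    return uri.substr(0, pos) + "." + region + uri.substr(pos);
}
```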

So now the catalog works.

But S3 tables do not:

clickhouse1 :) show tables from ice

SHOW TABLES FROM ice

Query id: e569776c-1533-407d-be95-5f3c844afcc8

   ┌─name───────┐
1. │ ns1.table1 │
2. │ ns1.table2 │
   └────────────┘

clickhouse1 :) describe table ice.`ns1.table1`

DESCRIBE TABLE ice.`ns1.table1`

Query id: 0e79caae-5e20-40a0-82e4-8f516053f3e8

   ┌─name─────────┬─type──────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
1. │ sepal.length │ Nullable(Float64) │              │                    │         │                  │                │
2. │ sepal.width  │ Nullable(Float64) │              │                    │         │                  │                │
3. │ petal.length │ Nullable(Float64) │              │                    │         │                  │                │
4. │ petal.width  │ Nullable(Float64) │              │                    │         │                  │                │
5. │ variety      │ Nullable(String)  │              │                    │         │                  │                │
   └──────────────┴───────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘

clickhouse1 :) describe table ice.`ns1.table2`

DESCRIBE TABLE ice.`ns1.table2`

Query id: 738875bf-abde-49fe-9d60-33c51355d61d

   ┌─name─────────┬─type──────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
1. │ sepal.length │ Nullable(Float64) │              │                    │         │                  │                │
2. │ sepal.width  │ Nullable(Float64) │              │                    │         │                  │                │
3. │ petal.length │ Nullable(Float64) │              │                    │         │                  │                │
4. │ petal.width  │ Nullable(Float64) │              │                    │         │                  │                │
5. │ variety      │ Nullable(String)  │              │                    │         │                  │                │
   └──────────────┴───────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘

SHOW CREATE TABLE ice.`ns1.table1`

Query id: abb38eb5-b51e-4e48-882e-bf2cc0ca1133

   ┌─statement─────────────────────────────────────────────────────────────────────────────────┐
1. │ CREATE TABLE ice.`ns1.table1`                                                            ↴│
   │↳(                                                                                        ↴│
   │↳    `sepal.length` Nullable(Float64),                                                    ↴│
   │↳    `sepal.width` Nullable(Float64),                                                     ↴│
   │↳    `petal.length` Nullable(Float64),                                                    ↴│
   │↳    `petal.width` Nullable(Float64),                                                     ↴│
   │↳    `variety` Nullable(String)                                                           ↴│
   │↳)                                                                                        ↴│
   │↳ENGINE = Iceberg('s3://91f4f060-1a63-4ff0-oy49hiiqy45utsdypodf5p1j7awk4usw2b--table-s3/') │
   └───────────────────────────────────────────────────────────────────────────────────────────┘

clickhouse1 :) show create table ice.`ns1.table2`

SHOW CREATE TABLE ice.`ns1.table2`

Query id: cfb9e022-cd5f-4d5c-87f1-d646f063d59e

   ┌─statement─────────────────────────────────────────────────────────────────────────────────┐
1. │ CREATE TABLE ice.`ns1.table2`                                                            ↴│
   │↳(                                                                                        ↴│
   │↳    `sepal.length` Nullable(Float64),                                                    ↴│
   │↳    `sepal.width` Nullable(Float64),                                                     ↴│
   │↳    `petal.length` Nullable(Float64),                                                    ↴│
   │↳    `petal.width` Nullable(Float64),                                                     ↴│
   │↳    `variety` Nullable(String)                                                           ↴│
   │↳)                                                                                        ↴│
   │↳ENGINE = Iceberg('s3://3b8aac22-bb5f-4548-y5ta9hipodcohbq9o6pwbbiht7dcgusw2b--table-s3/') │
   └───────────────────────────────────────────────────────────────────────────────────────────┘

clickhouse1 :) select count() from ice.`ns1.table2`

SELECT count()
FROM ice.`ns1.table2`

Query id: 0a4c0e08-3634-4f22-9a2f-58132472ccda

   ┌─count()─┐
1. │     150 │
   └─────────┘

clickhouse1 :) select count() from ice.`ns1.table1`

SELECT count()
FROM ice.`ns1.table1`

Query id: 5ece468d-b549-40b9-92c0-1bb1dc9be417

   ┌─count()─┐
1. │     150 │
   └─────────┘

but a query for data shows this:

clickhouse1 :) select * from ice.`ns1.table1` limit 1

SELECT *
FROM ice.`ns1.table1`
LIMIT 1

Query id: 3ef18a94-8eb4-4337-8081-1bebbab05a2f

Ok.

0 rows in set. Elapsed: 3.348 sec.

clickhouse1 :) select * from ice.`ns1.table2` limit 1

SELECT *
FROM ice.`ns1.table2`
LIMIT 1

Query id: 779cf2d9-435b-493e-8390-b0c1b64f63bd

   ┌─sepal.length─┬─sepal.width─┬─petal.length─┬─petal.width─┬─variety─┐
1. │         ᴺᵁᴸᴸ │        ᴺᵁᴸᴸ │         ᴺᵁᴸᴸ │        ᴺᵁᴸᴸ │ Setosa  │
   └──────────────┴─────────────┴──────────────┴─────────────┴─────────┘

1 row in set. Elapsed: 3.734 sec.

clickhouse1 :) select * from ice.`ns1.table2` limit 3

SELECT *
FROM ice.`ns1.table2`
LIMIT 3

Query id: 05a87dd0-2721-417f-99ce-caa5d033e494

Ok.

0 rows in set. Elapsed: 2.330 sec.

@ianton-ru
Author

Dirty fix.
The reason for the empty answer is that when some data is cached, ClickHouse tries to make a HeadObject request first. That failed at the endpoint-resolving stage - it tried to make a ListObjects request instead (https://github.com/Altinity/ClickHouse/blob/antalya/src/IO/S3/Client.cpp#L821), but S3 Table buckets don't support it.
Changed it to GetObject.

@ianton-ru
Author

The NULLs are caused by a name conflict - in the catalog, columns have names with a dot, like sepal.length, but in the Parquet file they have names like sepal_x2Elength.
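The mapping can be sketched as follows — this mirrors Iceberg's Avro name sanitization, where illegal characters are replaced with _xNN hex escapes, but the helper itself is illustrative and not the PR's code:

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Hypothetical sketch: decode an Iceberg-sanitized field name back to the
// catalog name. Iceberg replaces characters that are illegal in Avro names
// with "_x" followed by the two-digit hex code, so '.' (0x2E) makes the
// catalog name "sepal.length" appear as "sepal_x2Elength" in the data file.
std::string decodeIcebergFieldName(const std::string & name)
{
    std::string out;
    for (size_t i = 0; i < name.size(); ++i)
    {
        if (name[i] == '_' && i + 3 < name.size() && name[i + 1] == 'x'
            && std::isxdigit(static_cast<unsigned char>(name[i + 2]))
            && std::isxdigit(static_cast<unsigned char>(name[i + 3])))
        {
            // "_x2E" -> 0x2E -> '.'
            out += static_cast<char>(std::stoi(name.substr(i + 2, 2), nullptr, 16));
            i += 3;
        }
        else
            out += name[i];
    }
    return out;
}
```

Matching columns by the decoded name (rather than the raw file name) resolves this kind of conflict.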

@ianton-ru ianton-ru force-pushed the feature/different_folder_for_data_lake branch from 625f31b to 95f4f0f Compare June 20, 2025 13:22
@alex-zaitsev alex-zaitsev changed the title Allow data and metadata by different paths Support different backends behind Iceberg REST catalog Jun 20, 2025
@alex-zaitsev alex-zaitsev changed the title Support different backends behind Iceberg REST catalog Support different warehouses behind Iceberg REST catalog Jun 20, 2025
Collaborator

@arthurpassos arthurpassos left a comment


I still did not understand fully the purpose of this PR. Could you please write a summary of the changes explaining the use case, why it did not work and what you did to make it work?

Example:

use case 1: Iceberg specs allow data to live in a completely different storage than its metadata.

Problem in clickhouse 1: clickhouse holds a single storage path for each iceberg installation, it should have two..

Fixed in clickhouse 1: fixed it by adding a new variable and bla bla bla

throw DB::Exception(DB::ErrorCodes::NOT_IMPLEMENTED, "Unexpected location format: {}", location_);

pos_to_path = pos_to_bucket + pos_to_path;
{ // empty path
Collaborator


If you want to have this comment, I suggest moving it into a variable instead:

bool empty_path = pos_to_path == std::string::npos;

Author


Why is this better? It's an additional variable filled at runtime, and it makes no logic change.

Collaborator


This is just a very opinionated comment. I usually prefer named variables over comments because variables compile and we tend to forget to update comments.

In any case, I should have deleted this or mentioned that it is not a requirement. Don't worry about it.

throw DB::Exception(DB::ErrorCodes::NOT_IMPLEMENTED, "Unexpected location format: {}", location_);

auto pos_to_bucket = pos + std::strlen("://");
auto pos_to_path = location_.substr(pos_to_bucket).find('/');
Collaborator


Wouldn't it be easier to just instantiate S3::URI? AFAIK, it should support all types of S3 endpoints and its constructor takes a URI string.

https://github.com/ClickHouse/ClickHouse/blob/fbd99df81d18fac4c1e26f665d3bba316775bfd4/src/IO/S3/URI.cpp#L114

Author


Not sure. Here the location is just cut into two pieces, while Poco::URI does a full analysis, splitting it into all its parts, which would later need to be concatenated back together.


// Do a list request because head requests don't have body in response
std::optional<Aws::S3::S3Error> Client::updateURIForBucketForHead(const std::string & bucket) const
// S3 Tables don't support ListObjects, so made a dirty workaround - changed to GetObject
Collaborator


Consider using GetObject with a very small byte range, e.g. bytes 1-2.

// Do a list request because head requests don't have body in response
std::optional<Aws::S3::S3Error> Client::updateURIForBucketForHead(const std::string & bucket) const
// S3 Tables don't support ListObjects, so made a dirty workaround - changed to GetObject
std::optional<Aws::S3::S3Error> Client::updateURIForBucketForHead(const std::string & bucket, const std::string & key) const
Collaborator


What is the purpose of this method? The name is updateURIForBucketForHead, but it doesn't seem to update anything

Author


Magic inside the AWS SDK. To get the proper URI we must make some request that returns a 301 redirect. In the response body AWS sends the proper endpoint, and it is extracted somewhere inside the SDK. After that ClickHouse extracts it in the method Client::getURIFromError.
But for this the response must have a body, while the response to a HeadObject request does not have one.
That is the reason for the workaround of calling ListObjects instead, see this comment by Antonio (initial author of this code).
And the ListObjects request is not supported by S3 Tables. That's why I changed it to GetObject for now.
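The resulting probe selection can be sketched like this — the names are hypothetical; the real change lives in Client::updateURIForBucketForHead:

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch of the probe choice. To learn a bucket's real endpoint,
// the client needs an error response *with a body*, because the redirect body
// carries the proper endpoint; HeadObject error responses have no body.
// Regular buckets can use ListObjects for this, but S3 Table buckets don't
// support it, so the fallback is a minimal GetObject.
enum class ProbeRequest { ListObjects, GetObject };

ProbeRequest chooseEndpointProbe(bool bucket_supports_list_objects)
{
    return bucket_supports_list_objects ? ProbeRequest::ListObjects
                                        : ProbeRequest::GetObject;
}

// For the GetObject probe, request only the first bytes to keep it cheap,
// along the lines of the byte-range suggestion in the review.
std::string probeRangeHeader()
{
    return "bytes=0-1";
}
```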

Collaborator


Could you please add a short comment about this and even add a link to this discussion? I find it surprising they did not do it.

{
throw ::DB::Exception(DB::ErrorCodes::BAD_ARGUMENTS, "Expected to find '{}' in data path: '{}'", common_path, data_path);
/// Data files can have different path
pos = data_path.find("://");
Collaborator


Same S3/URI comment here

@ianton-ru
Author

ianton-ru commented Jun 24, 2025

I still did not understand fully the purpose of this PR. Could you please write a summary of the changes explaining the use case, why it did not work and what you did to make it work?

Example:

use case 1: Iceberg specs allow data to live in a completely different storage than its metadata.

Problem in clickhouse 1: clickhouse holds a single storage path for each iceberg installation, it should have two..

Fixed in clickhouse 1: fixed it by adding a new variable and bla bla bla

ClickHouse expects that metadata and data share a common non-empty path /table_name/: /table_name/metadata and /table_name/data. That is not always true - data can be in a different place. In the initial case the metadata has the path /nyc/test/metadata, while the data has nyc/tripdata/data. And for S3 Tables, metadata and data are in the bucket root, and the table location is just s3://bucket without any path.

So this PR:

  1. Allows data and metadata to have different paths (changes in Iceberg::getProperFilePathFromMetadataInfo)
  2. Allows an empty table path (changes in TableMetadata::setLocation)
  3. Adds a workaround for the workaround for getting the proper bucket URI (see comment above)
  4. Avoids inserting the region into the URI twice if initial_endpoint already has one (changes in URI::addRegionToURI)
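A rough sketch of what change 2 amounts to — illustrative only, with hypothetical names; the real logic is in TableMetadata::setLocation:

```cpp
#include <cassert>
#include <optional>
#include <string>

// Hypothetical sketch: parse a table location into bucket and (possibly
// empty) path. For S3 Tables the location is just "s3://bucket", which
// previously failed with "Unexpected location format" (NOT_IMPLEMENTED).
struct Location
{
    std::string bucket;
    std::string path;
};

std::optional<Location> parseLocation(const std::string & location)
{
    auto pos = location.find("://");
    if (pos == std::string::npos)
        return std::nullopt; // genuinely unexpected format
    auto rest = location.substr(pos + 3);
    auto slash = rest.find('/');
    if (slash == std::string::npos)
        return Location{rest, ""}; // empty path is now allowed
    return Location{rest.substr(0, slash), rest.substr(slash + 1)};
}
```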

@arthurpassos
Collaborator

I still did not understand fully the purpose of this PR. Could you please write a summary of the changes explaining the use case, why it did not work and what you did to make it work?
Example:
use case 1: Iceberg specs allow data to live in a completely different storage than its metadata.
Problem in clickhouse 1: clickhouse holds a single storage path for each iceberg installation, it should have two..
Fixed in clickhouse 1: fixed it by adding a new variable and bla bla bla

ClickHouse expects that metadata and data share a common non-empty path /table_name/: /table_name/metadata and /table_name/data. That is not always true - data can be in a different place. In the initial case the metadata has the path /nyc/test/metadata, while the data has nyc/tripdata/data. And for S3 Tables, metadata and data are in the bucket root, and the table location is just s3://bucket without any path.

So this PR:

  1. Allows data and metadata to have different paths (changes in Iceberg::getProperFilePathFromMetadataInfo)
  2. Allows an empty table path (changes in TableMetadata::setLocation)
  3. Adds a workaround for the workaround for getting the proper bucket URI (see comment above)
  4. Avoids inserting the region into the URI twice if initial_endpoint already has one (changes in URI::addRegionToURI)

Thanks for the explanation. I'll review it more carefully tomorrow morning. I skimmed through getProperFilePathFromMetadataInfo, but it lacks examples.

Collaborator

@arthurpassos arthurpassos left a comment


For change 1, see the comments
For change 2, I haven't reviewed it yet
For change 3, it is ok
For change 4, see the comments

Btw, I think most of these changes should also go into upstream. Please consider submitting it there.

return regions.contains(region);
}

void URI::addRegionToURI(const std::string &region)
Collaborator


As discussed during our call, the current use of this method is already "checking" if region is in the endpoint, but it's probably doing it wrong. Please correct the call site, add docs and examples.

It might be a good idea to rename this method to be addRegionToURIIfNeeded or something of the sort and do the checks inside.

Author


I don't understand the code in Client.cpp.
It checks initial_endpoint, but adds the region to new_uri.
initial_endpoint is https://s3.amazonaws.com, while new_uri comes from the error response:

<?xml version="1.0" encoding="UTF-8"?>
<Error>
    <Code>PermanentRedirect</Code>
    <Message>The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.</Message>
    <Endpoint>91f4f060-1a63-4ff0-oy49hiiqy45utsdypodf5p1j7awk4usw2b--table-s3.s3-us-west-2.amazonaws.com</Endpoint>
    <Bucket>91f4f060-1a63-4ff0-oy49hiiqy45utsdypodf5p1j7awk4usw2b--table-s3</Bucket>
    <RequestId>DCPGXTJ8PA0HVQ27</RequestId>
    <HostId>ANM0Ek0N5xhfUCDwE2bxeVl2MhOl381L8SJz1JjA4lqcKJnZ1ySwT3c3fH1JGjMoCD1uYuTYa3SmEubEU14nx5IHvYGSjXM1</HostId>
</Error>

The URL in the response already contains the region.
What is going on here? Why initial_endpoint???

Author


Initial error:

2025.06.19 11:39:14.915513 [ 85 ] {14de1b7e-5a8f-4b3a-af2d-463e0bda9ba8} <Information> AWSClient: Failed to make request to: https://91f4f060-1a63-4ff0-oy49hiiqy45utsdypodf5p1j7awk4usw2b--table-s3.s3-us-west-2.us-west-2.amazonaws.com/metadata/00001-9c6d55c7-3a32-4bd8-a25d-dd2bbdad06ba.metadata.json: Code: 198. DB::NetException: Not found address of host: 91f4f060-1a63-4ff0-oy49hiiqy45utsdypodf5p1j7awk4usw2b--table-s3.s3-us-west-2.us-west-2.amazonaws.com. (DNS_ERROR), Stack trace (when copying this message, always include the lines below):

// For example, if the full blob path is s3://bucket/table_name/data/00000-1-1234567890.avro, the function will return table_name/data/00000-1-1234567890.avro
// Common path should end with "<table_name>" or "<table_name>/".
std::string getProperFilePathFromMetadataInfo(std::string_view data_path, std::string_view common_path, std::string_view table_location)
std::string getProperFilePathFromMetadataInfo(
Collaborator


We've spent quite some time reviewing this function together and haven't understood it fully. Please document the possible values and examples for all the arguments here, where they come from and the scenarios.

That'll make reviewing this much easier.

Author


data_path: s3://aws-st-2-fs5vug37-iceberg/aws-public-blockchain/btc/metadata/snap-2539904009313210382-1-4f2e6056-d08e-4420-9bc9-47bc0dcbd6f9.avro
common_path: aws-public-blockchain/btc
table_location: s3://aws-st-2-fs5vug37-iceberg/aws-public-blockchain/btc
common_namespace: aws-st-2-fs5vug37-iceberg

data_path: s3://aws-st-2-fs5vug37-iceberg/ssb/lineorder_wide/data/199603/data.parquet
common_path: ssb/lineorder_wide
table_location: s3://aws-st-2-fs5vug37-iceberg/ssb/lineorder_wide
common_namespace: aws-st-2-fs5vug37-iceberg

data_path: s3://aws-st-2-fs5vug37-iceberg/nyc/test/metadata/snap-7890808452220287820-1-960673bb-b315-4df9-946e-fd34c44b98f7.avro
common_path: nyc/test
table_location: s3://aws-st-2-fs5vug37-iceberg/nyc/test
common_namespace: aws-st-2-fs5vug37-iceberg
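To make these examples concrete, here is an illustrative helper (not the actual implementation) that reproduces the expected result by anchoring on common_namespace, i.e. the bucket, rather than on common_path, so that data under a different prefix than the metadata still resolves:

```cpp
#include <cassert>
#include <string>
#include <string_view>

// Hypothetical sketch: drop the scheme and the bucket (common_namespace)
// from data_path, returning the bucket-relative key. Because we anchor on
// the bucket instead of common_path, data files living under a different
// prefix than the table metadata are still handled.
std::string relativeDataPath(std::string_view data_path, std::string_view common_namespace)
{
    auto pos = data_path.find("://");
    if (pos == std::string_view::npos)
        return std::string(data_path); // already relative
    auto rest = data_path.substr(pos + 3); // "<bucket>/<key>"
    auto slash = rest.find('/');
    if (slash == std::string_view::npos || rest.substr(0, slash) != common_namespace)
        return std::string(rest); // unexpected layout, return host-relative part as-is
    return std::string(rest.substr(slash + 1));
}
```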

@Enmk Enmk changed the title Support different warehouses behind Iceberg REST catalog Antalya 25.3: Support different warehouses behind Iceberg REST catalog Jul 1, 2025
@Enmk Enmk merged commit a6ad598 into antalya-25.3 Jul 3, 2025
340 of 354 checks passed
@svb-alt svb-alt added antalya-25.6 port-antalya PRs to be ported to all new Antalya releases and removed antalya-25.6 labels Jul 14, 2025
ianton-ru pushed a commit that referenced this pull request Aug 4, 2025
…ta_lake

Antalya 25.3: Support different warehouses behind Iceberg REST catalog
ianton-ru pushed a commit that referenced this pull request Aug 6, 2025
…ta_lake

Antalya 25.3: Support different warehouses behind Iceberg REST catalog
Enmk added a commit that referenced this pull request Sep 9, 2025
…t_860

25.6.5 Antalya port of #860: Support different warehouses behind Iceberg REST catalog
ianton-ru pushed a commit that referenced this pull request Oct 3, 2025
…t_860

25.8 Antalya port of #860: Support different warehouses behind Iceberg REST catalog

Labels

antalya antalya-25.3 antalya-25.3.3 port-antalya PRs to be ported to all new Antalya releases
