Conversation

@userzhy (Contributor) commented Dec 26, 2025

Purpose

Linked issue: close #6328

This PR adds support for the compact_database procedure in Spark SQL, mirroring Flink's existing implementation.

The new procedure supports the following parameters:

  • including_databases: Databases to include; supports regular expressions.
  • including_tables: Tables to include; supports regular expressions.
  • excluding_tables: Tables to exclude; supports regular expressions.
  • options: Compaction options in the format "key1=value1,key2=value2" (see the parsing sketch after the usage examples).

Usage examples:

-- Compact all tables in all databases
CALL sys.compact_database()

-- Compact tables in specific databases
CALL sys.compact_database(including_databases => 'db1|db2')

-- Compact specific tables with options
CALL sys.compact_database(including_tables => '.*_fact', options => 'write-only=true')
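
For illustration, below is a minimal Java sketch of how the include/exclude filters and the options string could be evaluated. This is a sketch of the general technique under stated assumptions, not the PR's actual implementation; all class and method names are illustrative.

import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

// Illustrative sketch only: shows how including/excluding regex filters and
// the "key1=value1,key2=value2" options string could be interpreted. Not the
// code from this PR.
public class CompactDatabaseFilterSketch {

    /** Returns true if the table passes the include/exclude regex filters. */
    static boolean shouldCompact(
            String database,
            String table,
            Pattern includingDatabases,
            Pattern includingTables,
            Pattern excludingTables) {
        if (includingDatabases != null && !includingDatabases.matcher(database).matches()) {
            return false;
        }
        if (includingTables != null && !includingTables.matcher(table).matches()) {
            return false;
        }
        return excludingTables == null || !excludingTables.matcher(table).matches();
    }

    /** Parses "key1=value1,key2=value2" into a map of compaction options. */
    static Map<String, String> parseOptions(String options) {
        Map<String, String> result = new HashMap<>();
        if (options == null || options.isEmpty()) {
            return result;
        }
        for (String pair : options.split(",")) {
            String[] kv = pair.split("=", 2);
            if (kv.length == 2) {
                result.put(kv[0].trim(), kv[1].trim());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Pattern dbs = Pattern.compile("db1|db2");
        Pattern facts = Pattern.compile(".*_fact");
        System.out.println(shouldCompact("db1", "orders_fact", dbs, facts, null)); // true
        System.out.println(shouldCompact("db3", "orders_fact", dbs, facts, null)); // false
        System.out.println(parseOptions("write-only=true"));                       // {write-only=true}
    }
}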

Tests

  • CompactDatabaseProcedureTest.testCompactDatabase: Basic functionality test.
  • CompactDatabaseProcedureTest.testCompactDatabaseWithDatabaseFilter: Database filtering test.
  • CompactDatabaseProcedureTest.testCompactDatabaseWithTableFilter: Table filtering test.
  • CompactDatabaseProcedureTest.testCompactDatabaseWithExcludingTables: Table exclusion test.
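
As a rough illustration of what these tests verify, here is a hypothetical sketch of a database-filter assertion. It assumes a local SparkSession with a Paimon catalog configured and pre-populated tables db1.t1 and db2.t1; the setup, table names, and snapshot check are assumptions for illustration, not the PR's actual test code.

import org.apache.spark.sql.SparkSession;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical test sketch; `spark` is assumed to be a SparkSession already
// configured with a Paimon catalog and test tables. Not the PR's test code.
class CompactDatabaseFilterSketchTest {

    private final SparkSession spark =
            SparkSession.builder().master("local[1]").getOrCreate();

    @Test
    void compactsOnlyIncludedDatabases() {
        // Compact only tables whose database matches the regex 'db1'.
        spark.sql("CALL sys.compact_database(including_databases => 'db1')");

        // One way to verify the filter: a compaction commits a snapshot of
        // kind COMPACT, visible through Paimon's `$snapshots` system table,
        // so db1.t1 should have one and db2.t1 should not.
        long compacted =
                spark.sql("SELECT COUNT(*) FROM db1.`t1$snapshots` "
                                + "WHERE commit_kind = 'COMPACT'")
                        .first()
                        .getLong(0);
        assertTrue(compacted > 0);
    }
}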

API and Format

This change adds a new stored procedure, compact_database, to Spark SQL. There are no storage format changes.

Documentation

This is a new feature; the Spark procedures documentation page may need to be updated to cover it.

@JingsongLi (Contributor)

+1

@JingsongLi JingsongLi merged commit f5e5ada into apache:master Dec 27, 2025
23 checks passed
jerry-024 added a commit to jerry-024/paimon that referenced this pull request Dec 29, 2025
* upstream/master: (51 commits)
  [test] Fix unstable test: handle MiniCluster shutdown gracefully in collect method (apache#6913)
  [python] fix ray dataset not lazy loading issue when parallelism = 1 (apache#6916)
  [core] Refactor ExternalPathProviders abstraction
  [spark] fix Merge Into unstable tests (apache#6912)
  [core] Enable Entropy Inject for data file path to prevent being throttled by object storage (apache#6832)
  [iceberg] support millisecond timestamps in iceberg compatibility mode (apache#6352)
  [spark] Handle NPE for pushdown aggregate when a datasplit has a null max/min value (apache#6611)
  [test] Fix unstable case testLimitPushDown
  [core] Refactor row id pushdown to DataEvolutionFileStoreScan
  [spark] paimon-spark supports row id push down (apache#6697)
  [spark] Support compact_database procedure (apache#6328) (apache#6910)
  [lucene] Fix row count in IndexManifestEntry
  [test] Remove unstable test: AppendTableITCase.testFlinkMemoryPool
  [core] Refactor Global index writer and reader for Btree
  [core] Minor refactor to magic number into footer
  [core] Support btree global index in paimon-common (apache#6869)
  [spark] Optimize compact for data-evolution table, commit multiple times to avoid out of memory (apache#6907)
  [rest] Add fromSnapshot to rollback (apache#6905)
  [test] Fix unstable RowTrackingTestBase test
  [core] Simplify FileStoreCommitImpl to extract some classes (apache#6904)
  ...
