[FLINK-22364][doc] Translate the page of "Data Sources" to Chinese. #15763
Conversation
Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community review your pull request.
Automated Checks
Last check on commit 84e5827 (Sat Aug 28 11:20:21 UTC 2021)
Warnings:
Mention the bot in a comment to re-run the automated checks.
Review Progress
Please see the Pull Request Review Guide for a full explanation of the review process.
Details: The bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
Bot commands: The @flinkbot bot supports the following commands:
@wuchong @PatrickRen @becketqin
PatrickRen left a comment:
Thanks for the contribution, Senhong! I left some comments on the translation.
> Data Source API 以统一的方式对无界流数据和有界批数据进行处理。
> 这两种情况之间的区别是非常小的:在有界/批处理情况中,枚举器生成固定数量的分片,每个分片都必须是有限的。在无界流的情况下,这两个条件中的一个为假(分片大小不是有限的,或者枚举器不断生成新的分片)。
“这两个条件中的一个为假”: I'd suggest a freer translation here, since the literal rendering is hard to follow. For example:
“在无界流的情况下,分片大小是无界的,或者枚举器会不断产生新的分片”
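For readers following this hunk, a minimal sketch of the bounded-vs-unbounded distinction it describes. This assumes the new `KafkaSource` from `flink-connector-kafka` is on the classpath; the broker address, topic, and job name are placeholders, not part of the translated page.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public final class KafkaSourceBoundednessSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")            // placeholder address
                .setTopics("input-topic")                      // placeholder topic
                .setGroupId("sketch-group")
                .setStartingOffsets(OffsetsInitializer.earliest())
                // .setBounded(OffsetsInitializer.latest())    // uncomment for a bounded/batch-style read
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> stream =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source");
        stream.print();

        env.execute("bounded-vs-unbounded sketch");
    }
}
```

The only difference between the streaming and the batch read here is whether `setBounded(...)` is used: for a bounded read the enumerator hands out a finite set of finite splits, exactly the distinction discussed above.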
> - `SourceEvent` 的处理
>   - `SourceEvent`s 是`分片枚举器` 和 `源阅读器`之间来回传递的自定义事件。可以利用此机制来执行复杂的协调任务。
> - 分片的发现以及分配
>   - `分片枚举器` 可以将分片分配到`源阅读器`s 从而响应各种事件,例如发现新的分片,新的`源阅读器`的注册,`源阅读器`的失败等。
可以将分片分配到源阅读器 从而响应各种事件
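As a rough illustration of the split assignment and `SourceEvent` coordination this hunk talks about, a hypothetical helper (not Flink code, names made up) could look like the following, assuming the `SplitEnumeratorContext` API from `flink-core`.

```java
import org.apache.flink.api.connector.source.SourceEvent;
import org.apache.flink.api.connector.source.SourceSplit;
import org.apache.flink.api.connector.source.SplitEnumeratorContext;

/** Sketch of how an enumerator might react to a newly registered reader (illustrative only). */
public final class EnumeratorReactionSketch {
    public static <SplitT extends SourceSplit> void onReaderRegistered(
            SplitEnumeratorContext<SplitT> context,
            int subtaskId,
            SplitT discoveredSplit,
            SourceEvent greeting) {
        context.assignSplit(discoveredSplit, subtaskId);        // hand a discovered split to that reader
        context.sendEventToSourceReader(subtaskId, greeting);   // custom coordination via a SourceEvent
    }
}
```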
> [源阅读器](https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/connector/source/SourceReader.java)是一个运行在Task Managers上的组件,用于处理来自分片的记录。
> `源阅读器`公布了一个拉动式(pull-based)处理接口。Flink任务会在循环中不断调用 `pollNext(ReaderOutput)` 来轮询来自`源阅读器`的记录。`pollNext(ReaderOutput)` 方法的返回值指示source reader的状态。
源阅读器提供了一个拉动式(pull-based)处理接口
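To make the pull-based `pollNext(ReaderOutput)` contract concrete, here is a sketch of the kind of loop the quoted paragraph describes. It is not Flink's actual task code; it only shows how the `InputStatus` return values would drive such a loop.

```java
import org.apache.flink.api.connector.source.ReaderOutput;
import org.apache.flink.api.connector.source.SourceReader;
import org.apache.flink.core.io.InputStatus;

/** Illustrative driver loop, not Flink's runtime implementation. */
public final class PollLoopSketch {
    public static <T> void drive(SourceReader<T, ?> reader, ReaderOutput<T> output) throws Exception {
        reader.start();
        while (true) {
            InputStatus status = reader.pollNext(output);
            if (status == InputStatus.END_OF_INPUT) {
                break;                          // the reader has emitted everything and can be closed
            }
            if (status == InputStatus.NOTHING_AVAILABLE) {
                reader.isAvailable().get();     // in the real runtime this wait is asynchronous
            }
            // MORE_AVAILABLE: poll again immediately
        }
        reader.close();
    }
}
```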
> - `END_OF_INPUT` - 源阅读器已经处理完所有记录,到达数据的尾部。这意味着源阅读器可以关闭了。
> The `SourceReader` exposes a pull-based consumption interface. A Flink task keeps calling `pollNext(ReaderOutput)` in a loop to poll records from the `SourceReader`. The return value of the `pollNext(ReaderOutput)` method indicates the status of the source reader.
> `pollNext(ReaderOutput)`会提供 `ReaderOutput` 作为返回值,为了提高性能且在必要情况下,`源阅读器`可以在一次pollNext()调用中返回多条记录。例如,有时外部系统的工作粒度为块。一个块可以包含多个记录,但是source只能在块的边界处设置 Checkpoint。在这种情况下,`源阅读器`可以一次将一个块中的所有记录包含在 `ReaderOutput` 中返回。
pollNext(ReaderOutput)会提供 ReaderOutput 作为参数
在这种情况下,源阅读器可以一次将一个块中的所有记录通过 ReaderOutput 发送。
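Below is a minimal sketch of the chunk-at-a-time behaviour described in this hunk, assuming the `SourceReader` interface from `flink-core`; the split type and the in-memory chunks are made up purely for illustration.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;

import org.apache.flink.api.connector.source.ReaderOutput;
import org.apache.flink.api.connector.source.SourceReader;
import org.apache.flink.api.connector.source.SourceSplit;
import org.apache.flink.core.io.InputStatus;

/** Sketch of a reader whose external system works in chunks: each pollNext() emits one whole chunk. */
public final class ChunkedReaderSketch implements SourceReader<String, ChunkedReaderSketch.ChunkSplit> {

    /** A made-up split that simply carries pre-materialized chunks of records. */
    public static final class ChunkSplit implements SourceSplit {
        final String id;
        final Queue<List<String>> chunks;
        public ChunkSplit(String id, Queue<List<String>> chunks) { this.id = id; this.chunks = chunks; }
        @Override public String splitId() { return id; }
    }

    private final Queue<ChunkSplit> splits = new ArrayDeque<>();
    private boolean noMoreSplits;

    @Override public void start() {}

    @Override
    public InputStatus pollNext(ReaderOutput<String> output) {
        ChunkSplit split = splits.peek();
        if (split == null) {
            return noMoreSplits ? InputStatus.END_OF_INPUT : InputStatus.NOTHING_AVAILABLE;
        }
        List<String> chunk = split.chunks.poll();
        if (chunk == null) {              // current split exhausted, move on to the next one
            splits.poll();
            return InputStatus.MORE_AVAILABLE;
        }
        for (String record : chunk) {
            output.collect(record);       // the whole chunk goes out within a single poll,
        }                                 // so checkpoints can only fall on chunk boundaries
        return InputStatus.MORE_AVAILABLE;
    }

    @Override public List<ChunkSplit> snapshotState(long checkpointId) { return new ArrayList<>(splits); }
    @Override public CompletableFuture<Void> isAvailable() { return CompletableFuture.completedFuture(null); }
    @Override public void addSplits(List<ChunkSplit> newSplits) { splits.addAll(newSplits); }
    @Override public void notifyNoMoreSplits() { noMoreSplits = true; }
    public void notifyCheckpointComplete(long checkpointId) {}
    @Override public void close() {}
}
```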
> ## 分片阅读器 API
> The [SplitReader](https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/source/reader/splitreader/SplitReader.java) is the high-level API for simple synchronous reading/polling-based source implementations, like file reading, Kafka, etc.
> 核心源阅读器 API是完全异步的,并且要求实现手动管理异步读取分片。
核心 源阅读器 API 是完全异步的
> 但是,实际上,大多数sources都会阻塞操作,例如阻塞客户端(如 `KafkaConsumer`)的 *poll()* 调用,或者阻塞分布式文件系统(HDFS, S3等)的I/O操作。为了使其与异步Source API兼容,这些阻塞(同步)操作需要在单独线程中进行,而这些线程会将数据移交给阅读器的异步部分。
例如阻塞客户端(如 KafkaConsumer)的 poll() 调用,或者阻塞分布式文件系统(HDFS, S3等)的I/O操作
-> 例如客户端(如 KafkaConsumer)的 poll() 阻塞调用,或者分布式文件系统(HDFS, S3等)的阻塞 I/O 操作
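The hand-off between a blocking fetcher thread and the asynchronous reader that this hunk describes can be sketched with plain JDK primitives. This shows only the general pattern, not Flink's `SplitFetcher` implementation; the blocking client is simulated.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

/** Generic hand-off pattern between a blocking fetcher thread and an async reader. */
public final class BlockingHandoverSketch {

    /** Stand-in for a blocking client such as KafkaConsumer#poll or a blocking file read. */
    static List<String> blockingPoll() throws InterruptedException {
        Thread.sleep(100);                       // simulate blocking I/O
        return Arrays.asList("record-1", "record-2");
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<List<String>> handover = new LinkedBlockingQueue<>();
        ExecutorService fetcherThread = Executors.newSingleThreadExecutor();

        // The blocking (synchronous) part runs on its own thread ...
        fetcherThread.submit(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                handover.put(blockingPoll());
            }
            return null;
        });

        // ... while the asynchronous reader side only drains the queue, e.g. from pollNext().
        for (int i = 0; i < 3; i++) {
            System.out.println("handed over: " + handover.take());
        }
        fetcherThread.shutdownNow();
    }
}
```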
> {{< hint warning >}}
> Applications based on the legacy [SourceFunction](https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/SourceFunction.java) typically generate timestamps and watermarks in a separate later step via `stream.assignTimestampsAndWatermarks(WatermarkStrategy)`. This function should not be used with the new sources, because timestamps will be already assigned, and it will override the previous split-aware watermarks.
> 基于遗留的 [SourceFunction](https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/SourceFunction.java) 的应用通常在之后的单独的一步中通过 `stream.assignTimestampsAndWatermarks(WatermarkStrategy)` 生成时间戳和水印。这个函数不应该与新的sources一起使用,因为此时时间戳应该已经被分配了,而且该函数会覆盖掉之前的可识别分片水印。
基于遗留的 -> 基于旧的
> {{< img width="80%" src="fig/per_split_watermarks.svg" alt="Watermark Generation in a Source with two Splits." >}}
> When implementing a source connector using the *Split Reader API*, this is automatically handled. All implementations based on the Split Reader API have split-aware watermarks out-of-the-box.
> 使用*分片提取器 API*实现源连接器时,将自动进行处理。所有基于分片提取器 API的实现都具有开箱即用(out-of-the-box)的可识别分片的水印。
使用分片提取器 API实现源连接器时
-> 使用分片阅读器(SplitReader)API 实现源连接器时
Keep this consistent with the translation used earlier in the page.
> For an implementation of the lower level `SourceReader` API to use split-aware watermark generation, the implementation must output events from different splits to different outputs: the *Split-local SourceOutputs*. Split-local outputs can be created and released on the main [ReaderOutput](https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/connector/source/ReaderOutput.java) via the `createOutputForSplit(splitId)` and `releaseOutputForSplit(splitId)` methods. Please refer to the JavaDocs of the class and methods for details.
> 为了实现较低级别的`源阅读器`API可以使用可识别分区的水印生成,必须将事件从不同的分片输出到不同的输出中:*局部分片 SourceOutputs*。通过 `createOutputForSplit(splitId)` 和 `releaseOutputForSplit(splitId)` 方法,可以在总 [ReaderOutput](https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/connector/source/ReaderOutput.java) 上创建并发布局部分片输出。有关详细信息,请参阅该类和方法的JavaDocs。
为了实现较低级别的源阅读器API可以使用可识别分区的水印生成 -> 为了实现更底层的源阅读器API可以使用可识别分区的水印生成
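For reference, a small sketch of the split-local output usage described in this hunk, assuming `ReaderOutput`/`SourceOutput` from `flink-core`; the helper name and the timestamps are made up for illustration.

```java
import org.apache.flink.api.connector.source.ReaderOutput;
import org.apache.flink.api.connector.source.SourceOutput;

/** Sketch: routing the records of one split through a split-local output. */
public final class SplitLocalOutputSketch {
    public static void emitFinishedSplit(
            ReaderOutput<String> readerOutput, String splitId, Iterable<String> records) {
        // Create (or look up) the output that belongs to this split only.
        SourceOutput<String> splitLocal = readerOutput.createOutputForSplit(splitId);
        long fakeTimestamp = 0L;
        for (String record : records) {
            splitLocal.collect(record, fakeTimestamp++);  // timestamps/watermarks are tracked per split
        }
        // Release the split-local output once the split is exhausted.
        readerOutput.releaseOutputForSplit(splitId);
    }
}
```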
@PatrickRen Hi, I have updated the document based on your advice. Please take a look.
wuchong left a comment:
LGTM.
What is the purpose of the change
This PR translates the "Data Sources" page, which is currently only available in English, into Chinese.
Brief change log
Translate "docs/content.zh/docs/dev/datastream/sources.md" from English to Chinese.
Verifying this change
This change is a trivial rework / code cleanup without any test coverage.
Does this pull request potentially affect one of the following parts:
@Public(Evolving): no
Documentation