This topic describes new features and changes in this Redpanda release.

See also:

* xref:redpanda-cloud:get-started:cloud-overview.adoc#redpanda-cloud-vs-self-managed-feature-compatibility[Redpanda Cloud vs Self-Managed feature compatibility]

== Crash recording for improved support

Redpanda now records detailed information about broker crashes to help streamline troubleshooting and reduce time to resolution. Crash reports include information such as a stack trace, exception details, the Redpanda broker version, and the timestamp of when the crash occurred. The recorded crash reports are now automatically collected as part of xref:troubleshoot:debug-bundle/overview.adoc[debug bundles], providing Redpanda customer support with more context to diagnose and resolve issues faster.

== Retrieve serialized Protobuf schemas with Schema Registry API

Starting in version 25.2, the Schema Registry API supports retrieving serialized schemas (Protobuf only) by passing the `format=serialized` query parameter to schema retrieval endpoints. This helps facilitate migration of Protobuf clients to Redpanda. See the xref:api:ROOT:schema-registry-api.adoc[Schema Registry API reference] for details.
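
As a minimal sketch, the following fetches a registered Protobuf schema in serialized form. It assumes the Schema Registry listens on `localhost:8081`, that a schema with ID `1` exists, and that `/schemas/ids/{id}` is among the endpoints that accept the parameter; adjust these values for your cluster.

[source,python]
----
import urllib.request

# Request the serialized (rather than text) form of the Protobuf schema with ID 1.
url = "http://localhost:8081/schemas/ids/1?format=serialized"
with urllib.request.urlopen(url) as resp:
    # Print the raw JSON response returned by the Schema Registry.
    print(resp.read().decode())
----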
== New health probes for broker restarts and upgrades
The Redpanda Admin API now includes new health probes to help you ensure safe broker restarts and upgrades. The xref:api:ROOT:admin-api.adoc#get-/v1/broker/pre_restart_probe[`pre_restart_probe`] endpoint identifies potential risks if a broker is restarted, and xref:api:ROOT:admin-api.adoc#get-/v1/broker/post_restart_probe[`post_restart_probe`] indicates how much of its workload a broker has reclaimed after the restart.
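
As a minimal sketch, the following polls both probes, assuming the Admin API listens on `localhost:9644` and returns a JSON body; see the Admin API reference for the exact response fields.

[source,python]
----
import json
import urllib.request

ADMIN_API = "http://localhost:9644"

def probe(name: str) -> dict:
    """Call one of the broker restart probes and return the decoded JSON body."""
    with urllib.request.urlopen(f"{ADMIN_API}/v1/broker/{name}") as resp:
        return json.load(resp)

# Check for risks before restarting this broker, and check how much of its
# workload it has reclaimed afterwards.
print("pre-restart:", probe("pre_restart_probe"))
print("post-restart:", probe("post_restart_probe"))
----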

== New validated Kafka client

The `confluent-kafka-javascript` client is now validated with Redpanda. For a list of validated clients, see xref:develop:kafka-clients.adoc[].

NOTE: Redpanda has known issues interacting with the Schema Registry with https://www.confluent.io/blog/introducing-confluent-kafka-javascript/[confluent-kafka-javascript^].

Clients that have not been validated by Redpanda Data, but that use the Kafka protocol, remain compatible with Redpanda, subject to the limitations described in xref:develop:kafka-clients.adoc[] (particularly clients based on librdkafka, such as confluent-kafka-dotnet or confluent-python).

== HTTP Proxy authentication changes

If you need to maintain the current HTTP Proxy functionality while transitioning, configure the following broker properties:

- xref:reference:properties/broker-properties.adoc#scram_password[`scram_password`]: Password for SASL/SCRAM authentication
- xref:reference:properties/broker-properties.adoc#sasl_mechanism[`sasl_mechanism`]: SASL mechanism (typically `SCRAM-SHA-256` or `SCRAM-SHA-512`)

== Redpanda Console v3.0.0

The Redpanda Console v3.0.0 release includes the following updates:

=== New features

Redpanda Console now supports unified authentication and authorization between Console and Redpanda, including user impersonation. This means you can authenticate to Redpanda using the same credentials you use for Redpanda Console.

See xref:console:config/security/authentication.adoc[] for more information.

=== Breaking changes

* **Authentication and authorization:**
- Renamed the `login` stanza to `authentication`.
- Renamed `login.jwtSecret` to `authentication.jwtSigningKey`.
- Removed the plain login provider.
- OIDC group-based authorization is no longer supported.
- Role bindings must now be configured in the `authorization.roleBindings` stanza (no longer stored in a separate file).

* **Schema Registry:**
- Moved from under the `kafka` stanza to a top-level `schemaRegistry` stanza.
- All authentication settings for Schema Registry are now defined under `schemaRegistry.authentication`.

* **Admin API:**
- Authentication for the Redpanda Admin API is now defined under `redpanda.adminApi.authentication`.

* **Serialization settings:**
- Moved `kafka.protobuf`, `kafka.cbor`, and `kafka.messagePack` to a new top-level `serde` stanza.
- The `kafka.protobuf.schemaRegistry` setting is deprecated. Use the top-level `schemaRegistry` stanza instead.

* **Connect:**
- Renamed the `connect` stanza to `kafkaConnect` to avoid ambiguity with Redpanda Connect.

* **Console settings:**
- Moved `console.maxDeserializationPayloadSize` to `serde.maxDeserializationPayloadSize`.

The admin panel has been removed from the Redpanda Console UI. To manage users, use the Security page. To generate debug bundles, use the link on the Cluster overview page. To upload a new license, use the link on the Cluster overview page or in the license expiration warning banner.

== Iceberg improvements

Iceberg-enabled topics now support the following:

- xref:manage:iceberg/about-iceberg-topics.adoc#use-custom-partitioning[Custom partitioning] for improved query performance.
- xref:manage:iceberg/about-iceberg-topics.adoc#manage-dead-letter-queue[Dead-letter queue] for invalid records.
- xref:manage:iceberg/about-iceberg-topics.adoc#schema-evolution[Schema evolution], with schema mutations implemented according to the Iceberg specification.
- For Avro and Protobuf data, structured Iceberg tables without the use of the Schema Registry wire format or SerDes. See xref:manage:iceberg/choose-iceberg-mode.adoc[] for more information.

== Protobuf normalization in Schema Registry

Redpanda now supports normalization of Protobuf schemas in the Schema Registry. You can normalize Avro, JSON, and Protobuf schemas both during registration and lookup. For more information, see the xref:manage:schema-reg/schema-reg-overview.adoc#schema-normalization[Schema Registry overview] and the xref:api:ROOT:pandaproxy-schema-registry.adoc[Schema Registry API reference].
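
As a minimal sketch, the following registers a Protobuf schema with normalization requested at registration time. It assumes the Schema Registry listens on `localhost:8081`, uses a hypothetical `demo-value` subject, and passes the Confluent-compatible `normalize=true` query parameter; check the Schema Registry API reference for the exact parameters supported.

[source,python]
----
import json
import urllib.request

# A small Protobuf schema to register under the demo-value subject.
schema = 'syntax = "proto3"; message Value { string name = 1; }'
body = json.dumps({"schema": schema, "schemaType": "PROTOBUF"}).encode()

req = urllib.request.Request(
    "http://localhost:8081/subjects/demo-value/versions?normalize=true",
    data=body,
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # for example: {"id":1}
----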

== Protobuf well-known types in `rpk`

Support for https://protobuf.dev/reference/protobuf/google.protobuf/[Protobuf well-known types^] is available in `rpk` when encoding and decoding records using Schema Registry.

== SASL/PLAIN authentication

You can now configure Kafka clients to authenticate using xref:manage:security/authentication.adoc#enable-sasl[SASL/PLAIN] with a single account using the same username and password. Unlike SASL/SCRAM, which uses a challenge response with hashed credentials, SASL/PLAIN transmits plaintext passwords. You enable SASL/PLAIN by appending `PLAIN` to the list of SASL mechanisms.
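
As a minimal sketch, the following configures the confluent-kafka Python client for SASL/PLAIN. The broker address, topic, and credentials are placeholders; because SASL/PLAIN sends the password in plain text, combine it with TLS (`SASL_SSL`) outside of test environments.

[source,python]
----
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",   # placeholder broker address
    "security.protocol": "SASL_PLAINTEXT",   # use SASL_SSL in production
    "sasl.mechanism": "PLAIN",
    "sasl.username": "myuser",
    "sasl.password": "mypassword",
})

producer.produce("test-topic", value=b"hello with SASL/PLAIN")
producer.flush()
----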

== Pause and resume uploads

Redpanda now supports xref:manage:tiered-storage.adoc#pause-and-resume-uploads[pausing and resuming uploads] to object storage when running Tiered Storage, with no risk to data consistency or data loss. You can use the xref:reference:properties/object-storage-properties.adoc#cloud_storage_enable_segment_uploads[`cloud_storage_enable_segment_uploads`] property to pause or resume uploads to help you troubleshoot any issues that occur in your cluster during uploads.
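
As a minimal sketch, the following pauses segment uploads by setting the cluster property through the Admin API's cluster configuration endpoint, assuming the Admin API listens on `localhost:9644`. Set the property back to `true` to resume uploads; you can also manage the property with `rpk cluster config set`.

[source,python]
----
import json
import urllib.request

# Pause all log segment uploads to object storage.
payload = json.dumps({"upsert": {"cloud_storage_enable_segment_uploads": False}}).encode()
req = urllib.request.Request(
    "http://localhost:9644/v1/cluster_config",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="PUT",
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # print the Admin API response
----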

== Trial license

All new Redpanda clusters automatically receive a xref:get-started:licensing/overview.adoc#trial-license[trial license] valid for 30 days. You can extend this trial for 30 days using the new xref:reference:rpk/rpk-generate/rpk-generate-license.adoc[`rpk generate license`] command.

== Metrics

The following metrics are new in this version:

=== Consumer lag gauges

Redpanda can now expose dedicated consumer lag gauges that eliminate the need to calculate lag manually. These metrics provide real-time insights into consumer group performance and help identify issues. They include a gauge that reports the maximum lag observed among all partitions for a consumer group, which helps pinpoint the partition with the greatest delay and can indicate potential performance or configuration issues.
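
As a minimal sketch, the following scrapes the broker's Prometheus endpoint and prints consumer-group gauges, assuming metrics are exposed at `/public_metrics` on `localhost:9644` and that the gauge names contain `consumer_group`; see the metrics reference for the exact names and labels.

[source,python]
----
import urllib.request

# Scrape the public metrics endpoint and keep only consumer-group samples.
with urllib.request.urlopen("http://localhost:9644/public_metrics") as resp:
    for line in resp.read().decode().splitlines():
        if "consumer_group" in line and not line.startswith("#"):
            print(line)
----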

== Cluster properties

The following cluster properties are new in this version:

=== Metrics

- xref:reference:properties/cluster-properties.adoc#enable_consumer_group_metrics[`enable_consumer_group_metrics`]: Enables detailed consumer group metrics collection.
- xref:reference:properties/cluster-properties.adoc#enable_host_metrics[`enable_host_metrics`]: Enables exporting of host metrics such as `/proc/diskstats`, `/proc/snmp`, and `/proc/net/netstat`.

=== Iceberg integration

- xref:reference:properties/cluster-properties.adoc#iceberg_backlog_controller_p_coeff[`iceberg_backlog_controller_p_coeff`]: Configures the coefficient for backlog control in Iceberg tables.
- xref:reference:properties/cluster-properties.adoc#iceberg_default_partition_spec[`iceberg_default_partition_spec`]: Sets the default partition specification for Iceberg tables.
- xref:reference:properties/cluster-properties.adoc#iceberg_disable_snapshot_tagging[`iceberg_disable_snapshot_tagging`]: Disables snapshot tagging in Iceberg.
- xref:reference:properties/cluster-properties.adoc#iceberg_invalid_record_action[`iceberg_invalid_record_action`]: Specifies the action for handling invalid records in Iceberg.
- xref:reference:properties/cluster-properties.adoc#iceberg_rest_catalog_authentication_mode[`iceberg_rest_catalog_authentication_mode`]: Defines the authentication mode for the Iceberg REST catalog.
- xref:reference:properties/cluster-properties.adoc#iceberg_rest_catalog_oauth2_server_uri[`iceberg_rest_catalog_oauth2_server_uri`]: Specifies the OAuth2 server URI for the Iceberg REST catalog.
- xref:reference:properties/cluster-properties.adoc#iceberg_target_backlog_size[`iceberg_target_backlog_size`]: Sets the target backlog size for Iceberg.
- xref:reference:properties/cluster-properties.adoc#iceberg_target_lag_ms[`iceberg_target_lag_ms`]: Configures the target lag (in milliseconds) for Iceberg.

=== Log compaction

- xref:reference:properties/cluster-properties.adoc#log_compaction_adjacent_merge_self_compaction_count[`log_compaction_adjacent_merge_self_compaction_count`]: Adjusts the number of self-compaction merges during log compaction.
- xref:reference:properties/cluster-properties.adoc#min_cleanable_dirty_ratio[`min_cleanable_dirty_ratio`]: Sets the minimum ratio between the number of bytes in dirty segments and the total number of bytes in closed segments that must be reached before a partition's log is eligible for compaction in a compact topic.

=== Raft optimization

- xref:reference:properties/cluster-properties.adoc#raft_max_buffered_follower_append_entries_bytes_per_shard[`raft_max_buffered_follower_append_entries_bytes_per_shard`]: Limits the maximum bytes buffered for follower append entries per shard.
- xref:reference:properties/cluster-properties.adoc#raft_max_inflight_follower_append_entries_requests_per_shard[`raft_max_inflight_follower_append_entries_requests_per_shard`]: Replaces the deprecated `raft_max_concurrent_append_requests_per_follower` to limit in-flight follower append requests per shard.

=== Tiered Storage

- xref:reference:properties/object-storage-properties.adoc#cloud_storage_enable_remote_allow_gaps[`cloud_storage_enable_remote_allow_gaps`]: Controls the eviction of locally stored log segments when Tiered Storage uploads are paused.
- xref:reference:properties/object-storage-properties.adoc#cloud_storage_enable_segment_uploads[`cloud_storage_enable_segment_uploads`]: Controls the upload of log segments to Tiered Storage. If set to `false`, this property temporarily pauses all log segment uploads from the Redpanda cluster.

=== TLS configuration

- xref:reference:properties/cluster-properties.adoc#tls_certificate_name_format[`tls_certificate_name_format`]: Sets the format of the certificate's distinguished name to use for mTLS principal mapping.
- xref:reference:properties/cluster-properties.adoc#tls_enable_renegotiation[`tls_enable_renegotiation`]: Enables support for TLS renegotiation.

=== Throughput quota

- xref:reference:properties/cluster-properties.adoc#target_fetch_quota_byte_rate[`target_fetch_quota_byte_rate`]: Configures the fetch quota in bytes per second.

=== Topic configuration

- xref:reference:properties/cluster-properties.adoc#topic_partitions_memory_allocation_percent[`topic_partitions_memory_allocation_percent`]: Adjusts the percentage of memory allocated for topic partitions.

=== Scheduler improvements

- xref:reference:properties/cluster-properties.adoc#use_kafka_handler_scheduler_group[`use_kafka_handler_scheduler_group`]: Enables the Kafka handler scheduler group.
- xref:reference:properties/cluster-properties.adoc#use_produce_scheduler_group[`use_produce_scheduler_group`]: Enables the produce scheduler group.

=== Changes to the default configuration

- xref:reference:properties/cluster-properties.adoc#storage_read_readahead_count[`storage_read_readahead_count`]: Reduced the default from `10` to `1` to optimize read throughput and minimize unaccounted memory usage, lowering the risk of OOM errors on local storage paths.
- xref:reference:properties/cluster-properties.adoc#topic_memory_per_partition[`topic_memory_per_partition`]: Decreased the default from `4194304` to `204800` bytes.
- xref:reference:properties/cluster-properties.adoc#topic_partitions_per_shard[`topic_partitions_per_shard`]: Increased the default from `1000` to `5000` to support larger partition counts per shard.

=== Client quota properties removed

The following client configuration properties were deprecated in version 24.2.1 and have been removed in this release:

* `kafka_client_group_byte_rate_quota`
* `kafka_client_group_fetch_byte_rate_quota`
* `target_quota_byte_rate`
* `target_fetch_quota_byte_rate`
* `kafka_admin_topic_api_rate`

Use xref:reference:rpk/rpk-cluster/rpk-cluster-quotas.adoc[`rpk cluster quotas`] to manage xref:manage:cluster-maintenance/manage-throughput.adoc#client-throughput-limits[client throughput limits] based on the Kafka API.

== Broker properties

- xref:reference:properties/broker-properties.adoc#node_id_overrides[`node_id_overrides`]: Overrides a broker ID and UUID at broker startup.

== Topic properties

- xref:reference:properties/topic-properties.adoc#mincleanabledirtyratio[`min.cleanable.dirty.ratio`]: Sets the minimum ratio between the number of bytes in dirty segments and the total number of bytes in closed segments that must be reached before a partition's log is eligible for compaction in a compact topic.
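
As a worked example of this ratio, the following sketch uses made-up segment sizes and an example setting of `0.5`; the partition becomes eligible for compaction once the dirty bytes reach half of the total closed-segment bytes.

[source,python]
----
# Hypothetical sizes, in bytes, for one partition of a compact topic.
dirty_segment_bytes = 600 * 1024 * 1024     # bytes in dirty (not yet compacted) segments
closed_segment_bytes = 1024 * 1024 * 1024   # total bytes in closed segments
min_cleanable_dirty_ratio = 0.5             # example topic setting

dirty_ratio = dirty_segment_bytes / closed_segment_bytes
print(f"dirty ratio = {dirty_ratio:.2f}")
print("eligible for compaction:", dirty_ratio >= min_cleanable_dirty_ratio)
----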