diff --git a/modules/develop/pages/kafka-clients.adoc b/modules/develop/pages/kafka-clients.adoc
index 50c503aab1..e393177e32 100644
--- a/modules/develop/pages/kafka-clients.adoc
+++ b/modules/develop/pages/kafka-clients.adoc
@@ -34,9 +34,10 @@ The following clients have been validated with Redpanda.
 | https://github.com/kafka-rust/kafka-rust[kafka-rust^]

 | Node.js
-| https://kafka.js.org[KafkaJS^]
+a|
+* https://kafka.js.org[KafkaJS^]
+* https://github.com/confluentinc/confluent-kafka-javascript[confluent-kafka-javascript^]

-Note: Redpanda has known issues interacting with the Schema Registry with https://www.confluent.io/blog/introducing-confluent-kafka-javascript/[confluent-kafka-javascript^].
 |===

 Clients that have not been validated by Redpanda Data, but use the Kafka protocol, remain compatible with Redpanda subject to the limitations below (particularly those based on librdkafka, such as confluent-kafka-dotnet or confluent-python).

diff --git a/modules/get-started/pages/release-notes/redpanda.adoc b/modules/get-started/pages/release-notes/redpanda.adoc
index b22695bd62..8e4b461330 100644
--- a/modules/get-started/pages/release-notes/redpanda.adoc
+++ b/modules/get-started/pages/release-notes/redpanda.adoc
@@ -8,16 +8,20 @@ This topic includes new content added in version {page-component-version}. For a
 * xref:redpanda-cloud:get-started:whats-new-cloud.adoc[]
 * xref:redpanda-cloud:get-started:cloud-overview.adoc#redpanda-cloud-vs-self-managed-feature-compatibility[Redpanda Cloud vs Self-Managed feature compatibility]

-== Crash recording for improved support
+== Retrieve serialized Protobuf schemas with the Schema Registry API

-Redpanda now records detailed information about broker crashes to help streamline troubleshooting and reduce time to resolution. Crash reports include information such as a stack trace, exception details, the Redpanda broker version, and the timestamp of when the crash occurred. The recorded crash reports are now automatically collected as part of xref:troubleshoot:debug-bundle/overview.adoc[debug bundles], providing Redpanda customer support with more context to diagnose and resolve issues faster.
+Starting in version 25.2, the Schema Registry API supports retrieving serialized schemas (Protobuf only) using the `format=serialized` query parameter for the following endpoints:

-== New health probes for broker restarts and upgrades
+- `GET /schemas/ids/\{id}`
+- `POST /subjects/\{subject}`
+- `GET /subjects/\{subject}/versions/\{version}`
+- `GET /subjects/\{subject}/versions/\{version}/schema`

-The Redpanda Admin API now includes new health probes to help you ensure safe broker restarts and upgrades. The xref:api:ROOT:admin-api.adoc#get-/v1/broker/pre_restart_probe[`pre_restart_probe`] endpoint identifies potential risks if a broker is restarted, and xref:api:ROOT:admin-api.adoc#get-/v1/broker/post_restart_probe[`post_restart_probe`] indicates how much of its workloads a broker has reclaimed after the restart. See also:
+This simplifies migrating Protobuf clients to Redpanda. See the xref:api:ROOT:schema-registry-api.adoc[Schema Registry API reference] for details, and the sample request below.

-* xref:manage:cluster-maintenance/rolling-restart.adoc[]
-* xref:upgrade:rolling-upgrade.adoc[]
+== Support for confluent-kafka-javascript client
+
+The `confluent-kafka-javascript` client is now validated with Redpanda. For a list of validated clients, see xref:develop:kafka-clients.adoc[].
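+
+As a sample request for the serialized-schema retrieval described above, here is a minimal sketch. It assumes a local Schema Registry listening on `localhost:8081` and a registered Protobuf schema with the hypothetical ID `1`:
+
+```
+# Returns the Protobuf schema in its binary wire format, encoded as Base64
+curl "http://localhost:8081/schemas/ids/1?format=serialized"
+```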

 == HTTP Proxy authentication changes

@@ -34,193 +38,4 @@ If you need to maintain the current HTTP Proxy functionality while transitioning
 - xref:reference:properties/broker-properties.adoc#scram_password[`scram_password`]: Password for SASL/SCRAM authentication
 - xref:reference:properties/broker-properties.adoc#sasl_mechanism[`sasl_mechanism`]: SASL mechanism (typically `SCRAM-SHA-256` or `SCRAM-SHA-512`)

-== Redpanda Console v3.0.0
-
-The Redpanda Console v3.0.0 release includes the following updates:
-
-=== New features
-
-Redpanda Console now supports unified authentication and authorization between Console and Redpanda, including user impersonation. This means you can authenticate to Redpanda using the same credentials you use for Redpanda Console.
-
-See xref:console:config/security/authentication.adoc[] for more information.
-
-=== Breaking changes
-
-* **Authentication and authorization:**
- - Renamed the `login` stanza to `authentication`.
- - Renamed `login.jwtSecret` to `authentication.jwtSigningKey`.
- - Removed the plain login provider.
- - OIDC group-based authorization is no longer supported.
- - Role bindings must now be configured in the `authorization.roleBindings` stanza (no longer stored in a separate file).
-
-* **Schema Registry:**
- - Moved from under the `kafka` stanza to a top-level `schemaRegistry` stanza.
- - All authentication settings for Schema Registry are now defined under `schemaRegistry.authentication`.
-
-* **Admin API:**
- - Authentication for the Redpanda Admin API is now defined under `redpanda.adminApi.authentication`.
-
-* **Serialization settings:**
- - Moved `kafka.protobuf`, `kafka.cbor`, and `kafka.messagePack` to a new top-level `serde` stanza.
- - The `kafka.protobuf.schemaRegistry` setting is deprecated. Use the top-level `schemaRegistry` stanza instead.
-
-* **Connect:**
- - Renamed the `connect` stanza to `kafkaConnect` to avoid ambiguity with Redpanda Connect.
-
-* **Console settings:**
- - Moved `console.maxDeserializationPayloadSize` to `serde.maxDeserializationPayloadSize`.
-
-*Action required*: xref:migrate:console-v3.adoc[].
-
-=== Other changes
-
-The admin panel has been removed from the Redpanda Console UI. To manage users, use the Security page. To generate debug bundles, use the link on the Cluster overview page. To upload a new license, use the link on the Cluster overview page or in the license expiration warning banner.
-
-== Iceberg improvements
-
-Iceberg-enabled topics now support the following:
-
-- xref:manage:iceberg/about-iceberg-topics.adoc#use-custom-partitioning[Custom partitioning] for improved query performance.
-- xref:manage:iceberg/query-iceberg-topics.adoc#access-iceberg-tables[Snapshot expiry].
-- xref:manage:iceberg/about-iceberg-topics.adoc#manage-dead-letter-queue[Dead-letter queue] for invalid records.
-- xref:manage:iceberg/about-iceberg-topics.adoc#schema-evolution[Schema evolution], with schema mutations implemented according to the Iceberg specification.
-- For Avro and Protobuf data, structured Iceberg tables without the use of the Schema Registry wire format or SerDes. See xref:manage:iceberg/choose-iceberg-mode.adoc[] for more information.
-
-== Protobuf normalization in Schema Registry
-
-Redpanda now supports normalization of Protobuf schemas in the Schema Registry. You can normalize Avro, JSON, and Protobuf schemas both during registration and lookup.
-For more information, see the xref:manage:schema-reg/schema-reg-overview.adoc#schema-normalization[Schema Registry overview], and the xref:api:ROOT:pandaproxy-schema-registry.adoc[Schema Registry API reference].
-
-== Protobuf well-known types in `rpk`
-
-Support for https://protobuf.dev/reference/protobuf/google.protobuf/[Protobuf well-known types^] is available in `rpk` when encoding and decoding records using Schema Registry.
-
-== SASL/PLAIN authentication
-
-You now can configure Kafka clients to authenticate using xref:manage:security/authentication#enable-sasl.adoc[SASL/PLAIN] with a single account using the same username and password. Unlike SASL/SCRAM, which uses a challenge response with hashed credentials, SASL/PLAIN transmits plaintext passwords. You enable SASL/PLAIN by appending `PLAIN` to the list of SASL mechanisms.
-
-== Pause and resume uploads
-
-Redpanda now supports xref:manage:tiered-storage.adoc#pause-and-resume-uploads[pausing and resuming uploads] to object storage when running Tiered Storage, with no risk to data consistency or data loss. You can use the xref:reference:properties/object-storage-properties.adoc#cloud_storage_enable_segment_uploads[`cloud_storage_enable_segment_uploads`] property to pause or resume uploads to help you troubleshoot any issues that occur in your cluster during uploads.
-
-== Trial license
-
-All new Redpanda clusters automatically receive a xref:get-started:licensing/overview.adoc#trial-license[trial license] valid for 30 days. You can extend this trial for 30 days using the new xref:reference:rpk/rpk-generate/rpk-generate-license.adoc[`rpk generate license`] command.
-
-== Metrics
-
-The following metrics are new in this version:
-
-=== Consumer lag gauges
-
-Redpanda can now expose dedicated consumer lag gauges that eliminate the need to calculate lag manually. These metrics provide real-time insights into consumer group performance and help identify issues. The following metrics are available:
-
-- xref:reference:public-metrics-reference.adoc#redpanda_kafka_consumer_group_lag_max[`redpanda_kafka_consumer_group_lag_max`]:
-Reports the maximum lag observed among all partitions for a consumer group. This metric helps pinpoint the partition with the greatest delay, indicating potential performance or configuration issues.
-
-- xref:reference:public-metrics-reference.adoc#redpanda_kafka_consumer_group_lag_sum[`redpanda_kafka_consumer_group_lag_sum`]:
-Aggregates the lag across all partitions, providing an overall view of data consumption delay for the consumer group.
-
-See xref:manage:monitoring.adoc#consumers[Monitor consumer group lag] for more information.
-
-=== Other metrics
-
-- xref:reference:public-metrics-reference.adoc#redpanda_rpc_received_bytes[`redpanda_rpc_received_bytes`]:
-Reports the number of bytes received from valid requests from the client.
-
-- xref:reference:public-metrics-reference.adoc#redpanda_rpc_sent_bytes[`redpanda_rpc_sent_bytes`]:
-Reports the number of bytes sent to clients.
-
-- xref:reference:public-metrics-reference.adoc#redpanda_kafka_request_bytes_total[`redpanda_kafka_request_bytes_total`]:
-Reports the total number of bytes read from or written to the partitions of a topic.
-
-- xref:reference:public-metrics-reference.adoc#redpanda_cloud_storage_paused_archivers[`redpanda_cloud_storage_paused_archivers`]:
-Reports the number of paused archivers.
-
-== rpk commands
-
-The following `rpk` commands are new in this version:
-
-- xref:reference:rpk/rpk-generate/rpk-generate-license.adoc[`rpk generate license`]
-
-- xref:reference:rpk/rpk-topic/rpk-topic-analyze.adoc[`rpk topic analyze`]
-
-== Cluster properties
-
-The following cluster properties are new in this version:
-
-=== Metrics
-
-- xref:reference:properties/cluster-properties.adoc#enable_consumer_group_metrics[`enable_consumer_group_metrics`]: Enables detailed consumer group metrics collection.
-- xref:reference:properties/cluster-properties.adoc#enable_host_metrics[`enable_host_metrics`]: Enables exporting of some host metrics like `/proc/diskstats`, `/proc/snmp` and `/proc/net/netstat`.
-
-=== Iceberg integration
-
-- xref:reference:properties/cluster-properties.adoc#iceberg_backlog_controller_p_coeff[`iceberg_backlog_controller_p_coeff`]: Configures the coefficient for backlog control in Iceberg tables.
-- xref:reference:properties/cluster-properties.adoc#iceberg_default_partition_spec[`iceberg_default_partition_spec`]: Sets the default partition specification for Iceberg tables.
-- xref:reference:properties/cluster-properties.adoc#iceberg_disable_snapshot_tagging[`iceberg_disable_snapshot_tagging`]: Disables snapshot tagging in Iceberg.
-- xref:reference:properties/cluster-properties.adoc#iceberg_invalid_record_action[`iceberg_invalid_record_action`]: Specifies the action for handling invalid records in Iceberg.
-- xref:reference:properties/cluster-properties.adoc#iceberg_rest_catalog_authentication_mode[`iceberg_rest_catalog_authentication_mode`]: Defines the authentication mode for the Iceberg REST catalog.
-- xref:reference:properties/cluster-properties.adoc#iceberg_rest_catalog_oauth2_server_uri[`iceberg_rest_catalog_oauth2_server_uri`]: Specifies the OAuth2 server URI for the Iceberg REST catalog.
-- xref:reference:properties/cluster-properties.adoc#iceberg_target_backlog_size[`iceberg_target_backlog_size`]: Sets the target backlog size for Iceberg.
-- xref:reference:properties/cluster-properties.adoc#iceberg_target_lag_ms[`iceberg_target_lag_ms`]: Configures the target lag (in milliseconds) for Iceberg.
-
-=== Log compaction
-
-- xref:reference:properties/cluster-properties.adoc#log_compaction_adjacent_merge_self_compaction_count[`log_compaction_adjacent_merge_self_compaction_count`]: Adjusts the number of self-compaction merges during log compaction.
-- xref:reference:properties/cluster-properties.adoc#min_cleanable_dirty_ratio[`min_cleanable_dirty_ratio`]: Sets the minimum ratio between the number of bytes in dirty segments and the total number of bytes in closed segments that must be reached before a partition's log is eligible for compaction in a compact topic.
-
-=== Raft optimization
-
-- xref:reference:properties/cluster-properties.adoc#raft_max_buffered_follower_append_entries_bytes_per_shard[`raft_max_buffered_follower_append_entries_bytes_per_shard`]: Limits the maximum bytes buffered for follower append entries per shard.
-- xref:reference:properties/cluster-properties.adoc#raft_max_inflight_follower_append_entries_requests_per_shard[`raft_max_inflight_follower_append_entries_requests_per_shard`]: Replaces the deprecated `raft_max_concurrent_append_requests_per_follower` to limit in-flight follower append requests per shard.
-
-=== Tiered Storage
-
-- xref:reference:properties/object-storage-properties.adoc#cloud_storage_enable_remote_allow_gaps[`cloud_storage_enable_remote_allow_gaps`]: Controls the eviction of locally stored log segments when Tiered Storage uploads are paused.
-
-- xref:reference:properties/object-storage-properties.adoc#cloud_storage_enable_segment_uploads[`cloud_storage_enable_segment_uploads`]: Controls the upload of log segments to Tiered Storage. If set to `false`, this property temporarily pauses all log segment uploads from the Redpanda cluster.
-
-=== TLS configuration
-
-- xref:reference:properties/cluster-properties.adoc#tls_certificate_name_format[`tls_certificate_name_format`]: Sets the format of the certificates's distinguished name to use for mTLS principal mapping.
-- xref:reference:properties/cluster-properties.adoc#tls_enable_renegotiation[`tls_enable_renegotiation`]: Enables support for TLS renegotiation.
-
-=== Throughput quota
-
-- xref:reference:properties/cluster-properties.adoc#target_fetch_quota_byte_rate[`target_fetch_quota_byte_rate`]: Configures the fetch quota in bytes per second.
-
-=== Topic configuration
-
-- xref:reference:properties/cluster-properties.adoc#topic_partitions_memory_allocation_percent[`topic_partitions_memory_allocation_percent`]: Adjusts the percentage of memory allocated for topic partitions.
-
-=== Scheduler improvements
-
-- xref:reference:properties/cluster-properties.adoc#use_kafka_handler_scheduler_group[`use_kafka_handler_scheduler_group`]: Enables the Kafka handler scheduler group.
-- xref:reference:properties/cluster-properties.adoc#use_produce_scheduler_group[`use_produce_scheduler_group`]: Enables the produce scheduler group.
-
-=== Changes to the default configuration
-
-- xref:reference:properties/cluster-properties.adoc#storage_read_readahead_count[`storage_read_readahead_count`]: Reduced default from `10` to `1` to optimize read throughput and minimize unaccounted memory usage, lowering the risk of OOM errors on local storage paths.
-- xref:reference:properties/cluster-properties.adoc#topic_memory_per_partition[`topic_memory_per_partition`]: Decreased default from `4194304` to `204800`
-- xref:reference:properties/cluster-properties.adoc#topic_partitions_per_shard[`topic_partitions_per_shard`]: Increased default from `1000` to `5000` to support larger partition counts per shard.
-
-=== Client quota properties removed
-
-The following client configuration properties were deprecated in version 24.2.1, and have been removed in this release:
-
-* `kafka_client_group_byte_rate_quota`
-* `kafka_client_group_fetch_byte_rate_quota`
-* `target_quota_byte_rate`
-* `target_fetch_quota_byte_rate`
-* `kafka_admin_topic_api_rate`
-
-Use xref:reference:rpk/rpk-cluster/rpk-cluster-quotas.adoc[`rpk cluster quotas`] to manage xref:manage:cluster-maintenance/manage-throughput.adoc#client-throughput-limits[client throughput limits] based on the Kafka API.
-
-== Broker properties
-
-- xref:reference:properties/broker-properties.adoc#node_id_overrides[`node_id_overrides`]: Overrides a broker ID and UUID at broker startup.
-
-== Topic properties
-
-- xref:reference:properties/topic-properties.adoc#mincleanabledirtyratio[`min.cleanable.dirty.ratio`]: Sets the minimum ratio between the number of bytes in dirty segments and the total number of bytes in closed segments that must be reached before a partition's log is eligible for compaction in a compact topic.
diff --git a/modules/manage/pages/schema-reg/schema-reg-api.adoc b/modules/manage/pages/schema-reg/schema-reg-api.adoc
index 0a88167237..b9bf310dcb 100644
--- a/modules/manage/pages/schema-reg/schema-reg-api.adoc
+++ b/modules/manage/pages/schema-reg/schema-reg-api.adoc
@@ -199,7 +199,7 @@ When you register an evolved schema for an existing subject, the version `id` is
 == Retrieve a schema

-To retrieve a registered schema from the registry, make a GET request to the `/schemas/ids/` endpoint:
+To retrieve a registered schema from the registry, make a GET request to the `/schemas/ids/\{id}` endpoint:

 [tabs]
 ====
@@ -292,7 +292,7 @@ This returns the subject:

 == Retrieve schema versions of a subject

-To query the schema versions of a subject, make a GET request to the `/subjects//versions` endpoint.
+To query the schema versions of a subject, make a GET request to the `/subjects/\{subject}/versions` endpoint.

 For example, to get the schema versions of the `sensor-value` subject:

@@ -325,9 +325,9 @@ This returns the version ID:
 ]
 ```

-== Retrieve a schema of a subject
+== Retrieve a subject's specific version of a schema

-To retrieve a schema associated with a subject, make a GET request to the `/subjects//versions/` endpoint:
+To retrieve a specific version of a schema associated with a subject, make a GET request to the `/subjects/\{subject}/versions/\{version}` endpoint:

 [tabs]
 ====
@@ -462,7 +462,7 @@ As applications change and their schemas evolve, you may find that producer sche
 include::manage:partial$schema-compatibility.adoc[]

-To set the compatibility type for a subject, make a PUT request to `/config/` with the specific compatibility type:
+To set the compatibility type for a subject, make a PUT request to `/config/\{subject}` with the specific compatibility type:

 [tabs]
 ====
@@ -717,8 +717,8 @@ curl -H 'Content-type: application/vnd.schemaregistry.v1+json' http://127.0.0.1:
 The Schema Registry API provides DELETE endpoints for deleting a single schema or all schemas of a subject:

-- `/subjects//versions/`
-- `/subjects/`
+- `/subjects/\{subject}/versions/\{version}`
+- `/subjects/\{subject}`

 Schemas cannot be deleted if any other schemas reference it.

@@ -851,7 +851,7 @@ Redpanda doesn't recommend hard (permanently) deleting schemas in a production s
 The DELETE APIs are primarily used during the development phase, when schemas are being iterated and revised.
 ====

-To hard delete a schema, use the `--permanent` flag with the `rpk registry schema delete` command, or for curl or Python, make two DELETE requests with the second request setting the `permanent` parameter to `true` (`/subjects//versions/?permanent=true`):
+To hard delete a schema, use the `--permanent` flag with the `rpk registry schema delete` command, or for curl or Python, make two DELETE requests with the second request setting the `permanent` parameter to `true` (`/subjects/\{subject}/versions/\{version}?permanent=true`):

 [tabs]
 ====
@@ -980,6 +980,34 @@ This request returns the mode that is enforced.
 If the subject is set to a specific mode, that mode overrides the global mode.

 curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" --data '{"mode": "READONLY"}' http://localhost:8081/mode/
 ```

+== Retrieve serialized schemas
+
+Starting in Redpanda version 25.2, the following endpoints can return serialized schemas (Protobuf only) when you pass the `format=serialized` query parameter:
+
+[cols="1,1"]
+|===
+|Operation |Path
+
+| Retrieve a schema
+|`GET /schemas/ids/\{id}?format=serialized`
+
+| Check if a schema is already registered for a subject
+|`POST /subjects/\{subject}?format=serialized`
+
+| Retrieve a subject's specific version of a schema
+|`GET /subjects/\{subject}/versions/\{version}?format=serialized`
+
+| Get the unescaped schema only for a subject
+|`GET /subjects/\{subject}/versions/\{version}/schema?format=serialized`
+|===
+
+The `serialized` format returns the Protobuf schema in its binary wire format, encoded as Base64.
+
+- Passing an empty string (`format=''`) returns the schema in the current (default) format.
+- For Avro, `resolved` is a valid value, but it is not currently supported and returns a 501 Not Implemented error.
+- For Protobuf, `serialized` and `ignore_extensions` are valid values, but only `serialized` is currently supported; passing `ignore_extensions` returns a 501 Not Implemented error.
+- Mismatched combinations of format and schema type, such as `resolved` with Protobuf or `serialized` with Avro, are ignored, and the schema is returned in the default format.
+
+For a sample request, see the sketch below.
+
 == Suggested reading

 ifndef::env-cloud[]
 * xref:manage:schema-reg/schema-reg-overview.adoc[]
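+
+For example, to retrieve a serialized schema and decode it, here is a minimal sketch. It assumes a local Schema Registry on `localhost:8081`, that version `1` of the `sensor-value` subject from the earlier examples holds a Protobuf schema, and that the `jq` and `base64` utilities are installed:
+
+```
+# The response's "schema" field holds the Base64-encoded Protobuf wire format
+curl "http://localhost:8081/subjects/sensor-value/versions/1?format=serialized"
+
+# Decode the Base64 payload into the raw binary descriptor
+# (base64 -d is GNU coreutils; use base64 -D on macOS)
+curl -s "http://localhost:8081/subjects/sensor-value/versions/1?format=serialized" \
+  | jq -r '.schema' | base64 -d > sensor-value.bin
+```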