For any {product-title} release, always review the instructions on xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[updating your cluster] properly.

{product-title} release {product-version}.28 is now available. The list of bug fixes that are included in the update is documented in the link:https://access.redhat.com/errata/RHSA-2025:4431[RHSA-2025:4431] advisory. The RPM packages that are included in the update are provided by the link:https://access.redhat.com/errata/RHBA-2025:4433[RHBA-2025:4433] advisory.

Space precluded documenting all of the container images for this release in the advisory.

You can view the container images in this release by running the following command:

[source,terminal]
----
$ oc adm release info 4.17.28 --pullspecs
----
[id="ocp-4-17-28-bug-fixes_{context}"]
==== Bug fixes

* Previously, if you had permission to view nodes but not certificate signing requests (CSRs), you could not access the *Nodes list* page. With this release, permissions to view CSRs are no longer required to access the *Nodes list* page. (link:https://issues.redhat.com/browse/OCPBUGS-55202[OCPBUGS-55202])
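+
A minimal sketch of how you might verify the relevant permissions with standard `oc` RBAC queries (illustrative only; these checks are not part of the fix):
+
[source,terminal]
----
# Returns "yes" if you can view nodes
$ oc auth can-i list nodes

# Can now return "no" without blocking the Nodes list page
$ oc auth can-i list certificatesigningrequests
----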
* Previously, after you deleted the `ClusterResourceOverride` custom resource (CR) or uninstalled the Cluster Resource Override Operator, which also removed the `ClusterResourceOverride` CR, the `v1.admission.autoscaling.openshift.io` API service became unreachable. This situation impacted other cluster functions, such as preventing other Operator installations from succeeding. With this release, when you delete the Cluster Resource Override Operator, the `v1.admission.autoscaling.openshift.io` API service is also removed. As a result, you can now install other Operators without experiencing installation failures. (link:https://issues.redhat.com/browse/OCPBUGS-55355[OCPBUGS-55355])
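+
As an illustrative check (not part of the documented fix), you can confirm that the stale API service is gone after you remove the Operator:
+
[source,terminal]
----
# Expect a NotFound error after the Operator and its CR are removed
$ oc get apiservice v1.admission.autoscaling.openshift.io
----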
* Previously, when you attempted to upgrade the Cluster Resource Override Operator from {product-title} 4.16 to {product-version}, the Cluster Resource Override webhook stopped functioning. This situation prevented pods from being created in namespaces that had the Cluster Resource Override enabled. With this release, a stale secret is deleted so that {product-title} regenerates the secret with the correct parameters and values during an upgrade operation. As a result, the Operator upgrade succeeds and you can now create pods in any namespace that has the Cluster Resource Override enabled. (link:https://issues.redhat.com/browse/OCPBUGS-55239[OCPBUGS-55239])
* Previously, the Assisted Installer failed to detect World Wide Name (WWN) details during hardware discovery of Fibre Channel multipath volumes. As a result, a Fibre Channel multipath disk could not be matched with a WWN root device. This meant that when you specified a `wwn` root device hint, the hint excluded all Fibre Channel multipath disks. With this release, the Assisted Installer now detects WWN details during Fibre Channel multipath disk discovery. If multiple Fibre Channel multipath disks exist, you can now use the `wwn` root device hint to choose a primary disk for your cluster. (link:https://issues.redhat.com/browse/OCPBUGS-55184[OCPBUGS-55184])
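+
For example, a hedged sketch of a `rootDeviceHints` stanza that selects a disk by WWN, following the agent-based installer's `agent-config.yaml` layout (the hostname and WWN values are placeholders):
+
[source,yaml]
----
hosts:
- hostname: master-0
  role: master
  rootDeviceHints:
    wwn: "0x600508b1001c7e8c" # placeholder WWN of the intended primary disk
----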
* Previously, the `mtu-migration` service did not work correctly when you used `nmstate` to manage a `br-ex` bridge because of a missing service dependency. With this release, the service dependency is added so that a network configuration that uses `nmstate` to manage a `br-ex` bridge is correct before the migration process begins. (link:https://issues.redhat.com/browse/OCPBUGS-54830[OCPBUGS-54830])
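+
For context, an MTU migration is typically initiated by patching the Cluster Network Operator configuration; a sketch with placeholder values:
+
[source,terminal]
----
# <overlay_from>, <overlay_to>, and <machine_to> are placeholder MTU values
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \
  '{"spec": {"migration": {"mtu": {"network": {"from": <overlay_from>, "to": <overlay_to>}, "machine": {"to": <machine_to>}}}}}'
----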
[id="ocp-4-17-28-updating_{context}"]
==== Updating

To update an {product-title} 4.17 cluster to this latest release, see xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#updating-cluster-cli[Updating a cluster using the CLI].
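
For example, a minimal sketch of the CLI update path (see the linked procedure for the full prerequisites and verification steps):

[source,terminal]
----
# Review the available updates, then request the 4.17.28 release
$ oc adm upgrade
$ oc adm upgrade --to=4.17.28
----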
* Previously, when you selected the load balancer, the installation program picked a fixed internet protocol (IP) address (`10.0.0.100`) and attached the address to the load balancer even if the IP was outside of the range of the machine network or virtual network. With this release, the installation program checks for an available IP in the provided control plane subnet or machine network and selects an IP that is not reserved if the default IP is not within the range. (link:https://issues.redhat.com/browse/OCPBUGS-55224[OCPBUGS-55224])
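+
For reference, a hedged sketch of the `install-config.yaml` fields involved (the CIDR value is a placeholder):
+
[source,yaml]
----
networking:
  machineNetwork:
  - cidr: 10.0.0.0/16 # the installer now selects an unreserved IP from this range
----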
* Previously, if a scrape failed, Prometheus erroneously considered the samples from the very next scrape as duplicates and dropped them. This issue impacted only the scrape immediately following a failure, while subsequent scrapes were processed correctly. With this release, the scrape following a failure is now correctly handled, ensuring that no valid samples are mistakenly dropped. (link:https://issues.redhat.com/browse/OCPBUGS-54941[OCPBUGS-54941])
* Previously, for an Ingress resource with an `IngressWithoutClassName` alert, the Ingress Controller did not delete the alert along with deletion of the resource. The alert continued to show on the {product-title} web console. With this release, the Ingress Controller resets the `openshift_ingress_to_route_controller_ingress_without_class_name` metric to `0` before the controller deletes the Ingress resource, so that the alert is deleted and no longer shows on the web console. (link:https://issues.redhat.com/browse/OCPBUGS-53077[OCPBUGS-53077])
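+
An illustrative way, not taken from the fix itself, to list Ingress resources that lack an `ingressClassName`, which is the condition this metric tracks:
+
[source,terminal]
----
# Print namespace/name of Ingress objects with no spec.ingressClassName set
$ oc get ingress -A -o json | jq -r \
    '.items[] | select(.spec.ingressClassName == null) | "\(.metadata.namespace)/\(.metadata.name)"'
----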
* Previously, during cluster creation, a control plane node was replaced when it was detected as unhealthy. This replacement irrevocably disabled the cluster and prevented its creation. With this fix, the node is not inadvertently replaced, ensuring that the control plane stabilizes and the cluster is created successfully. (link:https://issues.redhat.com/browse/OCPBUGS-52957[OCPBUGS-52957])
* Previously, the Single Root I/O Virtualization (SR-IOV) virtual function (VF) did not revert unexpected changes to the maximum transmission unit (MTU) value when a pod was deleted. This issue occurred if the application inside the pod changed its MTU value; in turn, the pod would also have its MTU value changed. With this release, the SR-IOV Container Network Interface (CNI) reverts any unexpected MTU changes to the original value so that this issue no longer exists. (link:https://issues.redhat.com/browse/OCPBUGS-54392[OCPBUGS-54392])
* Previously, when you attempted to create a validating webhook for a resource that was managed by the `oauth` API server, the validating webhook was not created. This issue occurred because of a communication problem between the `oauth` API server and the data plane. With this release, a Konnectivity proxy sidecar has been added to bridge communications between the `oauth` API server and the data plane so that you can now create a validating webhook for any resource that the `oauth` API server manages. (link:https://issues.redhat.com/browse/OCPBUGS-54841[OCPBUGS-54841])
* Previously, virtual machines (VMs) in a cluster that ran on {azure-first} failed because the attached network interface controller (NIC) was in a `ProvisioningFailed` state. With this release, the Machine API controller now checks the provisioning status of a NIC and refreshes the VMs on a regular basis to prevent this issue. (link:https://issues.redhat.com/browse/OCPBUGS-54393[OCPBUGS-54393])
* Previously, the installation program malfunctioned when it attempted to retrieve {gcp-full} tags over an unstable network or could not reach the GCP server. With this release, the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-51210[OCPBUGS-51210])
* Previously, a User Datagram Protocol (UDP) packet that was larger than the maximum transmission unit (MTU) value set for the cluster could not be sent to the endpoint of the packet by using a service. With this release, the pod IP address is used instead of the service IP address regardless of the packet size, so that the UDP packet can be sent to the endpoint. (link:https://issues.redhat.com/browse/OCPBUGS-50579[OCPBUGS-50579])
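+
As a hedged aside, you can inspect the effective cluster network MTU that this behavior depends on (assuming the `clusterNetworkMTU` status field reported by the cluster `Network` configuration):
+
[source,terminal]
----
# Show the MTU currently applied to the cluster network
$ oc get network.config.openshift.io cluster -o jsonpath='{.status.clusterNetworkMTU}'
----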
* Previously, containers that used the `container_logreader_t` SELinux domain to view container logs on a host at `/var/log` could not access logs in the `/var/log/containers` subdirectory. This issue happened because of a missing symbolic link. With this release, a symbolic link is created for `/var/log/containers` so that containers can access the logs in `/var/log/containers`. (link:https://issues.redhat.com/browse/OCPBUGS-54343[OCPBUGS-54343])
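+
A hypothetical spot check from a debug shell on a node (`<node_name>` is a placeholder):
+
[source,terminal]
----
# Inspect the symbolic link and SELinux labels under /var/log
$ oc debug node/<node_name> -- chroot /host ls -laZ /var/log/containers
----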
* Previously, the cluster autoscaler stopped scaling when a machine failed in a machine set. This situation happened because of inaccuracies in the way the cluster autoscaler counts machines in various non-running phases. With this release, the inaccuracies have been fixed so that the cluster autoscaler has a more accurate count. (link:https://issues.redhat.com/browse/OCPBUGS-54325[OCPBUGS-54325])
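+
For illustration (standard Machine API commands, not part of the fix), you can review the machine phases that feed the autoscaler's count:
+
[source,terminal]
----
# The PHASE column shows states such as Running, Provisioning, or Failed
$ oc get machines.machine.openshift.io -n openshift-machine-api
----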
* Previously, the *Alerts* page in the Developer perspective of the web console stopped querying the Prometheus tenancy path. This issue caused an `Error loading silences from alert manager` banner to show on the page. With this release, the page queries the Prometheus tenancy path and retrieves alert silence data from the Developer perspective data store, so the banner no longer shows on the page. (link:https://issues.redhat.com/browse/OCPBUGS-54211[OCPBUGS-54211])
* Previously, a missing machine config for the container runtime configuration prevented a cluster update operation from succeeding because of a container runtime controller failure. With this release, the missing machine config is now ignored so that the cluster update operation can succeed. (link:https://issues.redhat.com/browse/OCPBUGS-52188[OCPBUGS-52188])