Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Unexpected end of request content. #23949
The exception needs to be thrown from the request body APIs so callers don't assume the request finished gracefully, but the caller should catch it. In this case that's MVC, and I think they're adding something in 5.0.
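For illustration, here is a minimal sketch (my own, assuming a .NET 6+ minimal API where Kestrel's exception derives from Microsoft.AspNetCore.Http.BadHttpRequestException, not the MVC-internal handling referred to above) of what catching the exception at the call site can look like when an endpoint reads the request body itself:

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapPost("/upload", async (HttpContext context) =>
{
    try
    {
        // Reading the body is where "Unexpected end of request content" surfaces
        // if the client stops sending before the declared Content-Length is reached.
        using var reader = new StreamReader(context.Request.Body);
        var body = await reader.ReadToEndAsync();
        return Results.Ok(new { length = body.Length });
    }
    catch (Microsoft.AspNetCore.Http.BadHttpRequestException)
    {
        // The client aborted mid-request, so there is no complete body to process.
        return Results.BadRequest("Incomplete request body.");
    }
});

app.Run();
```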
Thanks for contacting us.
We've moved this issue to the Backlog milestone. This means that it is not going to be worked on for the coming release. We will reassess the backlog following the current release and consider this item at that time. To learn more about our issue management process and to set better expectations regarding different types of issues, you can read our Triage Process.
Do you recommend just filtering out the exceptions?
Yes
The issue is still present in 5.0; version 2.2 behaves normally.
Can this error be ignored globally?
The issue is still present in 5.0.7.
We are facing the same issue on .NET 6 Preview 7:
How can I filter this exception in the logs without ignoring other error messages from Kestrel?
I added a custom exception handler to ignore the BadHttpRequestException.
@WeihanLi could you share a code snippet for filtering that exception?
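Not WeihanLi's actual handler, but a hedged sketch of one possible approach (assuming .NET 5 or later, where Kestrel's BadHttpRequestException derives from Microsoft.AspNetCore.Http.BadHttpRequestException): a small middleware that swallows only this specific case so it never reaches application-error logging, while every other exception still propagates. Matching on the message text is admittedly brittle, so adjust the filter to taste.

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.Use(async (context, next) =>
{
    try
    {
        await next();
    }
    catch (Microsoft.AspNetCore.Http.BadHttpRequestException ex)
        when (ex.Message.Contains("Unexpected end of request content"))
    {
        // The client disconnected mid-request; there is nothing useful to log as an error.
    }
});

app.MapPost("/data", async (HttpContext context) =>
{
    using var reader = new StreamReader(context.Request.Body);
    return Results.Ok(await reader.ReadToEndAsync());
});

app.Run();
```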
This is still happening. We are using .NET 8 and running on Kubernetes on Azure. As soon as we stress the pod a little with multiple concurrent connections, the exception is thrown.
@AnisTigrini are you using the Application Insights profiler?
After I disabled it, the issue went away for me. This is the info that got me there:
Hey there @michaelmarcuccio, thanks for the quick reply.
By the way, for anyone experiencing the same problem, here is what I found. We are running a pod with .NET 8 using the official Microsoft image. I tried calling API endpoints that do not make any outbound HTTP calls, and the server returned a response immediately, so I knew the problem occurred when the server was making HTTP calls to other services. I took a closer look by installing netstat in the pod and realized the problem was socket starvation. The solution for us was to migrate and replace all those calls with the recommended API (HttpClient). I hope this helps some of you dealing with the issue.
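For anyone heading down the same path, here is a minimal sketch of that migration direction (my own illustration, not AnisTigrini's code; the client name and downstream URL are made up): registering IHttpClientFactory once so handlers and connections are pooled instead of being created per outbound call.

```csharp
var builder = WebApplication.CreateBuilder(args);

// A named client registered once; the factory pools and recycles the underlying
// handlers, which avoids exhausting sockets under concurrent load.
builder.Services.AddHttpClient("downstream", client =>
{
    client.BaseAddress = new Uri("https://downstream.example.com/"); // hypothetical service
    client.Timeout = TimeSpan.FromSeconds(10);
});

var app = builder.Build();

app.MapGet("/relay", async (IHttpClientFactory factory) =>
{
    var client = factory.CreateClient("downstream");
    var payload = await client.GetStringAsync("status"); // hypothetical downstream endpoint
    return Results.Ok(payload);
});

app.Run();
```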
Hey everyone, this issue has a lot of responses, but from a quick read through these comments, it looks like there are lots of ways people end up hitting this exception. This makes sense, since this is a pretty generic exception that can occur in many cases (client side aborts, network issues, timeouts/disconnects under load, errors in proxies/LBs/etc.). Unfortunately, it also means that this issue isn't really actionable for us unless we have a specific case (with a repro) that we can investigate and address.
It is actionable right away, and it has been from the beginning. If you throw a "pretty generic exception", then it is pure common sense to actually include the reason for the problem (as the message), so we (end developers) at least understand what is going on. Reproducing is tricky if it happens, say, once in 10,000 times, but when it hits we are at least not left in the dark.
Can you describe what you're looking for here? The reason for the
@adityamandaleeka "Can you describe what you're looking for here?" I don't know, because I am not a developer of this part of the framework. But since you mention a few possibilities, I can only rely on those, and thus I would expect messages like "client aborted the connection", "connection timeout", or "proxy error". I am especially interested in the core/root error message, because I run my server locally, with no proxies, so basically none of the reasons you mentioned yesterday apply to me.
@adityamandaleeka If you want, I can provide the simple API source code (approx. 150-200 lines) for inspection, which causes this specific error under load (stress tests) when tested with Autocannon, a Node.js/npm-packaged JavaScript application. I don't want to open-source all of the code I'll provide, so I'm not adding it here as a ZIP file or a GitHub repo. taylaninan ['@'] yahoo.com is my email address. Just write me an email so that I can attach the ZIP file and send some instructions in the email, too.
@adityamandaleeka 5 days have passed since my last offer of help, but unfortunately I have NOT been contacted by you to provide the sample project to reproduce the bug so that it can be fixed. This bug has been open for 4.5 years. Does Microsoft even care about developers? Or, considering the situation with Windows 11's "problematic" updates, does MS care about its end users? I mentioned this bug 6-7 months ago, and in the meanwhile I have learned Java, and you know what? I'm not looking back. MS is simply losing developers and end users because of the lack of support, which will eventually cost MS money. I'm avoiding .NET Core like the plague. In my projects, I'm using either the "outdated" .NET Framework or Java, but no .NET Core, because of bugs and lack of support. Developers and end users are starting to get "angry" with MS.
@taylaninan Sorry about the delay; this is on my list, but I am juggling lots of things here and doing my best. Unfortunately I can't accept a zip repro, but if you can describe your scenario or create a minimal repro (which doesn't include any of your private code that you don't want to OSS) and put it on a public repo, we can investigate. However, I want to be clear that it is not likely that addressing that particular case will "solve" this issue for everyone else. From the server's perspective, when this happens, we can only know that there was an unexpected end of request content, not what other parts of the system can or should do about it.
Does anyone know what other servers, like Apache, do under these circumstances? Or proxy servers like Nginx?
This is such an incredibly weird one; the very first time I saw this happen was when a colleague of mine living in the USA (I am in Central Europe; the server in question is hosted in Germany) started using a POST endpoint that I have used many times before without issue. When he does, this exception pops up fairly frequently, and for now we use Polly to just retry until the upload succeeds. It's a bit cursed, TBH.
@adityamandaleeka Just concentrate on the file "Program.cs", which is only 129 lines long in total. Just open the project in Visual Studio 2022 and hit "Run" to start the Kestrel server. For Autocannon to work, you must have Node.js and npm installed on your computer. If you have any questions, don't hesitate to ask me for help...
Thank you @taylaninan. That does indeed reproduce this exception. The high volume of concurrent connections seems like a key ingredient here and suggests that resource exhaustion of some sort is the issue.
Actually, it looks like these always appear at the end... I wonder if these are just partial requests that the client aborts when it needs to stop (in your case after 60 seconds).
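For context, a hedged sketch of the kind of setup that reproduces this class of error (hypothetical endpoint, port, and numbers; not taylaninan's actual Program.cs): a plain POST endpoint put under many concurrent connections with Autocannon, which aborts whatever requests are still in flight when its timed run ends.

```csharp
// Kestrel side: a trivial POST endpoint that reads the request body.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapPost("/echo", async (HttpContext context) =>
{
    using var reader = new StreamReader(context.Request.Body);
    return Results.Text(await reader.ReadToEndAsync());
});

app.Run();

// Client side (run in a shell; these are Autocannon's standard flags):
//   npx autocannon -m POST -b '{"x":1}' -H "content-type: application/json" \
//       -c 500 -d 60 http://localhost:5000/echo
// When the 60-second run ends, in-flight requests are aborted mid-body, which is
// when the "Unexpected end of request content" entries tend to show up.
```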
In that case, this sounds like expected behaviour. If so, does it make sense to reduce the log level of this to Warning or Info? It is not an application error.
@nenadcinober I agree, that makes sense. Let me consider that a bit more. Maybe that's the best outcome from this.
It might be a breaking change, but could it be more
@adityamandaleeka It seems this issue was introduced long ago in this PR: I feel we should replace "ThrowUnexpectedEndOfRequestContent()" with just "break;" on this line, as the client has canceled the request, so we can break gracefully: aspnetcore/src/Servers/Kestrel/Core/src/Internal/Http/Http1ContentLengthMessageBody.cs, line 91 at a4ca931
I've just opened a PR (#60359) to solve this issue in a safe way that's fairly straightforward. Basically, we can skip logging these bad requests as application errors and continue to allow people who want to see them to opt into bad request logging.
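To illustrate the opt-in side of that change, here is a short sketch (mine, not from the PR; the category name "Microsoft.AspNetCore.Server.Kestrel.BadRequests" is based on current Kestrel logging categories and should be checked against your version): lowering that category's minimum level keeps the bad-request diagnostics visible even once they are no longer surfaced as application errors.

```csharp
var builder = WebApplication.CreateBuilder(args);

// Opt into Kestrel's bad-request diagnostics, which are otherwise logged at Debug
// and filtered out by the default "Information" minimum level.
builder.Logging.AddFilter("Microsoft.AspNetCore.Server.Kestrel.BadRequests", LogLevel.Debug);

var app = builder.Build();
app.Run();
```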
@adityamandaleeka 1 down, 3430 issues to go...
Will this fix be available in .NET 8?
@lindsay-duncan-tylertech We usually backport fixes based on severity and impact. In this case, the log message is unexpected, but it doesn’t actually cause any issues for your application. Plus, this behavior has been around for years. Given that, while I'm glad we've improved it, I don’t think there’s a strong enough reason to backport this one.
@lindsay-duncan-tylertech perhaps you or others here would be willing to try out the next .NET 10 preview to confirm the fix.
I am facing the same issue. I am using .NET 8.
Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Reading the request body timed out due to data arriving too slowly. See MinRequestBodyDataRate.
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1ContentLengthMessageBody.ReadAsyncInternal(CancellationToken cancellationToken)
   at System.Runtime.CompilerServices.PoolingAsyncValueTaskMethodBuilder
@irfankhanteo Yours is a different issue than the client disconnects discussed above. As the exception message suggests, your request body is arriving too slowly, so it timed out. This doc will help you: https://learn.microsoft.com/dotnet/api/microsoft.aspnetcore.server.kestrel.core.kestrelserverlimits.minrequestbodydatarate
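A hedged example of adjusting that limit (the numbers are illustrative, not recommendations):

```csharp
using Microsoft.AspNetCore.Server.Kestrel.Core;

var builder = WebApplication.CreateBuilder(args);

builder.WebHost.ConfigureKestrel(options =>
{
    // Tolerate slower clients: require at least 50 bytes/second,
    // enforced only after a 20-second grace period.
    options.Limits.MinRequestBodyDataRate =
        new MinDataRate(bytesPerSecond: 50, gracePeriod: TimeSpan.FromSeconds(20));
});

var app = builder.Build();
app.Run();
```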
We are facing intermittently BadHttpRequestException: Unexpected end of request content.
We are running on .NET Core 3.1.5; this exception only seemed to appear after we moved over to .NET Core 3.0.
There have been similar issues opened in the past: #19476 (comment) and #6575.
This exception seems to be thrown when a client aborts mid-request.
My question is, should this be logged as a warning instead of an exception? It creates a lot of noise in our logs.