
Sentry HTTP 429 errors causing AWS Lambda InvokeError (status 502 response) #4582

Closed
@tmilar


Package + Version

  • @sentry/node 6.17.6
  • other:
    • @sentry/serverless 6.17.6
    • @sentry/tracing 6.17.6

Version:

6.17.6

Description

Due to heavy Sentry API usage, we sometimes receive HTTP 429 error responses from the Sentry SDK in our Node.js AWS Lambda functions.

The problem is that this error from the Sentry SDK sometimes results in a seemingly unhandleable InvokeError (and an HTTP 502 response from AWS Lambda), which is problematic for us because our actual business logic is working just fine.

2022-02-15T18:50:47.224Z	67dc62db-a670-4e84-ad4a-679145ecd1e1	ERROR	Invoke Error 	
{
    "errorType": "SentryError",
    "errorMessage": "HTTP Error (429)",
    "name": "SentryError",
    "stack": [
        "SentryError: HTTP Error (429)",
        "    at new SentryError (/var/task/node_modules/@sentry/utils/dist/error.js:9:28)",
        "    at ClientRequest.<anonymous> (/var/task/node_modules/@sentry/node/dist/transports/base/index.js:212:44)",
        "    at Object.onceWrapper (events.js:520:26)",
        "    at ClientRequest.emit (events.js:400:28)",
        "    at ClientRequest.emit (domain.js:475:12)",
        "    at HTTPParser.parserOnIncomingClient (_http_client.js:647:27)",
        "    at HTTPParser.parserOnHeadersComplete (_http_common.js:127:17)",
        "    at TLSSocket.socketOnData (_http_client.js:515:22)",
        "    at TLSSocket.emit (events.js:400:28)",
        "    at TLSSocket.emit (domain.js:475:12)"
    ]
}

Our expectation is that internal Sentry errors should not cause an outage in our own APIs.

We haven't found any way to handle or catch this error, because it appears to be thrown outside of our own Sentry calls such as Sentry.init(), Sentry.wrapHandler(), etc.
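
For reference, our setup looks roughly like the sketch below (the handler name and business logic are illustrative, and Sentry here is the AWSLambda namespace from @sentry/serverless). The SentryError above is thrown after our code has already returned, so our own try/catch never sees it:

    const Sentry = require('@sentry/serverless')

    Sentry.AWSLambda.init({
      dsn: process.env.SENTRY_DSN,
      tracesSampleRate: 0.2,
    })

    // Illustrative handler: our business logic succeeds, but the wrapper's
    // event flush can still fail with "SentryError: HTTP Error (429)".
    module.exports.handler = Sentry.AWSLambda.wrapHandler(async (event) => {
      try {
        return { statusCode: 200, body: JSON.stringify({ ok: true }) }
      } catch (err) {
        // Our own failures are handled here; the 429 SentryError never reaches this block.
        return { statusCode: 500, body: 'internal error' }
      }
    })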

We are fairly certain the 429 responses are due to exceeding our Transactions quota.

When we decreased the tracesSampleRate option in Sentry.init() from 0.2 to 0, the error stopped occurring.

We were seeing the issue occur in seemingly random cases, at a rate roughly matching our tracesSampleRate value. We also tried setting it to 1 to confirm, and observed a 100% error rate.

So the workaround we are currently using is to disable the feature entirely by setting it to 0, after which no more unhandled Sentry 429 HTTP errors were thrown. Still, having to disable an entire feature so that an external dependency doesn't break our app doesn't seem like the correct solution.

For completeness, this is our current config:

    Sentry.init({
      debug: SLS_STAGE === 'dev',
      dsn: sentryKey,
      tracesSampleRate: 0,

      environment: SLS_STAGE,
      release: `${SLS_SERVICE_NAME}:${SLS_APIG_DEPLOYMENT_ID}`,
    })
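
If we do want to keep some tracing data, one middle ground we're considering (just a sketch, not something we've validated; the rates and the stage check are made up) is using the SDK's tracesSampler option to sample a reduced fraction of transactions instead of disabling tracing entirely:

    Sentry.init({
      debug: SLS_STAGE === 'dev',
      dsn: sentryKey,

      // Instead of a fixed tracesSampleRate, sample a small fraction of
      // transactions so we stay under the Transactions quota.
      tracesSampler: (samplingContext) => {
        if (SLS_STAGE !== 'prod') {
          return 0
        }
        return 0.05
      },

      environment: SLS_STAGE,
      release: `${SLS_SERVICE_NAME}:${SLS_APIG_DEPLOYMENT_ID}`,
    })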
