log errors regardless of ignoreSentryErrors
#5064
Conversation
If we don't log anything at all when `ignoreSentryErrors` is enabled, it is impossible to tell when something is wrong with the integration. For example, when transaction payloads get too big and we start getting 413s from Sentry's service, the options are (a) cause 500s in the service that's being monitored (the worst possible outcome) or (b) have no way to tell it's happening. This PR introduces (c): make it possible to add a warning alarm on occurrences of Sentry errors in the logs.
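To make the failure mode concrete, here is a rough sketch of the flush handling being discussed. This is a simplification, not the SDK's actual `wrapHandler` code; the `WrapperOptions` shape and control flow are assumptions based on this thread.

```ts
import { flush } from '@sentry/node';
import { logger } from '@sentry/utils';

interface WrapperOptions {
  flushTimeout: number;
  ignoreSentryErrors: boolean;
}

// Approximation of the post-invocation flush in the serverless wrapper.
async function flushAfterInvocation(options: WrapperOptions): Promise<void> {
  await flush(options.flushTimeout).catch(e => {
    // The change proposed here: always log, so flush failures such as
    // HTTP 413 "payload too large" responses show up in CloudWatch and
    // can drive a warning alarm.
    logger.error(e);
    // Pre-existing behaviour: unless ignoreSentryErrors is set, the error
    // is rethrown and can turn a healthy invocation into a 500.
    if (!options.ignoreSentryErrors) {
      throw e;
    }
  });
}
```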
Edit: was able to run
Hi, thanks for opening this. I brought this back to the team - it's not ok that we interfere with return values (or crash any lambdas for that matter).
I am currently evaluating if we should just catch all errors that happen while flushing and log them out. Will get back to you very soon!
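A minimal sketch of that alternative, for illustration only (assumed shape, not a committed implementation): catch every flush error, log it, and never let it reach the wrapped handler.

```ts
import { flush } from '@sentry/node';
import { logger } from '@sentry/utils';

// Alternative under evaluation: always catch, always log, never rethrow,
// so the wrapped Lambda's result is never affected by a failed flush.
async function flushAndSwallow(flushTimeout: number): Promise<void> {
  try {
    await flush(flushTimeout);
  } catch (e) {
    logger.error(e);
  }
}
```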
```diff
@@ -318,8 +318,10 @@ export function wrapHandler<TEvent, TResult>(
       transaction.finish();
       hub.popScope();
       await flush(options.flushTimeout).catch(e => {
+        // logging regardless of ignoreSentryErrors makes it possible to detect issues with sentry
+        // using logs; otherwise, things like oversized payloads fail completely silently.
+        logger.error(e);
```
Suggested change:

```diff
-        logger.error(e);
+        IS_DEBUG_BUILD && logger.error(e);
```
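For context, `IS_DEBUG_BUILD` is a build-time flag the SDK uses to guard logging so that bundlers can strip it from production builds. A rough sketch of the pattern follows; the flag wiring shown here is an assumption for illustration, not the SDK's exact definition.

```ts
// Bundlers are configured to replace __SENTRY_DEBUG__ with a boolean literal,
// so `false && logFlushError(e)` becomes dead code and is dropped from prod bundles.
declare const __SENTRY_DEBUG__: boolean | undefined;

const IS_DEBUG_BUILD = typeof __SENTRY_DEBUG__ === 'undefined' || __SENTRY_DEBUG__;

function logFlushError(e: unknown): void {
  // Only emits in debug builds; stripped otherwise.
  IS_DEBUG_BUILD && console.error('[Sentry] flush failed:', e);
}
```

The trade-off relative to the goal of this PR: gating on the flag keeps bundles small, but in builds where the flag is compiled out the flush failures would again leave no trace in the logs.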
Hi, we recently merged #5091. With that change, all errors while flushing are caught by default - no more crashing lambdas. Additionally, it logs those errors. I believe this makes this PR redundant; however, feel free to double-check me on that one. Even though this probably won't get merged, thank you very much for the contribution - it got the ball rolling. Please check this thread for when this will reach our released lambda layers.
Due to the use of

I'm also not sure how the PR you linked would affect the crashing behavior. It seems to just log something if the conditional is true, but it does not return early or handle the error in any other special way. As far as I can tell, the error would just keep bubbling up and trigger an exception. So in order to avoid an exception, we'd still have to run

Are there arguments against always logging an error when an error occurs while communicating with the Sentry service? I can't think of any, but perhaps one of the following options would allow for sufficient flexibility to satisfy any concerns?
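To make the crash-avoidance question concrete, here is a hypothetical usage of the serverless wrapper. The option names follow the ones discussed in this thread; the exact values and configuration are illustrative assumptions, not a documented recipe.

```ts
import * as Sentry from '@sentry/serverless';

Sentry.AWSLambda.init({ dsn: process.env.SENTRY_DSN });

// With ignoreSentryErrors unset, a failed flush after the handler returns can
// reject the invocation; setting it to true swallows the failure instead, which
// is exactly the silent mode that unconditional logging is meant to make visible.
export const handler = Sentry.AWSLambda.wrapHandler(
  async () => ({ statusCode: 200, body: 'ok' }),
  { flushTimeout: 2000, ignoreSentryErrors: true },
);
```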
Sorry, linked the wrong PR. #5090 is the one that changes error behaviour.
#5090 provides an error callback for the

Feel free to try our patch when it is released and ping us if something doesn't work as expected!
This pull request has gone three weeks without activity. In another week, I will close it. But! If you comment or otherwise update it, I will reset the clock, and if you label it

"A weed is but an unloved flower." ― Ella Wheeler Wilcox 🥀
Before submitting a pull request, please take a look at our
Contributing guidelines and verify (`yarn lint`) & (`yarn test`).