
frequent error 500 when writing comment on code line in PR #14274


Closed
2 of 6 tasks
sbourdeauducq opened this issue Jan 7, 2021 · 11 comments
Labels: issue/needs-feedback, type/bug

Comments

@sbourdeauducq

Description

Writing a comment on a source code line in a pull request often causes a weird "Error 500" screen (with multiple top bars, exactly as the screenshot shows). The comment is still posted.

Screenshots

[Screenshot: Screenshot_2021-01-07 thermostat]

@noerw (Member) commented Jan 7, 2021

Could you provide an excerpt from your gitea log file, when this happened?

noerw added the type/bug label Jan 7, 2021
@noerw (Member) commented Jan 7, 2021

@jpraet could this be a regression from #14002?

noerw added the issue/needs-feedback label Jan 7, 2021
@jpraet (Member) commented Jan 7, 2021

> @jpraet could this be a regression from #14002?

I doubt that, as that PR just restored the previous behavior. But difficult to say without the logs.

@zeripath (Contributor) commented Jan 7, 2021

Every 500 is accompanied by a log.Error - without providing logs we will not be able to debug this at all.

Not providing logs is frankly rude and is tantamount to spamming maintainers.

I've written multiple times about how to adjust logging configuration to help improve this, and it's in the header, but for completeness here is the link: https://docs.gitea.io/en-us/logging-configuration/#debugging-problems

You must give us logs. It is not fair to expect us to guess the cause of your problems.

@sbourdeauducq (Author)

> You must give us logs. It is not fair to expect us to guess the cause of your problems.

I was not expecting anyone to guess, but I posted it just in case it was a known problem: getting logs via journalctl turned out to be somewhat tricky (probably not a Gitea issue; NixOS doesn't use the systemd logging levels, for instance), and the symptom in the screenshot is quite peculiar.

Anyway, sorry for posting a low quality bug report; I got logs now and this seems to be similar to #13581 (but happening more often) and not related to the "weirdness" of the screenshot:

```
Jan 07 13:07:05 nixbld gitea[2612]: 2021/01/07 13:07:05 ...m.io/xorm/core/db.go:286:afterProcess() [I] [SQL] SELECT `id`, `user_id`, `repo_id`, `status`, `source`, `issue_id`, `commit_id`, `comment_id`, `updated_by`, `created_unix`, `upd>
Jan 07 13:07:05 nixbld gitea[2612]: 2021/01/07 13:07:05 ...m.io/xorm/core/db.go:286:afterProcess() [I] [SQL] SELECT count(*) FROM `notification` WHERE (user_id = ?) AND (status = ?) [1 1] - 68.797µs
Jan 07 13:07:05 nixbld gitea[2612]: 2021/01/07 13:07:05 ...user/notification.go:38:func1() [E] GetNotificationCount: database table is locked: notification
Jan 07 13:07:05 nixbld gitea[2612]: 2021/01/07 13:07:05 ...s/context/context.go:139:HTML() [D] Template: status/500
Jan 07 13:07:05 nixbld gitea[2612]: 2021/01/07 13:07:05 ...m.io/xorm/core/db.go:286:afterProcess() [I] [SQL] UPDATE `notification` SET `updated_unix` = ? WHERE `id`=? [1609996025 2091] - 19.885µs
Jan 07 13:07:05 nixbld gitea[2612]: 2021/01/07 13:07:05 ...m.io/xorm/core/db.go:286:afterProcess() [I] [SQL] SELECT `id`, `lower_name`, `name`, `full_name`, `email`, `keep_email_private`, `email_notifications_preference`, `passwd`, `pass>
Jan 07 13:07:05 nixbld gitea[2612]: 2021/01/07 13:07:05 ...m.io/xorm/core/db.go:286:afterProcess() [I] [SQL] SELECT count(*) FROM `notification` WHERE (user_id = ?) AND (status = ?) [1 1] - 20.3µs
Jan 07 13:07:05 nixbld gitea[2612]: 2021/01/07 13:07:05 ...user/notification.go:38:func1() [E] GetNotificationCount: database table is locked: notification
Jan 07 13:07:05 nixbld gitea[2612]: 2021/01/07 13:07:05 ...s/context/context.go:139:HTML() [D] Template: status/500

...

Jan 07 13:07:05 nixbld gitea[2612]: 2021/01/07 13:07:05 ...m.io/xorm/core/db.go:286:afterProcess() [I] [SQL] SELECT count(*) FROM `notification` WHERE (user_id = ?) AND (status = ?) [1 1] - 76.472µs
Jan 07 13:07:05 nixbld gitea[2612]: 2021/01/07 13:07:05 ...user/notification.go:38:func1() [E] GetNotificationCount: database table is locked: notification
Jan 07 13:07:05 nixbld gitea[2612]: 2021/01/07 13:07:05 ...s/context/context.go:139:HTML() [D] Template: status/500

...

Jan 07 13:07:05 nixbld gitea[2612]: [Macaron] 2021-01-07 13:07:05: Completed GET /M-Labs/thermostat/pulls/43/files 500 Internal Server Error in 46.118127ms
```

So, migrate to PostgreSQL?
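For illustration, the "database table is locked" failure mode above can be reproduced outside Gitea. This is a hypothetical standalone Python sketch, not Gitea code: two connections to the same SQLite file with the busy timeout disabled, where one connection's open write transaction makes the other's write fail immediately instead of waiting.

```python
import os
import sqlite3
import tempfile

# Two connections to one SQLite database file, busy timeout disabled.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, timeout=0)
other = sqlite3.connect(path, timeout=0)

# Table name borrowed from the log excerpt; schema is made up.
writer.execute("CREATE TABLE notification (id INTEGER, status INTEGER)")
writer.commit()

# Opens a write transaction on `writer`, taking SQLite's RESERVED lock.
writer.execute("INSERT INTO notification VALUES (1, 1)")

# The second connection's write cannot acquire the lock and, with a
# zero busy timeout, fails immediately rather than retrying.
try:
    other.execute("INSERT INTO notification VALUES (2, 1)")
    msg = "no error"
except sqlite3.OperationalError as exc:
    msg = str(exc)

print(msg)
writer.rollback()
```

With a nonzero busy timeout (or a build that can wait on the lock, as `sqlite_unlock_notify` provides in Gitea's case) the second write would block and retry instead of erroring.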

@zeripath (Contributor) commented Jan 9, 2021

Oh crap, your SQLite db is locking somehow.

But it is weird - because if we have a genuine deadlock we should be seeing your process just get locked up completely - edit: it probably is.

Do you have a cluster of multiple gitea processes running at the same time? If so you have to use a real db not sqlite.

Are you building gitea yourself? What tags are you building gitea with?
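For reference, pointing Gitea at a real database is an `app.ini` change. A minimal sketch of the `[database]` section for PostgreSQL; the host, database name, user, and password below are placeholders, not values from this report:

```ini
[database]
DB_TYPE = postgres
HOST    = 127.0.0.1:5432
NAME    = gitea
USER    = gitea
PASSWD  = changeme
```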

@zeripath (Contributor) commented Jan 9, 2021

If it's a real deadlock we'll need more logs - we would need to get all of the logs from just before it was working to just as it stops. That is, include at least a few SQL queries that definitely succeed, and don't trim anything out except confidential data.

Deadlocks are a nightmare; we need to know exactly which request caused the failure.

The notification queries you've shown us will not be the cause - the issue will be earlier.

@sbourdeauducq (Author)

> But it is weird - because if we have a genuine deadlock we should be seeing your process just get locked up completely - edit: it probably is.

Wouldn't there be an error entry in the log about it? I don't see any [E] messages that would indicate this.

> Do you have a cluster of multiple gitea processes running at the same time?

No, only one.

> Are you building gitea yourself? What tags are you building gitea with?

No, I am using the Nix package. It is built with the `pam` and `sqlite` tags.

https://github.com/NixOS/nixpkgs/blob/release-20.09/pkgs/applications/version-management/gitea/default.nix
https://github.com/NixOS/nixpkgs/blob/release-20.09/nixos/modules/services/misc/gitea.nix

> If it's a real deadlock we'll need more logs - we would need to get all of the logs from just before it was working to just as it stops

There doesn't seem to be any stopping. The same process has been running for days, and comments still get posted and the rest of the site still operates, the only symptom seems to be the "error 500" screen.

@zeripath (Contributor)

Ah. That's the problem!

You need the `sqlite_unlock_notify` tag in addition to `pam` and `sqlite`.
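For anyone building from source, Gitea's Makefile accepts build tags through a `TAGS` variable. A sketch of a build with the tags discussed here; your full tag set may differ:

```sh
TAGS="pam sqlite sqlite_unlock_notify" make build
```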

@sbourdeauducq (Author)

Thanks, will try :)

@sbourdeauducq (Author)

Seems to do the trick :) Thanks again.

go-gitea locked and limited conversation to collaborators Mar 11, 2021