Error 500 after pushing to new repos - docker #8023


Closed
holmesb opened this issue Aug 28, 2019 · 17 comments

holmesb commented Aug 28, 2019

Further to issue #7499, at the request of @sapk, I am opening a new issue to track this problem on docker. I get a 500 error in the web UI after pushing to or initialising a repo. @sapk suspects this is related to gitea not being able to read some folders for various reasons, but shelling into the container and writing files in the repos dir works fine. Example of error:

[Macaron] 2019-08-26 18:04:01: Completed GET /myorg/myrepo 500 Internal Server Error in 58.821377ms

I've tried SSH & HTTPS, OpenID & local login, and private & non-private repos. Restarting the container/pod doesn't help.

  • Gitea version (or commit ref): v1.9.2
  • Git version: 2.17.1
  • Operating system: Whatever the docker image is.
  • Database (use [x]):
    • PostgreSQL
    • MySQL
    • MSSQL
    • SQLite
  • Can you reproduce the bug at https://try.gitea.io:
    • Yes (provide example URL)
    • No
    • Not relevant

On systemd-based systems this can be fixed by adding a line to the systemd unit file: ReadWritePaths=/path/to/git/repos. But there is no systemd unit in the docker container.
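
For reference, the systemd workaround mentioned above looks roughly like this (a sketch of a drop-in override; the unit name, drop-in path and repository path are assumptions and depend on the install):

# /etc/systemd/system/gitea.service.d/override.conf  (hypothetical drop-in path)
[Service]
# Allow the sandboxed Gitea process to write to the repository directory
# when the unit is hardened with ProtectSystem/ReadOnlyPaths.
ReadWritePaths=/path/to/git/repos

After adding it, run systemctl daemon-reload and restart the gitea service. None of this applies inside the docker image, which is the point of this issue.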

@holmesb changed the title from "Error 500 after pushing to new repos - docker-based" to "Error 500 after pushing to new repos - docker" Aug 28, 2019

lafriks commented Aug 28, 2019

Can you give the error from gitea.log that is logged when this error happens?

holmesb commented Aug 28, 2019

Hi @lafriks. The error above ("[Macaron]...") is from the gitea.log, AFAIK. I'm on Kubernetes, so whatever the kubectl logs command outputs is what I posted. I can post the surrounding lines if it helps, but I don't think they were interesting. It happens at exactly the moment I try to browse to a repo after a push/initialise.

lafriks commented Aug 28, 2019

The real error is logged in the gitea.log file in the /data/gitea/logs/ directory.

holmesb commented Aug 28, 2019

The /data/gitea/log dir is empty in k8s. The surrounding lines when the web UI error occurs:

[Macaron] 2019-08-28 22:21:37: Started GET /myorg/myrepo for 10.197.8.9
[Macaron] 2019-08-28 22:21:38: Completed GET /myorg/myrepo 500 Internal Server Error in 61.893175ms
[Macaron] 2019-08-28 22:21:38: Started GET /img/favicon.png for 10.197.8.9
[Macaron] 2019-08-28 22:21:38: Started GET /manifest.json for 10.197.8.9
[Macaron] [Static] Serving /img/favicon.png
[Macaron] 2019-08-28 22:21:38: Completed GET /img/favicon.png 200 OK in 127.45µs
[Macaron] 2019-08-28 22:21:38: Completed GET /manifest.json 200 OK in 1.668535ms
[Macaron] 2019-08-28 22:21:40: Started GET /serviceworker.js for 10.197.8.9
[Macaron] 2019-08-28 22:21:40: Completed GET /serviceworker.js 200 OK in 2.08394ms

@typeless
Contributor

How do you set up your Gitea using Docker? Can you post your docker-compose or the startup script for your Gitea image?

@typeless
Contributor

By the way, the app.ini config file has a [log] section that controls the log level. Setting it to Trace will provide more information.
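
For example (a minimal sketch; the MODE and ROOT_PATH values here are only illustrative defaults, keep whatever your existing app.ini uses):

[log]
MODE      = file
LEVEL     = Trace
ROOT_PATH = /data/gitea/log   ; directory where gitea.log is written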

holmesb commented Aug 30, 2019

Logging is already LEVEL = trace. Set up using this helm chart. I am using the docker image gitea/gitea:1.9.2, which was built from this Dockerfile. Should be easy to try to reproduce. Can you?

@typeless
Contributor

I set up my Gitea with systemd and have no idea about the helm chart.

For troubleshooting, the things I would look at are (a rough sketch of how to check each from inside the container follows the list):

  1. The owners and file permissions of the repository files and directories.
  2. The user/group of the Gitea process in the container.
  3. The output in gitea.log at the moment the repository is created.
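
Something like this from inside the Gitea container (the paths are assumptions based on the default docker layout; check the ROOT value in the [repository] section of app.ini for the real repository path):

# 1. owners and permissions of the repository files
ls -al /data/git/repositories
# 2. which user the gitea process runs as
ps | grep '[g]itea'
# 3. the application log around the time the repo was created
tail -n 100 /data/gitea/log/gitea.log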

holmesb commented Aug 31, 2019

You don't need the helm chart just to run the docker image (to test without systemd). The same occurs just running gitea in a simple pod (without the rest of the Kubernetes machinery, i.e. no ingress/services, etc). Here are all processes and the associated user:

PID   USER     TIME  COMMAND
1      root      0:00 /bin/s6-svscan /etc/s6
14     root      0:00 s6-supervise openssh
15     root      0:00 s6-supervise gitea
16     root      0:00 /usr/sbin/sshd -D -e
17     git      12:57 /app/gitea/gitea web
52     root      0:00 bash
29     root      0:00 bash
138    root      0:00 bash

And here are the data perms:

bash-5.0# ls -al /data
total 4
drwxr-xr-x    1 root     root            48 Aug 26 17:58 .
drwxr-xr-x    1 root     root          4096 Aug 26 18:25 ..
drwxr-xr-x    1 git      git             64 Aug 26 18:25 git
drwxr-xr-x    1 git      git             14 Aug 25 21:38 gitea
drwx------    1 999      root           486 Aug 25 21:36 postgresql-db
drwx------    1 root     root           312 Aug 25 21:38 ssh

I haven't changed any perms. Everything under git & gitea is owned by git. Does that look right?

typeless commented Aug 31, 2019

@holmesb I just tested it on my macOS using SQLite, and it worked fine.
I used the docker-compose file from the docs page: https://docs.gitea.io/en-us/install-with-docker/

version: "2"

networks:
  gitea:
    external: false

services:
  server:
    image: gitea/gitea:latest
    environment:
      - USER_UID=1000
      - USER_GID=1000
    restart: always
    networks:
      - gitea
    volumes:
      - ./gitea:/data
    ports:
      - "3000:3000"
      - "222:22"

I used the above snippet verbatim, saving it as docker-compose.yml.
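
For reproducing, the usual way to bring that file up is (a sketch; assumes the snippet above was saved as docker-compose.yml in the current directory):

docker-compose up -d              # starts Gitea; ./gitea is created as the /data volume
docker-compose logs -f server     # follow the container log while pushing/creating a repo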

holmesb commented Sep 1, 2019

OK, found gitea.log. It's not in /data/log as I expected; it's at /app/gitea/log/gitea.log. Here's what's logged when the 500 error occurs in the web UI (when clicking a repo):

2019/09/01 17:02:11 .../xorm/session_get.go:99:nocacheGet() [I] [SQL] SELECT "id", "lower_name", "name", "full_name", "email", "keep_email_private", "passwd", "must_change_password", "login_type", "login_source", "login_name", "type", "location", "website", "rands", "salt", "language", "description", "created_unix", "updated_unix", "last_login_unix", "last_repo_visibility", "max_repo_creation", "is_active", "is_admin", "allow_git_hook", "allow_import_local", "allow_create_organization", "prohibit_login", "avatar", "avatar_email", "use_custom_avatar", "num_followers", "num_following", "num_stars", "num_repos", "num_teams", "num_members", "visibility", "diff_view_style", "theme" FROM "user" WHERE "id"=$1 LIMIT 1 []interface {}{1}
2019/09/01 17:02:11 .../xorm/session_raw.go:87:queryRow() [I] [SQL] SELECT count(*) FROM "notification" WHERE (user_id = $1) AND (status = $2) []interface {}{1, 0x1}
2019/09/01 17:02:11 .../xorm/session_get.go:99:nocacheGet() [I] [SQL] SELECT "id", "lower_name", "name", "full_name", "email", "keep_email_private", "passwd", "must_change_password", "login_type", "login_source", "login_name", "type", "location", "website", "rands", "salt", "language", "description", "created_unix", "updated_unix", "last_login_unix", "last_repo_visibility", "max_repo_creation", "is_active", "is_admin", "allow_git_hook", "allow_import_local", "allow_create_organization", "prohibit_login", "avatar", "avatar_email", "use_custom_avatar", "num_followers", "num_following", "num_stars", "num_repos", "num_teams", "num_members", "visibility", "diff_view_style", "theme" FROM "user" WHERE "lower_name"=$1 LIMIT 1 []interface {}{"myorg"}
2019/09/01 17:02:11 .../xorm/session_get.go:99:nocacheGet() [I] [SQL] SELECT "id", "owner_id", "lower_name", "name", "description", "website", "default_branch", "num_watches", "num_stars", "num_forks", "num_issues", "num_closed_issues", "num_pulls", "num_closed_pulls", "num_milestones", "num_closed_milestones", "is_private", "is_empty", "is_archived", "is_mirror", "is_fork", "fork_id", "size", "is_fsck_enabled", "close_issues_via_commit_in_any_branch", "topics", "avatar", "created_unix", "updated_unix" FROM "repository" WHERE "owner_id"=$1 AND "lower_name"=$2 LIMIT 1 []interface {}{8, "sr"}
2019/09/01 17:02:11 .../xorm/session_get.go:99:nocacheGet() [I] [SQL] SELECT "id", "repo_id", "user_id", "mode" FROM "collaboration" WHERE "repo_id"=$1 AND "user_id"=$2 LIMIT 1 []interface {}{3, 1}
2019/09/01 17:02:11 ...xorm/session_find.go:199:noCacheFind() [I] [SQL] SELECT "id", "repo_id", "type", "config", "created_unix" FROM "repo_unit" WHERE (repo_id = $1) []interface {}{3}
2019/09/01 17:02:11 .../xorm/session_raw.go:87:queryRow() [I] [SQL] SELECT count(*) FROM "release" WHERE repo_id=$1 AND is_draft=$2 []interface {}{3, false}
2019/09/01 17:02:11 .../xorm/session_get.go:99:nocacheGet() [I] [SQL] SELECT "id", "owner_id", "lower_name", "name", "description", "website", "default_branch", "num_watches", "num_stars", "num_forks", "num_issues", "num_closed_issues", "num_pulls", "num_closed_pulls", "num_milestones", "num_closed_milestones", "is_private", "is_empty", "is_archived", "is_mirror", "is_fork", "fork_id", "size", "is_fsck_enabled", "close_issues_via_commit_in_any_branch", "topics", "avatar", "created_unix", "updated_unix" FROM "repository" WHERE (owner_id=$1 AND fork_id=$2) LIMIT 1 []interface {}{1, 3}
2019/09/01 17:02:11 .../xorm/session_get.go:99:nocacheGet() [I] [SQL] SELECT "id", "user_id", "repo_id" FROM "watch" WHERE "user_id"=$1 AND "repo_id"=$2 LIMIT 1 []interface {}{1, 3}
2019/09/01 17:02:11 .../xorm/session_get.go:99:nocacheGet() [I] [SQL] SELECT "id", "uid", "repo_id" FROM "star" WHERE "uid"=$1 AND "repo_id"=$2 LIMIT 1 []interface {}{1, 3}
2019/09/01 17:02:11 ...ules/context/repo.go:664:func1() [E] GetCommitsCount: dial unix data/caches: connect: no such file or directory
2019/09/01 17:02:12 .../xorm/session_get.go:99:nocacheGet() [I] [SQL] SELECT "id", "lower_name", "name", "full_name", "email", "keep_email_private", "passwd", "must_change_password", "login_type", "login_source", "login_name", "type", "location", "website", "rands", "salt", "language", "description", "created_unix", "updated_unix", "last_login_unix", "last_repo_visibility", "max_repo_creation", "is_active", "is_admin", "allow_git_hook", "allow_import_local", "allow_create_organization", "prohibit_login", "avatar", "avatar_email", "use_custom_avatar", "num_followers", "num_following", "num_stars", "num_repos", "num_teams", "num_members", "visibility", "diff_view_style", "theme" FROM "user" WHERE "id"=$1 LIMIT 1 []interface {}{1}
2019/09/01 17:02:12 .../xorm/session_raw.go:87:queryRow() [I] [SQL] SELECT count(*) FROM "notification" WHERE (user_id = $1) AND (status = $2) []interface {}{1, 0x1}
2019/09/01 17:02:13 .../xorm/session_get.go:99:nocacheGet() [I] [SQL] SELECT "id", "lower_name", "name", "full_name", "email", "keep_email_private", "passwd", "must_change_password", "login_type", "login_source", "login_name", "type", "location", "website", "rands", "salt", "language", "description", "created_unix", "updated_unix", "last_login_unix", "last_repo_visibility", "max_repo_creation", "is_active", "is_admin", "allow_git_hook", "allow_import_local", "allow_create_organization", "prohibit_login", "avatar", "avatar_email", "use_custom_avatar", "num_followers", "num_following", "num_stars", "num_repos", "num_teams", "num_members", "visibility", "diff_view_style", "theme" FROM "user" WHERE "id"=$1 LIMIT 1 []interface {}{1}
2019/09/01 17:02:13 .../xorm/session_raw.go:87:queryRow() [I] [SQL] SELECT count(*) FROM "notification" WHERE (user_id = $1) AND (status = $2) []interface {}{1, 0x1}
2019/09/01 17:05:10 ...go-xorm/xorm/rows.go:50:newRows() [I] [SQL] SELECT "id", "repo_id", "interval", "enable_prune", "updated_unix", "next_update_unix" FROM "mirror" WHERE (next_update_unix<=$1) AND (next_update_unix!=0) []interface {}{1567357510}

So the real error is:
2019/09/01 17:02:11 ...ules/context/repo.go:664:func1() [E] GetCommitsCount: dial unix data/caches: connect: no such file or directory
But I created the dir "/data/caches" and set its owner to the git user, and this error still occurs. Any ideas? Perhaps some problem with the memcached container (memcached:1.5.6-alpine) that is also running in this pod?

holmesb commented Sep 1, 2019

I've created a new issue to address the fact that finding gitea logs in docker is not intuitive.

typeless commented Sep 2, 2019

I cannot think of any clue from the log. Can you try a clean install with the default configuration (do not create app.ini by hand, just let Gitea generate it)?

If that works, then try to bisect the differences between the A/B cases.

holmesb commented Sep 8, 2019

Fixed. There was a mismatch between the TCP port memcached was listening on and this app.ini setting:

[cache]
HOST     = 127.0.0.1:<PORT>
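
For anyone hitting the same thing, the relevant section looks roughly like this (a sketch; ADAPTER and HOST are the standard Gitea [cache] keys, and 11211 is only memcached's default port, so use whatever your memcached container actually listens on):

[cache]
ADAPTER  = memcache
HOST     = 127.0.0.1:11211   ; must match the memcached listen address/port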

Thanks @typeless for establishing that it wasn't a problem with the Dockerfile.

@darrel1234

@holmesb how did you manage to fix the memcache issue? I'm using the helm chart from https://github.com/jfelten/gitea-helm-chart as well.

holmesb commented Oct 24, 2019

In the memcached container, find the port it's listening on:
netstat -peanut |grep memc

Then in gitea container:

vi /data/gitea/conf/app.ini

and, in the [cache] section, check that the port in the HOST= value matches. Not sure, but I think I finally had to reboot the container (reboot).
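
Putting those steps together (a sketch for illustration; the paths and grep pattern assume busybox tools and the default docker layout):

# in the memcached container: which port is it actually listening on?
netstat -tln
# in the gitea container: does the [cache] HOST= value point at that same host:port?
grep -A 3 '^\[cache\]' /data/gitea/conf/app.ini
# then restart the gitea container/pod so the new setting takes effect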

@darrel1234

you're a legend... thanks so much for that

@go-gitea locked and limited conversation to collaborators Nov 24, 2020