Avatars need thumbnail versions #12350
When showing user avatars on PRs, issues, etc. we limit them to 40x40 but still serve the full 240 KB PNG image; GitHub serves a 3 KB JPEG version for this. The difference is noticeable on slower connections like gitea.com and on pages that load many avatars, such as an issue with lots of comments. We should have a thumbnail version of user avatars to avoid this.

Comments
Yes, this would be a nice performance win. GitHub renders 40px avatars from an 88px source image. I think we should aim for a multiple of our biggest rendered avatar size, possibly 2x or 3x. The question is whether we want to keep the thumbnails in memory (and recompute them on every startup) or store them on the filesystem for faster startup after the initial generation.
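For illustration, a minimal sketch of what on-disk thumbnail generation could look like in Go, assuming `golang.org/x/image/draw` for scaling; the 120px default (3x the largest rendered 40px), the file-path handling, and the function names are illustrative, not an actual Gitea implementation:

```go
package avatars

import (
	"image"
	_ "image/gif" // register decoders for common avatar formats
	"image/jpeg"
	_ "image/png"
	"os"

	"golang.org/x/image/draw"
)

// thumbSize is an illustrative default: 3x the largest rendered size of 40px.
const thumbSize = 120

// writeThumbnail decodes the original avatar, scales it down to a square
// size x size thumbnail, and writes it to dstPath as a JPEG.
func writeThumbnail(srcPath, dstPath string, size int) error {
	f, err := os.Open(srcPath)
	if err != nil {
		return err
	}
	defer f.Close()

	src, _, err := image.Decode(f)
	if err != nil {
		return err
	}

	dst := image.NewRGBA(image.Rect(0, 0, size, size))
	// CatmullRom is slower than ApproxBiLinear but gives noticeably better
	// quality when downscaling. Avatars are assumed to already be square.
	draw.CatmullRom.Scale(dst, dst.Bounds(), src, src.Bounds(), draw.Src, nil)

	out, err := os.Create(dstPath)
	if err != nil {
		return err
	}
	defer out.Close()
	// Quality ~80 typically turns a large PNG into a JPEG of a few KB.
	return jpeg.Encode(out, dst, &jpeg.Options{Quality: 80})
}
```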
Actually, a dynamic size would be even better, e.g. requested via a URL parameter.
I think they should be kept on disk because the goal is to be able to serve them from a different server at some point (#11387).

I'm also not sure about our current dynamic avatar lookup system of https://gitea.com/user/avatar/6543/-1, because in a test of a very slow site (gitea.com), https://www.webpagetest.org/result/200729_K1_d89b116c0c49e418eea1af147116c08b/, that HTTP request can take anywhere from 200ms to almost 2 seconds just to return the real location of the image, which then has to be downloaded separately. It seems random when that request is faster or slower, but it is consistently slow, and even at its fastest it takes more time than it would to just download a smaller version of the image directly.

I know the initial point was to not slow down overall page rendering while waiting for avatars, but using gitea.com shows that avatar rendering is kind of painful as-is with the current method (it looks like a dial-up connection for me). I think having direct links to much smaller images (~3 KB) could be better there.
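As a sketch of the "direct link" idea: if the URL embeds something immutable, such as a hash of the avatar content, the browser can fetch and cache the image without first asking the server for its location. The URL layout and names below are assumptions, not Gitea's actual scheme:

```go
package avatars

import "fmt"

// avatarURL builds a direct, content-addressed link. Because the URL changes
// whenever the image changes, no per-request location lookup or redirect is
// needed, and the response can be cached aggressively.
func avatarURL(base string, userID int64, avatarHash string) string {
	return fmt.Sprintf("%s/avatars/%d/%s.jpg", base, userID, avatarHash)
}
```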
gitea.com is hosted in Asia, so that distance will affect latency significantly for non-Asian users. TLS 1.3 would improve latency by eliminating one round trip per TLS session. I also see that avatars are served over HTTP/1.1, which means browsers have to open a separate connection for each avatar and are generally limited to 6 parallel connections per domain name. This could be improved by upgrading to HTTP/2, where it should not matter much whether the avatars are on a separate domain or not.
These are already HTTP/2; everything from gitea.com is HTTP/2. There's one thing from goreportcard.com that is HTTP/1.1. It doesn't matter too much for this example, since there is only one user avatar anyway and there are never more than a few open connections to gitea.com at once, because most things are hosted elsewhere and the connections to gitea.com often need to wait on each other anyway (the avatar can't be downloaded until the request for its location has finished).

I think the point is that many gitea.com users are not connecting from Asia, do experience this latency, and it is helpful to remove or re-evaluate it wherever possible. TLS 1.3 wouldn't fix the fact that a request for the location of an avatar can sometimes take almost 2 seconds, and there are other situations where connections will be slow as well. In general I think it's just worth reflecting on the current design of how avatars are loaded, given the experience of using them on a slower site like gitea.com. The situation before wasn't good either, where page rendering could be held up for a while, but what we have currently doesn't seem perfect.

You can see in the webpagetest example above that on each test run the repo avatar, which is loaded directly from a known URL, downloads and is displayed in a fraction of the time it even takes to look up the location of the user avatar URL. So my original thought was that if the thumbnail images were small enough (~3 KB), we wouldn't need the extra lookup workaround of the /-1 URL that was implemented to avoid the original performance issues (there are probably other reasons for it too that I'm not familiar with).
Not sure why, but for me everything from gitea.com is served over HTTP/1.1. You can also clearly observe the 6-connection limit in that screenshot, which slows down the avatar downloads. +1 to removing that.

I think we want dynamic-size avatars based on a URL parameter and a memory- and/or disk-based cache of those avatars, invalidated only when users upload new ones, e.g. something like the sketch below.
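A hedged sketch of that idea, where the cache key combines user, requested size, and an avatar version that is bumped on every upload (all names below are hypothetical; `writeThumbnail` is the resizing helper sketched earlier):

```go
package avatars

import (
	"fmt"
	"os"
	"path/filepath"
)

// thumbCachePath maps (user, version, size) to a file in the disk cache.
// Bumping avatarVersion on upload implicitly invalidates all older entries;
// stale files can be garbage-collected separately.
func thumbCachePath(cacheDir string, userID, avatarVersion int64, size int) string {
	return filepath.Join(cacheDir, fmt.Sprintf("%d-%d-%d.jpg", userID, avatarVersion, size))
}

// cachedThumbnail returns the path of a ready thumbnail, generating it on a
// cache miss from the full-size original at srcPath.
func cachedThumbnail(cacheDir, srcPath string, userID, avatarVersion int64, size int) (string, error) {
	p := thumbCachePath(cacheDir, userID, avatarVersion, size)
	if _, err := os.Stat(p); err == nil {
		return p, nil // cache hit
	}
	if err := writeThumbnail(srcPath, p, size); err != nil {
		return "", err
	}
	return p, nil
}
```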
Also, when looking at those avatar requests, I see they are served without browser caching headers: https://developers.google.com/speed/docs/insights/LeverageBrowserCaching
Weird! I don't know anything about the setup/configuration of gitea.com or why it might be different for some people. But I think a disk-based cache of them would be good so they could be stored on a separate host with other 'static' assets (which is what @lunny wants for gitea.com). Ideally many of them could be cached locally anyway, but that saving is largely lost currently because it still takes a lot of time for the /-1 request just to respond that it is an image you might already have in the local cache. You can see in the tests above that it doesn't need to request the actual avatar image on a second view but still spends time on the /-1 request.
Such multi-domain workarounds are a relic of the past since HTTP/2, which can multiplex a theoretically unlimited number of streams through a single TCP connection. You should not expect much, if any, gain from moving assets to another host.
They cannot be cached currently because we don't send any cache headers with them.
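For reference, a minimal sketch of sending such headers from a Go handler; the one-week lifetime and the helper names are illustrative choices, not what the later PRs actually implement:

```go
package avatars

import "net/http"

// serveAvatar sends a thumbnail file with an explicit cache lifetime. A long
// max-age is only safe if the URL changes whenever the avatar changes.
func serveAvatar(w http.ResponseWriter, r *http.Request, path string) {
	w.Header().Set("Cache-Control", "public, max-age=604800") // one week
	http.ServeFile(w, r, path)
}
```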
Maybe we're talking about different things. I'm talking about hosting them on a CDN for the purpose of faster downloads from a geographically closer source, not about using a second hostname to get around the limit on simultaneous open connections. gitea.com (for me) is still slow now with HTTP/2; the fastest parts are downloaded from a CDN, and it was pretty much unusable before the CDN. The avatars on this GitHub page are hosted on a separate CDN as well.

The images now have an Expires header about 6 hours in the future, so some browsers like Chrome (and the test above) store them in a disk cache and don't always re-download them on each page view, but they need proper caching headers like the other files (many of which are set by the CDN they are hosted on). Not sure why the repo avatar gets Cache-Control headers but the user one doesn't, though :(
Not sure how CDN integration works, but I could see timely updates being an issue. When a user changes an avatar, they will expect it to show immediately, so you'd need a mechanism to invalidate the CDN content pretty much in real time, which would probably add a lot of complexity. I think we're generally overthinking this issue. Just add some cache headers and maybe implement thumbnails served directly or optionally via CDN, but I doubt it will help much.
Now almost all assets are served by a CDN except avatars, because Gitea didn't support that. Regarding HTTP/1.1 and HTTP/2, we have some HAProxy hosts as reverse proxies which may not support HTTP/2. We will enable it ASAP.
Possible to move them to the CDN as well?
This issue has been automatically marked as stale because it has not had recent activity. I am here to help clear issues left open even if solved or waiting for more insight. This issue will be closed if no further activity occurs during the next 2 weeks. If the issue is still valid just add a comment to keep it alive. Thank you for your contributions.
#13569 should take care of the caching. I'm not sure of the purpose of the 302 redirect that happens on every avatar fetch. Can someone explain maybe?
And #13649 makes avatars actually cacheable by removing the redirects. This should pretty much solve this performance issue for subsequent page loads, but actual image resizing would still be nice to have.