ENH: enable linkchecker #174
Conversation
✅ Deploy Preview for taupe-gaufre-c4e660 ready!
@HengchengZhang (cc: @HumphreyYang) can you please review the link checker results? I suspect the
It would be great if you could (as a project) do some research into making the link checker we use (provided by Sphinx) more robust: https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-the-linkcheck-builder I look forward to what you find. Thanks.
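As a starting point, the linkcheck builder exposes several options in `conf.py`. The option names below come from the Sphinx documentation linked above; the values (retry count, timeout, the Wiley URL prefix, and the user-agent string) are illustrative guesses, not project policy:

```python
# conf.py -- illustrative linkcheck settings (values are guesses, not project policy)

# Retry flaky servers a couple of times before reporting a broken link.
linkcheck_retries = 2

# Give slow sites more time before timing out (seconds).
linkcheck_timeout = 30

# Some sites reject Sphinx's default user agent; send a browser-like one
# for a given URL prefix (hypothetical header value).
linkcheck_request_headers = {
    "https://onlinelibrary.wiley.com/": {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64)",
    },
}
```

Whether a custom `User-Agent` is enough to get past Wiley's bot protection would need to be tested against a real build.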
Sorry @mmcky, I didn't manage to come up with a nice solution for now. The problem is that the Wiley online library returns a 403 error code and rejects the link checker. I've tried changing the configuration in Sphinx, but I believe this problem only occurs when checking links that point to strongly anti-scraping websites like the Wiley online library: I also tried other web-scraping packages to read pages on the Wiley online library, and they failed as well. The solution that comes to mind is to manually check these URLs and then ignore them in the link checker to avoid build errors. Of course we could try a different link checker, but that could be complicated and would require additional packages.
thanks @HengchengZhang, nicely researched. In that case I would be in favour of adding an ignore for these Wiley links for now. Now that our source files are in
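The ignore list can be expressed with the `linkcheck_ignore` option, which takes regular expressions matched against each URL. The exact pattern below is hypothetical; it would need to be adjusted to the Wiley links that actually appear in the lectures:

```python
# conf.py -- hypothetical ignore pattern for links the checker cannot reach
import re

linkcheck_ignore = [
    r"https://onlinelibrary\.wiley\.com/.*",  # Wiley returns 403 to the checker
]

# Quick sanity check that the pattern matches a typical Wiley DOI-style link
# (the DOI below is made up for illustration).
print(bool(re.match(linkcheck_ignore[0],
                    "https://onlinelibrary.wiley.com/doi/10.1111/example")))  # → True
```

The trade-off is that ignored links are never checked again, so they should be re-verified manually from time to time.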
Thanks @mmcky, this actually works for the Wiley links! But it fails for some Wikipedia links, because the checker also checks all the anchors in the link (and Wikipedia has a lot of empty anchors). I think that can be fixed by ignoring anchors during the link-checking process. But then it is weird that Sphinx didn't report them as errors before, since anchor checking is enabled by default.
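For reference, Sphinx offers two related options here (both documented for the linkcheck builder): turning anchor checking off entirely, or keeping it and skipping anchors that match a regex. The Wikipedia pattern below is a hypothetical example, not one taken from this repository:

```python
# conf.py -- two ways to avoid failures on in-page anchors (illustrative)

# Option 1: skip anchor checking entirely.
linkcheck_anchors = False

# Option 2: keep anchor checking, but ignore anchor names matching these
# regexes (Sphinx's built-in default list is ["^!"]).
linkcheck_anchors_ignore = [
    "^!",
    r"cite_note-.*",  # e.g. Wikipedia footnote anchors (hypothetical pattern)
]
```

Option 2 is the gentler choice, since genuinely broken anchors elsewhere would still be reported.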
I am not sure I fully understand this. Do the anchors resolve in the browser but fail in the link checker?
It looks like they are building special cases into the checker: https://github.com/sphinx-doc/sphinx/pull/9260/files So maybe Wikipedia has some special structure as well?
@HengchengZhang once the new build runs, can you check the linkchecker results for anything that should be fixed before next week.
@HengchengZhang the results are now available in the linkchecker task. 👍
* MAINT: review workflows and actions
* Fix Syntax for PDF
* Update cache and publish workflows
* enable some mathjax macros
* FIX: fail the cache on execution issue
* FIX: heavy_tails
Hi @mmcky, I've fixed the broken links, but there is one error left from the link checker. Shall we also add this to the ignore list, or paraphrase?
thanks @HengchengZhang it should pass now
thanks @HengchengZhang
This PR enables the link checker. The link checker will only run once per Pull Request.