Insert yield checks at appropriate places #524
There's the issue of never yielding, and the issue of fairness. For 0.1 let's just try to make sure we don't tie up a scheduler forever in an iloop.
I added a yield after send, which I think makes it less likely that one might create an iloop that never yields. Do we want to try to make back edges yield for 0.1? I say no.
I will note that when this happens, we will probably need to change
discussed at workweek, this is going to be accomplished "natively" by work stealing + keeping at least one spare thread whenever there are more tasks than threads.
Maybe I don't understand what that means, but won't that just result in "extra threads" being allocated until there are as many threads as tasks?
I'm pretty sure this is still an issue, so reopening. With the new runtime written in rust, we could have a
The only overhead of a thread compared to a Rust task is the context switching at arbitrary points. Inserting yield checks would be much slower than just using 1:1 threading, at least on Linux, so I don't think it makes sense to do this. You're better off with context switches than yield checks in critical loops.
If I remember right, the plan was to insert checks on back-edges in the CFG (presumably including tailcalls), and to have them only actually yield 1 in BIGNUM times. It would be worth profiling, but I think that would still save significantly over kernel-mode context switches.
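A minimal sketch of what such a compiler-inserted back-edge check might look like; `TaskContext`, `BIGNUM`, and the use of `std::thread::yield_now` are illustrative assumptions, not the actual runtime interface:

```rust
// Hypothetical per-task counter bumped on every back-edge; the actual
// yield only happens once the counter wraps, so the common case is a
// single increment and compare.
const BIGNUM: u32 = 1 << 12;

struct TaskContext {
    backedge_counter: u32,
}

impl TaskContext {
    #[inline(always)]
    fn backedge_yield_check(&mut self) {
        self.backedge_counter += 1;
        if self.backedge_counter >= BIGNUM {
            self.backedge_counter = 0;
            // Stand-in for handing control back to the task scheduler.
            std::thread::yield_now();
        }
    }
}

fn main() {
    let mut ctx = TaskContext { backedge_counter: 0 };
    // The compiler would insert the check on each loop back-edge:
    for _ in 0..100_000 {
        ctx.backedge_yield_check();
    }
}
```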
@bblum So long as the stealing and thread spawning behavior is rate limited (i.e., after the spare thread sees one of the schedulers hasn't seen a yield in K ms), the scenario you're describing would occur only when a user is exclusively making non-yielding (i.e., no i/o, no sleeping, fully cpu bound) tasks. And that's the case where multiplying threads to equal tasks is probably appropriate behavior: to saturate all the available cores with computation, as best the os kernel can. It is possible someone will not want this behavior in some cases: if they make so many cpu-spinning tasks that the os literally can't handle the overhead, or perhaps if they want a fixed number of rust threads even though they want to overload them. There are other mechanisms users can employ to achieve these ends in these cases. We decided on the strategy we did because it seemed like the more appropriate default, and it avoids the worst problems (systemic taxes, artificial blocking or starvation). Most of the time, tasks do i/o or enter a potential yield point (say, malloc) regularly.
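Purely as an illustration of the rate-limited "spare thread" idea described above (the constant `K_MS`, the function names, and the bookkeeping are assumptions, not the runtime's actual design):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::time::{Duration, Instant};

const K_MS: u64 = 10; // hypothetical rate limit between spawns

// The spare thread periodically checks when each scheduler last
// yielded; only a scheduler stuck in non-yielding work for more than
// K ms triggers an extra worker, so the thread count grows slowly and
// only under sustained cpu-bound load.
fn spare_thread(last_yield_ms: Arc<Vec<AtomicU64>>, start: Instant) {
    loop {
        std::thread::sleep(Duration::from_millis(K_MS));
        let now = start.elapsed().as_millis() as u64;
        for slot in last_yield_ms.iter() {
            if now.saturating_sub(slot.load(Ordering::Relaxed)) > K_MS {
                spawn_spare_worker();
            }
        }
    }
}

fn spawn_spare_worker() {
    // Placeholder: the real runtime would start another scheduler
    // thread here so runnable tasks can be stolen and kept moving.
}
```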
Somewhat off-topic: I don't think
Blocking's not the point. It's just a possible bit of code-we-control to put a check in, that user code is unlikely to avoid for long. Similar to putting a gc check in there.
(Not at all a requirement of this scheme, agree it's off topic)
hmm, I guess I agree that it would be a pretty bad tax to insert checks pervasively. We already have a bunch of other ways tasks can hang forever. But we should make some smart choices about what runtime services are good points for yield/killed checks: malloc, spawn, stack growth, whatever. Tagging this RFC and nominating for well-defined milestone.
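A rough sketch of piggybacking the check on a runtime service entry point; `TaskState`, `rt_malloc`, the counter threshold, and the use of `panic!`/`yield_now` as stand-ins are all hypothetical:

```rust
use std::alloc::{alloc, Layout};

struct TaskState {
    killed: bool,
    counter: u32,
}

impl TaskState {
    fn check_yield_and_kill(&mut self) {
        if self.killed {
            // Stand-in for unwinding a task that another task has killed.
            panic!("task killed");
        }
        self.counter = self.counter.wrapping_add(1);
        if self.counter % 4096 == 0 {
            // Stand-in for a scheduler yield.
            std::thread::yield_now();
        }
    }
}

// Every task allocation would pass through a routine like this, making
// it a natural, low-cost place to also check for yield/kill.
fn rt_malloc(state: &mut TaskState, size: usize) -> *mut u8 {
    state.check_yield_and_kill();
    unsafe { alloc(Layout::from_size_align(size.max(1), 8).expect("bad layout")) }
}
```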
We have to be careful to keep patterns like checking the thread-local
accepted for well-defined milestone
note: I expect the thing to define-well here is "we are not injecting such things"
Still a problem
I don't think this is relevant anymore. At least, it shouldn't be. We aren't doing these now and have seen no mention of doing them in the future. I'll close the issue; it can be re-opened if somebody is particularly passionate about tasks yielding on allocation.