
Insert yield checks at appropriate places #524


Closed
eholk opened this issue Jun 20, 2011 · 18 comments
Labels
A-concurrency (Area: Concurrency) · A-runtime (Area: std's runtime and "pre-main" init for handling backtraces, unwinds, stack overflows) · E-hard (Call for participation: Hard difficulty. Experience needed to fix: a lot.) · P-medium (Medium priority)

Comments

@eholk
Contributor

eholk commented Jun 20, 2011

No description provided.

@ghost ghost assigned eholk Jun 20, 2011
@brson
Contributor

brson commented Sep 14, 2011

There's the issue of never yielding, and the issue of fairness. For 0.1 let's just try to make sure we don't tie up a scheduler forever in an iloop.

@brson
Contributor

brson commented Sep 16, 2011

I added a yield after send, which I think makes it less likely that one might create an iloop that never yields. Do we want to try to make back edges yield for 0.1? I say no.
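
A minimal sketch of the idea, using today's std types as stand-ins (the actual change lived in the old C++ runtime's send path, so this is illustration only):

```rust
use std::sync::mpsc::Sender;
use std::thread;

// After enqueueing a message, voluntarily yield so the receiver gets a
// chance to run, preventing a send-in-a-loop from monopolizing the
// scheduler thread.
fn send_then_yield<T>(tx: &Sender<T>, value: T) {
    tx.send(value).expect("receiver hung up"); // enqueue the message
    thread::yield_now();                       // give up the CPU voluntarily
}
```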

@ghost ghost assigned brson Sep 27, 2011
@ghost ghost assigned brson Mar 15, 2012
@bblum
Contributor

bblum commented Aug 16, 2012

I will note that when this happens, we will probably need to change rust_task_yield_fail() in rust_task.cpp so that it doesn't always fail when a task yields in an atomic section. Instead, it should check whether the yield was explicit or compiler-inserted, and fail in the first case or silently ignore it in the second.
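
A minimal Rust sketch of the check described above; the real code was C++ in rust_task.cpp, and every name here is invented for illustration:

```rust
enum YieldKind {
    Explicit,         // the user called yield themselves
    CompilerInserted, // a check injected by the compiler, e.g. on a back-edge
}

fn on_yield(in_atomic_section: bool, kind: YieldKind) {
    if in_atomic_section {
        match kind {
            // A deliberate yield inside an atomic section is a task bug: fail.
            YieldKind::Explicit => panic!("task yielded in an atomic section"),
            // An injected check isn't the user's fault: silently skip it.
            YieldKind::CompilerInserted => return,
        }
    }
    // ...otherwise, actually switch back to the scheduler here...
}
```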

@graydon
Contributor

graydon commented Apr 18, 2013

Discussed at the workweek: this is going to be accomplished "natively" by work stealing plus keeping at least one spare thread whenever there are more tasks than threads.

@graydon graydon closed this as completed Apr 18, 2013
@bblum
Contributor

bblum commented Jun 7, 2013

Maybe I don't understand what that means, but won't that just result in "extra threads" being allocated until there are as many threads as tasks?

@bblum
Contributor

bblum commented Jun 26, 2013

I'm pretty sure this is still an issue, so reopening. With the new runtime written in Rust, we could have a #[no_yield_checks] attribute, usable at crate or file level, that we'd put in the scheduler.

@bblum bblum reopened this Jun 26, 2013
@thestinger
Contributor

The only overhead of a thread compared to a Rust task is the context switching at arbitrary points. Inserting yield checks would be much slower than just using 1:1 threading, at least on Linux, so I don't think it makes sense to do this. You're better off with context switches than yield checks in critical loops.

@bblum
Contributor

bblum commented Jun 26, 2013

If I remember right, the plan was to insert checks on back-edges in the CFG (presumably including tail calls), and to have them actually yield only 1 in BIGNUM times. It would be worth profiling, but I think that would still save significantly over kernel-mode context switches.
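
A hedged sketch of what such a back-edge check could look like; BIGNUM's value and all names here are placeholders, not the planned implementation:

```rust
use std::cell::Cell;
use std::thread;

const BIGNUM: u32 = 4096; // tuning knob: yield once per 4096 back-edges

thread_local! {
    // Per-task countdown until the next real yield (invented name).
    static YIELD_COUNTDOWN: Cell<u32> = Cell::new(BIGNUM);
}

// What the compiler would inject at each CFG back-edge (loop branches and
// tail calls): decrement a cheap counter and yield only when it hits zero.
#[inline]
fn backedge_yield_check() {
    YIELD_COUNTDOWN.with(|c| match c.get() {
        0 => {
            c.set(BIGNUM);
            thread::yield_now(); // stand-in for a task-level yield
        }
        n => c.set(n - 1),
    });
}
```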

@graydon
Contributor

graydon commented Jun 26, 2013

@bblum So long as the stealing and thread-spawning behavior is rate-limited (i.e. the spare thread acts only after it sees that one of the schedulers hasn't seen a yield in K ms), the scenario you're describing would occur only when a user is exclusively making non-yielding (no i/o, no sleeping, fully CPU-bound) tasks. And that's the case where multiplying threads to match tasks is probably the appropriate behavior: saturating all the available cores with computation, as best the OS kernel can.

It is possible someone will not want this behavior in some case: if they create so many CPU-spinning tasks that the OS literally can't handle the overhead, or if they want a fixed number of Rust threads even though they want to overload them. There are other mechanisms users can employ to achieve those ends in those cases. We decided on this strategy because it seemed like the more appropriate default and avoids the worst problems (systemic taxes, artificial blocking or starvation). Most of the time, tasks do i/o or enter a potential yield point (say, malloc) regularly.
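
A rough sketch of that rate-limited monitoring, with invented names and a plain OS thread standing in for the spare scheduler thread:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{Duration, Instant};

// Heartbeat each scheduler bumps (in ms since start) whenever it yields.
static LAST_YIELD_MS: AtomicU64 = AtomicU64::new(0);

const K_MS: u64 = 10; // only react after K ms without a yield

fn spare_thread_watchdog(start: Instant, spawn_worker: impl Fn()) {
    loop {
        std::thread::sleep(Duration::from_millis(K_MS));
        let now_ms = start.elapsed().as_millis() as u64;
        // No yield seen recently: a scheduler is stuck in a CPU-bound task,
        // so add a thread and let work stealing move the other tasks over.
        if now_ms.saturating_sub(LAST_YIELD_MS.load(Ordering::Relaxed)) >= K_MS {
            spawn_worker();
        }
    }
}
```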

@thestinger
Contributor

Somewhat off-topic: I don't think malloc is really a potential yield point, because with jemalloc it's lock-free for allocations under 4K and only hits kernel synchronization when it actually has to make a system call (allocations over 4K, and occasionally to increase the pool size for small ones). It's never really blocking.

@graydon
Contributor

graydon commented Jun 26, 2013

Blocking's not the point. It's just a possible bit of code-we-control to put a check in, that user code is unlikely to avoid for long. Similar to putting a gc check in there.

@graydon
Contributor

graydon commented Jun 26, 2013

(Not at all a requirement of this scheme, agree it's off topic)

@bblum
Contributor

bblum commented Jul 5, 2013

hmm, I guess I agree that it would be a pretty bad tax to insert checks pervasively. We already have a bunch of other ways tasks can hang forever. But we should make some smart choices about which runtime services are good points for yield/kill checks: malloc, spawn, stack growth, whatever. Tagging this RFC and nominating for the well-defined milestone.
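
A minimal sketch of that checkpoint idea, assuming an invented TaskState; the actual hooks would sit at the entry of services like malloc, spawn, and stack growth:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Invented for illustration: per-task flags the runtime services consult.
struct TaskState {
    killed: AtomicBool,
    yield_requested: AtomicBool,
}

impl TaskState {
    // Called at the entry of each chosen runtime service, so a task that
    // allocates or grows its stack regularly also gets yield/kill checks
    // for free, without pervasive per-back-edge instrumentation.
    fn checkpoint(&self) {
        if self.killed.load(Ordering::Acquire) {
            panic!("task killed"); // unwind rather than spin forever
        }
        if self.yield_requested.swap(false, Ordering::AcqRel) {
            std::thread::yield_now(); // stand-in for a task-level yield
        }
    }
}
```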

@thestinger
Contributor

We have to be careful to keep patterns like checking the thread-local errno variable working. Inserted yield checks are definitely a feature to be aware of while writing bindings.
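
An example of the hazard, assuming the libc crate: if a yield check could be injected between the libc call and the errno read, and yielding can migrate the task to another OS thread, the binding would read the wrong thread's errno:

```rust
use std::ffi::CStr;
use std::io::Error;

fn open_readonly(path: &CStr) -> Result<i32, Error> {
    let fd = unsafe { libc::open(path.as_ptr(), libc::O_RDONLY) };
    if fd < 0 {
        // No yield check may land between open() and this line: errno is
        // thread-local, so yielding (and migrating) here could read some
        // other task's errno.
        return Err(Error::last_os_error());
    }
    Ok(fd)
}
```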

@graydon
Contributor

graydon commented Jul 11, 2013

accepted for well-defined milestone

@graydon
Contributor

graydon commented Jul 11, 2013

note: I expect the thing to define-well here is "we are not injecting such things"

@pcwalton
Contributor

Still a problem

@Aatch
Contributor

Aatch commented Mar 4, 2014

I don't think this is relevant anymore; at least, it shouldn't be. We aren't doing these checks now, and there's been no mention of doing them in the future. I'll close the issue; it can be re-opened if somebody is particularly passionate about tasks yielding on allocation.

@Aatch Aatch closed this as completed Mar 4, 2014