proposal: sync: Allow (*sync.Map).LoadOrStore uses closure to create new value like sync.Pool #44159
See previously #20884 (particularly #20884 (comment)) and #21199. I think the same concerns apply here — in particular, the potential for lock-ordering bugs if the callback can itself use the map.
Generally the pattern we use for allocation-sensitive lookups is to Load first and only fall back to LoadOrStore, allocating the value, on a miss. If the initialization is expensive enough, you can guard it with a sync.Once stored in the map entry. Which of those strategies is best depends on several properties of the access pattern — the ratio of loads to stores, tolerance for redundant work, cost of initialization, etc. I don't think we should bake assumptions about those properties into sync.Map.
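For concreteness, the Load-then-LoadOrStore strategy looks roughly like the sketch below; the Resource type and names are just an example, not from the comment:

```go
package rescache

import "sync"

// Resource stands in for some value that is expensive to construct.
type Resource struct{ key string }

func newResource(key string) *Resource { return &Resource{key: key} }

// cache maps keys to Resources.
var cache sync.Map

func get(key string) *Resource {
	// Fast path: most lookups find an existing entry and allocate nothing.
	if v, ok := cache.Load(key); ok {
		return v.(*Resource)
	}
	// Miss: build a candidate and race to publish it. If another goroutine
	// stored a value first, LoadOrStore returns theirs and ours is discarded.
	v, _ := cache.LoadOrStore(key, newResource(key))
	return v.(*Resource)
}
```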
My point is that the closure would only run when there is no existing value to load, so the up-front initialization could be skipped. Yes, I think it would make the map prone to lock-ordering bugs in the current implementation, and I agree that writing a fast loading path that falls back to LoadOrStore works.
He's not trying to combine the two. He's just asking for the ability to lazily evaluate the corresponding value for a given key. If Go were as good a language as Haskell, this would be the default implementation of sync.Map's LoadOrStore method and there wouldn't even be a need for a separate function.
Unfortunately this is more verbose than the proposal, and also slower. In extreme cases, or in cases where building the value takes a very long time rather than just a pretty long time, you might force more than one thread to create the corresponding value for the same key only to throw it away on the later LoadOrStore call. More importantly, it's a pattern that a lot of users of the sync.Map object might not come up with, because concurrency is hard. Libraries should feel comfortable designing for 110-IQ programmers when doing so doesn't sacrifice the code of 130-IQ programmers. This Load -> LoadOrStore trick slipped my mind until I read the docs further, because I didn't have a good model of what the sync.Map object provided me in terms of thread safety.
If we're bringing out sync.Mutex, why not make the users just design their own sync.Map implementation? Most of the utility of sync.Map is in preventing me from having to use sync.Mutex!
You don't have to assume anything about the current users of sync.Map; you're just extending its viability to provide thread-safe key/value access in situations where allocating the value is resource-intensive. I have run into this case twice and came here to make an issue, but it appears I've been beaten to it.
The closure passed to LoadOrStore may run a few times while racing against other goroutines before a load or store succeeds.
That's the best any implementation is going to be able to do, and you can do it in a few lines of code in your own program.
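A few-line version might look like the helper below (loadOrCompute is an invented name, not an existing API); note that f can run in several goroutines at once and the losing results are simply discarded:

```go
package lazyload

import "sync"

// loadOrCompute returns the value stored under key, computing and storing one
// with f if the key is absent. Under contention f may run in more than one
// goroutine at the same time; only one result is kept, the rest are discarded.
func loadOrCompute(m *sync.Map, key any, f func() any) (value any, loaded bool) {
	if v, ok := m.Load(key); ok {
		return v, true
	}
	return m.LoadOrStore(key, f())
}
```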
This proposal has been added to the active column of the proposals project
I don't think this is actually the case. Why can't the closure-based LoadOrStore call just internally take a lock until it's done creating the object, so that any further LoadOrStore calls for that key block until it's finished? Edit: Here's a loose implementation:
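The snippet itself didn't survive above, but the idea can be sketched roughly like this, using a per-key sync.Once so that concurrent callers for the same key block until the first one finishes building the value (the Map and entry types here are illustrative, not the commenter's exact code):

```go
package lazymap

import "sync"

// entry holds a lazily built value; once guards its construction.
type entry struct {
	once sync.Once
	val  any
}

// Map wraps sync.Map so that values are built lazily, at most once per key.
type Map struct {
	m sync.Map
}

// LoadOrCreate returns the value for key, calling newValue to build it if the
// key has none yet. Concurrent callers for the same key block inside once.Do
// until construction finishes, so newValue runs at most once per key.
// Note: if newValue re-enters LoadOrCreate for the same key it deadlocks,
// which is the lock-ordering concern raised earlier in the thread.
func (lm *Map) LoadOrCreate(key any, newValue func() any) any {
	e, _ := lm.m.LoadOrStore(key, new(entry)) // the placeholder entry is cheap
	ent := e.(*entry)
	ent.once.Do(func() { ent.val = newValue() })
	return ent.val
}
```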
That seems like it would require either per-key locks or a lock shared among many keys. Since the purpose of sync.Map is to reduce lock contention, a lock shared among many keys would defeat that purpose. That leaves per-key locks, which do not seem worth the increase in memory footprint.
Based on the discussion above, this proposal seems like a likely decline.
No change in consensus, so declined.
Currently, LoadOrStore only allows passing a value directly. Consider a case where constructing the value is expensive: if I want to avoid running that initialization on every call, I have to either give up on LoadOrStore or combine it with sync.Pool and implement the lazy initialization myself when there is no existing value to load. If there were a way to pass a New closure into the LoadOrStore function (maybe something like the sketch below), it could remove a lot of that overhead, and I think this kind of case is common.