
Commit 66b8787

mknyszek authored, andybons committed
[release-branch.go1.13] runtime: scavenge on growth instead of inline with allocation

Inline scavenging causes significant performance regressions in tail
latency for k8s and has relatively little benefit for RSS footprint.

We disabled inline scavenging in Go 1.12.5 (CL 174102) as well, but we
thought other changes in Go 1.13 had mitigated the issues with inline
scavenging. Apparently we were wrong.

This CL switches back to only doing foreground scavenging on heap
growth, rather than doing it when allocation tries to allocate from
scavenged space.

Fixes #34556

Change-Id: I1f5df44046091f0b4f89fec73c2cde98bf9448cb
Reviewed-on: https://go-review.googlesource.com/c/go/+/183857
Run-TryBot: Austin Clements <[email protected]>
TryBot-Result: Gobot Gobot <[email protected]>
Reviewed-by: Keith Randall <[email protected]>
Reviewed-by: Michael Knyszek <[email protected]>
(cherry picked from commit eb96f8a)
Reviewed-on: https://go-review.googlesource.com/c/go/+/198486
Reviewed-by: Austin Clements <[email protected]>
Run-TryBot: Andrew Bonventre <[email protected]>
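The commit message contrasts two scavenging strategies: inline scavenging, which runs every time an allocation reuses scavenged space (adding latency to the allocation itself), and on-growth scavenging, which runs only when the heap grows. The following toy model is a hedged sketch of that difference, not the runtime's actual code; `toyHeap` and its methods are invented for illustration.

```go
package main

import "fmt"

// toyHeap is a toy page heap that only counts how often the
// scavenge cost is paid under each strategy.
type toyHeap struct {
	freePages      int // free, unscavenged pages
	scavengedPages int // pages previously returned to the OS
	scavengeCalls  int // how many times we paid the scavenge cost
}

// allocInline models inline scavenging: an allocation that dips into
// scavenged space pays a scavenge on the allocation path, which is
// where the tail-latency regressions came from.
func (h *toyHeap) allocInline(npages int) {
	if h.freePages < npages {
		take := npages - h.freePages
		h.scavengedPages -= take
		h.freePages += take
		h.scavengeCalls++ // paid inline, once per such allocation
	}
	h.freePages -= npages
}

// grow models on-growth scavenging: the cost is paid once when the
// heap grows, so steady-state allocations never pay it.
func (h *toyHeap) grow(npages int) {
	h.freePages += npages
	h.scavengeCalls++ // paid once per heap growth
}

func (h *toyHeap) allocOnGrowth(npages int) {
	if h.freePages < npages {
		h.grow(npages - h.freePages)
	}
	h.freePages -= npages
}

func main() {
	inline := &toyHeap{scavengedPages: 100}
	for i := 0; i < 10; i++ {
		inline.allocInline(10)
	}

	growth := &toyHeap{}
	growth.grow(100)
	for i := 0; i < 10; i++ {
		growth.allocOnGrowth(10)
	}

	fmt.Println("inline scavenge calls:", inline.scavengeCalls)    // 10
	fmt.Println("on-growth scavenge calls:", growth.scavengeCalls) // 1
}
```

Under the same allocation pattern the inline strategy pays the cost ten times on the allocation path, while the on-growth strategy pays it once, up front, which is the trade this CL makes.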
1 parent cd951ae commit 66b8787

File tree

1 file changed (+4 additions, -10 deletions)


src/runtime/mheap.go

Lines changed: 4 additions & 10 deletions
```diff
@@ -1227,16 +1227,6 @@ HaveSpan:
 		// heap_released since we already did so earlier.
 		sysUsed(unsafe.Pointer(s.base()), s.npages<<_PageShift)
 		s.scavenged = false
-
-		// Since we allocated out of a scavenged span, we just
-		// grew the RSS. Mitigate this by scavenging enough free
-		// space to make up for it but only if we need to.
-		//
-		// scavengeLocked may cause coalescing, so prevent
-		// coalescing with s by temporarily changing its state.
-		s.state = mSpanManual
-		h.scavengeIfNeededLocked(s.npages * pageSize)
-		s.state = mSpanFree
 	}
 
 	h.setSpans(s.base(), npage, s)
@@ -1312,6 +1302,10 @@ func (h *mheap) grow(npage uintptr) bool {
 //
 // h must be locked.
 func (h *mheap) growAddSpan(v unsafe.Pointer, size uintptr) {
+	// Scavenge some pages to make up for the virtual memory space
+	// we just allocated, but only if we need to.
+	h.scavengeIfNeededLocked(size)
+
 	s := (*mspan)(h.spanalloc.alloc())
 	s.init(uintptr(v), size/pageSize)
 	h.setSpans(s.base(), s.npages, s)
```
