
Commit 4c55179

mknyszek authored and gopherbot committed
[release-branch.go1.22] runtime: clear trace map without write barriers
Currently the trace map is cleared with an assignment, but this ends up invoking write barriers. Theoretically, write barriers could try to write a trace event and eventually try to acquire the same lock. The static lock ranking expresses this constraint. This change replaces the assignment with a call to memclrNoHeapPointer to clear the map, removing the write barriers. Note that technically this problem is purely theoretical. The way the trace maps are used today is such that reset is only ever called when the tracer is no longer writing events that could emit data into a map. Furthermore, reset is never called from an event-writing context. Therefore another way to resolve this is to simply not hold the trace map lock over the reset operation. However, this makes the trace map implementation less robust because it needs to be used in a very specific way. Furthermore, the rest of the trace map code avoids write barriers already since its internal structures are all notinheap, so it's actually more consistent to just avoid write barriers in the reset method. Fixes #56554. Change-Id: Icd86472e75e25161b2c10c1c8aaae2c2fed4f67f Reviewed-on: https://go-review.googlesource.com/c/go/+/560216 Reviewed-by: Michael Pratt <[email protected]> LUCI-TryBot-Result: Go LUCI <[email protected]> (cherry picked from commit 829f2ce) Reviewed-on: https://go-review.googlesource.com/c/go/+/559957 Auto-Submit: Michael Knyszek <[email protected]>
1 parent 5d647ed commit 4c55179

File tree

1 file changed: +7 -1 lines


src/runtime/trace2map.go

Lines changed: 7 additions & 1 deletion
@@ -141,5 +141,11 @@ func (tab *traceMap) reset() {
 	assertLockHeld(&tab.lock)
 	tab.mem.drop()
 	tab.seq.Store(0)
-	tab.tab = [1 << 13]atomic.UnsafePointer{}
+	// Clear table without write barriers. The table consists entirely
+	// of notinheap pointers, so this is fine.
+	//
+	// Write barriers may theoretically call into the tracer and acquire
+	// the lock again, and this lock ordering is expressed in the static
+	// lock ranking checker.
+	memclrNoHeapPointers(unsafe.Pointer(&tab.tab), unsafe.Sizeof(tab.tab))
 }
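
The fix leans on two facts spelled out in the commit message: zeroing a field whose type contains pointers goes through the compiler's write barriers, while a raw memory clear skips them, and skipping is only sound when the memory can never hold heap pointers the garbage collector must observe. Below is a minimal user-space sketch of that trade-off, not the runtime's code: the type table and the methods clearTyped and clearRaw are hypothetical names, and clearRaw merely stands in for what memclrNoHeapPointers(unsafe.Pointer(&tab.tab), unsafe.Sizeof(tab.tab)) does inside the runtime, where the slots are atomic.UnsafePointer values holding only notinheap pointers.

package main

import (
	"fmt"
	"unsafe"
)

// table mimics the shape of the runtime's traceMap bucket array: a
// fixed-size array of pointer-sized slots. This is a hypothetical
// user-space analogue of the struct in src/runtime/trace2map.go.
type table struct {
	tab [1 << 13]unsafe.Pointer
}

// clearTyped zeroes the table with an ordinary typed assignment, as the
// old code did. Because the field's type contains pointers, the compiler
// may emit write-barrier code for this store.
func (t *table) clearTyped() {
	t.tab = [1 << 13]unsafe.Pointer{}
}

// clearRaw zeroes the same memory as bare machine words, standing in for
// the runtime's memclrNoHeapPointers call. Skipping write barriers this
// way is only sound if the slots never hold heap pointers the garbage
// collector must see; in the runtime, the invariant is that the trace
// map stores notinheap pointers only.
func (t *table) clearRaw() {
	words := (*[1 << 13]uintptr)(unsafe.Pointer(&t.tab))
	for i := range words {
		words[i] = 0
	}
}

func main() {
	t := new(table)
	t.tab[0] = unsafe.Pointer(&t.tab[1]) // any non-nil value, just for the demo
	t.clearRaw()
	fmt.Println(t.tab[0] == nil) // true: the slot was cleared without barriers
	t.clearTyped()               // same observable result via the typed path
}

The invariant matters because, under Go's concurrent garbage collector, overwriting or deleting a pointer without a write barrier can hide a still-reachable object from the marker. The runtime only skips the barrier here because, by construction, the trace map's slots never reference GC-managed memory.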
