
runtime: specialize memhash32, memhash64 #21539

Closed
@josharian

Description

memhash32 and memhash64 are defined as:

func memhash32(p unsafe.Pointer, h uintptr) uintptr {
	return memhash(p, h, 4)
}
func memhash64(p unsafe.Pointer, h uintptr) uintptr {
	return memhash(p, h, 8)
}

The generic memhash implementation contains a lot of mechanism that can be skipped when the size is known. A quick hack-up of a specialized memhash64 on amd64, with aeshash manually disabled, shows nice gains:

name                  old time/op    new time/op    delta
MapPopulate/1-8         76.7ns ± 4%    75.1ns ± 4%     ~     (p=0.055 n=10+10)
MapPopulate/10-8         613ns ± 3%     570ns ± 3%   -6.94%  (p=0.000 n=10+9)
MapPopulate/100-8       7.93µs ± 2%    7.31µs ± 3%   -7.85%  (p=0.000 n=9+10)
MapPopulate/1000-8      97.0µs ± 3%    89.1µs ± 2%   -8.20%  (p=0.000 n=10+9)
MapPopulate/10000-8      843µs ± 3%     759µs ± 2%  -10.02%  (p=0.000 n=10+9)
MapPopulate/100000-8    9.19ms ± 4%    8.69ms ± 2%   -5.38%  (p=0.000 n=10+9)

The specialized code is small, both in terms of lines of code and machine code. For non-aeshash architectures, this seems like an easy, significant win.
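For illustration, here is a minimal standalone sketch of the kind of specialization being proposed (not the runtime's actual hash algorithm; the mixing step and constants below are made up): a size-generic hash has to loop over the input and handle a tail, while a variant specialized to exactly 8 bytes is a single load plus one mix.

package main

import (
	"encoding/binary"
	"fmt"
)

// genericHash mimics the shape of a size-generic memhash: it loops over the
// input in 8-byte chunks and then handles the tail byte by byte, paying for
// the loop and length checks even when the size is a known constant.
func genericHash(b []byte, seed uint64) uint64 {
	h := seed ^ 0x9e3779b97f4a7c15
	for len(b) >= 8 {
		h = mix(h, binary.LittleEndian.Uint64(b))
		b = b[8:]
	}
	for _, c := range b {
		h = mix(h, uint64(c))
	}
	return h
}

// hash64 is the specialized variant for exactly 8 bytes: one load, one mix,
// no loop and no size dispatch.
func hash64(b []byte, seed uint64) uint64 {
	return mix(seed^0x9e3779b97f4a7c15, binary.LittleEndian.Uint64(b))
}

// mix is a stand-in mixing round (multiply and xor-shift); the real memhash
// uses a different mixer.
func mix(h, v uint64) uint64 {
	h ^= v
	h *= 0x100000001b3
	return h ^ (h >> 29)
}

func main() {
	key := []byte{1, 2, 3, 4, 5, 6, 7, 8}
	fmt.Printf("generic:     %#x\n", genericHash(key, 42))
	fmt.Printf("specialized: %#x\n", hash64(key, 42))
}

For an 8-byte key the two produce the same value; the specialized path just drops the loop and tail handling, which is the same kind of mechanism a runtime-level specialization would skip.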

Leaving for someone else to implement all the way through and benchmark on a non-aeshash architecture, due to my limited cycles.

cc @martisch @philhofer @randall77

Labels: FrozenDueToAge, Performance, Suggested
