
[IR] Initial introduction of memset_pattern #97583


Merged

Commits (29)
627f1ef
[IR] Initial introduction of memset_pattern
asb Jul 10, 2024
3a0d10a
Tweak wording in Langref description based on feedback
asb Jul 24, 2024
d710a1f
Removing ConstantInt getLength (holdover from when this was a restric…
asb Jul 24, 2024
4bcc00e
Properly update memset-pattern.ll test cases
asb Jul 24, 2024
7d1347c
Removed outdated comment
asb Jul 31, 2024
60ba68b
Change to memset_pattern taking a count rather than a number of bytes
asb Jul 31, 2024
be558f9
Rename to llvm.memset.pattern as requested in review
asb Jul 31, 2024
dfc0564
Add comments to memset_pattern intrinsic to describe args
asb Aug 14, 2024
8ac8b69
Improve memset.pattern langref: fix outdated refs to bytes and mentio…
asb Aug 14, 2024
6d16c82
Excise errant memset_pattern mention
asb Sep 11, 2024
55ee84a
Fix incorrect mangling in LangRef and explain memory address is incre…
asb Sep 11, 2024
1e60edd
Allow memset.pattern expansion for big endian targets
asb Sep 11, 2024
88b5af3
Allow non-power-of-two length patterns
asb Sep 11, 2024
ea429b4
Remove unnecessary and incorrect mangling from llvm.memset.pattern uses
asb Sep 11, 2024
e9c98c8
Rename memset-pattern-inline.ll test to memset-pattern.ll to reflect …
asb Sep 11, 2024
30d59b9
Remove unnecessary comment
asb Sep 11, 2024
d83fdfb
Fix logic for alignment of stores in memset.pattern expansion
asb Sep 11, 2024
64bc6af
Merge remote-tracking branch 'origin/main' into 2024q2-memset-pattern…
asb Nov 6, 2024
c19adc1
Regenerate memset-pattern.ll after merge
asb Nov 8, 2024
03e07d5
Use normal createMemsetAsLoop helper for memset.pattern
asb Nov 8, 2024
a7373b7
Rename to llvm.experimental.memset.pattern
asb Nov 8, 2024
a68aa8d
Move MemSetPattern out of the MemSet hierarchy
asb Nov 8, 2024
9580ab0
Fix underline length in langref
asb Nov 8, 2024
4ebc985
Address review comments
asb Nov 9, 2024
78bad3b
Verkfy llvm.experimental.memset.pattern pattern arg is integral numbe…
asb Nov 9, 2024
71dd9b5
Revert "Verkfy llvm.experimental.memset.pattern pattern arg is integr…
asb Nov 9, 2024
ad7585c
Remove outdated comment about integral bit widths only
asb Nov 9, 2024
4d6d9ab
Adopt Nikita's langref rewording suggestion
asb Nov 13, 2024
0b0e81e
typo fixes
asb Nov 15, 2024
57 changes: 57 additions & 0 deletions llvm/docs/LangRef.rst
@@ -15434,6 +15434,63 @@ The behavior of '``llvm.memset.inline.*``' is equivalent to the behavior of
'``llvm.memset.*``', but the generated code is guaranteed not to call any
external functions.

.. _int_experimental_memset_pattern:

'``llvm.experimental.memset.pattern``' Intrinsic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Syntax:
"""""""

This is an overloaded intrinsic. You can use
``llvm.experimental.memset.pattern`` on any integer bit width and for
different address spaces. Not all targets support all bit widths, however.
Collaborator: Do we need a Verifier check that the size is an integral number of bytes?

Contributor Author: I've now added this check to the verifier.

Contributor Author: Actually, that was my bad. There was some discussion about this before; in the end, non-integral bit widths should work and are tested in llvm/test/Transforms/PreISelIntrinsicLowering/RISCV/memset-pattern.ll. I've now pushed a fix to LangRef to remove the stated restriction.


::

declare void @llvm.experimental.memset.pattern.p0.i128.i64(ptr <dest>, i128 <val>,
i64 <count>, i1 <isvolatile>)

Overview:
"""""""""

The '``llvm.experimental.memset.pattern.*``' intrinsics fill a block of memory
with a particular value. This may be expanded to an inline loop, a sequence of
stores, or a libcall depending on what is available for the target and the
expected performance and code size impact.

Arguments:
""""""""""

The first argument is a pointer to the destination to fill, the second
is the value with which to fill it, the third is an integer specifying
the number of times to store the value, and the fourth is a boolean
indicating whether the access is volatile.

The :ref:`align <attr_align>` parameter attribute can be provided
for the first argument.

If the ``isvolatile`` parameter is ``true``, the
``llvm.experimental.memset.pattern`` call is a :ref:`volatile operation
<volatile>`. The detailed access behavior is not very cleanly specified and it
is unwise to depend on it.

Semantics:
""""""""""

The '``llvm.experimental.memset.pattern*``' intrinsic fills memory starting at
the destination location with the given pattern ``<count>`` times,
incrementing by the allocation size of the type each time. The stores follow
the usual semantics of store instructions, including regarding endianness and
padding. If the argument is known to be aligned to some boundary, this can be
specified as an attribute on the argument.

If ``<count>`` is 0, it is a no-op modulo the behavior of attributes attached to
the arguments.
If ``<count>`` is not a well-defined value, the behavior is undefined.
If ``<count>`` is not zero, ``<dest>`` should be well-defined, otherwise the
behavior is undefined.

Contributor: If the size is not an integral number of bytes, what happens with the padding bits? Do they get unspecified / undefined values, or do they preserve their values? Should this be clarified in LangRef?

Contributor: The behavior is the same as whatever stores of the type would do. Possibly we should just explicitly refer to store semantics here and reduce the description to something like: "The '``llvm.experimental.memset.pattern*``' intrinsic stores the provided value ``<count>`` times, incrementing by the allocation size of the type each time. The stores follow the usual semantics of store instructions, including regarding endianness and padding."

Contributor Author: Thanks, that's much clearer. I've adopted that wording.
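As an illustration of the semantics above, here is a hand-written sketch of a call site (not taken from the patch itself). It assumes the ``.p0.i32.i64`` suffix implied by the overloaded pointer, pattern, and count types in the declaration shown earlier; the function name ``@fill_pattern`` is hypothetical:

```llvm
; Sketch: store the 32-bit pattern 0xAAAAAAAA (written here as the decimal
; constant -1431655766) 128 times, i.e. fill 512 bytes starting at %dst.
; The i1 false marks the access as non-volatile.
declare void @llvm.experimental.memset.pattern.p0.i32.i64(ptr, i32, i64, i1 immarg)

define void @fill_pattern(ptr %dst) {
  call void @llvm.experimental.memset.pattern.p0.i32.i64(
      ptr align 4 %dst, i32 -1431655766, i64 128, i1 false)
  ret void
}
```

Note that the third argument is a count of pattern stores, not a byte size, per the commit that changed the intrinsic from taking a number of bytes.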

.. _int_sqrt:

'``llvm.sqrt.*``' Intrinsic
5 changes: 5 additions & 0 deletions llvm/include/llvm/IR/InstVisitor.h
@@ -208,6 +208,9 @@ class InstVisitor {
RetTy visitDbgInfoIntrinsic(DbgInfoIntrinsic &I){ DELEGATE(IntrinsicInst); }
RetTy visitMemSetInst(MemSetInst &I) { DELEGATE(MemIntrinsic); }
RetTy visitMemSetInlineInst(MemSetInlineInst &I){ DELEGATE(MemSetInst); }
RetTy visitMemSetPatternInst(MemSetPatternInst &I) {
DELEGATE(IntrinsicInst);
}
RetTy visitMemCpyInst(MemCpyInst &I) { DELEGATE(MemTransferInst); }
RetTy visitMemCpyInlineInst(MemCpyInlineInst &I){ DELEGATE(MemCpyInst); }
RetTy visitMemMoveInst(MemMoveInst &I) { DELEGATE(MemTransferInst); }
@@ -295,6 +298,8 @@ class InstVisitor {
case Intrinsic::memset: DELEGATE(MemSetInst);
case Intrinsic::memset_inline:
DELEGATE(MemSetInlineInst);
case Intrinsic::experimental_memset_pattern:
DELEGATE(MemSetPatternInst);
case Intrinsic::vastart: DELEGATE(VAStartInst);
case Intrinsic::vaend: DELEGATE(VAEndInst);
case Intrinsic::vacopy: DELEGATE(VACopyInst);
35 changes: 35 additions & 0 deletions llvm/include/llvm/IR/IntrinsicInst.h
@@ -1263,6 +1263,41 @@ class MemSetInlineInst : public MemSetInst {
}
};

/// This is the base class for llvm.experimental.memset.pattern
class MemSetPatternIntrinsic : public MemIntrinsicBase<MemIntrinsic> {
private:
enum { ARG_VOLATILE = 3 };

public:
ConstantInt *getVolatileCst() const {
return cast<ConstantInt>(const_cast<Value *>(getArgOperand(ARG_VOLATILE)));
}

bool isVolatile() const { return !getVolatileCst()->isZero(); }

void setVolatile(Constant *V) { setArgOperand(ARG_VOLATILE, V); }

// Methods for support of type inquiry through isa, cast, and dyn_cast:
static bool classof(const IntrinsicInst *I) {
return I->getIntrinsicID() == Intrinsic::experimental_memset_pattern;
}
static bool classof(const Value *V) {
return isa<IntrinsicInst>(V) && classof(cast<IntrinsicInst>(V));
}
};

/// This class wraps the llvm.experimental.memset.pattern intrinsic.
class MemSetPatternInst : public MemSetBase<MemSetPatternIntrinsic> {
public:
// Methods for support type inquiry through isa, cast, and dyn_cast:
static bool classof(const IntrinsicInst *I) {
return I->getIntrinsicID() == Intrinsic::experimental_memset_pattern;
}
static bool classof(const Value *V) {
return isa<IntrinsicInst>(V) && classof(cast<IntrinsicInst>(V));
}
};
Contributor: Why do we need both MemSetPatternIntrinsic and MemSetPatternInst?

Contributor Author: I'm mirroring the pattern followed by the other mem intrinsics. Although it's not super pretty, having both classes (as for the standard MemSet intrinsics) is the approach that minimizes copy-and-paste of accessor code.

Contributor: Hm, okay. I thought this might only be needed for cases where we have both atomic and non-atomic variants.


/// This class wraps the llvm.memcpy/memmove intrinsics.
class MemTransferInst : public MemTransferBase<MemIntrinsic> {
public:
11 changes: 11 additions & 0 deletions llvm/include/llvm/IR/Intrinsics.td
@@ -1006,6 +1006,17 @@ def int_memset_inline
NoCapture<ArgIndex<0>>, WriteOnly<ArgIndex<0>>,
ImmArg<ArgIndex<3>>]>;

// Memset variant that writes a given pattern.
Contributor: Comment what the operands are.

Contributor Author: Added with inline comments. (There's not a totally consistent pattern for describing args; many intrinsics have no description at all, but some other examples in the file use inline comments for the args.)

Contributor: Really we ought to have mandatory tablegen doc strings.

def int_experimental_memset_pattern
: Intrinsic<[],
[llvm_anyptr_ty, // Destination.
llvm_anyint_ty, // Pattern value.
llvm_anyint_ty, // Count (number of times to fill value).
llvm_i1_ty], // IsVolatile.
[IntrWriteMem, IntrArgMemOnly, IntrWillReturn, IntrNoFree, IntrNoCallback,
      NoCapture<ArgIndex<0>>, WriteOnly<ArgIndex<0>>,
      ImmArg<ArgIndex<3>>]>;

Contributor: DefaultAttrIntrinsic would hide most of these.

Contributor Author: I dug through this and unfortunately I can't make use of DefaultAttrIntrinsic, because nosync isn't necessarily true. https://reviews.llvm.org/D86021 switched memset over to DefaultAttrIntrinsic, but this was later backed out in a888e49 because nosync doesn't apply unconditionally.

// FIXME: Add version of these floating point intrinsics which allow non-default
// rounding modes and FP exception handling.

4 changes: 4 additions & 0 deletions llvm/include/llvm/Transforms/Utils/LowerMemIntrinsics.h
@@ -25,6 +25,7 @@ class Instruction;
class MemCpyInst;
class MemMoveInst;
class MemSetInst;
class MemSetPatternInst;
class ScalarEvolution;
class TargetTransformInfo;
class Value;
@@ -57,6 +58,9 @@ bool expandMemMoveAsLoop(MemMoveInst *MemMove, const TargetTransformInfo &TTI);
/// Expand \p MemSet as a loop. \p MemSet is not deleted.
void expandMemSetAsLoop(MemSetInst *MemSet);

/// Expand \p MemSet as a loop. \p MemSet is not deleted.
void expandMemSetPatternAsLoop(MemSetPatternInst *MemSet);

/// Expand \p AtomicMemCpy as a loop. \p AtomicMemCpy is not deleted.
void expandAtomicMemCpyAsLoop(AtomicMemCpyInst *AtomicMemCpy,
const TargetTransformInfo &TTI,
8 changes: 8 additions & 0 deletions llvm/lib/CodeGen/PreISelIntrinsicLowering.cpp
@@ -320,6 +320,13 @@ bool PreISelIntrinsicLowering::expandMemIntrinsicUses(Function &F) const {
Memset->eraseFromParent();
break;
}
case Intrinsic::experimental_memset_pattern: {
auto *Memset = cast<MemSetPatternInst>(Inst);
expandMemSetPatternAsLoop(Memset);
Changed = true;
Memset->eraseFromParent();
break;
}
default:
llvm_unreachable("unhandled intrinsic");
}
@@ -339,6 +346,7 @@ bool PreISelIntrinsicLowering::lowerIntrinsics(Module &M) const {
case Intrinsic::memmove:
case Intrinsic::memset:
case Intrinsic::memset_inline:
case Intrinsic::experimental_memset_pattern:
Changed |= expandMemIntrinsicUses(F);
break;
case Intrinsic::load_relative:
3 changes: 2 additions & 1 deletion llvm/lib/IR/Verifier.cpp
@@ -5519,7 +5519,8 @@ void Verifier::visitIntrinsicCall(Intrinsic::ID ID, CallBase &Call) {
case Intrinsic::memcpy_inline:
case Intrinsic::memmove:
case Intrinsic::memset:
case Intrinsic::memset_inline: {
case Intrinsic::memset_inline:
case Intrinsic::experimental_memset_pattern: {
break;
}
case Intrinsic::memcpy_element_unordered_atomic:
9 changes: 9 additions & 0 deletions llvm/lib/Transforms/Utils/LowerMemIntrinsics.cpp
@@ -970,6 +970,15 @@ void llvm::expandMemSetAsLoop(MemSetInst *Memset) {
Memset->isVolatile());
}

void llvm::expandMemSetPatternAsLoop(MemSetPatternInst *Memset) {
createMemSetLoop(/* InsertBefore=*/Memset,
/* DstAddr=*/Memset->getRawDest(),
/* CopyLen=*/Memset->getLength(),
/* SetValue=*/Memset->getValue(),
/* Alignment=*/Memset->getDestAlign().valueOrOne(),
Memset->isVolatile());
}
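For orientation, the loop that this expansion produces looks conceptually like the following. This is a hand-written sketch, not the verbatim output of createMemSetLoop; the function and value names are illustrative, and an i32 pattern is assumed so each iteration advances by 4 bytes (the allocation size of the pattern type):

```llvm
; Hypothetical expansion of a non-volatile call:
;   call void @llvm.experimental.memset.pattern.p0.i32.i64(
;       ptr %dst, i32 %pat, i64 %count, i1 false)
define void @memset_pattern_expanded(ptr %dst, i32 %pat, i64 %count) {
entry:
  ; Guard the loop so a zero count stores nothing.
  %empty = icmp eq i64 %count, 0
  br i1 %empty, label %done, label %loop

loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  ; One pattern-typed store per iteration, indexed by element, so the
  ; address advances by the allocation size of i32 each time.
  %addr = getelementptr inbounds i32, ptr %dst, i64 %i
  store i32 %pat, ptr %addr, align 4
  %i.next = add nuw i64 %i, 1
  %again = icmp ult i64 %i.next, %count
  br i1 %again, label %loop, label %done

done:
  ret void
}
```

Because the expansion emits ordinary store instructions, the endianness and padding behavior follows directly from store semantics, as the LangRef wording adopted in this PR states.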

void llvm::expandAtomicMemCpyAsLoop(AtomicMemCpyInst *AtomicMemcpy,
const TargetTransformInfo &TTI,
ScalarEvolution *SE) {