
[ELF] Rename target-specific RelExpr enumerators #118424


Merged

Conversation

MaskRay (Member) commented Dec 3, 2024

RelExpr enumerators are named R_*, which can be confused with ELF
relocation type names. Rename the target-specific ones to RE_* to
avoid confusion.
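For illustration only, here is a minimal, self-contained C++ sketch of the naming clash the rename addresses; it is not lld's actual declarations, and the enumerator values and helper name below are assumptions. ELF relocation types and lld's internal RelExpr enumerators both carried an R_ prefix, so the target-specific RelExpr members gain an RE_ prefix. Note that the truncated diff quoted further below still shows the earlier REL_ spelling, which was changed to RE_ after review (see the discussion near the end).

#include <cstdint>
#include <cstdio>

// ELF relocation types as defined by the AArch64 psABI (numeric values shown
// here only for illustration).
enum RelType : uint32_t {
  R_AARCH64_ADR_PREL_PG_HI21 = 0x113,
  R_AARCH64_ADR_GOT_PAGE = 0x137,
};

// lld's internal classification of how a relocated value is computed.
// Target-specific members now use RE_; target-independent ones (R_ABS,
// R_GOT_PC, ...) keep R_ for the time being, per the description above.
enum RelExpr {
  R_ABS,
  R_GOT_PC,
  RE_AARCH64_PAGE_PC,     // previously R_AARCH64_PAGE_PC
  RE_AARCH64_GOT_PAGE_PC, // previously R_AARCH64_GOT_PAGE_PC
};

// Shape of the mapping performed by getRelExpr() in Arch/AArch64.cpp
// ("classify" is a placeholder name for this sketch).
static RelExpr classify(RelType type) {
  switch (type) {
  case R_AARCH64_ADR_PREL_PG_HI21:
    return RE_AARCH64_PAGE_PC;
  case R_AARCH64_ADR_GOT_PAGE:
    return RE_AARCH64_GOT_PAGE_PC;
  }
  return R_ABS;
}

int main() {
  std::printf("RelExpr for ADR_PREL_PG_HI21: %d\n",
              classify(R_AARCH64_ADR_PREL_PG_HI21));
}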

For consistency, the target-independent ones can be renamed as well, but
that's not urgent. The relocation processing mechanism with RelExpr has
non-trivial overhead compared with mold's approach, and we might move more
code into Arch/*.cpp files and reduce the number of enumerators.
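The two-stage mechanism this refers to is visible in the diff below: getRelExpr() classifies each relocation type into a RelExpr during scanning, and getRelocTargetVA() later switches on that RelExpr to compute the value. A schematic sketch follows; the Inputs struct, helper names, and sample addresses are placeholders for this sketch, not lld APIs.

#include <cstdint>
#include <cstdio>

enum RelExpr { R_ABS, R_GOT_PC, RE_AARCH64_PAGE_PC, RE_AARCH64_GOT_PAGE_PC };

// Placeholder inputs a linker would derive from the symbol and output layout.
struct Inputs {
  uint64_t symVA; // S + A
  uint64_t gotVA; // VA of the symbol's GOT entry, plus A
  uint64_t pc;    // P, the place being relocated
};

// 4 KiB page of an address, as getAArch64Page() computes in lld.
static uint64_t pageOf(uint64_t v) { return v & ~uint64_t(0xfff); }

// Mirrors the shape of InputSectionBase::getRelocTargetVA() shown in the diff.
static uint64_t relocTargetVA(RelExpr e, const Inputs &in) {
  switch (e) {
  case R_ABS:
    return in.symVA;
  case R_GOT_PC:
    return in.gotVA - in.pc;
  case RE_AARCH64_PAGE_PC:
    return pageOf(in.symVA) - pageOf(in.pc);
  case RE_AARCH64_GOT_PAGE_PC:
    return pageOf(in.gotVA) - pageOf(in.pc);
  }
  return 0;
}

int main() {
  Inputs in{0x414000, 0x420018, 0x401234};
  std::printf("%#llx\n",
              (unsigned long long)relocTargetVA(RE_AARCH64_PAGE_PC, in));
}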

Created using spr 1.3.5-bogner
llvmbot (Member) commented Dec 3, 2024

@llvm/pr-subscribers-lld-elf

Author: Fangrui Song (MaskRay)

Changes

RelExpr enumerators are named R_*, which can be confused with ELF
relocation type names. Rename the target-specific ones to REL_* to
avoid confusion.

For consistency, the target-independent ones can be renamed as well, but
that's not urgent. The relocation processing mechanism with RelExpr has
non-trivial overhead compared with mold's approach, and we might move more
code into Arch/*.cpp files and reduce the number of enumerators.


Patch is 36.27 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/118424.diff

11 Files Affected:

  • (modified) lld/ELF/Arch/AArch64.cpp (+10-10)
  • (modified) lld/ELF/Arch/ARM.cpp (+3-3)
  • (modified) lld/ELF/Arch/LoongArch.cpp (+11-11)
  • (modified) lld/ELF/Arch/Mips.cpp (+9-9)
  • (modified) lld/ELF/Arch/PPC.cpp (+1-1)
  • (modified) lld/ELF/Arch/PPC64.cpp (+8-8)
  • (modified) lld/ELF/Arch/RISCV.cpp (+4-4)
  • (modified) lld/ELF/InputSection.cpp (+37-37)
  • (modified) lld/ELF/Relocations.cpp (+43-42)
  • (modified) lld/ELF/Relocations.h (+31-31)
  • (modified) lld/ELF/SyntheticSections.cpp (+2-2)
diff --git a/lld/ELF/Arch/AArch64.cpp b/lld/ELF/Arch/AArch64.cpp
index 5b5ad482ea1279..a94f2ee037cecd 100644
--- a/lld/ELF/Arch/AArch64.cpp
+++ b/lld/ELF/Arch/AArch64.cpp
@@ -154,9 +154,9 @@ RelExpr AArch64::getRelExpr(RelType type, const Symbol &s,
   case R_AARCH64_MOVW_UABS_G3:
     return R_ABS;
   case R_AARCH64_AUTH_ABS64:
-    return R_AARCH64_AUTH;
+    return REL_R_AARCH64_AUTH;
   case R_AARCH64_TLSDESC_ADR_PAGE21:
-    return R_AARCH64_TLSDESC_PAGE;
+    return REL_R_AARCH64_TLSDESC_PAGE;
   case R_AARCH64_TLSDESC_LD64_LO12:
   case R_AARCH64_TLSDESC_ADD_LO12:
     return R_TLSDESC;
@@ -198,15 +198,15 @@ RelExpr AArch64::getRelExpr(RelType type, const Symbol &s,
     return R_PC;
   case R_AARCH64_ADR_PREL_PG_HI21:
   case R_AARCH64_ADR_PREL_PG_HI21_NC:
-    return R_AARCH64_PAGE_PC;
+    return REL_R_AARCH64_PAGE_PC;
   case R_AARCH64_LD64_GOT_LO12_NC:
   case R_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC:
     return R_GOT;
   case R_AARCH64_LD64_GOTPAGE_LO15:
-    return R_AARCH64_GOT_PAGE;
+    return REL_R_AARCH64_GOT_PAGE;
   case R_AARCH64_ADR_GOT_PAGE:
   case R_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
-    return R_AARCH64_GOT_PAGE_PC;
+    return REL_R_AARCH64_GOT_PAGE_PC;
   case R_AARCH64_GOTPCREL32:
   case R_AARCH64_GOT_LD_PREL19:
     return R_GOT_PC;
@@ -222,7 +222,7 @@ RelExpr AArch64::getRelExpr(RelType type, const Symbol &s,
 RelExpr AArch64::adjustTlsExpr(RelType type, RelExpr expr) const {
   if (expr == R_RELAX_TLS_GD_TO_IE) {
     if (type == R_AARCH64_TLSDESC_ADR_PAGE21)
-      return R_AARCH64_RELAX_TLS_GD_TO_IE_PAGE_PC;
+      return REL_R_AARCH64_RELAX_TLS_GD_TO_IE_PAGE_PC;
     return R_RELAX_TLS_GD_TO_IE_ABS;
   }
   return expr;
@@ -877,7 +877,7 @@ bool AArch64Relaxer::tryRelaxAdrpLdr(const Relocation &adrpRel,
   if (val != llvm::SignExtend64(val, 33))
     return false;
 
-  Relocation adrpSymRel = {R_AARCH64_PAGE_PC, R_AARCH64_ADR_PREL_PG_HI21,
+  Relocation adrpSymRel = {REL_R_AARCH64_PAGE_PC, R_AARCH64_ADR_PREL_PG_HI21,
                            adrpRel.offset, /*addend=*/0, &sym};
   Relocation addRel = {R_ABS, R_AARCH64_ADD_ABS_LO12_NC, ldrRel.offset,
                        /*addend=*/0, &sym};
@@ -922,21 +922,21 @@ void AArch64::relocateAlloc(InputSectionBase &sec, uint8_t *buf) const {
     }
 
     switch (rel.expr) {
-    case R_AARCH64_GOT_PAGE_PC:
+    case REL_R_AARCH64_GOT_PAGE_PC:
       if (i + 1 < size &&
           relaxer.tryRelaxAdrpLdr(rel, sec.relocs()[i + 1], secAddr, buf)) {
         ++i;
         continue;
       }
       break;
-    case R_AARCH64_PAGE_PC:
+    case REL_R_AARCH64_PAGE_PC:
       if (i + 1 < size &&
           relaxer.tryRelaxAdrpAdd(rel, sec.relocs()[i + 1], secAddr, buf)) {
         ++i;
         continue;
       }
       break;
-    case R_AARCH64_RELAX_TLS_GD_TO_IE_PAGE_PC:
+    case REL_R_AARCH64_RELAX_TLS_GD_TO_IE_PAGE_PC:
     case R_RELAX_TLS_GD_TO_IE_ABS:
       relaxTlsGdToIe(loc, rel, val);
       continue;
diff --git a/lld/ELF/Arch/ARM.cpp b/lld/ELF/Arch/ARM.cpp
index 62685b1e7dedea..6b24fd95c84f5d 100644
--- a/lld/ELF/Arch/ARM.cpp
+++ b/lld/ELF/Arch/ARM.cpp
@@ -136,7 +136,7 @@ RelExpr ARM::getRelExpr(RelType type, const Symbol &s,
     // GOT(S) + A - P
     return R_GOT_PC;
   case R_ARM_SBREL32:
-    return R_ARM_SBREL;
+    return REL_R_ARM_SBREL;
   case R_ARM_TARGET1:
     return ctx.arg.target1Rel ? R_PC : R_ABS;
   case R_ARM_TARGET2:
@@ -176,14 +176,14 @@ RelExpr ARM::getRelExpr(RelType type, const Symbol &s,
   case R_ARM_THM_ALU_PREL_11_0:
   case R_ARM_THM_PC8:
   case R_ARM_THM_PC12:
-    return R_ARM_PCA;
+    return REL_R_ARM_PCA;
   case R_ARM_MOVW_BREL_NC:
   case R_ARM_MOVW_BREL:
   case R_ARM_MOVT_BREL:
   case R_ARM_THM_MOVW_BREL_NC:
   case R_ARM_THM_MOVW_BREL:
   case R_ARM_THM_MOVT_BREL:
-    return R_ARM_SBREL;
+    return REL_R_ARM_SBREL;
   case R_ARM_NONE:
     return R_NONE;
   case R_ARM_TLS_LE32:
diff --git a/lld/ELF/Arch/LoongArch.cpp b/lld/ELF/Arch/LoongArch.cpp
index ebfdbafc9983e7..2771e93fb3ce01 100644
--- a/lld/ELF/Arch/LoongArch.cpp
+++ b/lld/ELF/Arch/LoongArch.cpp
@@ -428,7 +428,7 @@ RelExpr LoongArch::getRelExpr(const RelType type, const Symbol &s,
   case R_LARCH_SUB_ULEB128:
     // The LoongArch add/sub relocs behave like the RISCV counterparts; reuse
     // the RelExpr to avoid code duplication.
-    return R_RISCV_ADD;
+    return REL_R_RISCV_ADD;
   case R_LARCH_32_PCREL:
   case R_LARCH_64_PCREL:
   case R_LARCH_PCREL20_S2:
@@ -444,17 +444,17 @@ RelExpr LoongArch::getRelExpr(const RelType type, const Symbol &s,
   case R_LARCH_TLS_IE_PC_HI20:
   case R_LARCH_TLS_IE64_PC_LO20:
   case R_LARCH_TLS_IE64_PC_HI12:
-    return R_LOONGARCH_GOT_PAGE_PC;
+    return REL_R_LOONGARCH_GOT_PAGE_PC;
   case R_LARCH_GOT_PC_LO12:
   case R_LARCH_TLS_IE_PC_LO12:
-    return R_LOONGARCH_GOT;
+    return REL_R_LOONGARCH_GOT;
   case R_LARCH_TLS_LD_PC_HI20:
   case R_LARCH_TLS_GD_PC_HI20:
-    return R_LOONGARCH_TLSGD_PAGE_PC;
+    return REL_R_LOONGARCH_TLSGD_PAGE_PC;
   case R_LARCH_PCALA_HI20:
-    // Why not R_LOONGARCH_PAGE_PC, majority of references don't go through PLT
-    // anyway so why waste time checking only to get everything relaxed back to
-    // it?
+    // Why not REL_R_LOONGARCH_PAGE_PC, majority of references don't go through
+    // PLT anyway so why waste time checking only to get everything relaxed back
+    // to it?
     //
     // This is again due to the R_LARCH_PCALA_LO12 on JIRL case, where we want
     // both the HI20 and LO12 to potentially refer to the PLT. But in reality
@@ -474,12 +474,12 @@ RelExpr LoongArch::getRelExpr(const RelType type, const Symbol &s,
     //
     // So, unfortunately we have to again workaround this quirk the same way as
     // BFD: assuming every R_LARCH_PCALA_HI20 is potentially PLT-needing, only
-    // relaxing back to R_LOONGARCH_PAGE_PC if it's known not so at a later
+    // relaxing back to REL_R_LOONGARCH_PAGE_PC if it's known not so at a later
     // stage.
-    return R_LOONGARCH_PLT_PAGE_PC;
+    return REL_R_LOONGARCH_PLT_PAGE_PC;
   case R_LARCH_PCALA64_LO20:
   case R_LARCH_PCALA64_HI12:
-    return R_LOONGARCH_PAGE_PC;
+    return REL_R_LOONGARCH_PAGE_PC;
   case R_LARCH_GOT_HI20:
   case R_LARCH_GOT_LO12:
   case R_LARCH_GOT64_LO20:
@@ -501,7 +501,7 @@ RelExpr LoongArch::getRelExpr(const RelType type, const Symbol &s,
   case R_LARCH_TLS_DESC_PC_HI20:
   case R_LARCH_TLS_DESC64_PC_LO20:
   case R_LARCH_TLS_DESC64_PC_HI12:
-    return R_LOONGARCH_TLSDESC_PAGE_PC;
+    return REL_R_LOONGARCH_TLSDESC_PAGE_PC;
   case R_LARCH_TLS_DESC_PC_LO12:
   case R_LARCH_TLS_DESC_LD:
   case R_LARCH_TLS_DESC_HI20:
diff --git a/lld/ELF/Arch/Mips.cpp b/lld/ELF/Arch/Mips.cpp
index da76820de240d5..c2e31965e3cb92 100644
--- a/lld/ELF/Arch/Mips.cpp
+++ b/lld/ELF/Arch/Mips.cpp
@@ -105,7 +105,7 @@ RelExpr MIPS<ELFT>::getRelExpr(RelType type, const Symbol &s,
   case R_MIPS_GPREL32:
   case R_MICROMIPS_GPREL16:
   case R_MICROMIPS_GPREL7_S2:
-    return R_MIPS_GOTREL;
+    return REL_R_MIPS_GOTREL;
   case R_MIPS_26:
   case R_MICROMIPS_26_S1:
     return R_PLT;
@@ -122,9 +122,9 @@ RelExpr MIPS<ELFT>::getRelExpr(RelType type, const Symbol &s,
     // equal to the start of .got section. In that case we consider these
     // relocations as relative.
     if (&s == ctx.sym.mipsGpDisp)
-      return R_MIPS_GOT_GP_PC;
+      return REL_R_MIPS_GOT_GP_PC;
     if (&s == ctx.sym.mipsLocalGp)
-      return R_MIPS_GOT_GP;
+      return REL_R_MIPS_GOT_GP;
     [[fallthrough]];
   case R_MIPS_32:
   case R_MIPS_64:
@@ -163,14 +163,14 @@ RelExpr MIPS<ELFT>::getRelExpr(RelType type, const Symbol &s,
   case R_MIPS_GOT16:
   case R_MICROMIPS_GOT16:
     if (s.isLocal())
-      return R_MIPS_GOT_LOCAL_PAGE;
+      return REL_R_MIPS_GOT_LOCAL_PAGE;
     [[fallthrough]];
   case R_MIPS_CALL16:
   case R_MIPS_GOT_DISP:
   case R_MIPS_TLS_GOTTPREL:
   case R_MICROMIPS_CALL16:
   case R_MICROMIPS_TLS_GOTTPREL:
-    return R_MIPS_GOT_OFF;
+    return REL_R_MIPS_GOT_OFF;
   case R_MIPS_CALL_HI16:
   case R_MIPS_CALL_LO16:
   case R_MIPS_GOT_HI16:
@@ -179,15 +179,15 @@ RelExpr MIPS<ELFT>::getRelExpr(RelType type, const Symbol &s,
   case R_MICROMIPS_CALL_LO16:
   case R_MICROMIPS_GOT_HI16:
   case R_MICROMIPS_GOT_LO16:
-    return R_MIPS_GOT_OFF32;
+    return REL_R_MIPS_GOT_OFF32;
   case R_MIPS_GOT_PAGE:
-    return R_MIPS_GOT_LOCAL_PAGE;
+    return REL_R_MIPS_GOT_LOCAL_PAGE;
   case R_MIPS_TLS_GD:
   case R_MICROMIPS_TLS_GD:
-    return R_MIPS_TLSGD;
+    return REL_R_MIPS_TLSGD;
   case R_MIPS_TLS_LDM:
   case R_MICROMIPS_TLS_LDM:
-    return R_MIPS_TLSLD;
+    return REL_R_MIPS_TLSLD;
   case R_MIPS_NONE:
     return R_NONE;
   default:
diff --git a/lld/ELF/Arch/PPC.cpp b/lld/ELF/Arch/PPC.cpp
index 417401374436ae..faf12b10588d11 100644
--- a/lld/ELF/Arch/PPC.cpp
+++ b/lld/ELF/Arch/PPC.cpp
@@ -250,7 +250,7 @@ RelExpr PPC::getRelExpr(RelType type, const Symbol &s,
   case R_PPC_REL24:
     return R_PLT_PC;
   case R_PPC_PLTREL24:
-    return R_PPC32_PLTREL;
+    return REL_R_PPC32_PLTREL;
   case R_PPC_GOT_TLSGD16:
     return R_TLSGD_GOT;
   case R_PPC_GOT_TLSLD16:
diff --git a/lld/ELF/Arch/PPC64.cpp b/lld/ELF/Arch/PPC64.cpp
index b55385625a1cf1..709548f58cde0e 100644
--- a/lld/ELF/Arch/PPC64.cpp
+++ b/lld/ELF/Arch/PPC64.cpp
@@ -1029,12 +1029,12 @@ RelExpr PPC64::getRelExpr(RelType type, const Symbol &s,
     return R_GOT_PC;
   case R_PPC64_TOC16_HA:
   case R_PPC64_TOC16_LO_DS:
-    return ctx.arg.tocOptimize ? R_PPC64_RELAX_TOC : R_GOTREL;
+    return ctx.arg.tocOptimize ? REL_R_PPC64_RELAX_TOC : R_GOTREL;
   case R_PPC64_TOC:
-    return R_PPC64_TOCBASE;
+    return REL_R_PPC64_TOCBASE;
   case R_PPC64_REL14:
   case R_PPC64_REL24:
-    return R_PPC64_CALL_PLT;
+    return REL_R_PPC64_CALL_PLT;
   case R_PPC64_REL24_NOTOC:
     return R_PLT_PC;
   case R_PPC64_REL16_LO:
@@ -1452,7 +1452,7 @@ bool PPC64::needsThunk(RelExpr expr, RelType type, const InputFile *file,
 
   // If the offset exceeds the range of the branch type then it will need
   // a range-extending thunk.
-  // See the comment in getRelocTargetVA() about R_PPC64_CALL.
+  // See the comment in getRelocTargetVA() about REL_R_PPC64_CALL.
   return !inBranchRange(
       type, branchAddr,
       s.getVA(ctx, a) + getPPC64GlobalEntryToLocalEntryOffset(ctx, s.stOther));
@@ -1490,7 +1490,7 @@ RelExpr PPC64::adjustGotPcExpr(RelType type, int64_t addend,
     // It only makes sense to optimize pld since paddi means that the address
     // of the object in the GOT is required rather than the object itself.
     if ((readPrefixedInst(ctx, loc) & 0xfc000000) == 0xe4000000)
-      return R_PPC64_RELAX_GOT_PC;
+      return REL_R_PPC64_RELAX_GOT_PC;
   }
   return R_GOT_PC;
 }
@@ -1574,7 +1574,7 @@ void PPC64::relocateAlloc(InputSectionBase &sec, uint8_t *buf) const {
     uint8_t *loc = buf + rel.offset;
     const uint64_t val = sec.getRelocTargetVA(ctx, rel, secAddr + rel.offset);
     switch (rel.expr) {
-    case R_PPC64_RELAX_GOT_PC: {
+    case REL_R_PPC64_RELAX_GOT_PC: {
       // The R_PPC64_PCREL_OPT relocation must appear immediately after
       // R_PPC64_GOT_PCREL34 in the relocations table at the same offset.
       // We can only relax R_PPC64_PCREL_OPT if we have also relaxed
@@ -1588,7 +1588,7 @@ void PPC64::relocateAlloc(InputSectionBase &sec, uint8_t *buf) const {
       relaxGot(loc, rel, val);
       break;
     }
-    case R_PPC64_RELAX_TOC:
+    case REL_R_PPC64_RELAX_TOC:
       // rel.sym refers to the STT_SECTION symbol associated to the .toc input
       // section. If an R_PPC64_TOC16_LO (.toc + addend) references the TOC
       // entry, there may be R_PPC64_TOC16_HA not paired with
@@ -1598,7 +1598,7 @@ void PPC64::relocateAlloc(InputSectionBase &sec, uint8_t *buf) const {
           !tryRelaxPPC64TocIndirection(ctx, rel, loc))
         relocate(loc, rel, val);
       break;
-    case R_PPC64_CALL:
+    case REL_R_PPC64_CALL:
       // If this is a call to __tls_get_addr, it may be part of a TLS
       // sequence that has been relaxed and turned into a nop. In this
       // case, we don't want to handle it as a call.
diff --git a/lld/ELF/Arch/RISCV.cpp b/lld/ELF/Arch/RISCV.cpp
index 58a71fd9545c5c..050429c25da16d 100644
--- a/lld/ELF/Arch/RISCV.cpp
+++ b/lld/ELF/Arch/RISCV.cpp
@@ -282,7 +282,7 @@ RelExpr RISCV::getRelExpr(const RelType type, const Symbol &s,
   case R_RISCV_SUB16:
   case R_RISCV_SUB32:
   case R_RISCV_SUB64:
-    return R_RISCV_ADD;
+    return REL_R_RISCV_ADD;
   case R_RISCV_JAL:
   case R_RISCV_BRANCH:
   case R_RISCV_PCREL_HI20:
@@ -299,7 +299,7 @@ RelExpr RISCV::getRelExpr(const RelType type, const Symbol &s,
     return R_GOT_PC;
   case R_RISCV_PCREL_LO12_I:
   case R_RISCV_PCREL_LO12_S:
-    return R_RISCV_PC_INDIRECT;
+    return REL_R_RISCV_PC_INDIRECT;
   case R_RISCV_TLSDESC_HI20:
   case R_RISCV_TLSDESC_LOAD_LO12:
   case R_RISCV_TLSDESC_ADD_LO12:
@@ -321,7 +321,7 @@ RelExpr RISCV::getRelExpr(const RelType type, const Symbol &s,
     return ctx.arg.relax ? R_RELAX_HINT : R_NONE;
   case R_RISCV_SET_ULEB128:
   case R_RISCV_SUB_ULEB128:
-    return R_RISCV_LEB128;
+    return REL_R_RISCV_LEB128;
   default:
     Err(ctx) << getErrorLoc(ctx, loc) << "unknown relocation (" << type.v
              << ") against symbol " << &s;
@@ -650,7 +650,7 @@ void RISCV::relocateAlloc(InputSectionBase &sec, uint8_t *buf) const {
       else
         tlsdescToIe(ctx, loc, rel, val);
       continue;
-    case R_RISCV_LEB128:
+    case REL_R_RISCV_LEB128:
       if (i + 1 < size) {
         const Relocation &rel1 = relocs[i + 1];
         if (rel.type == R_RISCV_SET_ULEB128 &&
diff --git a/lld/ELF/InputSection.cpp b/lld/ELF/InputSection.cpp
index 30c2ff4d79ba5b..3236057c803b41 100644
--- a/lld/ELF/InputSection.cpp
+++ b/lld/ELF/InputSection.cpp
@@ -523,7 +523,7 @@ void InputSection::copyRelocations(Ctx &ctx, uint8_t *buf,
         addend = target.getImplicitAddend(bufLoc, type);
 
       if (ctx.arg.emachine == EM_MIPS &&
-          target.getRelExpr(type, sym, bufLoc) == R_MIPS_GOTREL) {
+          target.getRelExpr(type, sym, bufLoc) == REL_R_MIPS_GOTREL) {
         // Some MIPS relocations depend on "gp" value. By default,
         // this value has 0x7ff0 offset from a .got section. But
         // relocatable files produced by a compiler or a linker
@@ -655,7 +655,7 @@ static uint64_t getARMStaticBase(const Symbol &sym) {
   return os->ptLoad->firstSec->addr;
 }
 
-// For R_RISCV_PC_INDIRECT (R_RISCV_PCREL_LO12_{I,S}), the symbol actually
+// For REL_R_RISCV_PC_INDIRECT (R_RISCV_PCREL_LO12_{I,S}), the symbol actually
 // points the corresponding R_RISCV_PCREL_HI20 relocation, and the target VA
 // is calculated using PCREL_HI20's symbol.
 //
@@ -772,25 +772,25 @@ uint64_t InputSectionBase::getRelocTargetVA(Ctx &ctx, const Relocation &r,
   case R_DTPREL:
   case R_RELAX_TLS_LD_TO_LE_ABS:
   case R_RELAX_GOT_PC_NOPIC:
-  case R_AARCH64_AUTH:
-  case R_RISCV_ADD:
-  case R_RISCV_LEB128:
+  case REL_R_AARCH64_AUTH:
+  case REL_R_RISCV_ADD:
+  case REL_R_RISCV_LEB128:
     return r.sym->getVA(ctx, a);
   case R_ADDEND:
     return a;
   case R_RELAX_HINT:
     return 0;
-  case R_ARM_SBREL:
+  case REL_R_ARM_SBREL:
     return r.sym->getVA(ctx, a) - getARMStaticBase(*r.sym);
   case R_GOT:
   case R_RELAX_TLS_GD_TO_IE_ABS:
     return r.sym->getGotVA(ctx) + a;
-  case R_LOONGARCH_GOT:
+  case REL_R_LOONGARCH_GOT:
     // The LoongArch TLS GD relocs reuse the R_LARCH_GOT_PC_LO12 reloc r.type
     // for their page offsets. The arithmetics are different in the TLS case
     // so we have to duplicate some logic here.
     if (r.sym->hasFlag(NEEDS_TLSGD) && r.type != R_LARCH_TLS_IE_PC_LO12)
-      // Like R_LOONGARCH_TLSGD_PAGE_PC but taking the absolute value.
+      // Like REL_R_LOONGARCH_TLSGD_PAGE_PC but taking the absolute value.
       return ctx.in.got->getGlobalDynAddr(*r.sym) + a;
     return r.sym->getGotVA(ctx) + a;
   case R_GOTONLY_PC:
@@ -798,7 +798,7 @@ uint64_t InputSectionBase::getRelocTargetVA(Ctx &ctx, const Relocation &r,
   case R_GOTPLTONLY_PC:
     return ctx.in.gotPlt->getVA() + a - p;
   case R_GOTREL:
-  case R_PPC64_RELAX_TOC:
+  case REL_R_PPC64_RELAX_TOC:
     return r.sym->getVA(ctx, a) - ctx.in.got->getVA();
   case R_GOTPLTREL:
     return r.sym->getVA(ctx, a) - ctx.in.gotPlt->getVA();
@@ -809,10 +809,10 @@ uint64_t InputSectionBase::getRelocTargetVA(Ctx &ctx, const Relocation &r,
   case R_GOT_OFF:
   case R_RELAX_TLS_GD_TO_IE_GOT_OFF:
     return r.sym->getGotOffset(ctx) + a;
-  case R_AARCH64_GOT_PAGE_PC:
-  case R_AARCH64_RELAX_TLS_GD_TO_IE_PAGE_PC:
+  case REL_R_AARCH64_GOT_PAGE_PC:
+  case REL_R_AARCH64_RELAX_TLS_GD_TO_IE_PAGE_PC:
     return getAArch64Page(r.sym->getGotVA(ctx) + a) - getAArch64Page(p);
-  case R_AARCH64_GOT_PAGE:
+  case REL_R_AARCH64_GOT_PAGE:
     return r.sym->getGotVA(ctx) + a - getAArch64Page(ctx.in.got->getVA());
   case R_GOT_PC:
   case R_RELAX_TLS_GD_TO_IE:
@@ -821,17 +821,17 @@ uint64_t InputSectionBase::getRelocTargetVA(Ctx &ctx, const Relocation &r,
     return r.sym->getGotPltVA(ctx) + a - ctx.in.got->getVA();
   case R_GOTPLT_PC:
     return r.sym->getGotPltVA(ctx) + a - p;
-  case R_LOONGARCH_GOT_PAGE_PC:
+  case REL_R_LOONGARCH_GOT_PAGE_PC:
     if (r.sym->hasFlag(NEEDS_TLSGD))
       return getLoongArchPageDelta(ctx.in.got->getGlobalDynAddr(*r.sym) + a, p,
                                    r.type);
     return getLoongArchPageDelta(r.sym->getGotVA(ctx) + a, p, r.type);
-  case R_MIPS_GOTREL:
+  case REL_R_MIPS_GOTREL:
     return r.sym->getVA(ctx, a) - ctx.in.mipsGot->getGp(file);
-  case R_MIPS_GOT_GP:
+  case REL_R_MIPS_GOT_GP:
     return ctx.in.mipsGot->getGp(file) + a;
-  case R_MIPS_GOT_GP_PC: {
-    // R_MIPS_LO16 expression has R_MIPS_GOT_GP_PC r.type iif the target
+  case REL_R_MIPS_GOT_GP_PC: {
+    // R_MIPS_LO16 expression has REL_R_MIPS_GOT_GP_PC r.type iif the target
     // is _gp_disp symbol. In that case we should use the following
     // formula for calculation "AHL + GP - P + 4". For details see p. 4-19 at
     // ftp://www.linux-mips.org/pub/linux/mips/doc/ABI/mipsabi.pdf
@@ -845,43 +845,43 @@ uint64_t InputSectionBase::getRelocTargetVA(Ctx &ctx, const Relocation &r,
       v -= 1;
     return v;
   }
-  case R_MIPS_GOT_LOCAL_PAGE:
+  case REL_R_MIPS_GOT_LOCAL_PAGE:
     // If relocation against MIPS local symbol requires GOT entry, this entry
     // should be initialized by 'page address'. This address is high 16-bits
     // of sum the symbol's value and the addend.
     return ctx.in.mipsGot->getVA() +
            ctx.in.mipsGot->getPageEntryOffset(file, *r.sym, a) -
            ctx.in.mipsGot->getGp(file);
-  case R_MIPS_GOT_OFF:
-  case R_MIPS_GOT_OFF32:
+  case REL_R_MIPS_GOT_OFF:
+  case REL_R_MIPS_GOT_OFF32:
     // In case of MIPS if a GOT relocation has non-zero addend this addend
     // should be applied to the GOT entry content not to the GOT entry offset.
     // That is why we use separate expression r.type.
     return ctx.in.mipsGot->getVA() +
            ctx.in.mipsGot->getSymEntryOffset(file, *r.sym, a) -
            ctx.in.mipsGot->getGp(file);
-  case R_MIPS_TLSGD:
+  case REL_R_MIPS_TLSGD:
     return ctx.in.mipsGot->getVA() +
            ctx.in.mipsGot->getGlobalDynOffset(file, *r.sym) -
            ctx.in.mipsGot->getGp(file);
-  case R_MIPS_TLSLD:
+  case REL_R_MIPS_TLSLD:
     return ctx.in.mipsGot->getVA() + ctx.in.mipsGot->getTlsIndexOffset(file) -
            ctx.in.mipsGot->getGp(file);
-  case R_AARCH64_PAGE_PC: {
+  case REL_R_AARCH64_PAGE_PC: {
     uint64_t val = r.sym->isUndefWeak() ? p + a : r.sym->getVA(ctx, a);
     return getAArch64Page(val) - getAArch64Page(p);
   }
-  case R_RISCV_PC_INDIRECT: {
+  case REL_R_RISCV_PC_INDIRECT: {
     if (const Relocation *hiRel = getRISCVPCRelHi20(ctx, this, r))
       return getRelocTargetVA(ctx, *hiRel, r.sym->getVA(ctx));
     return 0;
   }
-  case R_LOONGARCH_PAGE_PC:
+  case REL_R_LOONGARCH_PAGE_PC:
     return getLoongArchPageDelta(r.sym->getVA(ctx, a), p, r.type);
   case R_PC:
-  case R_ARM_PCA: {
+  case REL_R_ARM_PCA: {
     uint64_t dest;
-    if (r.expr == R_ARM_PCA)
+    if (r.expr == REL_R_ARM_PCA)
       // Some PC relative ARM (Thumb) relocations align down the place.
       p = p & 0xfffffffc;
     if (r.sym->isUndefined()) {
@@ -909,20 +909,20 @@ uint64_t InputSectionBase::getRelocTargetVA(Ctx &ctx, const Relocation &r,
   case R_PLT:
     return r.sym->getPltVA(ctx) + a;
   case R_PLT_PC:
-  case R_PPC64_CALL_PLT:
+  case REL_R_PPC64_CALL_PLT:
     ...
[truncated]

llvmbot (Member) commented Dec 3, 2024

@llvm/pr-subscribers-lld

Author: Fangrui Song (MaskRay)

(The patch summary and truncated diff in this notification are identical to the @llvm/pr-subscribers-lld-elf comment above.)

jrtc27 (Collaborator) commented Dec 3, 2024

Perhaps RE_* to match RelExpr? REL_ feels a lot like actual ELF things still.

MaskRay (Member, Author) commented Dec 3, 2024

Perhaps RE_* to match RelExpr? REL_ feels a lot like actual ELF things still.

Thanks for the suggestion. RE_* looks better. Updated.

smithp35 (Collaborator) left a comment


LGTM on the Arm and AArch64 side. I agree that a different prefix to R_ is preferred and RE_ is good enough for that.

MaskRay merged commit 04996a2 into main on Dec 3, 2024; 5 of 7 checks passed.
MaskRay deleted the users/MaskRay/spr/elf-rename-target-specific-relexpr-enumerators branch on Dec 3, 2024 at 17:17.
kovdan01 added commits that referenced this pull request on Dec 4, Dec 8, Dec 15, Dec 16, and Dec 17, 2024, and on Jan 12 and Jan 22, 2025.