* [PATCH v3 0/2] livepatch, arm64/module: Enable late module relocations.
From: Dylan Hatch @ 2025-05-22 18:42 UTC
To: Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Josh Poimboeuf,
Jiri Kosina, Miroslav Benes, Petr Mladek, Joe Lawrence
Cc: Dylan Hatch, Song Liu, Ard Biesheuvel, Sami Tolvanen,
Peter Zijlstra, Mike Rapoport (Microsoft), Andrew Morton,
Dan Carpenter, linux-arm-kernel, linux-kernel, live-patching
Late relocations (applied after a module is initially loaded) are needed
when livepatches change module code. This is already supported on x86,
ppc, and s390. This series borrows the x86 methodology to bring arm64 to
the same level of support, and moves the text_mutex locking taken around
the text-poke writes into the core livepatch code to reduce per-arch
redundancy.
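To make "late" concrete: by the time these writes happen, the livepatch
module is long past MODULE_STATE_UNFORMED and its text is already mapped
read-only, so a plain store would fault. The test used throughout the
series boils down to the following (a minimal sketch of the idea, shown
here with aarch64_insn_write() for a single instruction; the actual
patches call aarch64_insn_set(), see patch 2):

	bool early = me->state == MODULE_STATE_UNFORMED;

	if (early)
		*place = cpu_to_le32(insn);	/* text still writable, plain store is fine */
	else
		aarch64_insn_write(place, insn);	/* route through the text-poke machinery */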
Dylan Hatch (2):
livepatch, x86/module: Generalize late module relocation locking.
arm64/module: Use text-poke API for late relocations.
arch/arm64/kernel/module.c | 114 ++++++++++++++++++++++---------------
arch/x86/kernel/module.c | 8 +--
kernel/livepatch/core.c | 18 ++++--
3 files changed, 84 insertions(+), 56 deletions(-)
--
2.49.0.1151.ga128411c76-goog
* [PATCH v3 1/2] livepatch, x86/module: Generalize late module relocation locking.
From: Dylan Hatch @ 2025-05-22 18:42 UTC
To: Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Josh Poimboeuf,
Jiri Kosina, Miroslav Benes, Petr Mladek, Joe Lawrence
Cc: Dylan Hatch, Song Liu, Ard Biesheuvel, Sami Tolvanen,
Peter Zijlstra, Mike Rapoport (Microsoft), Andrew Morton,
Dan Carpenter, linux-arm-kernel, linux-kernel, live-patching
Taking text_mutex around late module relocations is needed on any arch
that supports livepatch, so move the locking into the livepatch core
code rather than duplicating it in each arch's relocation path.
Signed-off-by: Dylan Hatch <dylanbhatch@google.com>
---
arch/x86/kernel/module.c | 8 ++------
kernel/livepatch/core.c | 18 +++++++++++++-----
2 files changed, 15 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index ff07558b7ebc6..38767e0047d0c 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -197,18 +197,14 @@ static int write_relocate_add(Elf64_Shdr *sechdrs,
bool early = me->state == MODULE_STATE_UNFORMED;
void *(*write)(void *, const void *, size_t) = memcpy;
- if (!early) {
+ if (!early)
write = text_poke;
- mutex_lock(&text_mutex);
- }
ret = __write_relocate_add(sechdrs, strtab, symindex, relsec, me,
write, apply);
- if (!early) {
+ if (!early)
text_poke_sync();
- mutex_unlock(&text_mutex);
- }
return ret;
}
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 0e73fac55f8eb..9968441f73510 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -294,9 +294,10 @@ static int klp_write_section_relocs(struct module *pmod, Elf_Shdr *sechdrs,
unsigned int symndx, unsigned int secndx,
const char *objname, bool apply)
{
- int cnt, ret;
+ int cnt, ret = 0;
char sec_objname[MODULE_NAME_LEN];
Elf_Shdr *sec = sechdrs + secndx;
+ bool early = pmod->state == MODULE_STATE_UNFORMED;
/*
* Format: .klp.rela.sec_objname.section_name
@@ -319,12 +320,19 @@ static int klp_write_section_relocs(struct module *pmod, Elf_Shdr *sechdrs,
sec, sec_objname);
if (ret)
return ret;
-
- return apply_relocate_add(sechdrs, strtab, symndx, secndx, pmod);
}
- clear_relocate_add(sechdrs, strtab, symndx, secndx, pmod);
- return 0;
+ if (!early)
+ mutex_lock(&text_mutex);
+
+ if (apply)
+ ret = apply_relocate_add(sechdrs, strtab, symndx, secndx, pmod);
+ else
+ clear_relocate_add(sechdrs, strtab, symndx, secndx, pmod);
+
+ if (!early)
+ mutex_unlock(&text_mutex);
+ return ret;
}
int klp_apply_section_relocs(struct module *pmod, Elf_Shdr *sechdrs,
--
2.49.0.1151.ga128411c76-goog
* [PATCH v3 2/2] arm64/module: Use text-poke API for late relocations.
From: Dylan Hatch @ 2025-05-22 18:42 UTC
To: Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Josh Poimboeuf,
Jiri Kosina, Miroslav Benes, Petr Mladek, Joe Lawrence
Cc: Dylan Hatch, Song Liu, Ard Biesheuvel, Sami Tolvanen,
Peter Zijlstra, Mike Rapoport (Microsoft), Andrew Morton,
Dan Carpenter, linux-arm-kernel, linux-kernel, live-patching
To enable late module patching, livepatch modules need to be able to
apply some of their relocations well after being loaded. In this
scenario, use the text-poking API so these writes work even with
STRICT_MODULE_RWX.

This patch is based in part on commit 88fc078a7a8f6 ("x86/module: Use
text_poke() for late relocations").
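For reviewers less familiar with the arm64 side: the text-poke
primitives used here are declared in arch/arm64/include/asm/text-patching.h,
and their rough shapes (as I understand them) are:

	int aarch64_insn_write(void *addr, u32 insn);		/* patch one instruction */
	int aarch64_insn_set(void *dst, u32 insn, size_t len);	/* fill a range with one insn */
	void *aarch64_insn_copy(void *dst, void *src, size_t len); /* memcpy through a writable alias */

All of these write through a temporary writable mapping of the target
page, which is what makes them safe under STRICT_MODULE_RWX.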
Signed-off-by: Dylan Hatch <dylanbhatch@google.com>
Acked-by: Song Liu <song@kernel.org>
---
arch/arm64/kernel/module.c | 114 ++++++++++++++++++++++---------------
1 file changed, 69 insertions(+), 45 deletions(-)
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 06bb680bfe975..3998fb3322b73 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -18,11 +18,13 @@
#include <linux/moduleloader.h>
#include <linux/random.h>
#include <linux/scs.h>
+#include <linux/memory.h>
#include <asm/alternative.h>
#include <asm/insn.h>
#include <asm/scs.h>
#include <asm/sections.h>
+#include <asm/text-patching.h>
enum aarch64_reloc_op {
RELOC_OP_NONE,
@@ -48,7 +50,8 @@ static u64 do_reloc(enum aarch64_reloc_op reloc_op, __le32 *place, u64 val)
return 0;
}
-static int reloc_data(enum aarch64_reloc_op op, void *place, u64 val, int len)
+static int reloc_data(enum aarch64_reloc_op op, void *place, u64 val, int len,
+ struct module *me)
{
s64 sval = do_reloc(op, place, val);
@@ -66,7 +69,11 @@ static int reloc_data(enum aarch64_reloc_op op, void *place, u64 val, int len)
switch (len) {
case 16:
- *(s16 *)place = sval;
+ if (me->state != MODULE_STATE_UNFORMED)
+ aarch64_insn_set(place, sval, sizeof(s16));
+ else
+ *(s16 *)place = sval;
+
switch (op) {
case RELOC_OP_ABS:
if (sval < 0 || sval > U16_MAX)
@@ -82,7 +89,11 @@ static int reloc_data(enum aarch64_reloc_op op, void *place, u64 val, int len)
}
break;
case 32:
- *(s32 *)place = sval;
+ if (me->state != MODULE_STATE_UNFORMED)
+ aarch64_insn_set(place, sval, sizeof(s32));
+ else
+ *(s32 *)place = sval;
+
switch (op) {
case RELOC_OP_ABS:
if (sval < 0 || sval > U32_MAX)
@@ -98,8 +109,10 @@ static int reloc_data(enum aarch64_reloc_op op, void *place, u64 val, int len)
}
break;
case 64:
- *(s64 *)place = sval;
- break;
+ if (me->state != MODULE_STATE_UNFORMED)
+ aarch64_insn_set(place, sval, sizeof(s64));
+ else
+ *(s64 *)place = sval;
+ break;
default:
pr_err("Invalid length (%d) for data relocation\n", len);
return 0;
@@ -113,7 +126,8 @@ enum aarch64_insn_movw_imm_type {
};
static int reloc_insn_movw(enum aarch64_reloc_op op, __le32 *place, u64 val,
- int lsb, enum aarch64_insn_movw_imm_type imm_type)
+ int lsb, enum aarch64_insn_movw_imm_type imm_type,
+ struct module *me)
{
u64 imm;
s64 sval;
@@ -145,7 +159,10 @@ static int reloc_insn_movw(enum aarch64_reloc_op op, __le32 *place, u64 val,
/* Update the instruction with the new encoding. */
insn = aarch64_insn_encode_immediate(AARCH64_INSN_IMM_16, insn, imm);
- *place = cpu_to_le32(insn);
+ if (me->state != MODULE_STATE_UNFORMED)
+ aarch64_insn_set(place, cpu_to_le32(insn), sizeof(insn));
+ else
+ *place = cpu_to_le32(insn);
if (imm > U16_MAX)
return -ERANGE;
@@ -154,7 +171,8 @@ static int reloc_insn_movw(enum aarch64_reloc_op op, __le32 *place, u64 val,
}
static int reloc_insn_imm(enum aarch64_reloc_op op, __le32 *place, u64 val,
- int lsb, int len, enum aarch64_insn_imm_type imm_type)
+ int lsb, int len, enum aarch64_insn_imm_type imm_type,
+ struct module *me)
{
u64 imm, imm_mask;
s64 sval;
@@ -170,7 +188,10 @@ static int reloc_insn_imm(enum aarch64_reloc_op op, __le32 *place, u64 val,
/* Update the instruction's immediate field. */
insn = aarch64_insn_encode_immediate(imm_type, insn, imm);
- *place = cpu_to_le32(insn);
+ if (me->state != MODULE_STATE_UNFORMED)
+ aarch64_insn_set(place, cpu_to_le32(insn), sizeof(insn));
+ else
+ *place = cpu_to_le32(insn);
/*
* Extract the upper value bits (including the sign bit) and
@@ -189,17 +210,17 @@ static int reloc_insn_imm(enum aarch64_reloc_op op, __le32 *place, u64 val,
}
static int reloc_insn_adrp(struct module *mod, Elf64_Shdr *sechdrs,
- __le32 *place, u64 val)
+ __le32 *place, u64 val, struct module *me)
{
u32 insn;
if (!is_forbidden_offset_for_adrp(place))
return reloc_insn_imm(RELOC_OP_PAGE, place, val, 12, 21,
- AARCH64_INSN_IMM_ADR);
+ AARCH64_INSN_IMM_ADR, me);
/* patch ADRP to ADR if it is in range */
if (!reloc_insn_imm(RELOC_OP_PREL, place, val & ~0xfff, 0, 21,
- AARCH64_INSN_IMM_ADR)) {
+ AARCH64_INSN_IMM_ADR, me)) {
insn = le32_to_cpu(*place);
insn &= ~BIT(31);
} else {
@@ -211,7 +232,10 @@ static int reloc_insn_adrp(struct module *mod, Elf64_Shdr *sechdrs,
AARCH64_INSN_BRANCH_NOLINK);
}
- *place = cpu_to_le32(insn);
+ if (me->state != MODULE_STATE_UNFORMED)
+ aarch64_insn_set(place, cpu_to_le32(insn), sizeof(insn));
+ else
+ *place = cpu_to_le32(insn);
return 0;
}
@@ -255,23 +279,23 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
/* Data relocations. */
case R_AARCH64_ABS64:
overflow_check = false;
- ovf = reloc_data(RELOC_OP_ABS, loc, val, 64);
+ ovf = reloc_data(RELOC_OP_ABS, loc, val, 64, me);
break;
case R_AARCH64_ABS32:
- ovf = reloc_data(RELOC_OP_ABS, loc, val, 32);
+ ovf = reloc_data(RELOC_OP_ABS, loc, val, 32, me);
break;
case R_AARCH64_ABS16:
- ovf = reloc_data(RELOC_OP_ABS, loc, val, 16);
+ ovf = reloc_data(RELOC_OP_ABS, loc, val, 16, me);
break;
case R_AARCH64_PREL64:
overflow_check = false;
- ovf = reloc_data(RELOC_OP_PREL, loc, val, 64);
+ ovf = reloc_data(RELOC_OP_PREL, loc, val, 64, me);
break;
case R_AARCH64_PREL32:
- ovf = reloc_data(RELOC_OP_PREL, loc, val, 32);
+ ovf = reloc_data(RELOC_OP_PREL, loc, val, 32, me);
break;
case R_AARCH64_PREL16:
- ovf = reloc_data(RELOC_OP_PREL, loc, val, 16);
+ ovf = reloc_data(RELOC_OP_PREL, loc, val, 16, me);
break;
/* MOVW instruction relocations. */
@@ -280,88 +304,88 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
fallthrough;
case R_AARCH64_MOVW_UABS_G0:
ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 0,
- AARCH64_INSN_IMM_MOVKZ);
+ AARCH64_INSN_IMM_MOVKZ, me);
break;
case R_AARCH64_MOVW_UABS_G1_NC:
overflow_check = false;
fallthrough;
case R_AARCH64_MOVW_UABS_G1:
ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 16,
- AARCH64_INSN_IMM_MOVKZ);
+ AARCH64_INSN_IMM_MOVKZ, me);
break;
case R_AARCH64_MOVW_UABS_G2_NC:
overflow_check = false;
fallthrough;
case R_AARCH64_MOVW_UABS_G2:
ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 32,
- AARCH64_INSN_IMM_MOVKZ);
+ AARCH64_INSN_IMM_MOVKZ, me);
break;
case R_AARCH64_MOVW_UABS_G3:
/* We're using the top bits so we can't overflow. */
overflow_check = false;
ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 48,
- AARCH64_INSN_IMM_MOVKZ);
+ AARCH64_INSN_IMM_MOVKZ, me);
break;
case R_AARCH64_MOVW_SABS_G0:
ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 0,
- AARCH64_INSN_IMM_MOVNZ);
+ AARCH64_INSN_IMM_MOVNZ, me);
break;
case R_AARCH64_MOVW_SABS_G1:
ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 16,
- AARCH64_INSN_IMM_MOVNZ);
+ AARCH64_INSN_IMM_MOVNZ, me);
break;
case R_AARCH64_MOVW_SABS_G2:
ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 32,
- AARCH64_INSN_IMM_MOVNZ);
+ AARCH64_INSN_IMM_MOVNZ, me);
break;
case R_AARCH64_MOVW_PREL_G0_NC:
overflow_check = false;
ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 0,
- AARCH64_INSN_IMM_MOVKZ);
+ AARCH64_INSN_IMM_MOVKZ, me);
break;
case R_AARCH64_MOVW_PREL_G0:
ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 0,
- AARCH64_INSN_IMM_MOVNZ);
+ AARCH64_INSN_IMM_MOVNZ, me);
break;
case R_AARCH64_MOVW_PREL_G1_NC:
overflow_check = false;
ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 16,
- AARCH64_INSN_IMM_MOVKZ);
+ AARCH64_INSN_IMM_MOVKZ, me);
break;
case R_AARCH64_MOVW_PREL_G1:
ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 16,
- AARCH64_INSN_IMM_MOVNZ);
+ AARCH64_INSN_IMM_MOVNZ, me);
break;
case R_AARCH64_MOVW_PREL_G2_NC:
overflow_check = false;
ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 32,
- AARCH64_INSN_IMM_MOVKZ);
+ AARCH64_INSN_IMM_MOVKZ, me);
break;
case R_AARCH64_MOVW_PREL_G2:
ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 32,
- AARCH64_INSN_IMM_MOVNZ);
+ AARCH64_INSN_IMM_MOVNZ, me);
break;
case R_AARCH64_MOVW_PREL_G3:
/* We're using the top bits so we can't overflow. */
overflow_check = false;
ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 48,
- AARCH64_INSN_IMM_MOVNZ);
+ AARCH64_INSN_IMM_MOVNZ, me);
break;
/* Immediate instruction relocations. */
case R_AARCH64_LD_PREL_LO19:
ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 2, 19,
- AARCH64_INSN_IMM_19);
+ AARCH64_INSN_IMM_19, me);
break;
case R_AARCH64_ADR_PREL_LO21:
ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 0, 21,
- AARCH64_INSN_IMM_ADR);
+ AARCH64_INSN_IMM_ADR, me);
break;
case R_AARCH64_ADR_PREL_PG_HI21_NC:
overflow_check = false;
fallthrough;
case R_AARCH64_ADR_PREL_PG_HI21:
- ovf = reloc_insn_adrp(me, sechdrs, loc, val);
+ ovf = reloc_insn_adrp(me, sechdrs, loc, val, me);
if (ovf && ovf != -ERANGE)
return ovf;
break;
@@ -369,46 +393,46 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
case R_AARCH64_LDST8_ABS_LO12_NC:
overflow_check = false;
ovf = reloc_insn_imm(RELOC_OP_ABS, loc, val, 0, 12,
- AARCH64_INSN_IMM_12);
+ AARCH64_INSN_IMM_12, me);
break;
case R_AARCH64_LDST16_ABS_LO12_NC:
overflow_check = false;
ovf = reloc_insn_imm(RELOC_OP_ABS, loc, val, 1, 11,
- AARCH64_INSN_IMM_12);
+ AARCH64_INSN_IMM_12, me);
break;
case R_AARCH64_LDST32_ABS_LO12_NC:
overflow_check = false;
ovf = reloc_insn_imm(RELOC_OP_ABS, loc, val, 2, 10,
- AARCH64_INSN_IMM_12);
+ AARCH64_INSN_IMM_12, me);
break;
case R_AARCH64_LDST64_ABS_LO12_NC:
overflow_check = false;
ovf = reloc_insn_imm(RELOC_OP_ABS, loc, val, 3, 9,
- AARCH64_INSN_IMM_12);
+ AARCH64_INSN_IMM_12, me);
break;
case R_AARCH64_LDST128_ABS_LO12_NC:
overflow_check = false;
ovf = reloc_insn_imm(RELOC_OP_ABS, loc, val, 4, 8,
- AARCH64_INSN_IMM_12);
+ AARCH64_INSN_IMM_12, me);
break;
case R_AARCH64_TSTBR14:
ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 2, 14,
- AARCH64_INSN_IMM_14);
+ AARCH64_INSN_IMM_14, me);
break;
case R_AARCH64_CONDBR19:
ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 2, 19,
- AARCH64_INSN_IMM_19);
+ AARCH64_INSN_IMM_19, me);
break;
case R_AARCH64_JUMP26:
case R_AARCH64_CALL26:
ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 2, 26,
- AARCH64_INSN_IMM_26);
+ AARCH64_INSN_IMM_26, me);
if (ovf == -ERANGE) {
val = module_emit_plt_entry(me, sechdrs, loc, &rel[i], sym);
if (!val)
return -ENOEXEC;
ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 2,
- 26, AARCH64_INSN_IMM_26);
+ 26, AARCH64_INSN_IMM_26, me);
}
break;
--
2.49.0.1151.ga128411c76-goog
* Re: [PATCH v3 1/2] livepatch, x86/module: Generalize late module relocation locking.
From: Song Liu @ 2025-05-22 19:29 UTC
To: Dylan Hatch
Cc: Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Josh Poimboeuf,
Jiri Kosina, Miroslav Benes, Petr Mladek, Joe Lawrence,
Ard Biesheuvel, Sami Tolvanen, Peter Zijlstra,
Mike Rapoport (Microsoft), Andrew Morton, Dan Carpenter,
linux-arm-kernel, linux-kernel, live-patching
On Thu, May 22, 2025 at 11:43 AM Dylan Hatch <dylanbhatch@google.com> wrote:
>
> Late module relocations are an issue on any arch that supports
> livepatch, so move the text_mutex locking to the livepatch core code.
>
> Signed-off-by: Dylan Hatch <dylanbhatch@google.com>
Acked-by: Song Liu <song@kernel.org>
* Re: [PATCH v3 2/2] arm64/module: Use text-poke API for late relocations.
From: Dylan Hatch @ 2025-05-22 20:01 UTC
To: Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Josh Poimboeuf,
Jiri Kosina, Miroslav Benes, Petr Mladek, Joe Lawrence
Cc: Song Liu, Ard Biesheuvel, Sami Tolvanen, Peter Zijlstra,
Mike Rapoport (Microsoft), Andrew Morton, Dan Carpenter,
linux-arm-kernel, linux-kernel, live-patching
On Thu, May 22, 2025 at 11:43 AM Dylan Hatch <dylanbhatch@google.com> wrote:
> -static int reloc_data(enum aarch64_reloc_op op, void *place, u64 val, int len)
> +static int reloc_data(enum aarch64_reloc_op op, void *place, u64 val, int len,
> + struct module *me)
> {
> s64 sval = do_reloc(op, place, val);
>
> @@ -66,7 +69,11 @@ static int reloc_data(enum aarch64_reloc_op op, void *place, u64 val, int len)
>
> switch (len) {
> case 16:
> - *(s16 *)place = sval;
> + if (me->state != MODULE_STATE_UNFORMED)
> + aarch64_insn_set(place, sval, sizeof(s16));
> + else
> + *(s16 *)place = sval;
> +
> switch (op) {
> case RELOC_OP_ABS:
> if (sval < 0 || sval > U16_MAX)
> @@ -82,7 +89,11 @@ static int reloc_data(enum aarch64_reloc_op op, void *place, u64 val, int len)
> }
> break;
> case 32:
> - *(s32 *)place = sval;
> + if (me->state != MODULE_STATE_UNFORMED)
> + aarch64_insn_set(place, sval, sizeof(s32));
> + else
> + *(s32 *)place = sval;
> +
> switch (op) {
> case RELOC_OP_ABS:
> if (sval < 0 || sval > U32_MAX)
> @@ -98,8 +109,10 @@ static int reloc_data(enum aarch64_reloc_op op, void *place, u64 val, int len)
> }
> break;
> case 64:
> - *(s64 *)place = sval;
> - break;
> + if (me->state != MODULE_STATE_UNFORMED)
> + aarch64_insn_set(place, sval, sizeof(s64));
> + else
> + *(s64 *)place = sval;
> + break;
> default:
> pr_err("Invalid length (%d) for data relocation\n", len);
> return 0;
> @@ -113,7 +126,8 @@ enum aarch64_insn_movw_imm_type {
> };
Don't merge this. I spotted an issue -- for the data relocations this
looks like an incorrect usage of aarch64_insn_set(). An updated
version will follow soon.
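To spell the problem out: aarch64_insn_set() repeats a single 32-bit
instruction pattern across a range, so it cannot express one 2-, 4-, or
8-byte data store (and sval is truncated to u32 on the way in). The
direction I'm considering is a copy-style write instead, along these
lines (untested sketch, assuming aarch64_insn_copy() is the right
primitive for data; the 16- and 64-bit cases would follow the same
shape):

	case 32:
		if (me->state != MODULE_STATE_UNFORMED) {
			s32 v = sval;

			/* copy the value through the text-poke mapping */
			aarch64_insn_copy(place, &v, sizeof(v));
		} else {
			*(s32 *)place = sval;
		}
		/* existing overflow checks stay as they are */
		break;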
Thanks,
Dylan