From: Jiri Olsa
To: Oleg Nesterov, Peter Zijlstra, Andrii Nakryiko
Cc: "Masami Hiramatsu (Google)", bpf@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org, x86@kernel.org, Song Liu, Yonghong Song,
	John Fastabend, Hao Luo, Steven Rostedt, Alan Maguire, David Laight,
	Thomas Weißschuh, Ingo Molnar
Subject: [PATCHv4 perf/core 04/22] uprobes: Add uprobe_write function
Date: Tue, 8 Jul 2025 15:23:13 +0200
Message-ID: <20250708132333.2739553-5-jolsa@kernel.org>
In-Reply-To: <20250708132333.2739553-1-jolsa@kernel.org>
References: <20250708132333.2739553-1-jolsa@kernel.org>

Adding uprobe_write function that does what uprobe_write_opcode did
so far, but allows passing a verify callback function that checks the
memory location before the opcode is written. It will be used in the
following changes to implement specific checking logic for instruction
updates.

uprobe_write_opcode now calls uprobe_write with verify_opcode as the
verify callback.
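For illustration, a caller could plug its own check into the new
uprobe_write_verify_t hook. The sketch below is hypothetical and not
part of this patch; verify_example is a made-up name, and it assumes
uprobe_copy_from_page() and UPROBE_SWBP_INSN_SIZE as used elsewhere in
kernel/events/uprobes.c. It mirrors the verify_opcode() convention of
returning 0 to skip the write and a positive value to proceed:

	/*
	 * Hypothetical verify callback matching uprobe_write_verify_t:
	 * skip the write when the bytes at vaddr already hold the new
	 * opcode, otherwise let uprobe_write() go ahead.
	 */
	static int verify_example(struct page *page, unsigned long vaddr,
				  uprobe_opcode_t *opcode)
	{
		uprobe_opcode_t old_opcode;

		uprobe_copy_from_page(page, vaddr, &old_opcode,
				      UPROBE_SWBP_INSN_SIZE);
		if (!memcmp(&old_opcode, opcode, UPROBE_SWBP_INSN_SIZE))
			return 0;	/* already in place, nothing to do */
		return 1;		/* proceed with the write */
	}

Such a callback would then be passed where this patch passes
verify_opcode, e.g. uprobe_write(auprobe, vma, opcode_vaddr, opcode,
verify_example).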
Acked-by: Masami Hiramatsu (Google)
Acked-by: Oleg Nesterov
Signed-off-by: Jiri Olsa
---
 include/linux/uprobes.h | 5 +++++
 kernel/events/uprobes.c | 14 ++++++++++----
 2 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 7447e15559b8..e13382054435 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -187,6 +187,9 @@ struct uprobes_state {
 	struct xol_area		*xol_area;
 };
 
+typedef int (*uprobe_write_verify_t)(struct page *page, unsigned long vaddr,
+				     uprobe_opcode_t *opcode);
+
 extern void __init uprobes_init(void);
 extern int set_swbp(struct arch_uprobe *aup, struct vm_area_struct *vma, unsigned long vaddr);
 extern int set_orig_insn(struct arch_uprobe *aup, struct vm_area_struct *vma, unsigned long vaddr);
@@ -195,6 +198,8 @@ extern bool is_trap_insn(uprobe_opcode_t *insn);
 extern unsigned long uprobe_get_swbp_addr(struct pt_regs *regs);
 extern unsigned long uprobe_get_trap_addr(struct pt_regs *regs);
 extern int uprobe_write_opcode(struct arch_uprobe *auprobe, struct vm_area_struct *vma, unsigned long vaddr, uprobe_opcode_t);
+extern int uprobe_write(struct arch_uprobe *auprobe, struct vm_area_struct *vma, const unsigned long opcode_vaddr,
+			uprobe_opcode_t opcode, uprobe_write_verify_t verify);
 extern struct uprobe *uprobe_register(struct inode *inode, loff_t offset, loff_t ref_ctr_offset, struct uprobe_consumer *uc);
 extern int uprobe_apply(struct uprobe *uprobe, struct uprobe_consumer *uc, bool);
 extern void uprobe_unregister_nosync(struct uprobe *uprobe, struct uprobe_consumer *uc);

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 37d3a3f6e48a..777de9b95dd7 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -399,7 +399,7 @@ static bool orig_page_is_identical(struct vm_area_struct *vma,
 	return identical;
 }
 
-static int __uprobe_write_opcode(struct vm_area_struct *vma,
+static int __uprobe_write(struct vm_area_struct *vma,
 		struct folio_walk *fw, struct folio *folio,
 		unsigned long opcode_vaddr, uprobe_opcode_t opcode)
 {
@@ -488,6 +488,12 @@ static int __uprobe_write_opcode(struct vm_area_struct *vma,
  */
 int uprobe_write_opcode(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
 		const unsigned long opcode_vaddr, uprobe_opcode_t opcode)
+{
+	return uprobe_write(auprobe, vma, opcode_vaddr, opcode, verify_opcode);
+}
+
+int uprobe_write(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
+		const unsigned long opcode_vaddr, uprobe_opcode_t opcode, uprobe_write_verify_t verify)
 {
 	const unsigned long vaddr = opcode_vaddr & PAGE_MASK;
 	struct mm_struct *mm = vma->vm_mm;
@@ -510,7 +516,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
 	 * page that we can safely modify. Use FOLL_WRITE to trigger a write
 	 * fault if required. When unregistering, we might be lucky and the
 	 * anon page is already gone. So defer write faults until really
-	 * required. Use FOLL_SPLIT_PMD, because __uprobe_write_opcode()
+	 * required. Use FOLL_SPLIT_PMD, because __uprobe_write()
 	 * cannot deal with PMDs yet.
 	 */
 	if (is_register)
@@ -522,7 +528,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
 		goto out;
 
 	folio = page_folio(page);
-	ret = verify_opcode(page, opcode_vaddr, &opcode);
+	ret = verify(page, opcode_vaddr, &opcode);
 	if (ret <= 0) {
 		folio_put(folio);
 		goto out;
 	}
@@ -561,7 +567,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
 	/* Walk the page tables again, to perform the actual update. */
 	if (folio_walk_start(&fw, vma, vaddr, 0)) {
 		if (fw.page == page)
-			ret = __uprobe_write_opcode(vma, &fw, folio, opcode_vaddr, opcode);
+			ret = __uprobe_write(vma, &fw, folio, opcode_vaddr, opcode);
 		folio_walk_end(&fw, vma);
 	}

-- 
2.50.0