Subject: Re: [PATCH v3 0/3] arm64: entry: Convert to generic entry
From: Jinjie Ruan
To: Mark Rutland
Date: Mon, 21 Oct 2024 16:30:51 +0800
Message-ID: <0b5e67da-cd23-5159-250a-9f4722655784@huawei.com>
References: <20240629085601.470241-1-ruanjinjie@huawei.com>
List: linux-arm-kernel@lists.infradead.org

On 2024/10/17 23:25, Mark Rutland wrote:
> Hi,
>
> On Sat, Jun 29, 2024 at 04:55:58PM +0800, Jinjie Ruan wrote:
>> Currently, x86, Riscv, Loongarch use the generic entry. Convert arm64
>> to use the generic entry infrastructure from kernel/entry/*. The
>> generic entry makes maintainers' work easier and the code more
>> elegant, and it also removes a lot of duplicate code.
>
>>  arch/arm64/Kconfig                    |   1 +
>>  arch/arm64/include/asm/entry-common.h | 172 ++++++++++++
>>  arch/arm64/include/asm/ptrace.h       |   5 +
>>  arch/arm64/include/asm/stacktrace.h   |   5 +-
>>  arch/arm64/include/asm/syscall.h      |   6 +-
>>  arch/arm64/include/asm/thread_info.h  |  23 +-
>>  arch/arm64/kernel/entry-common.c      | 368 +++++---------------
>>  arch/arm64/kernel/ptrace.c            |  90 -------
>>  arch/arm64/kernel/signal.c            |   3 +-
>>  arch/arm64/kernel/syscall.c           |  18 +-
>>  include/linux/entry-common.h          |  90 +++++++
>>  include/linux/thread_info.h           |  13 +
>>  kernel/entry/common.c                 |  37 +--
>>  13 files changed, 395 insertions(+), 436 deletions(-)
>>  create mode 100644 arch/arm64/include/asm/entry-common.h
>
> Looking at this I have a few concerns, which I've tried to explain
> below.
>
> Firstly, this is difficult to review (and will be difficult to test,
> queue, and debug in future) because lots of independent changes are
> made all at once. I think that needs to be split out more.
>
> It would be good if preparatory rework/cleanup could be split into a
> few patches that we could consider queueing before the rest of the
> series, or even if we decide to not pick the rest of the series. For
> example, patch 2 should be split into:
>
> * One patch that replaces arm64's interrupts_enabled() with
>   regs_irqs_disabled(), removing interrupts_enabled() entirely rather
>   than implementing regs_irqs_disabled() using interrupts_enabled().

Yes, only the new regs_irqs_disabled() is needed; the old
interrupts_enabled() should be removed entirely.

>
>   That'll require updating existing users, but the end result will be
>   more consistent and have fewer lines of code.
>
> * One patch that changes on_thread_stack() from a macro to a function.
>   The commit message for patch 2 currently says:
>
>   > Make on_thread_stack() compatible with generic entry.
>
>   ... but it's not clear to me *what* that incompatibility is, and
>   that should be explained in the commit message.

This change is not needed; I'll remove it.
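The interrupts_enabled() replacement discussed above might look like the following standalone sketch. The PSR_I_BIT value and the pt_regs layout are simplified stand-ins for the kernel's definitions, not the real headers:

```c
/* Sketch of the preparatory patch Mark suggests: replace arm64's
 * interrupts_enabled() with regs_irqs_disabled() outright, instead of
 * layering one on top of the other.  Simplified stand-ins throughout. */
#include <stdbool.h>
#include <stdint.h>

#define PSR_I_BIT 0x00000080u   /* IRQ mask bit in the saved PSTATE */

struct pt_regs {
        uint64_t pstate;
        /* ... other saved registers elided ... */
};

/* New helper: true when IRQs are masked in the saved PSTATE.  Existing
 * callers of interrupts_enabled(regs) become !regs_irqs_disabled(regs),
 * and interrupts_enabled() is deleted rather than kept as a wrapper. */
static inline bool regs_irqs_disabled(const struct pt_regs *regs)
{
        return regs->pstate & PSR_I_BIT;
}
```

Deleting the old helper in the same patch keeps the tree from growing two spellings of the same predicate.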
>
> * One patch that splits report_syscall() into report_syscall_enter()
>   and report_syscall_exit(). This should have no functional change.

Yes, that will be clearer.

>
> Patch 3 in particular is very hard to follow because several unrelated
> complex systems are updated simultaneously. It would be really nice if
> we could move to the generic syscall code separately from moving the
> rest of the entry code, as the syscall handling code is a particularly
> important ABI concern, and it's difficult to see whether we're making
> ABI changes (accidentally or knowingly).
>
> Can we split that up (e.g. splitting the generic code first into
> separate entry and syscall files), or are those too tightly coupled
> for that to be possible?

It will be hard, but I will try to split it. They are indeed tightly
coupled, which made the third patch too big when I tried to switch to
the generic entry.

>
> At the end of the series, pt_regs::{lockdep_hardirqs,exit_rcu} still
> exist, though they're unused. It would be nicer if we could get rid of
> those in a preparatory patch, e.g. have enter_from_kernel_mode() and
> exit_to_kernel_mode() use an irqentry_state_t (or a temporary
> arm64-specific version). That would make the subsequent changes
> clearer since we'd already have the same structure.

You are right: when I did as you suggested, the third (switch) patch
became smoother and more comprehensible.

>
> In the end result, there's a lot of bouncing between noinstr functions
> where things are inlined today. For example, el0_da() calls
> irqentry_enter_from_user_mode(), which is an out-of-line noinstr
> wrapper for enter_from_user_mode(), which is an __always_inline
> function in a header. It would be nice to avoid unnecessary bouncing
> through out-of-line functions. I see s390 and x86 use
> enter_from_user_mode() directly.

Yes, enter_from_user_mode() is enough; there is no need to use the
wrapper irqentry_enter_from_user_mode().
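The irqentry_state_t preparatory step discussed above could be sketched roughly as follows. The types and the RCU bookkeeping here are simplified stand-ins (a depth counter in place of ct_irq_enter()/ct_irq_exit()), only meant to show the state being threaded from entry to exit instead of living in pt_regs:

```c
/* Sketch: thread an irqentry_state_t through the kernel-mode
 * enter/exit paths so the lockdep_hardirqs/exit_rcu fields can leave
 * pt_regs before the main conversion.  All helpers are stand-ins. */
#include <stdbool.h>

typedef struct irqentry_state {
        bool exit_rcu;
        bool lockdep_hardirqs;
} irqentry_state_t;

struct pt_regs;                 /* opaque for this sketch */

static bool rcu_watching = true; /* stand-in for the RCU watching state */
static int  ct_irq_depth;        /* stand-in for ct_irq_enter()/exit() pairing */

static irqentry_state_t enter_from_kernel_mode(struct pt_regs *regs)
{
        irqentry_state_t state = { .exit_rcu = false };
        (void)regs;

        if (!rcu_watching) {     /* e.g. entry from the idle loop */
                state.exit_rcu = true;
                ct_irq_depth++;  /* stand-in for ct_irq_enter() + hooks */
        }
        return state;
}

static void exit_to_kernel_mode(struct pt_regs *regs, irqentry_state_t state)
{
        (void)regs;
        if (state.exit_rcu)
                ct_irq_depth--;  /* stand-in for the matching ct_irq_exit() */
}
```

Once the callers pass the state explicitly, the two unused pt_regs fields can be deleted, and the later switch to the generic irqentry_enter()/irqentry_exit() keeps the same shape.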
>
> There's also some indirection that I don't think is necessary *and*
> hides important ordering concerns and results in mistakes. In
> particular, note that before this series, enter_from_kernel_mode()
> calls the (instrumentable) MTE checks *after* all the necessary
> lockdep+RCU management is performed by __enter_from_kernel_mode():
>
> static void noinstr enter_from_kernel_mode(struct pt_regs *regs)
> {
> 	__enter_from_kernel_mode(regs);
> 	mte_check_tfsr_entry();
> 	mte_disable_tco_entry(current);
> }
>
> ... whereas after this series is applied, those MTE checks are placed
> in arch_enter_from_kernel_mode(), which irqentry_enter() calls
> *before* the necessary lockdep+RCU management. That is broken.

Yes. These MTE checks can be wrapped in an arm64-specific
enter_from_kernel_mode(), and the newly defined arch functions can be
removed.

>
> It would be better to keep that explicit in the arm64 entry code with
> arm64-specific wrappers, e.g.
>
> static noinstr irqentry_state_t enter_from_kernel_mode(struct pt_regs *regs)
> {
> 	irqentry_state_t state = irqentry_enter(regs);
>
> 	mte_check_tfsr_entry();
> 	mte_disable_tco_entry(current);
>
> 	return state;
> }
>
> ... which would avoid the need for arch_enter_from_kernel_mode(), make
> that ordering obvious, and would remove the need to modify all the
> callers.
>
> Likewise for the user entry/exit paths, which would avoid the visual
> imbalance of:
>
> 	irqentry_enter_from_user_mode();
> 	...
> 	exit_to_user_mode_wrapper()
>

Yes, the naming is unbalanced; we can eliminate that imbalance by
renaming and wrapping them.

> Thanks,
> Mark.
>
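A matching pair of user-mode wrappers, mirroring Mark's kernel-mode example above, could look like the sketch below. The arm64_*-prefixed names and the helper bodies are illustrative stand-ins (the record() calls only make the ordering observable); the point is that arch work runs *after* the generic lockdep+RCU entry bookkeeping and the names balance on the exit side:

```c
/* Sketch of symmetric user entry/exit wrappers for arm64; all names
 * and bodies are simplified stand-ins for the helpers in the mail. */
#include <string.h>

struct pt_regs;                 /* opaque for this sketch */

/* Record call order so the entry/exit sequencing is visible. */
static const char *call_log[4];
static int log_idx;
static void record(const char *s) { call_log[log_idx++] = s; }

/* Stand-ins for the generic-entry and MTE helpers named above. */
static void enter_from_user_mode(struct pt_regs *regs)
{
        (void)regs;
        record("generic-enter"); /* lockdep+RCU entry bookkeeping */
}
static void exit_to_user_mode(struct pt_regs *regs)
{
        (void)regs;
        record("generic-exit");  /* lockdep+RCU exit bookkeeping */
}
static void mte_disable_tco_entry(void) { record("mte"); }

/* Symmetric wrappers: arch-specific work runs strictly after the
 * generic entry management, and the names pair up on exit. */
static void arm64_enter_from_user_mode(struct pt_regs *regs)
{
        enter_from_user_mode(regs);
        mte_disable_tco_entry();
}

static void arm64_exit_to_user_mode(struct pt_regs *regs)
{
        /* arch exit work would go here, before the generic teardown */
        exit_to_user_mode(regs);
}
```

With wrappers like these, the ordering constraint lives in the arm64 code itself rather than in a generic arch_* hook called at the wrong point.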