Message-ID: <2cc049fe-8f1a-0829-d879-d89278027be6@huawei.com>
Date: Tue, 22 Oct 2024 20:07:54 +0800
Subject: Re: [PATCH v3 0/3] arm64: entry: Convert to generic entry
From: Jinjie Ruan
To: Mark Rutland
References: <20240629085601.470241-1-ruanjinjie@huawei.com>

On 2024/10/17 23:25, Mark Rutland wrote:
> Hi,
>
> On Sat, Jun 29, 2024 at 04:55:58PM +0800, Jinjie Ruan wrote:
>> Currently, x86, RISC-V and LoongArch use the generic entry. Convert arm64
>> to use the generic entry infrastructure from kernel/entry/*. The generic
>> entry makes maintainers' work easier and the code more elegant, and also
>> removes a lot of duplicate code.
>
>>  arch/arm64/Kconfig                    |   1 +
>>  arch/arm64/include/asm/entry-common.h | 172 ++++++++++++
>>  arch/arm64/include/asm/ptrace.h       |   5 +
>>  arch/arm64/include/asm/stacktrace.h   |   5 +-
>>  arch/arm64/include/asm/syscall.h      |   6 +-
>>  arch/arm64/include/asm/thread_info.h  |  23 +-
>>  arch/arm64/kernel/entry-common.c      | 368 +++++---------------------
>>  arch/arm64/kernel/ptrace.c            |  90 -------
>>  arch/arm64/kernel/signal.c            |   3 +-
>>  arch/arm64/kernel/syscall.c           |  18 +-
>>  include/linux/entry-common.h          |  90 +++++++
>>  include/linux/thread_info.h           |  13 +
>>  kernel/entry/common.c                 |  37 +--
>>  13 files changed, 395 insertions(+), 436 deletions(-)
>>  create mode 100644 arch/arm64/include/asm/entry-common.h
>
> Looking at this I have a few concerns, which I've tried to explain
> below.
>
> Firstly, this is difficult to review (and will be difficult to test,
> queue, and debug in future) because lots of independent changes are
> made all at once. I think that needs to be split out more.
>
> It would be good if preparatory rework/cleanup could be split into a few
> patches that we could consider queueing before the rest of the series,
> or even if we decide not to pick the rest of the series. For example,
> patch 2 should be split into:
>
> * One patch that replaces arm64's interrupts_enabled() with
>   regs_irqs_disabled(), removing interrupts_enabled() entirely rather
>   than implementing regs_irqs_disabled() using interrupts_enabled().
>
>   That'll require updating existing users, but the end result will be
>   more consistent and have fewer lines of code.
>
> * One patch that changes on_thread_stack() from a macro to a function.
>   The commit message for patch 2 currently says:
>
>   > Make on_thread_stack() compatible with generic entry.
>
>   ... but it's not clear to me *what* that incompatibility is, and that
>   should be explained in the commit message.
>
> * One patch that splits report_syscall() into report_syscall_enter() and
>   report_syscall_exit(). This should have no functional change.
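The first suggested split can be sketched standalone. In this sketch, struct pt_regs is reduced to the one field that matters here, and PSR_I_BIT is arm64's IRQ-mask bit in the saved PSTATE; the point is only the polarity flip between the old and new helper names:

```c
#include <stdbool.h>
#include <stdint.h>

#define PSR_I_BIT 0x00000080UL	/* IRQ mask bit in the saved PSTATE */

/* Reduced stand-in: the real struct pt_regs carries much more state. */
struct pt_regs {
	uint64_t pstate;
};

/* Existing arm64 helper (removed entirely by the preparatory patch). */
static bool interrupts_enabled(const struct pt_regs *regs)
{
	return !(regs->pstate & PSR_I_BIT);
}

/* Replacement with the polarity the generic entry code expects. */
static bool regs_irqs_disabled(const struct pt_regs *regs)
{
	return regs->pstate & PSR_I_BIT;
}
```

Since the two helpers are exact logical inverses, converting each caller is a mechanical negation, which is what makes this safe to land ahead of the rest of the series.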
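The report_syscall() split can likewise be sketched in a compilable toy form. The function names follow the kernel's, but the bodies here are stand-ins returning strings instead of performing the real ptrace reporting; the shape of the refactor (one direction-parameterized function becoming two, with no behavioral change) is what the sketch shows:

```c
/* Reduced stand-in for arm64's syscall-report direction flag. */
enum ptrace_syscall_dir {
	PTRACE_SYSCALL_ENTER,
	PTRACE_SYSCALL_EXIT,
};

/* Before: one function parameterized on direction. */
static const char *report_syscall(enum ptrace_syscall_dir dir)
{
	return (dir == PTRACE_SYSCALL_ENTER) ? "enter" : "exit";
}

/* After: two functions with the direction baked in -- no functional change. */
static const char *report_syscall_enter(void)
{
	return "enter";
}

static const char *report_syscall_exit(void)
{
	return "exit";
}
```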
>
> Patch 3 in particular is very hard to follow because several unrelated
> complex systems are updated simultaneously. It would be really nice if
> we could move to the generic syscall code separately from moving the
> rest of the entry code, as the syscall handling code is a particularly
> important ABI concern, and it's difficult to see whether we're making
> ABI changes (accidentally or knowingly).
>
> Can we split that up (e.g. splitting the generic code first into
> separate entry and syscall files), or are those too tightly coupled for
> that to be possible?
>
> At the end of the series, pt_regs::{lockdep_hardirqs,exit_rcu} still
> exist, though they're unused. It would be nicer if we could get rid of
> those in a preparatory patch, e.g. have enter_from_kernel_mode() and
> exit_to_kernel_mode() use an irqentry_state_t (or a temporary
> arm64-specific version). That would make the subsequent changes clearer
> since we'd already have the same structure.
>
> In the end result, there's a lot of bouncing between noinstr functions
> where things are inlined today. For example, el0_da() calls
> irqentry_enter_from_user_mode(), which is an out-of-line noinstr wrapper
> for enter_from_user_mode(), which is an __always_inline function in a
> header. It would be nice to avoid unnecessary bouncing through
> out-of-line functions. I see s390 and x86 use enter_from_user_mode()
> directly.
>
> There's also some indirection that I don't think is necessary *and*
> hides important ordering concerns and results in mistakes. In
> particular, note that before this series, enter_from_kernel_mode() calls
> the (instrumentable) MTE checks *after* all the necessary lockdep+RCU
> management is performed by __enter_from_kernel_mode():
>
> 	static void noinstr enter_from_kernel_mode(struct pt_regs *regs)
> 	{
> 		__enter_from_kernel_mode(regs);
> 		mte_check_tfsr_entry();
> 		mte_disable_tco_entry(current);
> 	}
>
> ...
> whereas after this series is applied, those MTE checks are placed in
> arch_enter_from_kernel_mode(), which irqentry_enter() calls *before* the
> necessary lockdep+RCU management. That is broken.
>
> It would be better to keep that explicit in the arm64 entry code with
> arm64-specific wrappers, e.g.
>
> 	static noinstr irqentry_state_t enter_from_kernel_mode(struct pt_regs *regs)
> 	{
> 		irqentry_state_t state = irqentry_enter(regs);
>
> 		mte_check_tfsr_entry();
> 		mte_disable_tco_entry(current);
>
> 		return state;
> 	}

Hi, Mark,

It seems that there is a problem with arm64_preempt_schedule_irq() when
wrapping irqentry_exit() with exit_to_kernel_mode().

arm64_preempt_schedule_irq() handles PREEMPT_DYNAMIC and IRQ preemption,
which corresponds to raw_irqentry_exit_cond_resched() in the generic
code, called from irqentry_exit(). Today only __el1_irq() calls
arm64_preempt_schedule_irq(), but if we switch every exit_to_kernel_mode()
call site to an arm64-specific wrapper around irqentry_exit(), then not
only __el1_irq() but also el1_abort(), el1_pc(), el1_undef() etc. will
try to reschedule via logic similar to arm64_preempt_schedule_irq().

>
> ... which would avoid the need for arch_enter_from_kernel_mode(), make
> that ordering obvious, and would remove the need to modify all the
> callers.
>
> Likewise for the user entry/exit paths, which would avoid the visual
> imbalance of:
>
> 	irqentry_enter_from_user_mode();
> 	...
> 	exit_to_user_mode_wrapper()
>
> Thanks,
> Mark.
>
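The rescheduling concern Jinjie raises can be modelled in a few lines of standalone C. The function names mirror the kernel's, but the bodies are toy stand-ins: a counter replaces the real scheduler call, and a from_irq flag stands in for whatever mechanism would distinguish the exception class (the real irqentry_exit() also applies further conditions before rescheduling):

```c
#include <stdbool.h>

static int resched_attempts;

/* Toy stand-in for the generic preemption hook. */
static void raw_irqentry_exit_cond_resched(void)
{
	resched_attempts++;
}

/* Generic-style exit: considers preemption on every kernel-mode exit. */
static void irqentry_exit(void)
{
	raw_irqentry_exit_cond_resched();
}

/*
 * A wrapper that unconditionally forwards to irqentry_exit() would try
 * to reschedule from el1_abort()/el1_pc()/el1_undef() too; gating on
 * the exception class preserves today's "IRQ exits only" behaviour.
 */
static void exit_to_kernel_mode(bool from_irq)
{
	if (from_irq)
		irqentry_exit();
}
```

In the model, only the exit path flagged as an IRQ reaches the preemption hook, which is the behaviour the current arm64 code gets by calling arm64_preempt_schedule_irq() solely from __el1_irq().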