Date: Thu, 11 Sep 2025 14:06:50 +0200
From: Alexander Gordeev
To: Kevin Brodsky
Cc: David Hildenbrand, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Andreas Larsson, Andrew Morton, Boris Ostrovsky, Borislav Petkov,
    Catalin Marinas, Christophe Leroy, Dave Hansen, "David S. Miller",
    "H. Peter Anvin", Ingo Molnar, Jann Horn, Juergen Gross,
    "Liam R. Howlett", Lorenzo Stoakes, Madhavan Srinivasan,
    Michael Ellerman, Michal Hocko, Mike Rapoport, Nicholas Piggin,
    Peter Zijlstra, Ryan Roberts, Suren Baghdasaryan, Thomas Gleixner,
    Vlastimil Babka, Will Deacon, Yeoreum Yun,
    linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
    sparclinux@vger.kernel.org, xen-devel@lists.xenproject.org, Mark Rutland
Subject: Re: [PATCH v2 2/7] mm: introduce local state for lazy_mmu sections
Message-ID: <80be36e5-d6e1-4b37-a1ca-47e92ac21b02-agordeev@linux.ibm.com>
References: <20250908073931.4159362-1-kevin.brodsky@arm.com>
 <20250908073931.4159362-3-kevin.brodsky@arm.com>
 <2fecfae7-1140-4a23-a352-9fd339fcbae5-agordeev@linux.ibm.com>
 <47ee1df7-1602-4200-af94-475f84ca8d80@arm.com>
 <250835cd-f07a-4b8a-bc01-ace24b407efc@arm.com>
In-Reply-To: <250835cd-f07a-4b8a-bc01-ace24b407efc@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
On Wed, Sep 10, 2025 at 06:11:54PM +0200, Kevin Brodsky wrote:

Hi Kevin,

> On 09/09/2025 16:38, Alexander Gordeev wrote:
> >>>>> Would that integrate well with LAZY_MMU_DEFAULT etc?
> >>>> Hmm... I though the idea is to use LAZY_MMU_* by architectures that
> >>>> want to use it - at least that is how I read the description above.
> >>>>
> >>>> It is only kasan_populate|depopulate_vmalloc_pte() in generic code
> >>>> that do not follow this pattern, and it looks as a problem to me.
> >> This discussion also made me realise that this is problematic, as the
> >> LAZY_MMU_{DEFAULT,NESTED} macros were meant only for architectures'
> >> convenience, not for generic code (where lazy_mmu_state_t should ideally
> >> be an opaque type as mentioned above). It almost feels like the kasan
> >> case deserves a different API, because this is not how enter() and
> >> leave() are meant to be used. This would mean quite a bit of churn
> >> though, so maybe just introduce another arch-defined value to pass to
> >> leave() for such a situation - for instance,
> >> arch_leave_lazy_mmu_mode(LAZY_MMU_FLUSH)?
> > What about to adjust the semantics of apply_to_page_range() instead?
> >
> > It currently assumes any caller is fine with apply_to_pte_range() to
> > enter the lazy mode. By contrast, kasan_(de)populate_vmalloc_pte() are
> > not fine at all and must leave the lazy mode. That literally suggests
> > the original assumption is incorrect.
> >
> > We could change int apply_to_pte_range(..., bool create, ...) to e.g.
> > apply_to_pte_range(..., unsigned int flags, ...) and introduce a flag
> > that simply skips entering the lazy mmu mode.
>
> This is pretty much what Ryan proposed [1r] some time ago, although for
> a different purpose (avoiding nesting). There wasn't much appetite for
> it then, but I agree that this would be a more logical way to go about it.
>
> - Kevin
>
> [1r]
> https://lore.kernel.org/all/20250530140446.2387131-4-ryan.roberts@arm.com/

Maybe I am missing the point, but I read it as opposition to the whole
series in general, and to the way apply_to_pte_range() would be altered
in particular:

 static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
			       unsigned long addr, unsigned long end,
			       pte_fn_t fn, void *data, bool create,
-			       pgtbl_mod_mask *mask)
+			       pgtbl_mod_mask *mask, bool lazy_mmu)

The idea of instructing apply_to_page_range() to skip the lazy mmu mode
was not countered. Quite the opposite: Liam suggested exactly the same:

  These wrappers are terrible for readability and annoying for argument
  lists too.

  Could we do something like the pgtbl_mod_mask or zap_details and pass
  through a struct or one unsigned int for create and lazy_mmu?

  At least we'd have better self-documenting code in the wrappers.. and
  if we ever need a third boolean, we could avoid multiplying the
  wrappers again.

Thanks!
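
P.S. Just to make the flags idea a bit more concrete, here is a small
stand-alone sketch of the shape such an interface could take. The
APPLY_PAGE_* flag names, the stubbed enter/leave hooks and the
user-space loop are all invented for the illustration; nothing below is
taken from a posted patch.

#include <stdio.h>

/* Hypothetical flag names, invented for this sketch. */
#define APPLY_PAGE_CREATE	(1U << 0)	/* allocate missing levels */
#define APPLY_PAGE_NO_LAZY_MMU	(1U << 1)	/* do not enter lazy mmu mode */

#define PAGE_SIZE 4096UL

typedef int lazy_mmu_state_t;
typedef int (*pte_fn_t)(unsigned long addr, void *data);

/* Stand-ins for the arch hooks so the sketch builds on its own. */
static lazy_mmu_state_t arch_enter_lazy_mmu_mode(void)
{
	puts("enter lazy mmu mode");
	return 0;
}

static void arch_leave_lazy_mmu_mode(lazy_mmu_state_t state)
{
	(void)state;
	puts("leave lazy mmu mode");
}

/*
 * Shape of an apply_to_pte_range() that takes a flags word instead of
 * "bool create": a caller like kasan_(de)populate_vmalloc_pte() could
 * pass APPLY_PAGE_NO_LAZY_MMU and never enter the lazy mode at all.
 */
static int apply_to_pte_range(unsigned long addr, unsigned long end,
			      pte_fn_t fn, void *data, unsigned int flags)
{
	lazy_mmu_state_t state = 0;
	int err = 0;

	if (!(flags & APPLY_PAGE_NO_LAZY_MMU))
		state = arch_enter_lazy_mmu_mode();

	for (; addr < end && !err; addr += PAGE_SIZE)
		err = fn(addr, data);

	if (!(flags & APPLY_PAGE_NO_LAZY_MMU))
		arch_leave_lazy_mmu_mode(state);

	return err;
}

static int touch_pte(unsigned long addr, void *data)
{
	(void)data;
	printf("apply fn at %#lx\n", addr);
	return 0;
}

int main(void)
{
	/* Ordinary caller: the lazy mmu mode brackets the walk. */
	apply_to_pte_range(0x1000, 0x3000, touch_pte, NULL, APPLY_PAGE_CREATE);

	/* kasan-like caller: skip the lazy mmu mode entirely. */
	apply_to_pte_range(0x1000, 0x3000, touch_pte, NULL,
			   APPLY_PAGE_CREATE | APPLY_PAGE_NO_LAZY_MMU);
	return 0;
}

The struct variant Liam mentions would presumably replace the unsigned
int with a small descriptor in the spirit of zap_details, so the
wrappers would spell out .create = true, .lazy_mmu = false instead of
passing anonymous booleans.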