Date: Fri, 1 May 2026 18:57:52 +0100
From: Matthew Wilcox
To: Barry Song
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, david@kernel.org,
	ljs@kernel.org, liam@infradead.org, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, jack@suse.cz, pfalcato@suse.de,
	wanglian@kylinos.cn, chentao@kylinos.cn, lianux.mm@gmail.com,
	kunwu.chan@gmail.com, liyangouwen1@oppo.com, chrisl@kernel.org,
	kasong@tencent.com, shikemeng@huaweicloud.com, nphamcs@gmail.com,
	bhe@redhat.com, youngjun.park@lge.com,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	loongarch@lists.linux.dev, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org
Subject: Re: [PATCH v2 0/5] mm: reduce mmap_lock contention and improve page fault performance
References: <20260430040427.4672-1-baohua@kernel.org>

On Sat, May 02, 2026 at 01:44:34AM +0800, Barry Song wrote:
> On Fri, May 1, 2026 at 10:57 PM Matthew Wilcox wrote:
> >
> > On Fri, May 01, 2026 at 06:49:58AM +0800, Barry Song wrote:
> > > 1.
> > > There is no deterministic latency for I/O completion. It depends on
> > > both the hardware and the software stack (bio/request queues and the
> > > block scheduler). Sometimes the latency is short; at other times it can
> > > be quite long. In such cases, a high-priority thread performing operations
> > > such as mprotect, unmap, prctl_set_vma, or madvise may be forced to wait
> > > for an unpredictable amount of time.
> >
> > But does that actually happen? I find it hard to believe that thread A
> > unmaps a VMA while thread B is in the middle of taking a page fault in
> > that same VMA. mprotect() and madvise() are more likely to happen, but
> > it still seems really unlikely to me.
>
> It doesn’t have to involve unmapping or applying mprotect to
> the entire VMA—just a portion of it is sufficient.

Yes, but that still fails to answer "does this actually happen". How
much performance is all this complexity in the page fault handler buying
us? If you don't answer this question, I'm just going to go in and rip
it all out.

> BTW, the chain can propagate: a page fault occurs, B wants to write this
> VMA, and C (a higher-priority task) wants to write another VMA. D may need
> to iterate VMAs under mmap_lock, so B can end up blocking both C and D.

I know.