From: Peter Xu <peterx@redhat.com>
To: Ingo Molnar
Cc: x86@kernel.org, Catalin Marinas, David Hildenbrand, Peter Zijlstra, Benjamin Herrenschmidt, Dave Hansen, linux-mips@vger.kernel.org, "James E. J. Bottomley", linux-mm@kvack.org, Rich Felker, Paul Mackerras, "H. Peter Anvin", sparclinux@vger.kernel.org, linux-ia64@vger.kernel.org, Alexander Gordeev, Will Deacon, linux-riscv@lists.infradead.org, Anton Ivanov, Jonas Bonn, linux-s390@vger.kernel.org, linux-snps-arc@lists.infradead.org, Yoshinori Sato, linux-xtensa@linux-xtensa.org, linux-hexagon@vger.kernel.org, Helge Deller, Alistair Popple, Hugh Dickins, Russell King, linux-csky@vger.kernel.org, linux-sh@vger.kernel.org, Ingo Molnar, linux-arm-kernel@lists.infradead.org, Vineet Gupta, Matt Turner, Christian Borntraeger, Andrea Arcangeli, Albert Ou, Vasily Gorbik, Brian Cain, Heiko Carstens, Johannes Weiner, linux-um@lists.infradead.org, Nicholas Piggin, Richard Weinberger, linux-m68k@lists.linux-m68k.org, openrisc@lists.librecores.org, Ivan Kokshaysky, Al Viro, Andy Lutomirski, Paul Walmsley, Thomas Gleixner, linux-alpha@vger.kernel.org, Andrew Morton, Vlastimil Babka, Richard Henderson, Chris Zankel, Michal Simek, Thomas Bogendoerfer, linux-parisc@vger.kernel.org, Max Filippov, linux-kernel@vger.kernel.org, Dinh Nguyen, Palmer Dabbelt, Sven Schnelle, Guo Ren, Michael Ellerman, Borislav Petkov, Johannes Berg, linuxppc-dev@lists.ozlabs.org, "David S. Miller"
Subject: Re: [PATCH v3] mm: Avoid unnecessary page fault retires on shared memory types
Date: Fri, 27 May 2022 10:53:36 -0400
References: <20220524234531.1949-1-peterx@redhat.com>
List-Id: Discussion around the OpenRISC processor

On Fri, May 27, 2022 at 12:46:31PM +0200, Ingo Molnar wrote:
>
> * Peter Xu wrote:
>
> > This patch provides a ~12% perf boost on my aarch64 test VM with a simple
> > program sequentially dirtying 400MB shmem file being mmap()ed and these are
> > the time it needs:
> >
> >   Before: 650.980 ms (+-1.94%)
> >   After:  569.396 ms (+-1.38%)
>
> Nice!
>
> >  arch/x86/mm/fault.c    |  4 ++++
>
> Reviewed-by: Ingo Molnar
>
> Minor comment typo:
>
> > +	/*
> > +	 * We should do the same as VM_FAULT_RETRY, but let's not
> > +	 * return -EBUSY since that's not reflecting the reality on
> > +	 * what has happened - we've just fully completed a page
> > +	 * fault, with the mmap lock released. Use -EAGAIN to show
> > +	 * that we want to take the mmap lock _again_.
> > +	 */
>
> s/reflecting the reality on what has happened
>  /reflecting the reality of what has happened

Will fix.

>
> >  	ret = handle_mm_fault(vma, address, fault_flags, NULL);
> > +
> > +	if (ret & VM_FAULT_COMPLETED) {
> > +		/*
> > +		 * NOTE: it's a pity that we need to retake the lock here
> > +		 * to pair with the unlock() in the callers. Ideally we
> > +		 * could tell the callers so they do not need to unlock.
> > +		 */
> > +		mmap_read_lock(mm);
> > +		*unlocked = true;
> > +		return 0;
>
> Indeed that's a pity - I guess more performance could be gained here,
> especially in highly parallel threaded workloads?

Yes, I think so. The patch avoids the page fault retry, including the mmap
lock/unlock on that path. If we now retake the lock in fixup_user_fault(),
we still save time on the pgtable walks, but some of the lock overhead is
kept, just with smaller critical sections.

Some fixup_user_fault() callers won't be affected as long as unlocked==NULL
is passed - e.g. the futex code path (fault_in_user_writeable()). They
never needed to retake the lock, either before or after this patch.

It's the other callers that may need some touch-ups, case by case.
Examples are follow_fault_pfn() in vfio and hva_to_pfn_remapped() in KVM:
both of them return -EAGAIN when *unlocked==true. We need to teach them
that "*unlocked==true" does not necessarily mean a retry attempt happened.
I can look into them as a follow-up if this patch is accepted.

Thanks for taking a look!

-- 
Peter Xu