From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4FF66C636CD for ; Mon, 30 Jan 2023 21:27:22 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id A27E86B0072; Mon, 30 Jan 2023 16:27:21 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 9D7796B0073; Mon, 30 Jan 2023 16:27:21 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 850DC6B0074; Mon, 30 Jan 2023 16:27:21 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id 7676B6B0072 for ; Mon, 30 Jan 2023 16:27:21 -0500 (EST) Received: from smtpin26.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id C694F120BFF for ; Mon, 30 Jan 2023 21:27:20 +0000 (UTC) X-FDA: 80412751440.26.6A7B9AE Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by imf04.hostedemail.com (Postfix) with ESMTP id B670D40018 for ; Mon, 30 Jan 2023 21:27:18 +0000 (UTC) Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=J788HGw7; spf=pass (imf04.hostedemail.com: domain of peterx@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=peterx@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1675114038; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=DUjUOCosc+n9O/YY6buBifhq9wPTzqRNFK3SmiShyMc=; b=EFGB5nuKQFVIVJ+IE1pr0HmEuTu+ywTAlJZcHOKifeSfY6J4wbCjJ2UHiSqiLD8LmgpTNQ ot1Xcqgjx+RX3sqADGsO4eHjb18DK111YOFTn1fucmINSQM5U6OZtqXgj9D97Qfh+ANo0J OuUMFmGz+HhrSRPyX9OAD3lmrPtKB+w= ARC-Authentication-Results: i=1; imf04.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=J788HGw7; spf=pass (imf04.hostedemail.com: domain of peterx@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=peterx@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1675114038; a=rsa-sha256; cv=none; b=pSdgzZq/zVTZ8+rnp7VGN8v1swnJsI3wRJgicydFVdRJgqURvZIkajYS24B/2rjWeH+eDn EK56yEgbfuZShvQL/Pkt0gNXlXur3VZeWFqJwBvr/R9Dg3vPB3r3bPxDPQehIxfrxYtH5w JSNV9tyW5ZttF+6AmAoDTr9DPiomsMU= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1675114037; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: in-reply-to:in-reply-to:references:references; bh=DUjUOCosc+n9O/YY6buBifhq9wPTzqRNFK3SmiShyMc=; b=J788HGw7DtcMnlvLWtNItVXhhZWXBcW6+H3qKx9trFrDPSRlDhbpulXYyvU2F6StXo/408 hxarGYturKQrdi/b9ELoeOWq7EjlR4vydP/hA4wscCNv1PhkNHt4Bno/Tq6cp+U7GUPk5X aw48/5kn7mQ3gVFC+NHN1i7cWMrN62k= Received: from mail-qv1-f72.google.com (mail-qv1-f72.google.com [209.85.219.72]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id us-mta-532-9eiWTZGVNbaxen5Xxv50tw-1; Mon, 30 Jan 2023 16:27:16 -0500 X-MC-Unique: 9eiWTZGVNbaxen5Xxv50tw-1 Received: by mail-qv1-f72.google.com with SMTP id ob12-20020a0562142f8c00b004c6c72bf1d0so7130735qvb.9 for ; Mon, 30 Jan 2023 13:27:16 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=in-reply-to:content-disposition:mime-version:references:message-id :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date :message-id:reply-to; bh=DUjUOCosc+n9O/YY6buBifhq9wPTzqRNFK3SmiShyMc=; b=piVRz//PY6kijaiArBqfHI+sW+k0QUbpMsADbwpwRMdff/pNaSxoD7knSMx6E48uBJ d6l/5ciiBtE8Ie/xE5iZGpXKg3xsJ9MDPlwdzNXJha4e6mD0FApIv+Jyu/Lre4W35MTM LYFyO61E2C/xE5qOTdtrwIzlX3r/bLiK5117jYdKkesyl2sSEg/IFvCFQRgfEpxva2Xh ZW8HZXPdrJugPMQTG9jVuDJ1oGngJ/8lJ8J1uUo94V1kNzJtZru0WKE1izyyZS10sP/3 iwYYQp+/w9+LKTKThobUmdvrR+BBwaOf6cC09iR1Id9Bs2XxY8LQcujl4dbtyYE1Q9ym sq9w== X-Gm-Message-State: AO0yUKVFWELXoiUNgx5p2nkqArHm1qVf3yPHyqC6/mt+19Rd/DCOMXqS tFkapst38gd/XiSQ0hyaqVO0Ua31+JTNGAeVv2Ug9oVRqXulsBAmk0awIRvMxwYcjN2E7Fb/+YM FdSImCsGqGuE= X-Received: by 2002:a0c:ebc8:0:b0:537:6e4c:ac60 with SMTP id k8-20020a0cebc8000000b005376e4cac60mr8802587qvq.2.1675114035603; Mon, 30 Jan 2023 13:27:15 -0800 (PST) X-Google-Smtp-Source: AK7set9TUlrBdCpVee0Syc/iu5fDy/5lnof3SbV+U+NkYNoyBgzcAg3OjwV20+VqTlOzMOIBNbiuPg== X-Received: by 2002:a0c:ebc8:0:b0:537:6e4c:ac60 with SMTP id k8-20020a0cebc8000000b005376e4cac60mr8802538qvq.2.1675114035240; Mon, 30 Jan 2023 13:27:15 -0800 (PST) Received: from x1n (bras-base-aurron9127w-grc-56-70-30-145-63.dsl.bell.ca. [70.30.145.63]) by smtp.gmail.com with ESMTPSA id x20-20020a05620a01f400b0071d2cd07560sm3984325qkn.124.2023.01.30.13.27.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 30 Jan 2023 13:27:14 -0800 (PST) Date: Mon, 30 Jan 2023 16:27:12 -0500 From: Peter Xu To: Muhammad Usama Anjum Cc: David Hildenbrand , Andrew Morton , =?utf-8?B?TWljaGHFgiBNaXJvc8WCYXc=?= , Andrei Vagin , Danylo Mocherniuk , Paul Gofman , Cyrill Gorcunov , Alexander Viro , Shuah Khan , Christian Brauner , Yang Shi , Vlastimil Babka , "Liam R . Howlett" , Yun Zhou , Suren Baghdasaryan , Alex Sierra , Matthew Wilcox , Pasha Tatashin , Mike Rapoport , Nadav Amit , Axel Rasmussen , "Gustavo A . R . Silva" , Dan Williams , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, Greg KH , kernel@collabora.com Subject: Re: [PATCH v8 1/4] userfaultfd: Add UFFD WP Async support Message-ID: References: <20230124084323.1363825-1-usama.anjum@collabora.com> <20230124084323.1363825-2-usama.anjum@collabora.com> <1968dff9-f48a-3290-a15b-a8b739f31ed2@collabora.com> MIME-Version: 1.0 In-Reply-To: X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Content-Type: text/plain; charset=utf-8 Content-Disposition: inline X-Rspamd-Queue-Id: B670D40018 X-Stat-Signature: p49n8cayyefhxffatducjtsgmaq6zayu X-Rspam-User: X-Rspamd-Server: rspam08 X-HE-Tag: 1675114038-326709 X-HE-Meta: U2FsdGVkX1+jARQoAqUiuhwMUh41hTbvzsN+XF5Fu4EpO+4NgSGvsgDnjIxtDDXjRvUjacwwDofBOFBcBf8fB4YCAXyqm4Rkh5grV5KTexffZ3JmY3KqwFkGrofWQWdkcLC11nsiBmgO1AWE+J7kvn9YLhhK8kIOqqD8yRNGszlpX7HPujLhqdRsKvStbXhx2jPu98EW99UEinwgTmu+Gg1XlrjAf62uLO7OdGRX/dC+1Ik5WNCSaHDeaZrL3wMM7WRsCup50YZfk4qRLnz+b8fuyR0Mq7f1UvaPlyiW//NpkJfFHfHZW6rYrmU+wAZOVTHLKbpPr4slqfqvlg6viMFSEOFXgqpgUwP+knnxbiYIoIoLfbUfLAVduB/fIVBZzAu9lg0Fi8T1ahGhJ0dNtx5QSFdUOqEjWPU0N7+ciOF+DrDsC1a4z7oYD0XiDBzTcgufKK+Dn7ltkwhsq+4zNYLaQMtvU0ZJyg/kVTYWdbdYxwjEeIG1o2yqyP41WmlQBAR4hRYiXiv6t7UdXxIQciWdx81oQYRpYSfbDQvDlsDOQv2t7deW/tMT+mOgvjqwluaFzYYla+I2bYqdK9xfogGoJECii3lxz9Afb7RxJ+YhvuuFL7sN5sYHCZ6bnRpxs94WFqaP+79UvnU4eYK0bctNXJtLxygXpe+oUVz51mGChbNP7w8X9IWU4onMyCqRrxpzkaJQMM0KfPNkXCw1VEcs2DaYN4tKi6pHc9AufjjQEiTWw/mlYePGpKcmJ3PcikW0fIUuBmjJEP2wEE5eJRAx1gL9EnJ/w+BtK5Dxk3ngCo8OU5VvbvM5Y3Nx9cCc1hHXPObKotKHEt38AWqEAJ6YF5HTbGskH6o0gBQF8ajOB/mQcHM2gs0YmVVGJCDBjBnoWMcVFpUD0fXqFXGC8n5Y972mvoT/ud4fmJFivQ01zg8PnpoSgDua81Q1ZBJ2mGu6rdK+64dLUrX/+eb 81+uzehb dXPyEgR8Iv2F1buoghfZGl2jPHwHBLnv4fl41Uxsh/1hHECCTDQAFiNbDYZV3+yCU8axap37nDHr1Hau42TGrhaSaH3cX0SMrBSpTdfamxbkQAGSG4pFxpKWbLNKIq/YUSpg0JSRfWiPNPOQ48UjImazizYZtNZ4IwlMT/87OUH2qk8TsbhPVa0ciweFbzvuvnet2GC1lYZxux5v9JVf8gIQ7WfNXzW0mWu7UF9ESD6rbvUQFQ0jK8mNDziIR4lkGq1nR5LHYrCjyFru3GBBvl2lJlKc1aEzdQkqoUvAhZCtZChdZBOUElNDDEGDZG8EfCwFvgCgOFCSXsX55lmqy7sOQ/+sgznfPO5IxhaGYrFAqbz0WsD0PPW8+nJJ1iA0C3r+NhOonoK2vW6RKWz3HdunFSTyjqsOXxpiWsh32mFO7WkKs2QHn9RLfgA== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Mon, Jan 30, 2023 at 01:38:16PM +0500, Muhammad Usama Anjum wrote: > On 1/27/23 8:32 PM, Peter Xu wrote: > > On Fri, Jan 27, 2023 at 11:47:14AM +0500, Muhammad Usama Anjum wrote: > >>>> diff --git a/mm/memory.c b/mm/memory.c > >>>> index 4000e9f017e0..8c03b133d483 100644 > >>>> --- a/mm/memory.c > >>>> +++ b/mm/memory.c > >>>> @@ -3351,6 +3351,18 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) > >>>> > >>>> if (likely(!unshare)) { > >>>> if (userfaultfd_pte_wp(vma, *vmf->pte)) { > >>>> + if (userfaultfd_wp_async(vma)) { > >>>> + /* > >>>> + * Nothing needed (cache flush, TLB invalidations, > >>>> + * etc.) because we're only removing the uffd-wp bit, > >>>> + * which is completely invisible to the user. This > >>>> + * falls through to possible CoW. > >>> > >>> Here it says it falls through to CoW, but.. > >>> > >>>> + */ > >>>> + pte_unmap_unlock(vmf->pte, vmf->ptl); > >>>> + set_pte_at(vma->vm_mm, vmf->address, vmf->pte, > >>>> + pte_clear_uffd_wp(*vmf->pte)); > >>>> + return 0; > >>> > >>> ... it's not doing so. The original lines should do: > >>> > >>> https://lore.kernel.org/all/Y8qq0dKIJBshua+X@x1n/ > > > > [1] > > > >>> > >>> Side note: you cannot modify pgtable after releasing the pgtable lock. > >>> It's racy. > >> If I don't unlock and return after removing the UFFD_WP flag in case of > >> async wp, the target just gets stuck. Maybe the pte lock is not unlocked in > >> some path. > >> > >> If I unlock and don't return, the crash happens. > >> > >> So I'd put unlock and return from here. Please comment on the below patch > >> and what do you think should be done. I've missed something. > > > > Have you tried to just use exactly what I suggested in [1]? I'll paste > > again: > > > > ---8<--- > > diff --git a/mm/memory.c b/mm/memory.c > > index 4000e9f017e0..09aab434654c 100644 > > --- a/mm/memory.c > > +++ b/mm/memory.c > > @@ -3351,8 +3351,20 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) > > > > if (likely(!unshare)) { > > if (userfaultfd_pte_wp(vma, *vmf->pte)) { > > - pte_unmap_unlock(vmf->pte, vmf->ptl); > > - return handle_userfault(vmf, VM_UFFD_WP); > > + if (userfaultfd_uffd_wp_async(vma)) { > > + /* > > + * Nothing needed (cache flush, TLB > > + * invalidations, etc.) because we're only > > + * removing the uffd-wp bit, which is > > + * completely invisible to the user. > > + * This falls through to possible CoW. > > + */ > > + set_pte_at(vma->vm_mm, vmf->address, vmf->pte, > > + pte_clear_uffd_wp(*vmf->pte)); > > + } else { > > + pte_unmap_unlock(vmf->pte, vmf->ptl); > > + return handle_userfault(vmf, VM_UFFD_WP); > > + } > > } > > ---8<--- > > > > Note that there's no "return", neither the unlock. The lock is used in the > > follow up write fault resolution and it's released later. > I've tried out the exact patch above. This doesn't work. The pages keep > their WP flag even after being resolved in do_wp_page() while is written on > the page. > > So I'd added pte_unmap_unlock() and return 0 from here. This makes the > patch to work. Maybe you can try this on your end to see what I'm seeing here? Oh maybe it's because it didn't update orig_pte. If you want, you can try again with doing so by changing: set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte_clear_uffd_wp(*vmf->pte)); into: pte_t pte = pte_clear_uffd_wp(*vmf->pte); set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte); /* Update this to be prepared for following up CoW handling */ vmf->orig_pte = pte; > > > > > Meanwhile please fully digest how pgtable lock is used in this path before > > moving forward on any of such changes. > > > >> > >>> > >>>> + } > >>>> pte_unmap_unlock(vmf->pte, vmf->ptl); > >>>> return handle_userfault(vmf, VM_UFFD_WP); > >>>> } > >>>> @@ -4812,8 +4824,21 @@ static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf) > >>>> > >>>> if (vma_is_anonymous(vmf->vma)) { > >>>> if (likely(!unshare) && > >>>> - userfaultfd_huge_pmd_wp(vmf->vma, vmf->orig_pmd)) > >>>> - return handle_userfault(vmf, VM_UFFD_WP); > >>>> + userfaultfd_huge_pmd_wp(vmf->vma, vmf->orig_pmd)) { > >>>> + if (userfaultfd_wp_async(vmf->vma)) { > >>>> + /* > >>>> + * Nothing needed (cache flush, TLB invalidations, > >>>> + * etc.) because we're only removing the uffd-wp bit, > >>>> + * which is completely invisible to the user. This > >>>> + * falls through to possible CoW. > >>>> + */ > >>>> + set_pmd_at(vmf->vma->vm_mm, vmf->address, vmf->pmd, > >>>> + pmd_clear_uffd_wp(*vmf->pmd)); > >>> > >>> This is for THP, not hugetlb. > >>> > >>> Clearing uffd-wp bit here for the whole pmd is wrong to me, because we > >>> track writes in small page sizes only. We should just split. > >> By detecting if the fault is async wp, just splitting the PMD doesn't work. > >> The below given snippit is working right now. But definately, the fault of > >> the whole PMD is being resolved which if we can bypass by correctly > >> splitting would be highly desirable. Can you please take a look on UFFD > >> side and suggest the changes? It would be much appreciated. I'm attaching > >> WIP v9 patches for you to apply on next(next-20230105) and pagemap_ioctl > >> selftest can be ran to test things after making changes. > > > > Can you elaborate why thp split didn't work? Or if you want, I can look > > into this and provide the patch to enable uffd async mode. > Sorry, I was doing the wrong way. Splitting the page does work. What do you > think about the following: > > --- a/mm/memory.c > +++ b/mm/memory.c > @@ -3351,6 +3351,17 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) > > if (likely(!unshare)) { > if (userfaultfd_pte_wp(vma, *vmf->pte)) { > + if (userfaultfd_wp_async(vma)) { > + /* > + * Nothing needed (cache flush, TLB invalidations, > + * etc.) because we're only removing the uffd-wp bit, > + * which is completely invisible to the user. > + */ > + set_pte_at(vma->vm_mm, vmf->address, vmf->pte, > + pte_clear_uffd_wp(*vmf->pte)); > + pte_unmap_unlock(vmf->pte, vmf->ptl); > + return 0; Please give it a shot with above to see whether we can avoid the "return 0" here. > + } > pte_unmap_unlock(vmf->pte, vmf->ptl); > return handle_userfault(vmf, VM_UFFD_WP); > } > @@ -4812,8 +4823,13 @@ static inline vm_fault_t wp_huge_pmd(struct vm_fault > *vmf) > > if (vma_is_anonymous(vmf->vma)) { > if (likely(!unshare) && > - userfaultfd_huge_pmd_wp(vmf->vma, vmf->orig_pmd)) > + userfaultfd_huge_pmd_wp(vmf->vma, vmf->orig_pmd)) { > + if (userfaultfd_wp_async(vmf->vma)) { > + __split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL); > + return 0; Same here, I hope it'll work for you if you just goto __split_huge_pmd() right below and return with VM_FAULT_FALLBACK. It avoids one more round of fault just like the pte case above. > + } > return handle_userfault(vmf, VM_UFFD_WP); > + } > return do_huge_pmd_wp_page(vmf); > } -- Peter Xu