From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 21 Dec 2020 00:29:53 -0700
From: Yu Zhao
To: Nadav Amit
Cc: Andrea Arcangeli, linux-mm, Peter Xu, lkml, Pavel Emelyanov,
	Mike Kravetz, Mike Rapoport, stable@vger.kernel.org, minchan@kernel.org,
	Andy Lutomirski, Will Deacon, Peter Zijlstra
Subject: Re: [PATCH] mm/userfaultfd: fix memory corruption due to writeprotect
Message-ID: 
References: <20201219043006.2206347-1-namit@vmware.com>
	<729A8C1E-FC5B-4F46-AE01-85E00C66DFFF@gmail.com>
	<7986D881-3EBD-4197-A1A0-3B06BB2300B1@gmail.com>
	<7EB8560C-620A-433D-933C-996D7E4F2CA1@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To:
	<7EB8560C-620A-433D-933C-996D7E4F2CA1@gmail.com>

On Sun, Dec 20, 2020 at 09:39:06PM -0800, Nadav Amit wrote:
> > On Dec 20, 2020, at 9:25 PM, Nadav Amit wrote:
> > 
> >> On Dec 20, 2020, at 9:12 PM, Yu Zhao wrote:
> >> 
> >> On Sun, Dec 20, 2020 at 08:36:15PM -0800, Nadav Amit wrote:
> >>>> On Dec 19, 2020, at 6:20 PM, Andrea Arcangeli wrote:
> >>>> 
> >>>> On Sat, Dec 19, 2020 at 02:06:02PM -0800, Nadav Amit wrote:
> >>>>>> On Dec 19, 2020, at 1:34 PM, Nadav Amit wrote:
> >>>>>> 
> >>>>>> [ cc’ing some more people who have experience with similar problems ]
> >>>>>> 
> >>>>>>> On Dec 19, 2020, at 11:15 AM, Andrea Arcangeli wrote:
> >>>>>>> 
> >>>>>>> Hello,
> >>>>>>> 
> >>>>>>> On Fri, Dec 18, 2020 at 08:30:06PM -0800, Nadav Amit wrote:
> >>>>>>>> Analyzing this problem indicates that there is a real bug since
> >>>>>>>> mmap_lock is only taken for read in mwriteprotect_range(). This might
> >>>>>>> 
> >>>>>>> Never having to take the mmap_sem for writing, and in turn never
> >>>>>>> blocking, in order to modify the pagetables is quite an important
> >>>>>>> feature in uffd that justifies uffd instead of mprotect. It's not the
> >>>>>>> most important reason to use uffd, but it'd be nice if that guarantee
> >>>>>>> would remain also for the UFFDIO_WRITEPROTECT API, not only for the
> >>>>>>> other pgtable manipulations.
> >>>>>>> 
> >>>>>>>> Consider the following scenario with 3 CPUs (cpu2 is not shown):
> >>>>>>>> 
> >>>>>>>> cpu0                            cpu1
> >>>>>>>> ----                            ----
> >>>>>>>> userfaultfd_writeprotect()
> >>>>>>>> [ write-protecting ]
> >>>>>>>> mwriteprotect_range()
> >>>>>>>> mmap_read_lock()
> >>>>>>>> change_protection()
> >>>>>>>> change_protection_range()
> >>>>>>>> ...
> >>>>>>>> change_pte_range()
> >>>>>>>> [ defer TLB flushes ]
> >>>>>>>>                                 userfaultfd_writeprotect()
> >>>>>>>>                                 mmap_read_lock()
> >>>>>>>>                                 change_protection()
> >>>>>>>>                                 [ write-unprotect ]
> >>>>>>>>                                 ...
> >>>>>>>>                                 [ unprotect PTE logically ]
> >>>> 
> >>>> Is the uffd selftest failing with upstream or after your kernel
> >>>> modification that removes the tlb flush from unprotect?
> >>> 
> >>> Please see my reply to Yu. I was wrong in this analysis, and I sent a
> >>> correction to it. The problem actually happens when
> >>> userfaultfd_writeprotect() unprotects the memory.
> >>> 
> >>>> 	} else if (uffd_wp_resolve) {
> >>>> 		/*
> >>>> 		 * Leave the write bit to be handled
> >>>> 		 * by PF interrupt handler, then
> >>>> 		 * things like COW could be properly
> >>>> 		 * handled.
> >>>> 		 */
> >>>> 		ptent = pte_clear_uffd_wp(ptent);
> >>>> 	}
> >>>> 
> >>>> Upstream this will still do pages++, there's a tlb flush before
> >>>> change_protection can return here, so I'm confused.
> >>> 
> >>> You are correct. The problem I encountered with userfaultfd_writeprotect()
> >>> is during the unprotect path.
> >>> 
> >>> Having said that, I think that there are additional scenarios that are
> >>> problematic. Consider for instance madvise_dontneed_free() racing with
> >>> userfaultfd_writeprotect(). If madvise_dontneed_free() completed
> >>> removing the PTEs, but still did not flush, change_pte_range() will see
> >>> non-present PTEs, decide a flush is not needed, and then
> >>> change_protection_range() will not do a flush, and will return while
> >>> the memory is still not protected.
> >>> 
> >>>> I don't share your concern. What matters is the PT lock, so it
> >>>> wouldn't be one per pte, but at least an order 9 higher, but let's
> >>>> assume one flush per pte.
> >>>> 
> >>>> It's either a huge mapping, and then it's likely running without other
> >>>> tlb flushing in the background (postcopy snapshotting), or it's a granular
> >>>> protect with distributed shared memory, in which case the number of
> >>>> changed ptes or huge_pmds tends to be 1 anyway. So it doesn't
> >>>> matter if it's deferred.
> >>>> 
> >>>> I agree it may require a larger tlb flush review, not just mprotect,
> >>>> but it didn't sound particularly complex. Note that
> >>>> UFFDIO_WRITEPROTECT is still relatively recent, so backports won't
> >>>> risk rejects so heavy as to require a band-aid.
> >>>> 
> >>>> My second thought is, I don't see exactly the bug, and it's not clear
> >>>> whether it's upstream reproducing this, but assuming this happens on
> >>>> upstream, even ignoring everything else happening in the tlb flush
> >>>> code, this sounds purely introduced by userfaultfd_writeprotect()
> >>>> vs userfaultfd_writeprotect() (since it's the only place changing
> >>>> protection with mmap_sem for reading; note we already unmap and
> >>>> flush tlb with mmap_sem for reading in MADV_DONTNEED/MADV_FREE, which
> >>>> clears the dirty bit etc.). Flushing tlbs with mmap_sem for reading is
> >>>> nothing new; the only new thing is the flush after wrprotect.
> >>>> 
> >>>> So instead of altering any tlb flush code, would it be possible to
> >>>> just stick to mmap_lock for reading and then serialize
> >>>> userfaultfd_writeprotect() against itself with an additional
> >>>> mm->mmap_wprotect_lock mutex? That'd be a very local change to
> >>>> userfaultfd too.
> >>>> 
> >>>> Can you check whether the rule "mmap_sem for reading plus a new
> >>>> mm->mmap_wprotect_lock mutex, or mmap_sem for writing, whenever
> >>>> wrprotecting ptes" is enough to comply with the current tlb flushing
> >>>> code, so as not to require any change non-local to uffd (modulo the
> >>>> additional mutex)?
> >>> 
> >>> So I did not fully understand your solution, but I took your point and
> >>> looked again at similar cases. To be fair, despite my experience with these
> >>> deferred TLB flushes as well as Peter Zijlstra’s great documentation, I keep
> >>> getting confused (e.g., can’t we somehow combine tlb_flush_batched and
> >>> tlb_flush_pending?)
> >>> 
> >>> As I said before, my initial scenario was wrong, and the problem is not
> >>> userfaultfd_writeprotect() racing against itself. That one actually seems
> >>> benign to me.
> >>> 
> >>> Nevertheless, I do think there is a problem in change_protection_range().
> >>> Specifically, see the aforementioned scenario of a race between
> >>> madvise_dontneed_free() and userfaultfd_writeprotect().
> >>> 
> >>> So an immediate solution for such a case can be to resolve it without
> >>> holding mmap_lock for write, by just adding a test for mm_tlb_flush_nested()
> >>> in change_protection_range():
> >>> 
> >>> 	/*
> >>> 	 * Only flush the TLB if we actually modified any entries
> >>> 	 * or if there are pending TLB flushes.
> >>> 	 */
> >>> 	if (pages || mm_tlb_flush_nested(mm))
> >>> 		flush_tlb_range(vma, start, end);
> >>> 
> >>> To be fair, I am not confident I did not miss other problematic cases.
> >>> 
> >>> But for now, this change, with the preserve_write change, should address
> >>> the immediate issues. Let me know whether you agree.
> >> 
> >> The problem starts in uffd, and is related to tlb flush. But its focal
> >> point is in do_wp_page(). I'd suggest you look at that function and see
> >> what it does before and after the commits I listed, with the following
> >> conditions:
> >> 
> >>   PageAnon(), !PageKsm(), !PageSwapCache(), !pte_write(),
> >>   page_mapcount() = 1, page_count() > 1 or PageLocked()
> >> 
> >> when it runs against the two uffd examples you listed.
> > 
> > Thanks for your quick response. I wanted to write a lengthy response, but I
> > do want to sleep on it. I presume page_count() > 1, since I have multiple
> > concurrent page-faults on the same address in my test, but I will check.
> > 
> > Anyhow, before I give a further response, I was just wondering: since you
> > recently dealt with a soft-dirty issue, as I remember, isn't this
> > problematic COW-for-non-COW-page scenario, in which the copy races with
> > writes to a page which is protected in the PTE but not in all TLBs, also
> > problematic for soft-dirty clearing?

Yes, it has the same problem.

> Stupid me. You hold mmap_lock for write, so no, it cannot happen when
> clearing soft-dirty.

mmap_write_lock is temporarily held to update vm_page_prot for write
notifications. It doesn't help in the context of this problem.