Message-ID: <60bbe75b-8dd1-3c46-2f8f-c2407493ffb8@redhat.com>
Date: Wed, 31 Aug 2022 09:15:16 +0200
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
To: Jason Gunthorpe
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 Andrew Morton, Mel Gorman, John Hubbard,
 "Matthew Wilcox (Oracle)", Andrea Arcangeli, Hugh Dickins, Peter Xu
Subject: Re: [PATCH v1 2/3] mm/gup: use gup_can_follow_protnone() also in GUP-fast
References: <20220825164659.89824-1-david@redhat.com>
 <20220825164659.89824-3-david@redhat.com>
 <1892f6de-fd22-0e8b-3ff6-4c8641e1c68e@redhat.com>
 <2e20c90d-4d1f-dd83-aa63-9d8d17021263@redhat.com>
 <9ce3aaaa-71a6-5a81-16a3-36e6763feb91@redhat.com>

On 30.08.22 21:57, Jason Gunthorpe wrote:
> On Tue, Aug 30, 2022 at 08:53:06PM +0200, David Hildenbrand wrote:
>> On 30.08.22 20:45, Jason Gunthorpe wrote:
>>> On Tue, Aug 30, 2022 at 08:23:52PM +0200, David Hildenbrand wrote:
>>>> ... and looking into the details of TLB flush and GUP-fast interaction
>>>> nowadays, that case is no longer relevant.
>>>> A TLB flush is no longer sufficient to stop concurrent GUP-fast
>>>> ever since we introduced generic RCU GUP-fast.
>>>
>>> Yes, we've had RCU GUP-fast for a while, and it is more widely used
>>> now, IIRC.
>>>
>>> It has been a bit, but if I remember, GUP-fast in RCU mode worked on
>>> a few principles:
>>>
>>>  - The PTE page must not be freed without RCU.
>>>  - The PTE page content must be convertible to a struct page using
>>>    the usual rules (e.g., PTE special).
>>>  - That struct page refcount may go from 0->1 inside the RCU
>>>    critical section.
>>>  - In case the refcount goes from 0->1, there must be sufficient
>>>    barriers such that GUP-fast observing the refcount of 1 will also
>>>    observe that the PTE entry has changed; i.e., before the refcount
>>>    is dropped in the zap it has to clear the PTE entry, the refcount
>>>    decrement has to be a 'release', and the refcount increment in
>>>    GUP-fast has to be an 'acquire'.
>>>  - The rest of the system must tolerate speculative refcount
>>>    increments from GUP on any random page.
>>>
>>> The basic idea being that if GUP-fast obtains a valid reference on a
>>> page *and* the PTE entry has not changed, then everything is fine.
>>>
>>> The tricks with TLB invalidation are just a "poor man's" RCU, and
>>> arguably these days aren't really needed, since I think we could
>>> make everything use real RCU always, without penalty, if we really
>>> wanted.
>>>
>>> Today we can create a unique 'struct pagetable_page', as Matthew has
>>> been doing in other places, that guarantees an rcu_head is always
>>> available for every page used in a page table. Using that, we could
>>> drop the code in the TLB flusher that allocates memory for the
>>> rcu_head and hopes for the best. (Or is the common struct page
>>> rcu_head already guaranteed to exist for page-table pages nowadays?)
>>>
>>> IMHO that is the main reason we still have the non-RCU mode at all.
>>
>> Good, I managed to attract the attention of someone who understands
>> that machinery :)
>>
>> While validating whether the GUP-fast and PageAnonExclusive code work
>> correctly, I started looking at the whole RCU GUP-fast machinery. I
>> do have a patch to improve PageAnonExclusive clearing (I think we're
>> missing memory barriers to make it work as expected in all possible
>> cases), but I also eventually stumbled over a more generic issue that
>> might need memory barriers.
>>
>> Any thoughts on whether I am missing something, or whether this is
>> actually missing memory barriers?
>
> I don't like the use of smp_mb() very much; I deliberately chose the
> more modern language of release/acquire because it makes it a lot
> clearer what the barriers are doing.
>
> So, if we dig into it, using what I said above, the atomic refcount
> path is:
>
>   gup_pte_range()
>     try_grab_folio()
>       try_get_folio()
>         folio_ref_try_add_rcu()
>           folio_ref_add_unless()
>             page_ref_add_unless()
>               atomic_add_unless()

Right, that seems to work as expected for checking the refcount after
clearing the PTE.

Unfortunately, it's not sufficient to identify whether a page may be
pinned, because the flow continues as

  folio = try_get_folio(page, refs);
  ...
  if (folio_test_large(folio))
          atomic_add(refs, folio_pincount_ptr(folio));
  else
          folio_ref_add(folio, ...);

So I guess we'd need an smp_mb() before re-checking the PTE for that
case.

> So that wants to be an acquire.
>
> The pairing release is in the page table code that does the
> put_page(); it wants to be an atomic_dec_return() as a release.
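
To make that large-folio case concrete, this is roughly the ordering I
have in mind (a simplified sketch, not the exact mm/gup.c flow; error
handling and the surrounding PTE-walk loop are omitted):

  /* GUP-fast pinning a large folio (FOLL_PIN), simplified: */
  folio = try_get_folio(page, refs);    /* fully ordered on success */
  if (folio_test_large(folio))
          /* atomic_add() has no return value -> no implied ordering */
          atomic_add(refs, folio_pincount_ptr(folio));

  /*
   * Proposed barrier: without it, the atomic_add() above and the PTE
   * re-read below could be reordered, so the pin might not yet be
   * visible to a concurrent unmap when we decide the PTE is intact.
   */
  smp_mb();

  if (unlikely(pte_val(pte) != pte_val(*ptep))) {
          gup_put_folio(folio, refs, flags);
          return 0;
  }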
>
> Now, we go and look at Documentation/atomic_t.txt to try to
> understand what the ordering semantics of the atomics we are using
> are, and we become dazed and confused like me:

I read that 3 times and got dizzy. Thanks for double-checking, very
much appreciated!

>   ORDERING  (go read memory-barriers.txt first)
>   --------
>
>    - RMW operations that have a return value are fully ordered;
>
>    - RMW operations that are conditional are unordered on FAILURE,
>      otherwise the above rules apply.
>
>   Fully ordered primitives are ordered against everything prior and
>   everything subsequent. Therefore a fully ordered primitive is like
>   having an smp_mb() before and an smp_mb() after the primitive.
>
> So, I take that to mean that both atomic_add_unless() and
> atomic_dec_return() are "fully ordered", and "fully ordered" is a
> superset of acquire/release.
>
> Thus, we already have the necessary barriers integrated into the
> atomics being used.
>
> The smp_mb__after_atomic() stuff is to be used with atomics that
> don't return values; there are some examples in the doc.

Yes, I missed the point that RMW operations that return a value are
fully ordered and imply an smp_mb() before and after.
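
Spelling that out for myself, the pairing then reads roughly like this
(a simplified sketch relying on the fully ordered RMWs; the call sites
are illustrative, and free_page_via_rcu() is a made-up stand-in for
whatever frees the page once the refcount hits zero):

  /* unmap/zap side -- the "release" half: */
  pte_clear(mm, addr, ptep);                     /* 1) kill the PTE  */
  if (atomic_dec_return(&page->_refcount) == 0)  /* fully ordered,   */
          free_page_via_rcu(page);               /* so 1) is visible */
                                                 /* before the drop  */

  /* GUP-fast side -- the "acquire" half: */
  if (!page_ref_add_unless(page, refs, 0))       /* fully ordered on */
          return 0;                              /* success          */
  if (pte_val(pte) != pte_val(*ptep)) {          /* re-read cannot   */
          put_page_refs(page, refs);             /* move before the  */
          return 0;                              /* increment        */
  }

--
Thanks,

David / dhildenb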