From: David Hildenbrand
Organization: Red Hat
Date: Tue, 2 May 2023 19:13:49 +0200
Subject: Re: [PATCH v7 3/3] mm/gup: disallow FOLL_LONGTERM GUP-fast writing to file-backed mappings
To: Lorenzo Stoakes, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Jason Gunthorpe, Jens Axboe, Matthew Wilcox, Dennis Dalessandro, Leon Romanovsky, Christian Benvenuti, Nelson Escobar, Bernard Metzler, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter, Bjorn Topel, Magnus Karlsson, Maciej Fijalkowski, Jonathan Lemon, David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni, Christian Brauner, Richard Cochran, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, linux-fsdevel@vger.kernel.org, linux-perf-users@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, Oleg Nesterov, John Hubbard, Jan Kara, Kirill A. Shutemov, Pavel Begunkov, Mika Penttila, Dave Chinner, Theodore Ts'o, Peter Xu, Matthew Rosato, Paul E. McKenney, Christian Borntraeger, Mike Rapoport

[...]

> +{
> +        struct address_space *mapping;
> +
> +        /*
> +         * GUP-fast disables IRQs - this prevents IPIs from causing page tables
> +         * to disappear from under us, as well as preventing RCU grace periods
> +         * from making progress (i.e. implying rcu_read_lock()).
> +         *
> +         * This means we can rely on the folio remaining stable for all
> +         * architectures, both those that set CONFIG_MMU_GATHER_RCU_TABLE_FREE
> +         * and those that do not.
> +         *
> +         * We get the added benefit that given inodes, and thus address_space,
> +         * objects are RCU freed, we can rely on the mapping remaining stable
> +         * here with no risk of a truncation or similar race.
> +         */
> +        lockdep_assert_irqs_disabled();
> +
> +        /*
> +         * If no mapping can be found, this implies an anonymous or otherwise
> +         * non-file backed folio so in this instance we permit the pin.
> +         *
> +         * shmem and hugetlb mappings do not require dirty-tracking so we
> +         * explicitly whitelist these.
> +         *
> +         * Other non dirty-tracked folios will be picked up on the slow path.
> +         */
> +        mapping = folio_mapping(folio);
> +        return !mapping || shmem_mapping(mapping) || folio_test_hugetlb(folio);

"Folios in the swap cache return the swap mapping" -- you might disallow pinning anonymous pages that are in the swap cache.

I recall that there are corner cases where we can end up with an anon page that's mapped writable but still in the swap cache ... so you'd fall back to the GUP slow path (acceptable for these corner cases, I guess); however, the comment in particular is a bit misleading then.

So I'd suggest not dropping the folio_test_anon() check, or open-coding it ... which will most certainly make this piece of code easier to follow when staring at folio_mapping().
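E.g., something like the following -- a completely untested sketch, just to spell out the open-coded variant I have in mind (folio_longterm_write_pin_allowed() as in your patch):

static bool folio_longterm_write_pin_allowed(struct folio *folio)
{
        struct address_space *mapping;

        lockdep_assert_irqs_disabled();

        /*
         * Anonymous folios never require dirty tracking, even while they
         * are still in the swap cache: folio_mapping() returns the swap
         * mapping for those, so without this check we'd needlessly send
         * them to the slow path.
         */
        if (folio_test_anon(folio))
                return true;

        /* shmem and hugetlb mappings do not require dirty tracking. */
        mapping = folio_mapping(folio);
        return !mapping || shmem_mapping(mapping) ||
               folio_test_hugetlb(folio);
}

That way, anon pages that happen to still be in the swap cache keep getting pinned on the fast path, and the comment matches what the code actually does.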
Or to spell it out in the comment (usually I prefer code over comments).

> +}
> +
>  /**
>   * try_grab_folio() - Attempt to get or pin a folio.
>   * @page: pointer to page to be grabbed
> @@ -123,6 +170,8 @@ static inline struct folio *try_get_folio(struct page *page, int refs)
>   */
>  struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
>  {
> +        bool is_longterm = flags & FOLL_LONGTERM;
> +
>          if (unlikely(!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page)))
>                  return NULL;
>
> @@ -136,8 +185,7 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
>           * right zone, so fail and let the caller fall back to the slow
>           * path.
>           */
> -        if (unlikely((flags & FOLL_LONGTERM) &&
> -                     !is_longterm_pinnable_page(page)))
> +        if (unlikely(is_longterm && !is_longterm_pinnable_page(page)))
>                  return NULL;
>
>          /*
> @@ -148,6 +196,16 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
>          if (!folio)
>                  return NULL;
>
> +        /*
> +         * Can this folio be safely pinned? We need to perform this
> +         * check after the folio is stabilised.
> +         */
> +        if ((flags & FOLL_WRITE) && is_longterm &&
> +            !folio_longterm_write_pin_allowed(folio)) {
> +                folio_put_refs(folio, refs);
> +                return NULL;
> +        }

So we perform this check before validating whether the PTE changed. Hmm, naturally, I would have done it afterwards. IIRC, without IPI syncs during TLB flush (i.e., CONFIG_MMU_GATHER_RCU_TABLE_FREE), there is the possibility that

(1) we look up the PTE;
(2) the page gets unmapped and freed;
(3) the page gets reallocated and used;
(4) we pin the page;
(5) we dereference page->mapping.

If page->mapping is by then used by whoever reallocated the page for something completely different (i.e., not a pointer to anything reasonable), I wonder if we might be in trouble. Checking first whether the PTE changed makes sure that what we pinned and what we're looking at is what we expected; see the sketch at the end of this mail for the ordering I have in mind.

... I can spot that the page_is_secretmem() check is also done before that. But it at least makes sure that it's still an LRU page before staring at the mapping (making it a little safer?).

BUT, I keep messing up this part of the story. Maybe it all works as expected because we will be synchronizing RCU somehow before actually freeing the page in the !IPI case ... but I think that's only true for page tables with CONFIG_MMU_GATHER_RCU_TABLE_FREE.

--
Thanks,

David / dhildenb
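P.S. to make the ordering concrete: a completely untested sketch of roughly how gup_pte_range() could do the new check after the PTE re-check, instead of inside try_grab_folio() (names as in your patch; the surrounding code is from memory, so take it with a grain of salt):

        folio = try_grab_folio(page, 1, flags);
        if (!folio)
                goto pte_unmap;

        /*
         * Re-check the PTE after grabbing the folio: only an unchanged
         * PTE guarantees that the folio was not freed and reallocated
         * underneath us, i.e. that folio->mapping is not some reused
         * memory.
         */
        if (unlikely(pte_val(pte) != pte_val(*ptep))) {
                gup_put_folio(folio, 1, flags);
                goto pte_unmap;
        }

        /* Only now is it safe to stare at the mapping. */
        if ((flags & FOLL_WRITE) && (flags & FOLL_LONGTERM) &&
            !folio_longterm_write_pin_allowed(folio)) {
                gup_put_folio(folio, 1, flags);
                goto pte_unmap;
        }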