From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
To: Qi Zheng <zhengqi.arch@bytedance.com>, akpm@linux-foundation.org, kirill.shutemov@linux.intel.com, mika.penttila@nextfour.com, jgg@nvidia.com, tglx@linutronix.de, willy@infradead.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, muchun.song@linux.dev
Subject: Re: [RFC PATCH 0/7] Try to free empty and zero user PTE page table pages
Date: Mon, 29 Aug 2022 12:09:43 +0200
In-Reply-To: <20220825101037.96517-1-zhengqi.arch@bytedance.com>
References: <20220825101037.96517-1-zhengqi.arch@bytedance.com>

On 25.08.22 12:10, Qi Zheng wrote:
> Hi,
>
> Before this, in order to free empty user PTE page table pages, I posted
> the following patch sets of two solutions:
> - atomic refcount version:
>   https://lore.kernel.org/lkml/20211110105428.32458-1-zhengqi.arch@bytedance.com/
> - percpu refcount version:
>   https://lore.kernel.org/lkml/20220429133552.33768-1-zhengqi.arch@bytedance.com/
>
> Both patch sets have the following behavior:
> a. Protect the page table walker by hooking pte_offset_map{_lock}() and
>    pte_unmap{_unlock}()
> b. Will automatically reclaim PTE page table pages in the non-reclaiming
>    path
>
> For behavior a, there may be the following disadvantages mentioned by
> David Hildenbrand:
> - It introduces a lot of complexity. It's not something easy to get in
>   and most probably not easy to get out again
> - It is inconvenient to extend to other architectures. For example, for
>   the contiguous ptes of arm64, the pointer to the PTE entry is obtained
>   directly through pte_offset_kernel() instead of pte_offset_map{_lock}()
> - It has been found that pte_unmap() is missing in some places that only
>   execute on 64-bit systems, which is a disaster for pte_refcount
>
> For behavior b, it may not be necessary to actively reclaim PTE pages,
> especially when memory pressure is not high, and deferring to the reclaim
> path may be a better choice.
>
> In addition, the above two solutions only handle empty PTE pages (a PTE
> page where all entries are empty), and do not deal with the zero PTE page
> (a PTE page where all page table entries map the shared zero page)
> mentioned by David Hildenbrand:
>
> "Especially the shared zeropage is nasty, because there are
> sane use cases that can trigger it. Assume you have a VM
> (e.g., QEMU) that inflated the balloon to return free memory
> to the hypervisor.
>
> Simply migrating that VM will populate the shared zeropage to
> all inflated pages, because migration code ends up reading all
> VM memory. Similarly, the guest can just read that memory as
> well, for example, when the guest issues kdump itself."
>
> The purpose of this RFC patch is to continue the discussion and fix the
> above issues. The following is the solution to be discussed.

Thanks for providing an alternative! It's certainly easier to digest :)

>
> In order to quickly identify the above two types of PTE pages, we still
> introduce a pte_refcount for each PTE page. We put the mapped and zero
> PTE entry counters into the pte_refcount of the PTE page. The bitmask has
> the following meaning:
>
> - bits 0-9 are mapped PTE entry count
> - bits 10-19 are zero PTE entry count

I guess we could factor the zero PTE change out, to have an even simpler
first version. The issue is that some features (userfaultfd) don't expect
page faults when something was already mapped previously.

PTE markers as introduced by Peter might require a thought -- we don't
have anything mapped but do have additional information that we have to
maintain.
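For illustration only, here is a minimal standalone C model of the
pte_refcount layout quoted above. It is a sketch, not code from the patch
set: the helper names are hypothetical and PTRS_PER_PTE is hard-coded to
the common value of 512.

/*
 * Standalone model of the pte_refcount layout described above (bits 0-9:
 * mapped PTE entries, bits 10-19: zero-page PTE entries).  Names are made
 * up for illustration; PTRS_PER_PTE is hard-coded to 512.
 */
#include <assert.h>
#include <stdint.h>

#define PTRS_PER_PTE		512u

#define PTE_MAPPED_SHIFT	0
#define PTE_MAPPED_MASK		(0x3ffu << PTE_MAPPED_SHIFT)	/* bits 0-9 */
#define PTE_ZERO_SHIFT		10
#define PTE_ZERO_MASK		(0x3ffu << PTE_ZERO_SHIFT)	/* bits 10-19 */

static inline unsigned int pte_mapped_count(uint32_t refcount)
{
	return (refcount & PTE_MAPPED_MASK) >> PTE_MAPPED_SHIFT;
}

static inline unsigned int pte_zero_count(uint32_t refcount)
{
	return (refcount & PTE_ZERO_MASK) >> PTE_ZERO_SHIFT;
}

/* Empty PTE page: no entry in the page maps anything. */
static inline int pte_page_empty(uint32_t refcount)
{
	return pte_mapped_count(refcount) == 0;
}

/* Zero PTE page: every entry maps the shared zeropage. */
static inline int pte_page_all_zero(uint32_t refcount)
{
	return pte_zero_count(refcount) == PTRS_PER_PTE;
}

int main(void)
{
	/* A freshly allocated, never populated PTE page. */
	assert(pte_page_empty(0));

	/* A page whose zero-entry counter has reached PTRS_PER_PTE. */
	assert(pte_page_all_zero((uint32_t)PTRS_PER_PTE << PTE_ZERO_SHIFT));
	return 0;
}

Since 512 fits in 10 bits, both counters fit in a single 32-bit value as
the cover letter describes; how the two counters interact for a zeropage
mapping is left open here.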
> In this way, when the mapped PTE entry count is 0, we know that the
> current PTE page is an empty PTE page, and when the zero PTE entry count
> is PTRS_PER_PTE, we know that the current PTE page is a zero PTE page.
>
> We only update the pte_refcount when setting and clearing a PTE entry,
> and since both are protected by the pte lock, pte_refcount can be a
> non-atomic variable with little performance overhead.
>
> For page table walkers, we mutually exclude them by holding the write
> lock of mmap_lock when doing pmd_clear() (in the newly added path to
> reclaim PTE pages).

I recall when I played with that idea that the mmap_lock is not
sufficient to rip out a page table. IIRC, we also have to hold the rmap
lock(s), to prevent RMAP walkers from still using the page table.

Especially if multiple VMAs intersect a page table, things might get
tricky, because multiple rmap locks could be involved.

We might want/need another mechanism to synchronize against page table
walkers.

-- 
Thanks,

David / dhildenb
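For illustration, a rough sketch of the reclaim-side exclusion the cover
letter proposes (pmd_clear() with the mmap_lock held for writing), with
the caveat from the reply noted in a comment. Kernel context is assumed;
pte_page_reclaimable() is a hypothetical stand-in for the pte_refcount
checks, and this is not code from the patch set.

#include <linux/mm.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>

/*
 * Sketch only -- not from the patch set.  Rough shape of a reclaim-side
 * path as described in the cover letter: decide based on pte_refcount,
 * then rip out the PTE page with the mmap_lock held for writing.  As
 * noted in the reply above, this alone is NOT sufficient: rmap walkers,
 * which do not take the mmap_lock, would also have to be excluded.
 */
static bool try_to_free_user_pte_page(struct mm_struct *mm,
				      struct vm_area_struct *vma,
				      pmd_t *pmd, unsigned long addr)
{
	pgtable_t token;
	spinlock_t *ptl;

	mmap_assert_write_locked(mm);		/* proposed exclusion */

	ptl = pmd_lock(mm, pmd);
	if (!pmd_present(*pmd) || pmd_trans_huge(*pmd)) {
		spin_unlock(ptl);
		return false;
	}

	/* Hypothetical helper: re-check the counters under the lock. */
	if (!pte_page_reclaimable(pmd)) {
		spin_unlock(ptl);
		return false;
	}

	token = pmd_pgtable(*pmd);
	pmd_clear(pmd);
	spin_unlock(ptl);

	/* Stale translations must not outlive the freed page table. */
	flush_tlb_range(vma, addr & PMD_MASK, (addr & PMD_MASK) + PMD_SIZE);

	pte_free(mm, token);
	mm_dec_nr_ptes(mm);
	return true;
}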