From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 11 Sep 2024 08:56:29 +0200
From: Michal Hocko <mhocko@suse.com>
To: Ackerley Tng
Cc: tabba@google.com, quic_eberman@quicinc.com, roypat@amazon.co.uk,
	jgg@nvidia.com, peterx@redhat.com, david@redhat.com, rientjes@google.com,
	fvdl@google.com, jthoughton@google.com, seanjc@google.com,
	pbonzini@redhat.com, zhiquan1.li@intel.com, fan.du@intel.com,
	jun.miao@intel.com, isaku.yamahata@intel.com, muchun.song@linux.dev,
	mike.kravetz@oracle.com, erdemaktas@google.com, vannapurve@google.com,
	qperret@google.com, jhubbard@nvidia.com, willy@infradead.org,
	shuah@kernel.org, brauner@kernel.org, bfoster@redhat.com,
	kent.overstreet@linux.dev, pvorel@suse.cz, rppt@kernel.org,
	richard.weiyang@gmail.com, anup@brainfault.org, haibo1.xu@intel.com,
	ajones@ventanamicro.com, vkuznets@redhat.com,
	maciej.wieczor-retman@intel.com, pgonda@google.com,
	oliver.upton@linux.dev, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-fsdevel@kvack.org, Oscar Salvador
Subject: Re: [RFC PATCH 00/39] 1G page support for guest_memfd
Message-ID:
References:
In-Reply-To:
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Cc Oscar for awareness

On Tue 10-09-24 23:43:31, Ackerley Tng wrote:
> Hello,
>
> This patchset is our exploration of how to support 1G pages in guest_memfd, and
> how the pages will be used in Confidential VMs.
>
> The patchset covers:
>
> + How to get 1G pages
> + Allowing mmap() of guest_memfd to userspace so that both private and shared
>   memory can use the same physical pages
> + Splitting and reconstructing pages to support conversions and mmap()
> + How the VM, userspace and guest_memfd interact to support conversions
> + Selftests to test all the above
> + Selftests also demonstrate the conversion flow between VM, userspace and
>   guest_memfd.
>
> Why 1G pages in guest memfd?
>
> Bring guest_memfd to performance and memory savings parity with VMs that are
> backed by HugeTLBfs.
>
> + Performance is improved with 1G pages by more TLB hits and faster page walks
>   on TLB misses.
> + Memory savings from 1G pages comes from HugeTLB Vmemmap Optimization (HVO).
>
> Options for 1G page support:
>
> 1. HugeTLB
> 2. Contiguous Memory Allocator (CMA)
> 3. Other suggestions are welcome!
>
> Comparison between options:
>
> 1. HugeTLB
>    + Refactor HugeTLB to separate allocator from the rest of HugeTLB
>    + Pro: Graceful transition for VMs backed with HugeTLB to guest_memfd
>      + Near term: Allows co-tenancy of HugeTLB and guest_memfd backed VMs
>    + Pro: Can provide iterative steps toward new future allocator
>    + Unexplored: Managing userspace-visible changes
>      + e.g. HugeTLB's free_hugepages will decrease if HugeTLB is used,
>        but not when future allocator is used
> 2. CMA
>    + Port some HugeTLB features to be applied on CMA
>    + Pro: Clean slate
>
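(A rough sense of scale for the HVO point above, assuming a 64-byte struct
page: a 1 GiB huge page is made up of 262,144 base pages, so its vmemmap
takes 262,144 * 64 B = 16 MiB; with HVO all but a handful of those 4 KiB
vmemmap pages can be freed, saving close to 16 MiB, i.e. roughly 1.6%, per
1 GiB page.)
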
> What would refactoring HugeTLB involve?
>
> (Some refactoring was done in this RFC, more can be done.)
>
> 1. Broadly involves separating the HugeTLB allocator from the rest of HugeTLB
>    + Brings more modularity to HugeTLB
>    + No functionality change intended
>    + Likely step towards HugeTLB's integration into core-mm
> 2. guest_memfd will use just the allocator component of HugeTLB, not including
>    the complex parts of HugeTLB like
>    + Userspace reservations (resv_map)
>    + Shared PMD mappings
>    + Special page walkers
>
> What features would need to be ported to CMA?
>
> + Improved allocation guarantees
>   + Per NUMA node pool of huge pages
>   + Subpools per guest_memfd
> + Memory savings
>   + Something like HugeTLB Vmemmap Optimization
> + Configuration/reporting features
>   + Configuration of number of pages available (and per NUMA node) at and
>     after host boot
>   + Reporting of memory usage/availability statistics at runtime
>
> HugeTLB was picked as the source of 1G pages for this RFC because it allows a
> graceful transition, and retains memory savings from HVO.
>
> To illustrate this, if a host machine uses HugeTLBfs to back VMs, and a
> confidential VM were to be scheduled on that host, some HugeTLBfs pages would
> have to be given up and returned to CMA for guest_memfd pages to be rebuilt from
> that memory. This requires memory to be reserved for HVO to be removed and
> reapplied on the new guest_memfd memory. This not only slows down memory
> allocation but also trims the benefits of HVO. Memory would have to be reserved
> on the host to facilitate these transitions.
>
> Improving how guest_memfd uses the allocator in a future revision of this RFC:
>
> To provide an easier transition away from HugeTLB, guest_memfd's use of HugeTLB
> should be limited to these allocator functions:
>
> + reserve(node, page_size, num_pages) => opaque handle
>   + Used when a guest_memfd inode is created to reserve memory from backend
>     allocator
> + allocate(handle, mempolicy, page_size) => folio
>   + To allocate a folio from guest_memfd's reservation
> + split(handle, folio, target_page_size) => void
>   + To take a huge folio, and split it to smaller folios, restore to filemap
> + reconstruct(handle, first_folio, nr_pages) => void
>   + To take a folio, and reconstruct a huge folio out of nr_pages from the
>     first_folio
> + free(handle, folio) => void
>   + To return folio to guest_memfd's reservation
> + error(handle, folio) => void
>   + To handle memory errors
> + unreserve(handle) => void
>   + To return guest_memfd's reservation to allocator backend
>
> Userspace should only provide a page size when creating a guest_memfd and should
> not have to specify HugeTLB.
>
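To make the shape of that interface a bit more concrete, below is a minimal
C sketch of the functions listed above as an ops table. The struct name, the
opaque void * handle and the exact parameter types are my own illustration
and are not taken from the patches:

	#include <linux/types.h>

	struct folio;
	struct mempolicy;

	/*
	 * Illustrative sketch only: names and signatures mirror the list in
	 * the cover letter, not the actual code in this series.
	 */
	struct guestmem_allocator_ops {
		/* reserve(node, page_size, num_pages) => opaque handle */
		void *(*reserve)(int nid, size_t page_size,
				 unsigned long num_pages);
		/* allocate(handle, mempolicy, page_size) => folio */
		struct folio *(*allocate)(void *handle, struct mempolicy *mpol,
					  size_t page_size);
		/* split a huge folio into target_page_size folios, restore to filemap */
		void (*split)(void *handle, struct folio *folio,
			      size_t target_page_size);
		/* rebuild a huge folio from nr_pages starting at first_folio */
		void (*reconstruct)(void *handle, struct folio *first_folio,
				    unsigned long nr_pages);
		/* return a folio to guest_memfd's reservation */
		void (*free)(void *handle, struct folio *folio);
		/* handle a memory error on a folio owned by the reservation */
		void (*error)(void *handle, struct folio *folio);
		/* return the whole reservation to the backend allocator */
		void (*unreserve)(void *handle);
	};

Written against such an ops table, guest_memfd would not need to know that
the backend happens to be HugeTLB, which should make a later switch to a
dedicated allocator mostly a matter of providing a second implementation.
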
> Overview of patches:
>
> + Patches 01-12
>   + Many small changes to HugeTLB, mostly to separate HugeTLBfs concepts from
>     HugeTLB, and to expose HugeTLB functions.
> + Patches 13-16
>   + Letting guest_memfd use HugeTLB
>   + Creation of each guest_memfd reserves pages from HugeTLB's global hstate
>     and puts it into the guest_memfd inode's subpool
>   + Each folio allocation takes a page from the guest_memfd inode's subpool
> + Patches 17-21
>   + Selftests for new HugeTLB features in guest_memfd
> + Patches 22-24
>   + More small changes on the HugeTLB side to expose functions needed by
>     guest_memfd
> + Patch 25:
>   + Uses the newly available functions from patches 22-24 to split HugeTLB
>     pages. In this patch, HugeTLB folios are always split to 4K before any
>     usage, private or shared.
> + Patches 26-28
>   + Allow mmap() in guest_memfd and faulting in shared pages
> + Patch 29
>   + Enables conversion between private/shared pages
> + Patch 30
>   + Required to zero folios after conversions to avoid leaking initialized
>     kernel memory
> + Patch 31-38
>   + Add selftests to test mapping pages to userspace, guest/host memory
>     sharing and update conversions tests
>   + Patch 33 illustrates the conversion flow between VM/userspace/guest_memfd
> + Patch 39
>   + Dynamically split and reconstruct HugeTLB pages instead of always
>     splitting before use. All earlier selftests are expected to still pass.
>
> TODOs:
>
> + Add logic to wait for safe_refcount [1]
> + Look into lazy splitting/reconstruction of pages
>   + Currently, when the KVM_SET_MEMORY_ATTRIBUTES is invoked, not only is the
>     mem_attr_array and faultability updated, the pages in the requested range
>     are also split/reconstructed as necessary. We want to look into delaying
>     splitting/reconstruction to fault time.
> + Solve race between folios being faulted in and being truncated
>   + When running private_mem_conversions_test with more than 1 vCPU, a folio
>     getting truncated may get faulted in by another process, causing elevated
>     mapcounts when the folio is freed (VM_BUG_ON_FOLIO).
> + Add intermediate splits (1G should first split to 2M and not split directly to
>   4K)
> + Use guest's lock instead of hugetlb_lock
> + Use multi-index xarray/replace xarray with some other data struct for
>   faultability flag
> + Refactor HugeTLB better, present generic allocator interface
>
> Please let us know your thoughts on:
>
> + HugeTLB as the choice of transitional allocator backend
> + Refactoring HugeTLB to provide generic allocator interface
> + Shared/private conversion flow
> + Requiring user to request kernel to unmap pages from userspace using
>   madvise(MADV_DONTNEED)
> + Failing conversion on elevated mapcounts/pincounts/refcounts
> + Process of splitting/reconstructing page
> + Anything else!
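As a concrete reference point for the conversion-flow questions above, here
is a rough userspace sketch of the shared->private direction under the
proposed scheme: the VMM first drops its own mapping with
madvise(MADV_DONTNEED) so the conversion does not fail on elevated
mapcounts, and only then flips the KVM memory attributes. The helper name
and variables are made up for illustration and error handling is minimal:

	#include <errno.h>
	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <linux/kvm.h>

	/* Hypothetical VMM helper, not part of this series or of KVM's uAPI. */
	static int convert_to_private(int vm_fd, void *host_addr, uint64_t gpa,
				      uint64_t size)
	{
		struct kvm_memory_attributes attrs = {
			.address = gpa,
			.size = size,
			.attributes = KVM_MEMORY_ATTRIBUTE_PRIVATE,
		};

		/* Unmap the shared view first so mapcounts drop back to zero. */
		if (madvise(host_addr, size, MADV_DONTNEED))
			return -errno;

		/* Then ask KVM to treat the guest range as private. */
		if (ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs))
			return -errno;

		return 0;
	}

The private->shared direction would be the reverse: clear
KVM_MEMORY_ATTRIBUTE_PRIVATE on the range with the same ioctl and let the
VMM fault the pages back in through its guest_memfd mapping.
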
>
> [1] https://lore.kernel.org/all/20240829-guest-memfd-lib-v2-0-b9afc1ff3656@quicinc.com/T/
>
> Ackerley Tng (37):
>   mm: hugetlb: Simplify logic in dequeue_hugetlb_folio_vma()
>   mm: hugetlb: Refactor vma_has_reserves() to should_use_hstate_resv()
>   mm: hugetlb: Remove unnecessary check for avoid_reserve
>   mm: mempolicy: Refactor out policy_node_nodemask()
>   mm: hugetlb: Refactor alloc_buddy_hugetlb_folio_with_mpol() to
>     interpret mempolicy instead of vma
>   mm: hugetlb: Refactor dequeue_hugetlb_folio_vma() to use mpol
>   mm: hugetlb: Refactor out hugetlb_alloc_folio
>   mm: truncate: Expose preparation steps for truncate_inode_pages_final
>   mm: hugetlb: Expose hugetlb_subpool_{get,put}_pages()
>   mm: hugetlb: Add option to create new subpool without using surplus
>   mm: hugetlb: Expose hugetlb_acct_memory()
>   mm: hugetlb: Move and expose hugetlb_zero_partial_page()
>   KVM: guest_memfd: Make guest mem use guest mem inodes instead of
>     anonymous inodes
>   KVM: guest_memfd: hugetlb: initialization and cleanup
>   KVM: guest_memfd: hugetlb: allocate and truncate from hugetlb
>   KVM: guest_memfd: Add page alignment check for hugetlb guest_memfd
>   KVM: selftests: Add basic selftests for hugetlb-backed guest_memfd
>   KVM: selftests: Support various types of backing sources for private
>     memory
>   KVM: selftests: Update test for various private memory backing source
>     types
>   KVM: selftests: Add private_mem_conversions_test.sh
>   KVM: selftests: Test that guest_memfd usage is reported via hugetlb
>   mm: hugetlb: Expose vmemmap optimization functions
>   mm: hugetlb: Expose HugeTLB functions for promoting/demoting pages
>   mm: hugetlb: Add functions to add/move/remove from hugetlb lists
>   KVM: guest_memfd: Track faultability within a struct kvm_gmem_private
>   KVM: guest_memfd: Allow mmapping guest_memfd files
>   KVM: guest_memfd: Use vm_type to determine default faultability
>   KVM: Handle conversions in the SET_MEMORY_ATTRIBUTES ioctl
>   KVM: guest_memfd: Handle folio preparation for guest_memfd mmap
>   KVM: selftests: Allow vm_set_memory_attributes to be used without
>     asserting return value of 0
>   KVM: selftests: Test using guest_memfd memory from userspace
>   KVM: selftests: Test guest_memfd memory sharing between guest and host
>   KVM: selftests: Add notes in private_mem_kvm_exits_test for mmap-able
>     guest_memfd
>   KVM: selftests: Test that pinned pages block KVM from setting memory
>     attributes to PRIVATE
>   KVM: selftests: Refactor vm_mem_add to be more flexible
>   KVM: selftests: Add helper to perform madvise by memslots
>   KVM: selftests: Update private_mem_conversions_test for mmap()able
>     guest_memfd
>
> Vishal Annapurve (2):
>   KVM: guest_memfd: Split HugeTLB pages for guest_memfd use
>   KVM: guest_memfd: Dynamically split/reconstruct HugeTLB page
>
> fs/hugetlbfs/inode.c | 35 +-
> include/linux/hugetlb.h | 54 +-
> include/linux/kvm_host.h | 1 +
> include/linux/mempolicy.h | 2 +
> include/linux/mm.h | 1 +
> include/uapi/linux/kvm.h | 26 +
> include/uapi/linux/magic.h | 1 +
> mm/hugetlb.c | 346 ++--
> mm/hugetlb_vmemmap.h | 11 -
> mm/mempolicy.c | 36 +-
> mm/truncate.c | 26 +-
> tools/include/linux/kernel.h | 4 +-
> tools/testing/selftests/kvm/Makefile | 3 +
> .../kvm/guest_memfd_hugetlb_reporting_test.c | 222 +++
> .../selftests/kvm/guest_memfd_pin_test.c | 104 ++
> .../selftests/kvm/guest_memfd_sharing_test.c | 160 ++
> .../testing/selftests/kvm/guest_memfd_test.c | 238 ++-
> .../testing/selftests/kvm/include/kvm_util.h | 45 +-
> .../testing/selftests/kvm/include/test_util.h | 18 +
> tools/testing/selftests/kvm/lib/kvm_util.c | 443 +++--
> tools/testing/selftests/kvm/lib/test_util.c | 99 ++
> .../kvm/x86_64/private_mem_conversions_test.c | 158 +-
> .../x86_64/private_mem_conversions_test.sh | 91 +
> .../kvm/x86_64/private_mem_kvm_exits_test.c | 11 +-
> virt/kvm/guest_memfd.c | 1563 ++++++++++++++++-
> virt/kvm/kvm_main.c | 17 +
> virt/kvm/kvm_mm.h | 16 +
> 27 files changed, 3288 insertions(+), 443 deletions(-)
> create mode 100644 tools/testing/selftests/kvm/guest_memfd_hugetlb_reporting_test.c
> create mode 100644 tools/testing/selftests/kvm/guest_memfd_pin_test.c
> create mode 100644 tools/testing/selftests/kvm/guest_memfd_sharing_test.c
> create mode 100755 tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.sh
>
> --
> 2.46.0.598.g6f2099f65c-goog

-- 
Michal Hocko
SUSE Labs