Message-ID: <53d1e7c5-3e77-467b-be33-a618c3bb6cb3@redhat.com>
Date: Thu, 20 Jun 2024 22:47:58 +0200
Subject: Re: [PATCH RFC 0/5] mm/gup: Introduce exclusive GUP pinning
From: David Hildenbrand <david@redhat.com>
To: Sean Christopherson
Cc: Jason Gunthorpe, Fuad Tabba, Christoph Hellwig, John Hubbard, Elliot Berman, Andrew Morton, Shuah Khan, Matthew Wilcox, maz@kernel.org, kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, pbonzini@redhat.com
References: <20240619115135.GE2494510@nvidia.com> <20240620135540.GG2494510@nvidia.com> <6d7b180a-9f80-43a4-a4cc-fd79a45d7571@redhat.com> <20240620142956.GI2494510@nvidia.com> <385a5692-ffc8-455e-b371-0449b828b637@redhat.com> <20240620163626.GK2494510@nvidia.com> <66a285fc-e54e-4247-8801-e7e17ad795a6@redhat.com>
Organization: Red Hat
Content-Type: text/plain; charset=UTF-8; format=flowed

On 20.06.24 22:30, Sean Christopherson wrote:
> On Thu, Jun 20, 2024, David Hildenbrand wrote:
>> On 20.06.24 18:36, Jason Gunthorpe wrote:
>>> On Thu, Jun 20, 2024 at 04:45:08PM +0200, David Hildenbrand wrote:
>>>
>>>> If we could disallow pinning any shared pages, that would make life a lot
>>>> easier, but I think there were reasons for why we might require it. To
>>>> convert shared->private, simply unmap that folio (only the shared parts
>>>> could possibly be mapped) from all user page tables.
>>>
>>> IMHO it should be reasonable to make it work like ZONE_MOVABLE and
>>> FOLL_LONGTERM. Making a shared page private is really no different
>>> from moving it.
>>>
>>> And if you have built a VMM that uses VMA mapped shared pages and
>>> short-term pinning then you should really also ensure that the VM is
>>> aware when the pins go away. For instance if you are doing some virtio
>>> thing with O_DIRECT pinning then the guest will know the pins are gone
>>> when it observes virtio completions.
>>>
>>> In this way making private is just like moving, we unmap the page and
>>> then drive the refcount to zero, then move it.
>>
>> Yes, but here is the catch: what if a single shared subpage of a large folio
>> is (validly) longterm pinned and you want to convert another shared subpage
>> to private?
>>
>> Sure, we can unmap the whole large folio (including all shared parts) before
>> the conversion, just like we would do for migration. But we cannot detect
>> that nobody pinned that subpage that we want to convert to private.
>>
>> Core-mm is not, and will not, track pins per subpage.
>>
>> So I only see two options:
>>
>> a) Disallow long-term pinning. That means, we can, with a bit of wait,
>>    always convert subpages shared->private after unmapping them and
>>    waiting for the short-term pins to go away. Not too bad, and we
>>    already have other mechanisms that disallow long-term pinnings
>>    (especially writable fs ones!).
>
> I don't think disallowing _just_ long-term GUP will suffice; if we go the
> "disallow GUP" route then I think it needs to disallow GUP, period. Like the
> whole "GUP writes to file-backed memory" issue[*], which I think you're
> alluding to, short-term GUP is also problematic. But unlike file-backed
> memory, for TDX and SNP (and I think pKVM), a single rogue access has a high
> probability of being fatal to the entire system.

Disallowing short-term pinning should work, in theory, because the
writes-to-file-backed issue is a different one (the pin itself is not the
problem there, the dirtying is). It's more closely related to us not allowing
long-term pins of FSDAX pages, because the lifetime of these pages is
determined by the FS.

What we would do is:

1) Unmap the large folio completely and make any refaults block.
   -> No new pins can pop up.

2) If the folio is pinned, busy-wait until all the short-term pins are gone.

3) Safely convert the relevant subpage from shared -> private.

Not saying it's the best approach, but it should be doable.

> I.e. except for blatant bugs, e.g. use-after-free, we need to be able to
> guarantee with 100% accuracy that there are no outstanding mappings when
> converting a page from shared=>private. Crossing our fingers and hoping that
> short-term GUP will have gone away isn't enough.

We do have the mapcount and the refcount, which will be completely reliable
for our cases:

folio_mapcount() == 0  -> not mapped
folio_ref_count() == 1 -> we hold the single folio reference

(-> no mapping, no GUP, no unexpected references)

(folio_maybe_dma_pinned() could be used as well, but things like vmsplice()
and some O_DIRECT might still take references; folio_ref_count() is more
reliable in that regard.)
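As a completely untested sketch of how steps 1)-3) could combine with that
check -- kvm_gmem_unmap_folio() and kvm_gmem_set_private() are made-up
placeholder names for whatever guest_memfd ends up providing, and locking as
well as the refault-blocking machinery are glossed over -- something like:

#include <linux/mm.h>       /* folio_mapcount(), folio_ref_count() */
#include <linux/sched.h>    /* cond_resched() */

/*
 * Untested sketch: convert a (sub)page shared -> private only once it is
 * guaranteed to be unmapped, unpinned and otherwise unreferenced.
 */
static int gmem_convert_to_private(struct folio *folio)
{
        /* 1) Unmap from all user page tables; refaults must block. */
        kvm_gmem_unmap_folio(folio);            /* placeholder */

        /*
         * 2) Busy-wait until all short-term pins/references are gone.
         *    Assumes we hold exactly one reference ourselves and that
         *    long-term pins are disallowed, so this terminates.
         */
        while (folio_mapcount(folio) || folio_ref_count(folio) > 1)
                cond_resched();

        /* 3) Nobody else can reference the folio anymore; convert it. */
        return kvm_gmem_set_private(folio);     /* placeholder */
}

folio_maybe_dma_pinned() could replace the refcount check to catch GUP pins
specifically, but as mentioned, plain references from vmsplice() and some
O_DIRECT users would then be missed.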
> [*] https://lore.kernel.org/all/cover.1683235180.git.lstoakes@gmail.com
>
>> b) Expose the large folio as multiple 4k folios to the core-mm.
>>
>> b) would look as follows: we allocate a gigantic page from the (hugetlb)
>> reserve into guest_memfd. Then, we break it down into individual 4k folios
>> by splitting/demoting the folio. We make sure that all 4k folios are
>> unmovable (raised refcount). We keep tracking internally that these 4k
>> folios comprise a single large gigantic page.
>>
>> Core-mm can now track GUP pins and page table mappings of these (previously
>> subpages, now) small folios for us without any modifications.
>>
>> Once we unmap the gigantic page from guest_memfd, we reconstruct the
>> gigantic page and hand it back to the reserve (only possible once all pins
>> are gone).
>>
>> We can still map the whole thing into the KVM guest+iommu using a single
>> large unit, because guest_memfd knows the origin/relationship of these
>> pages. But we would only map individual pages into user page tables (unless
>> we use large VM_PFNMAP mappings, but then pinning would also not work, so
>> that's likely not what we want either).
>
> Not being able to map guest_memfd into userspace with 1GiB mappings should be
> ok, at least for CoCo VMs. If the guest shares an entire 1GiB chunk, e.g. for
> DMA or whatever, then userspace can simply punch a hole in guest_memfd and
> allocate 1GiB of memory from regular memory. Even losing 2MiB mappings should
> be ok.
>
> For non-CoCo VMs, I expect we'll want to be much more permissive, but I think
> they'll be a complete non-issue because there is no shared vs. private to
> worry about. We can simply allow any and all userspace mappings for
> guest_memfd that is attached to a "regular" VM, because a misbehaving
> userspace only loses whatever hardening (or other benefits) was being
> provided by using guest_memfd. I.e. the kernel and system at-large isn't at
> risk.
>
>> The downside is that we won't benefit from vmemmap optimizations for large
>> folios from hugetlb, and have more tracking overhead when mapping individual
>> pages into user page tables.
>
> Hmm, I suspect losing the vmemmap optimizations would be acceptable,
> especially if we could defer the shattering until the guest actually tried to
> partially convert a 1GiB/2MiB region, and restore the optimizations when the
> memory is converted back.

We can only shatter/collapse if there are no unexpected folio references, so
GUP would have to be handled as well ... that is certainly problematic.

--
Cheers,

David / dhildenb