From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [RFC] KVM: mm: fd-based approach for supporting KVM guest private memory
To: Sean Christopherson
Cc: Andy Lutomirski, Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li,
 Jim Mattson, Joerg Roedel, kvm list, Linux Kernel Mailing List,
 Borislav Petkov, Andrew Morton, Andi Kleen, David Rientjes,
 Vlastimil Babka, Tom Lendacky, Thomas Gleixner,
 "Peter Zijlstra (Intel)", Ingo Molnar, Varad Gautam, Dario Faggioli,
 the arch/x86 maintainers, linux-mm@kvack.org,
 linux-coco@lists.linux.dev, "Kirill A. Shutemov",
 Sathyanarayanan Kuppuswamy, Dave Hansen, Yu Zhang
References: <20210824005248.200037-1-seanjc@google.com>
 <307d385a-a263-276f-28eb-4bc8dd287e32@redhat.com>
 <40af9d25-c854-8846-fdab-13fe70b3b279@kernel.org>
 <73319f3c-6f5e-4f39-a678-7be5fddd55f2@www.fastmail.com>
 <949e6d95-266d-0234-3b86-6bd3c5267333@redhat.com>
From: David Hildenbrand
Organization: Red Hat
Date: Wed, 1 Sep 2021 09:51:17 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US

On 31.08.21 22:45, Sean Christopherson wrote:
> On Tue, Aug 31, 2021, David Hildenbrand wrote:
>> On 28.08.21 00:28, Sean Christopherson wrote:
>>> On Fri, Aug 27, 2021, Andy Lutomirski wrote:
>>>>
>>>> On Thu, Aug 26, 2021, at 2:26 PM, David Hildenbrand wrote:
>>>>> On 26.08.21 19:05, Andy Lutomirski wrote:
>>>>
>>>>>> Oof. That's quite a requirement. What's the point of the VMA once all
>>>>>> this is done?
>>>>>
>>>>> You can keep using things like mbind(), madvise(), ... and the GUP code
>>>>> with a special flag might mostly just do what you want. You won't have
>>>>> to reinvent too many wheels on the page fault logic side at least.
>>>
>>> Ya, Kirill's RFC more or less proved a special GUP flag would indeed Just
>>> Work. However, the KVM page fault side of things would require only a
>>> handful of small changes to send private memslots down a different path.
>>> Compared to the rest of the enabling, it's quite minor.
>>>
>>> The counter to that is other KVM architectures would need to learn how to
>>> use the new APIs, though I suspect that there will be a fair bit of arch
>>> enabling regardless of what route we take.
>>>
>>>> You can keep calling the functions. The implementations working is a
>>>> different story: you can't just unmap (pte_numa-style or otherwise) a
>>>> private guest page to quiesce it, move it with memcpy(), and then fault
>>>> it back in.
>>>
>>> Ya, I brought this up in my earlier reply. Even the initial implementation
>>> (without real NUMA support) would likely be painful, e.g. the KVM TDX
>>> RFC/PoC adds dedicated logic in KVM to handle the case where NUMA
>>> balancing zaps a _pinned_ page and then KVM faults in the same pfn. It's
>>> not thaaat ugly, but it's arguably more invasive to KVM's page fault flows
>>> than a new fd-based private memslot scheme.
>>
>> I might have a different mindset, but less code churn doesn't necessarily
>> translate to "better approach".
>
> I wasn't referring to code churn. By "invasive" I mean number of touchpoints
> in KVM as well as the nature of the touchpoints. E.g. poking into how KVM
> uses available bits in its shadow PTEs and adding multiple checks through
> KVM's page fault handler, versus two callbacks to get the PFN and page size.
>
>> I'm certainly not pushing for what I proposed (it's a rough, broken
>> sketch). I'm much rather trying to come up with alternatives that try
>> solving the same issue, handling the identified requirements.
>>
>> I have a gut feeling that the list of requirements might not be complete
>> yet. For example, I wonder if we have to protect against user space
>> replacing private pages by shared pages or punching random holes into the
>> encrypted memory fd.
>
> Replacing a private page with a shared page for a given GFN is very much a
> requirement as it's expected behavior for all VMM+guests when converting
> guest memory between shared and private.
>
> Punching holes is a sort of optional requirement. It's a "requirement" in
> that it's allowed if the backing store supports such a behavior, optional in
> that support wouldn't be strictly necessary and/or could come with
> constraints. The expected use case is that host userspace would punch a hole
> to free unreachable private memory, e.g. after the corresponding GFN(s) is
> converted to shared, so that it doesn't consume 2x memory for the guest.
>

Okay, that matches my understanding then. I was rather thinking about "what
happens if we punch a hole where private memory was not converted to shared
yet". AFAIU, we will simply crash the guest then.

-- 
Thanks,

David / dhildenb