Subject: Re: [RFC PATCH v12 18/33] KVM: x86/mmu: Handle page fault for private memory
From: Sean Christopherson
To: Yan Zhao
Cc: kvm@vger.kernel.org, David Hildenbrand, Yu Zhang, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Chao Peng, linux-riscv@lists.infradead.org, Isaku Yamahata, Paul Moore, Marc Zyngier, Huacai Chen, James Morris, Matthew Wilcox (Oracle), Wang, Fuad Tabba, Jarkko Sakkinen, Serge E. Hallyn, Maciej Szmigiero, Albert Ou, Vlastimil Babka, Michael Roth, Ackerley Tng, Paul Walmsley, kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org, Quentin Perret, Liam Merwick, linux-mips@vger.kernel.org, Oliver Upton, linux-security-module@vger.kernel.org, Palmer Dabbelt, Kirill A. Shutemov, kvm-riscv@lists.infradead.org, Anup Patel, linux-fsdevel@vger.kernel.org, Paolo Bonzini, Andrew Morton, Vishal Annapurve, linuxppc-dev@lists.ozlabs.org, Xu Yilun, Anish Moorthy
Date: Fri, 15 Sep 2023 07:26:16 -0700
References: <20230914015531.1419405-1-seanjc@google.com> <20230914015531.1419405-19-seanjc@google.com>

On Fri, Sep 15, 2023, Yan Zhao wrote:
> On Wed, Sep 13, 2023 at 06:55:16PM -0700, Sean Christopherson wrote:
> ...
> > +static void kvm_mmu_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
> > +					      struct kvm_page_fault *fault)
> > +{
> > +	kvm_prepare_memory_fault_exit(vcpu, fault->gfn << PAGE_SHIFT,
> > +				      PAGE_SIZE, fault->write, fault->exec,
> > +				      fault->is_private);
> > +}
> > +
> > +static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
> > +				   struct kvm_page_fault *fault)
> > +{
> > +	int max_order, r;
> > +
> > +	if (!kvm_slot_can_be_private(fault->slot)) {
> > +		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
> > +		return -EFAULT;
> > +	}
> > +
> > +	r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn, &fault->pfn,
> > +			     &max_order);
> > +	if (r) {
> > +		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
> > +		return r;
> > +	}
> > +
> > +	fault->max_level = min(kvm_max_level_for_order(max_order),
> > +			       fault->max_level);
> > +	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
> > +
> > +	return RET_PF_CONTINUE;
> > +}
> > +
> >  static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >  {
> >  	struct kvm_memory_slot *slot = fault->slot;
> > @@ -4293,6 +4356,14 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
> >  		return RET_PF_EMULATE;
> >  	}
> >  
> > +	if (fault->is_private != kvm_mem_is_private(vcpu->kvm, fault->gfn)) {
> In patch 21, fault->is_private is set as
>   .is_private = kvm_mem_is_private(vcpu->kvm, cr2_or_gpa >> PAGE_SHIFT),
> so the inequality here means the memory attributes were updated after the
> last check.
> So, why is an exit to user space for conversion required instead of a mere
> retry?
>
> Or, is it because the way .is_private is assigned in patch 21 is subject
> to change in the future?

This.  Retrying on SNP or TDX would hang the guest.
I suppose we could special case VMs where .is_private is derived from the
memory attributes, but the SW_PROTECTED_VM type is primarily a development
vehicle at this point.  I'd like to have it mimic SNP/TDX as much as possible;
performance is a secondary concern.  E.g. userspace needs to be prepared for
"spurious" exits due to races on SNP and TDX, which this can theoretically
exercise.  Though the window is quite small, so I doubt that'll actually
happen in practice, which of course also makes it less important to retry
instead of exiting.
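
(For illustration only: a hypothetical sketch of what that special-casing
could look like in __kvm_faultin_pfn().  This is *not* something the series
does; it assumes the kvm->arch.vm_type / KVM_X86_SW_PROTECTED_VM plumbing
introduced elsewhere in this series.)

	/*
	 * Hypothetical, not part of this series: retry instead of exiting
	 * for VM types whose fault->is_private is derived purely from the
	 * memory attributes, where a mismatch only means the attributes
	 * changed after they were snapshotted.  For SNP and TDX,
	 * fault->is_private comes from the hardware fault itself and will
	 * never change on retry, so retrying would hang the vCPU.
	 */
	if (fault->is_private != kvm_mem_is_private(vcpu->kvm, fault->gfn)) {
		if (vcpu->kvm->arch.vm_type == KVM_X86_SW_PROTECTED_VM)
			return RET_PF_RETRY;

		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
		return -EFAULT;
	}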
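
(Likewise for illustration: a rough sketch of how userspace might handle
such an exit, spurious or not, using the uAPI proposed in this series, i.e.
KVM_EXIT_MEMORY_FAULT plus KVM_SET_MEMORY_ATTRIBUTES.  The struct/flag names
and the -EFAULT return from KVM_RUN are as proposed here and could change.)

	#include <err.h>
	#include <errno.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/*
	 * Convert the faulting range to the state KVM reported, then let
	 * the caller re-enter the vCPU.  A "spurious" exit, where the
	 * attributes already match by the time userspace sees it, takes
	 * the same path: KVM_SET_MEMORY_ATTRIBUTES is a nop and KVM_RUN
	 * retries the access.
	 */
	static void handle_memory_fault(int vm_fd, struct kvm_run *run)
	{
		struct kvm_memory_attributes attr = {
			.address    = run->memory_fault.gpa,
			.size       = run->memory_fault.size,
			.attributes = (run->memory_fault.flags &
				       KVM_MEMORY_EXIT_FLAG_PRIVATE) ?
				      KVM_MEMORY_ATTRIBUTE_PRIVATE : 0,
		};

		if (ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attr))
			err(1, "KVM_SET_MEMORY_ATTRIBUTES");
	}

	static void vcpu_run_loop(int vm_fd, int vcpu_fd, struct kvm_run *run)
	{
		for (;;) {
			int r = ioctl(vcpu_fd, KVM_RUN, 0);

			/*
			 * KVM_EXIT_MEMORY_FAULT is delivered together with
			 * an -EFAULT return; run->memory_fault is valid in
			 * that case.
			 */
			if (r < 0 && errno == EFAULT &&
			    run->exit_reason == KVM_EXIT_MEMORY_FAULT) {
				handle_memory_fault(vm_fd, run);
				continue;
			}
			if (r < 0)
				err(1, "KVM_RUN");

			/* ... handle other exit reasons ... */
		}
	}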