Subject: Re: [RFC PATCH v12 14/33] KVM: Add KVM_CREATE_GUEST_MEMFD ioctl() for guest-specific backing memory
From: Binbin Wu <binbin.wu@linux.intel.com>
To: Sean Christopherson
Cc: kvm@vger.kernel.org, David Hildenbrand, Yu Zhang, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Chao Peng, linux-riscv@lists.infradead.org,
	Isaku Yamahata, Paul Moore, Marc Zyngier, Huacai Chen, James Morris,
	"Matthew Wilcox (Oracle)", Wang, Fuad Tabba, Jarkko Sakkinen,
	"Serge E. Hallyn", Maciej Szmigiero, Albert Ou, Vlastimil Babka,
	Michael Roth, Ackerley Tng, Paul Walmsley, kvmarm@lists.linux.dev,
	linux-arm-kernel@lists.infradead.org, Isaku Yamahata, Quentin Perret,
	Liam Merwick, linux-mips@vger.kernel.org, Oliver Upton,
	linux-security-module@vger.kernel.org, Palmer Dabbelt,
	"Kirill A. Shutemov", kvm-riscv@lists.infradead.org, Anup Patel,
	linux-fsdevel@vger.kernel.org, Paolo Bonzini, Andrew Morton,
	Vishal Annapurve, linuxppc-dev@lists.ozlabs.org, Xu Yilun, Anish Moorthy
Date: Thu, 21 Sep 2023 13:58:00 +0800
References: <20230914015531.1419405-1-seanjc@google.com> <20230914015531.1419405-15-seanjc@google.com>
List-Id: Linux on PowerPC Developers Mail List

On 9/20/2023 10:24 PM, Sean Christopherson wrote:
> On Tue, Sep 19, 2023, Binbin Wu wrote:
>>
>> On 9/14/2023 9:55 AM, Sean Christopherson wrote:
>> [...]
>>> +
>>> +static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start,
>>> +				      pgoff_t end)
>>> +{
>>> +	struct kvm_memory_slot *slot;
>>> +	struct kvm *kvm = gmem->kvm;
>>> +	unsigned long index;
>>> +	bool flush = false;
>>> +
>>> +	KVM_MMU_LOCK(kvm);
>>> +
>>> +	kvm_mmu_invalidate_begin(kvm);
>>> +
>>> +	xa_for_each_range(&gmem->bindings, index, slot, start, end - 1) {
>>> +		pgoff_t pgoff = slot->gmem.pgoff;
>>> +
>>> +		struct kvm_gfn_range gfn_range = {
>>> +			.start = slot->base_gfn + max(pgoff, start) - pgoff,
>>> +			.end = slot->base_gfn + min(pgoff + slot->npages, end) - pgoff,
>>> +			.slot = slot,
>>> +			.may_block = true,
>>> +		};
>>> +
>>> +		flush |= kvm_mmu_unmap_gfn_range(kvm, &gfn_range);
>>> +	}
>>> +
>>> +	if (flush)
>>> +		kvm_flush_remote_tlbs(kvm);
>>> +
>>> +	KVM_MMU_UNLOCK(kvm);
>>> +}
>>> +
>>> +static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start,
>>> +				    pgoff_t end)
>>> +{
>>> +	struct kvm *kvm = gmem->kvm;
>>> +
>>> +	KVM_MMU_LOCK(kvm);
>>> +	if (xa_find(&gmem->bindings, &start, end - 1, XA_PRESENT))
>>> +		kvm_mmu_invalidate_end(kvm);
>> kvm_mmu_invalidate_begin() is called unconditionally in kvm_gmem_invalidate_begin(), but
>> kvm_mmu_invalidate_end() is not here.
>> This makes the kvm_gmem_invalidate_{begin, end}() calls asymmetric.
> Another ouch :-(
>
> And there should be no need to acquire mmu_lock() unconditionally, the inode's
> mutex protects the bindings, not mmu_lock.
>
> I'll get a fix posted today.  I think KVM can also add a sanity check to detect
> unresolved invalidations, e.g.
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 7ba1ab1832a9..2a2d18070856 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1381,8 +1381,13 @@ static void kvm_destroy_vm(struct kvm *kvm)
>  	 * No threads can be waiting in kvm_swap_active_memslots() as the
>  	 * last reference on KVM has been dropped, but freeing
>  	 * memslots would deadlock without this manual intervention.
> +	 *
> +	 * If the count isn't unbalanced, i.e. KVM did NOT unregister between
> +	 * a start() and end(), then there shouldn't be any in-progress
> +	 * invalidations.
>  	 */
>  	WARN_ON(rcuwait_active(&kvm->mn_memslots_update_rcuwait));
> +	WARN_ON(!kvm->mn_active_invalidate_count && kvm->mmu_invalidate_in_progress);
>  	kvm->mn_active_invalidate_count = 0;
>  #else
>  	kvm_flush_shadow_all(kvm);
>
> or an alternative style
>
> 	if (kvm->mn_active_invalidate_count)
> 		kvm->mn_active_invalidate_count = 0;
> 	else
> 		WARN_ON(kvm->mmu_invalidate_in_progress);
>
>>> +	KVM_MMU_UNLOCK(kvm);
>>> +}
>>> +
>>> +static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
>>> +{
>>> +	struct list_head *gmem_list = &inode->i_mapping->private_list;
>>> +	pgoff_t start = offset >> PAGE_SHIFT;
>>> +	pgoff_t end = (offset + len) >> PAGE_SHIFT;
>>> +	struct kvm_gmem *gmem;
>>> +
>>> +	/*
>>> +	 * Bindings must be stable across invalidation to ensure the start+end
>>> +	 * are balanced.
>>> +	 */
>>> +	filemap_invalidate_lock(inode->i_mapping);
>>> +
>>> +	list_for_each_entry(gmem, gmem_list, entry) {
>>> +		kvm_gmem_invalidate_begin(gmem, start, end);
>>> +		kvm_gmem_invalidate_end(gmem, start, end);
>>> +	}
>> Why loop over each gmem in gmem_list here?
>>
>> IIUC, offset is an offset into the inode, so it is only meaningful to the
>> inode passed in, i.e. it is only meaningful to the gmem bound to that
>> inode, not to the others.
> The code is structured to allow for multiple gmem instances per inode.  This isn't
> actually possible in the initial code base, but it's on the horizon[*].  I included
> the list-based infrastructure in this initial series to ensure that guest_memfd
> can actually support multiple files per inode, and to minimize the churn when the
> "link" support comes along.
>
> [*] https://lore.kernel.org/all/cover.1691446946.git.ackerleytng@google.com

Got it, thanks for the explanation!