From: Binbin Wu <binbin.wu@linux.intel.com>
Date: Thu, 21 Sep 2023 13:58:00 +0800
Subject: Re: [RFC PATCH v12 14/33] KVM: Add KVM_CREATE_GUEST_MEMFD ioctl() for guest-specific backing memory
To: Sean Christopherson <seanjc@google.com>
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-security-module@vger.kernel.org, linux-kernel@vger.kernel.org, Paolo Bonzini, Marc Zyngier, Oliver Upton, Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, "Matthew Wilcox (Oracle)", Andrew Morton, Paul Moore, James Morris, "Serge E. Hallyn", Chao Peng, Fuad Tabba, Jarkko Sakkinen, Anish Moorthy, Yu Zhang, Isaku Yamahata, Xu Yilun, Vlastimil Babka, Vishal Annapurve, Ackerley Tng, Maciej Szmigiero, David Hildenbrand, Quentin Perret, Michael Roth, Wang, Liam Merwick, Isaku Yamahata, "Kirill A. Shutemov"
References: <20230914015531.1419405-1-seanjc@google.com> <20230914015531.1419405-15-seanjc@google.com>

On 9/20/2023 10:24 PM, Sean Christopherson wrote:
> On Tue, Sep 19, 2023, Binbin Wu wrote:
>>
>> On 9/14/2023 9:55 AM, Sean
Christopherson wrote:
>> [...]
>>> +
>>> +static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start,
>>> +				      pgoff_t end)
>>> +{
>>> +	struct kvm_memory_slot *slot;
>>> +	struct kvm *kvm = gmem->kvm;
>>> +	unsigned long index;
>>> +	bool flush = false;
>>> +
>>> +	KVM_MMU_LOCK(kvm);
>>> +
>>> +	kvm_mmu_invalidate_begin(kvm);
>>> +
>>> +	xa_for_each_range(&gmem->bindings, index, slot, start, end - 1) {
>>> +		pgoff_t pgoff = slot->gmem.pgoff;
>>> +
>>> +		struct kvm_gfn_range gfn_range = {
>>> +			.start = slot->base_gfn + max(pgoff, start) - pgoff,
>>> +			.end = slot->base_gfn + min(pgoff + slot->npages, end) - pgoff,
>>> +			.slot = slot,
>>> +			.may_block = true,
>>> +		};
>>> +
>>> +		flush |= kvm_mmu_unmap_gfn_range(kvm, &gfn_range);
>>> +	}
>>> +
>>> +	if (flush)
>>> +		kvm_flush_remote_tlbs(kvm);
>>> +
>>> +	KVM_MMU_UNLOCK(kvm);
>>> +}
>>> +
>>> +static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start,
>>> +				    pgoff_t end)
>>> +{
>>> +	struct kvm *kvm = gmem->kvm;
>>> +
>>> +	KVM_MMU_LOCK(kvm);
>>> +	if (xa_find(&gmem->bindings, &start, end - 1, XA_PRESENT))
>>> +		kvm_mmu_invalidate_end(kvm);
>> kvm_mmu_invalidate_begin() is called unconditionally in
>> kvm_gmem_invalidate_begin(), but kvm_mmu_invalidate_end() is not called
>> unconditionally here.
>> This makes the kvm_gmem_invalidate_{begin, end}() calls asymmetric.
> Another ouch :-(
>
> And there should be no need to acquire mmu_lock() unconditionally, the inode's
> mutex protects the bindings, not mmu_lock.
>
> I'll get a fix posted today.  I think KVM can also add a sanity check to detect
> unresolved invalidations, e.g.
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 7ba1ab1832a9..2a2d18070856 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1381,8 +1381,13 @@ static void kvm_destroy_vm(struct kvm *kvm)
>          * No threads can be waiting in kvm_swap_active_memslots() as the
>          * last reference on KVM has been dropped, but freeing
>          * memslots would deadlock without this manual intervention.
> +        *
> +        * If the count isn't unbalanced, i.e. KVM did NOT unregister between
> +        * a start() and end(), then there shouldn't be any in-progress
> +        * invalidations.
>          */
>         WARN_ON(rcuwait_active(&kvm->mn_memslots_update_rcuwait));
> +       WARN_ON(!kvm->mn_active_invalidate_count && kvm->mmu_invalidate_in_progress);
>         kvm->mn_active_invalidate_count = 0;
>  #else
>         kvm_flush_shadow_all(kvm);
>
> or an alternative style
>
> 	if (kvm->mn_active_invalidate_count)
> 		kvm->mn_active_invalidate_count = 0;
> 	else
> 		WARN_ON(kvm->mmu_invalidate_in_progress)
>
>>> +	KVM_MMU_UNLOCK(kvm);
>>> +}
>>> +
>>> +static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
>>> +{
>>> +	struct list_head *gmem_list = &inode->i_mapping->private_list;
>>> +	pgoff_t start = offset >> PAGE_SHIFT;
>>> +	pgoff_t end = (offset + len) >> PAGE_SHIFT;
>>> +	struct kvm_gmem *gmem;
>>> +
>>> +	/*
>>> +	 * Bindings must be stable across invalidation to ensure the start+end
>>> +	 * are balanced.
>>> +	 */
>>> +	filemap_invalidate_lock(inode->i_mapping);
>>> +
>>> +	list_for_each_entry(gmem, gmem_list, entry) {
>>> +		kvm_gmem_invalidate_begin(gmem, start, end);
>>> +		kvm_gmem_invalidate_end(gmem, start, end);
>>> +	}
>> Why loop over each gmem in gmem_list here?
>>
>> IIUIC, offset is the offset into the inode, so it is only meaningful to
>> the inode passed in, i.e. only meaningful to the gmem bound to that
>> inode, not to others.
> The code is structured to allow for multiple gmem instances per inode.
> This isn't actually possible in the initial code base, but it's on the
> horizon[*].  I included the list-based infrastructure in this initial
> series to ensure that guest_memfd can actually support multiple files
> per inode, and to minimize the churn when the "link" support comes along.
>
> [*] https://lore.kernel.org/all/cover.1691446946.git.ackerleytng@google.com

Got it, thanks for the explanation!