Date: Tue, 19 Sep 2023 17:01:31 +0800
Subject: Re: [RFC PATCH v12 14/33] KVM: Add KVM_CREATE_GUEST_MEMFD ioctl() for guest-specific backing memory
To: Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-security-module@vger.kernel.org, linux-kernel@vger.kernel.org,
    Paolo Bonzini, Marc Zyngier,
    Oliver Upton, Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, "Matthew Wilcox (Oracle)", Andrew Morton,
    Paul Moore, James Morris, "Serge E. Hallyn", Chao Peng, Fuad Tabba,
    Jarkko Sakkinen, Anish Moorthy, Yu Zhang, Isaku Yamahata, Xu Yilun,
    Vlastimil Babka, Vishal Annapurve, Ackerley Tng, Maciej Szmigiero,
    David Hildenbrand, Quentin Perret, Michael Roth, Wang, Liam Merwick,
    Isaku Yamahata, "Kirill A. Shutemov"
References: <20230914015531.1419405-1-seanjc@google.com> <20230914015531.1419405-15-seanjc@google.com>
From: Binbin Wu
In-Reply-To: <20230914015531.1419405-15-seanjc@google.com>

On 9/14/2023 9:55 AM, Sean Christopherson wrote:

[...]

> +
> +static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start,
> +				      pgoff_t end)
> +{
> +	struct kvm_memory_slot *slot;
> +	struct kvm *kvm = gmem->kvm;
> +	unsigned long index;
> +	bool flush = false;
> +
> +	KVM_MMU_LOCK(kvm);
> +
> +	kvm_mmu_invalidate_begin(kvm);
> +
> +	xa_for_each_range(&gmem->bindings, index, slot, start, end - 1) {
> +		pgoff_t pgoff = slot->gmem.pgoff;
> +
> +		struct kvm_gfn_range gfn_range = {
> +			.start = slot->base_gfn + max(pgoff, start) - pgoff,
> +			.end = slot->base_gfn + min(pgoff + slot->npages, end) - pgoff,
> +			.slot = slot,
> +			.may_block = true,
> +		};
> +
> +		flush |= kvm_mmu_unmap_gfn_range(kvm, &gfn_range);
> +	}
> +
> +	if (flush)
> +		kvm_flush_remote_tlbs(kvm);
> +
> +	KVM_MMU_UNLOCK(kvm);
> +}
> +
> +static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start,
> +				    pgoff_t end)
> +{
> +	struct kvm *kvm = gmem->kvm;
> +
> +	KVM_MMU_LOCK(kvm);
> +	if (xa_find(&gmem->bindings, &start, end - 1, XA_PRESENT))
> +		kvm_mmu_invalidate_end(kvm);

kvm_mmu_invalidate_begin() is called unconditionally in
kvm_gmem_invalidate_begin(), but kvm_mmu_invalidate_end() is only called
here when a binding is found in the range, which makes the
kvm_gmem_invalidate_{begin,end}() calls asymmetric (a rough sketch of what
I mean is at the end of this mail).

> +	KVM_MMU_UNLOCK(kvm);
> +}
> +
> +static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
> +{
> +	struct list_head *gmem_list = &inode->i_mapping->private_list;
> +	pgoff_t start = offset >> PAGE_SHIFT;
> +	pgoff_t end = (offset + len) >> PAGE_SHIFT;
> +	struct kvm_gmem *gmem;
> +
> +	/*
> +	 * Bindings must stable across invalidation to ensure the start+end
> +	 * are balanced.
> +	 */
> +	filemap_invalidate_lock(inode->i_mapping);
> +
> +	list_for_each_entry(gmem, gmem_list, entry) {
> +		kvm_gmem_invalidate_begin(gmem, start, end);
> +		kvm_gmem_invalidate_end(gmem, start, end);
> +	}

Why loop over each gmem in gmem_list here? IIUC, offset is relative to the
inode, so it is only meaningful to the inode passed in, i.e., only to the
gmem bound to this inode, not to the others (the second sketch at the end
of this mail shows what I had in mind).

> +
> +	filemap_invalidate_unlock(inode->i_mapping);
> +
> +	return 0;
> +}
> +

[...]
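To illustrate the asymmetry comment above: below is a rough, untested
sketch (mine, not part of the patch) of what a symmetric variant of
kvm_gmem_invalidate_end() could look like if kvm_mmu_invalidate_end()
were simply called unconditionally, mirroring the unconditional
kvm_mmu_invalidate_begin() call on the begin side. Whether dropping the
xa_find() check is actually safe here is exactly my question.

static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start,
				    pgoff_t end)
{
	struct kvm *kvm = gmem->kvm;

	KVM_MMU_LOCK(kvm);
	/*
	 * Pair unconditionally with kvm_gmem_invalidate_begin(), which
	 * always calls kvm_mmu_invalidate_begin() regardless of whether any
	 * binding intersects [start, end).  start/end are intentionally
	 * unused in this sketch.
	 */
	kvm_mmu_invalidate_end(kvm);
	KVM_MMU_UNLOCK(kvm);
}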
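And for the question about the gmem_list loop in kvm_gmem_punch_hole():
this is roughly what I had in mind, assuming (possibly wrongly, if one
inode can back multiple gmem instances) that the inode-to-gmem binding is
1:1. kvm_gmem_from_inode() below is a hypothetical helper used only for
illustration; it does not exist in the patch.

static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
{
	pgoff_t start = offset >> PAGE_SHIFT;
	pgoff_t end = (offset + len) >> PAGE_SHIFT;
	struct kvm_gmem *gmem;

	filemap_invalidate_lock(inode->i_mapping);

	/*
	 * Hypothetical: look up the single gmem bound to this inode instead
	 * of walking i_mapping->private_list.  Only valid if the binding
	 * really is 1:1, which is the open question.
	 */
	gmem = kvm_gmem_from_inode(inode);
	kvm_gmem_invalidate_begin(gmem, start, end);
	kvm_gmem_invalidate_end(gmem, start, end);

	filemap_invalidate_unlock(inode->i_mapping);

	return 0;
}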