Date: Thu, 5 Jan 2023 12:39:23 +0800
From: Chao Peng <chao.p.peng@linux.intel.com>
To: Sean Christopherson
Cc: "Wang, Wei W", "Qiang, Chenyi", kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org,
 Paolo Bonzini, Jonathan Corbet, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Arnd Bergmann,
 Naoya Horiguchi, Miaohe Lin, x86@kernel.org, H. Peter Anvin, Hugh Dickins,
 Jeff Layton, J. Bruce Fields, Andrew Morton, Shuah Khan, Mike Rapoport,
 Steven Price, Maciej S. Szmigiero, Vlastimil Babka, Vishal Annapurve,
 Yu Zhang, Kirill A. Shutemov, "Lutomirski, Andy", "Nakajima, Jun",
 "Hansen, Dave", ak@linux.intel.com, david@redhat.com, aarcange@redhat.com,
 ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret, tabba@google.com,
 Michael Roth, "Hocko, Michal"
Subject: Re: [PATCH v10 2/9] KVM: Introduce per-page memory attributes
Message-ID: <20230105043923.GC2251521@chaop.bj.intel.com>
References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>
 <20221202061347.1070246-3-chao.p.peng@linux.intel.com>
 <1c9bbaa5-eea3-351e-d6a0-cfbc32115c82@intel.com>
 <20230103013948.GA2178318@chaop.bj.intel.com>

On Tue, Jan 03, 2023 at 11:06:37PM +0000, Sean Christopherson wrote:
> On Tue, Jan 03, 2023, Wang, Wei W wrote:
> > On Tuesday, January 3, 2023 9:40 AM, Chao Peng wrote:
> > > > Because guest memory defaults to private, and now this patch stores
> > > > the attributes with KVM_MEMORY_ATTRIBUTE_PRIVATE instead of _SHARED,
> > > > it would bring more KVM_EXIT_MEMORY_FAULT exits at the beginning of
> > > > boot time. Maybe it can be optimized somehow in other places? e.g. set
> > > > > > KVM defaults to 'shared' because this ioctl can also be potentially used by > > > normal VMs and 'shared' sounds a value meaningful for both normal VMs and > > > confidential VMs. > > > > Do you mean a normal VM could have pages marked private? What's the usage? > > (If all the pages are just marked shared for normal VMs, then why do we need it) > > No, there are potential use cases for per-page attribute/permissions, e.g. to > make select pages read-only, exec-only, no-exec, etc... Right, normal VMs are not likely use private/shared bit. Not sure pKVM, but perhaps not call it 'normal' VMs in this context. But since the ioctl can be used by normal VMs for other bits (read-only, exec-only, no-exec, etc), a default 'private' looks strange for them. That's why I default it to 'shared' and for confidential guest, we can issue another call to this ioctl to set all the memory to 'private' before guest booting, if default 'private' is needed for guest. Like Wei mentioned, it's also possible to make the default dependents on vm_type, but that looks awkward to me from the API definition as well as the implementation, also the vm_type has not been introduced at this time. > > > > As for more KVM_EXIT_MEMORY_FAULT exits during the > > > booting time, yes, setting all memory to 'private' for confidential VMs through > > > this ioctl in userspace before guest launch is an approach for KVM userspace to > > > 'override' the KVM default and reduce the number of implicit conversions. > > > > Most pages of a confidential VM are likely to be private pages. It seems more efficient > > (and not difficult to check vm_type) to have KVM defaults to "private" for confidential VMs > > and defaults to "shared" for normal VMs. > > If done right, the default shouldn't matter all that much for efficiency. KVM > needs to be able to effeciently track large ranges regardless of the default, > otherwise the memory overhead and the presumably cost of lookups will be painful. > E.g. 
converting a 1GiB chunk to shared should ideally require one entry, not 256k > entries. I agree, KVM should have the ability to track large ranges efficiently. > > Looks like that behavior was changed in v8 in response to feedback[*] that doing > xa_store_range() on a subset of an existing range (entry) would overwrite the > entire existing range (entry), not just the smaller subset. xa_store_range() does > appear to be too simplistic for this use case, but looking at __filemap_add_folio(), > splitting an existing entry isn't super complex. Yes, xa_store_range() looks a perfect match for us initially but the 'overwriting the entire entry' behavior makes it incorrect for us when storing a subset on an existing large entry. xarray lib has utilities for splitting, the hard part is merging existing entries, as you also said below. Thanks for pointing out the __filemap_add_folio() example, it does look not too complex for splitting. > > Using xa_store() for the very initial implementation is ok, and probably a good > idea since it's more obviously correct and will give us a bisection point. But > we definitely want a more performant implementation sooner than later. The hardest > part will likely be merging existing entries, but that can be done separately too, > and is probably lower priority. > > E.g. (1) use xa_store() and always track at 4KiB granularity, (2) support storing > metadata in multi-index entries, and finally (3) support merging adjacent entries > with identical values. This path looks good to me. Thanks, Chao > > [*] https://lore.kernel.org/all/CAGtprH9xyw6bt4=RBWF6-v2CSpabOCpKq5rPz+e-9co7EisoVQ@mail.gmail.com
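
P.S. The range-tracking semantics discussed above can be modeled outside the
kernel. The sketch below is a hypothetical Python stand-in, not the kernel
xarray code (the class name `AttrMap` and the 'shared' default are my own
illustrative assumptions): it keeps one entry per attribute range, splits an
overlapping entry when a subset is stored over it (avoiding the
xa_store_range() clobbering pitfall), and merges adjacent entries with
identical values, so converting a 1GiB chunk costs one entry rather than
256k 4KiB entries.

```python
# Userspace model of range-based per-page attribute tracking.
# NOT the kernel implementation; illustrates the split/merge semantics only.
import bisect


class AttrMap:
    """Track a per-page attribute over sorted, non-overlapping
    [start, end) ranges, one entry per range of any size."""

    def __init__(self):
        self.ranges = []  # list of (start, end, attr) tuples

    def set_attr(self, start, end, attr):
        new = []
        for s, e, a in self.ranges:
            if e <= start or s >= end:
                new.append((s, e, a))          # no overlap: keep as-is
            else:
                # Split: keep only the non-overlapping remainders instead
                # of clobbering the whole existing entry.
                if s < start:
                    new.append((s, start, a))  # left remainder
                if e > end:
                    new.append((end, e, a))    # right remainder
        new.append((start, end, attr))
        new.sort()
        # Merge adjacent entries with identical values.
        merged = []
        for s, e, a in new:
            if merged and merged[-1][1] == s and merged[-1][2] == a:
                merged[-1] = (merged[-1][0], e, a)
            else:
                merged.append((s, e, a))
        self.ranges = merged

    def get_attr(self, addr, default="shared"):
        # Rightmost entry whose start is <= addr.
        i = bisect.bisect_right(self.ranges, (addr, float("inf"), "")) - 1
        if i >= 0:
            s, e, a = self.ranges[i]
            if s <= addr < e:
                return a
        return default
```

Usage: `m = AttrMap(); m.set_attr(0, 1 << 30, "private")` is one entry for
the whole 1GiB; a subsequent `m.set_attr(0x200000, 0x201000, "shared")`
splits it into three entries, and storing 'private' back over that page
merges everything into a single entry again. In the kernel the analogous
pieces would be xarray multi-index entries and xas_split(), as used by
__filemap_add_folio().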