Date: Tue, 31 Jan 2023 19:06:35 +0000
From: Oliver Upton
To: Sean Christopherson
Cc: Ricardo Koller, Marc Zyngier, pbonzini@redhat.com, yuzenghui@huawei.com,
	dmatlack@google.com, kvm@vger.kernel.org, kvmarm@lists.linux.dev,
	qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev,
	alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com,
	gshan@redhat.com, reijiw@google.com, rananta@google.com,
	bgardon@google.com, ricarkol@gmail.com
Subject: Re: [PATCH 6/9] KVM: arm64: Split huge pages when dirty logging is enabled
References: <20230113035000.480021-1-ricarkol@google.com>
	<20230113035000.480021-7-ricarkol@google.com>
	<86v8ktkqfx.wl-maz@kernel.org>
X-Mailing-List: kvm@vger.kernel.org

On Tue, Jan 31, 2023 at 05:54:45PM +0000, Sean Christopherson wrote:
> On Tue, Jan 31, 2023, Oliver Upton wrote:
> > On Tue, Jan 31, 2023 at 01:18:15AM +0000, Sean Christopherson wrote:
> > > On Mon, Jan 30, 2023, Oliver Upton wrote:
> > > > I think that Marc's suggestion of having userspace configure this is
> > > > sound. After all, userspace _should_ know the granularity of the
> > > > backing source it chose for guest memory.
> > > >
> > > > We could also interpret a cache size of 0 to signal that userspace
> > > > wants to disable eager page split for a VM altogether. It is entirely
> > > > possible that the user will want a differing QoS between
> > > > slice-of-hardware and overcommitted VMs.
> > >
> > > Maybe. It's also entirely possible that QoS is never factored in, e.g.
> > > if QoS guarantees for all VMs on a system are better met by enabling
> > > eager splitting across the board.
> > >
> > > There are other reasons to use module/kernel params beyond what Marc
> > > listed, e.g. to let the user opt out even when something is on by
> > > default.
> > > x86's TDP MMU has benefited greatly from downstream users being able
> > > to do A/B performance testing this way. I suspect x86's
> > > eager_page_split knob was added largely for this reason, e.g. to
> > > easily see how a specific workload is affected by eager splitting.
> > > That seems like a reasonable fit on the ARM side as well.
> >
> > There's a rather important distinction here in that we'd allow userspace
> > to select the page split cache size, which should be correctly sized for
> > the backing memory source. Considering the break-before-make rules of
> > the architecture, the only way eager split is performant on arm64 is by
> > replacing a block entry with a fully populated table hierarchy in one
> > operation. AFAICT, you don't have this problem on x86, as the
> > architecture generally permits a direct valid->valid transformation
> > without an intermediate invalidation. Well, ignoring iTLB multihit :)
> >
> > So, the largest transformation we need to do right now is on a PUD w/
> > PAGE_SIZE=4K, leading to 513 pages as proposed in the series. Exposing
> > that configuration option in a module parameter is presumptive that all
> > VMs on a host use the exact same memory configuration, which doesn't
> > feel right to me.
>
> Can you elaborate on the cache size needing to be tied to the backing
> source?

The proposed eager split mechanism attempts to replace a block with a
fully populated page table hierarchy (i.e. mapped at PTE granularity) in
order to avoid successive break-before-make invalidations. The cache
size must be >= the number of pages required to build out that fully
mapped page table hierarchy.

> Do the issues arise if you get to a point where KVM can have PGD-sized
> hugepages with PAGE_SIZE=4KiB?

Those problems arise when splitting any hugepage larger than a PMD. It
just so happens that the only configuration that supports larger
mappings is 4K at the moment.
If we were to take the step-down approach to eager page splitting, there
would be a lot of knock-on break-before-make operations as we go
PUD -> PMD -> PTE.

> Or do you want to let userspace optimize _now_ for PMD+4KiB?

The default cache value should probably optimize for PMD splitting and
give userspace the option to scale that up for PUD or greater if it sees
fit.

--
Thanks,
Oliver