public inbox for linux-kernel@vger.kernel.org
From: Leonardo Bras <leo.bras@arm.com>
To: Marc Zyngier <maz@kernel.org>
Cc: "Leonardo Bras" <leo.bras@arm.com>,
	"Catalin Marinas" <catalin.marinas@arm.com>,
	"Will Deacon" <will@kernel.org>,
	"Oliver Upton" <oupton@kernel.org>,
	"Joey Gouly" <joey.gouly@arm.com>,
	"Suzuki K Poulose" <suzuki.poulose@arm.com>,
	"Zenghui Yu" <yuzenghui@huawei.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	"Len Brown" <lenb@kernel.org>,
	"Saket Dumbre" <saket.dumbre@intel.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Chengwen Feng" <fengchengwen@huawei.com>,
	"Jonathan Cameron" <jic23@kernel.org>,
	"Kees Cook" <kees@kernel.org>,
	"Mikołaj Lenczewski" <miko.lenczewski@arm.com>,
	"Ryan Roberts" <ryan.roberts@arm.com>,
	"Yang Shi" <yang@os.amperecomputing.com>,
	"Thomas Huth" <thuth@redhat.com>,
	mrigendrachaubey <mrigendra.chaubey@gmail.com>,
	"Yeoreum Yun" <yeoreum.yun@arm.com>,
	"Mark Brown" <broonie@kernel.org>,
	"Kevin Brodsky" <kevin.brodsky@arm.com>,
	"James Clark" <james.clark@linaro.org>,
	"Ard Biesheuvel" <ardb@kernel.org>,
	"Fuad Tabba" <tabba@google.com>,
	"Raghavendra Rao Ananta" <rananta@google.com>,
	"Nathan Chancellor" <nathan@kernel.org>,
	"Vincent Donnefort" <vdonnefort@google.com>,
	"Lorenzo Pieralisi" <lpieralisi@kernel.org>,
	"Sascha Bischoff" <Sascha.Bischoff@arm.com>,
	"Anshuman Khandual" <anshuman.khandual@arm.com>,
	"Tian Zheng" <zhengtian10@huawei.com>,
	"Wei-Lin Chang" <weilin.chang@arm.com>,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-acpi@vger.kernel.org, acpica-devel@lists.linux.dev,
	kvm@vger.kernel.org
Subject: Re: [PATCH v1 00/12] KVM Dirty-bit cleaning accelerator (HACDBS)
Date: Thu, 30 Apr 2026 16:35:52 +0100	[thread overview]
Message-ID: <afN21w6J4Awg3gV3@devkitleo> (raw)
In-Reply-To: <86a4ukzel3.wl-maz@kernel.org>

On Thu, Apr 30, 2026 at 03:51:20PM +0100, Marc Zyngier wrote:
> On Thu, 30 Apr 2026 14:29:37 +0100,
> Leonardo Bras <leo.bras@arm.com> wrote:
> > 
> > On Thu, Apr 30, 2026 at 02:14:22PM +0100, Marc Zyngier wrote:
> > > On Thu, 30 Apr 2026 12:14:04 +0100,
> > > Leonardo Bras <leo.bras@arm.com> wrote:
> > > 
> > > > d - In __kvm_arch_dirty_log_clear() there is no way to predict how long
> > > >     should be the buffer, so I used 1x PAGE_SIZE, and when it gets full
> > > >     it's cleaned and reused. Should I let users configure that over a
> > > >     parameter, or is it overthinking?
> > > 
> > > How long is a piece of string? We can't know that. A single page feels
> > > very small in the 4kB case, and letting userspace define the size of
> > > that buffer seems a likely requirement.
> > > 
> > 
> > Ok, as a KVM parameter, or as a compile-time option?
> 
> Noticed the "userspace" word in there? It *has* to be controlled by
> userspace one way or another. So not as a kernel parameter, and
> *never* as a compile option.

Okay, I was going to suggest a module parameter, which userspace can set, but 
I remember now that KVM is usually built into the kernel. Besides, it could 
be bad to have this set for the whole system instead of per-VM.

How do you suggest letting userspace control that?
(All I could think of was an ioctl / API of some sort, which would 
require changing the VMMs as well.)

> 
> > > > Kernel v7.0.0 + this patchset builds properly, passing both kvm selftests
> > > > for dirty-bit tracking[2], on HW HACDBS enabled or disabled.
> > > 
> > > I have absolutely no trust in these tests.
> > > 
> > > Have you enabled a VMM to make use of these APIs, and actively
> > > migrated running guests? That's the level of testing I'd like to see,
> > > as the selftests are not what people run in production...
> > > 
> > 
> > There is no enablement needed on VMM side.
> > Yes, I have created a VM on upstream qemu with --enable-kvm and migrated it 
> > on the same host. (Inside a model)
> > 
> > That was the first test I used, but then I found out that kvm selftests 
> > stress up multiple scenarios in an easier way.
> 
> Except when they don't. In my experience, the selftests are only there
> to give the CI people the fuzzy feeling that they are doing something
> useful.

LOL

> I have a collection of examples indicating that what these
> things test is not representative of the bugs we have in KVM.
> 

Fair enough... it was tested with qemu live migration, and it works properly 
(migration between 2 instances of qemu on the same host, emulated by the model).

> > Do you prefer me to test on any specific scenario, or does whatever qemu
> > uses as a default parameter work well enough?
> 
> I want to hear about testing at a scale that make sense for production
> VMs, including live migrating between hosts while under memory
> pressure (swapping out).

I agree that's a more interesting test.

> 
> I'm also interested in efficiency: how much better is HACDBS compared
> to the current page faulting? 

The terms are indeed confusing, but HACDBS is just the cleaning accelerator 
for the dirty bit. It only affects how long it takes to traverse the page 
table, moving the pages listed in the array from writable-dirty to 
writable-clean.

That said, regarding efficiency: as I only have the model to test with, I am 
limited to those results, which may not reflect reality.

As an example, on dirty_log_perf_test, the hardware cleaning process took 
much longer (~8x) than software cleaning, even with no errors and with all 
entries fitting in the array (the 4k page above). If it takes that long even 
in this ideal scenario, it means the HACDBS mechanism implemented in the 
model is much slower than software, which is counter-intuitive.

> Just having patches for a feature is not
> enough to decide adoption of that feature. Show me the benefits in a
> quantitative way (within the limits of the model, of course).

Sure, I will try measuring migration between 2 instances of the model, and 
see how qemu live migration time is affected, then post the results in this 
thread for us to compare.

Thanks!
Leo


Thread overview: 18+ messages
2026-04-30 11:14 [PATCH v1 00/12] KVM Dirty-bit cleaning accelerator (HACDBS) Leonardo Bras
2026-04-30 11:14 ` [PATCH v1 01/12] KVM: arm64: Enable eager hugepage splitting if HDBSS is available Leonardo Bras
2026-04-30 11:14 ` [PATCH v1 02/12] KVM: arm64: HDBSS bits Leonardo Bras
2026-04-30 11:14 ` [PATCH v1 03/12] arm64/cpufeature: Add system-wide FEAT_HACDBS detection Leonardo Bras
2026-04-30 11:14 ` [PATCH v1 04/12] arm64/sysreg: Add HACDBS consumer and base registers Leonardo Bras
2026-04-30 11:14 ` [PATCH v1 05/12] KVM: arm64: Detect (via ACPI) and initialize HACDBSIRQ Leonardo Bras
2026-04-30 11:14 ` [PATCH v1 06/12] KVM: arm64: dirty_bit: Add base FEAT_HACDBS cleaning routine Leonardo Bras
2026-04-30 11:14 ` [PATCH v1 07/12] kvm: Add arch-generic interface for hw-accelerated dirty-bitmap cleaning Leonardo Bras
2026-04-30 11:14 ` [PATCH v1 08/12] KVM: arm64: Add hardware-accelerated dirty-bitmap cleaning routine Leonardo Bras
2026-04-30 11:14 ` [PATCH v1 09/12] kvm/dirty_ring: Introduce get_memslot and move helpers to header Leonardo Bras
2026-04-30 11:14 ` [PATCH v1 10/12] kvm/dirty_ring: Add arch-generic interface for hw-accelerated dirty-ring cleaning Leonardo Bras
2026-04-30 11:14 ` [PATCH v1 11/12] KVM: arm64: Add hardware-accelerated dirty-ring cleaning routine Leonardo Bras
2026-04-30 11:14 ` [PATCH v1 12/12] KVM: arm64: Enable KVM_HW_DIRTY_BIT Leonardo Bras
2026-04-30 13:14 ` [PATCH v1 00/12] KVM Dirty-bit cleaning accelerator (HACDBS) Marc Zyngier
2026-04-30 13:29   ` Leonardo Bras
2026-04-30 14:51     ` Marc Zyngier
2026-04-30 15:35       ` Leonardo Bras [this message]
2026-05-01  2:11       ` Mark Brown
