From: Dave Hansen <dave.hansen@intel.com>
To: "Kalra, Ashish" <ashish.kalra@amd.com>,
	Sean Christopherson <seanjc@google.com>
Cc: tglx@kernel.org, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	peterz@infradead.org, thomas.lendacky@amd.com,
	herbert@gondor.apana.org.au, davem@davemloft.net,
	ardb@kernel.org, pbonzini@redhat.com, aik@amd.com,
	Michael.Roth@amd.com, KPrateek.Nayak@amd.com,
	Tycho.Andersen@amd.com, Nathan.Fontenot@amd.com,
	jackyli@google.com, pgonda@google.com, rientjes@google.com,
	jacobhxu@google.com, xin@zytor.com,
	pawan.kumar.gupta@linux.intel.com, babu.moger@amd.com,
	dyoung@redhat.com, nikunj@amd.com, john.allen@amd.com,
	darwi@linutronix.de, linux-kernel@vger.kernel.org,
	linux-crypto@vger.kernel.org, kvm@vger.kernel.org,
	linux-coco@lists.linux.dev
Subject: Re: [PATCH v2 3/7] x86/sev: add support for RMPOPT instruction
Date: Wed, 11 Mar 2026 15:20:50 -0700	[thread overview]
Message-ID: <5102edd8-8eaa-4688-b3f7-3004c4cbc8f3@intel.com> (raw)
In-Reply-To: <cc9bf918-a14b-4619-a084-3f424fa16ea1@amd.com>

On 3/11/26 14:24, Kalra, Ashish wrote:
...
> There are 2 active SNP VMs here, with one SNP VM being terminated, the other SNP VM is still running, both VMs are configured with 100GB guest RAM: 
> 
> When this loop is executed when the SNP guest terminates:
> 
> [  232.789187] SEV-SNP: RMPOPT execution time 391609638 ns for physical address range 0x0000000000000000 - 0x0000020000000000 on all cpus -> ~391 ms
> 
> [  234.647462] SEV-SNP: RMPOPT execution time 457933019 ns for physical address range 0x0000000000000000 - 0x0000020000000000 on all cpus -> ~457 ms

That's better, but it's not quite what I am looking for.

The most important case (IMNHO) is when RMPOPT falls flat on its face:
it tries to optimize the full 2TB of memory and manages to optimize nothing.

I doubt that two 100GB VMs will get close to that case. It's
theoretically possible, but unlikely.

You also didn't mention 4k vs. 2M vs. 1G mappings.

> Now, there are a couple of additional RMPOPT optimizations which can be applied to this loop : 
> 
> 1). RMPOPT can skip the bulk of its work if another CPU has already optimized that region.
> The optimal thing may be to optimize all memory on one CPU first, and then let all the others
> run RMPOPT in parallel.

Ahh, so the RMP table itself caches the result of the RMPOPT in its 1G
metadata, then the CPUs can just copy it into their core-local
optimization table at RMPOPT time?

That's handy.

*But*, for the purposes of finding pathological behavior, it's actually
contrary to what I think I was asking for, which was having all 1G pages
filled with some private memory. If the system were in the state I want
to see tested, that optimization won't function.

> [  363.926595] SEV-SNP: RMPOPT execution time 317016656 ns for physical address range 0x0000000000000000 - 0x0000020000000000 on all cpus -> ~317 ms
> 
> [  365.415243] SEV-SNP: RMPOPT execution time 369659769 ns for physical address range 0x0000000000000000 - 0x0000020000000000 on all cpus -> ~369 ms.
> 
> So, with these two optimizations applied, there is like a ~16-20% performance improvement (when SNP guest terminates) in the execution of this loop
> which is executing RMPOPT on upto 2TB of RAM on all CPUs.
> 
> Any thoughts, feedback on the performance numbers ? 

16-20% isn't horrible, but it isn't really a fundamental change.

It would also be nice to see elapsed time for each CPU. Having one
pegged CPU for 400ms and 99 mostly idle ones is way different than
having 100 pegged CPUs for 400ms.

That's why I was interested in "how long it takes per-cpu".

But you could get some pretty good info with your new optimized loop:

                start = ktime_get();

                for (pa = pa_start; pa < pa_end; pa += PUD_SIZE)
                        rmpopt();               /* current CPU */

                middle = ktime_get();

                for (pa = pa_start; pa < pa_end; pa += PUD_SIZE)
                        on_each_cpu_mask(...);  /* remote CPUs */

                end = ktime_get();

If you do that ^ with a system:

	1. full of private memory
	2. empty of private memory
	3. empty again

You'll hopefully see:

	1. RMPOPT fall on its face. Worst case scenario (what I want to
	   see most)
	2. RMPOPT sees great success, but has to scan the RMP at least
	   once. Remote CPUs get a free ride on the first CPU's scan.
	   Largest (middle-start) vs. (end-middle)/nr_cpus delta.
	3. RMPOPT best case. Everything is already optimized.

> Ideally we should be issuing RMPOPTs to only optimize the 1G regions that contained memory associated with that guest and that should be 
> significantly less than the whole 2TB RAM range. 
> 
> But that is something we planned for 1GB hugetlb guest_memfd support getting merged and which i believe has dependency on:
> 1). in-place conversion for guest_memfd, 
> 2). 2M hugepage support for guest_memfd and finally 
> 3). 1GB hugeTLB support for guest_memfd.

It's a no-brainer to do RMPOPT when you have 1GB pages around. You'll
see zero argument from me.

Doing things per-guest and for smaller pages gets a little bit harder to
reason about. In the end, this is all about trying to optimize against
the RMP table which is a global resource. It's going to get wonky if
RMPOPT is driven purely by guest-local data. There are lots of potential
pitfalls.

For now, let's just do it as simply as possible. Get maximum bang for
our buck with minimal data structures and see how that works out. It
might end up being a:

	queue_delayed_work()

to do some cleanup a few seconds out after each SNP guest terminates. If
a bunch of guests terminate all at once it'll at least only do a single
set of IPIs.
