From: "Kalra, Ashish" <ashish.kalra@amd.com>
To: Dave Hansen <dave.hansen@intel.com>,
tglx@kernel.org, mingo@redhat.com, bp@alien8.de,
dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
seanjc@google.com, peterz@infradead.org, thomas.lendacky@amd.com,
herbert@gondor.apana.org.au, davem@davemloft.net,
ardb@kernel.org
Cc: pbonzini@redhat.com, aik@amd.com, Michael.Roth@amd.com,
KPrateek.Nayak@amd.com, Tycho.Andersen@amd.com,
Nathan.Fontenot@amd.com, jackyli@google.com, pgonda@google.com,
rientjes@google.com, jacobhxu@google.com, xin@zytor.com,
pawan.kumar.gupta@linux.intel.com, babu.moger@amd.com,
dyoung@redhat.com, nikunj@amd.com, john.allen@amd.com,
darwi@linutronix.de, linux-kernel@vger.kernel.org,
linux-crypto@vger.kernel.org, kvm@vger.kernel.org,
linux-coco@lists.linux.dev
Subject: Re: [PATCH 5/6] x86/sev: Use configfs to re-enable RMP optimizations.
Date: Tue, 17 Feb 2026 22:39:42 -0600 [thread overview]
Message-ID: <84c08e66-c0dc-4e2a-834a-67190b89bded@amd.com> (raw)
In-Reply-To: <e72165ed-c65d-4d21-bff6-9981b46311cf@amd.com>
On 2/17/2026 9:34 PM, Kalra, Ashish wrote:
> Hello Dave,
>
> On 2/17/2026 4:19 PM, Dave Hansen wrote:
>> On 2/17/26 12:11, Ashish Kalra wrote:
>>> From: Ashish Kalra <ashish.kalra@amd.com>
>>>
>>> Use configfs as an interface to re-enable RMP optimizations at runtime
>>>
>>> When SNP guests are launched, RMPUPDATE disables the corresponding
>>> RMPOPT optimizations. Therefore, an interface is required to manually
>>> re-enable RMP optimizations, as no mechanism currently exists to do so
>>> during SNP guest cleanup.
>>
>> Is this like a proof-of-concept to poke the hardware and show it works?
>> Or, is this intended to be the way that folks actually interact with
>> SEV-SNP optimization in real production scenarios?
>>
>> Shouldn't freeing SEV-SNP memory back to the system do this
>> automatically? Worst case, keep a 1-bit-per-GB bitmap of memory that's
>> been freed and schedule_work() to run in 1 or 10 or 100 seconds. That
>> should batch things up nicely enough. No?
There is also a cost associated with re-enabling the optimizations for all of
system RAM (even though it runs as a background kernel thread executing RMPOPT
on different 1GB regions in parallel, with inline cond_resched()'s), so we
don't want to run this periodically.
With SNP guests running, a scheduled/periodic run will also conflict with the
RMPUPDATE(s) being executed to assign guest pages and mark them as private.
The hardware does handle this race condition: if one CPU is executing RMPOPT
on a region while another CPU changes one of that region's pages to assigned
via RMPUPDATE, the hardware ensures that after the RMPUPDATE completes, the
CPU that did the RMPOPT will see the region as un-optimized.
Once 1GB hugetlb support (for guest_memfd) has been merged, however, it will be
straightforward to plumb re-enabling into the 1GB hugetlb cleanup path.
Thanks,
Ashish
>
> Actually, the RMPOPT implementation is going to be a multi-phased development.
>
> In the first phase (which is this patch series) we enable RMPOPT globally, and let RMPUPDATE(s)
> slowly switch it off over time as SNP guests spin up. Then in phase #2, once 1GB hugetlb is in place,
> we enable re-issuing of RMPOPT during 1GB page cleanup.
>
> So automatic re-issuing of RMPOPT will be done when SNP guests are shut down, as part of
> SNP guest cleanup, once 1GB hugetlb support (for guest_memfd) has been merged.
>
> Since currently, i.e., as part of this patch series, there is no mechanism to re-issue RMPOPT
> automatically as part of SNP guest cleanup, this support exists to do it
> manually at runtime via configfs.
>
> I will describe this multi-phased RMPOPT implementation plan in the cover letter for the
> next revision of this patch series.
>
>
>>
>> I can't fathom that users don't want this to be done automatically for them.
>>
>> Is the optimization scan really expensive or something? 1GB of memory
>> should have a small number of megabytes of metadata to scan.
Thread overview: 29+ messages
2026-02-17 20:09 [PATCH 0/6] Add RMPOPT support Ashish Kalra
2026-02-17 20:09 ` [PATCH 1/6] x86/cpufeatures: Add X86_FEATURE_AMD_RMPOPT feature flag Ashish Kalra
2026-02-17 23:06 ` Ahmed S. Darwish
2026-02-17 20:10 ` [PATCH 2/6] x86/sev: add support for enabling RMPOPT Ashish Kalra
2026-02-17 22:06 ` Dave Hansen
2026-02-18 3:08 ` K Prateek Nayak
2026-02-18 14:59 ` Dave Hansen
2026-02-18 16:55 ` Kalra, Ashish
2026-02-18 17:01 ` Dave Hansen
2026-02-18 17:07 ` Kalra, Ashish
2026-02-18 17:17 ` Dave Hansen
2026-02-18 22:17 ` Kalra, Ashish
2026-02-18 22:56 ` Dave Hansen
2026-02-17 20:10 ` [PATCH 3/6] x86/sev: add support for RMPOPT instruction Ashish Kalra
2026-02-18 16:28 ` Uros Bizjak
2026-02-17 20:11 ` [PATCH 4/6] x86/sev: Add interface to re-enable RMP optimizations Ashish Kalra
2026-02-17 20:11 ` [PATCH 5/6] x86/sev: Use configfs " Ashish Kalra
2026-02-17 22:19 ` Dave Hansen
2026-02-18 3:34 ` Kalra, Ashish
2026-02-18 4:39 ` Kalra, Ashish [this message]
2026-02-18 15:10 ` Dave Hansen
2026-02-17 20:11 ` [PATCH 6/6] x86/sev: Add debugfs support for RMPOPT Ashish Kalra
2026-02-17 22:42 ` Ahmed S. Darwish
2026-02-17 22:11 ` [PATCH 0/6] Add RMPOPT support Dave Hansen
2026-02-18 4:12 ` Kalra, Ashish
2026-02-18 15:03 ` Dave Hansen
2026-02-18 17:03 ` Kalra, Ashish
2026-02-18 17:15 ` Dave Hansen
2026-02-18 21:09 ` Kalra, Ashish