From: Gavin Shan <gshan@redhat.com>
To: Steven Price <steven.price@arm.com>,
kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Catalin Marinas <catalin.marinas@arm.com>,
Marc Zyngier <maz@kernel.org>, Will Deacon <will@kernel.org>,
James Morse <james.morse@arm.com>,
Oliver Upton <oliver.upton@linux.dev>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Zenghui Yu <yuzenghui@huawei.com>,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, Joey Gouly <joey.gouly@arm.com>,
Alexandru Elisei <alexandru.elisei@arm.com>,
Christoffer Dall <christoffer.dall@arm.com>,
Fuad Tabba <tabba@google.com>,
linux-coco@lists.linux.dev,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>,
Shanker Donthineni <sdonthineni@nvidia.com>,
Alper Gun <alpergun@google.com>,
"Aneesh Kumar K . V" <aneesh.kumar@kernel.org>
Subject: Re: [PATCH v7 18/45] arm64: RME: Handle RMI_EXIT_RIPAS_CHANGE
Date: Wed, 9 Apr 2025 10:13:40 +1000
Message-ID: <f87fa539-9abd-4a7e-8ce6-9515f26bed71@redhat.com>
In-Reply-To: <3b563b01-5090-4c9d-a47c-a0aaa13c474b@arm.com>
Hi Steve,
On 4/8/25 2:34 AM, Steven Price wrote:
> On 04/03/2025 04:35, Gavin Shan wrote:
>> On 2/14/25 2:13 AM, Steven Price wrote:
>>> The guest can request that a region of its protected address space is
>>> switched between RIPAS_RAM and RIPAS_EMPTY (and back) using
>>> RSI_IPA_STATE_SET. This causes a guest exit with the
>>> RMI_EXIT_RIPAS_CHANGE code. We treat this as a request to convert a
>>> protected region to unprotected (or back), exiting to the VMM to make
>>> the necessary changes to the guest_memfd and memslot mappings. On the
>>> next entry the RIPAS changes are committed by making RMI_RTT_SET_RIPAS
>>> calls.
>>>
>>> The VMM may wish to reject the RIPAS change requested by the guest. For
>>> now it can only do this by no longer scheduling the VCPU, as we don't
>>> currently have a use case for returning that rejection to the guest, but
>>> by postponing the RMI_RTT_SET_RIPAS changes to entry we leave the door
>>> open for adding a new ioctl in the future for this purpose.
>>>
>>> Signed-off-by: Steven Price <steven.price@arm.com>
>>> ---
>>> New patch for v7: The code was previously split awkwardly between two
>>> other patches.
>>> ---
>>> arch/arm64/kvm/rme.c | 87 ++++++++++++++++++++++++++++++++++++++++++++
>>> 1 file changed, 87 insertions(+)
>>>
>>
>> With the following comments addressed:
>>
>> Reviewed-by: Gavin Shan <gshan@redhat.com>
>>
>>> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
>>> index 507eb4b71bb7..f965869e9ef7 100644
>>> --- a/arch/arm64/kvm/rme.c
>>> +++ b/arch/arm64/kvm/rme.c
>>> @@ -624,6 +624,64 @@ void kvm_realm_unmap_range(struct kvm *kvm, unsigned long start, u64 size,
>>> realm_unmap_private_range(kvm, start, end);
>>> }
>>> +static int realm_set_ipa_state(struct kvm_vcpu *vcpu,
>>> +                               unsigned long start,
>>> +                               unsigned long end,
>>> +                               unsigned long ripas,
>>> +                               unsigned long *top_ipa)
>>> +{
>>> + struct kvm *kvm = vcpu->kvm;
>>> + struct realm *realm = &kvm->arch.realm;
>>> + struct realm_rec *rec = &vcpu->arch.rec;
>>> + phys_addr_t rd_phys = virt_to_phys(realm->rd);
>>> + phys_addr_t rec_phys = virt_to_phys(rec->rec_page);
>>> + struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
>>> + unsigned long ipa = start;
>>> + int ret = 0;
>>> +
>>> + while (ipa < end) {
>>> + unsigned long next;
>>> +
>>> + ret = rmi_rtt_set_ripas(rd_phys, rec_phys, ipa, end, &next);
>>> +
>>
>> This doesn't look correct to me. Looking at RMM::smc_rtt_set_ripas(),
>> it's possible the SMC call returns without updating 'next' to a valid
>> address. In that case, the garbage content residing in 'next' would be
>> used to update 'ipa' in the next iteration. So we need to initialize it
>> in advance, like below.
>>
>>      unsigned long ipa = start;
>>      unsigned long next = start;
>>
>>      while (ipa < end) {
>>              ret = rmi_rtt_set_ripas(rd_phys, rec_phys, ipa, end, &next);
>
> I agree this might not be the clearest code, but 'next' should be set if
> the return state is RMI_SUCCESS, and we don't actually get to the "ipa =
> next" line unless that is the case. But I'll rejig things because it's
> not clear.
>
Yes, 'next' is always updated when RMI_SUCCESS is returned. However, 'next'
won't be updated when RMI_ERROR_RTT is returned. I had overlooked that when
RMI_ERROR_RTT is returned on the first iteration, 'ipa' is kept intact and
the same 'ipa' is retried after the stage-2 page table is populated. So
everything should be fine.
>>> + if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
>>> + int walk_level = RMI_RETURN_INDEX(ret);
>>> + int level = find_map_level(realm, ipa, end);
>>> +
>>> + /*
>>> + * If the RMM walk ended early then more tables are
>>> + * needed to reach the required depth to set the RIPAS.
>>> + */
>>> + if (walk_level < level) {
>>> + ret = realm_create_rtt_levels(realm, ipa,
>>> + walk_level,
>>> + level,
>>> + memcache);
>>> + /* Retry with RTTs created */
>>> + if (!ret)
>>> + continue;
>>> + } else {
>>> + ret = -EINVAL;
>>> + }
>>> +
>>> + break;
>>> + } else if (RMI_RETURN_STATUS(ret) != RMI_SUCCESS) {
>>> + WARN(1, "Unexpected error in %s: %#x\n", __func__,
>>> + ret);
>>> + ret = -EINVAL;
>>
>> ret = -ENXIO;
>
> Ack
>
>>> + break;
>>> + }
>>> + ipa = next;
>>> + }
>>> +
>>> + *top_ipa = ipa;
>>> +
>>> + if (ripas == RMI_EMPTY && ipa != start)
>>> + realm_unmap_private_range(kvm, start, ipa);
>>> +
>>> + return ret;
>>> +}
>>> +
>>> static int realm_init_ipa_state(struct realm *realm,
>>> unsigned long ipa,
>>> unsigned long end)
>>> @@ -863,6 +921,32 @@ void kvm_destroy_realm(struct kvm *kvm)
>>> kvm_free_stage2_pgd(&kvm->arch.mmu);
>>> }
>>> +static void kvm_complete_ripas_change(struct kvm_vcpu *vcpu)
>>> +{
>>> + struct kvm *kvm = vcpu->kvm;
>>> + struct realm_rec *rec = &vcpu->arch.rec;
>>> + unsigned long base = rec->run->exit.ripas_base;
>>> + unsigned long top = rec->run->exit.ripas_top;
>>> + unsigned long ripas = rec->run->exit.ripas_value;
>>> + unsigned long top_ipa;
>>> + int ret;
>>> +
>>
>> Some checks are needed here to ensure the addresses (@base and @top)
>> fall inside the protected (private) space, for two reasons: (1) those
>> parameters originate from the guest, which can be misbehaving; and
>> (2) RMM::smc_rtt_set_ripas() isn't limited to the private space,
>> meaning it can also change RIPAS for ranges in the shared space.
>
> I might be missing something, but AFAICT this is safe:
>
> 1. The RMM doesn't permit RIPAS changes within the shared space:
> RSI_IPA_STATE_SET has a precondition [rgn_bound]:
> AddrRangeIsProtected(base, top, realm)
> So a malicious guest shouldn't get past the RMM.
>
> 2. The RMM validates that the range passed here is (a subset of) the
> one provided to the NS-world [base_bound / top_bound].
>
> And even if somehow a malicious guest managed to bypass these checks I
> don't see how it would cause harm to the host operating on the wrong region.
>
> I'm happy to be corrected though! What am I missing?
>
No, you didn't miss anything; I did. I missed that the requested address
range is ensured to be part of the private space by
RMM::handle_rsi_ipa_state_set(). So everything should be fine.
void handle_rsi_ipa_state_set(struct rec *rec,
                              struct rmi_rec_exit *rec_exit,
                              struct rsi_result *res)
{
        :
        if ((ripas_val > RIPAS_RAM) ||
            !GRANULE_ALIGNED(base) || !GRANULE_ALIGNED(top) ||
            (top <= base) || /* Size is zero, or range overflows */
            !region_in_rec_par(rec, base, top)) {
                res->action = UPDATE_REC_RETURN_TO_REALM;
                res->smc_res.x[0] = RSI_ERROR_INPUT;
                return;
        }
        :
}
> Thanks,
> Steve
>
>>> + do {
>>> + kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_page_cache,
>>> + kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu));
>>> + write_lock(&kvm->mmu_lock);
>>> + ret = realm_set_ipa_state(vcpu, base, top, ripas, &top_ipa);
>>> + write_unlock(&kvm->mmu_lock);
>>> +
>>> + if (WARN_RATELIMIT(ret && ret != -ENOMEM,
>>> +                    "Unable to satisfy RIPAS_CHANGE for %#lx - %#lx, ripas: %#lx\n",
>>> + base, top, ripas))
>>> + break;
>>> +
>>> + base = top_ipa;
>>> + } while (top_ipa < top);
>>> +}
>>> +
>>> int kvm_rec_enter(struct kvm_vcpu *vcpu)
>>> {
>>> struct realm_rec *rec = &vcpu->arch.rec;
>>> @@ -873,6 +957,9 @@ int kvm_rec_enter(struct kvm_vcpu *vcpu)
>>> for (int i = 0; i < REC_RUN_GPRS; i++)
>>> rec->run->enter.gprs[i] = vcpu_get_reg(vcpu, i);
>>> break;
>>> + case RMI_EXIT_RIPAS_CHANGE:
>>> + kvm_complete_ripas_change(vcpu);
>>> + break;
>>> }
>>> if (kvm_realm_state(vcpu->kvm) != REALM_STATE_ACTIVE)
>>
Thanks,
Gavin