From: Gavin Shan <gshan@redhat.com>
To: Steven Price <steven.price@arm.com>,
kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Catalin Marinas <catalin.marinas@arm.com>,
Marc Zyngier <maz@kernel.org>, Will Deacon <will@kernel.org>,
James Morse <james.morse@arm.com>,
Oliver Upton <oliver.upton@linux.dev>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Zenghui Yu <yuzenghui@huawei.com>,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, Joey Gouly <joey.gouly@arm.com>,
Alexandru Elisei <alexandru.elisei@arm.com>,
Christoffer Dall <christoffer.dall@arm.com>,
Fuad Tabba <tabba@google.com>,
linux-coco@lists.linux.dev,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>,
Shanker Donthineni <sdonthineni@nvidia.com>,
Alper Gun <alpergun@google.com>,
"Aneesh Kumar K . V" <aneesh.kumar@kernel.org>,
Emi Kisanuki <fj0570is@fujitsu.com>
Subject: Re: [PATCH v9 20/43] arm64: RME: Runtime faulting of memory
Date: Wed, 2 Jul 2025 11:04:46 +1000
Message-ID: <bb75b5fd-7186-4c93-80ff-0a398dc6c78d@redhat.com>
In-Reply-To: <20250611104844.245235-21-steven.price@arm.com>
On 6/11/25 8:48 PM, Steven Price wrote:
> At runtime if the realm guest accesses memory which hasn't yet been
> mapped then KVM needs to either populate the region or fault the guest.
>
> For memory in the lower (protected) region of IPA a fresh page is
> provided to the RMM which will zero the contents. For memory in the
> upper (shared) region of IPA, the memory from the memslot is mapped
> into the realm VM non secure.
>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> Changes since v8:
> * Propagate the may_block flag.
> * Minor comments and coding style changes.
> Changes since v7:
> * Remove redundant WARN_ONs for realm_create_rtt_levels() - it will
> internally WARN when necessary.
> Changes since v6:
> * Handle PAGE_SIZE being larger than RMM granule size.
> * Some minor renaming following review comments.
> Changes since v5:
> * Reduce use of struct page in preparation for supporting the RMM
> having a different page size to the host.
> * Handle a race when delegating a page where another CPU has faulted on
> the same page (and already delegated the physical page) but not yet
> mapped it. In this case simply return to the guest to either use the
> mapping from the other CPU (or refault if the race is lost).
> * The changes to populate_par_region() are moved into the previous
> patch where they belong.
> Changes since v4:
> * Code cleanup following review feedback.
> * Drop the PTE_SHARED bit when creating unprotected page table entries.
> This is now set by the RMM and the host has no control of it and the
> spec requires the bit to be set to zero.
> Changes since v2:
> * Avoid leaking memory if failing to map it in the realm.
> * Correctly mask RTT based on LPA2 flag (see rtt_get_phys()).
> * Adapt to changes in previous patches.
> ---
> arch/arm64/include/asm/kvm_emulate.h | 10 ++
> arch/arm64/include/asm/kvm_rme.h | 10 ++
> arch/arm64/kvm/mmu.c | 133 ++++++++++++++++++++-
> arch/arm64/kvm/rme.c | 165 +++++++++++++++++++++++++++
> 4 files changed, 312 insertions(+), 6 deletions(-)
>
With @may_block set to true in kvm_free_stage2_pgd(), as commented previously,
and with the below nitpicks addressed:
Reviewed-by: Gavin Shan <gshan@redhat.com>
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index 302a691b3723..126c98cded90 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -709,6 +709,16 @@ static inline bool kvm_realm_is_created(struct kvm *kvm)
> return kvm_is_realm(kvm) && kvm_realm_state(kvm) != REALM_STATE_NONE;
> }
>
> +static inline gpa_t kvm_gpa_from_fault(struct kvm *kvm, phys_addr_t ipa)
> +{
> + if (kvm_is_realm(kvm)) {
> + struct realm *realm = &kvm->arch.realm;
> +
> + return ipa & ~BIT(realm->ia_bits - 1);
> + }
> + return ipa;
> +}
> +
It may be clearer with something like the below. Note that the non-CoCo VM case
is still preferred over the CoCo VM case:
static inline gpa_t kvm_gpa_from_fault(struct kvm *kvm, phys_addr_t ipa)
{
        if (!kvm_is_realm(kvm))
                return ipa;

        return ipa & ~BIT(kvm->arch.realm.ia_bits - 1);
}
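For example (purely illustrative): with ia_bits == 40, a fault on the shared
alias 0x8012345000 resolves to the private GPA 0x0012345000, while an address
that is already below BIT(39) is returned unchanged.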
> static inline bool vcpu_is_rec(struct kvm_vcpu *vcpu)
> {
> if (static_branch_unlikely(&kvm_rme_is_available))
> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
> index 321970779669..df88ae51b7c9 100644
> --- a/arch/arm64/include/asm/kvm_rme.h
> +++ b/arch/arm64/include/asm/kvm_rme.h
> @@ -110,6 +110,16 @@ void kvm_realm_unmap_range(struct kvm *kvm,
> unsigned long size,
> bool unmap_private,
> bool may_block);
> +int realm_map_protected(struct realm *realm,
> + unsigned long base_ipa,
> + kvm_pfn_t pfn,
> + unsigned long size,
> + struct kvm_mmu_memory_cache *memcache);
> +int realm_map_non_secure(struct realm *realm,
> + unsigned long ipa,
> + kvm_pfn_t pfn,
> + unsigned long size,
> + struct kvm_mmu_memory_cache *memcache);
>
> static inline bool kvm_realm_is_private_address(struct realm *realm,
> unsigned long addr)
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 37403eaa5699..1dc644ea26ce 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -338,8 +338,14 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
>
> lockdep_assert_held_write(&kvm->mmu_lock);
> WARN_ON(size & ~PAGE_MASK);
> - WARN_ON(stage2_apply_range(mmu, start, end, KVM_PGT_FN(kvm_pgtable_stage2_unmap),
> - may_block));
> +
> + if (kvm_is_realm(kvm))
> + kvm_realm_unmap_range(kvm, start, size, !only_shared,
> + may_block);
> + else
> + WARN_ON(stage2_apply_range(mmu, start, end,
> + KVM_PGT_FN(kvm_pgtable_stage2_unmap),
> + may_block));
> }
>
{} is needed here since both branches span multiple lines.
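i.e. something like the following (same logic as above, just to illustrate the
coding-style point):

        if (kvm_is_realm(kvm)) {
                kvm_realm_unmap_range(kvm, start, size, !only_shared,
                                      may_block);
        } else {
                WARN_ON(stage2_apply_range(mmu, start, end,
                                           KVM_PGT_FN(kvm_pgtable_stage2_unmap),
                                           may_block));
        }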
> void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
> @@ -359,7 +365,10 @@ static void stage2_flush_memslot(struct kvm *kvm,
> phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
> phys_addr_t end = addr + PAGE_SIZE * memslot->npages;
>
> - kvm_stage2_flush_range(&kvm->arch.mmu, addr, end);
> + if (kvm_is_realm(kvm))
> + kvm_realm_unmap_range(kvm, addr, end - addr, false, true);
> + else
> + kvm_stage2_flush_range(&kvm->arch.mmu, addr, end);
> }
>
> /**
> @@ -1053,6 +1062,10 @@ void stage2_unmap_vm(struct kvm *kvm)
> struct kvm_memory_slot *memslot;
> int idx, bkt;
>
> + /* For realms this is handled by the RMM so nothing to do here */
> + if (kvm_is_realm(kvm))
> + return;
> +
> idx = srcu_read_lock(&kvm->srcu);
> mmap_read_lock(current->mm);
> write_lock(&kvm->mmu_lock);
> @@ -1078,6 +1091,9 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
> if (kvm_is_realm(kvm) &&
> (kvm_realm_state(kvm) != REALM_STATE_DEAD &&
> kvm_realm_state(kvm) != REALM_STATE_NONE)) {
> + struct realm *realm = &kvm->arch.realm;
> +
> + kvm_stage2_unmap_range(mmu, 0, BIT(realm->ia_bits - 1), false);
> write_unlock(&kvm->mmu_lock);
> kvm_realm_destroy_rtts(kvm, pgt->ia_bits);
>
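To restate the comment at the top of this reply: the suggestion is to pass
may_block = true in this call, i.e. roughly:

        kvm_stage2_unmap_range(mmu, 0, BIT(realm->ia_bits - 1), true);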
> @@ -1486,6 +1502,85 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
> return vma->vm_flags & VM_MTE_ALLOWED;
> }
>
> +static int realm_map_ipa(struct kvm *kvm, phys_addr_t ipa,
> + kvm_pfn_t pfn, unsigned long map_size,
> + enum kvm_pgtable_prot prot,
> + struct kvm_mmu_memory_cache *memcache)
> +{
> + struct realm *realm = &kvm->arch.realm;
> +
> + /*
> + * Write permission is required for now even though it's possible to
> + * map unprotected pages (granules) as read-only. It's impossible to
> + * map protected pages (granules) as read-only.
> + */
> + if (WARN_ON(!(prot & KVM_PGTABLE_PROT_W)))
> + return -EFAULT;
> +
> + ipa = ALIGN_DOWN(ipa, PAGE_SIZE);
> +
Empty line can be dropped.
> + if (!kvm_realm_is_private_address(realm, ipa))
> + return realm_map_non_secure(realm, ipa, pfn, map_size,
> + memcache);
> +
> + return realm_map_protected(realm, ipa, pfn, map_size, memcache);
> +}
> +
> +static int private_memslot_fault(struct kvm_vcpu *vcpu,
> + phys_addr_t fault_ipa,
> + struct kvm_memory_slot *memslot)
> +{
> + struct kvm *kvm = vcpu->kvm;
> + gpa_t gpa = kvm_gpa_from_fault(kvm, fault_ipa);
> + gfn_t gfn = gpa >> PAGE_SHIFT;
> + bool is_priv_gfn = kvm_mem_is_private(kvm, gfn);
> + struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
> + struct page *page;
> + kvm_pfn_t pfn;
> + int ret;
> + /*
> + * For Realms, the shared address is an alias of the private GPA with
> + * the top bit set. Thus if the fault address matches the GPA then it
> + * is the private alias.
> + */
> + bool is_priv_fault = (gpa == fault_ipa);
> +
> + if (is_priv_gfn != is_priv_fault) {
> + kvm_prepare_memory_fault_exit(vcpu, gpa, PAGE_SIZE,
> + kvm_is_write_fault(vcpu), false,
> + is_priv_fault);
> +
> + /*
> + * KVM_EXIT_MEMORY_FAULT requires a return code of -EFAULT,
> + * see the API documentation
> + */
> + return -EFAULT;
> + }
> +
> + if (!is_priv_fault) {
> + /* Not a private mapping, handling normally */
> + return -EINVAL;
> + }
> +
> + ret = kvm_mmu_topup_memory_cache(memcache,
> + kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu));
> + if (ret)
> + return ret;
> +
> + ret = kvm_gmem_get_pfn(kvm, memslot, gfn, &pfn, &page, NULL);
> + if (ret)
> + return ret;
> +
> + /* FIXME: Should be able to use bigger than PAGE_SIZE mappings */
> + ret = realm_map_ipa(kvm, fault_ipa, pfn, PAGE_SIZE, KVM_PGTABLE_PROT_W,
> + memcache);
> + if (!ret)
> + return 1; /* Handled */
> +
> + put_page(page);
> + return ret;
> +}
> +
> static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> struct kvm_s2_trans *nested,
> struct kvm_memory_slot *memslot, unsigned long hva,
> @@ -1513,6 +1608,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> if (fault_is_perm)
> fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
> write_fault = kvm_is_write_fault(vcpu);
> +
> + /*
> + * Realms cannot map protected pages read-only
> + * FIXME: It should be possible to map unprotected pages read-only
> + */
> + if (vcpu_is_rec(vcpu))
> + write_fault = true;
> +
> exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
> VM_BUG_ON(write_fault && exec_fault);
>
> @@ -1630,7 +1733,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> ipa &= ~(vma_pagesize - 1);
> }
>
> - gfn = ipa >> PAGE_SHIFT;
> + gfn = kvm_gpa_from_fault(kvm, ipa) >> PAGE_SHIFT;
> mte_allowed = kvm_vma_mte_allowed(vma);
>
> vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
> @@ -1763,6 +1866,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> */
> prot &= ~KVM_NV_GUEST_MAP_SZ;
> ret = KVM_PGT_FN(kvm_pgtable_stage2_relax_perms)(pgt, fault_ipa, prot, flags);
> + } else if (kvm_is_realm(kvm)) {
> + ret = realm_map_ipa(kvm, fault_ipa, pfn, vma_pagesize,
> + prot, memcache);
> } else {
> ret = KVM_PGT_FN(kvm_pgtable_stage2_map)(pgt, fault_ipa, vma_pagesize,
> __pfn_to_phys(pfn), prot,
> @@ -1911,8 +2017,15 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
> nested = &nested_trans;
> }
>
> - gfn = ipa >> PAGE_SHIFT;
> + gfn = kvm_gpa_from_fault(vcpu->kvm, ipa) >> PAGE_SHIFT;
> memslot = gfn_to_memslot(vcpu->kvm, gfn);
> +
> + if (kvm_slot_can_be_private(memslot)) {
> + ret = private_memslot_fault(vcpu, ipa, memslot);
> + if (ret != -EINVAL)
> + goto out;
> + }
> +
> hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
> write_fault = kvm_is_write_fault(vcpu);
> if (kvm_is_error_hva(hva) || (write_fault && !writable)) {
> @@ -1956,7 +2069,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
> * of the page size.
> */
> ipa |= kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0);
> - ret = io_mem_abort(vcpu, ipa);
> + ret = io_mem_abort(vcpu, kvm_gpa_from_fault(vcpu->kvm, ipa));
> goto out_unlock;
> }
>
> @@ -2004,6 +2117,10 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> if (!kvm->arch.mmu.pgt)
> return false;
>
> + /* We don't support aging for Realms */
> + if (kvm_is_realm(kvm))
> + return true;
> +
> return KVM_PGT_FN(kvm_pgtable_stage2_test_clear_young)(kvm->arch.mmu.pgt,
> range->start << PAGE_SHIFT,
> size, true);
> @@ -2020,6 +2137,10 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> if (!kvm->arch.mmu.pgt)
> return false;
>
> + /* We don't support aging for Realms */
> + if (kvm_is_realm(kvm))
> + return true;
> +
> return KVM_PGT_FN(kvm_pgtable_stage2_test_clear_young)(kvm->arch.mmu.pgt,
> range->start << PAGE_SHIFT,
> size, false);
> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
> index d7bb11583506..0fe55e369782 100644
> --- a/arch/arm64/kvm/rme.c
> +++ b/arch/arm64/kvm/rme.c
> @@ -750,6 +750,171 @@ static int realm_create_protected_data_page(struct realm *realm,
> return -ENXIO;
> }
>
> +static int fold_rtt(struct realm *realm, unsigned long addr, int level)
> +{
> + phys_addr_t rtt_addr;
> + int ret;
> +
> + ret = realm_rtt_fold(realm, addr, level, &rtt_addr);
> + if (ret)
> + return ret;
> +
> + free_rtt(rtt_addr);
> +
> + return 0;
> +}
> +
> +int realm_map_protected(struct realm *realm,
> + unsigned long ipa,
> + kvm_pfn_t pfn,
> + unsigned long map_size,
> + struct kvm_mmu_memory_cache *memcache)
> +{
> + phys_addr_t phys = __pfn_to_phys(pfn);
> + phys_addr_t rd = virt_to_phys(realm->rd);
> + unsigned long base_ipa = ipa;
> + unsigned long size;
> + int map_level = IS_ALIGNED(map_size, RMM_L2_BLOCK_SIZE) ?
> + RMM_RTT_BLOCK_LEVEL : RMM_RTT_MAX_LEVEL;
> + int ret = 0;
> +
> + if (WARN_ON(!IS_ALIGNED(map_size, RMM_PAGE_SIZE) ||
> + !IS_ALIGNED(ipa, map_size)))
> + return -EINVAL;
> +
> + if (map_level < RMM_RTT_MAX_LEVEL) {
> + /*
> + * A temporary RTT is needed during the map, precreate it,
> + * however if there is an error (e.g. missing parent tables)
> + * this will be handled below.
> + */
> + realm_create_rtt_levels(realm, ipa, map_level,
> + RMM_RTT_MAX_LEVEL, memcache);
> + }
> +
> + for (size = 0; size < map_size; size += RMM_PAGE_SIZE) {
> + if (rmi_granule_delegate(phys)) {
> + /*
> + * It's likely we raced with another VCPU on the same
> + * fault. Assume the other VCPU has handled the fault
> + * and return to the guest.
> + */
> + return 0;
> + }
> +
> + ret = rmi_data_create_unknown(rd, phys, ipa);
> +
> + if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
> + /* Create missing RTTs and retry */
> + int level = RMI_RETURN_INDEX(ret);
> +
> + WARN_ON(level == RMM_RTT_MAX_LEVEL);
> +
Unnecessary empty line.
> + ret = realm_create_rtt_levels(realm, ipa, level,
> + RMM_RTT_MAX_LEVEL,
> + memcache);
> + if (ret)
> + goto err_undelegate;
> +
> + ret = rmi_data_create_unknown(rd, phys, ipa);
> + }
> +
> + if (WARN_ON(ret))
> + goto err_undelegate;
> +
> + phys += RMM_PAGE_SIZE;
> + ipa += RMM_PAGE_SIZE;
> + }
> +
> + if (map_size == RMM_L2_BLOCK_SIZE) {
> + ret = fold_rtt(realm, base_ipa, map_level + 1);
> + if (WARN_ON(ret))
> + goto err;
> + }
> +
> + return 0;
> +
> +err_undelegate:
> + if (WARN_ON(rmi_granule_undelegate(phys))) {
> + /* Page can't be returned to NS world so is lost */
> + get_page(phys_to_page(phys));
> + }
> +err:
> + while (size > 0) {
> + unsigned long data, top;
> +
> + phys -= RMM_PAGE_SIZE;
> + size -= RMM_PAGE_SIZE;
> + ipa -= RMM_PAGE_SIZE;
> +
> + WARN_ON(rmi_data_destroy(rd, ipa, &data, &top));
> +
> + if (WARN_ON(rmi_granule_undelegate(phys))) {
> + /* Page can't be returned to NS world so is lost */
> + get_page(phys_to_page(phys));
> + }
> + }
> + return -ENXIO;
> +}
> +
> +int realm_map_non_secure(struct realm *realm,
> + unsigned long ipa,
> + kvm_pfn_t pfn,
> + unsigned long size,
> + struct kvm_mmu_memory_cache *memcache)
> +{
> + phys_addr_t rd = virt_to_phys(realm->rd);
> + phys_addr_t phys = __pfn_to_phys(pfn);
> + unsigned long offset;
> + /* TODO: Support block mappings */
> + int map_level = RMM_RTT_MAX_LEVEL;
> + int map_size = rme_rtt_level_mapsize(map_level);
> + int ret = 0;
> +
> + if (WARN_ON(!IS_ALIGNED(size, RMM_PAGE_SIZE) ||
> + !IS_ALIGNED(ipa, size)))
> + return -EINVAL;
> +
> + for (offset = 0; offset < size; offset += map_size) {
> + /*
> + * realm_map_ipa() enforces that the memory is writable,
> + * so for now we permit both read and write.
> + */
> + unsigned long desc = phys |
> + PTE_S2_MEMATTR(MT_S2_FWB_NORMAL) |
> + KVM_PTE_LEAF_ATTR_LO_S2_S2AP_R |
> + KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W;
> + ret = rmi_rtt_map_unprotected(rd, ipa, map_level, desc);
> +
> + if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
> + /* Create missing RTTs and retry */
> + int level = RMI_RETURN_INDEX(ret);
> +
> + ret = realm_create_rtt_levels(realm, ipa, level,
> + map_level, memcache);
> + if (ret)
> + return -ENXIO;
> +
> + ret = rmi_rtt_map_unprotected(rd, ipa, map_level, desc);
> + }
> + /*
> + * RMI_ERROR_RTT can be reported for two reasons: either the
> + * RTT tables are not there, or there is an RTTE already
> + * present for the address. The above call to create RTTs
> + * handles the first case, and in the second case this
> + * indicates that another thread has already populated the RTTE
> + * for us, so we can ignore the error and continue.
> + */
> + if (ret && RMI_RETURN_STATUS(ret) != RMI_ERROR_RTT)
> + return -ENXIO;
> +
> + ipa += map_size;
> + phys += map_size;
> + }
> +
> + return 0;
> +}
> +
> static int populate_region(struct kvm *kvm,
> phys_addr_t ipa_base,
> phys_addr_t ipa_end,
Thanks,
Gavin