From: Suzuki K Poulose <suzuki.poulose@arm.com>
To: Steven Price <steven.price@arm.com>,
kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Catalin Marinas <catalin.marinas@arm.com>,
Marc Zyngier <maz@kernel.org>, Will Deacon <will@kernel.org>,
James Morse <james.morse@arm.com>,
Oliver Upton <oliver.upton@linux.dev>,
Zenghui Yu <yuzenghui@huawei.com>,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, Joey Gouly <joey.gouly@arm.com>,
Alexandru Elisei <alexandru.elisei@arm.com>,
Christoffer Dall <christoffer.dall@arm.com>,
Fuad Tabba <tabba@google.com>,
linux-coco@lists.linux.dev,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>
Subject: Re: [PATCH v2 17/43] arm64: RME: Allow VMM to set RIPAS
Date: Fri, 19 Apr 2024 11:20:21 +0100 [thread overview]
Message-ID: <cba1c8ce-9299-4013-910c-0ba6d205cd90@arm.com> (raw)
In-Reply-To: <d2957090-fcf0-4dff-901e-d8ea975f2452@arm.com>
On 19/04/2024 10:34, Suzuki K Poulose wrote:
> On 12/04/2024 09:42, Steven Price wrote:
>> Each page within the protected region of the realm guest can be marked
>> as either RAM or EMPTY. Allow the VMM to control this before the guest
>> has started and provide the equivalent functions to change this (with
>> the guest's approval) at runtime.
>>
>> When transitioning from RIPAS RAM (1) to RIPAS EMPTY (0) the memory is
>> unmapped from the guest and undelegated allowing the memory to be reused
>> by the host. When transitioning to RIPAS RAM the actual population of
>> the leaf RTTs is done later on stage 2 fault, however it may be
>> necessary to allocate additional RTTs to represent the range requested.
>
> minor nit: To give a bit more context:
>
> "however it may be necessary to allocate additional RTTs in order for
> the RMM to track the RIPAS for the requested range".
>
>>
>> When freeing a block mapping it is necessary to temporarily unfold the
>> RTT which requires delegating an extra page to the RMM, this page can
>> then be recovered once the contents of the block mapping have been
>> freed. A spare, delegated page (spare_page) is used for this purpose.
>>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> arch/arm64/include/asm/kvm_rme.h | 16 ++
>> arch/arm64/kvm/mmu.c | 8 +-
>> arch/arm64/kvm/rme.c | 390 +++++++++++++++++++++++++++++++
>> 3 files changed, 411 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/kvm_rme.h
>> b/arch/arm64/include/asm/kvm_rme.h
>> index 915e76068b00..cc8f81cfc3c0 100644
>> --- a/arch/arm64/include/asm/kvm_rme.h
>> +++ b/arch/arm64/include/asm/kvm_rme.h
>> @@ -96,6 +96,14 @@ void kvm_realm_destroy_rtts(struct kvm *kvm, u32
>> ia_bits);
>> int kvm_create_rec(struct kvm_vcpu *vcpu);
>> void kvm_destroy_rec(struct kvm_vcpu *vcpu);
>> +void kvm_realm_unmap_range(struct kvm *kvm,
>> + unsigned long ipa,
>> + u64 size,
>> + bool unmap_private);
>> +int realm_set_ipa_state(struct kvm_vcpu *vcpu,
>> + unsigned long addr, unsigned long end,
>> + unsigned long ripas);
>> +
>> #define RME_RTT_BLOCK_LEVEL 2
>> #define RME_RTT_MAX_LEVEL 3
>> @@ -114,4 +122,12 @@ static inline unsigned long
>> rme_rtt_level_mapsize(int level)
>> return (1UL << RME_RTT_LEVEL_SHIFT(level));
>> }
>> +static inline bool realm_is_addr_protected(struct realm *realm,
>> + unsigned long addr)
>> +{
>> + unsigned int ia_bits = realm->ia_bits;
>> +
>> + return !(addr & ~(BIT(ia_bits - 1) - 1));
>> +}
>> +
>> #endif
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index 46f0c4e80ace..8a7b5449697f 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -310,6 +310,7 @@ static void invalidate_icache_guest_page(void *va,
>> size_t size)
>> * @start: The intermediate physical base address of the range to unmap
>> * @size: The size of the area to unmap
>> * @may_block: Whether or not we are permitted to block
>> + * @only_shared: If true then protected mappings should not be unmapped
>> *
>> * Clear a range of stage-2 mappings, lowering the various
>> ref-counts. Must
>> * be called while holding mmu_lock (unless for freeing the stage2
>> pgd before
>> @@ -317,7 +318,7 @@ static void invalidate_icache_guest_page(void *va,
>> size_t size)
>> * with things behind our backs.
>> */
>> static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t
>> start, u64 size,
>> - bool may_block)
>> + bool may_block, bool only_shared)
>> {
>> struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
>> phys_addr_t end = start + size;
>> @@ -330,7 +331,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu
>> *mmu, phys_addr_t start, u64
>> static void unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t
>> start, u64 size)
>> {
>> - __unmap_stage2_range(mmu, start, size, true);
>> + __unmap_stage2_range(mmu, start, size, true, false);
>> }
>> static void stage2_flush_memslot(struct kvm *kvm,
>> @@ -1771,7 +1772,8 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct
>> kvm_gfn_range *range)
>> __unmap_stage2_range(&kvm->arch.mmu, range->start << PAGE_SHIFT,
>> (range->end - range->start) << PAGE_SHIFT,
>> - range->may_block);
>> + range->may_block,
>> + range->only_shared);
>> return false;
>> }
>> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
>> index 629a095bea61..9e5983c51393 100644
>> --- a/arch/arm64/kvm/rme.c
>> +++ b/arch/arm64/kvm/rme.c
>> @@ -79,6 +79,12 @@ static phys_addr_t __alloc_delegated_page(struct
>> realm *realm,
>> return phys;
>> }
>> +static phys_addr_t alloc_delegated_page(struct realm *realm,
>> + struct kvm_mmu_memory_cache *mc)
>> +{
>> + return __alloc_delegated_page(realm, mc, GFP_KERNEL);
>> +}
>> +
>> static void free_delegated_page(struct realm *realm, phys_addr_t phys)
>> {
>> if (realm->spare_page == PHYS_ADDR_MAX) {
>> @@ -94,6 +100,151 @@ static void free_delegated_page(struct realm
>> *realm, phys_addr_t phys)
>> free_page((unsigned long)phys_to_virt(phys));
>> }
>> +static int realm_rtt_create(struct realm *realm,
>> + unsigned long addr,
>> + int level,
>> + phys_addr_t phys)
>> +{
>> + addr = ALIGN_DOWN(addr, rme_rtt_level_mapsize(level - 1));
>> + return rmi_rtt_create(virt_to_phys(realm->rd), phys, addr, level);
>> +}
>> +
>> +static int realm_rtt_fold(struct realm *realm,
>> + unsigned long addr,
>> + int level,
>> + phys_addr_t *rtt_granule)
>> +{
>> + unsigned long out_rtt;
>> + int ret;
>> +
>> + ret = rmi_rtt_fold(virt_to_phys(realm->rd), addr, level, &out_rtt);
>> +
>> + if (RMI_RETURN_STATUS(ret) == RMI_SUCCESS && rtt_granule)
>> + *rtt_granule = out_rtt;
>> +
>> + return ret;
>> +}
>> +
>> +static int realm_destroy_protected(struct realm *realm,
>> + unsigned long ipa,
>> + unsigned long *next_addr)
>> +{
>> + unsigned long rd = virt_to_phys(realm->rd);
>> + unsigned long addr;
>> + phys_addr_t rtt;
>> + int ret;
>> +
>> +loop:
>> + ret = rmi_data_destroy(rd, ipa, &addr, next_addr);
>> + if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
>> + if (*next_addr > ipa)
>> + return 0; /* UNASSIGNED */
>> + rtt = alloc_delegated_page(realm, NULL);
>> + if (WARN_ON(rtt == PHYS_ADDR_MAX))
>> + return -1;
>> + /* ASSIGNED - ipa is mapped as a block, so split */
>> + ret = realm_rtt_create(realm, ipa,
>> + RMI_RETURN_INDEX(ret) + 1, rtt);
>
> Could we not go all the way to L3 (rather than 1 level deeper) and try
> again? That way, we are covered for block mappings at L1 (1G). (See the
> sketch below this function.)
>
>> + if (WARN_ON(ret)) {
>> + free_delegated_page(realm, rtt);
>> + return -1;
>> + }
>> + /* retry */
>> + goto loop;
>> + } else if (WARN_ON(ret)) {
>> + return -1;
>> + }
>> + ret = rmi_granule_undelegate(addr);
>> +
>> + /*
>> + * If the undelegate fails then something has gone seriously
>> + * wrong: take an extra reference to just leak the page
>> + */
>> + if (WARN_ON(ret))
>> + get_page(phys_to_page(addr));
>> +
>> + return 0;
>> +}
>> +
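
For the record, a rough sketch of what I had in mind above (untested).
realm_create_rtt_levels() from later in this patch allocates the
intermediate RTTs itself, so the explicit alloc_delegated_page() here
would go away:

		/* ASSIGNED - ipa is mapped as a block, so split all the way down */
		ret = realm_create_rtt_levels(realm, ipa,
					      RMI_RETURN_INDEX(ret),
					      RME_RTT_MAX_LEVEL, NULL);
		if (WARN_ON(ret))
			return -1;
		/* retry */
		goto loop;
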
>> +static void realm_unmap_range_shared(struct kvm *kvm,
>> + int level,
>> + unsigned long start,
>> + unsigned long end)
>> +{
>> + struct realm *realm = &kvm->arch.realm;
>> + unsigned long rd = virt_to_phys(realm->rd);
>> + ssize_t map_size = rme_rtt_level_mapsize(level);
>> + unsigned long next_addr, addr;
>> + unsigned long shared_bit = BIT(realm->ia_bits - 1);
>> +
>> + if (WARN_ON(level > RME_RTT_MAX_LEVEL))
>> + return;
>> +
>> + start |= shared_bit;
>> + end |= shared_bit;
>> +
>> + for (addr = start; addr < end; addr = next_addr) {
>> + unsigned long align_addr = ALIGN(addr, map_size);
>> + int ret;
>> +
>> + next_addr = ALIGN(addr + 1, map_size);
>> +
>> + if (align_addr != addr || next_addr > end) {
>> + /* Need to recurse deeper */
>> + if (addr < align_addr)
>> + next_addr = align_addr;
>> + realm_unmap_range_shared(kvm, level + 1, addr,
>> + min(next_addr, end));
>> + continue;
>> + }
>> +
>> + ret = rmi_rtt_unmap_unprotected(rd, addr, level, &next_addr);
>
> minor nit: We could potentially use rmi_rtt_destroy() to tear down
> shared mappings without unmapping them individually, if the range
> is big enough. All such optimisations could come later though.
>
>> + switch (RMI_RETURN_STATUS(ret)) {
>> + case RMI_SUCCESS:
>> + break;
>> + case RMI_ERROR_RTT:
>> + if (next_addr == addr) {
>
> At this point we have a block-aligned address, but the mapping is at a
> deeper level. Given that we walk from the top level down, we implicitly
> handle the case of block mappings. Not sure if that needs to be in a
> comment here; if so, maybe something like the sketch below this function.
>
>> + next_addr = ALIGN(addr + 1, map_size);
>
> Reset to the "actual next" as it was overwritten by the RMI call.
>
>> + realm_unmap_range_shared(kvm, level + 1, addr,
>> + next_addr);
>> + }
>> + break;
>> + default:
>> + WARN_ON(1);
>> + }
>> + }
>> +}
>> +
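
If we do want a comment there, maybe something along these lines (wording
is only a suggestion):

		/*
		 * The address is block-aligned at this level but the range
		 * is mapped at a deeper level, so recurse to unmap the
		 * smaller entries. Since we walk from the top level down,
		 * block mappings are handled implicitly.
		 */
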
>> +static void realm_unmap_range_private(struct kvm *kvm,
>> + unsigned long start,
>> + unsigned long end)
>> +{
>> + struct realm *realm = &kvm->arch.realm;
>> + ssize_t map_size = RME_PAGE_SIZE;
>> + unsigned long next_addr, addr;
>> +
>> + for (addr = start; addr < end; addr = next_addr) {
>> + int ret;
>> +
>> + next_addr = ALIGN(addr + 1, map_size);
>> +
>> + ret = realm_destroy_protected(realm, addr, &next_addr);
>> +
>> + if (WARN_ON(ret))
>> + break;
>> + }
>> +}
>> +
>> +static void realm_unmap_range(struct kvm *kvm,
>> + unsigned long start,
>> + unsigned long end,
>> + bool unmap_private)
>> +{
>> + realm_unmap_range_shared(kvm, RME_RTT_MAX_LEVEL - 1, start, end);
>
> minor nit: We already have a helper to find a suitable start level
> (defined below), maybe we could use that (see the one-liner below this
> function)? And we could even do the rtt_destroy optimisation for the
> unprotected range.
>
>> + if (unmap_private)
>> + realm_unmap_range_private(kvm, start, end);
>> +}
>> +
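
i.e. roughly (assuming get_start_level(), used further down in this patch,
is the helper I was referring to):

		realm_unmap_range_shared(kvm, get_start_level(realm), start, end);
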
>> u32 kvm_realm_ipa_limit(void)
>> {
>> return u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_S2SZ);
>> @@ -190,6 +341,30 @@ static int realm_rtt_destroy(struct realm *realm,
>> unsigned long addr,
>> return ret;
>> }
>> +static int realm_create_rtt_levels(struct realm *realm,
>> + unsigned long ipa,
>> + int level,
>> + int max_level,
>> + struct kvm_mmu_memory_cache *mc)
>> +{
>> + if (WARN_ON(level == max_level))
>> + return 0;
>> +
>> + while (level++ < max_level) {
>> + phys_addr_t rtt = alloc_delegated_page(realm, mc);
>> +
>> + if (rtt == PHYS_ADDR_MAX)
>> + return -ENOMEM;
>> +
>> + if (realm_rtt_create(realm, ipa, level, rtt)) {
>> + free_delegated_page(realm, rtt);
>> + return -ENXIO;
>> + }
>> + }
>> +
>> + return 0;
>> +}
>> +
>> static int realm_tear_down_rtt_level(struct realm *realm, int level,
>> unsigned long start, unsigned long end)
>> {
>> @@ -265,6 +440,68 @@ static int realm_tear_down_rtt_range(struct realm
>> *realm,
>> start, end);
>> }
>> +/*
>> + * Returns 0 on successful fold, a negative value on error, a
>> positive value if
>> + * we were not able to fold all tables at this level.
>> + */
>> +static int realm_fold_rtt_level(struct realm *realm, int level,
>> + unsigned long start, unsigned long end)
>> +{
>> + int not_folded = 0;
>> + ssize_t map_size;
>> + unsigned long addr, next_addr;
>> +
>> + if (WARN_ON(level > RME_RTT_MAX_LEVEL))
>> + return -EINVAL;
>> +
>> + map_size = rme_rtt_level_mapsize(level - 1);
>> +
>> + for (addr = start; addr < end; addr = next_addr) {
>> + phys_addr_t rtt_granule;
>> + int ret;
>> + unsigned long align_addr = ALIGN(addr, map_size);
>> +
>> + next_addr = ALIGN(addr + 1, map_size);
>> +
>> + ret = realm_rtt_fold(realm, align_addr, level, &rtt_granule);
>> +
>> + switch (RMI_RETURN_STATUS(ret)) {
>> + case RMI_SUCCESS:
>> + if (!WARN_ON(rmi_granule_undelegate(rtt_granule)))
>> + free_page((unsigned long)phys_to_virt(rtt_granule));
>
> minor nit: Do we need a wrapper function for things like this, and for
> leaking the page if undelegate fails? Something like
> rme_reclaim_delegated_page(), sketched below this function.
>
>
>> + break;
>> + case RMI_ERROR_RTT:
>> + if (level == RME_RTT_MAX_LEVEL ||
>> + RMI_RETURN_INDEX(ret) < level) {
>> + not_folded++;
>> + break;
>> + }
>> + /* Recurse a level deeper */
>> + ret = realm_fold_rtt_level(realm,
>> + level + 1,
>> + addr,
>> + next_addr);
>> + if (ret < 0)
>> + return ret;
>> + else if (ret == 0)
>> + /* Try again at this level */
>> + next_addr = addr;
>> + break;
>> + default:
>> + return -ENXIO;
>> + }
>> + }
>> +
>> + return not_folded;
>> +}
>> +
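
Something like the below, based on the pattern already used elsewhere in
this patch (untested sketch):

	static void rme_reclaim_delegated_page(phys_addr_t phys)
	{
		/*
		 * If the undelegate fails then something has gone seriously
		 * wrong: take an extra reference to leak the page rather
		 * than handing it back to the host.
		 */
		if (WARN_ON(rmi_granule_undelegate(phys)))
			get_page(phys_to_page(phys));
		else
			free_page((unsigned long)phys_to_virt(phys));
	}
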
>> +static int realm_fold_rtt_range(struct realm *realm,
>> + unsigned long start, unsigned long end)
>> +{
>> + return realm_fold_rtt_level(realm, get_start_level(realm) + 1,
>> + start, end);
>> +}
>> +
>> static void ensure_spare_page(struct realm *realm)
>> {
>> phys_addr_t tmp_rtt;
>> @@ -295,6 +532,147 @@ void kvm_realm_destroy_rtts(struct kvm *kvm, u32
>> ia_bits)
>> WARN_ON(realm_tear_down_rtt_range(realm, 0, (1UL << ia_bits)));
>> }
>> +void kvm_realm_unmap_range(struct kvm *kvm, unsigned long ipa, u64 size,
>> + bool unmap_private)
>> +{
>> + unsigned long end = ipa + size;
>> + struct realm *realm = &kvm->arch.realm;
>> +
>> + end = min(BIT(realm->ia_bits - 1), end);
>> +
>> + ensure_spare_page(realm);
>> +
>> + realm_unmap_range(kvm, ipa, end, unmap_private);
>> +
>> + realm_fold_rtt_range(realm, ipa, end);
>
> Shouldn't this be:
>
> 	if (unmap_private)
> 		realm_fold_rtt_range(realm, ipa, end);
>
> Also, it is fine to reclaim RTTs from the protected space, not the
> unprotected half, as long as we use RTT_DESTROY in the unmap_shared routine.
Thinking about this a bit more, we could:

1. Rename this to realm_reclaim_rtts_range().

2. Use "FOLD" vs "DESTROY" depending on the state of the Realm: if the
   realm is DYING (or we add a state in kvm_pgtable_stage2_destroy() to
   indicate that stage2 can now be "destroyed"), use DESTROY wherever it
   is safe to do so.
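
A rough, untested sketch of what I mean; the "dying" predicate is only
illustrative, and realm_tear_down_rtt_range() stands in for whatever the
DESTROY-based reclaim ends up being called:

	static int realm_reclaim_rtts_range(struct realm *realm,
					    unsigned long start, unsigned long end)
	{
		/*
		 * Once nothing can be mapped back into this range (e.g. the
		 * realm or its stage2 is being destroyed), RTT_DESTROY is
		 * safe; otherwise only fold back what has become homogeneous.
		 */
		if (kvm_realm_is_dying(realm))	/* illustrative predicate */
			return realm_tear_down_rtt_range(realm, start, end);

		return realm_fold_rtt_level(realm, get_start_level(realm) + 1,
					    start, end);
	}
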
Suzuki