From: Gavin Shan <gshan@redhat.com>
To: Steven Price <steven.price@arm.com>,
	kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Catalin Marinas <catalin.marinas@arm.com>,
	Marc Zyngier <maz@kernel.org>, Will Deacon <will@kernel.org>,
	James Morse <james.morse@arm.com>,
	Oliver Upton <oliver.upton@linux.dev>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Zenghui Yu <yuzenghui@huawei.com>,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, Joey Gouly <joey.gouly@arm.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	Christoffer Dall <christoffer.dall@arm.com>,
	Fuad Tabba <tabba@google.com>,
	linux-coco@lists.linux.dev,
	Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>,
	Shanker Donthineni <sdonthineni@nvidia.com>,
	Alper Gun <alpergun@google.com>,
	"Aneesh Kumar K . V" <aneesh.kumar@kernel.org>
Subject: Re: [PATCH v6 20/43] arm64: RME: Runtime faulting of memory
Date: Thu, 30 Jan 2025 15:22:41 +1000	[thread overview]
Message-ID: <3f0caace-ee05-4ddf-ae75-2157e77aa57c@redhat.com> (raw)
In-Reply-To: <20241212155610.76522-21-steven.price@arm.com>

On 12/13/24 1:55 AM, Steven Price wrote:
> At runtime if the realm guest accesses memory which hasn't yet been
> mapped then KVM needs to either populate the region or fault the guest.
> 
> For memory in the lower (protected) region of IPA a fresh page is
> provided to the RMM which will zero the contents. For memory in the
> upper (shared) region of IPA, the memory from the memslot is mapped
> into the realm VM non secure.
> 
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> Changes since v5:
>   * Reduce use of struct page in preparation for supporting the RMM
>     having a different page size to the host.
>   * Handle a race when delegating a page where another CPU has faulted on
>     a the same page (and already delegated the physical page) but not yet
>     mapped it. In this case simply return to the guest to either use the
>     mapping from the other CPU (or refault if the race is lost).
>   * The changes to populate_par_region() are moved into the previous
>     patch where they belong.
> Changes since v4:
>   * Code cleanup following review feedback.
>   * Drop the PTE_SHARED bit when creating unprotected page table entries.
>     This is now set by the RMM and the host has no control of it and the
>     spec requires the bit to be set to zero.
> Changes since v2:
>   * Avoid leaking memory if failing to map it in the realm.
>   * Correctly mask RTT based on LPA2 flag (see rtt_get_phys()).
>   * Adapt to changes in previous patches.
> ---
>   arch/arm64/include/asm/kvm_emulate.h |  10 ++
>   arch/arm64/include/asm/kvm_rme.h     |  10 ++
>   arch/arm64/kvm/mmu.c                 | 124 +++++++++++++++++++--
>   arch/arm64/kvm/rme.c                 | 156 +++++++++++++++++++++++++++
>   4 files changed, 293 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index ec2b6d9c9c07..b13e367b6972 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -720,6 +720,16 @@ static inline bool kvm_realm_is_created(struct kvm *kvm)
>   	return kvm_is_realm(kvm) && kvm_realm_state(kvm) != REALM_STATE_NONE;
>   }
>   
> +static inline gpa_t kvm_gpa_from_fault(struct kvm *kvm, phys_addr_t fault_ipa)
> +{
> +	if (kvm_is_realm(kvm)) {
> +		struct realm *realm = &kvm->arch.realm;
> +
> +		return fault_ipa & ~BIT(realm->ia_bits - 1);
> +	}
> +	return fault_ipa;
> +}
> +

'fault' already appears in both 'kvm_gpa_from_fault' and 'fault_ipa'. To avoid
the duplication, 'fault_ipa' could be renamed to 'ipa'.
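
For example (purely a parameter rename, no functional change):

	static inline gpa_t kvm_gpa_from_fault(struct kvm *kvm, phys_addr_t ipa)
	{
		if (kvm_is_realm(kvm)) {
			struct realm *realm = &kvm->arch.realm;

			return ipa & ~BIT(realm->ia_bits - 1);
		}
		return ipa;
	}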

>   static inline bool vcpu_is_rec(struct kvm_vcpu *vcpu)
>   {
>   	if (static_branch_unlikely(&kvm_rme_is_available))
> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
> index 0410650cd545..158f77e24a26 100644
> --- a/arch/arm64/include/asm/kvm_rme.h
> +++ b/arch/arm64/include/asm/kvm_rme.h
> @@ -99,6 +99,16 @@ void kvm_realm_unmap_range(struct kvm *kvm,
>   			   unsigned long ipa,
>   			   u64 size,
>   			   bool unmap_private);
> +int realm_map_protected(struct realm *realm,
> +			unsigned long base_ipa,
> +			kvm_pfn_t pfn,
> +			unsigned long size,
> +			struct kvm_mmu_memory_cache *memcache);
> +int realm_map_non_secure(struct realm *realm,
> +			 unsigned long ipa,
> +			 kvm_pfn_t pfn,
> +			 unsigned long size,
> +			 struct kvm_mmu_memory_cache *memcache);
>   int realm_set_ipa_state(struct kvm_vcpu *vcpu,
>   			unsigned long addr, unsigned long end,
>   			unsigned long ripas,
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index b100d4b3aa29..e88714903ce5 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -325,8 +325,13 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
>   
>   	lockdep_assert_held_write(&kvm->mmu_lock);
>   	WARN_ON(size & ~PAGE_MASK);
> -	WARN_ON(stage2_apply_range(mmu, start, end, kvm_pgtable_stage2_unmap,
> -				   may_block));
> +
> +	if (kvm_is_realm(kvm))
> +		kvm_realm_unmap_range(kvm, start, size, !only_shared);
> +	else
> +		WARN_ON(stage2_apply_range(mmu, start, end,
> +					   kvm_pgtable_stage2_unmap,
> +					   may_block));
>   }
>   
>   void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
> @@ -346,7 +351,10 @@ static void stage2_flush_memslot(struct kvm *kvm,
>   	phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
>   	phys_addr_t end = addr + PAGE_SIZE * memslot->npages;
>   
> -	kvm_stage2_flush_range(&kvm->arch.mmu, addr, end);
> +	if (kvm_is_realm(kvm))
> +		kvm_realm_unmap_range(kvm, addr, end - addr, false);
> +	else
> +		kvm_stage2_flush_range(&kvm->arch.mmu, addr, end);
>   }
>   
>   /**
> @@ -1037,6 +1045,10 @@ void stage2_unmap_vm(struct kvm *kvm)
>   	struct kvm_memory_slot *memslot;
>   	int idx, bkt;
>   
> +	/* For realms this is handled by the RMM so nothing to do here */
> +	if (kvm_is_realm(kvm))
> +		return;
> +
>   	idx = srcu_read_lock(&kvm->srcu);
>   	mmap_read_lock(current->mm);
>   	write_lock(&kvm->mmu_lock);
> @@ -1062,6 +1074,7 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
>   	if (kvm_is_realm(kvm) &&
>   	    (kvm_realm_state(kvm) != REALM_STATE_DEAD &&
>   	     kvm_realm_state(kvm) != REALM_STATE_NONE)) {
> +		kvm_stage2_unmap_range(mmu, 0, (~0ULL) & PAGE_MASK, false);
>   		write_unlock(&kvm->mmu_lock);
>   		kvm_realm_destroy_rtts(kvm, pgt->ia_bits);
>   
> @@ -1446,6 +1459,76 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
>   	return vma->vm_flags & VM_MTE_ALLOWED;
>   }
>   
> +static int realm_map_ipa(struct kvm *kvm, phys_addr_t ipa,
> +			 kvm_pfn_t pfn, unsigned long map_size,
> +			 enum kvm_pgtable_prot prot,
> +			 struct kvm_mmu_memory_cache *memcache)
> +{
> +	struct realm *realm = &kvm->arch.realm;
> +
> +	if (WARN_ON(!(prot & KVM_PGTABLE_PROT_W)))
> +		return -EFAULT;
> +
> +	if (!realm_is_addr_protected(realm, ipa))
> +		return realm_map_non_secure(realm, ipa, pfn, map_size,
> +					    memcache);
> +
> +	return realm_map_protected(realm, ipa, pfn, map_size, memcache);
> +}
> +
> +static int private_memslot_fault(struct kvm_vcpu *vcpu,
> +				 phys_addr_t fault_ipa,
> +				 struct kvm_memory_slot *memslot)
> +{
> +	struct kvm *kvm = vcpu->kvm;
> +	gpa_t gpa = kvm_gpa_from_fault(kvm, fault_ipa);
> +	gfn_t gfn = gpa >> PAGE_SHIFT;
> +	bool priv_exists = kvm_mem_is_private(kvm, gfn);
> +	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
> +	struct page *page;
> +	kvm_pfn_t pfn;
> +	int ret;
> +	/*
> +	 * For Realms, the shared address is an alias of the private GPA with
> +	 * the top bit set. Thus is the fault address matches the GPA then it
> +	 * is the private alias.
> +	 */
> +	bool is_priv_gfn = (gpa == fault_ipa);
> +

We could rename 'priv_exists' to 'was_priv_gfn' for consistency with
'is_priv_gfn'. Alternatively, we could use 'was_private' and 'is_private'.
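
For instance (naming only, the logic stays the same), with the checks
below updated to match:

	bool was_private = kvm_mem_is_private(kvm, gfn);
	bool is_private = (gpa == fault_ipa);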

> +	if (priv_exists != is_priv_gfn) {
> +		kvm_prepare_memory_fault_exit(vcpu,
> +					      gpa,
> +					      PAGE_SIZE,
> +					      kvm_is_write_fault(vcpu),
> +					      false, is_priv_gfn);
> +
> +		return -EFAULT;
> +	}
> +
> +	if (!is_priv_gfn) {
> +		/* Not a private mapping, handling normally */
> +		return -EINVAL;
> +	}
> +
> +	ret = kvm_mmu_topup_memory_cache(memcache,
> +					 kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu));
> +	if (ret)
> +		return ret;
> +
> +	ret = kvm_gmem_get_pfn(kvm, memslot, gfn, &pfn, &page, NULL);
> +	if (ret)
> +		return ret;
> +
> +	/* FIXME: Should be able to use bigger than PAGE_SIZE mappings */
> +	ret = realm_map_ipa(kvm, fault_ipa, pfn, PAGE_SIZE, KVM_PGTABLE_PROT_W,
> +			    memcache);
> +	if (!ret)
> +		return 1; /* Handled */
> +
> +	put_page(page);
> +	return ret;
> +}
> +
>   static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>   			  struct kvm_s2_trans *nested,
>   			  struct kvm_memory_slot *memslot, unsigned long hva,
> @@ -1472,6 +1555,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>   	if (fault_is_perm)
>   		fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
>   	write_fault = kvm_is_write_fault(vcpu);
> +
> +	/*
> +	 * Realms cannot map protected pages read-only
> +	 * FIXME: It should be possible to map unprotected pages read-only
> +	 */
> +	if (vcpu_is_rec(vcpu))
> +		write_fault = true;
> +
>   	exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
>   	VM_BUG_ON(write_fault && exec_fault);
>   
> @@ -1579,7 +1670,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>   		ipa &= ~(vma_pagesize - 1);
>   	}
>   
> -	gfn = ipa >> PAGE_SHIFT;
> +	gfn = kvm_gpa_from_fault(kvm, ipa) >> PAGE_SHIFT;
>   	mte_allowed = kvm_vma_mte_allowed(vma);
>   
>   	vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
> @@ -1660,7 +1751,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>   	 * If we are not forced to use page mapping, check if we are
>   	 * backed by a THP and thus use block mapping if possible.
>   	 */
> -	if (vma_pagesize == PAGE_SIZE && !(force_pte || device)) {
> +	/* FIXME: We shouldn't need to disable this for realms */
> +	if (vma_pagesize == PAGE_SIZE && !(force_pte || device || kvm_is_realm(kvm))) {
>   		if (fault_is_perm && fault_granule > PAGE_SIZE)
>   			vma_pagesize = fault_granule;
>   		else
> @@ -1712,6 +1804,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>   		 */
>   		prot &= ~KVM_NV_GUEST_MAP_SZ;
>   		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot);
> +	} else if (kvm_is_realm(kvm)) {
> +		ret = realm_map_ipa(kvm, fault_ipa, pfn, vma_pagesize,
> +				    prot, memcache);
>   	} else {
>   		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
>   					     __pfn_to_phys(pfn), prot,
> @@ -1854,8 +1949,15 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
>   		nested = &nested_trans;
>   	}
>   
> -	gfn = ipa >> PAGE_SHIFT;
> +	gfn = kvm_gpa_from_fault(vcpu->kvm, ipa) >> PAGE_SHIFT;
>   	memslot = gfn_to_memslot(vcpu->kvm, gfn);
> +
> +	if (kvm_slot_can_be_private(memslot)) {
> +		ret = private_memslot_fault(vcpu, ipa, memslot);
> +		if (ret != -EINVAL)
> +			goto out;
> +	}
> +
>   	hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
>   	write_fault = kvm_is_write_fault(vcpu);
>   	if (kvm_is_error_hva(hva) || (write_fault && !writable)) {
> @@ -1899,7 +2001,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
>   		 * of the page size.
>   		 */
>   		ipa |= kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0);
> -		ret = io_mem_abort(vcpu, ipa);
> +		ret = io_mem_abort(vcpu, kvm_gpa_from_fault(vcpu->kvm, ipa));
>   		goto out_unlock;
>   	}
>   
> @@ -1947,6 +2049,10 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>   	if (!kvm->arch.mmu.pgt)
>   		return false;
>   
> +	/* We don't support aging for Realms */
> +	if (kvm_is_realm(kvm))
> +		return true;
> +
>   	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
>   						   range->start << PAGE_SHIFT,
>   						   size, true);
> @@ -1963,6 +2069,10 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>   	if (!kvm->arch.mmu.pgt)
>   		return false;
>   
> +	/* We don't support aging for Realms */
> +	if (kvm_is_realm(kvm))
> +		return true;
> +
>   	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
>   						   range->start << PAGE_SHIFT,
>   						   size, false);
> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
> index d4561e368cd5..146ef598a581 100644
> --- a/arch/arm64/kvm/rme.c
> +++ b/arch/arm64/kvm/rme.c
> @@ -602,6 +602,162 @@ static int fold_rtt(struct realm *realm, unsigned long addr, int level)
>   	return 0;
>   }
>   
> +int realm_map_protected(struct realm *realm,
> +			unsigned long ipa,
> +			kvm_pfn_t pfn,
> +			unsigned long map_size,
> +			struct kvm_mmu_memory_cache *memcache)
> +{
> +	phys_addr_t phys = __pfn_to_phys(pfn);
> +	phys_addr_t rd = virt_to_phys(realm->rd);
> +	unsigned long base_ipa = ipa;
> +	unsigned long size;
> +	int map_level;
> +	int ret = 0;
> +
> +	if (WARN_ON(!IS_ALIGNED(ipa, map_size)))
> +		return -EINVAL;
> +
> +	switch (map_size) {
> +	case PAGE_SIZE:
> +		map_level = 3;
> +		break;
> +	case RMM_L2_BLOCK_SIZE:
> +		map_level = 2;
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +

The same block of code, deriving the RTT level from the map size, appears
multiple times. It would be nice to introduce a helper for this.
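
A minimal sketch of such a helper (the name is just a suggestion):

	static int realm_map_size_to_level(unsigned long map_size)
	{
		switch (map_size) {
		case PAGE_SIZE:
			return 3;
		case RMM_L2_BLOCK_SIZE:
			return 2;
		default:
			return -EINVAL;
		}
	}

so that the callers reduce to:

	map_level = realm_map_size_to_level(map_size);
	if (map_level < 0)
		return -EINVAL;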

> +	if (map_level < RMM_RTT_MAX_LEVEL) {
> +		/*
> +		 * A temporary RTT is needed during the map, precreate it,
> +		 * however if there is an error (e.g. missing parent tables)
> +		 * this will be handled below.
> +		 */
> +		realm_create_rtt_levels(realm, ipa, map_level,
> +					RMM_RTT_MAX_LEVEL, memcache);
> +	}
> +

This block of code could be dropped. If the RTTs already exist,
realm_create_rtt_levels() does nothing useful, but several RMI calls are
still issued. RMI calls aren't cheap and this can cause a performance loss.

> +	for (size = 0; size < map_size; size += PAGE_SIZE) {
> +		if (rmi_granule_delegate(phys)) {
> +			/*
> +			 * It's likely we raced with another VCPU on the same
> +			 * fault. Assume the other VCPU has handled the fault
> +			 * and return to the guest.
> +			 */
> +			return 0;
> +		}

We probably can't bail out immediately when an error is returned from
rmi_granule_delegate() because we intend to map a region whose size is
'map_size'. So a 'continue' instead of 'return 0' seems correct to me.
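
Something along these lines (untested), noting that 'phys' and 'ipa' still
need to advance before the 'continue' so the rest of the region is mapped:

	if (rmi_granule_delegate(phys)) {
		/*
		 * Likely raced with another VCPU which has already
		 * delegated (and will map) this granule; skip it and
		 * carry on with the remainder of the region.
		 */
		phys += PAGE_SIZE;
		ipa += PAGE_SIZE;
		continue;
	}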

> +
> +		ret = rmi_data_create_unknown(rd, phys, ipa);
> +
> +		if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
> +			/* Create missing RTTs and retry */
> +			int level = RMI_RETURN_INDEX(ret);
> +
> +			ret = realm_create_rtt_levels(realm, ipa, level,
> +						      RMM_RTT_MAX_LEVEL,
> +						      memcache);
> +			WARN_ON(ret);
> +			if (ret)
> +				goto err_undelegate;

			if (WARN_ON(ret))
> +
> +			ret = rmi_data_create_unknown(rd, phys, ipa);
> +		}
> +		WARN_ON(ret);
> +
> +		if (ret)
> +			goto err_undelegate;

		if (WARN_ON(ret))

> +
> +		phys += PAGE_SIZE;
> +		ipa += PAGE_SIZE;
> +	}
> +
> +	if (map_size == RMM_L2_BLOCK_SIZE)
> +		ret = fold_rtt(realm, base_ipa, map_level);
> +	if (WARN_ON(ret))
> +		goto err;
> +

The nested if statements are needed here so that the WARN_ON() only takes
effect on the return value from fold_rtt():

	if (map_size == RMM_L2_BLOCK_SIZE) {
		ret = fold_rtt(realm, base_ipa, map_level);
		if (WARN_ON(ret))
			goto err;
	}

> +	return 0;
> +
> +err_undelegate:
> +	if (WARN_ON(rmi_granule_undelegate(phys))) {
> +		/* Page can't be returned to NS world so is lost */
> +		get_page(phys_to_page(phys));
> +	}
> +err:
> +	while (size > 0) {
> +		unsigned long data, top;
> +
> +		phys -= PAGE_SIZE;
> +		size -= PAGE_SIZE;
> +		ipa -= PAGE_SIZE;
> +
> +		WARN_ON(rmi_data_destroy(rd, ipa, &data, &top));
> +
> +		if (WARN_ON(rmi_granule_undelegate(phys))) {
> +			/* Page can't be returned to NS world so is lost */
> +			get_page(phys_to_page(phys));
> +		}
> +	}
> +	return -ENXIO;
> +}
> +
> +int realm_map_non_secure(struct realm *realm,
> +			 unsigned long ipa,
> +			 kvm_pfn_t pfn,
> +			 unsigned long map_size,
> +			 struct kvm_mmu_memory_cache *memcache)
> +{
> +	phys_addr_t rd = virt_to_phys(realm->rd);
> +	int map_level;
> +	int ret = 0;
> +	unsigned long desc = __pfn_to_phys(pfn) |
> +			     PTE_S2_MEMATTR(MT_S2_FWB_NORMAL) |
> +			     /* FIXME: Read+Write permissions for now */
> +			     (3 << 6);
> +
> +	if (WARN_ON(!IS_ALIGNED(ipa, map_size)))
> +		return -EINVAL;
> +
> +	switch (map_size) {
> +	case PAGE_SIZE:
> +		map_level = 3;
> +		break;
> +	case RMM_L2_BLOCK_SIZE:
> +		map_level = 2;
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +

As above.

> +	ret = rmi_rtt_map_unprotected(rd, ipa, map_level, desc);
> +
> +	if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
> +		/* Create missing RTTs and retry */
> +		int level = RMI_RETURN_INDEX(ret);
> +
> +		ret = realm_create_rtt_levels(realm, ipa, level, map_level,
> +					      memcache);
> +		if (WARN_ON(ret))
> +			return -ENXIO;
> +
> +		ret = rmi_rtt_map_unprotected(rd, ipa, map_level, desc);
> +	}
> +	/*
> +	 * RMI_ERROR_RTT can be reported for two reasons: either the RTT tables
> +	 * are not there, or there is an RTTE already present for the address.
> +	 * The call to realm_create_rtt_levels() above handles the first case,
> +	 * and in the second case this indicates that another thread has
> +	 * already populated the RTTE for us, so we can ignore the error and
> +	 * continue.
> +	 */
> +	if (WARN_ON(ret && RMI_RETURN_STATUS(ret) != RMI_ERROR_RTT))
> +		return -ENXIO;
> +
> +	return 0;
> +}
> +
>   static int populate_par_region(struct kvm *kvm,
>   			       phys_addr_t ipa_base,
>   			       phys_addr_t ipa_end,

Thanks,
Gavin

