From: Alexandru Elisei <alexandru.elisei@arm.com>
To: Reiji Watanabe <reijiw@google.com>
Cc: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
linux-arm-kernel@lists.infradead.org,
kvmarm@lists.cs.columbia.edu, will@kernel.org,
mark.rutland@arm.com
Subject: Re: [RFC PATCH v5 08/38] KVM: arm64: Unlock memslots after stage 2 tables are freed
Date: Mon, 21 Mar 2022 17:29:14 +0000 [thread overview]
Message-ID: <Yji1z3Q9cd8njbwL@monolith.localdoman> (raw)
In-Reply-To: <0ce1011b-6e77-f00c-6d19-5b5394e0d0c2@google.com>
Hi,
On Thu, Mar 17, 2022 at 10:19:56PM -0700, Reiji Watanabe wrote:
> Hi Alex,
>
> On 11/17/21 7:38 AM, Alexandru Elisei wrote:
> > Unpin the backing pages mapped at stage 2 after the stage 2 translation
> > tables are destroyed.
> >
> > Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
> > ---
> > arch/arm64/kvm/mmu.c | 23 ++++++++++++++++++-----
> > 1 file changed, 18 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index cd6f1bc7842d..072e2aba371f 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -1627,11 +1627,19 @@ int kvm_mmu_lock_memslot(struct kvm *kvm, u64 slot, u64 flags)
> > return ret;
> > }
> > -static void unlock_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot)
> > +static void __unlock_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot)
> > {
> > bool writable = memslot->arch.flags & KVM_MEMSLOT_LOCK_WRITE;
> > unsigned long npages = memslot->npages;
> > + unpin_memslot_pages(memslot, writable);
> > + account_locked_vm(current->mm, npages, false);
> > +
> > + memslot->arch.flags &= ~KVM_MEMSLOT_LOCK_MASK;
> > +}
> > +
> > +static void unlock_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot)
> > +{
> > /*
> > * MMU maintenance operations aren't performed on an unlocked memslot.
> > * Unmap it from stage 2 so the abort handler performs the necessary
> > @@ -1640,10 +1648,7 @@ static void unlock_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot)
> > if (kvm_mmu_has_pending_ops(kvm))
> > kvm_arch_flush_shadow_memslot(kvm, memslot);
> > - unpin_memslot_pages(memslot, writable);
> > - account_locked_vm(current->mm, npages, false);
> > -
> > - memslot->arch.flags &= ~KVM_MEMSLOT_LOCK_MASK;
> > + __unlock_memslot(kvm, memslot);
> > }
> > int kvm_mmu_unlock_memslot(struct kvm *kvm, u64 slot, u64 flags)
> > @@ -1951,7 +1956,15 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
> > void kvm_arch_flush_shadow_all(struct kvm *kvm)
> > {
> > + struct kvm_memory_slot *memslot;
> > +
> > kvm_free_stage2_pgd(&kvm->arch.mmu);
> > +
> > + kvm_for_each_memslot(memslot, kvm_memslots(kvm)) {
> > + if (!memslot_is_locked(memslot))
> > + continue;
> > + __unlock_memslot(kvm, memslot);
> > + }
> > }
>
> Perhaps it might be useful to manage the number of locked memslots ?
> (can be used in the fix for kvm_mmu_unlock_memslot in the patch-7 as well)
I don't think keeping a count is very useful: we usually want to find all
the locked memslots anyway, and there's no guarantee that they sit at the
start of the list, which is the only case where a count would save us from
iterating over the remaining memslots.

In the case above, the walk happens when the VM is being destroyed, which
is not particularly performance-sensitive, and a few extra linked-list
accesses certainly won't make much of a difference.
In patch #7, KVM iterates through the memslots and calls
kvm_arch_flush_shadow_memslot(), which is several orders of magnitude
slower than iterating through a few extra memslots. Also, I don't think
userspace locking and then unlocking a memslot before running any VCPUs is
something that will happen very often.
Thanks,
Alex
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel