linux-arm-kernel.lists.infradead.org archive mirror
* [RFC] KVM: arm64: improving IO performance during unmap?
@ 2024-03-28 19:04 Krister Johansen
  2024-03-28 19:05 ` [PATCH] KVM: arm64: Limit stage2_apply_range() batch size to smallest block Krister Johansen
  0 siblings, 1 reply; 9+ messages in thread
From: Krister Johansen @ 2024-03-28 19:04 UTC (permalink / raw)
  To: Marc Zyngier, Oliver Upton
  Cc: James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
	Will Deacon, Ali Saidi, David Reaver, linux-arm-kernel, kvmarm,
	linux-kernel

Hi,
Ali and I have been looking into ways to reduce the impact that an
unmap_stage2_range operation has on IO performance when a device interrupt
shares the CPU where the unmap operation is running.

This came to our attention after porting a container VM / hardware-virtualized
containers workload from x86_64 to arm64.  On arm64, the unmap operations took
noticeably longer.  Because kvm_tlb_flush_vmid_ipa runs with interrupts
disabled, an unmap that doesn't check for reschedule promptly can delay
delivery of the IO interrupt.

One approach that we investigated was to modify the deferred TLBI code to run
even when range-based operations were not supported, provided FWB is enabled.
If range-based operations were supported, the code would use them; however, if
the CPU didn't support FEAT_TLBIRANGE or the unmap was larger than a certain
size, we'd fall back to vmalls12e1is instead.  This reduced the impact of the
unmap operation to less than a 5% hit on IO performance.  However, with Will's
recent patches[1] to fix cases where freed PTEs may still be referenced, we
were concerned this might not be a viable approach.

As a follow-up to this e-mail, I'm sending a patch that takes a different
approach.  It shrinks the stage2_apply_range() batch size from the maximum
block size to the minimum block size.  This eliminates the IO performance
regressions, but increases the overall map / unmap operation times when the
CPU is receiving IO interrupts.  I'm unsure whether this is the optimal
solution, since it may generate extra unmap walks on 1GB hugepages.  I'm also
unclear whether this creates problems for any of the other users of
stage2_apply_range().

I'd love to get some feedback on the best way to proceed here.

Thanks,

-K

[1] https://lore.kernel.org/kvmarm/20240325185158.8565-1-will@kernel.org/



Thread overview: 9+ messages (newest: 2024-04-04 21:42 UTC)
2024-03-28 19:04 [RFC] KVM: arm64: improving IO performance during unmap? Krister Johansen
2024-03-28 19:05 ` [PATCH] KVM: arm64: Limit stage2_apply_range() batch size to smallest block Krister Johansen
2024-03-29 13:48   ` Oliver Upton
2024-03-29 19:15     ` Krister Johansen
2024-03-30 10:17       ` Marc Zyngier
2024-04-02 17:00         ` Krister Johansen
2024-04-04  4:40           ` Krister Johansen
2024-04-04 21:27             ` Ali Saidi
2024-04-04 21:41               ` Krister Johansen
