public inbox for kvm@vger.kernel.org
* dirty page tracking in kvm/qemu -- page faults inevitable?
@ 2014-07-29 22:12 Chris Friesen
  2014-07-30  6:09 ` Xiao Guangrong
  0 siblings, 1 reply; 6+ messages in thread
From: Chris Friesen @ 2014-07-29 22:12 UTC (permalink / raw)
  To: avi, mtosatti, kvm

Hi,

I've got an issue where we're hitting major performance penalties
during live migration.  It seems to be caused by page faults
triggering hypervisor exits, after which we get stuck waiting for the
iothread lock, which is held by the qemu dirty page scanning code.

Accordingly, I'm trying to figure out the actual mechanism whereby dirty 
pages are tracked in qemu/kvm.  I've got an Ivy Bridge CPU, a 3.4 kernel 
on the host, and qemu 1.4.

Looking at the qemu code, it seems to be calling down into kvm to get 
the dirty page information.

Looking at kvm, most of the code I've read seems to use the usual trick 
of marking a page read-only and then, when a write to it faults, 
marking the page dirty.

However, I read something about Intel EPT having hardware support for 
tracking dirty pages.  It seems like this might avoid the need for a 
page fault, but might only be available on Haswell or later CPUs--is 
that correct?  Is it supported in kvm?  If so, when was support added?

Thanks,
Chris

P.S.  Please CC me on reply, I'm not subscribed to the list.



Thread overview: 6+ messages
-- links below jump to the message on this page --
2014-07-29 22:12 dirty page tracking in kvm/qemu -- page faults inevitable? Chris Friesen
2014-07-30  6:09 ` Xiao Guangrong
2014-07-30  7:41   ` Chris Friesen
2014-07-30 15:42     ` Paolo Bonzini
2014-07-30 16:02       ` Chris Friesen
2014-07-30 16:18         ` Paolo Bonzini
