From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
Subject: Re: [PATCH] kvm tools: Add MMIO coalescing support
Date: Sat, 04 Jun 2011 13:28:38 +0300
Message-ID: <1307183318.7239.6.camel@lappy>
References: <1307130668-5652-1-git-send-email-levinsasha928@gmail.com>
 <20110604093847.GB14524@elte.hu>
 <1307182441.7239.2.camel@lappy>
 <20110604101711.GB16292@elte.hu>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Cc: penberg@kernel.org, kvm@vger.kernel.org, asias.hejun@gmail.com,
 gorcunov@gmail.com, prasadjoshi124@gmail.com
To: Ingo Molnar
Return-path:
Received: from mail-ww0-f44.google.com ([74.125.82.44]:34699 "EHLO
 mail-ww0-f44.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
 with ESMTP id S1753359Ab1FDK2o (ORCPT );
 Sat, 4 Jun 2011 06:28:44 -0400
Received: by wwa36 with SMTP id 36so2402611wwa.1 for ;
 Sat, 04 Jun 2011 03:28:43 -0700 (PDT)
In-Reply-To: <20110604101711.GB16292@elte.hu>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Sat, 2011-06-04 at 12:17 +0200, Ingo Molnar wrote:
> * Sasha Levin wrote:
>
> > On Sat, 2011-06-04 at 11:38 +0200, Ingo Molnar wrote:
> > > * Sasha Levin wrote:
> > >
> > > > Coalescing MMIO allows us to avoid an exit every time we have an
> > > > MMIO write; instead, MMIO writes are coalesced in a ring which
> > > > can be flushed once an exit for a different reason is needed.
> > > > An MMIO exit is also triggered once the ring is full.
> > > >
> > > > Coalesce all MMIO regions registered in the MMIO mapper.
> > > > Add a coalescing handler under kvm_cpu.
> > >
> > > Does this have any effect on latency? I.e. does the guest side
> > > guarantee that the pending queue will be flushed after a group of
> > > updates have been done?
>
> > There's nothing that detects groups of MMIO writes, but the ring
> > size is a bit less than PAGE_SIZE (half of it is overhead, the rest
> > is data) and we'll exit once the ring is full.
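For reference, here is roughly what that ring and the userspace flush loop look like - a sketch only, mirroring the struct layout documented in <linux/kvm.h> (in the real API the ring is a page mmap()ed from the vcpu fd at KVM_COALESCED_MMIO_PAGE_OFFSET * PAGE_SIZE; coalesced_mmio_flush() and its handler callback here are hypothetical names, not necessarily what the kvm tools handler is called):

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of the uapi layout from <linux/kvm.h>; SKETCH_PAGE_SIZE stands
 * in for the host PAGE_SIZE. */
#define SKETCH_PAGE_SIZE 4096

struct kvm_coalesced_mmio {
	uint64_t phys_addr;	/* guest-physical address of the write */
	uint32_t len;		/* number of valid bytes in data[] */
	uint32_t pad;
	uint8_t  data[8];	/* the coalesced write payload */
};

struct kvm_coalesced_mmio_ring {
	uint32_t first;		/* consumer index, advanced by userspace */
	uint32_t last;		/* producer index, advanced by the kernel */
	struct kvm_coalesced_mmio coalesced_mmio[];
};

/* Entries that fit in the page after the first/last header - this is
 * why the usable part of the ring is "a bit less than PAGE_SIZE". */
#define KVM_COALESCED_MMIO_MAX \
	((SKETCH_PAGE_SIZE - sizeof(struct kvm_coalesced_mmio_ring)) / \
	 sizeof(struct kvm_coalesced_mmio))

/* Drain every pending entry, replaying each coalesced write against the
 * emulated device; userspace calls this on every return from KVM_RUN,
 * whatever the exit reason, before handling the exit itself. */
static int coalesced_mmio_flush(struct kvm_coalesced_mmio_ring *ring,
				void (*handle)(uint64_t addr,
					       const uint8_t *data,
					       uint32_t len))
{
	int flushed = 0;

	while (ring->first != ring->last) {
		struct kvm_coalesced_mmio *e =
			&ring->coalesced_mmio[ring->first];

		handle(e->phys_addr, e->data, e->len);
		ring->first = (ring->first + 1) % KVM_COALESCED_MMIO_MAX;
		flushed++;
	}
	return flushed;
}
```

The point of the "flush on any exit" rule is visible here: the ring only drains when control returns to userspace, so writes parked between first and last stay pending until the next exit of any kind.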
> But if the page is only filled partially and if mmio is not submitted
> by the guest indefinitely (say it runs a lot of user-space code) then
> the mmio remains pending in the partial-page buffer?

We flush the ring on any exit from the guest, not just an MMIO exit.

But yes, from what I understand from the code - if the buffer is only
partially full and we don't take an exit, its contents don't make it
back to the host. ioeventfds and such are making exits less common, so
yes - it's possible we won't take an exit for a while.

> If that's how it works then i *really* don't like this, this looks
> like a seriously mis-designed batching feature which might have
> improved a few server benchmarks but which will introduce random,
> hard to debug delays all around the place!

-- 
Sasha.