From: Sasha Levin
Subject: Re: [PATCH v2] MMIO: Make coalesced mmio use a device per zone
Date: Tue, 19 Jul 2011 15:34:38 +0300
To: Avi Kivity
Cc: kvm@vger.kernel.org, Marcelo Tosatti
Message-ID: <1311078878.20113.2.camel@lappy>
In-Reply-To: <4E25777F.1060604@redhat.com>
References: <1311071471-15546-1-git-send-email-levinsasha928@gmail.com>
	 <4E256301.1040700@redhat.com>
	 <1311073516.9174.10.camel@lappy>
	 <4E25777F.1060604@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

On Tue, 2011-07-19 at 15:24 +0300, Avi Kivity wrote:
> On 07/19/2011 02:05 PM, Sasha Levin wrote:
> > On Tue, 2011-07-19 at 13:57 +0300, Avi Kivity wrote:
> > > On 07/19/2011 01:31 PM, Sasha Levin wrote:
> > > > This patch changes coalesced mmio to create one mmio device per
> > > > zone instead of handling all zones in one device.
> > > >
> > > > Doing so enables us to take advantage of existing locking and prevents
> > > > a race condition between coalesced mmio registration/unregistration
> > > > and lookups.
> > > >
> > > > @@ -63,7 +63,7 @@ extern struct kmem_cache *kvm_vcpu_cache;
> > > >   */
> > > >  struct kvm_io_bus {
> > > >  	int dev_count;
> > > > -#define NR_IOBUS_DEVS 200
> > > > +#define NR_IOBUS_DEVS 300
> > > >  	struct kvm_io_device *devs[NR_IOBUS_DEVS];
> > > >  };
> > >
> > > This means that a lot of non-coalesced-mmio users can squeeze out
> > > coalesced-mmio. I don't know if it's really worthwhile, but the 100
> > > coalesced mmio slots should be reserved so we are guaranteed they are
> > > available.
> >
> > We are currently registering 4 devices, plus however many
> > ioeventfds/coalesced mmio zones the user wants. I already felt bad
> > about upping it to 300.
>
> It's just a few kilobytes, whereas even a small guest occupies half a
> gigabyte. Even just its pagetables swallow up megabytes.
>
> An array means fewer opportunities to screw up the code and better cache
> usage with small objects.
>
> > > >
> > > > @@ -95,6 +85,8 @@ static void coalesced_mmio_destructor(struct kvm_io_device *this)
> > > >  {
> > > >  	struct kvm_coalesced_mmio_dev *dev = to_mmio(this);
> > > >
> > > > +	list_del(&dev->list);
> > > > +
> > > >  	kfree(dev);
> > > >  }
> > >
> > > No lock?
> >
> > The lock is there to synchronize access to the coalesced ring (it was
> > there before this patch too, it's not something new), not the device
> > list.
> >
> > The device list is only accessed while kvm->slots_lock is held, so
> > that takes care of it.
>
> Right. A comment please.
>
> btw, don't we leak all zones on guest destruction? The array didn't need
> any cleanup, but this list does.

No, the destructor is called for all devices on the bus when the bus is
going down. We handle it in coalesced_mmio_destructor(), which frees the
device.

-- 
Sasha.
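[Editor's note: the locking pattern under discussion — per-zone devices on an
intrusive list whose destructor does an unlocked list_del() because every
caller already holds kvm->slots_lock — can be modeled in plain userspace C.
This is a hedged sketch only: zone_dev, zone_register, zone_destructor, and
the slots_lock mutex are hypothetical stand-ins, not the kernel's actual
kvm_coalesced_mmio_dev code.]

```c
#include <stdlib.h>
#include <pthread.h>

/* One device per coalesced-mmio zone, on an intrusive doubly-linked
 * list (analogous to dev->list in the patch). */
struct zone_dev {
	unsigned long addr, size;     /* zone covered by this device */
	struct zone_dev *prev, *next;
};

/* Circular list head; empty list points at itself. */
static struct zone_dev head = { .prev = &head, .next = &head };

/* Stand-in for kvm->slots_lock: the only lock guarding the list. */
static pthread_mutex_t slots_lock = PTHREAD_MUTEX_INITIALIZER;

/* Registration creates one device per zone and links it in under
 * the lock, mirroring "one mmio device per zone". */
struct zone_dev *zone_register(unsigned long addr, unsigned long size)
{
	struct zone_dev *dev = malloc(sizeof(*dev));

	dev->addr = addr;
	dev->size = size;
	pthread_mutex_lock(&slots_lock);
	dev->next = head.next;
	dev->prev = &head;
	head.next->prev = dev;
	head.next = dev;
	pthread_mutex_unlock(&slots_lock);
	return dev;
}

/* Destructor: unlink and free. Deliberately takes no lock of its
 * own -- the caller must hold slots_lock, which is the comment Avi
 * asked for. Called for every device when the bus is torn down, so
 * zones are not leaked on guest destruction. */
void zone_destructor(struct zone_dev *dev)
{
	dev->prev->next = dev->next;
	dev->next->prev = dev->prev;
	free(dev);
}

/* Walk the list; also only safe under slots_lock. */
int zone_count(void)
{
	int n = 0;

	for (struct zone_dev *d = head.next; d != &head; d = d->next)
		n++;
	return n;
}
```

The point of the sketch is the division of responsibility: registration and
lookup serialize on one outer lock, so the per-device destructor can stay
lock-free as long as that invariant is documented at the call site.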