From: Sasha Levin <levinsasha928@gmail.com>
To: Avi Kivity <avi@redhat.com>
Cc: kvm@vger.kernel.org, Marcelo Tosatti <mtosatti@redhat.com>
Subject: Re: [PATCH v2] MMIO: Make coalesced mmio use a device per zone
Date: Tue, 19 Jul 2011 15:34:38 +0300
Message-ID: <1311078878.20113.2.camel@lappy>
In-Reply-To: <4E25777F.1060604@redhat.com>

On Tue, 2011-07-19 at 15:24 +0300, Avi Kivity wrote:
> On 07/19/2011 02:05 PM, Sasha Levin wrote:
> > On Tue, 2011-07-19 at 13:57 +0300, Avi Kivity wrote:
> > >  On 07/19/2011 01:31 PM, Sasha Levin wrote:
> > >  >  This patch changes coalesced mmio to create one mmio device per
> > >  >  zone instead of handling all zones in one device.
> > >  >
> > >  >  Doing so enables us to take advantage of existing locking and prevents
> > >  >  a race condition between coalesced mmio registration/unregistration
> > >  >  and lookups.
> > >  >
> > >  >  @@ -63,7 +63,7 @@ extern struct kmem_cache *kvm_vcpu_cache;
> > >  >     */
> > >  >    struct kvm_io_bus {
> > >  >    	int                   dev_count;
> > >  >  -#define NR_IOBUS_DEVS 200
> > >  >  +#define NR_IOBUS_DEVS 300
> > >  >    	struct kvm_io_device *devs[NR_IOBUS_DEVS];
> > >  >    };
> > >
> > >  This means that a lot of non-coalesced-mmio users can squeeze out
> > >  coalesced-mmio.  I don't know if it's really worthwhile, but the 100
> > >  coalesced mmio slots should be reserved so we are guaranteed they are
> > >  available.
> >
> > We are currently registering 4 devices, plus however many
> > ioeventfds/coalesced mmio zones the user wants. I felt bad about
> > upping it to 300, really.
> 
> It's just a few kilobytes, whereas even a small guest occupies half a 
> gigabyte.  Even just its pagetables swallow up megabytes.
> 
> An array means fewer opportunities to screw up the code and better 
> cache usage with small objects.
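
Fair enough. Just so I read your reservation idea right, you mean
something like the sketch below? (NR_COALESCED_RESERVED and the helper
name are made up here for illustration, they're not from the patch.)

#define NR_IOBUS_DEVS		300
/* Made-up name: slots kept back for coalesced mmio zones only. */
#define NR_COALESCED_RESERVED	100

/* Made-up helper: non-coalesced devices may only fill the unreserved
 * part of the bus, so NR_COALESCED_RESERVED coalesced zones are
 * always guaranteed to fit. */
static bool kvm_io_bus_has_room(struct kvm_io_bus *bus, bool coalesced)
{
	int limit = coalesced ? NR_IOBUS_DEVS
			      : NR_IOBUS_DEVS - NR_COALESCED_RESERVED;

	return bus->dev_count < limit;
}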
> 
> > >
> > >  >
> > >  >  @@ -95,6 +85,8 @@ static void coalesced_mmio_destructor(struct kvm_io_device *this)
> > >  >    {
> > >  >    	struct kvm_coalesced_mmio_dev *dev = to_mmio(this);
> > >  >
> > >  >  +	list_del(&dev->list);
> > >  >  +
> > >  >    	kfree(dev);
> > >  >    }
> > >  >
> > >
> > >  No lock?
> >
> > The lock is there to synchronize access to the coalesced ring (it was
> > here before this patch too, it's not something new), not the device
> > list.
> >
> > The device list is only accessed when kvm->slots_lock is held, so it
> > takes care of that.
> 
> Right.  A comment please.
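
Sure. Something along these lines, maybe (comment wording is just a
first stab):

static void coalesced_mmio_destructor(struct kvm_io_device *this)
{
	struct kvm_coalesced_mmio_dev *dev = to_mmio(this);

	/*
	 * The device list is only accessed with kvm->slots_lock
	 * held, so list_del() here needs no extra locking.
	 */
	list_del(&dev->list);

	kfree(dev);
}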
> 
> btw, don't we leak all zones on guest destruction? the array didn't need 
> any cleanup, but this list does.
> 

No, the destructor is called for every device on the bus when the bus
goes down. We handle it in coalesced_mmio_destructor(), which frees
the device.
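
For reference, the teardown path is kvm_io_bus_destroy() in
virt/kvm/kvm_main.c, which is roughly this (quoting from memory):

void kvm_io_bus_destroy(struct kvm_io_bus *bus)
{
	int i;

	/* Run every registered device's destructor; for a coalesced
	 * mmio zone that's coalesced_mmio_destructor(), which unlinks
	 * the zone from the list and frees it. */
	for (i = 0; i < bus->dev_count; i++)
		kvm_iodevice_destructor(bus->devs[i]);
}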

-- 

Sasha.

