From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
Subject: Re: [PATCH v2 1/2] KVM: MMIO: Lock coalesced device when checking
 for available entry
Date: Mon, 18 Jul 2011 12:29:49 +0300
Message-ID: <1310981389.8209.3.camel@lappy>
References: <1310729869-1451-1-git-send-email-levinsasha928@gmail.com>
 <4E23EACD.1020407@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Cc: kvm@vger.kernel.org, Ingo Molnar, Marcelo Tosatti, Pekka Enberg
To: Avi Kivity
Return-path:
Received: from mail-ww0-f44.google.com ([74.125.82.44]:39672 "EHLO
 mail-ww0-f44.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with
 ESMTP id S1750758Ab1GRJaI (ORCPT ); Mon, 18 Jul 2011 05:30:08 -0400
Received: by wwe5 with SMTP id 5so2989810wwe.1 for ; Mon, 18 Jul 2011
 02:30:07 -0700 (PDT)
In-Reply-To: <4E23EACD.1020407@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Mon, 2011-07-18 at 11:11 +0300, Avi Kivity wrote:
> On 07/15/2011 02:37 PM, Sasha Levin wrote:
> > Move the check whether there are available entries to within the spinlock.
> > This allows working with a larger number of VCPUs and reduces premature
> > exits when using a large number of VCPUs.
> >
> > Cc: Avi Kivity
> > Cc: Ingo Molnar
> > Cc: Marcelo Tosatti
> > Cc: Pekka Enberg
> > Signed-off-by: Sasha Levin
> > ---
> >  virt/kvm/coalesced_mmio.c |    9 ++++++---
> >  1 files changed, 6 insertions(+), 3 deletions(-)
> >
> > diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
> > index fc84875..34188db 100644
> > --- a/virt/kvm/coalesced_mmio.c
> > +++ b/virt/kvm/coalesced_mmio.c
> > @@ -37,7 +37,7 @@ static int coalesced_mmio_in_range(struct kvm_coalesced_mmio_dev *dev,
> >  	 */
> >  	ring = dev->kvm->coalesced_mmio_ring;
> >  	avail = (ring->first - ring->last - 1) % KVM_COALESCED_MMIO_MAX;
> > -	if (avail < KVM_MAX_VCPUS) {
> > +	if (avail == 0) {
> >  		/* full */
> >  		return 0;
> >  	}
> > @@ -63,11 +63,14 @@ static int coalesced_mmio_write(struct kvm_io_device *this,
> >  {
> >  	struct kvm_coalesced_mmio_dev *dev = to_mmio(this);
> >  	struct kvm_coalesced_mmio_ring *ring = dev->kvm->coalesced_mmio_ring;
> > -	if (!coalesced_mmio_in_range(dev, addr, len))
> > -		return -EOPNOTSUPP;
> >
> >  	spin_lock(&dev->lock);
> >
> > +	if (!coalesced_mmio_in_range(dev, addr, len)) {
> > +		spin_unlock(&dev->lock);
> > +		return -EOPNOTSUPP;
> > +	}
> > +
> >  	/* copy data in first free entry of the ring */
>
> Hmm. This means we take the lock for every I/O, whether it hits
> coalesced mmio or not.
>
> We need to do the range check before taking the lock and the space check
> after taking the lock.
>

I'll fix that.

Shouldn't the range check also be locked somehow? Currently it is
possible that a coalesced region was removed while we are checking the
ranges, and we won't issue an mmio exit as the host expects.

-- 
Sasha.
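[Editor's note: the ordering Avi asks for — range check before taking the lock, space check only after taking it — can be sketched as a user-space model. This is not the kernel code: a pthread mutex stands in for `dev->lock`, and `RING_MAX`, `struct ring`, `struct dev`, `in_range`, and `mmio_write` are illustrative stand-ins for the KVM structures named in the patch.]

```c
#include <pthread.h>
#include <stdio.h>

#define RING_MAX 64u  /* stand-in for KVM_COALESCED_MMIO_MAX */

struct ring {
	unsigned first;            /* consumer index (advanced by userspace)   */
	unsigned last;             /* producer index (advanced by the writer)  */
	unsigned long data[RING_MAX];
};

struct dev {
	pthread_mutex_t lock;      /* stand-in for dev->lock                   */
	struct ring ring;
	unsigned long base, size;  /* one coalesced region, for illustration   */
};

/* Range check: only consults the (comparatively stable) region list,
 * so it can run without the lock on the common non-coalesced path. */
static int in_range(struct dev *dev, unsigned long addr)
{
	return addr >= dev->base && addr < dev->base + dev->size;
}

static int mmio_write(struct dev *dev, unsigned long addr, unsigned long val)
{
	unsigned avail;

	/* 1. Range check before taking the lock: writes that miss the
	 *    coalesced region never touch the lock at all. */
	if (!in_range(dev, addr))
		return -1;  /* -EOPNOTSUPP in the kernel */

	pthread_mutex_lock(&dev->lock);

	/* 2. Space check under the lock, so another vcpu cannot consume
	 *    the free slot between the check and the insertion. */
	avail = (dev->ring.first - dev->ring.last - 1) % RING_MAX;
	if (avail == 0) {
		pthread_mutex_unlock(&dev->lock);
		return -1;  /* ring full: fall back to a regular mmio exit */
	}

	dev->ring.data[dev->ring.last] = val;
	dev->ring.last = (dev->ring.last + 1) % RING_MAX;

	pthread_mutex_unlock(&dev->lock);
	return 0;
}
```

With `first == last` the unsigned arithmetic gives `avail == RING_MAX - 1`, so the ring holds at most `RING_MAX - 1` entries before the full check fires. Note that this sketch does not address Sasha's follow-up question: a region removed concurrently with the lockless range check would still race, exactly as described above.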