Date: Sun, 26 Oct 2014 13:56:46 +0200
From: "Michael S. Tsirkin"
To: "john.liuli"
Cc: joel.schopp@amd.com, yingshiuan.pan@gmail.com, qemu-devel@nongnu.org,
	linux-kernel@vger.kernel.org, remy.gauguey@cea.fr, rusty@rustcorp.com.au,
	peter.huangpeng@huawei.com, n.nikolaev@virtualopensystems.com,
	virtualization@lists.linux-foundation.org
Message-ID: <20141026115646.GB5497@redhat.com>
In-Reply-To: <1414225494-2208-3-git-send-email-john.liuli@huawei.com>
References: <1414225494-2208-1-git-send-email-john.liuli@huawei.com>
	<1414225494-2208-3-git-send-email-john.liuli@huawei.com>
Subject: Re: [Qemu-devel] [RFC PATCH 2/2] Assign a new irq handler while irqfd enabled

On Sat, Oct 25, 2014 at 04:24:54PM +0800, john.liuli wrote:
> From: Li Liu
> 
> This irq handler will get the interrupt reason from a
> shared memory. And will be assigned only while irqfd
> enabled.
> 
> Signed-off-by: Li Liu
> ---
>  drivers/virtio/virtio_mmio.c |   34 ++++++++++++++++++++++++++++++++--
>  1 file changed, 32 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
> index 28ddb55..7229605 100644
> --- a/drivers/virtio/virtio_mmio.c
> +++ b/drivers/virtio/virtio_mmio.c
> @@ -259,7 +259,31 @@ static irqreturn_t vm_interrupt(int irq, void *opaque)
>  	return ret;
>  }
>  
> +/* Notify all virtqueues on an interrupt. */
> +static irqreturn_t vm_interrupt_irqfd(int irq, void *opaque)
> +{
> +	struct virtio_mmio_device *vm_dev = opaque;
> +	struct virtio_mmio_vq_info *info;
> +	unsigned long status;
> +	unsigned long flags;
> +	irqreturn_t ret = IRQ_NONE;
> 
> +	/* Read the interrupt reason and reset it */
> +	status = *vm_dev->isr_mem;
> +	*vm_dev->isr_mem = 0x0;

You are reading and modifying shared memory without atomics or any
memory barriers. Why is this safe?

> +
> +	if (unlikely(status & VIRTIO_MMIO_INT_CONFIG)) {
> +		virtio_config_changed(&vm_dev->vdev);
> +		ret = IRQ_HANDLED;
> +	}
> +
> +	spin_lock_irqsave(&vm_dev->lock, flags);
> +	list_for_each_entry(info, &vm_dev->virtqueues, node)
> +		ret |= vring_interrupt(irq, info->vq);
> +	spin_unlock_irqrestore(&vm_dev->lock, flags);
> +
> +	return ret;
> +}
> 
>  static void vm_del_vq(struct virtqueue *vq)
>  {

So you invoke callbacks for all VQs. This won't scale well as the
number of VQs grows, will it?
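
For the ISR read-and-clear flagged above, a minimal sketch of one way to
make it a single atomic, ordered operation (assuming isr_mem points at a
u32 in ordinary cacheable guest RAM rather than an __iomem mapping; the
isr_mem field is taken from the patch, the rest is illustrative only):

	/*
	 * Atomically fetch the interrupt reason and clear it in one step.
	 * xchg() implies a full memory barrier, so the status read cannot
	 * be reordered against the virtqueue processing that follows, and
	 * a host update can no longer be lost between the read and the
	 * write of zero.
	 */
	status = xchg(vm_dev->isr_mem, 0);

Whether xchg() alone gives enough ordering against the host side depends
on how the shared page is set up, so treat this as the shape of a fix,
not the fix itself.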
> @@ -391,6 +415,7 @@ error_available:
>  	return ERR_PTR(err);
>  }
>  
> +#define VIRTIO_MMIO_F_IRQFD	(1 << 7)
>  static int vm_find_vqs(struct virtio_device *vdev, unsigned nvqs,
>  		       struct virtqueue *vqs[],
>  		       vq_callback_t *callbacks[],
> @@ -400,8 +425,13 @@ static int vm_find_vqs(struct virtio_device *vdev, unsigned nvqs,
>  	unsigned int irq = platform_get_irq(vm_dev->pdev, 0);
>  	int i, err;
>  
> -	err = request_irq(irq, vm_interrupt, IRQF_SHARED,
> -			dev_name(&vdev->dev), vm_dev);
> +	if (*vm_dev->isr_mem & VIRTIO_MMIO_F_IRQFD) {
> +		err = request_irq(irq, vm_interrupt_irqfd, IRQF_SHARED,
> +				dev_name(&vdev->dev), vm_dev);
> +	} else {
> +		err = request_irq(irq, vm_interrupt, IRQF_SHARED,
> +				dev_name(&vdev->dev), vm_dev);
> +	}
>  	if (err)
>  		return err;

So still a single interrupt for all VQs. Again this doesn't scale:
a single CPU has to handle interrupts for all of them.
I think you need to find a way to get per-VQ interrupts.

> -- 
> 1.7.9.5
> 
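
On the per-VQ interrupt point above, a rough sketch of the direction,
modeled loosely on virtio-pci's MSI-X path. It assumes the platform
device could expose one interrupt resource per queue, which the current
virtio-mmio binding does not define, and vm_vq_interrupt is a
hypothetical helper, not existing code:

	/*
	 * Hypothetical per-queue handler: each interrupt pokes only its
	 * own ring, so there is no list walk and no shared lock in the
	 * hot path, and different queues can be serviced on different
	 * CPUs.
	 */
	static irqreturn_t vm_vq_interrupt(int irq, void *opaque)
	{
		struct virtqueue *vq = opaque;

		return vring_interrupt(irq, vq);
	}

	/* In vm_find_vqs(), after the queues are created: request one
	 * IRQ per queue instead of a single shared one. */
	for (i = 0; i < nvqs; i++) {
		int virq = platform_get_irq(vm_dev->pdev, i);

		if (virq < 0)
			return virq;
		err = request_irq(virq, vm_vq_interrupt, 0,
				  dev_name(&vdev->dev), vqs[i]);
		if (err)
			return err;	/* real code would unwind the
					 * IRQs already requested */
	}

Either way, the point is that interrupt load should spread across CPUs
as the number of queues grows.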