Date: Fri, 5 Mar 2010 23:49:11 +0530
From: Amit Shah
To: "Michael S. Tsirkin"
Cc: quintela@redhat.com, qemu-devel@nongnu.org, kraxel@redhat.com
Subject: [Qemu-devel] Re: [PATCHv4 09/12] vhost: vhost net support
Message-ID: <20100305181911.GA24686@amit-x200.redhat.com>
In-Reply-To: <36c10281d19b4c845444363273657c4709210e35.1267636215.git.mst@redhat.com>

On (Wed) Mar 03 2010 [19:16:35], Michael S. Tsirkin wrote:
> +static int vhost_virtqueue_init(struct vhost_dev *dev,
> +                                struct VirtIODevice *vdev,
> +                                struct vhost_virtqueue *vq,
> +                                unsigned idx)
> +{
> +    target_phys_addr_t s, l, a;
> +    int r;
> +    struct vhost_vring_file file = {
> +        .index = idx,
> +    };
> +    struct vhost_vring_state state = {
> +        .index = idx,
> +    };
> +    struct VirtQueue *q = virtio_queue(vdev, idx);

Why depart from using 'vq' for VirtQueue? Why not use 'hvq' for the
vhost_virtqueue instead? That would make reading through this code
easier. Also, 'hvdev' for vhost_dev would be apt as well.
> +    vq->num = state.num = virtio_queue_get_num(vdev, idx);

I think this should be named 'virtio_queue_get_vq_num' for clarity.

> +    r = ioctl(dev->control, VHOST_SET_VRING_NUM, &state);
> +    if (r) {
> +        return -errno;
> +    }
> +
> +    state.num = virtio_queue_last_avail_idx(vdev, idx);
> +    r = ioctl(dev->control, VHOST_SET_VRING_BASE, &state);
> +    if (r) {
> +        return -errno;
> +    }
> +
> +    s = l = virtio_queue_get_desc_size(vdev, idx);
> +    a = virtio_queue_get_desc(vdev, idx);
> +    vq->desc = cpu_physical_memory_map(a, &l, 0);
> +    if (!vq->desc || l != s) {
> +        r = -ENOMEM;
> +        goto fail_alloc_desc;
> +    }
> +    s = l = virtio_queue_get_avail_size(vdev, idx);
> +    a = virtio_queue_get_avail(vdev, idx);
> +    vq->avail = cpu_physical_memory_map(a, &l, 0);
> +    if (!vq->avail || l != s) {
> +        r = -ENOMEM;
> +        goto fail_alloc_avail;
> +    }
> +    vq->used_size = s = l = virtio_queue_get_used_size(vdev, idx);
> +    vq->used_phys = a = virtio_queue_get_used(vdev, idx);
> +    vq->used = cpu_physical_memory_map(a, &l, 1);
> +    if (!vq->used || l != s) {
> +        r = -ENOMEM;
> +        goto fail_alloc_used;
> +    }
> +
> +    vq->ring_size = s = l = virtio_queue_get_ring_size(vdev, idx);
> +    vq->ring_phys = a = virtio_queue_get_ring(vdev, idx);
> +    vq->ring = cpu_physical_memory_map(a, &l, 1);
> +    if (!vq->ring || l != s) {
> +        r = -ENOMEM;
> +        goto fail_alloc_ring;
> +    }
> +
> +    r = vhost_virtqueue_set_addr(dev, vq, idx, dev->log_enabled);
> +    if (r < 0) {
> +        r = -errno;
> +        goto fail_alloc;
> +    }
> +    if (!vdev->binding->guest_notifier || !vdev->binding->host_notifier) {
> +        fprintf(stderr, "binding does not support irqfd/queuefd\n");
> +        r = -ENOSYS;
> +        goto fail_alloc;
> +    }

This could be checked much earlier in the function, so that we avoid
doing all the setup above and the corresponding cleanup.

		Amit