From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:50163)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1ZFo3A-0001Qm-JD for qemu-devel@nongnu.org;
	Thu, 16 Jul 2015 14:38:29 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1ZFo35-0002Xq-Fi
	for qemu-devel@nongnu.org; Thu, 16 Jul 2015 14:38:28 -0400
Received: from mx1.redhat.com ([209.132.183.28]:58999)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1ZFo35-0002X2-AW for qemu-devel@nongnu.org;
	Thu, 16 Jul 2015 14:38:23 -0400
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (Postfix) with ESMTPS id CC80C2B786E
	for ; Thu, 16 Jul 2015 18:38:21 +0000 (UTC)
From: Wei Huang
Date: Thu, 16 Jul 2015 14:38:13 -0400
Message-Id: <1437071893-19457-1-git-send-email-wei@redhat.com>
Subject: [Qemu-devel] [PATCH 1/1] virtio-mmio: return the max queue num of
	virtio-mmio with initial value
List-Id:
To: qemu-devel@nongnu.org
Cc: amit.shah@redhat.com, Wei Huang, drjones@redhat.com, mst@redhat.com

Recently we found that virtio-console devices consume lots of AArch64
guest memory, roughly 1GB with 8 devices. After debugging, it turns out
that several factors contribute to this problem: i) guest
PAGE_SIZE=64KB, ii) virtio-mmio based devices, and iii) the
virtio-console device. Here is the detailed analysis:

1. First, during initialization, the virtio-mmio driver in the guest
   probes the vq size by reading VIRTIO_MMIO_QUEUE_NUM_MAX (see the
   virtio_mmio.c file).
2. QEMU returns VIRTQUEUE_MAX_SIZE (1024) to the guest VM, and
   virtio-mmio uses it as the default vq size.
3. The virtio-console driver allocates vring buffers based on this
   value (see the add_inbuf() function in the virtio_console.c file).
   Because PAGE_SIZE=64KB, ~64MB is allocated for each virtio-console
   vq.
This patch addresses the problem by returning the initialized vring
size when the VM queries QEMU for VIRTIO_MMIO_QUEUE_NUM_MAX. This is
similar to virtio-pci's approach. By doing this, the vq memory
consumption is reduced substantially.

Signed-off-by: Wei Huang
---
 hw/virtio/virtio-mmio.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/hw/virtio/virtio-mmio.c b/hw/virtio/virtio-mmio.c
index 10123f3..27840fe 100644
--- a/hw/virtio/virtio-mmio.c
+++ b/hw/virtio/virtio-mmio.c
@@ -93,6 +93,7 @@ static uint64_t virtio_mmio_read(void *opaque, hwaddr offset, unsigned size)
 {
     VirtIOMMIOProxy *proxy = (VirtIOMMIOProxy *)opaque;
     VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
+    uint64_t queue_num;
 
     DPRINTF("virtio_mmio_read offset 0x%x\n", (int)offset);
 
@@ -149,10 +150,8 @@ static uint64_t virtio_mmio_read(void *opaque, hwaddr offset, unsigned size)
         }
         return proxy->host_features;
     case VIRTIO_MMIO_QUEUENUMMAX:
-        if (!virtio_queue_get_num(vdev, vdev->queue_sel)) {
-            return 0;
-        }
-        return VIRTQUEUE_MAX_SIZE;
+        queue_num = virtio_queue_get_num(vdev, vdev->queue_sel);
+        return queue_num;
     case VIRTIO_MMIO_QUEUEPFN:
         return virtio_queue_get_addr(vdev, vdev->queue_sel)
                >> proxy->guest_page_shift;
-- 
1.8.3.1