From: marcandre.lureau@redhat.com
Date: Thu, 24 Sep 2015 13:37:18 +0200
Message-Id: <1443094669-4144-17-git-send-email-marcandre.lureau@redhat.com>
In-Reply-To: <1443094669-4144-1-git-send-email-marcandre.lureau@redhat.com>
References: <1443094669-4144-1-git-send-email-marcandre.lureau@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Subject: [Qemu-devel] [PATCH v4 16/47] ivshmem: remove max_peer field
To: qemu-devel@nongnu.org
Cc: drjones@redhat.com, claudio.fontana@huawei.com, stefanha@redhat.com,
    Marc-André Lureau <marcandre.lureau@redhat.com>, pbonzini@redhat.com,
    cam@cs.ualberta.ca

From: Marc-André Lureau <marcandre.lureau@redhat.com>

max_peer isn't really useful: it tracks the maximum received VM ID, but
that quickly matches nb_peers, the size of the peers array. Since VMs come
and go, the peers array may be sparse anyway, so it doesn't help much in
general to keep this value around.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 hw/misc/ivshmem.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/hw/misc/ivshmem.c b/hw/misc/ivshmem.c
index 0716deb..c4c130d 100644
--- a/hw/misc/ivshmem.c
+++ b/hw/misc/ivshmem.c
@@ -90,7 +90,6 @@ typedef struct IVShmemState {
 
     Peer *peers;
     int nb_peers; /* how many guests we have space for */
-    int max_peer; /* maximum numbered peer */
 
     int vm_id;
     uint32_t vectors;
@@ -200,7 +199,7 @@ static void ivshmem_io_write(void *opaque, hwaddr addr,
 
         case DOORBELL:
             /* check that dest VM ID is reasonable */
-            if (dest > s->max_peer) {
+            if (dest >= s->nb_peers) {
                 IVSHMEM_DPRINTF("Invalid destination VM ID (%d)\n", dest);
                 break;
             }
@@ -574,11 +573,6 @@ static void ivshmem_read(void *opaque, const uint8_t *buf, int size)
     /* increment count for particular guest */
     s->peers[incoming_posn].nb_eventfds++;
 
-    /* keep track of the maximum VM ID */
-    if (incoming_posn > s->max_peer) {
-        s->max_peer = incoming_posn;
-    }
-
     if (incoming_posn == s->vm_id) {
         s->eventfd_chr[guest_max_eventfd] = create_eventfd_chr_device(s,
                    &s->peers[s->vm_id].eventfds[guest_max_eventfd],
@@ -721,8 +715,6 @@ static void pci_ivshmem_realize(PCIDevice *dev, Error **errp)
         PCI_BASE_ADDRESS_MEM_PREFETCH;
     Error *local_err = NULL;
 
-    s->max_peer = -1;
-
    if (s->sizearg == NULL) {
         s->ivshmem_size = 4 << 20; /* 4 MB default */
     } else {
-- 
2.4.3
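
For readers who want to see the effect of the new bounds check in isolation,
below is a minimal, self-contained C sketch (not part of the patch) of a
doorbell routine that validates the destination against the array size and
then copes with sparse peer slots. The DemoPeer/DemoState/demo_doorbell names
are made up for illustration and do not exist in QEMU; the real device keeps
per-peer eventfds and a chardev-based protocol around this check.

/*
 * Illustrative sketch only: a simplified peer table and doorbell check
 * mirroring the patch's "dest >= nb_peers" bounds test.  All names here
 * are hypothetical, not QEMU identifiers.
 */
#include <stdio.h>

typedef struct DemoPeer {
    int nb_eventfds;            /* 0 when the slot is empty (sparse peer) */
} DemoPeer;

typedef struct DemoState {
    DemoPeer *peers;            /* indexed by VM ID */
    int nb_peers;               /* size of the peers array */
} DemoState;

/* Returns 0 if the doorbell could be delivered, -1 otherwise. */
static int demo_doorbell(DemoState *s, int dest, int vector)
{
    /* bounds check against the array size, as the patch now does */
    if (dest < 0 || dest >= s->nb_peers) {
        fprintf(stderr, "Invalid destination VM ID (%d)\n", dest);
        return -1;
    }
    /* the slot may be unused because peers come and go (sparse array) */
    if (vector >= s->peers[dest].nb_eventfds) {
        return -1;
    }
    /* a real device would signal the peer's eventfd for 'vector' here */
    return 0;
}

int main(void)
{
    DemoPeer peers[4] = { { 1 }, { 0 }, { 2 }, { 0 } };  /* sparse slots */
    DemoState s = { peers, 4 };

    demo_doorbell(&s, 2, 0);    /* in range, slot populated: delivered */
    demo_doorbell(&s, 1, 0);    /* in range but empty slot: rejected   */
    demo_doorbell(&s, 9, 0);    /* out of range: rejected with message */
    return 0;
}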