From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:35486) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1Zjalq-0003YB-Vc for qemu-devel@nongnu.org; Tue, 06 Oct 2015 18:31:46 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1ZjXn9-0004EI-QI for qemu-devel@nongnu.org; Tue, 06 Oct 2015 15:21:27 -0400
Received: from mail-qk0-x236.google.com ([2607:f8b0:400d:c09::236]:34456) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1ZjXn7-0004A2-VR for qemu-devel@nongnu.org; Tue, 06 Oct 2015 15:20:50 -0400
Received: by qkbi190 with SMTP id i190so70125291qkb.1 for ; Tue, 06 Oct 2015 12:20:49 -0700 (PDT)
Sender: Marc-André Lureau
From: marcandre.lureau@redhat.com
Date: Tue, 6 Oct 2015 21:19:12 +0200
Message-Id: <1444159184-18153-17-git-send-email-marcandre.lureau@redhat.com>
In-Reply-To: <1444159184-18153-1-git-send-email-marcandre.lureau@redhat.com>
References: <1444159184-18153-1-git-send-email-marcandre.lureau@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Subject: [Qemu-devel] [PULL 16/48] ivshmem: remove max_peer field
To: peter.maydell@linaro.org
Cc: Marc-André Lureau, qemu-devel@nongnu.org

From: Marc-André Lureau

max_peer isn't really useful: it tracks the maximum received VM ID, but
that quickly matches nb_peers, the size of the peers array. Since VMs
come and go, the peers array may be sparse, so keeping this value around
doesn't help much in general.

Signed-off-by: Marc-André Lureau
Reviewed-by: Claudio Fontana
---
 hw/misc/ivshmem.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/hw/misc/ivshmem.c b/hw/misc/ivshmem.c
index 0716deb..c4c130d 100644
--- a/hw/misc/ivshmem.c
+++ b/hw/misc/ivshmem.c
@@ -90,7 +90,6 @@ typedef struct IVShmemState {
 
     Peer *peers;
     int nb_peers; /* how many guests we have space for */
-    int max_peer; /* maximum numbered peer */
 
     int vm_id;
     uint32_t vectors;
@@ -200,7 +199,7 @@ static void ivshmem_io_write(void *opaque, hwaddr addr,
 
         case DOORBELL:
             /* check that dest VM ID is reasonable */
-            if (dest > s->max_peer) {
+            if (dest >= s->nb_peers) {
                 IVSHMEM_DPRINTF("Invalid destination VM ID (%d)\n", dest);
                 break;
             }
@@ -574,11 +573,6 @@ static void ivshmem_read(void *opaque, const uint8_t *buf, int size)
     /* increment count for particular guest */
     s->peers[incoming_posn].nb_eventfds++;
 
-    /* keep track of the maximum VM ID */
-    if (incoming_posn > s->max_peer) {
-        s->max_peer = incoming_posn;
-    }
-
     if (incoming_posn == s->vm_id) {
         s->eventfd_chr[guest_max_eventfd] = create_eventfd_chr_device(s,
                    &s->peers[s->vm_id].eventfds[guest_max_eventfd],
@@ -721,8 +715,6 @@ static void pci_ivshmem_realize(PCIDevice *dev, Error **errp)
                       PCI_BASE_ADDRESS_MEM_PREFETCH;
     Error *local_err = NULL;
 
-    s->max_peer = -1;
-
     if (s->sizearg == NULL) {
         s->ivshmem_size = 4 << 20; /* 4 MB default */
     } else {
-- 
2.4.3
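
For illustration, here is a minimal, self-contained sketch (not QEMU code) of the bounds check this patch switches to: the doorbell destination is validated against nb_peers, the size of the peers array, instead of a separately tracked max_peer. The MiniState type and dest_id_in_range() helper below are hypothetical stand-ins for the relevant IVShmemState fields.

/*
 * Illustrative sketch only, not QEMU code: validate a doorbell
 * destination against the size of the peers array, as the patch
 * does with "dest >= s->nb_peers".
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int nb_peers;   /* how many guests we have space for */
} MiniState;

/* A destination ID is acceptable when it indexes a slot inside the
 * peers array; some slots may be empty because VMs come and go, but
 * anything at or beyond nb_peers is certainly out of range. */
static bool dest_id_in_range(const MiniState *s, int dest)
{
    return dest >= 0 && dest < s->nb_peers;
}

int main(void)
{
    MiniState s = { .nb_peers = 4 };

    for (int dest = 0; dest <= 4; dest++) {
        printf("dest %d: %s\n", dest,
               dest_id_in_range(&s, dest) ? "ok" : "invalid destination");
    }
    return 0;
}

The design point, per the commit message, is that the peers array size is already the natural bound: because entries can be sparse, a "maximum seen VM ID" is no more precise than nb_peers in practice.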