Date: Wed, 16 Sep 2015 11:39:02 +0200
From: Claudio Fontana
To: marcandre.lureau@redhat.com, qemu-devel@nongnu.org
Cc: drjones@redhat.com, cam@cs.ualberta.ca, stefanha@redhat.com
Message-ID: <55F938B6.1060008@huawei.com>
In-Reply-To: <1442333283-13119-17-git-send-email-marcandre.lureau@redhat.com>
References: <1442333283-13119-1-git-send-email-marcandre.lureau@redhat.com>
 <1442333283-13119-17-git-send-email-marcandre.lureau@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v3 16/46] ivshmem: remove max_peer field

On 15.09.2015 18:07, marcandre.lureau@redhat.com wrote:
> From: Marc-André Lureau
> 
> max_peer isn't really useful: it tracks the maximum received VM id, but
> that quickly matches nb_peers, the size of the peers array. Since VMs
> come and go, there might be sparse peers, so it doesn't help much in
> general to have this value around.

Does this max_peer provide any value if VMs _don't_ come and go?
Not that I see any.

> 
> Signed-off-by: Marc-André Lureau
> ---
>  hw/misc/ivshmem.c | 10 +---------
>  1 file changed, 1 insertion(+), 9 deletions(-)
> 
> diff --git a/hw/misc/ivshmem.c b/hw/misc/ivshmem.c
> index 07f2182..cda7dce 100644
> --- a/hw/misc/ivshmem.c
> +++ b/hw/misc/ivshmem.c
> @@ -90,7 +90,6 @@ typedef struct IVShmemState {
>  
>      Peer *peers;
>      int nb_peers;               /* how many guests we have space for */
> -    int max_peer;               /* maximum numbered peer */
>  
>      int vm_id;
>      uint32_t vectors;
> @@ -200,7 +199,7 @@ static void ivshmem_io_write(void *opaque, hwaddr addr,
>  
>      case DOORBELL:
>          /* check that dest VM ID is reasonable */
> -        if (dest > s->max_peer) {
> +        if (dest >= s->nb_peers) {
>              IVSHMEM_DPRINTF("Invalid destination VM ID (%d)\n", dest);
>              break;
>          }
> @@ -574,11 +573,6 @@ static void ivshmem_read(void *opaque, const uint8_t *buf, int size)
>      /* increment count for particular guest */
>      s->peers[incoming_posn].nb_eventfds++;
>  
> -    /* keep track of the maximum VM ID */
> -    if (incoming_posn > s->max_peer) {
> -        s->max_peer = incoming_posn;
> -    }
> -
>      if (incoming_posn == s->vm_id) {
>          s->eventfd_chr[guest_max_eventfd] = create_eventfd_chr_device(s,
>                    &s->peers[s->vm_id].eventfds[guest_max_eventfd],
> @@ -721,8 +715,6 @@ static void pci_ivshmem_realize(PCIDevice *dev, Error **errp)
>          PCI_BASE_ADDRESS_MEM_PREFETCH;;
>      Error *local_err = NULL;
>  
> -    s->max_peer = -1;
> -
>      if (s->sizearg == NULL) {
>          s->ivshmem_size = 4 << 20; /* 4 MB default */
>      } else {
> 
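
For what it's worth, here is a small standalone sketch of how the two
bounds checks differ when the peers array is sparsely populated. The
struct, field values and helper names below are made up for
illustration (this is not the ivshmem code); only the two comparisons
mirror the DOORBELL hunk above.

/* sketch.c - contrast the old max_peer check with the new nb_peers check.
 * Hypothetical state: 16 slots allocated, highest VM id seen so far is 3. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int nb_peers;   /* size of the peers array */
    int max_peer;   /* highest VM id seen so far (removed by the patch) */
} SketchState;

/* Old DOORBELL check: reject ids above the highest id seen so far. */
static bool dest_ok_old(const SketchState *s, int dest)
{
    return dest <= s->max_peer;
}

/* New DOORBELL check: only reject ids outside the peers array. */
static bool dest_ok_new(const SketchState *s, int dest)
{
    return dest < s->nb_peers;
}

int main(void)
{
    SketchState s = { .nb_peers = 16, .max_peer = 3 };
    int ids[] = { 0, 3, 4, 15, 16 };

    for (size_t i = 0; i < sizeof(ids) / sizeof(ids[0]); i++) {
        printf("dest=%2d  old:%-6s  new:%s\n", ids[i],
               dest_ok_old(&s, ids[i]) ? "ok" : "reject",
               dest_ok_new(&s, ids[i]) ? "ok" : "reject");
    }
    return 0;
}

With those values the old check rejects doorbells to ids 4..15 even
though they fit in the array, while the new check only rejects id 16;
once every slot has been seen (and if peers never go away) the two
agree, which is the case I was asking about.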