Date: Thu, 17 Mar 2016 21:00:52 +1100
From: David Gibson
Message-ID: <20160317100052.GR9032@voom>
References: <1458130611-17304-1-git-send-email-thuth@redhat.com> <1458130611-17304-3-git-send-email-thuth@redhat.com> <20160317062307.GO9032@voom> <56EA5D07.3000706@redhat.com>
In-Reply-To: <56EA5D07.3000706@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 2/3] hw/net/spapr_llan: Fix receive buffer handling for better performance
To: Thomas Huth
Cc: lvivier@redhat.com, Alexey Kardashevskiy, Jason Wang, qemu-devel@nongnu.org, Alexander Graf, qemu-ppc@nongnu.org, Anton Blanchard

On Thu, Mar 17, 2016 at 08:30:15AM +0100, Thomas Huth wrote:
> On 17.03.2016 07:23, David Gibson wrote:
> > On Wed, Mar 16, 2016 at 01:16:50PM +0100, Thomas Huth wrote:
> >>
> >> This patch introduces an alternate way of handling the receive
> >> buffers of the spapr-vlan device, resulting in much better
> >> receive performance for the guest.
> [...]
> >> Though it seems at first glance like PAPR says that we should store
> >> the receive buffer descriptors in the page that is supplied during
> >> the H_REGISTER_LOGICAL_LAN call, chapter 16.4.1.2 in the LoPAPR spec
> >> declares that "the contents of these descriptors are architecturally
> >> opaque, none of these descriptors are manipulated by code above
> >> the architected interfaces".
> >
> > Aaaahhh!  I remember back when I first implemented this, that exposing
> > the pool of buffer descriptors via DMA seemed a silly and pointless
> > thing to do, but I did so because I thought that's what the spec said.
> >
> > 16.4.1.2 seems to make it clearer that the page doesn't list actual Rx
> > buffers, but rather opaque handles on internal buffer pools.
> >
> > I don't know if I just misread this back in 2011 (or whenever it was)
> > or if the PAPR wording was less clear at the time.
> >
> > I note that you don't actually put the buffer pool pointers into that
> > page in your patch below.  I don't think that matters now, but I
> > wonder if we'd ever want to implement H_MIGRATE_DMA and whether we'd
> > need it in that case.
>
> I also thought about putting the pointers to the pools into that page.
> But: if we put buffer pool pointers into that page, where should the
> buffer pools be located?  Still in the memory of the hypervisor?  That
> sounds like a very bad design: the guest could then tinker with
> pointers to host memory, causing very bad side effects or crashes.
> Or should the buffer pools be located in guest memory?  That might be
> OK, but how would the pools get allocated in that case?
>
> So unless you've got a good idea here, I think it's better to keep the
> pointer list and the buffer pools both in host memory for now.

Yes, I think you're right.

> [...]
> >> +/**
> >> + * Enqueuing receive buffer by adding it to one of our receive buffer pools
> >> + */
> >> +static target_long spapr_vlan_add_rxbuf_to_pool(VIOsPAPRVLANDevice *dev,
> >> +                                                target_ulong buf)
> >> +{
> >> +    int size = VLAN_BD_LEN(buf);
> >> +    int pool;
> >> +
> >> +    pool = spapr_vlan_get_rx_pool_id(dev, size);
> >> +
> >> +    /* No matching pool found? Try to create a new one */
> >> +    if (pool < 0) {
> >> +        for (pool = RX_MAX_POOLS - 1; pool >= 0; pool--) {
> >
> > I don't think this loop actually accomplishes anything.  Either the
> > last slot is free, in which case you use it, then sort into place, or
> > it's not, in which case you've hit the maximum number of buffer pools.
>
> Oh, you're right.  Well spotted!  I'll rework my patch to do it without
> that loop.
>
>  Thomas

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson