From: David Gibson <david@gibson.dropbear.id.au>
To: Thomas Huth <thuth@redhat.com>
Cc: lvivier@redhat.com, Alexey Kardashevskiy <aik@ozlabs.ru>,
	Jason Wang <jasowang@redhat.com>,
	qemu-devel@nongnu.org, Alexander Graf <agraf@suse.de>,
	qemu-ppc@nongnu.org, Anton Blanchard <anton@samba.org>
Subject: Re: [Qemu-devel] [PATCH 2/3] hw/net/spapr_llan: Fix receive buffer handling for better performance
Date: Thu, 17 Mar 2016 21:00:52 +1100
Message-ID: <20160317100052.GR9032@voom>
In-Reply-To: <56EA5D07.3000706@redhat.com>

On Thu, Mar 17, 2016 at 08:30:15AM +0100, Thomas Huth wrote:
> On 17.03.2016 07:23, David Gibson wrote:
> > On Wed, Mar 16, 2016 at 01:16:50PM +0100, Thomas Huth wrote:
> >>
> >> This patch introduces an alternate way of handling the receive
> >> buffers of the spapr-vlan device, resulting in much better
> >> receive performance for the guest.
> [...]
> >> Though it seems at first glance like PAPR says that we should store
> >> the receive buffer descriptors in the page that is supplied during
> >> the H_REGISTER_LOGICAL_LAN call, chapter 16.4.1.2 in the LoPAPR spec
> >> declares that "the contents of these descriptors are architecturally
> >> opaque, none of these descriptors are manipulated by code above
> >> the architected interfaces".
> > 
> > Aaaahhh!  I remember back when I first implemented this, that exposing
> > the pool of buffer descriptors via DMA seemed a silly and pointless
> > thing to do, but I did so because I thought that's what the spec said.
> > 
> > 16.4.1.2 seems to make it clearer that the page doesn't list actual Rx
> > buffers, but rather opaque handles on internal buffer pools.
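
(To make "opaque" concrete: the only architected object here is the
buffer descriptor itself - one 64-bit word with a valid bit, a 24-bit
length and the buffer's guest physical address.  Decoding it looks
roughly like this; masks quoted from memory, so treat this as a
sketch rather than gospel:

    #define VLAN_BD_VALID    0x8000000000000000ULL
    #define VLAN_BD_LEN_MASK 0x00ffffff00000000ULL
    #define VLAN_BD_LEN(bd)  (((bd) & VLAN_BD_LEN_MASK) >> 32)
    /* low-order bits: guest physical address of the buffer */

The handles in the registered page, by contrast, can be whatever
opaque cookies the hypervisor likes.)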
> > 
> > I don't know if I just misread this back in 2011 (or whenever it was)
> > or if the PAPR wording was simply less clear at the time.
> > 
> > I note that you don't actually put the buffer pool pointers into that
> > page in your patch below.  I don't think that matters now, but I
> > wonder if we'd ever want to implement H_MIGRATE_DMA and if we'd need
> > it in that case.
> 
> I also thought about putting the pointers to the pools into that page.
> But: If we put buffer pool pointers into that page, where should the
> buffer pools be located? Still in the memory of the hypervisor? Then
> this sounds like a very baaad design: the guest could then tinker with
> pointers to host memory, causing very bad side effects or crashes.
> Or should the buffer pools be located in guest memory? That might be OK,
> but how do the pools get allocated in that case?
> 
> So unless you've got a good idea here, I think it's better to keep the
> pointer list and the buffer pools both in host memory for now.

Yes, I think you're right.
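
FWIW, keeping both the pointer list and the pools host-side means the
whole thing is just private device state, roughly like this (a sketch
only - I'm assuming the per-pool capacity and the field names, and
that vlan_bd_t is just the raw 64-bit descriptor):

    #include <stdint.h>

    #define RX_MAX_POOLS    5
    #define RX_POOL_MAX_BDS 4096    /* assumed capacity per pool */

    typedef uint64_t vlan_bd_t;

    typedef struct RxBufPool {
        int bufsize;                     /* buffer size this pool serves */
        int count;                       /* descriptors currently queued */
        vlan_bd_t bds[RX_POOL_MAX_BDS];  /* guest-supplied rx descriptors */
    } RxBufPool;

    /* in VIOsPAPRVLANDevice, kept sorted by bufsize: */
    RxBufPool *rx_pool[RX_MAX_POOLS];

None of it is guest-visible, so there's no host pointer for the guest
to tinker with.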

> [...]
> >> +/**
> >> + * Enqueue a receive buffer by adding it to one of our receive buffer pools
> >> + */
> >> +static target_long spapr_vlan_add_rxbuf_to_pool(VIOsPAPRVLANDevice *dev,
> >> +                                                target_ulong buf)
> >> +{
> >> +    int size = VLAN_BD_LEN(buf);
> >> +    int pool;
> >> +
> >> +    pool = spapr_vlan_get_rx_pool_id(dev, size);
> >> +
> >> +    /* No matching pool found? Try to create a new one */
> >> +    if (pool < 0) {
> >> +        for (pool = RX_MAX_POOLS - 1; pool >= 0 ; pool--) {
> > 
> > I don't think this loop actually accomplishes anything.  Either the
> > last slot is free, in which case you use it, then sort into place, or
> > it's not, in which case you've hit the maximum number of buffer pools.
> 
> Oh, you're right. Well spotted! I'll rework my patch to do it without
> that loop.
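
Something like this, perhaps (untested sketch - I'm guessing the
rx_pool/bufsize/count names from your patch and assuming a qsort
comparator along the lines of the one used for the pool lookup):

    if (pool < 0) {
        /* Last slot still in use?  Then all pool slots are taken. */
        if (dev->rx_pool[RX_MAX_POOLS - 1]->count != 0) {
            return H_RESOURCE;
        }
        /* Claim the free last slot for the new buffer size... */
        dev->rx_pool[RX_MAX_POOLS - 1]->bufsize = size;
        /* ...and re-sort the array so lookups by size keep working */
        qsort(dev->rx_pool, RX_MAX_POOLS, sizeof(dev->rx_pool[0]),
              rx_pool_size_compare);
        pool = spapr_vlan_get_rx_pool_id(dev, size);
    }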
> 
>  Thomas
> 
-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

Thread overview: 15+ messages
2016-03-16 12:16 [Qemu-devel] [PATCH 0/3] hw/net/spapr_llan: Fix bad RX performance of the spapr-vlan device Thomas Huth
2016-03-16 12:16 ` [Qemu-devel] [PATCH 1/3] hw/net/spapr_llan: Extract rx buffer code into separate functions Thomas Huth
2016-03-17  6:23   ` David Gibson
2016-03-18 11:53   ` Laurent Vivier
2016-03-16 12:16 ` [Qemu-devel] [PATCH 2/3] hw/net/spapr_llan: Fix receive buffer handling for better performance Thomas Huth
2016-03-17  6:23   ` David Gibson
2016-03-17  7:30     ` Thomas Huth
2016-03-17 10:00       ` David Gibson [this message]
2016-03-17 15:15       ` Thomas Huth
2016-03-17 22:33         ` David Gibson
2016-03-18  7:56           ` Thomas Huth
2016-03-20  4:21             ` David Gibson
2016-03-16 12:16 ` [Qemu-devel] [PATCH 3/3] hw/net/spapr_llan: Enable the RX buffer pools by default for new machines Thomas Huth
2016-03-17  6:27   ` David Gibson
2016-03-18 13:15   ` [Qemu-devel] [Qemu-ppc] " Laurent Vivier
