From: David Gibson <david@gibson.dropbear.id.au>
To: Thomas Huth <thuth@redhat.com>
Cc: lvivier@redhat.com, Alexey Kardashevskiy <aik@ozlabs.ru>,
	Jason Wang <jasowang@redhat.com>,
	qemu-devel@nongnu.org, Alexander Graf <agraf@suse.de>,
	qemu-ppc@nongnu.org, Anton Blanchard <anton@samba.org>
Subject: Re: [Qemu-devel] [PATCH 2/3] hw/net/spapr_llan: Fix receive buffer handling for better performance
Date: Sun, 20 Mar 2016 15:21:24 +1100	[thread overview]
Message-ID: <20160320042124.GZ9032@voom>
In-Reply-To: <56EBB4C8.3030302@redhat.com>

On Fri, Mar 18, 2016 at 08:56:56AM +0100, Thomas Huth wrote:
> On 17.03.2016 23:33, David Gibson wrote:
> > On Thu, Mar 17, 2016 at 04:15:38PM +0100, Thomas Huth wrote:
> >> On 17.03.2016 08:30, Thomas Huth wrote:
> >>> On 17.03.2016 07:23, David Gibson wrote:
> >>>> On Wed, Mar 16, 2016 at 01:16:50PM +0100, Thomas Huth wrote:
> >>>>>
> >>>>> This patch introduces an alternate way of handling the receive
> >>>>> buffers of the spapr-vlan device, resulting in much better
> >>>>> receive performance for the guest.
> >> [...]
> >>>>> +/**
> >>>>> + * Enqueuing receive buffer by adding it to one of our receive buffer pools
> >>>>> + */
> >>>>> +static target_long spapr_vlan_add_rxbuf_to_pool(VIOsPAPRVLANDevice *dev,
> >>>>> +                                                target_ulong buf)
> >>>>> +{
> >>>>> +    int size = VLAN_BD_LEN(buf);
> >>>>> +    int pool;
> >>>>> +
> >>>>> +    pool = spapr_vlan_get_rx_pool_id(dev, size);
> >>>>> +
> >>>>> +    /* No matching pool found? Try to create a new one */
> >>>>> +    if (pool < 0) {
> >>>>> +        for (pool = RX_MAX_POOLS - 1; pool >= 0 ; pool--) {
> >>>>
> >>>> I don't think this loop actually accomplishes anything.  Either the
> >>>> last slot is free, in which case you use it, then sort into place, or
> >>>> it's not, in which case you've hit the maximum number of buffer pools.
> >>>
> >>> Oh, you're right. Well spotted! I'll rework my patch to do it without
> >>> that loop.
> >>
> >> Wait, no, there is a case where this loop really is required:
> >>
> >> 1) All pools are in use and filled with at least one BD
> >> 2) The user in the guest suddenly decides to change the buffer size
> >>    of one of the pools via the guest's /sys filesystem.
> >> 3) The guest driver tries to add buffers with a new size that does
> >>    not match the size of any of the existing pools on the host.
> >> 4) Once the pool on the host that contained the BDs of the size that
> >>    is no longer in use runs empty, we should recycle that pool for
> >>    the buffers with the new size instead. Since that buffer pool
> >>    might not be at the end of the list, we've got to scan all pools
> >>    here to make sure we find it.
> >>
> >> So I think the for-loop should stay as it is.
> > 
> > Ah, good point.  I think I was assuming that the pools got sorted when
> > one was emptied as well, but they're not and I suspect it's not a good
> > idea to do so.
> > 
> > Hmm.. I wonder if there's a brief way of explaining the above to put
> > in the comment.
> 
> Something like:
> 
>     /*
>      * If the guest used all pools, but changed the size of one pool
>      * in between, we might need to recycle that pool here (if it has
>      * already been emptied). Thus we need to scan all buffer pools
>      * here, not only the last one (which has the highest probability
>      * of being empty).
>      */
> 
> ?
> 
> Or is that too verbose already?

Eh, it's written; might as well throw it in.
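
For reference, a rough standalone sketch of the case in question.  The
pool layout below (the bufsize/count fields, the RX_MAX_POOLS value and
the helper name) is made up for illustration, not lifted from the patch:

#include <stdint.h>

#define RX_MAX_POOLS 5          /* arbitrary value for this sketch */

/* Illustrative pool layout -- these field names are assumptions,
 * not the ones from the patch. */
typedef struct {
    int32_t bufsize;            /* buffer size this pool serves */
    int32_t count;              /* buffer descriptors currently queued */
} RxPoolSketch;

typedef struct {
    /* pools kept sorted so the emptiest one sits last,
     * per the comment proposed above */
    RxPoolSketch *rx_pool[RX_MAX_POOLS];
} DevSketch;

/*
 * Recycle a pool for a new buffer size: any pool that has drained to
 * zero can be reused, and since it may sit anywhere in the count-sorted
 * array, every slot has to be checked, not just the last one.
 */
static int recycle_pool_for_size(DevSketch *dev, int size)
{
    int pool;

    for (pool = RX_MAX_POOLS - 1; pool >= 0; pool--) {
        if (dev->rx_pool[pool]->count == 0) {
            dev->rx_pool[pool]->bufsize = size;
            return pool;
        }
    }
    return -1;  /* every pool still holds buffers of some other size */
}

i.e. only a pool whose count has dropped back to zero gets recycled, and
because that pool can sit anywhere in the array, every slot has to be
checked rather than just the last one.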


-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

Thread overview: 15+ messages
2016-03-16 12:16 [Qemu-devel] [PATCH 0/3] hw/net/spapr_llan: Fix bad RX performance of the spapr-vlan device Thomas Huth
2016-03-16 12:16 ` [Qemu-devel] [PATCH 1/3] hw/net/spapr_llan: Extract rx buffer code into separate functions Thomas Huth
2016-03-17  6:23   ` David Gibson
2016-03-18 11:53   ` Laurent Vivier
2016-03-16 12:16 ` [Qemu-devel] [PATCH 2/3] hw/net/spapr_llan: Fix receive buffer handling for better performance Thomas Huth
2016-03-17  6:23   ` David Gibson
2016-03-17  7:30     ` Thomas Huth
2016-03-17 10:00       ` David Gibson
2016-03-17 15:15       ` Thomas Huth
2016-03-17 22:33         ` David Gibson
2016-03-18  7:56           ` Thomas Huth
2016-03-20  4:21             ` David Gibson [this message]
2016-03-16 12:16 ` [Qemu-devel] [PATCH 3/3] hw/net/spapr_llan: Enable the RX buffer pools by default for new machines Thomas Huth
2016-03-17  6:27   ` David Gibson
2016-03-18 13:15   ` [Qemu-devel] [Qemu-ppc] " Laurent Vivier
