public inbox for linux-kernel@vger.kernel.org
From: Ben McKeegan <ben@netservers.co.uk>
To: "Paoloni, Gabriele" <gabriele.paoloni@intel.com>
Cc: "davem@davemloft.net" <davem@davemloft.net>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"alan@lxorguk.ukuu.org.uk" <alan@lxorguk.ukuu.org.uk>,
	"linux-ppp@vger.kernel.org" <linux-ppp@vger.kernel.org>,
	"paulus@samba.org" <paulus@samba.org>
Subject: Re: [PATCH] ppp_generic: fix multilink fragment sizes
Date: Wed, 02 Jun 2010 16:55:35 +0100	[thread overview]
Message-ID: <4C067EF7.9040609@netservers.co.uk> (raw)
In-Reply-To: <DF7BB929B28FCF479E888E3D9F8D9E88D3E16B5A@irsmsx502.ger.corp.intel.com>

Paoloni, Gabriele wrote:
> The proposed patch looks wrong to me.
> 
> nbigger is already doing the job; I didn't use DIV_ROUND_UP because in general we don't always have to round up, otherwise we would exceed the total bandwidth.

I was basing this on the original code prior to your patch, which used 
DIV_ROUND_UP to get the fragment size.  Looking more closely, I see your 
point: the original code started with the larger fragment size and 
decremented, rather than starting with the smaller size and incrementing 
as your code does, so that makes sense.


> 
>  		flen = len;
>  		if (nfree > 0) {
>  			if (pch->speed == 0) {
> -				flen = totlen/nfree;
> +				if (nfree > 1)
> +					flen = DIV_ROUND_UP(len, nfree);
>  				if (nbigger > 0) {
>  					flen++;
>  					nbigger--;

The important change here is the use of 'len' instead of 'totlen'. 
'nfree' and 'len' should decrease roughly proportionally with each 
iteration of the loop, whereas 'totlen' remains unchanged.  Thus 
(totlen/nfree) gets bigger on each iteration, whereas len/nfree should 
stay roughly the same.  However, I'm not sure the logic is right without 
rounding up here either, since the side effect of nbigger is to make 
len decrease faster, so it is not quite proportional to the decrease in 
nfree.  Is there a risk of ending up on the nfree == 1 iteration with 
flen == len - 1, and thus generating a superfluous extra 1-byte-long 
fragment?  That would be a far worse situation than a slight imbalance 
in the size of the fragments.

Perhaps the solution is to go back to a precalculated fragment size for 
the pch->speed == 0 case, as per the original code?

Regards,
Ben.


Thread overview: 31+ messages
2010-03-26 15:50 [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
2010-03-26 16:02 ` Alan Cox
2010-03-26 16:33   ` Joe Perches
2010-03-26 16:39   ` Richard Hartmann
2010-03-26 16:59     ` David Miller
2010-03-26 17:04       ` David Miller
2010-03-26 17:04     ` James Carlson
2010-03-26 17:00   ` Alexander E. Patrakov
2010-03-26 17:04     ` Alan Cox
2010-03-31 10:03       ` Ben McKeegan
2010-05-29  2:16         ` Paul Mackerras
2010-05-29  9:06           ` Richard Hartmann
2010-05-31 13:39           ` Richard Hartmann
2010-05-31 16:20           ` Ben McKeegan
2010-06-02 14:55             ` Ben McKeegan
2010-06-02 15:04               ` [PATCH] ppp_generic: fix multilink fragment sizes Ben McKeegan
2010-06-02 15:17                 ` Paoloni, Gabriele
2010-06-02 15:31                   ` David Miller
2010-06-02 15:55                   ` Ben McKeegan [this message]
2010-06-03  8:41                     ` Paoloni, Gabriele
2010-06-03  9:14                       ` Ben McKeegan
2010-11-08 14:05               ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
2010-11-15 12:07                 ` Richard Hartmann
2010-06-01 10:20           ` Richard Hartmann
2010-06-01 11:18             ` Ben McKeegan
2010-06-01 11:28               ` Richard Hartmann
2010-06-01 22:15                 ` David Miller
2010-03-31  9:01 ` Richard Hartmann
2010-05-25  9:52 ` Richard Hartmann
     [not found]   ` <4BFBA3F2.2000301@bfs.de>
     [not found]     ` <AANLkTilnueP5HIfX03soMCYE93jubL000rpiOCN1xB2-@mail.gmail.com>
     [not found]       ` <4BFC0942.2030103@bfs.de>
2010-05-26  8:47         ` Richard Hartmann
2010-05-28  7:28           ` walter harms
