From: Mika Liljeberg <Mika.Liljeberg@welho.com>
To: kuznet@ms2.inr.ac.ru
Cc: ak@muc.de, davem@redhat.com, linux-kernel@vger.kernel.org
Subject: Re: TCP acking too fast
Date: Sun, 14 Oct 2001 21:48:41 +0300 [thread overview]
Message-ID: <3BC9DE09.747F45B2@welho.com> (raw)
In-Reply-To: <200110141820.WAA06484@ms2.inr.ac.ru>
kuznet@ms2.inr.ac.ru wrote:
>
> Hello!
>
> > The assumption is that the peer is implemented the way you expect and
> > that the application doesn't toy with TCP_NODELAY.
>
> Sorry??
>
> It is the most important _exactly_ for TCP_NODELAY, which
> generates lots of remnants.
I meant that with the application in control of packet size, you simply
can't make a reliable estimate of the maximum receive MSS unless our
assumption holds that only maximum-sized segments lack the PSH flag.
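Roughly, the estimator I have in mind looks like this (a toy sketch
only, not actual kernel code; the function name and shape are my own
invention):

```python
def update_rcv_mss(est_mss, seg_len, psh):
    """Update an estimate of the peer's send MSS from one incoming segment.

    est_mss: current estimate in bytes (0 if none yet)
    seg_len: payload length of the received segment
    psh:     True if the segment carried the PSH flag
    """
    if not psh and seg_len >= est_mss:
        # No PSH: by assumption this is a full-sized segment, so its
        # length is a good lower bound on the sender's MSS.
        return seg_len
    # PSH segments may be short application-limited remnants (e.g. with
    # TCP_NODELAY), so they must not shrink or define the estimate.
    return est_mss
```

The point is exactly that TCP_NODELAY breaks the assumption: an
application can push out segments of any size, so only the no-PSH case
gives usable information.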
> > Not really. You could do one of two things: either ack every second
> > segment
>
> I do not worry about this _at_ _all_. See?
> "each other", "each two mss" --- all this is red herring.
Whatever.
> I do understand your problem, which is not related to rcv_mss.
I know.
> When bandwidth in different directions differ more than 20 times,
> stretch ACKs are even preferred. Look into tcplw work, using stretch ACKs
> is even considered as something normal.
I know. It's a difficult tradeoff between saving bandwidth on the return
path, trying to maintain self clocking, and avoiding bursts caused by
ack compression.
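For reference, the baseline rule being traded off here is the RFC 1122
one: an ACK must be sent for at least every second full-sized segment,
and may otherwise be delayed up to a timer. A toy model (class name
invented, not kernel code):

```python
class DelayedAckPolicy:
    """Minimal sketch of the RFC 1122 delayed-ACK rule: acknowledge
    immediately once two full-sized segments' worth of data is
    unacknowledged, otherwise leave the ACK to the delayed-ACK timer."""

    def __init__(self, rcv_mss):
        self.rcv_mss = rcv_mss
        self.unacked = 0  # bytes received but not yet acknowledged

    def on_segment(self, seg_len):
        self.unacked += seg_len
        if self.unacked >= 2 * self.rcv_mss:
            self.unacked = 0
            return "ack-now"   # second full segment: ack immediately
        return "delay"         # wait for more data or the timer
```

Stretch ACKs relax this by acking less often than every second segment,
which saves return-path bandwidth at the cost of a burstier, less
well-clocked sender.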
> I really commiserate and think that removing "final cut" clause
> will help you.
Yes.
> But sending ACK on buffer drain at least for short
> packets is real demand, which cannot be relaxed.
Why? This one has me stumped.
> "final cut" is also better not to remove actually, but the case
> when it is required is probabilistically marginal.
>
> Alexey
Regards,
MikaL