From: John Ousterhout <ouster@cs.stanford.edu>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: netdev@vger.kernel.org
Subject: Re: GRO: can't force packet up stack immediately?
Date: Thu, 3 Dec 2020 11:52:30 -0800	[thread overview]
Message-ID: <CAGXJAmwEEnhX5KBvPZmwOKF_0hhVuGfvbXsoGR=+vB8bGge1sQ@mail.gmail.com> (raw)
In-Reply-To: <72f3ea21-b4bd-b5bd-f72f-be415598591f@gmail.com>

Homa uses GRO to collect batches of packets for protocol processing,
but there are times when it wants to push a batch of packets up through
the stack immediately (it doesn't want any more packets to be
processed at NAPI level before pushing the batch up). However, I can't
see a way to achieve this goal. I can return a packet pointer as the
result of homa_gro_receive (and this used to be sufficient to push the
packet up the stack). What happens now is:
* dev_gro_receive calls napi_gro_complete (same as before)
* napi_gro_complete calls gro_normal_one, whereas it used to call
netif_receive_skb_internal
* gro_normal_one just adds the packet to napi->rx_list.
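
For reference, the batching path in net/core/dev.c looks roughly like
this in 5.4 (paraphrased from memory, so it may not match 5.4.80
line-for-line):

    static void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb)
    {
            /* Queue the skb rather than delivering it immediately. */
            list_add_tail(&skb->list, &napi->rx_list);

            /* Only flush once the batch threshold has been reached. */
            if (++napi->rx_count >= gro_normal_batch)
                    gro_normal_list(napi);
    }

So unless gro_normal_batch packets happen to have accumulated already,
the skb returned from homa_gro_receive just sits on napi->rx_list.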

Then NAPI-level packet processing continues, until eventually
napi_complete_done is called; it invokes gro_normal_list, which calls
netif_receive_skb_list_internal.
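
Again paraphrasing the 5.4 source, that flush looks something like the
following; aside from the batch threshold in gro_normal_one, this is
the only place the queued packets get delivered:

    static void gro_normal_list(struct napi_struct *napi)
    {
            if (!napi->rx_count)
                    return;

            /* Deliver everything queued on rx_list in one batch. */
            netif_receive_skb_list_internal(&napi->rx_list);
            INIT_LIST_HEAD(&napi->rx_list);
            napi->rx_count = 0;
    }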

Because of this, packets can be delayed several microseconds before
they are pushed up the stack. Homa is trying to squeeze out latency,
so the extra delay is undesirable.

-John-

On Thu, Dec 3, 2020 at 11:35 AM Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
>
>
> On 12/3/20 8:03 PM, John Ousterhout wrote:
> > I recently upgraded my kernel module implementing the Homa transport
> > protocol from 4.15.18 to 5.4.80, and a GRO feature available in the
> > older version seems to have gone away in the newer version. In
> > particular, it used to be possible for a protocol's xxx_gro_receive
> > function to force a packet up the stack immediately by returning that
> > skb as the result of xxx_gro_receive. However, in the newer kernel
> > version, these packets simply get queued on napi->rx_list; the queue
> > doesn't get flushed up-stack until napi_complete_done is called or
> > gro_normal_batch packets accumulate. For Homa, this extra level of
> > queuing gets in the way.
>
>
> Could you describe what the issue is?
>
> >
> > Is there any way for a xxx_gro_receive function to force a packet (in
> > particular, one of those in the list passed as first argument to
> > xxx_gro_receive) up the protocol stack immediately? I suppose I could
> > set gro_normal_batch to 1, but that might interfere with other
> > protocols that really want the batching.
> >
> > -John-
> >
