From: Rick Jones <rick.jones2@hp.com>
To: netdev <netdev@oss.sgi.com>
Subject: Re: [RFC] netif_rx: receive path optimization
Date: Thu, 31 Mar 2005 16:07:54 -0800	[thread overview]
Message-ID: <424C90DA.7030600@hp.com> (raw)
In-Reply-To: <1112312206.1096.25.camel@jzny.localdomain>

>>>Note Linux is quite resilient to reordering compared to other OSes (as
>>>you may know) but avoiding this is a better approach - hence my
>>>suggestion to use NAPI when you want to do serious TCP.
>>
>>Would the same apply to NIC->CPU interrupt assignments? That is, bind the NIC to 
>>a single CPU.
>>
> 
> 
> No reordering there.

Ah, I wasn't clear - would someone doing serious TCP want to have the interrupts 
of a NIC go to a specific CPU?
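
For what it's worth, on Linux that binding can be done from user space by 
writing a CPU bitmask to /proc/irq/<irq>/smp_affinity.  A minimal sketch 
follows - the IRQ number and mask are just placeholders for whatever 
/proc/interrupts shows for the NIC in question:

/* Sketch only: pin a NIC's interrupt to one CPU by writing a hex
 * CPU mask to /proc/irq/<irq>/smp_affinity.  The IRQ number (19)
 * and the mask (0x2, i.e. CPU1) are placeholders - use whatever
 * /proc/interrupts reports for the NIC of interest. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        const char *path = "/proc/irq/19/smp_affinity";
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                return EXIT_FAILURE;
        }
        fprintf(f, "2\n");      /* bitmask: bit 1 set -> CPU1 only */
        fclose(f);
        return EXIT_SUCCESS;
}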

>>>Don't think we can do that unfortunately: we are screwed by the APIC
>>>architecture on x86.
>>
>>The IPS and TOPS stuff was/is post-NIC-interrupt. Low-level driver processing 
>>still happened/happens on a specific CPU; it is the higher-level processing 
>>which is done on another CPU.  The idea - with TOPS at least - is to access 
>>the ULP (TCP, UDP, etc.) structures on the same CPU as last accessed by the 
>>app, to minimize cache-to-cache migration.
>>
> 
> 
> But if the interrupt happens on the "wrong" CPU - and you decide higher-level
> processing is to be done on the "right" CPU (I assume by queueing on some
> per-CPU queue) - then isn't that expensive? Perhaps even IPIs involved?

More expensive than if one were lucky enough to have the interrupt land on the 
"right" CPU in the first place, but as the CPU count goes up, the chances of 
that go down.  The main idea behind TOPS, and IPS before it, was to spread the 
processing of packets across as many CPUs as we could, as "correctly" as we 
could.  Lots of small packets meant/means that a NIC could saturate its 
interrupt CPU before the NIC itself was saturated.  You don't necessarily see 
that on, say, single-instance netperf TCP_STREAM (or basic FTP) testing, but 
you certainly can on aggregate netperf TCP_RR testing.

IPS, being driven by the packet header info, was good enough for simple 
benchmarking, but once you had more than one connection per process/thread it 
wasn't going to cut it, and even with one connection per process, telling the 
process where it should run wasn't terribly easy :)   It wasn't _that_ much 
more expensive than the queueing already happening - IPS dates from when HP-UX 
networking was BSDish, and the CPU selection was done when things were being 
queued to the netisr queue(s).
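
To make the header-driven part concrete, the dispatch was conceptually along 
the lines of the sketch below: hash the connection identifiers from the packet 
header and use the hash to pick a per-CPU protocol queue, so every packet of a 
given connection lands on the same CPU.  This is not the HP-UX code - the 
struct and function names are made up for illustration:

/* Illustration only, not HP-UX IPS code.  Header-driven spreading:
 * hash the flow identifiers and use the result to choose which
 * CPU's protocol-processing queue the packet joins.  Packets of a
 * given connection always hash to the same CPU, preserving order
 * within that connection. */
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 4                       /* assumed CPU count */

struct pkt_hdr {                        /* hypothetical parsed header */
        uint32_t saddr, daddr;
        uint16_t sport, dport;
};

static unsigned int pick_cpu(const struct pkt_hdr *h)
{
        uint32_t hash = h->saddr ^ h->daddr ^
                        (((uint32_t)h->sport << 16) | h->dport);

        hash ^= hash >> 16;             /* fold the high bits down */
        return hash % NR_CPUS;          /* index of the per-CPU queue */
}

int main(void)
{
        struct pkt_hdr h = { 0x0a000001, 0x0a000002, 12345, 80 };

        printf("connection hashes to CPU %u\n", pick_cpu(&h));
        return 0;
}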

TOPS lets the process (I suppose the scheduler really) decide where some of the 
processing for the packet will happen - the part after the handoff.
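
Put another way, the per-connection state carries a note of which CPU the 
application last touched the socket from, and the inbound handoff follows that 
note instead of a header hash.  A toy sketch of that bookkeeping - again, not 
the actual HP-UX code, and the names are invented:

/* Toy illustration of the TOPS notion, not actual HP-UX code.  The
 * protocol control block remembers which CPU the application last
 * ran on when it touched the socket; inbound packets are then handed
 * off to that CPU for the upper-layer (TCP/UDP) processing, so the
 * protocol state stays warm in that CPU's cache. */
#include <stdio.h>

struct conn_pcb {
        int last_app_cpu;       /* CPU the app last used this socket on */
        /* ... protocol state ... */
};

/* Called from the socket layer whenever the application reads/writes. */
static void note_app_cpu(struct conn_pcb *pcb, int current_cpu)
{
        pcb->last_app_cpu = current_cpu;
}

/* Called on the inbound path after the low-level (driver) work: pick
 * the CPU that should do the ULP processing for this packet. */
static int ulp_target_cpu(const struct conn_pcb *pcb)
{
        return pcb->last_app_cpu;
}

int main(void)
{
        struct conn_pcb pcb = { .last_app_cpu = 0 };

        note_app_cpu(&pcb, 3);          /* app last ran on CPU 3 */
        printf("hand ULP processing to CPU %d\n", ulp_target_cpu(&pcb));
        return 0;
}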

rick

Thread overview: 24+ messages
2005-03-30 21:28 [PATCH] netif_rx: receive path optimization Stephen Hemminger
2005-03-30 21:57 ` jamal
2005-03-30 22:08   ` jamal
2005-03-30 23:53   ` Stephen Hemminger
2005-03-31  3:16     ` jamal
2005-03-31 20:04 ` [RFC] " Stephen Hemminger
2005-03-31 21:10   ` Jamal Hadi Salim
2005-03-31 21:17     ` Stephen Hemminger
2005-03-31 21:25       ` Jamal Hadi Salim
2005-03-31 21:43       ` Eric Lemoine
2005-03-31 22:02         ` Stephen Hemminger
2005-03-31 21:24     ` Rick Jones
2005-03-31 21:38       ` jamal
2005-03-31 22:42         ` Rick Jones
2005-03-31 23:03           ` Nivedita Singhvi
2005-03-31 23:28             ` Rick Jones
2005-04-01  0:10               ` Stephen Hemminger
2005-04-01  0:42                 ` Rick Jones
2005-04-01  0:30               ` Nivedita Singhvi
2005-03-31 23:36           ` jamal
2005-04-01  0:07             ` Rick Jones [this message]
2005-04-01  1:17               ` jamal
2005-04-01 18:22                 ` Rick Jones
2005-04-01 16:40       ` Andi Kleen
