From: Rick Jones <rick.jones2@hp.com>
To: Mark Ryden <markryde@gmail.com>
Cc: netdev@vger.kernel.org
Subject: Re: Two Dual Core processors and NICs (not handling interrupts on one CPU / assigning a CPU to a NIC)
Date: Tue, 16 Jan 2007 09:34:34 -0800
Message-ID: <45AD0CAA.7060601@hp.com>
In-Reply-To: <dac45060701150115y5308f7b5ka44f2ea4ae304d4b@mail.gmail.com>

Mark Ryden wrote:
> Hello,
> 
> 
> I have a machine with 2 dual core CPUs. This machine runs Fedora Core 6.
> I have two Intel e1000 Gigabit network cards in this machine; I use
> bonding so that the machine assigns the same IP address to both NICs.
> It seems to me that bonding is configured OK, because when running:
> "cat /proc/net/bonding/bond0"
> I get:
> ...
> Permanent HW addr: ....
> 
> (And the Permanent HW addr is different in these two entries.)
> 
> I send a large number of packets to this machine (more than 20,000
> per second).

Well, 20K a second is large in some contexts, but not in others :)
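
For context, one crude way to measure the actual receive rate is to
difference the packet counters in /proc/net/dev (a minimal sketch;
"eth0" is taken from the /proc/interrupts output quoted below):

    # print received packets per second for eth0
    while true; do
        p0=$(awk -F'[: ]+' '$2 == "eth0" {print $4}' /proc/net/dev)
        sleep 1
        p1=$(awk -F'[: ]+' '$2 == "eth0" {print $4}' /proc/net/dev)
        echo "$((p1 - p0)) rx pkts/sec"
    done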
> 
> cat /proc/interrupts shows something like this:
>                 CPU0       CPU1         CPU2         CPU3
> 50:    3359337          0          0          0         PCI-MSI  eth0
> 58:         49    3396136          0          0         PCI-MSI  eth1
> 
> CPU0 and CPU1 belong to the first physical CPU as far as I
> understand; this means that the second CPU (which has CPU2 and CPU3)
> does not handle interrupts for the arriving packets. Can I somehow
> change this so that the second CPU will also handle network
> interrupts for packets received on the NIC?

Actually, those could be different chips - it depends on the CPUs I
think, and I suppose the BIOS/OS.  On a Woodcrest system with which
I've been playing, CPUs 0 and 2 appear to be on the same die, and then
1 and 3.  I ass-u-me-d the numbering was done that way to get maximum
processor cache when saying "numcpu=N" for something less than the
number of cores in the system.
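
Rather than guessing, the package/core layout can be read straight out
of sysfs on reasonably recent 2.6 kernels (a minimal sketch; assumes
the kernel exposes CPU topology under /sys):

    # which physical package does each logical CPU sit in?
    for c in /sys/devices/system/cpu/cpu[0-9]*; do
        echo "$c -> package $(cat $c/topology/physical_package_id)"
    done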

NUMA considerations might come into play if this is Opteron (well, any
NUMA system really - larger IA64s, certain SPARC and Power systems,
etc.).  In broad handwaving terms, one is better off with the NICs'
interrupts being handled by the topologically closest CPU.  (Not that
some irqbalancer programs recognize that just yet :)
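
Where a NIC actually sits can sometimes be checked too (a sketch; the
numa_node attribute is only present on kernels that export the PCI
device's node, and numactl is assumed to be installed):

    # which NUMA node is eth0's PCI device attached to? (-1 = unknown)
    cat /sys/class/net/eth0/device/numa_node
    # and which CPUs belong to which node
    numactl --hardware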

Now, if both CPU0 and CPU1 are saturated it might make sense to put some 
interrupts on 2 and/or 3.  One of those fun "it depends" situations.
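
If you do want to experiment, the IRQ-to-CPU binding can be changed
from userspace (a minimal sketch; IRQ 58 is taken from the
/proc/interrupts output above, and the value written is a hex bitmask
of the CPUs allowed to service that interrupt):

    # steer eth1's interrupts (IRQ 58) to CPUs 2 and 3 (binary 1100 = 0xc)
    echo c > /proc/irq/58/smp_affinity
    # verify the kernel accepted the new mask
    cat /proc/irq/58/smp_affinity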

rick jones
