public inbox for netdev@vger.kernel.org
From: Shawn Bohrer <sbohrer@rgmadvisors.com>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: netdev@vger.kernel.org
Subject: Re: Heavy spin_lock contention in __udp4_lib_mcast_deliver increase
Date: Thu, 26 Apr 2012 16:44:05 -0500	[thread overview]
Message-ID: <20120426214405.GF2479@BohrerMBP.rgmadvisors.com> (raw)
In-Reply-To: <1335457888.2775.55.camel@edumazet-glaptop>

On Thu, Apr 26, 2012 at 06:31:28PM +0200, Eric Dumazet wrote:
> On Thu, 2012-04-26 at 11:28 -0500, Shawn Bohrer wrote:
> 
> > No in this case it is 300 unique multicast addresses, and there is one
> > socket listening to each multicast address.  So a single message is
> > only copied once to a single socket.  The bottleneck appears to be
> > that even though a single message is only going to get copied to a
> > single socket we still have to walk the list of all 300 sockets while
> > holding the spin lock to figure that out.  The incoming packet rate is
> > also roughly evenly distributed across all 300 multicast addresses so
> > even though we have multiple receive queues they are all contending
> > for the same spin lock.
> > 
> 
> I repeat my question : Are these 300 sockets bound to the same UDP
> port ?
> 
> If not, they should be spread across the hash table.
> 
> You can make this hash table very big to reduce hash collisions
> 
> Boot parameter : uhash_entries=65536

Thanks Eric, I don't know how I missed this.  In my test all 300
sockets were bound to the same UDP port so they were all falling into
the same bucket.  Switching the test to use unique ports solves the
issue.

I didn't try your other patch to increase the stack size up to 512
sockets because I don't think we need it.  We rarely have more than a
single socket per machine receiving packets on a multicast address so
I think the current stack size is sufficient for us.  Or perhaps once
again I may be misunderstanding the purpose of that patch.

--
Shawn


      reply	other threads:[~2012-04-26 21:44 UTC|newest]

Thread overview: 7+ messages
2012-04-26 15:15 Heavy spin_lock contention in __udp4_lib_mcast_deliver increase Shawn Bohrer
2012-04-26 15:53 ` Eric Dumazet
2012-04-26 16:18   ` Eric Dumazet
2012-04-26 16:21     ` Eric Dumazet
2012-04-26 16:28   ` Shawn Bohrer
2012-04-26 16:31     ` Eric Dumazet
2012-04-26 21:44       ` Shawn Bohrer [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20120426214405.GF2479@BohrerMBP.rgmadvisors.com \
    --to=sbohrer@rgmadvisors.com \
    --cc=eric.dumazet@gmail.com \
    --cc=netdev@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
