netdev.vger.kernel.org archive mirror
From: Matheos Worku <Matheos.Worku@Sun.COM>
To: hadi@cyberus.ca
Cc: Herbert Xu <herbert@gondor.apana.org.au>,
	David Miller <davem@davemloft.net>,
	jesse.brandeburg@intel.com, jarkao2@gmail.com,
	netdev@vger.kernel.org
Subject: Re: 2.6.24 BUG: soft lockup - CPU#X
Date: Fri, 28 Mar 2008 10:00:15 -0700	[thread overview]
Message-ID: <47ED241F.9080003@sun.com> (raw)
In-Reply-To: <1206700389.4429.34.camel@localhost>

jamal wrote:
> On Thu, 2008-27-03 at 18:58 -0700, Matheos Worku wrote:
>
>   
>> In general, while the TX serialization improves performance in terms of 
>> lock contention, wouldn't it reduce throughput since only one CPU is 
>> doing the actual TX at any given time?  Wondering if it would be 
>> worthwhile to have an enable/disable option, especially for multi-queue TX.
>>     
>
> Empirical evidence so far says at some point the bottleneck is going to
> be the wire, i.e. modern CPUs are "fast enough" that sooner or later
> they will fill up the transmitting driver's DMA ring and go back to
> doing other things. 
>   

> It is hard to create the condition you seem to have come across. I had
> access to a dual core opteron but found it very hard with parallel UDP
> sessions to keep the TX CPU locked in that region (while the other 3
> were busy pumping packets). My folly could have been that I had a GigE
> wire and maybe a 10G would have recreated the condition. 
> If you can reproduce this at will, can you try to reduce the number of
> sending TX u/iperfs and see when it begins to happen?
> Are all the iperfs destined out of the same netdevice?
>   
I am using a 10G NIC at this time. With the same driver, I haven't come 
across the lockup on a 1G NIC, though I haven't really tried to reproduce 
it. Regarding the number of connections it takes to create the 
situation, I have noticed the lockup at 3 or more UDP connections. 
Also, with TSO disabled, I have come across it with lots of TCP connections.


> [Typically the TX path on the driver side is inefficient either because
> of coding (ex: unnecessary locks) or expensive IO. But this has not
> mattered much thus far (given fast enough CPUs).
>   
That could be true, though oprofile is not providing obvious clues, 
at least not yet.
> It all could be improved by reducing the per packet operations the
> driver incurs - as an example, the CPU (to the driver) could batch a
> set of packets to the device then kick the device DMA once for the batch
> etc.]
>   
Regards
matheos

> cheers,
> jamal
>
>   


Thread overview: 36+ messages
2008-03-26 16:46 2.6.24 BUG: soft lockup - CPU#X Matheos Worku
2008-03-26 17:31 ` Rick Jones
2008-03-26 20:14 ` Jarek Poplawski
2008-03-26 20:26   ` Matheos Worku
2008-03-26 21:46     ` Jarek Poplawski
2008-03-26 21:53       ` Jarek Poplawski
2008-03-27 10:33     ` Jarek Poplawski
2008-03-27 23:18       ` Brandeburg, Jesse
2008-03-27 23:45         ` Matheos Worku
2008-03-28  0:02           ` David Miller
2008-03-28  0:19             ` Matheos Worku
2008-03-28  0:34               ` David Miller
2008-03-28  1:22                 ` Herbert Xu
2008-03-28  1:38                   ` David Miller
2008-03-28 10:29                     ` Herbert Xu
2008-03-28 10:56                       ` Ingo Molnar
2008-03-28 11:06                         ` Herbert Xu
2008-03-28 11:29                           ` Herbert Xu
2008-03-28 12:19                             ` jamal
2008-03-28 13:26                               ` Herbert Xu
2008-03-28 14:07                                 ` jamal
2008-03-28 14:12                                 ` Ingo Molnar
2008-03-28 23:25                             ` David Miller
2008-03-28 14:09                           ` Ingo Molnar
2008-03-28  1:58                   ` Matheos Worku
2008-03-28 10:33                     ` jamal
2008-03-28 17:00                       ` Matheos Worku [this message]
2008-03-28 10:38                     ` Herbert Xu
2008-03-28 13:38                       ` Jarek Poplawski
2008-03-28 13:53                         ` Herbert Xu
2008-03-28 14:39                           ` Jarek Poplawski
2008-03-28 14:56                             ` Herbert Xu
2008-03-28 15:29                               ` Jarek Poplawski
2008-03-28 15:47                                 ` Jarek Poplawski
2008-03-29  1:06                                 ` Herbert Xu
2008-03-29  9:11                                   ` Jarek Poplawski
