From: Dave Chinner <david@fromorbit.com>
To: Tejun Heo <tj@kernel.org>
Cc: linux-kernel@vger.kernel.org, xfs@oss.sgi.com,
	linux-fsdevel@vger.kernel.org
Subject: Re: [2.6.36-rc3] Workqueues, XFS, dependencies and deadlocks
Date: Wed, 8 Sep 2010 20:12:22 +1000
Message-ID: <20100908101222.GY705@dastard>
In-Reply-To: <4C874D55.6080402@kernel.org>

On Wed, Sep 08, 2010 at 10:46:13AM +0200, Tejun Heo wrote:
> On 09/08/2010 10:28 AM, Dave Chinner wrote:
> >> They may, if that's necessary to keep the workqueue progressing.
> > 
> > Ok, so the normal case is that they will all be processed local to the
> > CPU they were queued on, like the old workqueue code?
> 
> Bound workqueues always process work items locally.  Please consider the
> following scenario.
> 
>  w0, w1, w2 are queued to q0 on the same CPU.  w0 burns CPU for 5ms,
>  then sleeps for 10ms, then burns CPU for 5ms again, then finishes.
>  w1 and w2 each sleep for 10ms.
> 
> The following is what happens with the original workqueue (ignoring
> all other tasks and processing overhead).
> 
>  TIME IN MSECS	EVENT
>  0		w0 burns CPU
>  5		w0 sleeps
>  15		w0 wakes and burns CPU
>  20		w0 finishes, w1 starts and sleeps
>  30		w1 finishes, w2 starts and sleeps
>  40		w2 finishes
> 
> With cmwq, if @max_active >= 3:
> 
>  TIME IN MSECS	EVENT
>  0		w0 burns CPU
>  5		w0 sleeps, w1 starts and sleeps, w2 starts and sleeps
>  15		w0 wakes and burns CPU, w1 finishes, w2 finishes
>  20		w0 finishes
> 
> IOW, cmwq assigns a new worker when there are more work items to
> process but no work item is currently in progress on the CPU.  Please
> note that this behavior is across *all* workqueues.  It doesn't matter
> which work item belongs to which workqueue.
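
For concreteness, the scenario above maps to something like the
following sketch (a minimal, untested module; the workqueue name,
the CPU number and the module boilerplate are illustrative
assumptions, and the "burns CPU" phases are approximated with
mdelay(), which busy-waits):

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/delay.h>

static void w0_fn(struct work_struct *work)
{
	mdelay(5);	/* busy-wait: "burns CPU for 5ms" */
	msleep(10);	/* "sleeps for 10ms" */
	mdelay(5);	/* "burns CPU for 5ms again" */
}

static void sleeper_fn(struct work_struct *work)
{
	msleep(10);	/* w1 and w2 each just sleep for 10ms */
}

static DECLARE_WORK(w0, w0_fn);
static DECLARE_WORK(w1, sleeper_fn);
static DECLARE_WORK(w2, sleeper_fn);

static int __init cmwq_demo_init(void)
{
	/* bound (per-cpu) workqueue; the name "q0" and
	 * max_active = 3 are arbitrary, matching the example */
	struct workqueue_struct *q0 = alloc_workqueue("q0", 0, 3);

	if (!q0)
		return -ENOMEM;

	/* queue all three work items on the same CPU (CPU 1 here) */
	queue_work_on(1, q0, &w0);
	queue_work_on(1, q0, &w1);
	queue_work_on(1, q0, &w2);

	return 0;
}
module_init(cmwq_demo_init);
MODULE_LICENSE("GPL");

Queued this way, the two timelines above are what the old
implementation and cmwq would each be expected to produce.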

Ok, so in this case, if this was on CPU 1, I'd see kworker/1:0,
kworker/1:1 and kworker/1:2 threads all accumulate CPU time?  I'm
just trying to relate your example to behaviour I've seen, to check
whether I understand it correctly.
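
One way to check would be to print the executing task from inside
the work functions, e.g. extending sleeper_fn() from the sketch
above (the printk text is arbitrary; raw_smp_processor_id() is used
because the work function runs with preemption enabled):

#include <linux/sched.h>	/* current */
#include <linux/smp.h>		/* raw_smp_processor_id() */

static void sleeper_fn(struct work_struct *work)
{
	/* current is the kworker pool thread executing this item */
	printk(KERN_INFO "%s on CPU %d\n",
	       current->comm, raw_smp_processor_id());
	msleep(10);
}

With @max_active >= 3, the three items running concurrently on
CPU 1 should each report a distinct kworker/1:* thread name.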

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
