public inbox for linux-media@vger.kernel.org
* Re: [PATCH 17/21] workqueue: simple reimplementation of SINGLE_THREAD workqueue
       [not found] ` <1258391726-30264-18-git-send-email-tj@kernel.org>
@ 2009-11-17  0:47   ` Andy Walls
  2009-11-17  5:23     ` Tejun Heo
  0 siblings, 1 reply; 7+ messages in thread
From: Andy Walls @ 2009-11-17  0:47 UTC (permalink / raw)
  To: Tejun Heo
  Cc: linux-kernel, linux-media, jeff, mingo, akpm, jens.axboe, rusty,
	cl, dhowells, arjan, torvalds, avi, peterz, andi, fweisbec

On Tue, 2009-11-17 at 02:15 +0900, Tejun Heo wrote:
> SINGLE_THREAD workqueues are used to reduce the number of worker
> threads and ease synchronization.  The first reason will be irrelevant
> with the concurrency managed workqueue implementation.  Simplify the
> SINGLE_THREAD implementation by creating the workqueues the same way
> but making the worker grab a mutex before actually executing works on
> the workqueue.  In the long run, most SINGLE_THREAD workqueues will be
> replaced with generic ones.
> 
> Signed-off-by: Tejun Heo <tj@kernel.org>
> 
> ---
> kernel/workqueue.c |  151 ++++++++++++++++++----------------------------------
>  1 files changed, 52 insertions(+), 99 deletions(-)
> 
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 5392939..82b03a1 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -33,6 +33,7 @@
>  #include <linux/kallsyms.h>
>  #include <linux/debug_locks.h>
>  #include <linux/lockdep.h>
> +#include <linux/mutex.h>
>  
>  /*
>   * Structure fields follow one of the following exclusion rules.
> @@ -71,6 +72,7 @@ struct workqueue_struct {
>  	struct cpu_workqueue_struct *cpu_wq;	/* I: cwq's */
>  	struct list_head	list;		/* W: list of all workqueues */
>  	const char		*name;		/* I: workqueue name */
> +	struct mutex		single_thread_mutex; /* for SINGLE_THREAD wq */
>  #ifdef CONFIG_LOCKDEP
>  	struct lockdep_map	lockdep_map;
>  #endif


> @@ -410,6 +387,8 @@ EXPORT_SYMBOL_GPL(queue_delayed_work_on);
>  static void process_one_work(struct cpu_workqueue_struct *cwq,
>  			     struct work_struct *work)
>  {
> +	struct workqueue_struct *wq = cwq->wq;
> +	bool single_thread = wq->flags & WQ_SINGLE_THREAD;
>  	work_func_t f = work->func;
>  #ifdef CONFIG_LOCKDEP
>  	/*
> @@ -430,11 +409,18 @@ static void process_one_work(struct cpu_workqueue_struct *cwq,
>  
>  	BUG_ON(get_wq_data(work) != cwq);
>  	work_clear_pending(work);
> -	lock_map_acquire(&cwq->wq->lockdep_map);
> +	lock_map_acquire(&wq->lockdep_map);
>  	lock_map_acquire(&lockdep_map);
> -	f(work);
> +
> +	if (unlikely(single_thread)) {
> +		mutex_lock(&wq->single_thread_mutex);
> +		f(work);
> +		mutex_unlock(&wq->single_thread_mutex);
> +	} else
> +		f(work);
> +

An important property of the single threaded workqueue, upon which the
cx18 driver relies, is that work objects will be processed strictly in
the order in which they were queued.  The cx18 driver has a pool of
"work orders" and multiple active work orders can be queued up on the
workqueue especially if multiple streams are active.  If these work
orders were to be processed out of order, video artifacts would appear
in video display applications.


With multiple work handling threads, I don't think the

	mutex_lock(&wq->single_thread_mutex);
	f(work);

here can guarantee that work requests from the workqueue will always be
processed in the order in which they were queued.

Am I missing something?

Regards,
Andy


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH 17/21] workqueue: simple reimplementation of SINGLE_THREAD workqueue
  2009-11-17  0:47   ` [PATCH 17/21] workqueue: simple reimplementation of SINGLE_THREAD workqueue Andy Walls
@ 2009-11-17  5:23     ` Tejun Heo
  2009-11-17 12:05       ` Andy Walls
  2009-11-17 15:05       ` Linus Torvalds
  0 siblings, 2 replies; 7+ messages in thread
From: Tejun Heo @ 2009-11-17  5:23 UTC (permalink / raw)
  To: Andy Walls
  Cc: linux-kernel, linux-media, jeff, mingo, akpm, jens.axboe, rusty,
	cl, dhowells, arjan, torvalds, avi, peterz, andi, fweisbec

Hello,

11/17/2009 09:47 AM, Andy Walls wrote:
> An important property of the single threaded workqueue, upon which the
> cx18 driver relies, is that work objects will be processed strictly in
> the order in which they were queued.  The cx18 driver has a pool of
> "work orders" and multiple active work orders can be queued up on the
> workqueue especially if multiple streams are active.  If these work
> orders were to be processed out of order, video artifacts would appear
> in video display applications.

That's an interesting use of a single thread workqueue.  Most single
thread workqueues seem to be made single threaded just to save on the
number of threads.  Some seem to depend on having a single thread of
execution, but I never knew there were ones which depend on the exact
execution order.  Do you think that usage is widespread?  Implementing
strict ordering shouldn't be too difficult, but I can't help feeling
that such an assumption is an abuse of an implementation detail.

Thanks.

-- 
tejun


* Re: [PATCH 17/21] workqueue: simple reimplementation of SINGLE_THREAD workqueue
  2009-11-17  5:23     ` Tejun Heo
@ 2009-11-17 12:05       ` Andy Walls
  2009-11-17 16:21         ` Tejun Heo
  2009-11-17 15:05       ` Linus Torvalds
  1 sibling, 1 reply; 7+ messages in thread
From: Andy Walls @ 2009-11-17 12:05 UTC (permalink / raw)
  To: Tejun Heo
  Cc: linux-kernel, linux-media, jeff, mingo, akpm, jens.axboe, rusty,
	cl, dhowells, arjan, torvalds, avi, peterz, andi, fweisbec

On Tue, 2009-11-17 at 14:23 +0900, Tejun Heo wrote:
> Hello,
> 
> 11/17/2009 09:47 AM, Andy Walls wrote:
> > An important property of the single threaded workqueue, upon which the
> > cx18 driver relies, is that work objects will be processed strictly in
> > the order in which they were queued.  The cx18 driver has a pool of
> > "work orders" and multiple active work orders can be queued up on the
> > workqueue especially if multiple streams are active.  If these work
> > orders were to be processed out of order, video artifacts would appear
> > in video display applications.
> 
> That's an interesting use of a single thread workqueue.  Most single
> thread workqueues seem to be made single threaded just to save on the
> number of threads.  Some seem to depend on having a single thread of
> execution, but I never knew there were ones which depend on the exact
> execution order.  Do you think that usage is widespread?

I doubt it.

Most that I have seen use the singlethreaded workqueue object with a
queue depth of essentially 1 for synchronization, as you have noted.


>   Implementing strict ordering
> shouldn't be too difficult, but I can't help feeling that such an
> assumption is an abuse of an implementation detail.

Hmmm, does not the "queue" in workqueue mean "FIFO"?

If not for strict ordering, why else would a driver absolutely need a
singlethreaded workqueue object?  It seems to me that strict ordering is
the driving requirement for a singlethreaded workqueue at all.  Your
patch series indicates to me that the performance and synchronization
use cases are not driving requirements for a singlethreaded workqueue.

Thanks for your consideration.

Regards,
Andy

> Thanks.




* Re: [PATCH 17/21] workqueue: simple reimplementation of SINGLE_THREAD workqueue
  2009-11-17  5:23     ` Tejun Heo
  2009-11-17 12:05       ` Andy Walls
@ 2009-11-17 15:05       ` Linus Torvalds
  2009-11-17 16:12         ` Tejun Heo
  1 sibling, 1 reply; 7+ messages in thread
From: Linus Torvalds @ 2009-11-17 15:05 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Andy Walls, linux-kernel, linux-media, jeff, mingo, akpm,
	jens.axboe, rusty, cl, dhowells, arjan, avi, peterz, andi,
	fweisbec



On Tue, 17 Nov 2009, Tejun Heo wrote:
>
> Do you think that usage is widespread?  Implementing strict ordering
> shouldn't be too difficult, but I can't help feeling that such an
> assumption is an abuse of an implementation detail.

I think it would be good if it were more than an implementation detail, 
and was something documented and known.

The less random and timing-dependent our interfaces are, the better off we 
are. Guaranteeing that a single-threaded workqueue is done in order seems 
to me to be a GoodThing(tm), regardless of whether much code depends on 
it.

Of course, if there is some fundamental reason why it wouldn't be the 
case, that's another thing. But if you think it should be easy, and since 
there _are_ users, then it shouldn't be seen as an "implementation 
detail". It's a feature.

			Linus


* Re: [PATCH 17/21] workqueue: simple reimplementation of SINGLE_THREAD workqueue
  2009-11-17 15:05       ` Linus Torvalds
@ 2009-11-17 16:12         ` Tejun Heo
  2009-11-17 19:01           ` Linus Torvalds
  0 siblings, 1 reply; 7+ messages in thread
From: Tejun Heo @ 2009-11-17 16:12 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Andy Walls, linux-kernel, linux-media, jeff, mingo, akpm,
	jens.axboe, rusty, cl, dhowells, arjan, avi, peterz, andi,
	fweisbec

Hello, Linus.

11/18/2009 12:05 AM, Linus Torvalds wrote:
>> Do you think that usage is widespread?  Implementing strict ordering
>> shouldn't be too difficult, but I can't help feeling that such an
>> assumption is an abuse of an implementation detail.
> 
> I think it would be good if it were more than an implementation detail, 
> and was something documented and known.
> 
> The less random and timing-dependent our interfaces are, the better off we 
> are. Guaranteeing that a single-threaded workqueue is done in order seems 
> to me to be a GoodThing(tm), regardless of whether much code depends on 
> it.
> 
> Of course, if there is some fundamental reason why it wouldn't be the 
> case, that's another thing. But if you think it should be easy, and since 
> there _are_ users, then it shouldn't be seen as an "implementation 
> detail". It's a feature.

I might have been too early with the 'easy' part but I definitely can
give it a shot.  What do you think about the scheduler notifier
implementation?  It seems we'll end up with three callbacks.  It can
either be three hlist_heads in the task_struct, each linking its own
ops, or a single hlist_head linking ops tables (like the current
preempt notifiers).  Which one should I go with?

Thanks.

-- 
tejun


* Re: [PATCH 17/21] workqueue: simple reimplementation of SINGLE_THREAD workqueue
  2009-11-17 12:05       ` Andy Walls
@ 2009-11-17 16:21         ` Tejun Heo
  0 siblings, 0 replies; 7+ messages in thread
From: Tejun Heo @ 2009-11-17 16:21 UTC (permalink / raw)
  To: Andy Walls
  Cc: linux-kernel, linux-media, jeff, mingo, akpm, jens.axboe, rusty,
	cl, dhowells, arjan, torvalds, avi, peterz, andi, fweisbec

Hello,

11/17/2009 09:05 PM, Andy Walls wrote:
>>   Implementing strict ordering
>> shouldn't be too difficult, but I can't help feeling that such an
>> assumption is an abuse of an implementation detail.
> 
> Hmmm, does not the "queue" in workqueue mean "FIFO"?

I don't think it necessarily means strict execution ordering.

> If not for strict ordering, why else would a driver absolutely need a
> singlethreaded workqueue object?  It seems to me the strict ording is
> the driving requirement for a singlethreaded workqueue at all.  Your
> patch series indicates to me that the performance and synchronization
> use cases are not driving requirements for a singlethreaded workqueue.

I still think the biggest reason why single threaded workqueues are
used is just to reduce the number of threads hanging around.  I tried
to audit single thread users some time ago.  My impression was that
many single thread workqueue users did their own synchronization
anyway, while a smaller portion depended on single threadedness.  I
didn't notice the strict ordering requirement, but then again I wasn't
looking for it.  It seems there are at least two cases depending on
FIFO behavior, so let's see if we can retain the behavior for single
threaded workqueues (maybe it should be renamed to ORDERED?).

Thanks.

-- 
tejun


* Re: [PATCH 17/21] workqueue: simple reimplementation of SINGLE_THREAD workqueue
  2009-11-17 16:12         ` Tejun Heo
@ 2009-11-17 19:01           ` Linus Torvalds
  0 siblings, 0 replies; 7+ messages in thread
From: Linus Torvalds @ 2009-11-17 19:01 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Andy Walls, linux-kernel, linux-media, jeff, mingo, akpm,
	jens.axboe, rusty, cl, dhowells, arjan, avi, peterz, andi,
	fweisbec



On Wed, 18 Nov 2009, Tejun Heo wrote:
> 
> I might have been too early with the 'easy' part but I definitely can
> give it a shot.  What do you think about the scheduler notifier
> implementation?  It seems we'll end up with three callbacks.  It can
> either be three hlist_heads in the task_struct, each linking its own
> ops, or a single hlist_head linking ops tables (like the current
> preempt notifiers).  Which one should I go with?

I have to say that I don't know. Will this eventually be something common? 
Is the cache footprint problem of 3 pointers that are usually empty worse 
than the cache problem of following a chain where you don't use half the 
entries? Who knows?

And when it actually _is_ used, is it going to be horrible to have a 
possibly unpredictable indirect branch (and on some architectures, _all_ 
indirect branches are unpredictable) in a really hot path?

In general, "notifiers" are always horrible. If there's only one or two 
common cases, it's probably going to be better to hardcode those with 
flags to be tested instead of following function pointers. So I just don't 
know.

		Linus

