* [RFC][PATCH] create workqueue threads only when needed
@ 2009-01-27 0:17 Frederic Weisbecker
2009-01-27 0:30 ` Arjan van de Ven
` (3 more replies)
0 siblings, 4 replies; 37+ messages in thread
From: Frederic Weisbecker @ 2009-01-27 0:17 UTC (permalink / raw)
To: Ingo Molnar
Cc: linux-kernel, Andrew Morton, Lai Jiangshan, Peter Zijlstra,
Steven Rostedt, fweisbec
While looking at the statistics from the workqueue tracer, I've been surprised by the
number of useless workqueues I had:
CPU INSERTED EXECUTED NAME
| | | |
* 0 0 kpsmoused
* 0 0 ata_aux
* 0 0 cqueue
* 0 0 kacpi_notify
* 0 0 kacpid
* 998 998 khelper
* 0 0 cpuset
1 0 0 hda0/1
1 42 42 reiserfs/1
1 0 0 scsi_tgtd/1
1 0 0 aio/1
1 0 0 ata/1
1 193 193 kblockd/1
1 0 0 kintegrityd/1
1 4 4 work_on_cpu/1
1 1244 1244 events/1
0 0 0 hda0/0
0 63 63 reiserfs/0
0 0 0 scsi_tgtd/0
0 0 0 aio/0
0 0 0 ata/0
0 188 188 kblockd/0
0 0 0 kintegrityd/0
0 16 16 work_on_cpu/0
0 1360 1360 events/0
All of the workqueues with 0 works inserted do nothing, for several reasons:
_ Drivers built into my system that are not needed but create their workqueue(s) when
they init
_ Services which need their own workqueue for various reasons but receive jobs only
very rarely (often never)
_ ...?
And the result of git-grep create_singlethread_workqueue is even more surprising.
So I've started a patch which creates the workqueues without a thread by default, except
for the kevents one.
Their threads are created and started only when these workqueues receive their first work
to do. This is performed by submitting a thread-creation work to the kevent workqueues,
which are always there and are the only ones whose threads are started on creation.
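The mechanism can be sketched as a toy userspace model (plain C with pthreads; the names and structure are illustrative, and unlike the real patch, which bounces the creation through keventd because queue_work() may be called from atomic context, this toy creates the worker directly in the queueing path):

```c
#include <assert.h>
#include <pthread.h>

/* Toy model of "create the worker thread on first use": the queue
 * starts with no thread at all; the first queued work item triggers
 * its creation. */
struct toy_wq {
	pthread_t worker;
	pthread_mutex_t lock;
	pthread_cond_t cond;
	int have_worker;	/* worker thread created yet? */
	int pending;		/* queued work items */
	int executed;		/* executed work items */
	int shutdown;
};

static void *worker_fn(void *arg)
{
	struct toy_wq *wq = arg;

	pthread_mutex_lock(&wq->lock);
	for (;;) {
		while (!wq->pending && !wq->shutdown)
			pthread_cond_wait(&wq->cond, &wq->lock);
		if (wq->pending) {
			wq->pending--;
			wq->executed++;	/* "run" the work item */
			continue;
		}
		break;			/* shutdown, and queue drained */
	}
	pthread_mutex_unlock(&wq->lock);
	return NULL;
}

static void toy_queue_work(struct toy_wq *wq)
{
	pthread_mutex_lock(&wq->lock);
	if (!wq->have_worker) {	/* first work ever: create the thread late */
		pthread_create(&wq->worker, NULL, worker_fn, wq);
		wq->have_worker = 1;
	}
	wq->pending++;
	pthread_cond_signal(&wq->cond);
	pthread_mutex_unlock(&wq->lock);
}

static void toy_flush_and_stop(struct toy_wq *wq)
{
	pthread_mutex_lock(&wq->lock);
	wq->shutdown = 1;
	pthread_cond_signal(&wq->cond);
	pthread_mutex_unlock(&wq->lock);
	if (wq->have_worker)
		pthread_join(wq->worker, NULL);
}
```

A workqueue that never receives work never pays for a thread; the kernel patch additionally has to defer the creation to the kevent workqueue because creating a thread directly is not allowed in every context queue_work() can be called from.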
The result after this patch:
# CPU INSERTED EXECUTED NAME
# | | | |
* 999 1000 khelper
1 5 6 reiserfs/1
1 0 2 work_on_cpu/1
1 86 87 kblockd/1
1 14 16 work_on_cpu/1
1 149 149 events/1
0 15 16 reiserfs/0
0 85 86 kblockd/0
0 146 146 events/0
This drops 16 useless kernel threads on my system.
(Yes, the inserted values are not in sync with the executed ones because
the tracer loses the first events. I just rewrote some parts to make it work
with this patch.)
I guess I will also update this tracer to display the "shadow workqueues" which have
no threads yet.
I haven't had any problems with this patch so far, but I think it needs more testing,
e.g. with cpu hotplug, and some renaming of its functions and structures...
And I would like to receive some comments and feedback before continuing. So this
is just an RFC :-)
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
---
include/linux/workqueue.h | 44 +++++++++++----------
kernel/workqueue.c | 95 +++++++++++++++++++++++++++++++++++++++------
2 files changed, 106 insertions(+), 33 deletions(-)
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 3cd51e5..c4283c5 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -162,33 +162,35 @@ struct execute_work {
extern struct workqueue_struct *
__create_workqueue_key(const char *name, int singlethread,
int freezeable, int rt, struct lock_class_key *key,
- const char *lock_name);
+ const char *lock_name, bool when_needed);
#ifdef CONFIG_LOCKDEP
-#define __create_workqueue(name, singlethread, freezeable, rt) \
-({ \
- static struct lock_class_key __key; \
- const char *__lock_name; \
- \
- if (__builtin_constant_p(name)) \
- __lock_name = (name); \
- else \
- __lock_name = #name; \
- \
- __create_workqueue_key((name), (singlethread), \
- (freezeable), (rt), &__key, \
- __lock_name); \
+#define __create_workqueue(name, singlethread, freezeable, rt, when_needed) \
+({ \
+ static struct lock_class_key __key; \
+ const char *__lock_name; \
+ \
+ if (__builtin_constant_p(name)) \
+ __lock_name = (name); \
+ else \
+ __lock_name = #name; \
+ \
+ __create_workqueue_key((name), (singlethread), \
+ (freezeable), (rt), &__key, \
+ __lock_name, when_needed); \
})
#else
-#define __create_workqueue(name, singlethread, freezeable, rt) \
- __create_workqueue_key((name), (singlethread), (freezeable), (rt), \
- NULL, NULL)
+#define __create_workqueue(name, singlethread, freezeable, rt, when_needed) \
+ __create_workqueue_key((name), (singlethread), (freezeable), (rt), \
+ NULL, NULL, when_needed)
#endif
-#define create_workqueue(name) __create_workqueue((name), 0, 0, 0)
-#define create_rt_workqueue(name) __create_workqueue((name), 0, 0, 1)
-#define create_freezeable_workqueue(name) __create_workqueue((name), 1, 1, 0)
-#define create_singlethread_workqueue(name) __create_workqueue((name), 1, 0, 0)
+#define create_workqueue(name) __create_workqueue((name), 0, 0, 0, 1)
+#define create_rt_workqueue(name) __create_workqueue((name), 0, 0, 1, 0)
+#define create_freezeable_workqueue(name) __create_workqueue((name), 1, 1, 0, 0)
+#define create_singlethread_workqueue(name) \
+ __create_workqueue((name), 1, 0, 0, 1)
+#define create_kevents(name) __create_workqueue((name), 0, 0, 0, 0)
extern void destroy_workqueue(struct workqueue_struct *wq);
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index e53ee18..430cb5a 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -69,6 +69,12 @@ struct workqueue_struct {
#endif
};
+
+struct late_workqueue_creation_data {
+ struct cpu_workqueue_struct *cwq;
+ struct work_struct work;
+};
+
/* Serializes the accesses to the list of workqueues. */
static DEFINE_SPINLOCK(workqueue_lock);
static LIST_HEAD(workqueues);
@@ -126,12 +132,34 @@ struct cpu_workqueue_struct *get_wq_data(struct work_struct *work)
return (void *) (atomic_long_read(&work->data) & WORK_STRUCT_WQ_DATA_MASK);
}
+static void create_wq_thread_late_work(struct work_struct *work);
+
+/* Called when a first work is inserted on a workqueue */
+static void create_wq_thread_late(struct cpu_workqueue_struct *cwq)
+{
+ struct late_workqueue_creation_data *l;
+
+ /*
+ * The work can be inserted whatever is the context.
+ * But such atomic allocation will be rare and freed soon.
+ */
+ l = kmalloc(sizeof(*l), GFP_ATOMIC);
+ if (!l) {
+ WARN_ON_ONCE(1);
+ return;
+ }
+ INIT_WORK(&l->work, create_wq_thread_late_work);
+ l->cwq = cwq;
+ schedule_work(&l->work);
+}
+
+
DEFINE_TRACE(workqueue_insertion);
static void insert_work(struct cpu_workqueue_struct *cwq,
struct work_struct *work, struct list_head *head)
{
- trace_workqueue_insertion(cwq->thread, work);
+ trace_workqueue_insertion(cwq->thread, work, cwq->wq->singlethread);
set_wq_data(work, cwq);
/*
@@ -148,6 +176,9 @@ static void __queue_work(struct cpu_workqueue_struct *cwq,
{
unsigned long flags;
+ if (!cwq->thread)
+ create_wq_thread_late(cwq);
+
spin_lock_irqsave(&cwq->lock, flags);
insert_work(cwq, work, &cwq->worklist);
spin_unlock_irqrestore(&cwq->lock, flags);
@@ -291,7 +322,9 @@ static void run_workqueue(struct cpu_workqueue_struct *cwq)
*/
struct lockdep_map lockdep_map = work->lockdep_map;
#endif
- trace_workqueue_execution(cwq->thread, work);
+ struct workqueue_struct *wq = cwq->wq;
+
+ trace_workqueue_execution(cwq->thread, work, wq->singlethread);
cwq->current_work = work;
list_del_init(cwq->worklist.next);
spin_unlock_irq(&cwq->lock);
@@ -387,6 +420,8 @@ static int flush_cpu_workqueue(struct cpu_workqueue_struct *cwq)
} else {
struct wq_barrier barr;
+ if (!cwq->thread)
+ create_wq_thread_late(cwq);
active = 0;
spin_lock_irq(&cwq->lock);
if (!list_empty(&cwq->worklist) || cwq->current_work != NULL) {
@@ -796,7 +831,8 @@ static int create_workqueue_thread(struct cpu_workqueue_struct *cwq, int cpu)
sched_setscheduler_nocheck(p, SCHED_FIFO, &param);
cwq->thread = p;
- trace_workqueue_creation(cwq->thread, cpu);
+ trace_workqueue_creation(cwq->thread, wq->singlethread ?
+ SINGLETHREAD_CPU : cpu);
return 0;
}
@@ -812,12 +848,34 @@ static void start_workqueue_thread(struct cpu_workqueue_struct *cwq, int cpu)
}
}
+static void create_wq_thread_late_work(struct work_struct *work)
+{
+ struct late_workqueue_creation_data *l;
+ struct cpu_workqueue_struct *cwq;
+ int cpu = smp_processor_id();
+ int err = 0;
+
+ l = container_of(work, struct late_workqueue_creation_data, work);
+ cwq = l->cwq;
+
+ if (is_wq_single_threaded(cwq->wq)) {
+ err = create_workqueue_thread(cwq, singlethread_cpu);
+ start_workqueue_thread(cwq, -1);
+ } else {
+ err = create_workqueue_thread(cwq, cpu);
+ start_workqueue_thread(cwq, cpu);
+ }
+ WARN_ON_ONCE(err);
+ kfree(l);
+}
+
struct workqueue_struct *__create_workqueue_key(const char *name,
int singlethread,
int freezeable,
int rt,
struct lock_class_key *key,
- const char *lock_name)
+ const char *lock_name,
+ bool when_needed)
{
struct workqueue_struct *wq;
struct cpu_workqueue_struct *cwq;
@@ -842,8 +900,10 @@ struct workqueue_struct *__create_workqueue_key(const char *name,
if (singlethread) {
cwq = init_cpu_workqueue(wq, singlethread_cpu);
- err = create_workqueue_thread(cwq, singlethread_cpu);
- start_workqueue_thread(cwq, -1);
+ if (!when_needed) {
+ err = create_workqueue_thread(cwq, singlethread_cpu);
+ start_workqueue_thread(cwq, -1);
+ }
} else {
cpu_maps_update_begin();
/*
@@ -865,8 +925,11 @@ struct workqueue_struct *__create_workqueue_key(const char *name,
cwq = init_cpu_workqueue(wq, cpu);
if (err || !cpu_online(cpu))
continue;
- err = create_workqueue_thread(cwq, cpu);
- start_workqueue_thread(cwq, cpu);
+
+ if (!when_needed) {
+ err = create_workqueue_thread(cwq, cpu);
+ start_workqueue_thread(cwq, cpu);
+ }
}
cpu_maps_update_done();
}
@@ -904,9 +967,12 @@ static void cleanup_workqueue_thread(struct cpu_workqueue_struct *cwq)
* checks list_empty(), and a "normal" queue_work() can't use
* a dead CPU.
*/
- trace_workqueue_destruction(cwq->thread);
- kthread_stop(cwq->thread);
- cwq->thread = NULL;
+
+ if (cwq->thread) {
+ trace_workqueue_destruction(cwq->thread, cwq->wq->singlethread);
+ kthread_stop(cwq->thread);
+ cwq->thread = NULL;
+ }
}
/**
@@ -955,6 +1021,9 @@ undo:
switch (action) {
case CPU_UP_PREPARE:
+ /* Will be created during the first work insertion */
+ if (wq != keventd_wq)
+ break;
if (!create_workqueue_thread(cwq, cpu))
break;
printk(KERN_ERR "workqueue [%s] for %i failed\n",
@@ -964,6 +1033,8 @@ undo:
goto undo;
case CPU_ONLINE:
+ if (wq != keventd_wq)
+ break;
start_workqueue_thread(cwq, cpu);
break;
@@ -1033,7 +1104,7 @@ void __init init_workqueues(void)
singlethread_cpu = cpumask_first(cpu_possible_mask);
cpu_singlethread_map = cpumask_of(singlethread_cpu);
hotcpu_notifier(workqueue_cpu_callback, 0);
- keventd_wq = create_workqueue("events");
+ keventd_wq = create_kevents("events");
BUG_ON(!keventd_wq);
#ifdef CONFIG_SMP
work_on_cpu_wq = create_workqueue("work_on_cpu");
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-01-27 0:17 [RFC][PATCH] create workqueue threads only when needed Frederic Weisbecker
@ 2009-01-27 0:30 ` Arjan van de Ven
2009-01-31 18:03 ` Frederic Weisbecker
[not found] ` <20090126162807.1131c777.akpm@linux-foundation.org>
` (2 subsequent siblings)
3 siblings, 1 reply; 37+ messages in thread
From: Arjan van de Ven @ 2009-01-27 0:30 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: Ingo Molnar, linux-kernel, Andrew Morton, Lai Jiangshan,
Peter Zijlstra, Steven Rostedt, fweisbec
On Tue, 27 Jan 2009 01:17:11 +0100
Frederic Weisbecker <fweisbec@gmail.com> wrote:
> While looking at the statistics from the workqueue tracer, I've been
> suprised by the number of useless workqueues I had:
>
> CPU INSERTED EXECUTED NAME
> | | | |
>
> * 0 0 kpsmoused
> * 0 0 ata_aux
> * 0 0 cqueue
> * 0 0 kacpi_notify
> * 0 0 kacpid
> * 998 998 khelper
> * 0 0 cpuset
>
> 1 0 0 hda0/1
> 1 42 42 reiserfs/1
> 1 0 0 scsi_tgtd/1
> 1 0 0 aio/1
> 1 0 0 ata/1
> 1 193 193 kblockd/1
> 1 0 0 kintegrityd/1
> 1 4 4 work_on_cpu/1
> 1 1244 1244 events/1
>
> 0 0 0 hda0/0
> 0 63 63 reiserfs/0
> 0 0 0 scsi_tgtd/0
> 0 0 0 aio/0
> 0 0 0 ata/0
> 0 188 188 kblockd/0
> 0 0 0 kintegrityd/0
> 0 16 16 work_on_cpu/0
> 0 1360 1360 events/0
>
>
> All of the workqueues with 0 work inserted do nothing.
> For several reasons:
>
> _ Unneeded built drivers for my system that create workqueue(s) when
> they init _ Services which need their own workqueue, for several
> reasons, but who receive very rare jobs (often never)
> _ ...?
>
> And the result of git-grep create_singlethread_workqueue is even more
> surprising.
>
> So I've started a patch which creates the workqueues by default
> without thread except the kevents one.
> They will have their thread created and started only when these
> workqueues will receive a first work to do. This is performed by
> submitting a task's creation work to the kevent workqueues which are
> always there, and are the only one which have their thread started on
> creation.
>
> The result after this patch:
>
> # CPU INSERTED EXECUTED NAME
> # | | | |
>
> * 999 1000 khelper
>
> 1 5 6 reiserfs/1
> 1 0 2 work_on_cpu/1
> 1 86 87 kblockd/1
> 1 14 16 work_on_cpu/1
> 1 149 149 events/1
>
> 0 15 16 reiserfs/0
> 0 85 86 kblockd/0
> 0 146 146 events/0
>
>
> Dropping 16 useless kernel threads in my system.
> (Yes the inserted values are not synced with the executed one because
> the tracers looses the first events. I just rewrote some parts to
> make it work with this patch) .
> I guess I will update this tracer to display the "shadow workqueues"
> which have no threads too.
>
> I hadn't any problems until now with this patch but I think it needs
> more testing, like with cpu hotplug, and some renaming for its
> functions and structures... And I would like to receive some comments
> and feelings before continuing. So this is just an RFC :-)
>
one thing to look at for work queues that never get work is to see if
they are appropriate for the async function call interface
(the only requirement for that is that they need to cope with being
called inline in exceptional cases)
--
Arjan van de Ven Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org
* Re: [RFC][PATCH] create workqueue threads only when needed
[not found] ` <20090126162807.1131c777.akpm@linux-foundation.org>
@ 2009-01-27 1:46 ` Oleg Nesterov
2009-01-27 8:40 ` Frederic Weisbecker
0 siblings, 1 reply; 37+ messages in thread
From: Oleg Nesterov @ 2009-01-27 1:46 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: Ingo Molnar, linux-kernel, Andrew Morton, Lai Jiangshan,
Peter Zijlstra, Steven Rostedt, fweisbec
Frederic Weisbecker <fweisbec@gmail.com> wrote:
>
> static void insert_work(struct cpu_workqueue_struct *cwq,
> struct work_struct *work, struct list_head *head)
> {
> - trace_workqueue_insertion(cwq->thread, work);
> + trace_workqueue_insertion(cwq->thread, work, cwq->wq->singlethread);
>
> set_wq_data(work, cwq);
> /*
> @@ -148,6 +176,9 @@ static void __queue_work(struct cpu_workqueue_struct *cwq,
> {
> unsigned long flags;
>
> + if (!cwq->thread)
> + create_wq_thread_late(cwq);
> +
[...snip...]
> +static void create_wq_thread_late_work(struct work_struct *work)
> +{
> + struct late_workqueue_creation_data *l;
> + struct cpu_workqueue_struct *cwq;
> + int cpu = smp_processor_id();
> + int err = 0;
> +
> + l = container_of(work, struct late_workqueue_creation_data, work);
> + cwq = l->cwq;
> +
> + if (is_wq_single_threaded(cwq->wq)) {
> + err = create_workqueue_thread(cwq, singlethread_cpu);
> + start_workqueue_thread(cwq, -1);
> + } else {
> + err = create_workqueue_thread(cwq, cpu);
> + start_workqueue_thread(cwq, cpu);
> + }
> + WARN_ON_ONCE(err);
> + kfree(l);
> +}
Let's suppose the workqueue was just created, and cwq->thread == NULL
on (say) CPU 0.
Then CPU 0 does
queue_work(wq, work1);
queue_work(wq, work2);
Both these calls will notice cwq->thread == NULL, and both will schedule
the work with ->func = create_wq_thread_late_work.
The first work correctly creates cwq->thread, but the second one creates
a new thread too and replaces cwq->thread? Now we have two threads
which run in parallel doing the same work, and the first thread is
"stealth", no?
> @@ -904,9 +967,12 @@ static void cleanup_workqueue_thread(struct cpu_workqueue_struct *cwq)
> * checks list_empty(), and a "normal" queue_work() can't use
> * a dead CPU.
> */
> - trace_workqueue_destruction(cwq->thread);
> - kthread_stop(cwq->thread);
> - cwq->thread = NULL;
> +
> + if (cwq->thread) {
> + trace_workqueue_destruction(cwq->thread, cwq->wq->singlethread);
> + kthread_stop(cwq->thread);
> + cwq->thread = NULL;
> + }
cleanup_workqueue_thread() has already checked cwq->thread != NULL,
how can it become NULL ?
And let's suppose a user does:
wq = create_workqueue(...., when_needed => 1);
queue_work(wq, some_work);
destroy_workqueue(wq);
This can return before create_wq_thread_late() populates the necessary
cwq->thread. We can destroy/free the workqueue while work_structs are still
pending, no?
Oleg.
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-01-27 0:17 [RFC][PATCH] create workqueue threads only when needed Frederic Weisbecker
2009-01-27 0:30 ` Arjan van de Ven
[not found] ` <20090126162807.1131c777.akpm@linux-foundation.org>
@ 2009-01-27 3:07 ` Alasdair G Kergon
2009-01-27 8:57 ` Frederic Weisbecker
2009-02-02 14:49 ` Daniel Walker
3 siblings, 1 reply; 37+ messages in thread
From: Alasdair G Kergon @ 2009-01-27 3:07 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: Ingo Molnar, linux-kernel, Andrew Morton, Lai Jiangshan,
Peter Zijlstra, Steven Rostedt
On Tue, Jan 27, 2009 at 01:17:11AM +0100, Frederic Weisbecker wrote:
> For several reasons:
> _ Unneeded built drivers for my system that create workqueue(s) when they init
> _ Services which need their own workqueue, for several reasons, but who receive
> very rare jobs (often never)
> I hadn't any problems until now with this patch but I think it needs more testing,
> like with cpu hotplug, and some renaming for its functions and structures...
> And I would like to receive some comments and feelings before continuing. So this
> is just an RFC :-)
Make sure this optimisation also works when the system's running low on memory
if workqueues are involved in "making forward progress". Doubtless there
are other reasons for apparently-unused workqueues too.
How about reviewing each particular workqueue that you've identified to see if
it can be created later or even not at all, or destroyed while it's not being
used, or if some workqueues can be shared - rather than presuming that a change
like this would be safe globally?
Alasdair
--
agk@redhat.com
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-01-27 1:46 ` Oleg Nesterov
@ 2009-01-27 8:40 ` Frederic Weisbecker
0 siblings, 0 replies; 37+ messages in thread
From: Frederic Weisbecker @ 2009-01-27 8:40 UTC (permalink / raw)
To: Oleg Nesterov
Cc: Ingo Molnar, linux-kernel, Andrew Morton, Lai Jiangshan,
Peter Zijlstra, Steven Rostedt
On Tue, Jan 27, 2009 at 02:46:51AM +0100, Oleg Nesterov wrote:
> Frederic Weisbecker <fweisbec@gmail.com> wrote:
> >
> > static void insert_work(struct cpu_workqueue_struct *cwq,
> > struct work_struct *work, struct list_head *head)
> > {
> > - trace_workqueue_insertion(cwq->thread, work);
> > + trace_workqueue_insertion(cwq->thread, work, cwq->wq->singlethread);
> >
> > set_wq_data(work, cwq);
> > /*
> > @@ -148,6 +176,9 @@ static void __queue_work(struct cpu_workqueue_struct *cwq,
> > {
> > unsigned long flags;
> >
> > + if (!cwq->thread)
> > + create_wq_thread_late(cwq);
> > +
>
> [...snip...]
>
> > +static void create_wq_thread_late_work(struct work_struct *work)
> > +{
> > + struct late_workqueue_creation_data *l;
> > + struct cpu_workqueue_struct *cwq;
> > + int cpu = smp_processor_id();
> > + int err = 0;
> > +
> > + l = container_of(work, struct late_workqueue_creation_data, work);
> > + cwq = l->cwq;
> > +
> > + if (is_wq_single_threaded(cwq->wq)) {
> > + err = create_workqueue_thread(cwq, singlethread_cpu);
> > + start_workqueue_thread(cwq, -1);
> > + } else {
> > + err = create_workqueue_thread(cwq, cpu);
> > + start_workqueue_thread(cwq, cpu);
> > + }
> > + WARN_ON_ONCE(err);
> > + kfree(l);
> > +}
>
> Let's suppose the workqueue was just created, and cwq->thared == NULL
> on (say) CPU 0.
>
> Then CPU 0 does
>
> queue_work(wq, work1);
> queue_work(wq, work2);
>
> Both these calls will notice cwq->thread == NULL, both will schedule
> the work wilth ->func = create_wq_thread_late_work.
>
> The first work correctly creates cwq->thread, the second one creates
> the new thread too and replaces cwq->thread? Now we have two threads
> which run in parallel doing the same work, but the first thread is
> "stealth", no?
You're right. I will put a mutex plus a recheck of cwq->thread inside
create_wq_thread_late_work to make sure there is no race during creation.
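That fix can be sketched as a small userspace model (pthreads stand in for the kernel primitives, and the names are illustrative): even if several creation works get scheduled because several queue_work() callers all saw cwq->thread == NULL, the recheck under the mutex lets only the first one actually create a thread.

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t creation_lock = PTHREAD_MUTEX_INITIALIZER;
static void *cwq_thread;	/* NULL until a worker exists */
static int threads_created;	/* how many times we "created" one */

/* Model of create_wq_thread_late_work() with the proposed fix applied */
static void create_wq_thread_late_work_fixed(void)
{
	pthread_mutex_lock(&creation_lock);
	if (!cwq_thread) {		/* recheck under the lock */
		cwq_thread = (void *)1;	/* stand-in for kthread_create() */
		threads_created++;
	}
	pthread_mutex_unlock(&creation_lock);
}

/* Each racer models one creation work queued after a queue_work()
 * caller observed cwq->thread == NULL. */
static void *racer(void *arg)
{
	(void)arg;
	create_wq_thread_late_work_fixed();
	return NULL;
}
```

Without the recheck, every racer would "create" a thread and all but the last would be leaked as the stealth threads Oleg describes.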
> > @@ -904,9 +967,12 @@ static void cleanup_workqueue_thread(struct cpu_workqueue_struct *cwq)
> > * checks list_empty(), and a "normal" queue_work() can't use
> > * a dead CPU.
> > */
> > - trace_workqueue_destruction(cwq->thread);
> > - kthread_stop(cwq->thread);
> > - cwq->thread = NULL;
> > +
> > + if (cwq->thread) {
> > + trace_workqueue_destruction(cwq->thread, cwq->wq->singlethread);
> > + kthread_stop(cwq->thread);
> > + cwq->thread = NULL;
> > + }
>
> cleanup_workqueue_thread() has already checked cwq->thread != NULL,
> how can it become NULL ?
Right.
> And let's suppose a user does:
>
> wq = create_workqueue(...., when_needed => 1);
> queue_work(wq, some_work);
> destroy_workqueue(wq);
>
> This can return before create_wq_thread_late() populates the necessary
> cwq->thread. We can destroy/free workqueue with the pending work_structs,
> no?
>
> Oleg.
>
Totally right. I'll fix these bugs.
Thanks!
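One possible shape for the destroy-side fix can also be modeled in userspace (a toy sketch with illustrative names, not the actual patch): destroy_workqueue() cancels any still-pending creation request under the same lock the late-creation work takes, so the creation work can never touch a freed workqueue.

```c
#include <assert.h>
#include <pthread.h>

struct toy_wq {
	pthread_mutex_t lock;
	int creation_pending;	/* a creation work is queued on keventd */
	int thread_created;
	int destroyed;
};

/* Runs (asynchronously, from the events workqueue in the real patch);
 * it re-validates the request under the lock before acting on it. */
static void creation_work(struct toy_wq *wq)
{
	pthread_mutex_lock(&wq->lock);
	if (wq->creation_pending && !wq->destroyed)
		wq->thread_created = 1;	/* stand-in for create+start thread */
	wq->creation_pending = 0;
	pthread_mutex_unlock(&wq->lock);
}

static void toy_destroy(struct toy_wq *wq)
{
	pthread_mutex_lock(&wq->lock);
	wq->creation_pending = 0;	/* cancel a not-yet-run creation */
	wq->destroyed = 1;
	pthread_mutex_unlock(&wq->lock);
	/* only now is it safe to stop the thread (if any) and free wq */
}
```

The real fix has more to do than this sketch: it must also flush or cancel the work_structs already queued on the dying workqueue, not just the creation request.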
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-01-27 3:07 ` Alasdair G Kergon
@ 2009-01-27 8:57 ` Frederic Weisbecker
2009-01-27 12:43 ` Ingo Molnar
0 siblings, 1 reply; 37+ messages in thread
From: Frederic Weisbecker @ 2009-01-27 8:57 UTC (permalink / raw)
To: Alasdair G Kergon, Ingo Molnar, linux-kernel, Andrew Morton,
Lai Jiangshan, Peter Zijlstra, Steven Rostedt
On Tue, Jan 27, 2009 at 03:07:27AM +0000, Alasdair G Kergon wrote:
> On Tue, Jan 27, 2009 at 01:17:11AM +0100, Frederic Weisbecker wrote:
> > For several reasons:
> > _ Unneeded built drivers for my system that create workqueue(s) when they init
> > _ Services which need their own workqueue, for several reasons, but who receive
> > very rare jobs (often never)
>
> > I hadn't any problems until now with this patch but I think it needs more testing,
> > like with cpu hotplug, and some renaming for its functions and structures...
> > And I would like to receive some comments and feelings before continuing. So this
> > is just an RFC :-)
>
> Make sure this optimisation also works when the system's running low on memory
> if workqueues are involved in "making forward progress". Doubtless there
> are other reasons for apparently-unused workqueues too.
That's true. But currently each useless workqueue thread consumes a
task_struct in memory, so with this patch the system actually consumes less memory
than before.
If the system is running low on memory... well, perhaps I can reschedule the thread
creation after some delay...?
> How about reviewing each particular workqueue that you've identified to see if
> it can be created later or even not at all, or destroyed while it's not being
> used, or if some workqueues can be shared - rather than presuming that a change
> like this would be safe globally?
>
> Alasdair
> --
> agk@redhat.com
I did it with kpsmoused, but there are so many workqueues:
$ git-grep create_singlethread_workqueue | wc -l
122
And a lot of them are related to particular drivers for hardware I don't have,
so it wouldn't be easy for me to test and fix them.
And note that this patch only solves part of the problem. It avoids the useless
thread creation, not the useless workqueue creation. So some useless memory is
still used, and that problem can only be solved one workqueue at a time.
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-01-27 8:57 ` Frederic Weisbecker
@ 2009-01-27 12:43 ` Ingo Molnar
0 siblings, 0 replies; 37+ messages in thread
From: Ingo Molnar @ 2009-01-27 12:43 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: Alasdair G Kergon, linux-kernel, Andrew Morton, Lai Jiangshan,
Peter Zijlstra, Steven Rostedt
* Frederic Weisbecker <fweisbec@gmail.com> wrote:
> On Tue, Jan 27, 2009 at 03:07:27AM +0000, Alasdair G Kergon wrote:
> > On Tue, Jan 27, 2009 at 01:17:11AM +0100, Frederic Weisbecker wrote:
> > > For several reasons:
> > > _ Unneeded built drivers for my system that create workqueue(s) when they init
> > > _ Services which need their own workqueue, for several reasons, but who receive
> > > very rare jobs (often never)
> >
> > > I hadn't any problems until now with this patch but I think it needs more testing,
> > > like with cpu hotplug, and some renaming for its functions and structures...
> > > And I would like to receive some comments and feelings before continuing. So this
> > > is just an RFC :-)
> >
> > Make sure this optimisation also works when the system's running low
> > on memory if workqueues are involved in "making forward progress".
> > Doubtless there are other reasons for apparently-unused workqueues
> > too.
>
> That's true. But currently, each useless workqueue thread is consuming a
> task_struct in memory, so this patch makes actually consuming less
> memory than before. If the system is running low on memory...well
> perhaps I can reschedule the thread creation after some delays...?
Let's put a warning in there to make sure it's not forgotten - and deal
with it if it happens.
Ingo
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-01-27 0:30 ` Arjan van de Ven
@ 2009-01-31 18:03 ` Frederic Weisbecker
2009-01-31 18:15 ` Arjan van de Ven
0 siblings, 1 reply; 37+ messages in thread
From: Frederic Weisbecker @ 2009-01-31 18:03 UTC (permalink / raw)
To: Arjan van de Ven
Cc: Ingo Molnar, linux-kernel, Andrew Morton, Lai Jiangshan,
Peter Zijlstra, Steven Rostedt
On Mon, Jan 26, 2009 at 04:30:15PM -0800, Arjan van de Ven wrote:
> On Tue, 27 Jan 2009 01:17:11 +0100
> Frederic Weisbecker <fweisbec@gmail.com> wrote:
>
> > While looking at the statistics from the workqueue tracer, I've been
> > suprised by the number of useless workqueues I had:
> >
> > CPU INSERTED EXECUTED NAME
> > | | | |
> >
> > * 0 0 kpsmoused
> > * 0 0 ata_aux
> > * 0 0 cqueue
> > * 0 0 kacpi_notify
> > * 0 0 kacpid
> > * 998 998 khelper
> > * 0 0 cpuset
> >
> > 1 0 0 hda0/1
> > 1 42 42 reiserfs/1
> > 1 0 0 scsi_tgtd/1
> > 1 0 0 aio/1
> > 1 0 0 ata/1
> > 1 193 193 kblockd/1
> > 1 0 0 kintegrityd/1
> > 1 4 4 work_on_cpu/1
> > 1 1244 1244 events/1
> >
> > 0 0 0 hda0/0
> > 0 63 63 reiserfs/0
> > 0 0 0 scsi_tgtd/0
> > 0 0 0 aio/0
> > 0 0 0 ata/0
> > 0 188 188 kblockd/0
> > 0 0 0 kintegrityd/0
> > 0 16 16 work_on_cpu/0
> > 0 1360 1360 events/0
> >
> >
> > All of the workqueues with 0 work inserted do nothing.
> > For several reasons:
> >
> > _ Unneeded built drivers for my system that create workqueue(s) when
> > they init _ Services which need their own workqueue, for several
> > reasons, but who receive very rare jobs (often never)
> > _ ...?
> >
> > And the result of git-grep create_singlethread_workqueue is even more
> > surprising.
> >
> > So I've started a patch which creates the workqueues by default
> > without thread except the kevents one.
> > They will have their thread created and started only when these
> > workqueues will receive a first work to do. This is performed by
> > submitting a task's creation work to the kevent workqueues which are
> > always there, and are the only one which have their thread started on
> > creation.
> >
> > The result after this patch:
> >
> > # CPU INSERTED EXECUTED NAME
> > # | | | |
> >
> > * 999 1000 khelper
> >
> > 1 5 6 reiserfs/1
> > 1 0 2 work_on_cpu/1
> > 1 86 87 kblockd/1
> > 1 14 16 work_on_cpu/1
> > 1 149 149 events/1
> >
> > 0 15 16 reiserfs/0
> > 0 85 86 kblockd/0
> > 0 146 146 events/0
> >
> >
> > Dropping 16 useless kernel threads in my system.
> > (Yes the inserted values are not synced with the executed one because
> > the tracers looses the first events. I just rewrote some parts to
> > make it work with this patch) .
> > I guess I will update this tracer to display the "shadow workqueues"
> > which have no threads too.
> >
> > I hadn't any problems until now with this patch but I think it needs
> > more testing, like with cpu hotplug, and some renaming for its
> > functions and structures... And I would like to receive some comments
> > and feelings before continuing. So this is just an RFC :-)
> >
>
> one thing to look at for work queues that never get work is to see if
> they are appropriate for the async function call interface
> (the only requirement for that is that they need to cope with calling
> inline in exceptional cases)
>
Hi Arjan,
There is one thing that makes it hard to replace workqueues in such cases:
there is no guarantee the function will run in user context, because of this
condition:
if (!async_enabled || !entry || atomic_read(&entry_count) > MAX_WORK)
I wanted to replace kpsmoused with an async function, but I need to schedule
a slow work that can't be done from irq context...
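The behaviour behind that condition can be modeled in userspace (illustrative names, not the kernel's async API): when the machinery is disabled or saturated, the function is simply called inline by the submitter, which is exactly why a callback that must sleep, or that may be submitted from irq context, cannot rely on it.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

#define MAX_WORK 2		/* saturation threshold in this toy */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int entry_count;		/* calls currently in flight */

struct call {
	void (*func)(void *data, int ran_inline);
	void *data;
};

static void *async_thread(void *arg)
{
	struct call *c = arg;

	c->func(c->data, 0);	/* ran from its own thread */
	pthread_mutex_lock(&lock);
	entry_count--;
	pthread_mutex_unlock(&lock);
	free(c);
	return NULL;
}

static void schedule_async(void (*func)(void *, int), void *data)
{
	pthread_mutex_lock(&lock);
	int saturated = entry_count >= MAX_WORK;
	if (!saturated)
		entry_count++;
	pthread_mutex_unlock(&lock);

	if (saturated) {	/* the fallback: call inline, right here */
		func(data, 1);
		return;
	}
	struct call *c = malloc(sizeof(*c));
	c->func = func;
	c->data = data;
	pthread_t t;
	pthread_create(&t, NULL, async_thread, c);
	pthread_detach(t);
}
```

A callback passed to this interface must therefore tolerate running in the submitter's context, which rules it out for work that needs a sleepable thread unconditionally.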
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-01-31 18:03 ` Frederic Weisbecker
@ 2009-01-31 18:15 ` Arjan van de Ven
2009-01-31 18:28 ` Frederic Weisbecker
0 siblings, 1 reply; 37+ messages in thread
From: Arjan van de Ven @ 2009-01-31 18:15 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: Ingo Molnar, linux-kernel, Andrew Morton, Lai Jiangshan,
Peter Zijlstra, Steven Rostedt
On Sat, 31 Jan 2009 19:03:49 +0100
Frederic Weisbecker <fweisbec@gmail.com> wrote:
> > one thing to look at for work queues that never get work is to see
> > if they are appropriate for the async function call interface
> > (the only requirement for that is that they need to cope with
> > calling inline in exceptional cases)
> >
>
>
> Hi Arjan,
>
> There is one thing that make it hard to replace workqueues in such
> cases, there is not guarantee the function will run in user context
> because of this condition:
>
> if (!async_enabled || !entry || atomic_read(&entry_count) > MAX_WORK)
>
> I wanted to replace kpsmoused with an async function but I want to
> schedule a slow work that can't be done from irq...
If there is enough value in having a variant that is guaranteed to
always run from a thread, we could add that. That likely requires the
caller to pass in a bit of memory, but it's not too big a deal.
If there is only one user in the entire kernel it might not be worth it,
but if it's a common pattern then for sure...
Do you have a feeling for how common this is?
--
Arjan van de Ven Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-01-31 18:15 ` Arjan van de Ven
@ 2009-01-31 18:28 ` Frederic Weisbecker
2009-02-01 16:22 ` Stefan Richter
2009-02-01 21:37 ` Benjamin Herrenschmidt
0 siblings, 2 replies; 37+ messages in thread
From: Frederic Weisbecker @ 2009-01-31 18:28 UTC (permalink / raw)
To: Arjan van de Ven
Cc: Ingo Molnar, linux-kernel, Andrew Morton, Lai Jiangshan,
Peter Zijlstra, Steven Rostedt
On Sat, Jan 31, 2009 at 10:15:02AM -0800, Arjan van de Ven wrote:
> On Sat, 31 Jan 2009 19:03:49 +0100
> Frederic Weisbecker <fweisbec@gmail.com> wrote:
>
> > > one thing to look at for work queues that never get work is to see
> > > if they are appropriate for the async function call interface
> > > (the only requirement for that is that they need to cope with
> > > calling inline in exceptional cases)
> > >
> >
> >
> > Hi Arjan,
> >
> > There is one thing that makes it hard to replace workqueues in such
> > cases: there is no guarantee the function will run in process context
> > because of this condition:
> >
> > if (!async_enabled || !entry || atomic_read(&entry_count) > MAX_WORK)
> >
> > I wanted to replace kpsmoused with an async function but I want to
> > schedule a slow work that can't be done from irq...
>
> if there is enough value in having a variant that is guaranteed to
> always run from a thread we could add that. Likely that requires the
> caller to pass in a bit of memory, but that's not too big a deal.
> If there is only one user in the entire kernel it might not be worth it,
> but if it's a common pattern then for sure...
>
> do you have a feeling for how common this is?
>
I don't know; most of those I've looked at don't document the reason
for having a private workqueue. I guess most of them could use the usual kevent.
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-01-31 18:28 ` Frederic Weisbecker
@ 2009-02-01 16:22 ` Stefan Richter
2009-02-01 17:04 ` Arjan van de Ven
2009-02-01 21:37 ` Benjamin Herrenschmidt
1 sibling, 1 reply; 37+ messages in thread
From: Stefan Richter @ 2009-02-01 16:22 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: Arjan van de Ven, Ingo Molnar, linux-kernel, Andrew Morton,
Lai Jiangshan, Peter Zijlstra, Steven Rostedt
Frederic Weisbecker wrote:
> On Sat, Jan 31, 2009 at 10:15:02AM -0800, Arjan van de Ven wrote:
>> On Sat, 31 Jan 2009 19:03:49 +0100
>> Frederic Weisbecker <fweisbec@gmail.com> wrote:
>>> There is one thing that makes it hard to replace workqueues in such
>>> cases: there is no guarantee the function will run in process context
>>> because of this condition:
>>>
>>> if (!async_enabled || !entry || atomic_read(&entry_count) > MAX_WORK)
>>>
>>> I wanted to replace kpsmoused with an async function but I want to
>>> schedule a slow work that can't be done from irq...
>> if there is enough value in having a variant that is guaranteed to
>> always run from a thread we could add that. Likely that requires the
>> caller to pass in a bit of memory, but that's not too big a deal.
>> If there is only one user in the entire kernel it might not be worth it,
>> but if it's a common pattern then for sure...
>>
>> do you have a feeling for how common this is?
>>
>
>
> I don't know; most of those I've looked at don't document the reason
> for having a private workqueue. I guess most of them could use the usual kevent.
I have stuff in drivers/firewire/ done in a private workqueue and some
in the shared workqueue which I will eventually move either into
short-lived, ad hoc created kthreads /or/ preferably into a thread pool
implementation --- if such a thing has found its way into the
kernel by the time I have time for my project.
I need to get callers of scsi_add_device and scsi_remove_device out of
the picture.
--
Stefan Richter
-=====-==--= --=- ----=
http://arcgraph.de/sr/
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-01 16:22 ` Stefan Richter
@ 2009-02-01 17:04 ` Arjan van de Ven
2009-02-01 17:40 ` Stefan Richter
0 siblings, 1 reply; 37+ messages in thread
From: Arjan van de Ven @ 2009-02-01 17:04 UTC (permalink / raw)
To: Stefan Richter
Cc: Frederic Weisbecker, Ingo Molnar, linux-kernel, Andrew Morton,
Lai Jiangshan, Peter Zijlstra, Steven Rostedt
On Sun, 01 Feb 2009 17:22:47 +0100
Stefan Richter <stefanr@s5r6.in-berlin.de> wrote:
> > I don't know; most of those I've looked at don't document
> > the reason for having a private workqueue. I guess most of them could use
> > the usual kevent.
>
> I have stuff in drivers/firewire/ done in a private workqueue and some
> in the shared workqueue which I will eventually move either into
> short-lived, ad hoc created kthreads /or/ preferably into a thread pool
> implementation --- if such a thing has found its way into the
> kernel by the time I have time for my project.
>
> I need to get callers of scsi_add_device and scsi_remove_device out of
> the picture.
what are the requirements for you to do this?
right now, the async calls are "fire and forget... but you can wait for
completion". The "price" is that the code needs to allocate a little
bit of memory for management overhead. The second price then is that
if this allocation fails, the code needs to run "in context".
Would this be a problem?
I can add a variant of the API where you pass in some memory of your
own, which then does not fail. (but also won't get freed automatically
etc)
--
Arjan van de Ven Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-01 17:04 ` Arjan van de Ven
@ 2009-02-01 17:40 ` Stefan Richter
2009-02-01 17:47 ` Arjan van de Ven
0 siblings, 1 reply; 37+ messages in thread
From: Stefan Richter @ 2009-02-01 17:40 UTC (permalink / raw)
To: Arjan van de Ven
Cc: Frederic Weisbecker, Ingo Molnar, linux-kernel, Andrew Morton,
Lai Jiangshan, Peter Zijlstra, Steven Rostedt
Arjan van de Ven wrote:
> On Sun, 01 Feb 2009 17:22:47 +0100
> Stefan Richter <stefanr@s5r6.in-berlin.de> wrote:
>> I have stuff in drivers/firewire/ done in a private workqueue and some
>> in the shared workqueue which I will eventually move either into
>> short-lived, ad hoc created kthreads /or/ preferably into a thread pool
>> implementation --- if such a thing has found its way into the
>> kernel by the time I have time for my project.
...
> what are the requirements for you to do this?
>
> right now, the async calls are "fire and forget... but you can wait for
> completion". The "price" is that the code needs to allocate a little
> bit of memory for management overhead. The second price then is that
> if this allocation fails, the code needs to run "in context".
>
> Would this be a problem?
>
> I can add a variant of the API where you pass in some memory of your
> own, which then does not fail. (but also won't get freed automatically
> etc)
I'm not quite sure yet but I believe I can run the work in the current
context if there was an allocation failure, provided those failures are
unlikely.
Hmm. I don't see the "async/mgr" in the process list of that 2.6.29-rc3
box next to me. Under which circumstances is this facility available
(long after boot)? Or am I just barging into a discussion to /make/ it
available after boot?
--
Stefan Richter
-=====-==--= --=- ----=
http://arcgraph.de/sr/
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-01 17:40 ` Stefan Richter
@ 2009-02-01 17:47 ` Arjan van de Ven
2009-02-01 18:06 ` Stefan Richter
0 siblings, 1 reply; 37+ messages in thread
From: Arjan van de Ven @ 2009-02-01 17:47 UTC (permalink / raw)
To: Stefan Richter
Cc: Frederic Weisbecker, Ingo Molnar, linux-kernel, Andrew Morton,
Lai Jiangshan, Peter Zijlstra, Steven Rostedt
On Sun, 01 Feb 2009 18:40:18 +0100
Stefan Richter <stefanr@s5r6.in-berlin.de> wrote:
> I'm not quite sure yet but I believe I can run the work in the current
> context if there was an allocation failure, provided those failures
> are unlikely.
>
> Hmm. I don't see the "async/mgr" in the process list of that
> 2.6.29-rc3 box next to me. Under which circumstances is this
> facility available (long after boot)? Or am I just barging into a
> discussion to /make/ it available after boot?
for 2.6.29 only it was turned off by default, for 2.6.30 it will be on
by default.
--
Arjan van de Ven Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-01 17:47 ` Arjan van de Ven
@ 2009-02-01 18:06 ` Stefan Richter
2009-02-01 18:11 ` Arjan van de Ven
0 siblings, 1 reply; 37+ messages in thread
From: Stefan Richter @ 2009-02-01 18:06 UTC (permalink / raw)
To: Arjan van de Ven
Cc: Frederic Weisbecker, Ingo Molnar, linux-kernel, Andrew Morton,
Lai Jiangshan, Peter Zijlstra, Steven Rostedt
Arjan van de Ven wrote:
> On Sun, 01 Feb 2009 18:40:18 +0100
> Stefan Richter <stefanr@s5r6.in-berlin.de> wrote:
>> I don't see the "async/mgr" in the process list of that
>> 2.6.29-rc3 box next to me. Under which circumstances is this
>> facility available (long after boot)? Or am I just barging into a
>> discussion to /make/ it available after boot?
>
> for 2.6.29 only it was turned off by default, for 2.6.30 it will be on
> by default.
Ah right, I forgot. Thanks for the pointers,
--
Stefan Richter
-=====-==--= --=- ----=
http://arcgraph.de/sr/
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-01 18:06 ` Stefan Richter
@ 2009-02-01 18:11 ` Arjan van de Ven
0 siblings, 0 replies; 37+ messages in thread
From: Arjan van de Ven @ 2009-02-01 18:11 UTC (permalink / raw)
To: Stefan Richter
Cc: Frederic Weisbecker, Ingo Molnar, linux-kernel, Andrew Morton,
Lai Jiangshan, Peter Zijlstra, Steven Rostedt
On Sun, 01 Feb 2009 19:06:48 +0100
Stefan Richter <stefanr@s5r6.in-berlin.de> wrote:
> Arjan van de Ven wrote:
> > On Sun, 01 Feb 2009 18:40:18 +0100
> > Stefan Richter <stefanr@s5r6.in-berlin.de> wrote:
> >> I don't see the "async/mgr" in the process list of that
> >> 2.6.29-rc3 box next to me. Under which circumstances is this
> >> facility available (long after boot)? Or am I just barging into a
> >> discussion to /make/ it available after boot?
> >
> > for 2.6.29 only it was turned off by default, for 2.6.30 it will be
> > on by default.
>
> Ah right, I forgot. Thanks for the pointers,
you can get it right now by just putting "fastboot" on the kernel
commandline.
btw what you're describing looks like a good fit; that's good news ;)
if you need any help or have questions just let me know
--
Arjan van de Ven Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-01-31 18:28 ` Frederic Weisbecker
2009-02-01 16:22 ` Stefan Richter
@ 2009-02-01 21:37 ` Benjamin Herrenschmidt
2009-02-02 2:24 ` Frederic Weisbecker
2009-02-02 5:19 ` Arjan van de Ven
1 sibling, 2 replies; 37+ messages in thread
From: Benjamin Herrenschmidt @ 2009-02-01 21:37 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: Arjan van de Ven, Ingo Molnar, linux-kernel, Andrew Morton,
Lai Jiangshan, Peter Zijlstra, Steven Rostedt
> I don't know; most of those I've looked at don't document the reason
> for having a private workqueue. I guess most of them could use the usual kevent.
The main problem with kevent is that it gets clogged up.
That's where thread pools kick in... tried using Dave Howells' slow
work?
Cheers,
Ben.
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-01 21:37 ` Benjamin Herrenschmidt
@ 2009-02-02 2:24 ` Frederic Weisbecker
2009-02-02 6:00 ` Benjamin Herrenschmidt
2009-02-02 5:19 ` Arjan van de Ven
1 sibling, 1 reply; 37+ messages in thread
From: Frederic Weisbecker @ 2009-02-02 2:24 UTC (permalink / raw)
To: Benjamin Herrenschmidt
Cc: Arjan van de Ven, Ingo Molnar, linux-kernel, Andrew Morton,
Lai Jiangshan, Peter Zijlstra, Steven Rostedt
On Mon, Feb 02, 2009 at 08:37:41AM +1100, Benjamin Herrenschmidt wrote:
>
> > I don't know; most of those I've looked at don't document the reason
> > for having a private workqueue. I guess most of them could use the usual kevent.
>
> The main problem with kevent is that it gets clogged up.
I don't think so. Here is a snapshot of the workqueue tracer in my
box currently:
# CPU INSERTED EXECUTED NAME
# | | | |
1 0 0 hda0/1
1 589 589 reiserfs/1
1 0 0 scsi_tgtd/1
1 0 0 aio/1
1 0 0 ata/1
1 378 378 kblockd/1
1 0 0 kintegrityd/1
1 2 2 work_on_cpu/1
1 8706 8706 events/1
0 19994 19994 cqueue
0 0 0 hda0/0
0 501 501 reiserfs/0
0 0 0 scsi_tgtd/0
0 0 0 aio/0
0 0 0 ata_aux
0 0 0 ata/0
0 4 4 kacpi_notify
0 22 22 kacpid
0 379 379 kblockd/0
0 0 0 kintegrityd/0
0 1056 1056 khelper
0 15 15 work_on_cpu/0
0 9367 9367 events/0
0 0 0 cpuset
And the result of uptime:
02:51:40 up 2:22, 6 users, load average: 0.22, 0.34, 0.59
So I have a total of 18073 works sent and performed by kevent over
10300 seconds.
An average of a bit less than two works per second on kevent is hardly something
clogged, especially since we are talking about small jobs, usually those which play
the role of bottom halves and would have fit in a tasklet/softirq in the past.
Now look at the other workqueues.
Most of them don't do anything (more likely, I don't do anything with them).
khelper can't be replaced since it performs slow works which may wait for the
completion of userspace processes, although I'm not sure it's still useful after boot.
cqueue is usually inactive, but I artificially raised a good number of events on it; I
just sent a patch to create its workqueue only when needed.
And for the others, I don't know yet.
> That's where thread pools kick in... tried using Dave Howells' slow
> work?
I think it can be useful; as an example, the kpsmoused workqueue receives rare
and slow works for mouse resynchronisation.
I wanted to convert it into a slow work created only on the fly.
But... I can't find it. I've seen some patches about it, but it doesn't seem
to be merged.
So I tried something else for kpsmoused: turning it into a thread created on the fly.
But I think these slow works could be useful for such things.
> Cheers,
> Ben.
>
>
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-01 21:37 ` Benjamin Herrenschmidt
2009-02-02 2:24 ` Frederic Weisbecker
@ 2009-02-02 5:19 ` Arjan van de Ven
2009-02-02 6:01 ` Benjamin Herrenschmidt
2009-02-02 9:01 ` Stefan Richter
1 sibling, 2 replies; 37+ messages in thread
From: Arjan van de Ven @ 2009-02-02 5:19 UTC (permalink / raw)
To: Benjamin Herrenschmidt
Cc: Frederic Weisbecker, Ingo Molnar, linux-kernel, Andrew Morton,
Lai Jiangshan, Peter Zijlstra, Steven Rostedt
On Mon, 02 Feb 2009 08:37:41 +1100
Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:
>
> > I don't know; most of those I've looked at don't document
> > the reason for having a private workqueue. I guess most of them could use
> > the usual kevent.
>
> The main problem with kevent is that it gets clogged up.
>
> That's where thread pools kick in... tried using Dave Howells' slow
> work?
async function calls are pretty much the same and are actually in mainline.
Dave Howells' stuff in addition plays some extremely weird refcounting
games that I cannot imagine anyone but him needing...
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-02 2:24 ` Frederic Weisbecker
@ 2009-02-02 6:00 ` Benjamin Herrenschmidt
2009-02-02 8:42 ` Stefan Richter
2009-02-02 11:26 ` Frederic Weisbecker
0 siblings, 2 replies; 37+ messages in thread
From: Benjamin Herrenschmidt @ 2009-02-02 6:00 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: Arjan van de Ven, Ingo Molnar, linux-kernel, Andrew Morton,
Lai Jiangshan, Peter Zijlstra, Steven Rostedt
On Mon, 2009-02-02 at 03:24 +0100, Frederic Weisbecker wrote:
> On Mon, Feb 02, 2009 at 08:37:41AM +1100, Benjamin Herrenschmidt wrote:
> >
> > > I don't know; most of those I've looked at don't document the reason
> > > for having a private workqueue. I guess most of them could use the usual kevent.
> >
> > The main problem with kevent is that it gets clogged up.
>
>
> I don't think so. Here is a snapshot of the workqueue tracer in my
> box currently:
That's not quite what I meant ...
The main problem with keventd I'd say is that it's used in all sorts of
exceptional code paths (ie, driver reset path, error handling, etc...) for
things that will happily msleep for tens of milliseconds, that sort of
thing.
IE. It will be pretty responsive -in general- but can suffer from
horrible latencies every now and then.
Cheers,
Ben.
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-02 5:19 ` Arjan van de Ven
@ 2009-02-02 6:01 ` Benjamin Herrenschmidt
2009-02-02 9:01 ` Stefan Richter
1 sibling, 0 replies; 37+ messages in thread
From: Benjamin Herrenschmidt @ 2009-02-02 6:01 UTC (permalink / raw)
To: Arjan van de Ven
Cc: Frederic Weisbecker, Ingo Molnar, linux-kernel, Andrew Morton,
Lai Jiangshan, Peter Zijlstra, Steven Rostedt
On Sun, 2009-02-01 at 21:19 -0800, Arjan van de Ven wrote:
> On Mon, 02 Feb 2009 08:37:41 +1100
> Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:
>
> >
> > > I don't know; most of those I've looked at don't document
> > > the reason for having a private workqueue. I guess most of them could use
> > > the usual kevent.
> >
> > The main problem with kevent is that it gets clogged up.
> >
> > That's where thread pools kick in... tried using Dave Howells' slow
> > work?
>
> async function calls are pretty much the same and are actually in mainline.
> Dave Howells' stuff in addition plays some extremely weird refcounting
> games that I cannot imagine anyone but him needing...
I missed that new shiny stuff, I'll have a look.
Cheers,
Ben.
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-02 6:00 ` Benjamin Herrenschmidt
@ 2009-02-02 8:42 ` Stefan Richter
2009-02-02 9:05 ` Benjamin Herrenschmidt
2009-02-02 11:32 ` Frederic Weisbecker
2009-02-02 11:26 ` Frederic Weisbecker
1 sibling, 2 replies; 37+ messages in thread
From: Stefan Richter @ 2009-02-02 8:42 UTC (permalink / raw)
To: Benjamin Herrenschmidt
Cc: Frederic Weisbecker, Arjan van de Ven, Ingo Molnar, linux-kernel,
Andrew Morton, Lai Jiangshan, Peter Zijlstra, Steven Rostedt
Benjamin Herrenschmidt wrote:
> On Mon, 2009-02-02 at 03:24 +0100, Frederic Weisbecker wrote:
>> On Mon, Feb 02, 2009 at 08:37:41AM +1100, Benjamin Herrenschmidt wrote:
>> >
>> > > I don't know; most of those I've looked at don't document the reason
>> > > for having a private workqueue. I guess most of them could use the usual kevent.
I rather suspect that the majority of private workqueues are there for
good reasons.
>> > The main problem with kevent is that it gets clogged up.
>>
>> I don't think so. Here is a snapshot of the workqueue tracer in my
>> box currently:
>
> That's not quite what I meant ...
>
> The main problem with keventd I'd say is that it's used in all sorts of
> exceptional code paths (ie, driver reset path, error handling, etc...) for
> things that will happily msleep for tens of milliseconds, that sort of
> thing.
>
> IE. It will be pretty responsive -in general- but can suffer from
> horrible latencies every now and then.
Actually it /should/ be the other way around:
The shared workqueue should only be used for work that sleeps only
briefly (perhaps with the exception of very unlikely longer sleeps e.g.
for allocations that cause paging).
Work which /may/ sleep longer, for example one that performs SCSI transactions,
needs to go into a private workqueue or other kind of context.
OTOH you are right too; work which must not be deferred too long by work
from another uncooperative/unfair subsystem is probably also better off
in its own workqueue...
--
Stefan Richter
-=====-==--= --=- ---=-
http://arcgraph.de/sr/
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-02 5:19 ` Arjan van de Ven
2009-02-02 6:01 ` Benjamin Herrenschmidt
@ 2009-02-02 9:01 ` Stefan Richter
2009-02-02 14:45 ` Arjan van de Ven
1 sibling, 1 reply; 37+ messages in thread
From: Stefan Richter @ 2009-02-02 9:01 UTC (permalink / raw)
To: Arjan van de Ven
Cc: Benjamin Herrenschmidt, Frederic Weisbecker, Ingo Molnar,
linux-kernel, Andrew Morton, Lai Jiangshan, Peter Zijlstra,
Steven Rostedt
Arjan van de Ven wrote:
> On Mon, 02 Feb 2009 08:37:41 +1100
> Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:
>> That's where thread pools kick in... tried using Dave Howells' slow
>> work?
>
> async function calls are pretty much the same and are actually in mainline.
> Dave Howells' stuff in addition plays some extremely weird refcounting
> games that I cannot imagine anyone but him needing...
I haven't looked at this particular slow-work implementation. Do you
refer to some internal refcounting or to some refcounting as a service
for the API user?
IME, scheduling work and executing work is often accompanied by
reference counting of this sort:
static int schedule_delayed_work_wrapper(struct foo *container,
					 unsigned long delay)
{
	int scheduled;

	foo_get(container);
	scheduled = schedule_delayed_work(&container->work, delay);
	if (!scheduled)
		foo_put(container);
	return scheduled;
}

static void foo_work(struct work_struct *work)
{
	struct foo *container =
		container_of(work, struct foo, work.work);

	/* ... do the work ... */
	foo_put(container);
}
But I think that whenever there are additional call sites of
foo_put(container), this refcounting of container_of(work, ...) cannot
be easily generalized.
--
Stefan Richter
-=====-==--= --=- ---=-
http://arcgraph.de/sr/
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-02 8:42 ` Stefan Richter
@ 2009-02-02 9:05 ` Benjamin Herrenschmidt
2009-02-02 9:14 ` Oliver Neukum
2009-02-02 11:32 ` Frederic Weisbecker
1 sibling, 1 reply; 37+ messages in thread
From: Benjamin Herrenschmidt @ 2009-02-02 9:05 UTC (permalink / raw)
To: Stefan Richter
Cc: Frederic Weisbecker, Arjan van de Ven, Ingo Molnar, linux-kernel,
Andrew Morton, Lai Jiangshan, Peter Zijlstra, Steven Rostedt
On Mon, 2009-02-02 at 09:42 +0100, Stefan Richter wrote:
> >
> > IE. It will be pretty responsive -in general- but can suffer from
> > horrible latencies every now and then.
>
> Actually it /should/ be the other way around:
>
> The shared workqueue should only be used for work that sleeps only
> briefly (perhaps with the exception of very unlikely longer sleeps e.g.
> for allocations that cause paging).
I agree, I'm just stating the current situation :-) Hopefully something
like async funcs / slow work / whatever will take over the case of stuff
that wants to be around for longer. I haven't had a chance to look at
the async funcs yet, sounds like they may do the job tho in which case
I'll look at converting a driver or two to use them.
> Work which /may/ sleep longer, for example performs SCSI transactions,
> needs to go into a private workqueue or other kind of context.
Well, it's a bit silly to allocate a private workqueue with all its
associated per-CPU kernel threads for something as rare as resetting
your eth NIC ... or even SCSI error handling in fact.
> OTOH you are right too; work which must not be deferred too long by work
> from another uncooperative/unfair subsystem is probably also better off
> in its own workqueue...
I suspect the main reason for dedicated work queues is that, plus the
per-CPU affinity.
Cheers,
Ben.
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-02 9:05 ` Benjamin Herrenschmidt
@ 2009-02-02 9:14 ` Oliver Neukum
2009-02-02 9:46 ` Benjamin Herrenschmidt
2009-02-02 11:39 ` Stefan Richter
0 siblings, 2 replies; 37+ messages in thread
From: Oliver Neukum @ 2009-02-02 9:14 UTC (permalink / raw)
To: Benjamin Herrenschmidt
Cc: Stefan Richter, Frederic Weisbecker, Arjan van de Ven,
Ingo Molnar, linux-kernel, Andrew Morton, Lai Jiangshan,
Peter Zijlstra, Steven Rostedt
On Monday 02 February 2009 10:05:28, Benjamin Herrenschmidt wrote:
> > Work which /may/ sleep longer, for example performs SCSI transactions,
> > needs to go into a private workqueue or other kind of context.
>
> Well, it's a bit silly to allocate a private workqueue with all its
> associated per-CPU kernel threads for something as rare as resetting
> your eth NIC ... or even SCSI error handling in fact.
How do you avoid a deadlock if SCSI error handling doesn't use
a dedicated workqueue?
Regards
Oliver
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-02 9:14 ` Oliver Neukum
@ 2009-02-02 9:46 ` Benjamin Herrenschmidt
2009-02-02 9:58 ` Peter Zijlstra
2009-02-02 10:03 ` Oliver Neukum
2009-02-02 11:39 ` Stefan Richter
1 sibling, 2 replies; 37+ messages in thread
From: Benjamin Herrenschmidt @ 2009-02-02 9:46 UTC (permalink / raw)
To: Oliver Neukum
Cc: Stefan Richter, Frederic Weisbecker, Arjan van de Ven,
Ingo Molnar, linux-kernel, Andrew Morton, Lai Jiangshan,
Peter Zijlstra, Steven Rostedt
On Mon, 2009-02-02 at 10:14 +0100, Oliver Neukum wrote:
> On Monday 02 February 2009 10:05:28, Benjamin Herrenschmidt wrote:
> > > Work which /may/ sleep longer, for example performs SCSI transactions,
> > > needs to go into a private workqueue or other kind of context.
> >
> > Well, it's a bit silly to allocate a private workqueue with all its
> > associated per-CPU kernel threads for something as rare as resetting
> > your eth NIC ... or even SCSI error handling in fact.
>
> How do you avoid a deadlock if SCSI error handling doesn't use
> a dedicated workqueue?
Something such as slow-work or async funcs (not sure about the latter, I
have to look at the implementation), but the basic idea is to have a pool
of threads for "generic" delayed work: when one is busy, pick another,
and the pool itself should resize if there's too much pressure.
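That idea can be sketched as follows (a user-space model with pthreads; all names are illustrative, and this is not slow-work's actual implementation). Work is queued on a shared list; if no worker is idle when work arrives, the pool grows, so one slow job cannot clog the queue for the others:

```c
#include <pthread.h>
#include <stdlib.h>

struct work {
	void (*func)(void *data);
	void *data;
	struct work *next;
};

struct pool {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	struct work *head, *tail;
	int idle;	/* workers currently waiting for work */
	int nthreads;
};

static void *pool_worker(void *arg)
{
	struct pool *p = arg;

	for (;;) {
		pthread_mutex_lock(&p->lock);
		p->idle++;
		while (!p->head)
			pthread_cond_wait(&p->cond, &p->lock);
		p->idle--;

		struct work *w = p->head;
		p->head = w->next;
		if (!p->head)
			p->tail = NULL;
		pthread_mutex_unlock(&p->lock);

		w->func(w->data);	/* may sleep as long as it likes */
		free(w);
	}
	return NULL;
}

/* Called with p->lock held. */
static void pool_grow(struct pool *p)
{
	pthread_t t;

	if (pthread_create(&t, NULL, pool_worker, p) == 0) {
		pthread_detach(t);
		p->nthreads++;
	}
}

static int pool_queue(struct pool *p, void (*func)(void *), void *data)
{
	struct work *w = malloc(sizeof(*w));

	if (!w)
		return -1;
	w->func = func;
	w->data = data;
	w->next = NULL;

	pthread_mutex_lock(&p->lock);
	if (p->tail)
		p->tail->next = w;
	else
		p->head = w;
	p->tail = w;
	/* Everyone busy: resize under pressure instead of queueing
	 * behind a potentially slow job. */
	if (p->idle == 0)
		pool_grow(p);
	pthread_cond_signal(&p->cond);
	pthread_mutex_unlock(&p->lock);
	return 0;
}
```

A real implementation would also shrink the pool when workers stay idle for a while; that part is omitted from the sketch.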
Ben.
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-02 9:46 ` Benjamin Herrenschmidt
@ 2009-02-02 9:58 ` Peter Zijlstra
2009-02-02 10:03 ` Oliver Neukum
1 sibling, 0 replies; 37+ messages in thread
From: Peter Zijlstra @ 2009-02-02 9:58 UTC (permalink / raw)
To: Benjamin Herrenschmidt
Cc: Oliver Neukum, Stefan Richter, Frederic Weisbecker,
Arjan van de Ven, Ingo Molnar, linux-kernel, Andrew Morton,
Lai Jiangshan, Steven Rostedt, Chris Mason
On Mon, 2009-02-02 at 20:46 +1100, Benjamin Herrenschmidt wrote:
> On Mon, 2009-02-02 at 10:14 +0100, Oliver Neukum wrote:
> > On Monday 02 February 2009 10:05:28, Benjamin Herrenschmidt wrote:
> > > > Work which /may/ sleep longer, for example performs SCSI transactions,
> > > > needs to go into a private workqueue or other kind of context.
> > >
> > > Well, it's a bit silly to allocate a private workqueue with all its
> > > associated per-CPU kernel threads for something as rare as resetting
> > > your eth NIC ... or even SCSI error handling in fact.
> >
> > How do you avoid a deadlock if SCSI error handling doesn't use
> > a dedicated workqueue?
>
> Something such as slow-work or async funcs (not sure about the latter, I
> have to look at the implementation), but the basic idea is to have a pool
> of threads for "generic" delayed work: when one is busy, pick another,
> and the pool itself should resize if there's too much pressure.
One thing that comes to mind is that some people want strict cpu
affinity for their work, while others want the work to be freely
scheduled so that the load-balancer can get the most out of the system.
I think Mason does something like that in btrfs, where he likes to
keep all CPUs busy doing checksumming or something or other.
One could of course layer these things so that the principal thread
pools are per-cpu and then let the next layer RR the work between all
cpus.
Furthermore, all cpus seems like too wide a mask; imagine what happens
on a 4k cpu system: do we really want a cpu half the world away from the
IO node doing checksumming? -- is there a nice solution?
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-02 9:46 ` Benjamin Herrenschmidt
2009-02-02 9:58 ` Peter Zijlstra
@ 2009-02-02 10:03 ` Oliver Neukum
2009-02-02 20:44 ` Benjamin Herrenschmidt
1 sibling, 1 reply; 37+ messages in thread
From: Oliver Neukum @ 2009-02-02 10:03 UTC (permalink / raw)
To: Benjamin Herrenschmidt
Cc: Stefan Richter, Frederic Weisbecker, Arjan van de Ven,
Ingo Molnar, linux-kernel, Andrew Morton, Lai Jiangshan,
Peter Zijlstra, Steven Rostedt
On Monday 02 February 2009 10:46:33, Benjamin Herrenschmidt wrote:
> On Mon, 2009-02-02 at 10:14 +0100, Oliver Neukum wrote:
> > On Monday 02 February 2009 10:05:28, Benjamin Herrenschmidt wrote:
> > > > Work which /may/ sleep longer, for example performs SCSI transactions,
> > > > needs to go into a private workqueue or other kind of context.
> > >
> > > > Well, it's a bit silly to allocate a private workqueue with all its
> > > > associated per-CPU kernel threads for something as rare as resetting
> > > > your eth NIC ... or even SCSI error handling in fact.
> >
> > How do you avoid a deadlock if SCSI error handling doesn't use
> > a dedicated workqueue?
>
> Something such as slow-work or async funcs (not sure about the latter, I
> have to look at the implementation), but the basic idea is to have a pool
> of threads for "generic" delayed work: when one is busy, pick another,
> and the pool itself should resize if there's too much pressure.
And all that without using GFP_KERNEL?
Regards
Oliver
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-02 6:00 ` Benjamin Herrenschmidt
2009-02-02 8:42 ` Stefan Richter
@ 2009-02-02 11:26 ` Frederic Weisbecker
1 sibling, 0 replies; 37+ messages in thread
From: Frederic Weisbecker @ 2009-02-02 11:26 UTC (permalink / raw)
To: Benjamin Herrenschmidt
Cc: Arjan van de Ven, Ingo Molnar, linux-kernel, Andrew Morton,
Lai Jiangshan, Peter Zijlstra, Steven Rostedt
On Mon, Feb 02, 2009 at 05:00:43PM +1100, Benjamin Herrenschmidt wrote:
> On Mon, 2009-02-02 at 03:24 +0100, Frederic Weisbecker wrote:
> > On Mon, Feb 02, 2009 at 08:37:41AM +1100, Benjamin Herrenschmidt wrote:
> > >
> > > > I don't know; most of those I've looked at don't document the reason
> > > > for having a private workqueue. I guess most of them could use the usual kevent.
> > >
> > > The main problem with kevent is that it gets clogged up.
> >
> >
> > I don't think so. Here is a snapshot of the workqueue tracer in my
> > box currently:
>
> That's not quite what I meant ...
>
> The main problem with keventd I'd say is that it's used in all sorts of
> exceptional code paths (ie, driver reset path, error handling, etc...) for
> things that will happily msleep for tens of milliseconds, that sort of
> thing.
> IE. It will be pretty responsive -in general- but can suffer from
> horrible latencies every now and then.
So I think it should be reserved for small, quick jobs which don't sleep for
too long.
I can improve the workqueue tracer to locate these callsites if needed. And
why not warn about such cases if some kind of option is selected.
> Cheers,
> Ben.
>
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-02 8:42 ` Stefan Richter
2009-02-02 9:05 ` Benjamin Herrenschmidt
@ 2009-02-02 11:32 ` Frederic Weisbecker
1 sibling, 0 replies; 37+ messages in thread
From: Frederic Weisbecker @ 2009-02-02 11:32 UTC (permalink / raw)
To: Stefan Richter
Cc: Benjamin Herrenschmidt, Arjan van de Ven, Ingo Molnar,
linux-kernel, Andrew Morton, Lai Jiangshan, Peter Zijlstra,
Steven Rostedt
On Mon, Feb 02, 2009 at 09:42:45AM +0100, Stefan Richter wrote:
> Benjamin Herrenschmidt wrote:
> > On Mon, 2009-02-02 at 03:24 +0100, Frederic Weisbecker wrote:
> >> On Mon, Feb 02, 2009 at 08:37:41AM +1100, Benjamin Herrenschmidt wrote:
> >> >
> >> > > I don't know, most of those I've looked at are not documented as to the
> >> > > reason for a private workqueue. I guess most of them can use the usual kevent.
>
> I rather suspect that the majority of private workqueues are there for
> good reasons.
>
> >> > The main problem with kevent is that it gets clogged up.
> >>
> >> I don't think so. Here is a snapshot of the workqueue tracer in my
> >> box currently:
> >
> > That's not quite what I meant ...
> >
> > The main problem with keventd I'd say is that it's used in all sorts of
> > exceptional code paths (i.e. driver reset path, error handling, etc...) for
> > things that will msleep happily for tens of milliseconds, that sort of
> > thing.
> >
> > I.e. it will be pretty responsive -in general- but can suffer from
> > horrible latencies every now and then.
>
> Actually it /should/ be the other way around:
>
> The shared workqueue should only be used for work that sleeps only
> briefly (perhaps with the exception of very unlikely longer sleeps e.g.
> for allocations that cause paging).
>
> Work which /may/ sleep longer, for example performs SCSI transactions,
> needs to go into a private workqueue or other kind of context.
Right. But most of the time, these workqueues receive few events.
That's why async looks like a good alternative for such cases: the threads
from the async core which perform the jobs are created and destroyed on the
fly, depending on the number of jobs queued.
> OTOH you are right too; work which must not be deferred too long by work
> from another uncooperative/unfair subsystem is probably also better off
> in its own workqueue...
Or callsites which use kevent and may sleep for too long could be identified
and fixed...
> Stefan Richter
> -=====-==--= --=- ---=-
> http://arcgraph.de/sr/
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-02 9:14 ` Oliver Neukum
2009-02-02 9:46 ` Benjamin Herrenschmidt
@ 2009-02-02 11:39 ` Stefan Richter
1 sibling, 0 replies; 37+ messages in thread
From: Stefan Richter @ 2009-02-02 11:39 UTC (permalink / raw)
To: Oliver Neukum
Cc: Benjamin Herrenschmidt, Frederic Weisbecker, Arjan van de Ven,
Ingo Molnar, linux-kernel, Andrew Morton, Lai Jiangshan,
Peter Zijlstra, Steven Rostedt
Oliver Neukum wrote:
> On Monday, 02 February 2009 at 10:05:28, Benjamin Herrenschmidt wrote:
[I wrote]
>> > Work which /may/ sleep longer, for example performs SCSI transactions,
>> > needs to go into a private workqueue or other kind of context.
>>
>> Well, it's a bit silly to allocate a private workqueue with all its
>> associated per-CPU kernel threads for something as rare as resetting
>> your eth NIC ... or even SCSI error handling in fact.
>
> How do you avoid a deadlock if SCSI error handling doesn't use
> a dedicated workqueue?
SCSI error handling happens in dedicated per-Scsi_Host threads. These
look like candidates for some kind of thread pool implementation too
though (like Arjan's kernel/async.c), especially since a Scsi_Host
instance is sometimes actually a per-target instance.
But I was thinking primarily about scsi_add_device, scsi_remove_device and
friends. Incidentally, device probing is exactly what Arjan's fastboot
facility is targeting. (Sometimes device shutdown is slow too, and then
wants to be parallelized against other bus event handling.)
--
Stefan Richter
-=====-==--= --=- ---=-
http://arcgraph.de/sr/
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-02 9:01 ` Stefan Richter
@ 2009-02-02 14:45 ` Arjan van de Ven
0 siblings, 0 replies; 37+ messages in thread
From: Arjan van de Ven @ 2009-02-02 14:45 UTC (permalink / raw)
To: Stefan Richter
Cc: Benjamin Herrenschmidt, Frederic Weisbecker, Ingo Molnar,
linux-kernel, Andrew Morton, Lai Jiangshan, Peter Zijlstra,
Steven Rostedt
On Mon, 02 Feb 2009 10:01:56 +0100
Stefan Richter <stefanr@s5r6.in-berlin.de> wrote:
> Arjan van de Ven wrote:
> > On Mon, 02 Feb 2009 08:37:41 +1100
> > Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:
> >> That's were thread pools kick in ... tried using Dave Howells slow
> >> work ?
> >
> > async function calls are pretty much the same and actually in
> > mainline. Dave Howells' stuff in addition plays some extremely
> > weird refcounting games that I cannot imagine anyone but him
> > needing...
>
> I haven't looked at this particular slow-work implementation. Do you
> refer to some internal refcounting or to some refcounting as a service
> for the API user?
the latter, which I find a bit weird ;)
--
Arjan van de Ven Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-01-27 0:17 [RFC][PATCH] create workqueue threads only when needed Frederic Weisbecker
` (2 preceding siblings ...)
2009-01-27 3:07 ` Alasdair G Kergon
@ 2009-02-02 14:49 ` Daniel Walker
2009-02-02 14:51 ` Frédéric Weisbecker
3 siblings, 1 reply; 37+ messages in thread
From: Daniel Walker @ 2009-02-02 14:49 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: Ingo Molnar, linux-kernel, Andrew Morton, Lai Jiangshan,
Peter Zijlstra, Steven Rostedt
On Tue, 2009-01-27 at 01:17 +0100, Frederic Weisbecker wrote:
> All of the workqueues with 0 work inserted do nothing.
> For several reasons:
>
> _ Unneeded built drivers for my system that create workqueue(s) when they init
> _ Services which need their own workqueue, for several reasons, but who receive
> very rare jobs (often never)
> _ ...?
>
Some of the workqueues you have on your system can be removed just by
tuning your kernel config. It's more desirable to be able to remove the
whole unused feature, since that frees all of its unused memory, not
just the thread.
Daniel
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-02 14:49 ` Daniel Walker
@ 2009-02-02 14:51 ` Frédéric Weisbecker
2009-02-02 15:40 ` Daniel Walker
0 siblings, 1 reply; 37+ messages in thread
From: Frédéric Weisbecker @ 2009-02-02 14:51 UTC (permalink / raw)
To: Daniel Walker
Cc: Ingo Molnar, linux-kernel, Andrew Morton, Lai Jiangshan,
Peter Zijlstra, Steven Rostedt
2009/2/2 Daniel Walker <dwalker@fifo99.com>:
> On Tue, 2009-01-27 at 01:17 +0100, Frederic Weisbecker wrote:
>
>> All of the workqueues with 0 work inserted do nothing.
>> For several reasons:
>>
>> _ Unneeded built drivers for my system that create workqueue(s) when they init
>> _ Services which need their own workqueue, for several reasons, but who receive
>> very rare jobs (often never)
>> _ ...?
>>
>
> Some of the workqueues you have on your system can be removed just by
> tuning your kernel config. It's more desirable to be able to remove the
> whole unused feature, since that frees all of its unused memory, not
> just the thread.
>
> Daniel
Yes, of course. I'm just thinking about the distros, which enable a lot
of options by default.
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-02 14:51 ` Frédéric Weisbecker
@ 2009-02-02 15:40 ` Daniel Walker
0 siblings, 0 replies; 37+ messages in thread
From: Daniel Walker @ 2009-02-02 15:40 UTC (permalink / raw)
To: Frédéric Weisbecker
Cc: Ingo Molnar, linux-kernel, Andrew Morton, Lai Jiangshan,
Peter Zijlstra, Steven Rostedt
On Mon, 2009-02-02 at 15:51 +0100, Frédéric Weisbecker wrote:
> 2009/2/2 Daniel Walker <dwalker@fifo99.com>:
> > On Tue, 2009-01-27 at 01:17 +0100, Frederic Weisbecker wrote:
> >
> >> All of the workqueues with 0 work inserted do nothing.
> >> For several reasons:
> >>
> >> _ Unneeded built drivers for my system that create workqueue(s) when they init
> >> _ Services which need their own workqueue, for several reasons, but who receive
> >> very rare jobs (often never)
> >> _ ...?
> >>
> >
> > Some of the workqueues you have on your system can be removed just by
> > tuning your kernel config. It's more desirable to be able to remove the
> > whole unused feature, since that frees all of its unused memory, not
> > just the thread.
> >
> > Daniel
>
>
> Yes, of course. I'm just thinking about the distros, which enable a lot
> of options by default.
The problem is that you're just removing the visible part of the memory
waste. Even distros can and do use modules, so some of these features
wouldn't get loaded anyway.
I think it's a better policy to assume that if a kernel has a feature
enabled, that feature will get used, including its workqueue.
Daniel
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-02 10:03 ` Oliver Neukum
@ 2009-02-02 20:44 ` Benjamin Herrenschmidt
2009-02-04 9:54 ` Oliver Neukum
0 siblings, 1 reply; 37+ messages in thread
From: Benjamin Herrenschmidt @ 2009-02-02 20:44 UTC (permalink / raw)
To: Oliver Neukum
Cc: Stefan Richter, Frederic Weisbecker, Arjan van de Ven,
Ingo Molnar, linux-kernel, Andrew Morton, Lai Jiangshan,
Peter Zijlstra, Steven Rostedt
> > Something such as slow-work or async funcs (not sure about the latter, I
> > have to look at the implementation) but the basic idea is to have a pool
> > of threads for "generic" delayed work: when it's busy, pick another one,
> > and the pool itself should resize if there's too much pressure.
>
> And all that without using GFP_KERNEL?
Not too hard. There are options here: one is to always keep one thread
available in the pool to create new ones; another is to hop via keventd
for the creation, which is still better than spending 200ms resetting a
network NIC, etc...
Ben.
* Re: [RFC][PATCH] create workqueue threads only when needed
2009-02-02 20:44 ` Benjamin Herrenschmidt
@ 2009-02-04 9:54 ` Oliver Neukum
0 siblings, 0 replies; 37+ messages in thread
From: Oliver Neukum @ 2009-02-04 9:54 UTC (permalink / raw)
To: Benjamin Herrenschmidt
Cc: Stefan Richter, Frederic Weisbecker, Arjan van de Ven,
Ingo Molnar, linux-kernel, Andrew Morton, Lai Jiangshan,
Peter Zijlstra, Steven Rostedt
On Monday, 02 February 2009 at 21:44:17, Benjamin Herrenschmidt wrote:
>
> > > Something such as slow-work or async funcs (not sure about the latter, I
> > > have to look at the implementation) but the basic idea is to have a pool
> > > of threads for "generic" delayed work: when it's busy, pick another one,
> > > and the pool itself should resize if there's too much pressure.
> >
> > And all that without using GFP_KERNEL?
>
> Not too hard. There are options here: one is to always keep one thread
> available in the pool to create new ones; another is to hop via keventd
> for the creation, which is still better than spending 200ms resetting a
> network NIC, etc...
But how does that solve the underlying problem of not being
allowed to hit the block layer? It seems to me you cannot do
without dedicated threads for block error handling. In fact, given
how rare this is, why not start one and share it?
Regards
Oliver
end of thread, other threads: [~2009-02-04 9:53 UTC | newest]
Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-01-27 0:17 [RFC][PATCH] create workqueue threads only when needed Frederic Weisbecker
2009-01-27 0:30 ` Arjan van de Ven
2009-01-31 18:03 ` Frederic Weisbecker
2009-01-31 18:15 ` Arjan van de Ven
2009-01-31 18:28 ` Frederic Weisbecker
2009-02-01 16:22 ` Stefan Richter
2009-02-01 17:04 ` Arjan van de Ven
2009-02-01 17:40 ` Stefan Richter
2009-02-01 17:47 ` Arjan van de Ven
2009-02-01 18:06 ` Stefan Richter
2009-02-01 18:11 ` Arjan van de Ven
2009-02-01 21:37 ` Benjamin Herrenschmidt
2009-02-02 2:24 ` Frederic Weisbecker
2009-02-02 6:00 ` Benjamin Herrenschmidt
2009-02-02 8:42 ` Stefan Richter
2009-02-02 9:05 ` Benjamin Herrenschmidt
2009-02-02 9:14 ` Oliver Neukum
2009-02-02 9:46 ` Benjamin Herrenschmidt
2009-02-02 9:58 ` Peter Zijlstra
2009-02-02 10:03 ` Oliver Neukum
2009-02-02 20:44 ` Benjamin Herrenschmidt
2009-02-04 9:54 ` Oliver Neukum
2009-02-02 11:39 ` Stefan Richter
2009-02-02 11:32 ` Frederic Weisbecker
2009-02-02 11:26 ` Frederic Weisbecker
2009-02-02 5:19 ` Arjan van de Ven
2009-02-02 6:01 ` Benjamin Herrenschmidt
2009-02-02 9:01 ` Stefan Richter
2009-02-02 14:45 ` Arjan van de Ven
[not found] ` <20090126162807.1131c777.akpm@linux-foundation.org>
2009-01-27 1:46 ` Oleg Nesterov
2009-01-27 8:40 ` Frederic Weisbecker
2009-01-27 3:07 ` Alasdair G Kergon
2009-01-27 8:57 ` Frederic Weisbecker
2009-01-27 12:43 ` Ingo Molnar
2009-02-02 14:49 ` Daniel Walker
2009-02-02 14:51 ` Frédéric Weisbecker
2009-02-02 15:40 ` Daniel Walker