public inbox for linux-kernel@vger.kernel.org
* [PATCH 0/2] workqueue: Two API's to update delayed works quickly
@ 2008-09-22  4:04 Krishna Kumar
  2008-09-22  4:04 ` [PATCH 1/2]: workqueue: Implement the kernel API Krishna Kumar
  2008-09-22  4:04 ` [PATCH 2/2]: workqueue: Modify some users to use the new API Krishna Kumar
  0 siblings, 2 replies; 5+ messages in thread
From: Krishna Kumar @ 2008-09-22  4:04 UTC (permalink / raw)
  To: linux-kernel; +Cc: krkumar2, Krishna Kumar

From: Krishna Kumar <krkumar2@in.ibm.com>

Implement two APIs for quickly updating delayed work:
	void schedule_update_delayed_work(struct delayed_work *dwork,
					  unsigned long delay);
	void queue_update_delayed_work(struct workqueue_struct *wq,
				       struct delayed_work *dwork,
				       unsigned long delay);

These APIs update an existing work entry more efficiently (though they can be
used to queue a new work entry too) when the operation is performed very
frequently. The rationale is to avoid first cancelling the work/timer and then
re-adding it when the same work is queued many times in quick succession. An
additional optimization is to do nothing when a request falls within 1 jiffy
past the current expiry time. This can result in the work running up to 1
jiffy before the minimum time requested (though if another request arrives 1
jiffy or more later, the work expires at the correct time). The assumption
behind this optimization is that the scheduled work does not require 100%
accuracy at the expense of performance; if that assumption is faulty, the
'time_in_range()' call can be changed to 'jiffies + delay == timer->expires'.

queue_update_delayed_work_on() and schedule_update_delayed_work_on() are not
implemented as there are currently no users that would benefit from these
variants.

Example possible users (FS's, wireless drivers):
------------------------------------------------
	lbs_scan_networks, nfs4_renew_state, ocfs2_schedule_truncate_log_flush,
	nfs4_schedule_state_renewal, afs_flush_callback_breaks, afs_reap_server,
	afs_purge_servers, afs_vlocation_reaper, afs_vlocation_purge,
	o2hb_arm_write_timeout, isr_indicate_rf_kill, isr_scan_complete,
	lbs_postpone_association_work, ipw_radio_kill_sw,
	__ipw_led_activity_on, ds2760_battery_resume, etc.

Performance (numbers in jiffies):
---------------------------------
These APIs are useful in the following cases:
	a. An add followed by cancel+add with a long period between the adds.
	   The time saved is clearing and re-setting the WORK_STRUCT_PENDING
	   bit, plus one fewer API call.
	b. An add followed by cancel+add with a very short time between the
	   adds. The time saved is all of the above, plus the cancel and
	   re-add are avoided entirely. The results below test this case
	   (with HZ = 1000).

1. queue_delayed_work+cancel vs queue_update_delayed_work on 1 CPU. Queue the
   same entry on the same CPU serially - do this many times.
	Laptop (Xeon 2 cpu): Saves 83.7%
		ORG: Time: 41654
		NEW: Time: 6804
	Server (x86-64, 4 cpu): Saves 93.8%
		ORG: Time: 211488
		NEW: Time: 13187
2. queue_delayed_work+cancel vs queue_update_delayed_work on 'N' CPUs in
   parallel. Queue 'N' different entries on 'N' CPUs in parallel - do this
   many times.
	Laptop (Xeon 2 cpu): Saves 95.7%
		ORG: Time: 146064, 182160
		NEW: Time: 7014, 7045
	Server (x86-64, 4 cpu): Saves 93.7%
		ORG: Time: 165255, 225159, 227658, 237000
		NEW: Time: 13446, 13449, 13455, 13477
3. schedule_delayed_work+cancel vs schedule_update_delayed_work on 1 CPU.
   Queue the same entry on the same CPU serially - do this many times.
	Laptop (Xeon 2 cpu): Saves 83.3%
		ORG: Time: 41878
		NEW: Time: 6987
	Server (x86-64, 4 cpu): Saves 92.3%
		ORG: Time: 184893
		NEW: Time: 14205
4. schedule_delayed_work+cancel vs schedule_update_delayed_work on 'N' CPUs in
   parallel. Queue 'N' different entries on 'N' CPUs in parallel - do this
   many times.
	Laptop (Xeon 2 cpu): Saves 95.73%
		ORG: Time: 145147, 180987
		NEW: Time: 6955, 6958
	Server (x86-64, 4 cpu): Saves 94.5%
		ORG: Time: 165031, 263509, 277071, 321099
		NEW: Time: 14211, 14211, 14216, 14242

[PATCH 1/2]: Implement the kernel APIs
[PATCH 2/2]: Modify some drivers to use the new APIs instead of the old ones

Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com>
---

^ permalink raw reply	[flat|nested] 5+ messages in thread

* [PATCH 1/2]: workqueue: Implement the kernel API
  2008-09-22  4:04 [PATCH 0/2] workqueue: Two API's to update delayed works quickly Krishna Kumar
@ 2008-09-22  4:04 ` Krishna Kumar
  2008-09-22 14:10   ` Oleg Nesterov
  2008-09-22  4:04 ` [PATCH 2/2]: workqueue: Modify some users to use the new API Krishna Kumar
  1 sibling, 1 reply; 5+ messages in thread
From: Krishna Kumar @ 2008-09-22  4:04 UTC (permalink / raw)
  To: linux-kernel; +Cc: krkumar2, Krishna Kumar

From: Krishna Kumar <krkumar2@in.ibm.com>

Implement two APIs for quickly updating delayed work:
	void schedule_update_delayed_work(struct delayed_work *dwork,
					  unsigned long delay);
	void queue_update_delayed_work(struct workqueue_struct *wq,
				       struct delayed_work *dwork,
				       unsigned long delay);

These APIs update an existing work entry more efficiently (though they can be
used to queue a new work entry too) when the operation is performed very
frequently. The rationale is to avoid first cancelling the work/timer and then
re-adding it when the same work is queued many times in quick succession.

Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com>
---
 include/linux/workqueue.h |    4 +
 kernel/workqueue.c        |  127 ++++++++++++++++++++++++++++++------
 2 files changed, 113 insertions(+), 18 deletions(-)

diff -ruNp 2.6.27-rc7-org/include/linux/workqueue.h 2.6.27-rc7-new/include/linux/workqueue.h
--- 2.6.27-rc7-org/include/linux/workqueue.h	2008-09-17 13:14:27.000000000 +0530
+++ 2.6.27-rc7-new/include/linux/workqueue.h	2008-09-17 13:17:43.000000000 +0530
@@ -185,6 +185,8 @@ extern int queue_delayed_work(struct wor
 			struct delayed_work *work, unsigned long delay);
 extern int queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
 			struct delayed_work *work, unsigned long delay);
+extern void queue_update_delayed_work(struct workqueue_struct *wq,
+			struct delayed_work *dwork, unsigned long delay);
 
 extern void flush_workqueue(struct workqueue_struct *wq);
 extern void flush_scheduled_work(void);
@@ -194,6 +196,8 @@ extern int schedule_work_on(int cpu, str
 extern int schedule_delayed_work(struct delayed_work *work, unsigned long delay);
 extern int schedule_delayed_work_on(int cpu, struct delayed_work *work,
 					unsigned long delay);
+extern void schedule_update_delayed_work(struct delayed_work *work,
+					unsigned long delay);
 extern int schedule_on_each_cpu(work_func_t func);
 extern int current_is_keventd(void);
 extern int keventd_up(void);
diff -ruNp 2.6.27-rc7-org/kernel/workqueue.c 2.6.27-rc7-new/kernel/workqueue.c
--- 2.6.27-rc7-org/kernel/workqueue.c	2008-09-17 13:14:07.000000000 +0530
+++ 2.6.27-rc7-new/kernel/workqueue.c	2008-09-18 12:52:22.000000000 +0530
@@ -202,6 +202,30 @@ static void delayed_work_timer_fn(unsign
 	__queue_work(wq_per_cpu(wq, smp_processor_id()), &dwork->work);
 }
 
+static inline void __queue_delayed_work(int cpu, struct delayed_work *dwork,
+					struct work_struct *work,
+					struct workqueue_struct *wq,
+					unsigned long delay)
+{
+	struct timer_list *timer = &dwork->timer;
+
+	BUG_ON(timer_pending(timer));
+	BUG_ON(!list_empty(&work->entry));
+
+	timer_stats_timer_set_start_info(timer);
+
+	/* This stores cwq for the moment, for the timer_fn */
+	set_wq_data(work, wq_per_cpu(wq, raw_smp_processor_id()));
+	timer->expires = jiffies + delay;
+	timer->data = (unsigned long)dwork;
+	timer->function = delayed_work_timer_fn;
+
+	if (unlikely(cpu >= 0))
+		add_timer_on(timer, cpu);
+	else
+		add_timer(timer);
+}
+
 /**
  * queue_delayed_work - queue work on a workqueue after delay
  * @wq: workqueue to use
@@ -233,31 +257,63 @@ int queue_delayed_work_on(int cpu, struc
 			struct delayed_work *dwork, unsigned long delay)
 {
 	int ret = 0;
-	struct timer_list *timer = &dwork->timer;
 	struct work_struct *work = &dwork->work;
 
 	if (!test_and_set_bit(WORK_STRUCT_PENDING, work_data_bits(work))) {
-		BUG_ON(timer_pending(timer));
-		BUG_ON(!list_empty(&work->entry));
-
-		timer_stats_timer_set_start_info(&dwork->timer);
-
-		/* This stores cwq for the moment, for the timer_fn */
-		set_wq_data(work, wq_per_cpu(wq, raw_smp_processor_id()));
-		timer->expires = jiffies + delay;
-		timer->data = (unsigned long)dwork;
-		timer->function = delayed_work_timer_fn;
-
-		if (unlikely(cpu >= 0))
-			add_timer_on(timer, cpu);
-		else
-			add_timer(timer);
+		__queue_delayed_work(cpu, dwork, work, wq, delay);
 		ret = 1;
 	}
 	return ret;
 }
 EXPORT_SYMBOL_GPL(queue_delayed_work_on);
 
+static int __cancel_work_timer_internal(struct work_struct *work,
+					struct timer_list *timer);
+
+/**
+ * queue_update_delayed_work - queue or update a work entry on a workqueue
+ *			       after delay
+ * @wq: workqueue to use
+ * @dwork: delayable work to queue
+ * @delay: number of jiffies to wait before queueing
+ *
+ * This function is useful to update an existing delayed work without having
+ * to first cancel it. E.g. code snippets that can use this API:
+ * 	if (delayed_work_pending(&work))
+ * 		cancel_delayed_work(&work);
+ * 	queue_delayed_work(wq, &work, delay);
+ *
+ * Passing delay=0 will result in immediate queueing of the entry, whether
+ * queue'd earlier or otherwise.
+ *
+ * Always succeeds. If work is already on a queue and different in the expiry,
+ * modify it with the new expiry value.
+ */
+void queue_update_delayed_work(struct workqueue_struct *wq,
+			       struct delayed_work *dwork, unsigned long delay)
+{
+	struct work_struct *work = &dwork->work;
+
+	if (likely(test_and_set_bit(WORK_STRUCT_PENDING,
+				    work_data_bits(work)))) {
+		struct timer_list *timer = &dwork->timer;
+
+		/*
+		 * Already present in workqueue. Check if the timer expiry is
+		 * the same. Also, optimize in case requests are within one
+		 * jiffy beyond the set expiry.
+		 */
+		if (time_in_range(jiffies + delay, timer->expires,
+				  timer->expires + 1))
+			return;
+
+		__cancel_work_timer_internal(work, timer);
+	}
+
+	__queue_delayed_work(-1, dwork, work, wq, delay);
+}
+EXPORT_SYMBOL_GPL(queue_update_delayed_work);
+
 static void run_workqueue(struct cpu_workqueue_struct *cwq)
 {
 	spin_lock_irq(&cwq->lock);
@@ -550,8 +606,8 @@ static void wait_on_work(struct work_str
 		wait_on_cpu_work(per_cpu_ptr(wq->cpu_wq, cpu), work);
 }
 
-static int __cancel_work_timer(struct work_struct *work,
-				struct timer_list* timer)
+static int __cancel_work_timer_internal(struct work_struct *work,
+					struct timer_list *timer)
 {
 	int ret;
 
@@ -562,6 +618,15 @@ static int __cancel_work_timer(struct wo
 		wait_on_work(work);
 	} while (unlikely(ret < 0));
 
+	return ret;
+}
+
+static int __cancel_work_timer(struct work_struct *work,
+				struct timer_list *timer)
+{
+	int ret;
+
+	ret = __cancel_work_timer_internal(work, timer);
 	work_clear_pending(work);
 	return ret;
 }
@@ -667,6 +732,32 @@ int schedule_delayed_work_on(int cpu,
 EXPORT_SYMBOL(schedule_delayed_work_on);
 
 /**
+ * schedule_update_delayed_work - put or update a work task in global workqueue
+ *				  after delay
+ * @dwork: job to be done
+ * @delay: number of jiffies to wait or 0 for immediate execution
+ *
+ * This function is useful to update an existing delayed work without having
+ * to first cancel it. E.g. code snippets that can use this API:
+ * 	if (delayed_work_pending(&work))
+ * 		cancel_delayed_work(&work);
+ * 	schedule_delayed_work(&work, delay);
+ *
+ * After waiting for a given time this puts a job in the kernel-global
+ * workqueue. Passing delay=0 will result in immediate queueing of the entry,
+ * whether queue'd earlier or otherwise.
+ *
+ * Always succeeds. If work is already on a queue and different in the expiry,
+ * modify it with the new expiry value.
+ */
+void schedule_update_delayed_work(struct delayed_work *dwork,
+				  unsigned long delay)
+{
+	queue_update_delayed_work(keventd_wq, dwork, delay);
+}
+EXPORT_SYMBOL(schedule_update_delayed_work);
+
+/**
  * schedule_on_each_cpu - call a function on each online CPU from keventd
  * @func: the function to call
  *


* [PATCH 2/2]: workqueue: Modify some users to use the new API
  2008-09-22  4:04 [PATCH 0/2] workqueue: Two API's to update delayed works quickly Krishna Kumar
  2008-09-22  4:04 ` [PATCH 1/2]: workqueue: Implement the kernel API Krishna Kumar
@ 2008-09-22  4:04 ` Krishna Kumar
  1 sibling, 0 replies; 5+ messages in thread
From: Krishna Kumar @ 2008-09-22  4:04 UTC (permalink / raw)
  To: linux-kernel; +Cc: krkumar2, Krishna Kumar

From: Krishna Kumar <krkumar2@in.ibm.com>

Modify some users to use the new API.

Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com>
---
 drivers/net/wireless/ipw2100.c       |   12 ++++++------
 drivers/net/wireless/libertas/scan.c |    5 ++---
 drivers/net/wireless/libertas/wext.c |    7 +++----
 fs/afs/vlocation.c                   |   11 +++--------
 fs/ocfs2/cluster/heartbeat.c         |    5 ++---
 5 files changed, 16 insertions(+), 24 deletions(-)

diff -ruNp 2.6.27-rc7-org/drivers/net/wireless/ipw2100.c 2.6.27-rc7-new/drivers/net/wireless/ipw2100.c
--- 2.6.27-rc7-org/drivers/net/wireless/ipw2100.c	2008-09-17 13:15:53.000000000 +0530
+++ 2.6.27-rc7-new/drivers/net/wireless/ipw2100.c	2008-09-17 13:26:08.000000000 +0530
@@ -2085,9 +2085,8 @@ static void isr_indicate_rf_kill(struct 
 
 	/* Make sure the RF Kill check timer is running */
 	priv->stop_rf_kill = 0;
-	cancel_delayed_work(&priv->rf_kill);
-	queue_delayed_work(priv->workqueue, &priv->rf_kill,
-			   round_jiffies_relative(HZ));
+	queue_update_delayed_work(priv->workqueue, &priv->rf_kill,
+				  round_jiffies_relative(HZ));
 }
 
 static void send_scan_event(void *data)
@@ -4241,9 +4240,10 @@ static int ipw_radio_kill_sw(struct ipw2
 					  "disabled by HW switch\n");
 			/* Make sure the RF_KILL check timer is running */
 			priv->stop_rf_kill = 0;
-			cancel_delayed_work(&priv->rf_kill);
-			queue_delayed_work(priv->workqueue, &priv->rf_kill,
-					   round_jiffies_relative(HZ));
+
+			queue_update_delayed_work(priv->workqueue,
+						  &priv->rf_kill,
+						  round_jiffies_relative(HZ));
 		} else
 			schedule_reset(priv);
 	}
diff -ruNp 2.6.27-rc7-org/drivers/net/wireless/libertas/scan.c 2.6.27-rc7-new/drivers/net/wireless/libertas/scan.c
--- 2.6.27-rc7-org/drivers/net/wireless/libertas/scan.c	2008-09-17 13:15:53.000000000 +0530
+++ 2.6.27-rc7-new/drivers/net/wireless/libertas/scan.c	2008-09-17 13:27:46.000000000 +0530
@@ -435,9 +435,8 @@ int lbs_scan_networks(struct lbs_private
 				priv->scan_channel = to_scan;
 			else
 				priv->scan_channel += to_scan;
-			cancel_delayed_work(&priv->scan_work);
-			queue_delayed_work(priv->work_thread, &priv->scan_work,
-					   msecs_to_jiffies(300));
+			queue_update_delayed_work(priv->work_thread,
+				&priv->scan_work, msecs_to_jiffies(300));
 			/* skip over GIWSCAN event */
 			goto out;
 		}
diff -ruNp 2.6.27-rc7-org/drivers/net/wireless/libertas/wext.c 2.6.27-rc7-new/drivers/net/wireless/libertas/wext.c
--- 2.6.27-rc7-org/drivers/net/wireless/libertas/wext.c	2008-09-17 13:15:53.000000000 +0530
+++ 2.6.27-rc7-new/drivers/net/wireless/libertas/wext.c	2008-09-17 13:26:08.000000000 +0530
@@ -24,10 +24,9 @@
 
 static inline void lbs_postpone_association_work(struct lbs_private *priv)
 {
-	if (priv->surpriseremoved)
-		return;
-	cancel_delayed_work(&priv->assoc_work);
-	queue_delayed_work(priv->work_thread, &priv->assoc_work, HZ / 2);
+	if (!priv->surpriseremoved)
+		queue_update_delayed_work(priv->work_thread, &priv->assoc_work,
+					  HZ / 2);
 }
 
 static inline void lbs_cancel_association_work(struct lbs_private *priv)
diff -ruNp 2.6.27-rc7-org/fs/afs/vlocation.c 2.6.27-rc7-new/fs/afs/vlocation.c
--- 2.6.27-rc7-org/fs/afs/vlocation.c	2008-09-17 13:15:53.000000000 +0530
+++ 2.6.27-rc7-new/fs/afs/vlocation.c	2008-09-17 13:26:08.000000000 +0530
@@ -557,12 +557,8 @@ static void afs_vlocation_reaper(struct 
 		if (expiry > now) {
 			delay = (expiry - now) * HZ;
 			_debug("delay %lu", delay);
-			if (!schedule_delayed_work(&afs_vlocation_reap,
-						   delay)) {
-				cancel_delayed_work(&afs_vlocation_reap);
-				schedule_delayed_work(&afs_vlocation_reap,
-						      delay);
-			}
+			schedule_update_delayed_work(&afs_vlocation_reap,
+						     delay);
 			break;
 		}
 
@@ -615,8 +611,7 @@ void afs_vlocation_purge(void)
 			   &afs_vlocation_update, 0);
 	destroy_workqueue(afs_vlocation_update_worker);
 
-	cancel_delayed_work(&afs_vlocation_reap);
-	schedule_delayed_work(&afs_vlocation_reap, 0);
+	schedule_update_delayed_work(&afs_vlocation_reap, 0);
 }
 
 /*
diff -ruNp 2.6.27-rc7-org/fs/ocfs2/cluster/heartbeat.c 2.6.27-rc7-new/fs/ocfs2/cluster/heartbeat.c
--- 2.6.27-rc7-org/fs/ocfs2/cluster/heartbeat.c	2008-09-17 13:15:53.000000000 +0530
+++ 2.6.27-rc7-new/fs/ocfs2/cluster/heartbeat.c	2008-09-17 13:26:08.000000000 +0530
@@ -172,10 +172,9 @@ static void o2hb_arm_write_timeout(struc
 {
 	mlog(0, "Queue write timeout for %u ms\n", O2HB_MAX_WRITE_TIMEOUT_MS);
 
-	cancel_delayed_work(&reg->hr_write_timeout_work);
 	reg->hr_last_timeout_start = jiffies;
-	schedule_delayed_work(&reg->hr_write_timeout_work,
-			      msecs_to_jiffies(O2HB_MAX_WRITE_TIMEOUT_MS));
+	schedule_update_delayed_work(&reg->hr_write_timeout_work,
+			msecs_to_jiffies(O2HB_MAX_WRITE_TIMEOUT_MS));
 }
 
 static void o2hb_disarm_write_timeout(struct o2hb_region *reg)


* Re: [PATCH 1/2]: workqueue: Implement the kernel API
  2008-09-22  4:04 ` [PATCH 1/2]: workqueue: Implement the kernel API Krishna Kumar
@ 2008-09-22 14:10   ` Oleg Nesterov
  2008-09-23  5:20     ` Krishna Kumar2
  0 siblings, 1 reply; 5+ messages in thread
From: Oleg Nesterov @ 2008-09-22 14:10 UTC (permalink / raw)
  To: Krishna Kumar; +Cc: linux-kernel, Thomas Gleixner

On 09/22, Krishna Kumar wrote:
>
> Implement two API's for quickly updating delayed works:
> 	void schedule_update_delayed_work(struct delayed_work *dwork,
> 					  unsigned long delay);
> 	void queue_update_delayed_work(struct workqueue_struct *wq,
> 				       struct delayed_work *dwork,
> 				       unsigned long delay);
>
> These API's are useful to update an existing work entry more efficiently (but
> can be used to queue a new work entry too) when the operation is done very
> frequently. The rationale is to save time of first cancelling work/timer and
> adding work/timer when the same work is added many times in quick succession.

I agree, this looks like a useful helper. But, afaics, it is not as quick
as it could be, please see below.

> + * Passing delay=0 will result in immediate queueing of the entry, whether
> + * queue'd earlier or otherwise.

The comment doesn't match the code ;)

> + * Always succeeds.

minor, but perhaps it would be nice to change this helper to return 0/1 to
indicate whether the work was pending or not, like __mod_timer().

> +void queue_update_delayed_work(struct workqueue_struct *wq,
> +			       struct delayed_work *dwork, unsigned long delay)
> +{
> +	struct work_struct *work = &dwork->work;
> +
> +	if (likely(test_and_set_bit(WORK_STRUCT_PENDING,
> +				    work_data_bits(work)))) {
> +		struct timer_list *timer = &dwork->timer;
> +
> +		/*
> +		 * Already present in workqueue. Check if the timer expiry is
> +		 * the same. Also, optimize in case requests are within one
> +		 * jiffy beyond the set expiry.
> +		 */
> +		if (time_in_range(jiffies + delay, timer->expires,
> +				  timer->expires + 1))
> +			return;

Not that I argue, but do we really need to optimize this very unlikely case?

> +		__cancel_work_timer_internal(work, timer);

__cancel_work_timer_internal() is slow, mostly due to
wait_on_work()->for_each_cpu_mask_nr(). And please note we don't really
need wait_on_work() once del_timer() || try_to_grab_pending() succeeds.

Can't we take another approach? First, let's add the new helper:

	int __update_timer(struct timer_list *timer, unsigned long expires)
	{
		struct tvec_base *base;
		unsigned long flags;
		int ret = 0;

		base = lock_timer_base(timer, &flags);
		if (timer_pending(timer)) {
			detach_timer(timer, 0);
			timer->expires = expires;
			internal_add_timer(base, timer);
			ret = 1;
		}
		spin_unlock_irqrestore(&base->lock, flags);

		return ret;
	}

Now, something like

	int update_delayed_work(...)
	{
		ret = 0;
		for (;;) {
			if (queue_delayed_work(...))
				break;

			ret = 1;
			if (__update_timer(...))
				break;
			cancel_work_sync(...);
		}

		return ret;
	}

This way the fast path is really fast, and the patch becomes simpler.

What do you think? We can optimize the code (the usage of cancel_work_sync)
further, but perhaps this is enough.

Oleg.



* Re: [PATCH 1/2]: workqueue: Implement the kernel API
  2008-09-22 14:10   ` Oleg Nesterov
@ 2008-09-23  5:20     ` Krishna Kumar2
  0 siblings, 0 replies; 5+ messages in thread
From: Krishna Kumar2 @ 2008-09-23  5:20 UTC (permalink / raw)
  To: Oleg Nesterov; +Cc: linux-kernel, Thomas Gleixner

Oleg Nesterov <oleg@tv-sign.ru> wrote on 09/22/2008 07:40:51 PM:

Thanks for your review comments, Oleg.

> > + * Passing delay=0 will result in immediate queueing of the entry, whether
> > + * queue'd earlier or otherwise.
>
> The comment doesn't match the code ;)

Right! delay=0 doesn't result in immediate queueing of the entry if the
current time has just hit the expiry time (or the timer has not fired yet).
The intention was to let the existing code handle the timer/queueing in this
case, but I agree the comment has to be changed to explain that.

> > +		 * Already present in workqueue. Check if the timer expiry is
> > +		 * the same. Also, optimize in case requests are within one
> > +		 * jiffy beyond the set expiry.
> > +		 */
> > +		if (time_in_range(jiffies + delay, timer->expires,
> > +				  timer->expires + 1))
> > +			return;
>
> Not that I argue, but do we really need to optimize this very unlikely case?

Agree.

> This way the fast path is really fast, and the patch becomes simpler.
>
> What do you think? We can optimize the code (the usage of cancel_work_sync)
> further, but perhaps this is enough.

Thanks for the suggestion. I will code it up and get some testing numbers.
Can I run this by you?

Thanks,

- KK



end of thread, other threads:[~2008-09-23  5:21 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-09-22  4:04 [PATCH 0/2] workqueue: Two API's to update delayed works quickly Krishna Kumar
2008-09-22  4:04 ` [PATCH 1/2]: workqueue: Implement the kernel API Krishna Kumar
2008-09-22 14:10   ` Oleg Nesterov
2008-09-23  5:20     ` Krishna Kumar2
2008-09-22  4:04 ` [PATCH 2/2]: workqueue: Modify some users to use the new API Krishna Kumar
