From: Petr Mladek <pmladek@suse.com>
To: Andrew Morton <akpm@linux-foundation.org>,
Oleg Nesterov <oleg@redhat.com>, Tejun Heo <tj@kernel.org>,
Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>,
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
Josh Triplett <josh@joshtriplett.org>,
Thomas Gleixner <tglx@linutronix.de>,
Linus Torvalds <torvalds@linux-foundation.org>,
Jiri Kosina <jkosina@suse.cz>, Borislav Petkov <bp@suse.de>,
Michal Hocko <mhocko@suse.cz>,
linux-mm@kvack.org, Vlastimil Babka <vbabka@suse.cz>,
linux-api@vger.kernel.org, linux-kernel@vger.kernel.org,
Petr Mladek <pmladek@suse.com>
Subject: [PATCH v8 10/12] kthread: Allow to cancel kthread work
Date: Thu, 9 Jun 2016 15:52:04 +0200
Message-ID: <1465480326-31606-11-git-send-email-pmladek@suse.com>
In-Reply-To: <1465480326-31606-1-git-send-email-pmladek@suse.com>
We are going to use kthread workers more widely and sometimes we will need
to make sure that the work is neither pending nor running.

This patch implements cancel_*_sync() operations inspired by workqueues.
Unlike workqueues, we are synchronized against the other operations via
the worker lock, we use del_timer_sync(), and we count parallel cancel
operations with a counter. Therefore the implementation can be simpler.

First, we check if a worker is assigned. If not, the work has never
been queued after it was initialized.

Second, we take the worker lock. It must be the right one. The work must
not be assigned to another worker unless it has been re-initialized in
between.

Third, we try to cancel the timer if it exists. The timer is deleted
synchronously to make sure that the timer callback is not running. We
need to temporarily release worker->lock to avoid a possible deadlock
with the callback. In the meantime, we set the work->canceling counter
to block any queuing.

Fourth, we try to remove the work from a worker list. It might be on
either the list of normal works or the list of delayed works.

Fifth, if the work is running, we call kthread_flush_work(). It might
take an arbitrary amount of time. We need to release worker->lock again.
In the meantime, queuing is again blocked via the canceling counter.

As already mentioned, the check for a pending kthread work is done under
a lock. In comparison with workqueues, we do not need to fight for a
single PENDING bit to block other operations. Therefore we do not suffer
from the thundering herd problem, and all parallel canceling jobs may use
kthread_flush_work(). Any queuing is blocked until the counter drops to
zero.
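
For illustration, here is a minimal usage sketch (not part of this patch).
It assumes the kthread_create_worker() helper from patch 05/12 and the
delayed work API from patch 09/12 of this series, assumes that worker
creation returns an ERR_PTR on failure, and uses made-up hk_* names and a
one-second delay:

	/* Hypothetical consumer; all hk_* names are illustrative only. */
	static struct kthread_worker *hk_worker;
	static struct kthread_delayed_work hk_dwork;

	static void hk_fn(struct kthread_work *work)
	{
		/* Do the periodic housekeeping; it may re-queue itself. */
	}

	static int hk_start(void)
	{
		hk_worker = kthread_create_worker("hk_worker");
		if (IS_ERR(hk_worker))
			return PTR_ERR(hk_worker);

		kthread_init_delayed_work(&hk_dwork, hk_fn);
		kthread_queue_delayed_work(hk_worker, &hk_dwork, HZ);
		return 0;
	}

	static void hk_stop(void)
	{
		/*
		 * After this returns, hk_dwork is neither pending nor
		 * running, even if its timer was about to fire or the
		 * work kept re-queuing itself.
		 */
		kthread_cancel_delayed_work_sync(&hk_dwork);
		kthread_destroy_worker(hk_worker);
	}

The guarantee in hk_stop() comes from the steps above: the canceling
counter blocks any re-queuing while del_timer_sync() and
kthread_flush_work() wait for the timer callback and the work function
to finish.
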
Signed-off-by: Petr Mladek <pmladek@suse.com>
---
include/linux/kthread.h | 5 ++
kernel/kthread.c | 131 +++++++++++++++++++++++++++++++++++++++++++++++-
2 files changed, 134 insertions(+), 2 deletions(-)
diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index 24c434cd302d..95e329097c14 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -77,6 +77,8 @@ struct kthread_work {
struct list_head node;
kthread_work_func_t func;
struct kthread_worker *worker;
+ /* Number of canceling calls that are running at the moment. */
+ int canceling;
};
struct kthread_delayed_work {
@@ -170,6 +172,9 @@ void kthread_flush_work(struct kthread_work *work);
void kthread_flush_worker(struct kthread_worker *worker);
void kthread_drain_worker(struct kthread_worker *worker);
+bool kthread_cancel_work_sync(struct kthread_work *work);
+bool kthread_cancel_delayed_work_sync(struct kthread_delayed_work *work);
+
void kthread_destroy_worker(struct kthread_worker *worker);
#endif /* _LINUX_KTHREAD_H */
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 74b29b703ab8..f9d6a75779e3 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -714,6 +714,18 @@ kthread_create_worker_on_cpu(int cpu, const char namefmt[], ...)
}
EXPORT_SYMBOL(kthread_create_worker_on_cpu);
+/*
+ * Returns true when the work could not be queued at the moment.
+ * It happens when it is already pending in a worker list
+ * or when it is being canceled.
+ *
+ * This function must be called under work->worker->lock.
+ */
+static inline bool queuing_blocked(const struct kthread_work *work)
+{
+ return !list_empty(&work->node) || work->canceling;
+}
+
static void kthread_insert_work_sanity_check(struct kthread_worker *worker,
struct kthread_work *work)
{
@@ -755,7 +767,7 @@ bool kthread_queue_work(struct kthread_worker *worker,
unsigned long flags;
spin_lock_irqsave(&worker->lock, flags);
- if (list_empty(&work->node)) {
+ if (!queuing_blocked(work)) {
kthread_insert_work(worker, work, &worker->work_list);
ret = true;
}
@@ -855,7 +867,7 @@ bool kthread_queue_delayed_work(struct kthread_worker *worker,
spin_lock_irqsave(&worker->lock, flags);
- if (list_empty(&work->node)) {
+ if (!queuing_blocked(work)) {
__kthread_queue_delayed_work(worker, dwork, delay);
ret = true;
}
@@ -915,6 +927,121 @@ void kthread_flush_work(struct kthread_work *work)
}
EXPORT_SYMBOL_GPL(kthread_flush_work);
+/*
+ * This function removes the work from the worker queue. It also makes sure
+ * that it won't get queued later via the delayed work's timer.
+ *
+ * The work might still be in use when this function finishes. See the
+ * current_work processed by the worker.
+ *
+ * Return: %true if @work was pending and successfully canceled,
+ * %false if @work was not pending
+ */
+static bool __kthread_cancel_work(struct kthread_work *work, bool is_dwork,
+ unsigned long *flags)
+{
+ /* Try to cancel the timer if it exists. */
+ if (is_dwork) {
+ struct kthread_delayed_work *dwork =
+ container_of(work, struct kthread_delayed_work, work);
+ struct kthread_worker *worker = work->worker;
+
+ /*
+ * del_timer_sync() must be called to make sure that the timer
+ * callback is not running. The lock must be temporarily released
+ * to avoid a deadlock with the callback. In the meantime,
+ * any queuing is blocked by setting the canceling counter.
+ */
+ work->canceling++;
+ spin_unlock_irqrestore(&worker->lock, *flags);
+ del_timer_sync(&dwork->timer);
+ spin_lock_irqsave(&worker->lock, *flags);
+ work->canceling--;
+ }
+
+ /*
+ * Try to remove the work from a worker list. It might be on either
+ * worker->work_list or worker->delayed_work_list.
+ */
+ if (!list_empty(&work->node)) {
+ list_del_init(&work->node);
+ return true;
+ }
+
+ return false;
+}
+
+static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
+{
+ struct kthread_worker *worker = work->worker;
+ unsigned long flags;
+ int ret = false;
+
+ if (!worker)
+ goto out;
+
+ spin_lock_irqsave(&worker->lock, flags);
+ /* Work must not be used with >1 worker, see kthread_queue_work(). */
+ WARN_ON_ONCE(work->worker != worker);
+
+ ret = __kthread_cancel_work(work, is_dwork, &flags);
+
+ if (worker->current_work != work)
+ goto out_fast;
+
+ /*
+ * The work is in progress and we need to wait with the lock released.
+ * In the meantime, block any queuing by setting the canceling counter.
+ */
+ work->canceling++;
+ spin_unlock_irqrestore(&worker->lock, flags);
+ kthread_flush_work(work);
+ spin_lock_irqsave(&worker->lock, flags);
+ work->canceling--;
+
+out_fast:
+ spin_unlock_irqrestore(&worker->lock, flags);
+out:
+ return ret;
+}
+
+/**
+ * kthread_cancel_work_sync - cancel a kthread work and wait for it to finish
+ * @work: the kthread work to cancel
+ *
+ * Cancel @work and wait for its execution to finish. This function
+ * can be used even if the work re-queues itself. On return from this
+ * function, @work is guaranteed to be not pending or executing on any CPU.
+ *
+ * kthread_cancel_work_sync(&delayed_work->work) must not be used for
+ * delayed works. Use kthread_cancel_delayed_work_sync() instead.
+ *
+ * The caller must ensure that the worker on which @work was last
+ * queued can't be destroyed before this function returns.
+ *
+ * Return: %true if @work was pending, %false otherwise.
+ */
+bool kthread_cancel_work_sync(struct kthread_work *work)
+{
+ return __kthread_cancel_work_sync(work, false);
+}
+EXPORT_SYMBOL_GPL(kthread_cancel_work_sync);
+
+/**
+ * kthread_cancel_delayed_work_sync - cancel a kthread delayed work and
+ * wait for it to finish.
+ * @dwork: the kthread delayed work to cancel
+ *
+ * This is kthread_cancel_work_sync() for delayed works.
+ *
+ * Return: %true if @dwork was pending, %false otherwise.
+ */
+bool kthread_cancel_delayed_work_sync(struct kthread_delayed_work *dwork)
+{
+ return __kthread_cancel_work_sync(&dwork->work, true);
+}
+EXPORT_SYMBOL_GPL(kthread_cancel_delayed_work_sync);
+
/**
* kthread_flush_worker - flush all current works on a kthread_worker
* @worker: worker to flush
--
1.8.5.6