From: Tejun Heo <tj@kernel.org>
To: linux-scsi@vger.kernel.org, James.Bottomley@suse.de,
fujita.tomonori@lab.ntt.co.jp, linux-kernel@vger.kernel.org,
Eric.Moore@lsi.com, dgilbert@interlog.com
Cc: Tejun Heo <tj@kernel.org>
Subject: [PATCH 3/6] fcoe: use dedicated workqueue instead of system_wq
Date: Tue, 21 Dec 2010 16:01:58 +0100
Message-ID: <1292943721-4519-4-git-send-email-tj@kernel.org>
In-Reply-To: <1292943721-4519-1-git-send-email-tj@kernel.org>
fcoe uses the system_wq to destroy ports, and those work items need
to be flushed before the driver is unloaded. As the work items free
their containing data structure, they can't be flushed directly; the
workqueue they were queued on should be flushed instead.
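As a minimal sketch of that constraint (hypothetical names, not the
actual fcoe structures): the work item is embedded in the object it
tears down, so once the handler has run, the work_struct itself has
been freed and flushing it directly would touch freed memory.

	struct port_obj {
		struct work_struct destroy_work;
		/* ... remaining port state ... */
	};

	static void port_destroy_workfn(struct work_struct *work)
	{
		struct port_obj *p = container_of(work, struct port_obj,
						  destroy_work);

		/* tear the port down, then free the containing object;
		 * the embedded work_struct is freed along with it */
		kfree(p);
	}

Flushing the workqueue the item was queued on waits for such handlers
to complete without dereferencing the freed items.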
Also, the destruction works can be chained - i.e. destruction of a
port may lead to destruction of another port, with the work item for
the former queueing the work for the latter. Currently, the depth of
the chain is at most two, and fcoe_exit() makes sure everything has
completed by calling flush_scheduled_work() twice.
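In sketch form (again with hypothetical names), the chaining means a
single flush only guarantees that the first level has run:

	static void nport_destroy_workfn(struct work_struct *work)
	{
		struct nport_obj *np = container_of(work, struct nport_obj,
						    destroy_work);
		struct vnport_obj *vn, *tmp;

		/* destroying the N_Port queues destruction of each NPIV
		 * VN_Port hanging off it - a second level of works,
		 * hence the second flush_scheduled_work() */
		list_for_each_entry_safe(vn, tmp, &np->vn_ports, list)
			schedule_work(&vn->destroy_work);

		kfree(np);
	}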
With commit c8efcc25 (workqueue: allow chained queueing during
destruction), destroy_workqueue() can take care of chained works
during workqueue destruction. Add a dedicated workqueue, fcoe_wq, and
use it instead. Simply destroying fcoe_wq on driver unload then takes
care of the flushing.
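The resulting lifecycle, as implemented by the patch below, is the
standard dedicated-workqueue pattern:

	/* module init */
	fcoe_wq = alloc_workqueue("fcoe", 0, 0);
	if (!fcoe_wq)
		return -ENOMEM;

	/* wherever a port is torn down */
	queue_work(fcoe_wq, &port->destroy_work);

	/* module exit: flushes pending works, chained ones included */
	destroy_workqueue(fcoe_wq);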
Signed-off-by: Tejun Heo <tj@kernel.org>
---
drivers/scsi/fcoe/fcoe.c | 25 ++++++++++++++++---------
1 files changed, 16 insertions(+), 9 deletions(-)
diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index 9f9600b..e2f5820 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -31,6 +31,7 @@
#include <linux/fs.h>
#include <linux/sysfs.h>
#include <linux/ctype.h>
+#include <linux/workqueue.h>
#include <scsi/scsi_tcq.h>
#include <scsi/scsicam.h>
#include <scsi/scsi_transport.h>
@@ -58,6 +59,8 @@ MODULE_PARM_DESC(ddp_min, "Minimum I/O size in bytes for " \
DEFINE_MUTEX(fcoe_config_mutex);
+static struct workqueue_struct *fcoe_wq;
+
/* fcoe_percpu_clean completion. Waiter protected by fcoe_create_mutex */
static DECLARE_COMPLETION(fcoe_flush_completion);
@@ -1874,7 +1877,7 @@ static int fcoe_device_notification(struct notifier_block *notifier,
list_del(&fcoe->list);
port = lport_priv(fcoe->ctlr.lp);
fcoe_interface_cleanup(fcoe);
- schedule_work(&port->destroy_work);
+ queue_work(fcoe_wq, &port->destroy_work);
goto out;
break;
case NETDEV_FEAT_CHANGE:
@@ -2412,6 +2415,10 @@ static int __init fcoe_init(void)
unsigned int cpu;
int rc = 0;
+ fcoe_wq = alloc_workqueue("fcoe", 0, 0);
+ if (!fcoe_wq)
+ return -ENOMEM;
+
mutex_lock(&fcoe_config_mutex);
for_each_possible_cpu(cpu) {
@@ -2442,6 +2449,7 @@ out_free:
fcoe_percpu_thread_destroy(cpu);
}
mutex_unlock(&fcoe_config_mutex);
+ destroy_workqueue(fcoe_wq);
return rc;
}
module_init(fcoe_init);
@@ -2467,7 +2475,7 @@ static void __exit fcoe_exit(void)
list_del(&fcoe->list);
port = lport_priv(fcoe->ctlr.lp);
fcoe_interface_cleanup(fcoe);
- schedule_work(&port->destroy_work);
+ queue_work(fcoe_wq, &port->destroy_work);
}
rtnl_unlock();
@@ -2478,12 +2486,11 @@ static void __exit fcoe_exit(void)
mutex_unlock(&fcoe_config_mutex);
- /* flush any asyncronous interface destroys,
- * this should happen after the netdev notifier is unregistered */
- flush_scheduled_work();
- /* That will flush out all the N_Ports on the hostlist, but now we
- * may have NPIV VN_Ports scheduled for destruction */
- flush_scheduled_work();
+ /*
+ * destroy_work's may be chained but destroy_workqueue() can take
+ * care of them. Just kill the wq.
+ */
+ destroy_workqueue(fcoe_wq);
/* detach from scsi transport
* must happen after all destroys are done, therefor after the flush */
@@ -2632,7 +2639,7 @@ static int fcoe_vport_destroy(struct fc_vport *vport)
mutex_lock(&n_port->lp_mutex);
list_del(&vn_port->list);
mutex_unlock(&n_port->lp_mutex);
- schedule_work(&port->destroy_work);
+ queue_work(fcoe_wq, &port->destroy_work);
return 0;
}
--
1.7.1