* [PATCH 16/32] scsi/be2iscsi,qla2xxx: convert to alloc_workqueue()
[not found] <1294062595-30097-1-git-send-email-tj@kernel.org>
@ 2011-01-03 13:49 ` Tejun Heo
2011-02-01 23:45 ` Mike Christie
2011-01-03 13:49 ` [PATCH 17/32] scsi/ibmvstgt: use system_wq instead of vtgtd workqueue Tejun Heo
` (2 subsequent siblings)
3 siblings, 1 reply; 19+ messages in thread
From: Tejun Heo @ 2011-01-03 13:49 UTC (permalink / raw)
To: linux-kernel
Cc: Tejun Heo, Jayamohan Kallickal, Andrew Vasquez,
James E.J. Bottomley, linux-scsi
Switch to the new workqueue interface, alloc_workqueue(). These are
identity conversions.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jayamohan Kallickal <jayamohank@serverengines.com>
Cc: Andrew Vasquez <andrew.vasquez@qlogic.com>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: linux-scsi@vger.kernel.org
---
Please feel free to take it into the subsystem tree or simply ack -
I'll route it through the wq tree.
Thanks.
drivers/scsi/be2iscsi/be_main.c | 2 +-
drivers/scsi/qla2xxx/qla_os.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c
index 75a85aa..4339196 100644
--- a/drivers/scsi/be2iscsi/be_main.c
+++ b/drivers/scsi/be2iscsi/be_main.c
@@ -4276,7 +4276,7 @@ static int __devinit beiscsi_dev_probe(struct pci_dev *pcidev,
snprintf(phba->wq_name, sizeof(phba->wq_name), "beiscsi_q_irq%u",
phba->shost->host_no);
- phba->wq = create_workqueue(phba->wq_name);
+ phba->wq = alloc_workqueue(phba->wq_name, WQ_MEM_RECLAIM, 1);
if (!phba->wq) {
shost_printk(KERN_ERR, phba->shost, "beiscsi_dev_probe-"
"Failed to allocate work queue\n");
diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
index 2c0876c..2cd0a77 100644
--- a/drivers/scsi/qla2xxx/qla_os.c
+++ b/drivers/scsi/qla2xxx/qla_os.c
@@ -356,7 +356,7 @@ static int qla25xx_setup_mode(struct scsi_qla_host *vha)
"Can't create request queue\n");
goto fail;
}
- ha->wq = create_workqueue("qla2xxx_wq");
+ ha->wq = alloc_workqueue("qla2xxx_wq", WQ_MEM_RECLAIM, 1);
vha->req = ha->req_q_map[req];
options |= BIT_1;
for (ques = 1; ques < ha->max_rsp_queues; ques++) {
--
1.7.1
* Re: [PATCH 16/32] scsi/be2iscsi,qla2xxx: convert to alloc_workqueue()
2011-01-03 13:49 ` [PATCH 16/32] scsi/be2iscsi,qla2xxx: convert to alloc_workqueue() Tejun Heo
@ 2011-02-01 23:45 ` Mike Christie
2011-02-02 10:25 ` Tejun Heo
0 siblings, 1 reply; 19+ messages in thread
From: Mike Christie @ 2011-02-01 23:45 UTC (permalink / raw)
To: Tejun Heo
Cc: linux-kernel, Jayamohan Kallickal, Andrew Vasquez,
James E.J. Bottomley, linux-scsi
On 01/03/2011 07:49 AM, Tejun Heo wrote:
> diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c
> index 75a85aa..4339196 100644
> --- a/drivers/scsi/be2iscsi/be_main.c
> +++ b/drivers/scsi/be2iscsi/be_main.c
> @@ -4276,7 +4276,7 @@ static int __devinit beiscsi_dev_probe(struct pci_dev *pcidev,
>
> snprintf(phba->wq_name, sizeof(phba->wq_name), "beiscsi_q_irq%u",
> phba->shost->host_no);
> - phba->wq = create_workqueue(phba->wq_name);
> + phba->wq = alloc_workqueue(phba->wq_name, WQ_MEM_RECLAIM, 1);
> if (!phba->wq) {
> shost_printk(KERN_ERR, phba->shost, "beiscsi_dev_probe-"
> "Failed to allocate work queue\n");
> diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
> index 2c0876c..2cd0a77 100644
> --- a/drivers/scsi/qla2xxx/qla_os.c
> +++ b/drivers/scsi/qla2xxx/qla_os.c
> @@ -356,7 +356,7 @@ static int qla25xx_setup_mode(struct scsi_qla_host *vha)
> "Can't create request queue\n");
> goto fail;
> }
> - ha->wq = create_workqueue("qla2xxx_wq");
> + ha->wq = alloc_workqueue("qla2xxx_wq", WQ_MEM_RECLAIM, 1);
> vha->req = ha->req_q_map[req];
> options |= BIT_1;
> for (ques = 1; ques < ha->max_rsp_queues; ques++) {
I think these are used in the main IO path, so would you also want
WQ_HIGHPRI | WQ_CPU_INTENSIVE to be set? I think qla2xxx was using this
in a path they expected to have high throughput/low latency, and from
the interrupt handler they would try to queue the work on the same cpu
the isr was answered on. With the new workqueue code could you possibly
get queued onto a workqueue that is doing other work for other drivers?
Should qla2xxx be using the blk_iopoll framework instead of a workqueue,
or will workqueues still provide the same performance when setting
WQ_HIGHPRI | WQ_CPU_INTENSIVE?
* Re: [PATCH 16/32] scsi/be2iscsi,qla2xxx: convert to alloc_workqueue()
2011-02-01 23:45 ` Mike Christie
@ 2011-02-02 10:25 ` Tejun Heo
2011-02-02 20:41 ` Mike Christie
0 siblings, 1 reply; 19+ messages in thread
From: Tejun Heo @ 2011-02-02 10:25 UTC (permalink / raw)
To: Mike Christie
Cc: linux-kernel, Jayamohan Kallickal, Andrew Vasquez,
James E.J. Bottomley, linux-scsi
Hello,
On Tue, Feb 01, 2011 at 05:45:13PM -0600, Mike Christie wrote:
> I think these are used in the main IO path, so would you also want
> WQ_HIGHPRI | WQ_CPU_INTENSIVE to be set?
Maybe HIGHPRI, but definitely not CPU_INTENSIVE, and even HIGHPRI isn't
usually necessary. The queue of pending work items tends to be short
and is consumed pretty fast.
> I think qla2xxx was using this in a path they expected to have high
> throughput/low latency, and from the interrupt handler they would
> try to queue the work on the same cpu the isr was answered on. With
> the new workqueue code could you possibly get queued onto a
> workqueue that is doing other work for other drivers?
There are no separate workqueues in that sense. There's one queue per
cpu which is shared by all workqueue users; HIGHPRI boosts the queueing
order in that queue.
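The flag usage under discussion can be sketched roughly as follows (illustrative only; the queue name and init function are hypothetical, not from any patch in this thread):

```c
#include <linux/errno.h>
#include <linux/workqueue.h>

/* Hypothetical driver init: WQ_HIGHPRI moves this queue's work items
 * ahead of normal-priority items on the shared per-cpu worker pool;
 * it does not create a separate set of worker threads. */
static struct workqueue_struct *example_wq;

static int example_init(void)
{
	example_wq = alloc_workqueue("example_wq",
				     WQ_MEM_RECLAIM | WQ_HIGHPRI, 1);
	if (!example_wq)
		return -ENOMEM;
	return 0;
}
```

This only compiles against a kernel tree; it is shown for the flag semantics, not as a drop-in change.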
> Should qla2xxx be using the blkiopoll framework instead of a
> workqueue or will workqueues still provide the same performance when
> setting WQ_HIGHPRI | WQ_CPU_INTENSIVE?
CPU_INTENSIVE isn't applicable at all unless you want to run crypto or
zlib from the driver.
blk_iopoll runs off softirq and will always have lower latency than a
workqueue, which requires scheduling, so it depends on the
requirements. I guess the best thing would be to play with both a bit
and decide based on test results. It shouldn't be too difficult to
switch between them.
Thanks.
--
tejun
* Re: [PATCH 16/32] scsi/be2iscsi,qla2xxx: convert to alloc_workqueue()
2011-02-02 10:25 ` Tejun Heo
@ 2011-02-02 20:41 ` Mike Christie
2011-02-03 9:31 ` Tejun Heo
0 siblings, 1 reply; 19+ messages in thread
From: Mike Christie @ 2011-02-02 20:41 UTC (permalink / raw)
To: Tejun Heo
Cc: linux-kernel, Jayamohan Kallickal, Andrew Vasquez,
James E.J. Bottomley, linux-scsi
On 02/02/2011 04:25 AM, Tejun Heo wrote:
> usually necessary. The queue of pending works tend to be short and
> consumed pretty fast.
>
What if we want to do something that could take a while, like recovery
of a device/transport (you have to send resets and logouts/logins and
wait for the results, which can take a while if they time out)? Should
we be using something other than a workqueue so it does not interfere
with other users?
* Re: [PATCH 16/32] scsi/be2iscsi,qla2xxx: convert to alloc_workqueue()
2011-02-02 20:41 ` Mike Christie
@ 2011-02-03 9:31 ` Tejun Heo
0 siblings, 0 replies; 19+ messages in thread
From: Tejun Heo @ 2011-02-03 9:31 UTC (permalink / raw)
To: Mike Christie
Cc: linux-kernel, Jayamohan Kallickal, Andrew Vasquez,
James E.J. Bottomley, linux-scsi
Hello, Mike.
On Wed, Feb 02, 2011 at 02:41:22PM -0600, Mike Christie wrote:
> On 02/02/2011 04:25 AM, Tejun Heo wrote:
> >usually necessary. The queue of pending works tend to be short and
> >consumed pretty fast.
>
> What if we want to do something that could take a while, like doing
> recovery of a device/transport (so you have to send resets and
> logouts/logins and wait for the results but they could take a while
> if they timeout), should we be using something other than a
> workqueue so it does not interfere with other users?
As long as you don't use an ordered workqueue, it won't be a problem.
The max concurrency is determined by the @max_active parameter to
alloc_workqueue(), and the workqueue will try to provide concurrency up
to that limit on demand, as long as resources are available.
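The @max_active behaviour described above can be sketched as follows (hypothetical names, not from any patch in this thread):

```c
#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *recovery_wq;

static int recovery_init(void)
{
	/* Up to 4 work items from this queue may execute concurrently;
	 * @max_active is the third argument to alloc_workqueue(). */
	recovery_wq = alloc_workqueue("recovery_wq", WQ_MEM_RECLAIM, 4);
	if (!recovery_wq)
		return -ENOMEM;
	return 0;
}

/* By contrast, an ordered workqueue executes items strictly one at a
 * time, so a long-running recovery item would hold up everything
 * queued behind it:
 *
 *	ordered_wq = alloc_ordered_workqueue("ordered_wq", 0);
 */
```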
Thanks.
--
tejun
* [PATCH 17/32] scsi/ibmvstgt: use system_wq instead of vtgtd workqueue
[not found] <1294062595-30097-1-git-send-email-tj@kernel.org>
2011-01-03 13:49 ` [PATCH 16/32] scsi/be2iscsi,qla2xxx: convert to alloc_workqueue() Tejun Heo
@ 2011-01-03 13:49 ` Tejun Heo
2011-01-03 17:45 ` Bart Van Assche
2011-01-24 16:09 ` Bart Van Assche
2011-01-03 13:49 ` [PATCH 18/32] scsi/scsi_tgt_lib: scsi_tgtd isn't used in memory reclaim path Tejun Heo
2011-01-25 14:29 ` [PATCHSET] workqueue: update workqueue users - replace create_workqueue() Tejun Heo
3 siblings, 2 replies; 19+ messages in thread
From: Tejun Heo @ 2011-01-03 13:49 UTC (permalink / raw)
To: linux-kernel; +Cc: Tejun Heo, FUJITA Tomonori, James E.J. Bottomley, linux-scsi
The target driver is not in the memory reclaim path and doesn't need a
dedicated workqueue. Drop vtgtd and use system_wq instead. The used
work item is sync flushed on removal.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: linux-scsi@vger.kernel.org
---
Only compile tested. Please feel free to take it into the subsystem
tree or simply ack - I'll route it through the wq tree.
Thanks.
drivers/scsi/ibmvscsi/ibmvstgt.c | 15 ++++-----------
1 files changed, 4 insertions(+), 11 deletions(-)
diff --git a/drivers/scsi/ibmvscsi/ibmvstgt.c b/drivers/scsi/ibmvscsi/ibmvstgt.c
index 2256bab..47fc632 100644
--- a/drivers/scsi/ibmvscsi/ibmvstgt.c
+++ b/drivers/scsi/ibmvscsi/ibmvstgt.c
@@ -74,7 +74,6 @@ struct vio_port {
struct srp_rport *rport;
};
-static struct workqueue_struct *vtgtd;
static struct scsi_transport_template *ibmvstgt_transport_template;
/*
@@ -546,7 +545,7 @@ static irqreturn_t ibmvstgt_interrupt(int dummy, void *data)
struct vio_port *vport = target_to_port(target);
vio_disable_interrupts(vport->dma_dev);
- queue_work(vtgtd, &vport->crq_work);
+ schedule_work(&vport->crq_work);
return IRQ_HANDLED;
}
@@ -900,6 +899,7 @@ static int ibmvstgt_remove(struct vio_dev *dev)
crq_queue_destroy(target);
srp_remove_host(shost);
scsi_remove_host(shost);
+ flush_work_sync(&vport->crq_work);
scsi_tgt_free_queue(shost);
srp_target_free(target);
kfree(vport);
@@ -967,21 +967,15 @@ static int __init ibmvstgt_init(void)
if (!ibmvstgt_transport_template)
return err;
- vtgtd = create_workqueue("ibmvtgtd");
- if (!vtgtd)
- goto release_transport;
-
err = get_system_info();
if (err)
- goto destroy_wq;
+ goto release_transport;
err = vio_register_driver(&ibmvstgt_driver);
if (err)
- goto destroy_wq;
+ goto release_transport;
return 0;
-destroy_wq:
- destroy_workqueue(vtgtd);
release_transport:
srp_release_transport(ibmvstgt_transport_template);
return err;
@@ -991,7 +985,6 @@ static void __exit ibmvstgt_exit(void)
{
printk("Unregister IBM virtual SCSI driver\n");
- destroy_workqueue(vtgtd);
vio_unregister_driver(&ibmvstgt_driver);
srp_release_transport(ibmvstgt_transport_template);
}
--
1.7.1
* Re: [PATCH 17/32] scsi/ibmvstgt: use system_wq instead of vtgtd workqueue
2011-01-03 13:49 ` [PATCH 17/32] scsi/ibmvstgt: use system_wq instead of vtgtd workqueue Tejun Heo
@ 2011-01-03 17:45 ` Bart Van Assche
2011-01-04 5:20 ` Tejun Heo
2011-01-24 16:09 ` Bart Van Assche
1 sibling, 1 reply; 19+ messages in thread
From: Bart Van Assche @ 2011-01-03 17:45 UTC (permalink / raw)
To: Tejun Heo; +Cc: linux-kernel, FUJITA Tomonori, James E.J. Bottomley, linux-scsi
On Mon, Jan 3, 2011 at 2:49 PM, Tejun Heo <tj@kernel.org> wrote:
> The target driver is not in the memory reclaim path and doesn't need a
> dedicated workqueue. Drop vtgtd and use system_wq instead. The used
> work item is sync flushed on removal.
[ ... ]
Hi Tejun,
No matter which storage target will go upstream, I'm afraid that this
patch will conflict with it. See e.g.
http://dchs.spinics.net/lists/linux-scsi/msg49105.html.
Bart.
* Re: [PATCH 17/32] scsi/ibmvstgt: use system_wq instead of vtgtd workqueue
2011-01-03 17:45 ` Bart Van Assche
@ 2011-01-04 5:20 ` Tejun Heo
0 siblings, 0 replies; 19+ messages in thread
From: Tejun Heo @ 2011-01-04 5:20 UTC (permalink / raw)
To: Bart Van Assche
Cc: linux-kernel, FUJITA Tomonori, James E.J. Bottomley, linux-scsi
Hello,
On Mon, Jan 03, 2011 at 06:45:33PM +0100, Bart Van Assche wrote:
> No matter which storage target will go upstream, I'm afraid that this
> patch will conflict with it. See e.g.
> http://dchs.spinics.net/lists/linux-scsi/msg49105.html.
Ah, okay. The full conversion is gonna span several devel cycles
anyway. I'll re-convert ibmvstgt when the dust settles down.
Thank you.
--
tejun
* Re: [PATCH 17/32] scsi/ibmvstgt: use system_wq instead of vtgtd workqueue
2011-01-03 13:49 ` [PATCH 17/32] scsi/ibmvstgt: use system_wq instead of vtgtd workqueue Tejun Heo
2011-01-03 17:45 ` Bart Van Assche
@ 2011-01-24 16:09 ` Bart Van Assche
2011-01-24 16:24 ` Tejun Heo
1 sibling, 1 reply; 19+ messages in thread
From: Bart Van Assche @ 2011-01-24 16:09 UTC (permalink / raw)
To: Tejun Heo
Cc: linux-kernel, FUJITA Tomonori, James E.J. Bottomley, linux-scsi,
Brian King, Robert Jennings
On Mon, Jan 3, 2011 at 2:49 PM, Tejun Heo <tj@kernel.org> wrote:
>
> The target driver is not in the memory reclaim path and doesn't need a
> dedicated workqueue. Drop vtgtd and use system_wq instead. The used
> work item is sync flushed on removal.
>
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
> Cc: linux-scsi@vger.kernel.org
> ---
> Only compile tested. Please feel free to take it into the subsystem
> tree or simply ack - I'll route it through the wq tree.
>
> Thanks.
>
> drivers/scsi/ibmvscsi/ibmvstgt.c | 15 ++++-----------
> 1 files changed, 4 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/scsi/ibmvscsi/ibmvstgt.c b/drivers/scsi/ibmvscsi/ibmvstgt.c
> index 2256bab..47fc632 100644
> --- a/drivers/scsi/ibmvscsi/ibmvstgt.c
> +++ b/drivers/scsi/ibmvscsi/ibmvstgt.c
> @@ -74,7 +74,6 @@ struct vio_port {
> struct srp_rport *rport;
> };
>
> -static struct workqueue_struct *vtgtd;
> static struct scsi_transport_template *ibmvstgt_transport_template;
>
> /*
> @@ -546,7 +545,7 @@ static irqreturn_t ibmvstgt_interrupt(int dummy, void *data)
> struct vio_port *vport = target_to_port(target);
>
> vio_disable_interrupts(vport->dma_dev);
> - queue_work(vtgtd, &vport->crq_work);
> + schedule_work(&vport->crq_work);
>
> return IRQ_HANDLED;
> }
> @@ -900,6 +899,7 @@ static int ibmvstgt_remove(struct vio_dev *dev)
> crq_queue_destroy(target);
> srp_remove_host(shost);
> scsi_remove_host(shost);
> + flush_work_sync(&vport->crq_work);
> scsi_tgt_free_queue(shost);
> srp_target_free(target);
> kfree(vport);
> @@ -967,21 +967,15 @@ static int __init ibmvstgt_init(void)
> if (!ibmvstgt_transport_template)
> return err;
>
> - vtgtd = create_workqueue("ibmvtgtd");
> - if (!vtgtd)
> - goto release_transport;
> -
> err = get_system_info();
> if (err)
> - goto destroy_wq;
> + goto release_transport;
>
> err = vio_register_driver(&ibmvstgt_driver);
> if (err)
> - goto destroy_wq;
> + goto release_transport;
>
> return 0;
> -destroy_wq:
> - destroy_workqueue(vtgtd);
> release_transport:
> srp_release_transport(ibmvstgt_transport_template);
> return err;
> @@ -991,7 +985,6 @@ static void __exit ibmvstgt_exit(void)
> {
> printk("Unregister IBM virtual SCSI driver\n");
>
> - destroy_workqueue(vtgtd);
> vio_unregister_driver(&ibmvstgt_driver);
> srp_release_transport(ibmvstgt_transport_template);
> }
(added Brian King and Robert Jennings in CC)
Hello Tejun,
Insertion of flush_work_sync() fixes a race - that's a good catch.
flush_work_sync() should be invoked a little earlier though because
the scheduled work may access the queue destroyed by the
crq_queue_destroy(target) call. And the CRQ interrupt should be
disabled from before flush_work_sync() is invoked until after the CRQ
has been destroyed.
Regarding the queue removal: I might have missed something, but why
would you like to remove the vtgtd work queue? Since the ibmvstgt
driver is a storage target driver, processing latency matters. I'm
afraid that switching from a dedicated queue to the global work queue
will increase processing latency.
Bart.
* Re: [PATCH 17/32] scsi/ibmvstgt: use system_wq instead of vtgtd workqueue
2011-01-24 16:09 ` Bart Van Assche
@ 2011-01-24 16:24 ` Tejun Heo
2011-02-01 10:40 ` Tejun Heo
0 siblings, 1 reply; 19+ messages in thread
From: Tejun Heo @ 2011-01-24 16:24 UTC (permalink / raw)
To: Bart Van Assche
Cc: linux-kernel, FUJITA Tomonori, James E.J. Bottomley, linux-scsi,
Brian King, Robert Jennings
Hello,
On Mon, Jan 24, 2011 at 05:09:18PM +0100, Bart Van Assche wrote:
> Insertion of flush_work_sync() fixes a race - that's a good catch.
> flush_work_sync() should be invoked a little earlier though because
> the scheduled work may access the queue destroyed by the
> crq_queue_destroy(target) call. And the CRQ interrupt should be
> disabled from before flush_work_sync() is invoked until after the CRQ
> has been destroyed.
Heh, I'm a bit out of my depth here. If you know what's necessary,
please go ahead and make the change.
> Regarding the queue removal: I might have missed something, but why
> would you like to remove the vtgtd work queue ? Since the ibmvstgt
> driver is a storage target driver, processing latency matters. I'm
> afraid that switching from a dedicated queue to the global work queue
> will increase processing latency.
Having a dedicated workqueue no longer makes any difference regarding
processing latency. Each workqueue is a mere frontend to the shared
worker pool anyway. Dedicated workqueues are now meaningful only as a
forward-progress guarantee, an attribute domain and/or a flush domain -
IOW, when the workqueue needs to be used during memory reclaim, when
the work items need specific attributes, or when a certain group of
work items needs to be flushed together. Apart from that, there's
virtually no difference between using system_wq and a dedicated one.
As using the system one is usually simpler, it's natural to do that.
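The choice described above can be sketched as follows (hypothetical helper, not from any patch in this thread):

```c
#include <linux/workqueue.h>

/* Both paths feed the same shared per-cpu worker pool, so per-item
 * latency is comparable; a dedicated queue buys only the reclaim
 * guarantee, special attributes, or a private flush domain. */
static void queue_it(struct workqueue_struct *wq, struct work_struct *w)
{
	if (wq)
		queue_work(wq, w);	/* dedicated queue, e.g. WQ_MEM_RECLAIM */
	else
		schedule_work(w);	/* system_wq; usually the simpler choice */
}
```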
Thank you.
--
tejun
* Re: [PATCH 17/32] scsi/ibmvstgt: use system_wq instead of vtgtd workqueue
2011-01-24 16:24 ` Tejun Heo
@ 2011-02-01 10:40 ` Tejun Heo
2011-02-01 14:18 ` FUJITA Tomonori
0 siblings, 1 reply; 19+ messages in thread
From: Tejun Heo @ 2011-02-01 10:40 UTC (permalink / raw)
To: Bart Van Assche
Cc: linux-kernel, FUJITA Tomonori, James E.J. Bottomley, linux-scsi,
Brian King, Robert Jennings
On Mon, Jan 24, 2011 at 05:24:14PM +0100, Tejun Heo wrote:
> Hello,
>
> On Mon, Jan 24, 2011 at 05:09:18PM +0100, Bart Van Assche wrote:
> > Insertion of flush_work_sync() fixes a race - that's a good catch.
> > flush_work_sync() should be invoked a little earlier though because
> > the scheduled work may access the queue destroyed by the
> > crq_queue_destroy(target) call. And the CRQ interrupt should be
> > disabled from before flush_work_sync() is invoked until after the CRQ
> > has been destroyed.
>
> Heh, I'm a bit out of my depth here. If you know what's necessary,
> please go ahead and make the change.
>
> > Regarding the queue removal: I might have missed something, but why
> > would you like to remove the vtgtd work queue ? Since the ibmvstgt
> > driver is a storage target driver, processing latency matters. I'm
> > afraid that switching from a dedicated queue to the global work queue
> > will increase processing latency.
>
> Having a dedicated workqueue no longer makes any difference regarding
> processing latency. Each workqueue is mere frontend to the shared
> worker pool anyway. Dedicated workqueues are now meaningful only as
> forward progress guarantee, attribute and/or flush domain - IOW, when
> the workqueue needs to be used during memory reclaim, the work items
> need to have specific attributes or certain group of work items need
> to be flushed together. Apart from that, there's virtually no
> difference between using the system_wq and a dedicated one. As using
> the system one is usually simpler, it's natural to do that.
Ping. Are you interested in doing the conversion?
Thanks.
--
tejun
* Re: [PATCH 17/32] scsi/ibmvstgt: use system_wq instead of vtgtd workqueue
2011-02-01 10:40 ` Tejun Heo
@ 2011-02-01 14:18 ` FUJITA Tomonori
2011-02-01 14:25 ` James Bottomley
2011-02-01 14:29 ` Tejun Heo
0 siblings, 2 replies; 19+ messages in thread
From: FUJITA Tomonori @ 2011-02-01 14:18 UTC (permalink / raw)
To: tj
Cc: bvanassche, linux-kernel, fujita.tomonori, James.Bottomley,
linux-scsi, brking, rcj
On Tue, 1 Feb 2011 11:40:43 +0100
Tejun Heo <tj@kernel.org> wrote:
> On Mon, Jan 24, 2011 at 05:24:14PM +0100, Tejun Heo wrote:
> > Hello,
> >
> > On Mon, Jan 24, 2011 at 05:09:18PM +0100, Bart Van Assche wrote:
> > > Insertion of flush_work_sync() fixes a race - that's a good catch.
> > > flush_work_sync() should be invoked a little earlier though because
> > > the scheduled work may access the queue destroyed by the
> > > crq_queue_destroy(target) call. And the CRQ interrupt should be
> > > disabled from before flush_work_sync() is invoked until after the CRQ
> > > has been destroyed.
> >
> > Heh, I'm a bit out of my depth here. If you know what's necessary,
> > please go ahead and make the change.
> >
> > > Regarding the queue removal: I might have missed something, but why
> > > would you like to remove the vtgtd work queue ? Since the ibmvstgt
> > > driver is a storage target driver, processing latency matters. I'm
> > > afraid that switching from a dedicated queue to the global work queue
> > > will increase processing latency.
> >
> > Having a dedicated workqueue no longer makes any difference regarding
> > processing latency. Each workqueue is mere frontend to the shared
> > worker pool anyway. Dedicated workqueues are now meaningful only as
> > forward progress guarantee, attribute and/or flush domain - IOW, when
> > the workqueue needs to be used during memory reclaim, the work items
> > need to have specific attributes or certain group of work items need
> > to be flushed together. Apart from that, there's virtually no
> > difference between using the system_wq and a dedicated one. As using
> > the system one is usually simpler, it's natural to do that.
>
> Ping. Are you interested in doing the conversion?
FYI, this driver will be replaced shortly. I now have a working
ibmvscsis driver for the new target framework and will submit it this
week. So this driver will be removed sooner or later (if James prefers
to go through the proper Documentation/feature-removal-schedule.txt
process, it'll be around for some time). You could leave this one
alone, I guess.
* Re: [PATCH 17/32] scsi/ibmvstgt: use system_wq instead of vtgtd workqueue
2011-02-01 14:18 ` FUJITA Tomonori
@ 2011-02-01 14:25 ` James Bottomley
2011-02-01 14:29 ` Tejun Heo
1 sibling, 0 replies; 19+ messages in thread
From: James Bottomley @ 2011-02-01 14:25 UTC (permalink / raw)
To: FUJITA Tomonori; +Cc: tj, bvanassche, linux-kernel, linux-scsi, brking, rcj
On Tue, 2011-02-01 at 23:18 +0900, FUJITA Tomonori wrote:
> On Tue, 1 Feb 2011 11:40:43 +0100
> Tejun Heo <tj@kernel.org> wrote:
>
> > On Mon, Jan 24, 2011 at 05:24:14PM +0100, Tejun Heo wrote:
> > > Hello,
> > >
> > > On Mon, Jan 24, 2011 at 05:09:18PM +0100, Bart Van Assche wrote:
> > > > Insertion of flush_work_sync() fixes a race - that's a good catch.
> > > > flush_work_sync() should be invoked a little earlier though because
> > > > the scheduled work may access the queue destroyed by the
> > > > crq_queue_destroy(target) call. And the CRQ interrupt should be
> > > > disabled from before flush_work_sync() is invoked until after the CRQ
> > > > has been destroyed.
> > >
> > > Heh, I'm a bit out of my depth here. If you know what's necessary,
> > > please go ahead and make the change.
> > >
> > > > Regarding the queue removal: I might have missed something, but why
> > > > would you like to remove the vtgtd work queue ? Since the ibmvstgt
> > > > driver is a storage target driver, processing latency matters. I'm
> > > > afraid that switching from a dedicated queue to the global work queue
> > > > will increase processing latency.
> > >
> > > Having a dedicated workqueue no longer makes any difference regarding
> > > processing latency. Each workqueue is mere frontend to the shared
> > > worker pool anyway. Dedicated workqueues are now meaningful only as
> > > forward progress guarantee, attribute and/or flush domain - IOW, when
> > > the workqueue needs to be used during memory reclaim, the work items
> > > need to have specific attributes or certain group of work items need
> > > to be flushed together. Apart from that, there's virtually no
> > > difference between using the system_wq and a dedicated one. As using
> > > the system one is usually simpler, it's natural to do that.
> >
> > Ping. Are you interested in doing the conversion?
>
> FYI, this driver will be replaced shortly. Now I have the working
> ibmvscsis driver for the new target framework. I'll submit it this
> week. So this driver will be removed soon or later (if James prefer to
> go through the proper Documentation/feature-removal-schedule.txt
> process, it'll be for some time). You could leave this alone, I guess.
Whatever works for you is fine by me. I don't think we need to go
through feature removal since we're not technically removing the
feature.
James
* Re: [PATCH 17/32] scsi/ibmvstgt: use system_wq instead of vtgtd workqueue
2011-02-01 14:18 ` FUJITA Tomonori
2011-02-01 14:25 ` James Bottomley
@ 2011-02-01 14:29 ` Tejun Heo
1 sibling, 0 replies; 19+ messages in thread
From: Tejun Heo @ 2011-02-01 14:29 UTC (permalink / raw)
To: FUJITA Tomonori
Cc: bvanassche, linux-kernel, James.Bottomley, linux-scsi, brking,
rcj
On Tue, Feb 01, 2011 at 11:18:54PM +0900, FUJITA Tomonori wrote:
> > Ping. Are you interested in doing the conversion?
>
> FYI, this driver will be replaced shortly. Now I have the working
> ibmvscsis driver for the new target framework. I'll submit it this
> week. So this driver will be removed soon or later (if James prefer to
> go through the proper Documentation/feature-removal-schedule.txt
> process, it'll be for some time). You could leave this alone, I guess.
I see. I'll drop the patch then. Please keep in mind that I'm
planning on marking flush_scheduled_work() deprecated in linux-next in
a month or so. create_singlethread_workqueue() will follow in the
next cycle, and create_workqueue() the cycle after that.
Thank you.
--
tejun
* [PATCH 18/32] scsi/scsi_tgt_lib: scsi_tgtd isn't used in memory reclaim path
[not found] <1294062595-30097-1-git-send-email-tj@kernel.org>
2011-01-03 13:49 ` [PATCH 16/32] scsi/be2iscsi,qla2xxx: convert to alloc_workqueue() Tejun Heo
2011-01-03 13:49 ` [PATCH 17/32] scsi/ibmvstgt: use system_wq instead of vtgtd workqueue Tejun Heo
@ 2011-01-03 13:49 ` Tejun Heo
2011-01-25 14:29 ` [PATCHSET] workqueue: update workqueue users - replace create_workqueue() Tejun Heo
3 siblings, 0 replies; 19+ messages in thread
From: Tejun Heo @ 2011-01-03 13:49 UTC (permalink / raw)
To: linux-kernel; +Cc: Tejun Heo, FUJITA Tomonori, James E.J. Bottomley, linux-scsi
Workqueue scsi_tgtd isn't used during memory reclaim. Convert to
alloc_workqueue() without WQ_MEM_RECLAIM.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: FUJITA Tomonori <tomof@acm.org>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: linux-scsi@vger.kernel.org
---
Only compile tested. Please feel free to take it into the subsystem
tree or simply ack - I'll route it through the wq tree.
Thanks.
drivers/scsi/scsi_tgt_lib.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/drivers/scsi/scsi_tgt_lib.c b/drivers/scsi/scsi_tgt_lib.c
index c399be9..f672820 100644
--- a/drivers/scsi/scsi_tgt_lib.c
+++ b/drivers/scsi/scsi_tgt_lib.c
@@ -629,7 +629,7 @@ static int __init scsi_tgt_init(void)
if (!scsi_tgt_cmd_cache)
return -ENOMEM;
- scsi_tgtd = create_workqueue("scsi_tgtd");
+ scsi_tgtd = alloc_workqueue("scsi_tgtd", 0, 1);
if (!scsi_tgtd) {
err = -ENOMEM;
goto free_kmemcache;
--
1.7.1
* Re: [PATCHSET] workqueue: update workqueue users - replace create_workqueue()
[not found] <1294062595-30097-1-git-send-email-tj@kernel.org>
` (2 preceding siblings ...)
2011-01-03 13:49 ` [PATCH 18/32] scsi/scsi_tgt_lib: scsi_tgtd isn't used in memory reclaim path Tejun Heo
@ 2011-01-25 14:29 ` Tejun Heo
2011-01-25 17:47 ` Madhu Iyengar
` (2 more replies)
3 siblings, 3 replies; 19+ messages in thread
From: Tejun Heo @ 2011-01-25 14:29 UTC (permalink / raw)
To: linux-kernel
Cc: lenb, linux-acpi, davej, cpufreq, Markus.Lidel, tomas.winkler,
jayamohank, andrew.vasquez, James.Bottomley, linux-scsi, tomof,
stf_xl, linux-usb, tytso, mfasheh, joel.becker, reiserfs-devel,
aelder, hch, ericvh, rminnich, v9fs-developer, lucho, andy.grover
Hello,
There was no response for the following patches and I'm planning on
routing them through workqueue#for-2.6.39. If you have an objection
or want to take the patch through a different tree, please let me
know.
0006-acpi-kacpi-_wq-don-t-need-WQ_MEM_RECLAIM.patch
0007-cpufreq-use-system_wq-instead-of-dedicated-workqueue.patch
0012-i2o-use-alloc_workqueue-instead-of-create_workqueue.patch
0013-misc-iwmc3200top-use-system_wq-instead-of-dedicated-.patch
0016-scsi-be2iscsi-qla2xxx-convert-to-alloc_workqueue.patch
0018-scsi-scsi_tgt_lib-scsi_tgtd-isn-t-used-in-memory-rec.patch
0019-usb-ueagle-atm-use-system_wq-instead-of-dedicated-wo.patch
0025-ext4-convert-to-alloc_workqueue.patch
0026-ocfs2-use-system_wq-instead-of-ocfs2_quota_wq.patch
0027-reiserfs-make-commit_wq-use-the-default-concurrency-.patch
0028-xfs-convert-to-alloc_workqueue.patch
0029-net-9p-use-system_wq-instead-of-p9_mux_wq.patch
0030-net-9p-replace-p9_poll_task-with-a-work.patch
0031-rds-ib-use-system_wq-instead-of-rds_ib_fmr_wq.patch
The whole series can be found at the following URL.
http://thread.gmane.org/gmane.linux.kernel/1082587
Thank you.
--
tejun
* RE: [PATCHSET] workqueue: update workqueue users - replace create_workqueue()
2011-01-25 14:29 ` [PATCHSET] workqueue: update workqueue users - replace create_workqueue() Tejun Heo
@ 2011-01-25 17:47 ` Madhu Iyengar
2011-01-26 4:46 ` Greg KH
2011-02-01 10:44 ` Tejun Heo
2 siblings, 0 replies; 19+ messages in thread
From: Madhu Iyengar @ 2011-01-25 17:47 UTC (permalink / raw)
To: Tejun Heo
Cc: Andrew Vasquez, linux-scsi@vger.kernel.org, Giridhar Malavali,
Madhu Iyengar
Tejun,
Here's my ack for patch #16 from the QLogic side:
Acked-by: Madhuranath Iyengar <Madhu.Iyengar@qlogic.com>
Cheers,
Madhu
-----Original Message-----
From: linux-scsi-owner@vger.kernel.org [mailto:linux-scsi-owner@vger.kernel.org] On Behalf Of Tejun Heo
Sent: Tuesday, January 25, 2011 6:29 AM
To: linux-kernel@vger.kernel.org
Cc: lenb@kernel.org; linux-acpi@vger.kernel.org; davej@redhat.com; cpufreq@vger.kernel.org; Markus.Lidel@shadowconnect.com; tomas.winkler@intel.com; jayamohank@serverengines.com; Andrew Vasquez; James.Bottomley@HansenPartnership.com; linux-scsi@vger.kernel.org; tomof@acm.org; stf_xl@wp.pl; linux-usb@vger.kernel.org; tytso@mit.edu; mfasheh@suse.com; joel.becker@oracle.com; reiserfs-devel@vger.kernel.org; aelder@sgi.com; hch@infradead.org; ericvh@gmail.com; rminnich@sandia.gov; v9fs-developer@lists.sourceforge.net; lucho@ionkov.net; andy.grover@oracle.com
Subject: Re: [PATCHSET] workqueue: update workqueue users - replace create_workqueue()
Hello,
There was no response for the following patches and I'm planning on
routing them through workqueue#for-2.6.39. If you have an objection
or want to take the patch through a different tree, please let me
know.
0006-acpi-kacpi-_wq-don-t-need-WQ_MEM_RECLAIM.patch
0007-cpufreq-use-system_wq-instead-of-dedicated-workqueue.patch
0012-i2o-use-alloc_workqueue-instead-of-create_workqueue.patch
0013-misc-iwmc3200top-use-system_wq-instead-of-dedicated-.patch
0016-scsi-be2iscsi-qla2xxx-convert-to-alloc_workqueue.patch
0018-scsi-scsi_tgt_lib-scsi_tgtd-isn-t-used-in-memory-rec.patch
0019-usb-ueagle-atm-use-system_wq-instead-of-dedicated-wo.patch
0025-ext4-convert-to-alloc_workqueue.patch
0026-ocfs2-use-system_wq-instead-of-ocfs2_quota_wq.patch
0027-reiserfs-make-commit_wq-use-the-default-concurrency-.patch
0028-xfs-convert-to-alloc_workqueue.patch
0029-net-9p-use-system_wq-instead-of-p9_mux_wq.patch
0030-net-9p-replace-p9_poll_task-with-a-work.patch
0031-rds-ib-use-system_wq-instead-of-rds_ib_fmr_wq.patch
The whole series can be found at the following URL.
http://thread.gmane.org/gmane.linux.kernel/1082587
Thank you.
--
tejun
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCHSET] workqueue: update workqueue users - replace create_workqueue()
2011-01-25 14:29 ` [PATCHSET] workqueue: update workqueue users - replace create_workqueue() Tejun Heo
2011-01-25 17:47 ` Madhu Iyengar
@ 2011-01-26 4:46 ` Greg KH
2011-02-01 10:44 ` Tejun Heo
2 siblings, 0 replies; 19+ messages in thread
From: Greg KH @ 2011-01-26 4:46 UTC (permalink / raw)
To: Tejun Heo
Cc: linux-kernel, lenb, linux-acpi, davej, cpufreq, Markus.Lidel,
tomas.winkler, jayamohank, andrew.vasquez, James.Bottomley,
linux-scsi, tomof, stf_xl, linux-usb, tytso, mfasheh, joel.becker,
reiserfs-devel, aelder, hch, ericvh, rminnich, v9fs-developer,
lucho, andy.grover
On Tue, Jan 25, 2011 at 03:29:21PM +0100, Tejun Heo wrote:
> Hello,
>
> There was no response for the following patches and I'm planning on
> routing them through workqueue#for-2.6.39. If you have an objection
> or want to take the patch through a different tree, please let me
> know.
>
> 0019-usb-ueagle-atm-use-system_wq-instead-of-dedicated-wo.patch
I've already taken this in my usb-next tree to go into .39.
thanks,
greg k-h
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCHSET] workqueue: update workqueue users - replace create_workqueue()
2011-01-25 14:29 ` [PATCHSET] workqueue: update workqueue users - replace create_workqueue() Tejun Heo
2011-01-25 17:47 ` Madhu Iyengar
2011-01-26 4:46 ` Greg KH
@ 2011-02-01 10:44 ` Tejun Heo
2 siblings, 0 replies; 19+ messages in thread
From: Tejun Heo @ 2011-02-01 10:44 UTC (permalink / raw)
To: linux-kernel
Cc: lenb, linux-acpi, davej, cpufreq, Markus.Lidel, tomas.winkler,
jayamohank, andrew.vasquez, James.Bottomley, linux-scsi, tomof,
stf_xl, linux-usb, tytso, mfasheh, joel.becker, reiserfs-devel,
aelder, hch, ericvh, rminnich, v9fs-developer, lucho, andy.grover
On Tue, Jan 25, 2011 at 03:29:21PM +0100, Tejun Heo wrote:
> Hello,
>
> There was no response for the following patches and I'm planning on
> routing them through workqueue#for-2.6.39. If you have an objection
> or want to take the patch through a different tree, please let me
> know.
>
> 0006-acpi-kacpi-_wq-don-t-need-WQ_MEM_RECLAIM.patch
> 0007-cpufreq-use-system_wq-instead-of-dedicated-workqueue.patch
> 0012-i2o-use-alloc_workqueue-instead-of-create_workqueue.patch
> 0013-misc-iwmc3200top-use-system_wq-instead-of-dedicated-.patch
> 0016-scsi-be2iscsi-qla2xxx-convert-to-alloc_workqueue.patch
> 0018-scsi-scsi_tgt_lib-scsi_tgtd-isn-t-used-in-memory-rec.patch
> 0019-usb-ueagle-atm-use-system_wq-instead-of-dedicated-wo.patch
> 0025-ext4-convert-to-alloc_workqueue.patch
> 0026-ocfs2-use-system_wq-instead-of-ocfs2_quota_wq.patch
> 0027-reiserfs-make-commit_wq-use-the-default-concurrency-.patch
> 0028-xfs-convert-to-alloc_workqueue.patch
> 0029-net-9p-use-system_wq-instead-of-p9_mux_wq.patch
> 0030-net-9p-replace-p9_poll_task-with-a-work.patch
> 0031-rds-ib-use-system_wq-instead-of-rds_ib_fmr_wq.patch
Patches applied to wq#for-2.6.39 w/ collected acks added.
Thank you.
--
tejun
^ permalink raw reply [flat|nested] 19+ messages in thread