* [PATCH v1 0/5] block: blk-mq: support blk_cleanup_queue on mq
From: Ming Lei @ 2013-12-26 13:31 UTC
To: Jens Axboe, linux-kernel; +Cc: Christoph Hellwig
Hi,
The 1st patch moves request initialization out of the completion handler.
The following two patches support draining/syncing the mq queue in
blk_cleanup_queue().
The 4th patch calls blk_cleanup_queue() in the device removal path
to fix a queue leak.
The 5th patch stops exporting blk_mq_free_queue(), because the function
is called from the release handler of the queue kobject, so drivers
need not call it.
V1:
- add patch 1/5
- add comments on blk_execute_rq_nowait 2/5
- set QUEUE_FLAG_DEAD for MQ 2/5
- use __blk_mq_drain_queue() helper 2/5
block/blk-core.c | 21 ++++++++++++--
block/blk-exec.c | 4 +++
block/blk-mq.c | 71 +++++++++++++++++++++++-----------------------
block/blk-mq.h | 2 ++
block/blk-sysfs.c | 1 +
drivers/block/null_blk.c | 10 ++-----
include/linux/blk-mq.h | 1 -
7 files changed, 63 insertions(+), 47 deletions(-)
Thanks,
--
Ming Lei
* [PATCH v1 1/5] block: blk-mq: avoid initializing request during its completion
From: Ming Lei @ 2013-12-26 13:31 UTC
To: Jens Axboe, linux-kernel; +Cc: Christoph Hellwig, Ming Lei
One problem is that request->start_time/start_time_ns can end up
wrong: they are stamped when the request is freed, so a reused
request carries a stale timestamp from before it was actually
allocated.
Also, it is normal to initialize a data structure right after its
allocation.
So move the initialization out of the completion path.
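For illustration only, here is a hypothetical userspace sketch of the
stale-timestamp effect (plain C; struct req and rq_init() are made up
for this example, not blk-mq code):

        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>

        /* made-up stand-ins for a request and blk_rq_init() */
        struct req {
                time_t start_time;
        };

        static void rq_init(struct req *rq)
        {
                rq->start_time = time(NULL);
        }

        int main(void)
        {
                struct req rq;

                /* old scheme: the request is (re)initialized when freed */
                rq_init(&rq);           /* free path stamps start_time */
                sleep(2);               /* request sits idle in the pool */
                /* allocation reuses rq without re-init: stale timestamp */
                printf("stale start time, latency inflated by %lds\n",
                       (long)(time(NULL) - rq.start_time));

                /* new scheme: initialize right after allocation */
                rq_init(&rq);
                printf("fresh start time, inflation now %lds\n",
                       (long)(time(NULL) - rq.start_time));
                return 0;
        }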
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
block/blk-mq.c | 26 +++++++++-----------------
1 file changed, 9 insertions(+), 17 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 53dc9f7..35ae189 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -82,6 +82,7 @@ static struct request *blk_mq_alloc_rq(struct blk_mq_hw_ctx *hctx, gfp_t gfp,
tag = blk_mq_get_tag(hctx->tags, gfp, reserved);
if (tag != BLK_MQ_TAG_FAIL) {
rq = hctx->rqs[tag];
+ blk_rq_init(hctx->queue, rq);
rq->tag = tag;
return rq;
@@ -169,9 +170,13 @@ bool blk_mq_can_queue(struct blk_mq_hw_ctx *hctx)
}
EXPORT_SYMBOL(blk_mq_can_queue);
-static void blk_mq_rq_ctx_init(struct request_queue *q, struct blk_mq_ctx *ctx,
- struct request *rq, unsigned int rw_flags)
+static void blk_mq_rq_init(struct request_queue *q, struct blk_mq_ctx *ctx,
+ struct blk_mq_hw_ctx *hctx, struct request *rq,
+ unsigned int rw_flags)
{
+ if (hctx->cmd_size)
+ rq->special = blk_mq_rq_to_pdu(rq);
+
if (blk_queue_io_stat(q))
rw_flags |= REQ_IO_STAT;
@@ -198,7 +203,7 @@ static struct request *blk_mq_alloc_request_pinned(struct request_queue *q,
rq = __blk_mq_alloc_request(hctx, gfp & ~__GFP_WAIT, reserved);
if (rq) {
- blk_mq_rq_ctx_init(q, ctx, rq, rw);
+ blk_mq_rq_init(q, ctx, hctx, rq, rw);
break;
}
@@ -242,24 +247,12 @@ struct request *blk_mq_alloc_reserved_request(struct request_queue *q, int rw,
}
EXPORT_SYMBOL(blk_mq_alloc_reserved_request);
-/*
- * Re-init and set pdu, if we have it
- */
-static void blk_mq_rq_init(struct blk_mq_hw_ctx *hctx, struct request *rq)
-{
- blk_rq_init(hctx->queue, rq);
-
- if (hctx->cmd_size)
- rq->special = blk_mq_rq_to_pdu(rq);
-}
-
static void __blk_mq_free_request(struct blk_mq_hw_ctx *hctx,
struct blk_mq_ctx *ctx, struct request *rq)
{
const int tag = rq->tag;
struct request_queue *q = rq->q;
- blk_mq_rq_init(hctx, rq);
blk_mq_put_tag(hctx->tags, tag);
blk_mq_queue_exit(q);
@@ -889,7 +882,7 @@ static void blk_mq_make_request(struct request_queue *q, struct bio *bio)
trace_block_getrq(q, bio, rw);
rq = __blk_mq_alloc_request(hctx, GFP_ATOMIC, false);
if (likely(rq))
- blk_mq_rq_ctx_init(q, ctx, rq, rw);
+ blk_mq_rq_init(q, ctx, hctx, rq, rw);
else {
blk_mq_put_ctx(ctx);
trace_block_sleeprq(q, bio, rw);
@@ -1123,7 +1116,6 @@ static int blk_mq_init_rq_map(struct blk_mq_hw_ctx *hctx,
left -= to_do * rq_size;
for (j = 0; j < to_do; j++) {
hctx->rqs[i] = p;
- blk_mq_rq_init(hctx, hctx->rqs[i]);
p += rq_size;
i++;
}
--
1.7.9.5
* [PATCH v1 2/5] block: blk-mq: support draining mq queue
From: Ming Lei @ 2013-12-26 13:31 UTC
To: Jens Axboe, linux-kernel; +Cc: Christoph Hellwig, Ming Lei
blk_mq_drain_queue() is introduced so that we can drain the
mq queue inside blk_cleanup_queue().
Also, don't accept new requests any more once the queue is marked
as dying.
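As a rough userspace model of the drain pattern used by the new helper
below (pthread-based; all names here are made up, and the real helper
additionally sums a percpu usage counter under the queue lock and
re-runs the hw queues between sleeps):

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>
        #include <unistd.h>

        static atomic_long inflight = 3;  /* pretend 3 requests in flight */

        /* completion side: in-flight requests finish over time */
        static void *complete_requests(void *arg)
        {
                (void)arg;
                while (atomic_load(&inflight) > 0) {
                        usleep(20000);
                        atomic_fetch_sub(&inflight, 1);
                }
                return NULL;
        }

        /* drain side: poll until the usage count reaches zero */
        static void drain_queue(void)
        {
                while (atomic_load(&inflight) != 0)
                        usleep(10000);  /* ~msleep(10) in the kernel */
        }

        int main(void)
        {
                pthread_t t;

                pthread_create(&t, NULL, complete_requests, NULL);
                drain_queue();
                pthread_join(t, NULL);
                printf("queue drained, safe to tear down\n");
                return 0;
        }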
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
block/blk-core.c | 10 ++++++++--
block/blk-exec.c | 4 ++++
block/blk-mq.c | 43 +++++++++++++++++++++++++++----------------
block/blk-mq.h | 1 +
4 files changed, 40 insertions(+), 18 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 5da8e90..accb7fc 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -38,6 +38,7 @@
#include "blk.h"
#include "blk-cgroup.h"
+#include "blk-mq.h"
EXPORT_TRACEPOINT_SYMBOL_GPL(block_bio_remap);
EXPORT_TRACEPOINT_SYMBOL_GPL(block_rq_remap);
@@ -497,8 +498,13 @@ void blk_cleanup_queue(struct request_queue *q)
* Drain all requests queued before DYING marking. Set DEAD flag to
* prevent that q->request_fn() gets invoked after draining finished.
*/
- spin_lock_irq(lock);
- __blk_drain_queue(q, true);
+ if (q->mq_ops) {
+ blk_mq_drain_queue(q);
+ spin_lock_irq(lock);
+ } else {
+ spin_lock_irq(lock);
+ __blk_drain_queue(q, true);
+ }
queue_flag_set(QUEUE_FLAG_DEAD, q);
spin_unlock_irq(lock);
diff --git a/block/blk-exec.c b/block/blk-exec.c
index c3edf9d..bbfc072 100644
--- a/block/blk-exec.c
+++ b/block/blk-exec.c
@@ -60,6 +60,10 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
rq->rq_disk = bd_disk;
rq->end_io = done;
+ /*
+ * don't check the dying flag for MQ because the request won't
+ * be reused after the dying flag is set
+ */
if (q->mq_ops) {
blk_mq_insert_request(q, rq, true);
return;
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 35ae189..38b7bde 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -105,10 +105,13 @@ static int blk_mq_queue_enter(struct request_queue *q)
spin_lock_irq(q->queue_lock);
ret = wait_event_interruptible_lock_irq(q->mq_freeze_wq,
- !blk_queue_bypass(q), *q->queue_lock);
+ !blk_queue_bypass(q) || blk_queue_dying(q),
+ *q->queue_lock);
/* inc usage with lock hold to avoid freeze_queue runs here */
- if (!ret)
+ if (!ret && !blk_queue_dying(q))
__percpu_counter_add(&q->mq_usage_counter, 1, 1000000);
+ else if (blk_queue_dying(q))
+ ret = -ENODEV;
spin_unlock_irq(q->queue_lock);
return ret;
@@ -119,6 +122,22 @@ static void blk_mq_queue_exit(struct request_queue *q)
__percpu_counter_add(&q->mq_usage_counter, -1, 1000000);
}
+static void __blk_mq_drain_queue(struct request_queue *q)
+{
+ while (true) {
+ s64 count;
+
+ spin_lock_irq(q->queue_lock);
+ count = percpu_counter_sum(&q->mq_usage_counter);
+ spin_unlock_irq(q->queue_lock);
+
+ if (count == 0)
+ break;
+ blk_mq_run_queues(q, false);
+ msleep(10);
+ }
+}
+
/*
* Guarantee no request is in use, so we can change any data structure of
* the queue afterward.
@@ -132,21 +151,13 @@ static void blk_mq_freeze_queue(struct request_queue *q)
queue_flag_set(QUEUE_FLAG_BYPASS, q);
spin_unlock_irq(q->queue_lock);
- if (!drain)
- return;
-
- while (true) {
- s64 count;
-
- spin_lock_irq(q->queue_lock);
- count = percpu_counter_sum(&q->mq_usage_counter);
- spin_unlock_irq(q->queue_lock);
+ if (drain)
+ __blk_mq_drain_queue(q);
+}
- if (count == 0)
- break;
- blk_mq_run_queues(q, false);
- msleep(10);
- }
+void blk_mq_drain_queue(struct request_queue *q)
+{
+ __blk_mq_drain_queue(q);
}
static void blk_mq_unfreeze_queue(struct request_queue *q)
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 5761eed..35ff4f7 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -27,6 +27,7 @@ void blk_mq_complete_request(struct request *rq, int error);
void blk_mq_run_request(struct request *rq, bool run_queue, bool async);
void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async);
void blk_mq_init_flush(struct request_queue *q);
+void blk_mq_drain_queue(struct request_queue *q);
/*
* CPU hotplug helpers
--
1.7.9.5
* [PATCH v1 3/5] block: blk-mq: make blk_sync_queue support mq
From: Ming Lei @ 2013-12-26 13:31 UTC
To: Jens Axboe, linux-kernel; +Cc: Christoph Hellwig, Ming Lei
This patch moves the synchronization on the hw contexts'
delayed_work from blk_mq_free_queue() to blk_sync_queue(), so that
blk_sync_queue() can work on mq queues.
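In sketch form (a toy userspace model with made-up types, not the
kernel API), the split this patch creates: the sync step cancels the
per-hctx work for mq, or the single delay_work for the legacy path, so
the free path may assume nothing is pending:

        #include <stdio.h>

        /* made-up stand-ins for the hw context and queue */
        struct hw_ctx {
                int work_pending;
        };

        struct queue {
                struct hw_ctx hctx[4];
                int nr_hw_queues;
                int is_mq;
                int work_pending;       /* legacy single work item */
        };

        static void cancel_work(int *pending)
        {
                *pending = 0;           /* ~cancel_delayed_work_sync() */
        }

        /* after this patch: sync covers both the legacy and the mq case */
        static void sync_queue(struct queue *q)
        {
                int i;

                if (q->is_mq)
                        for (i = 0; i < q->nr_hw_queues; i++)
                                cancel_work(&q->hctx[i].work_pending);
                else
                        cancel_work(&q->work_pending);
        }

        int main(void)
        {
                struct queue q = { .nr_hw_queues = 4, .is_mq = 1 };
                int i;

                for (i = 0; i < q.nr_hw_queues; i++)
                        q.hctx[i].work_pending = 1;
                sync_queue(&q); /* free path can now assume no pending work */
                printf("all delayed work cancelled\n");
                return 0;
        }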
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
block/blk-core.c | 11 ++++++++++-
block/blk-mq.c | 1 -
2 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index accb7fc..c00e0bd 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -246,7 +246,16 @@ EXPORT_SYMBOL(blk_stop_queue);
void blk_sync_queue(struct request_queue *q)
{
del_timer_sync(&q->timeout);
- cancel_delayed_work_sync(&q->delay_work);
+
+ if (q->mq_ops) {
+ struct blk_mq_hw_ctx *hctx;
+ int i;
+
+ queue_for_each_hw_ctx(q, hctx, i)
+ cancel_delayed_work_sync(&hctx->delayed_work);
+ } else {
+ cancel_delayed_work_sync(&q->delay_work);
+ }
}
EXPORT_SYMBOL(blk_sync_queue);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 38b7bde..eaa12ac 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1391,7 +1391,6 @@ void blk_mq_free_queue(struct request_queue *q)
int i;
queue_for_each_hw_ctx(q, hctx, i) {
- cancel_delayed_work_sync(&hctx->delayed_work);
kfree(hctx->ctx_map);
kfree(hctx->ctxs);
blk_mq_free_rq_map(hctx);
--
1.7.9.5
* [PATCH v1 4/5] block: null_blk: fix queue leak inside removing device
From: Ming Lei @ 2013-12-26 13:31 UTC
To: Jens Axboe, linux-kernel; +Cc: Christoph Hellwig, Ming Lei
When queue_mode is NULL_Q_MQ and null_blk is being removed,
blk_cleanup_queue() isn't called to clean up the queue, so the
allocated queue is never freed.
This patch calls blk_cleanup_queue() for MQ as well, to drain all
pending requests first and drop the queue kobject's reference;
blk_mq_free_queue() is then called from the queue kobject's
release handler when its reference count drops to zero.
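The teardown ordering, as a hypothetical refcount model in plain C
(queue_put()/queue_release()/cleanup_queue() are stand-ins for the
kobject machinery, not the real block-layer API):

        #include <stdio.h>
        #include <stdlib.h>

        struct queue {
                int refcount;
        };

        /* analogue of blk_mq_free_queue() running from the kobject
         * release handler */
        static void queue_release(struct queue *q)
        {
                printf("freeing queue resources\n");
                free(q);
        }

        static void queue_put(struct queue *q)
        {
                if (--q->refcount == 0)
                        queue_release(q);
        }

        /* analogue of blk_cleanup_queue(): drain, then drop the ref */
        static void cleanup_queue(struct queue *q)
        {
                /* drain pending requests here ... */
                queue_put(q);
        }

        int main(void)
        {
                struct queue *q = malloc(sizeof(*q));

                q->refcount = 1;
                cleanup_queue(q);       /* what null_del_dev() now does */
                return 0;
        }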
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
drivers/block/null_blk.c | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/drivers/block/null_blk.c b/drivers/block/null_blk.c
index a2e69d2..83a598e 100644
--- a/drivers/block/null_blk.c
+++ b/drivers/block/null_blk.c
@@ -425,10 +425,7 @@ static void null_del_dev(struct nullb *nullb)
list_del_init(&nullb->list);
del_gendisk(nullb->disk);
- if (queue_mode == NULL_Q_MQ)
- blk_mq_free_queue(nullb->q);
- else
- blk_cleanup_queue(nullb->q);
+ blk_cleanup_queue(nullb->q);
put_disk(nullb->disk);
kfree(nullb);
}
@@ -578,10 +575,7 @@ static int null_add_dev(void)
disk = nullb->disk = alloc_disk_node(1, home_node);
if (!disk) {
queue_fail:
- if (queue_mode == NULL_Q_MQ)
- blk_mq_free_queue(nullb->q);
- else
- blk_cleanup_queue(nullb->q);
+ blk_cleanup_queue(nullb->q);
cleanup_queues(nullb);
err:
kfree(nullb);
--
1.7.9.5
* [PATCH v1 5/5] block: blk-mq: don't export blk_mq_free_queue()
From: Ming Lei @ 2013-12-26 13:31 UTC
To: Jens Axboe, linux-kernel; +Cc: Christoph Hellwig, Ming Lei
blk_mq_free_queue() is called from the release handler of the
queue kobject, so it need not be called by drivers.
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
block/blk-mq.c | 1 -
block/blk-mq.h | 1 +
block/blk-sysfs.c | 1 +
include/linux/blk-mq.h | 1 -
4 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index eaa12ac..a6360f5 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1412,7 +1412,6 @@ void blk_mq_free_queue(struct request_queue *q)
list_del_init(&q->all_q_node);
mutex_unlock(&all_q_mutex);
}
-EXPORT_SYMBOL(blk_mq_free_queue);
/* Basically redo blk_mq_init_queue with queue frozen */
static void blk_mq_queue_reinit(struct request_queue *q)
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 35ff4f7..5c39179 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -28,6 +28,7 @@ void blk_mq_run_request(struct request *rq, bool run_queue, bool async);
void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async);
void blk_mq_init_flush(struct request_queue *q);
void blk_mq_drain_queue(struct request_queue *q);
+void blk_mq_free_queue(struct request_queue *q);
/*
* CPU hotplug helpers
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 9777952..8095c4a 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -11,6 +11,7 @@
#include "blk.h"
#include "blk-cgroup.h"
+#include "blk-mq.h"
struct queue_sysfs_entry {
struct attribute attr;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index ab0e9b2..851d34b 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -113,7 +113,6 @@ enum {
};
struct request_queue *blk_mq_init_queue(struct blk_mq_reg *, void *);
-void blk_mq_free_queue(struct request_queue *);
int blk_mq_register_disk(struct gendisk *);
void blk_mq_unregister_disk(struct gendisk *);
void blk_mq_init_commands(struct request_queue *, void (*init)(void *data, struct blk_mq_hw_ctx *, struct request *, unsigned int), void *data);
--
1.7.9.5
* Re: [PATCH v1 4/5] block: null_blk: fix queue leak inside removing device
From: Jens Axboe @ 2013-12-31 16:26 UTC
To: Ming Lei; +Cc: linux-kernel, Christoph Hellwig
On Thu, Dec 26 2013, Ming Lei wrote:
> When queue_mode is NULL_Q_MQ and null_blk is being removed,
> blk_cleanup_queue() isn't called to clean up the queue, so the
> allocated queue is never freed.
>
> This patch calls blk_cleanup_queue() for MQ as well, to drain all
> pending requests first and drop the queue kobject's reference;
> blk_mq_free_queue() is then called from the queue kobject's
> release handler when its reference count drops to zero.
I have applied this to for-linus for 3.13; the other four will go into
3.14.
--
Jens Axboe
* Re: [PATCH v1 1/5] block: blk-mq: avoid initializing request during its completion
From: Jens Axboe @ 2013-12-31 16:38 UTC
To: Ming Lei; +Cc: linux-kernel, Christoph Hellwig
On Thu, Dec 26 2013, Ming Lei wrote:
> One problem is that request->start_time/start_time_ns can end up
> wrong: they are stamped when the request is freed, so a reused
> request carries a stale timestamp from before it was actually
> allocated.
>
> Also, it is normal to initialize a data structure right after its
> allocation.
>
> So move the initialization out of the completion path.
It's done that way because of presumed cache hotness on completion,
since we just touched a lot of the members. Let's just fix the start
time issue by itself.
--
Jens Axboe
* Re: [PATCH v1 1/5] block: blk-mq: avoid initializing request during its completion
From: Ming Lei @ 2014-01-01 4:57 UTC
To: Jens Axboe; +Cc: Linux Kernel Mailing List, Christoph Hellwig
Hi Jens,
On Wed, Jan 1, 2014 at 12:38 AM, Jens Axboe <axboe@kernel.dk> wrote:
> On Thu, Dec 26 2013, Ming Lei wrote:
>> One problem is that request->start_time/start_time_ns can end up
>> wrong: they are stamped when the request is freed, so a reused
>> request carries a stale timestamp from before it was actually
>> allocated.
>>
>> Also, it is normal to initialize a data structure right after its
>> allocation.
>>
>> So move the initialization out of the completion path.
>
> It's done that way because of presumed cache hotness on completion,
> since we just touched a lot of the members. Let's just fix the start time
> issue by itself.
But some members of the request are already touched in the allocation
path (blk-mq core) too, and many members will be touched by the
driver to start the transfer.
Also, I didn't observe any obvious effect on the L1 dcache load/store
miss rate after applying the patch when reading/writing a null_blk
device.
Considering the theoretical advantage of reinitializing in the free
path, I will just fix the start time issue by itself.
Thanks,
--
Ming Lei