public inbox for linux-scsi@vger.kernel.org
* [PATCH 0/2] virtio_scsi: support multi hw queue of blk-mq
@ 2014-11-09 16:57 Ming Lei
  2014-11-09 16:57 ` [PATCH 1/2] scsi: introduce force_blk_mq Ming Lei
  2014-11-09 16:57 ` [PATCH 2/2] virtio_scsi: support multi hw queue of blk-mq Ming Lei
  0 siblings, 2 replies; 8+ messages in thread
From: Ming Lei @ 2014-11-09 16:57 UTC (permalink / raw)
  To: James Bottomley, Jens Axboe, Christoph Hellwig; +Cc: linux-scsi

Hi,

The 1st patch introduces 'force_blk_mq' so that a driver can
declare that it only supports blk-mq.

The 2nd patch implements multi hw queue support for virtio_scsi,
since it is natural to map each virtqueue to a blk-mq hw queue.

Thanks,
Ming Lei




* [PATCH 1/2] scsi: introduce force_blk_mq
  2014-11-09 16:57 [PATCH 0/2] virtio_scsi: support multi hw queue of blk-mq Ming Lei
@ 2014-11-09 16:57 ` Ming Lei
  2014-11-09 16:57 ` [PATCH 2/2] virtio_scsi: support multi hw queue of blk-mq Ming Lei
  1 sibling, 0 replies; 8+ messages in thread
From: Ming Lei @ 2014-11-09 16:57 UTC (permalink / raw)
  To: James Bottomley, Jens Axboe, Christoph Hellwig; +Cc: linux-scsi, Ming Lei

From the SCSI driver's point of view, it is troublesome to support
both the blk-mq and non-blk-mq paths at the same time, especially
once drivers start to support multiple hardware queues.

This patch adds a 'force_blk_mq' flag to scsi_host_template so
that a driver can declare blk-mq-only support and avoid having
to handle the legacy path.

Signed-off-by: Ming Lei <ming.lei@canonical.com>
---
 drivers/scsi/hosts.c     |    3 ++-
 include/scsi/scsi_host.h |    3 +++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
index 06030e1..d2b83f8 100644
--- a/drivers/scsi/hosts.c
+++ b/drivers/scsi/hosts.c
@@ -459,7 +459,8 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize)
 	else
 		shost->dma_boundary = 0xffffffff;
 
-	shost->use_blk_mq = scsi_use_blk_mq && !shost->hostt->disable_blk_mq;
+	shost->use_blk_mq = (scsi_use_blk_mq && !shost->hostt->disable_blk_mq)
+				|| shost->hostt->force_blk_mq;
 
 	device_initialize(&shost->shost_gendev);
 	dev_set_name(&shost->shost_gendev, "host%d", shost->host_no);
diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
index 537c431..75c1468 100644
--- a/include/scsi/scsi_host.h
+++ b/include/scsi/scsi_host.h
@@ -514,6 +514,9 @@ struct scsi_host_template {
 
 	/* temporary flag to disable blk-mq I/O path */
 	bool disable_blk_mq;
+
+	/* tell scsi core we only support blk-mq */
+	bool force_blk_mq;
 };
 
 /*
-- 
1.7.9.5



* [PATCH 2/2] virtio_scsi: support multi hw queue of blk-mq
  2014-11-09 16:57 [PATCH 0/2] virtio_scsi: support multi hw queue of blk-mq Ming Lei
  2014-11-09 16:57 ` [PATCH 1/2] scsi: introduce force_blk_mq Ming Lei
@ 2014-11-09 16:57 ` Ming Lei
  2014-11-10 10:41   ` Paolo Bonzini
  1 sibling, 1 reply; 8+ messages in thread
From: Ming Lei @ 2014-11-09 16:57 UTC (permalink / raw)
  To: James Bottomley, Jens Axboe, Christoph Hellwig
  Cc: linux-scsi, Ming Lei, Paolo Bonzini

Since virtio_scsi already supports multiple virtqueues, it is
natural to map each virtqueue to a blk-mq hardware queue.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
---
 drivers/scsi/virtio_scsi.c |  154 ++++----------------------------------------
 1 file changed, 14 insertions(+), 140 deletions(-)

diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
index b83846f..719adb2 100644
--- a/drivers/scsi/virtio_scsi.c
+++ b/drivers/scsi/virtio_scsi.c
@@ -65,36 +65,6 @@ struct virtio_scsi_vq {
 	struct virtqueue *vq;
 };
 
-/*
- * Per-target queue state.
- *
- * This struct holds the data needed by the queue steering policy.  When a
- * target is sent multiple requests, we need to drive them to the same queue so
- * that FIFO processing order is kept.  However, if a target was idle, we can
- * choose a queue arbitrarily.  In this case the queue is chosen according to
- * the current VCPU, so the driver expects the number of request queues to be
- * equal to the number of VCPUs.  This makes it easy and fast to select the
- * queue, and also lets the driver optimize the IRQ affinity for the virtqueues
- * (each virtqueue's affinity is set to the CPU that "owns" the queue).
- *
- * tgt_seq is held to serialize reading and writing req_vq.
- *
- * Decrements of reqs are never concurrent with writes of req_vq: before the
- * decrement reqs will be != 0; after the decrement the virtqueue completion
- * routine will not use the req_vq so it can be changed by a new request.
- * Thus they can happen outside the tgt_seq, provided of course we make reqs
- * an atomic_t.
- */
-struct virtio_scsi_target_state {
-	seqcount_t tgt_seq;
-
-	/* Count of outstanding requests. */
-	atomic_t reqs;
-
-	/* Currently active virtqueue for requests sent to this target. */
-	struct virtio_scsi_vq *req_vq;
-};
-
 /* Driver instance state */
 struct virtio_scsi {
 	struct virtio_device *vdev;
@@ -150,8 +120,6 @@ static void virtscsi_complete_cmd(struct virtio_scsi *vscsi, void *buf)
 	struct virtio_scsi_cmd *cmd = buf;
 	struct scsi_cmnd *sc = cmd->sc;
 	struct virtio_scsi_cmd_resp *resp = &cmd->resp.cmd;
-	struct virtio_scsi_target_state *tgt =
-				scsi_target(sc->device)->hostdata;
 
 	dev_dbg(&sc->device->sdev_gendev,
 		"cmd %p response %u status %#02x sense_len %u\n",
@@ -205,8 +173,6 @@ static void virtscsi_complete_cmd(struct virtio_scsi *vscsi, void *buf)
 	}
 
 	sc->scsi_done(sc);
-
-	atomic_dec(&tgt->reqs);
 }
 
 static void virtscsi_vq_done(struct virtio_scsi *vscsi,
@@ -514,9 +480,9 @@ static void virtio_scsi_init_hdr_pi(struct virtio_scsi_cmd_req_pi *cmd_pi,
 		cmd_pi->pi_bytesin = blk_rq_sectors(rq) * bi->tuple_size;
 }
 
-static int virtscsi_queuecommand(struct virtio_scsi *vscsi,
-				 struct virtio_scsi_vq *req_vq,
-				 struct scsi_cmnd *sc)
+static int __virtscsi_queuecommand(struct virtio_scsi *vscsi,
+				   struct virtio_scsi_vq *req_vq,
+				   struct scsi_cmnd *sc)
 {
 	struct Scsi_Host *shost = virtio_scsi_host(vscsi->vdev);
 	struct virtio_scsi_cmd *cmd = scsi_cmd_priv(sc);
@@ -550,63 +516,15 @@ static int virtscsi_queuecommand(struct virtio_scsi *vscsi,
 	return 0;
 }
 
-static int virtscsi_queuecommand_single(struct Scsi_Host *sh,
-					struct scsi_cmnd *sc)
-{
-	struct virtio_scsi *vscsi = shost_priv(sh);
-	struct virtio_scsi_target_state *tgt =
-				scsi_target(sc->device)->hostdata;
-
-	atomic_inc(&tgt->reqs);
-	return virtscsi_queuecommand(vscsi, &vscsi->req_vqs[0], sc);
-}
-
-static struct virtio_scsi_vq *virtscsi_pick_vq(struct virtio_scsi *vscsi,
-					       struct virtio_scsi_target_state *tgt)
-{
-	struct virtio_scsi_vq *vq;
-	unsigned long flags;
-	u32 queue_num;
-
-	local_irq_save(flags);
-	if (atomic_inc_return(&tgt->reqs) > 1) {
-		unsigned long seq;
-
-		do {
-			seq = read_seqcount_begin(&tgt->tgt_seq);
-			vq = tgt->req_vq;
-		} while (read_seqcount_retry(&tgt->tgt_seq, seq));
-	} else {
-		/* no writes can be concurrent because of atomic_t */
-		write_seqcount_begin(&tgt->tgt_seq);
-
-		/* keep previous req_vq if a reader just arrived */
-		if (unlikely(atomic_read(&tgt->reqs) > 1)) {
-			vq = tgt->req_vq;
-			goto unlock;
-		}
-
-		queue_num = smp_processor_id();
-		while (unlikely(queue_num >= vscsi->num_queues))
-			queue_num -= vscsi->num_queues;
-		tgt->req_vq = vq = &vscsi->req_vqs[queue_num];
- unlock:
-		write_seqcount_end(&tgt->tgt_seq);
-	}
-	local_irq_restore(flags);
-
-	return vq;
-}
-
-static int virtscsi_queuecommand_multi(struct Scsi_Host *sh,
-				       struct scsi_cmnd *sc)
+static int virtscsi_queuecommand(struct Scsi_Host *sh,
+				 struct scsi_cmnd *sc)
 {
 	struct virtio_scsi *vscsi = shost_priv(sh);
-	struct virtio_scsi_target_state *tgt =
-				scsi_target(sc->device)->hostdata;
-	struct virtio_scsi_vq *req_vq = virtscsi_pick_vq(vscsi, tgt);
+	u32 tag = blk_mq_unique_tag(sc->request);
+	u16 hwq = blk_mq_unique_tag_to_hwq(tag);
+	struct virtio_scsi_vq *req_vq = &vscsi->req_vqs[hwq];
 
-	return virtscsi_queuecommand(vscsi, req_vq, sc);
+	return __virtscsi_queuecommand(vscsi, req_vq, sc);
 }
 
 static int virtscsi_tmf(struct virtio_scsi *vscsi, struct virtio_scsi_cmd *cmd)
@@ -718,37 +636,13 @@ static int virtscsi_abort(struct scsi_cmnd *sc)
 	return virtscsi_tmf(vscsi, cmd);
 }
 
-static int virtscsi_target_alloc(struct scsi_target *starget)
-{
-	struct Scsi_Host *sh = dev_to_shost(starget->dev.parent);
-	struct virtio_scsi *vscsi = shost_priv(sh);
-
-	struct virtio_scsi_target_state *tgt =
-				kmalloc(sizeof(*tgt), GFP_KERNEL);
-	if (!tgt)
-		return -ENOMEM;
-
-	seqcount_init(&tgt->tgt_seq);
-	atomic_set(&tgt->reqs, 0);
-	tgt->req_vq = &vscsi->req_vqs[0];
-
-	starget->hostdata = tgt;
-	return 0;
-}
-
-static void virtscsi_target_destroy(struct scsi_target *starget)
-{
-	struct virtio_scsi_target_state *tgt = starget->hostdata;
-	kfree(tgt);
-}
-
-static struct scsi_host_template virtscsi_host_template_single = {
+static struct scsi_host_template virtscsi_host_template = {
 	.module = THIS_MODULE,
 	.name = "Virtio SCSI HBA",
 	.proc_name = "virtio_scsi",
 	.this_id = -1,
 	.cmd_size = sizeof(struct virtio_scsi_cmd),
-	.queuecommand = virtscsi_queuecommand_single,
+	.queuecommand = virtscsi_queuecommand,
 	.change_queue_depth = virtscsi_change_queue_depth,
 	.eh_abort_handler = virtscsi_abort,
 	.eh_device_reset_handler = virtscsi_device_reset,
@@ -756,26 +650,8 @@ static struct scsi_host_template virtscsi_host_template_single = {
 	.can_queue = 1024,
 	.dma_boundary = UINT_MAX,
 	.use_clustering = ENABLE_CLUSTERING,
-	.target_alloc = virtscsi_target_alloc,
-	.target_destroy = virtscsi_target_destroy,
-};
 
-static struct scsi_host_template virtscsi_host_template_multi = {
-	.module = THIS_MODULE,
-	.name = "Virtio SCSI HBA",
-	.proc_name = "virtio_scsi",
-	.this_id = -1,
-	.cmd_size = sizeof(struct virtio_scsi_cmd),
-	.queuecommand = virtscsi_queuecommand_multi,
-	.change_queue_depth = virtscsi_change_queue_depth,
-	.eh_abort_handler = virtscsi_abort,
-	.eh_device_reset_handler = virtscsi_device_reset,
-
-	.can_queue = 1024,
-	.dma_boundary = UINT_MAX,
-	.use_clustering = ENABLE_CLUSTERING,
-	.target_alloc = virtscsi_target_alloc,
-	.target_destroy = virtscsi_target_destroy,
+	.force_blk_mq = true,
 };
 
 #define virtscsi_config_get(vdev, fld) \
@@ -944,10 +820,7 @@ static int virtscsi_probe(struct virtio_device *vdev)
 
 	num_targets = virtscsi_config_get(vdev, max_target) + 1;
 
-	if (num_queues == 1)
-		hostt = &virtscsi_host_template_single;
-	else
-		hostt = &virtscsi_host_template_multi;
+	hostt = &virtscsi_host_template;
 
 	shost = scsi_host_alloc(hostt,
 		sizeof(*vscsi) + sizeof(vscsi->req_vqs[0]) * num_queues);
@@ -983,6 +856,7 @@ static int virtscsi_probe(struct virtio_device *vdev)
 	shost->max_id = num_targets;
 	shost->max_channel = 0;
 	shost->max_cmd_len = VIRTIO_SCSI_CDB_SIZE;
+	shost->nr_hw_queues = num_queues;
 
 	if (virtio_has_feature(vdev, VIRTIO_SCSI_F_T10_PI)) {
 		host_prot = SHOST_DIF_TYPE1_PROTECTION | SHOST_DIF_TYPE2_PROTECTION |
-- 
1.7.9.5



* Re: [PATCH 2/2] virtio_scsi: support multi hw queue of blk-mq
  2014-11-09 16:57 ` [PATCH 2/2] virtio_scsi: support multi hw queue of blk-mq Ming Lei
@ 2014-11-10 10:41   ` Paolo Bonzini
  2014-11-10 13:39     ` Christoph Hellwig
  0 siblings, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2014-11-10 10:41 UTC (permalink / raw)
  To: Ming Lei, James Bottomley, Jens Axboe, Christoph Hellwig; +Cc: linux-scsi

On 09/11/2014 17:57, Ming Lei wrote:
> Since virtio_scsi already supports multiple virtqueues, it is
> natural to map each virtqueue to a blk-mq hardware queue.
> 
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Ming Lei <ming.lei@canonical.com>
> ---
>  drivers/scsi/virtio_scsi.c |  154 ++++----------------------------------------
>  1 file changed, 14 insertions(+), 140 deletions(-)

Nice. :)

FWIW, instead of force_blk_mq, it would be fine for me to make
virtio-scsi use a single queue if not using blk-mq.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>


* Re: [PATCH 2/2] virtio_scsi: support multi hw queue of blk-mq
  2014-11-10 10:41   ` Paolo Bonzini
@ 2014-11-10 13:39     ` Christoph Hellwig
  2014-11-10 14:25       ` Ming Lei
  0 siblings, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2014-11-10 13:39 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Ming Lei, James Bottomley, Jens Axboe, Christoph Hellwig,
	linux-scsi

On Mon, Nov 10, 2014 at 11:41:43AM +0100, Paolo Bonzini wrote:
> FWIW, instead of force_blk_mq, it would be fine for me to make
> virtio-scsi use a single queue if not using blk-mq.

I agree with that.  I'd rather move all of SCSI over after a fairly short
period instead of adding special cases like this, mostly out of fear of the
horrible transition we had to the new EH model 10 years ago.


* Re: [PATCH 2/2] virtio_scsi: support multi hw queue of blk-mq
  2014-11-10 13:39     ` Christoph Hellwig
@ 2014-11-10 14:25       ` Ming Lei
  2014-11-10 16:05         ` Christoph Hellwig
  0 siblings, 1 reply; 8+ messages in thread
From: Ming Lei @ 2014-11-10 14:25 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Paolo Bonzini, James Bottomley, Jens Axboe, Linux SCSI List

On Mon, Nov 10, 2014 at 9:39 PM, Christoph Hellwig <hch@lst.de> wrote:
> On Mon, Nov 10, 2014 at 11:41:43AM +0100, Paolo Bonzini wrote:
>> FWIW, instead of force_blk_mq, it would be fine for me to make
>> virtio-scsi use a single queue if not using blk-mq.
>
> I agree with that.

For virtio-scsi, we can do that, because with the previous multi-vq
support only a single vq is in use at a time anyway.

> I'd rather move all of SCSI over after a fairly short
> period instead of adding special cases like this, mostly out of fear of the
> horrible transition we had to the new EH model 10 years ago.

It depends on how long the transition period is, and other drivers
with multi-hwq support might be in the same situation as virtio-scsi.  If
the period is long, it will be painful for those drivers to support
both blk-mq and non-mq.

Also, with force_blk_mq in virtio-scsi, people can test scsi-mq in the
default configuration, which should help collect more test reports. :-)

But I don't feel strongly about force_blk_mq if you think it isn't a good idea.

Thanks,
Ming Lei


* Re: [PATCH 2/2] virtio_scsi: support multi hw queue of blk-mq
  2014-11-10 14:25       ` Ming Lei
@ 2014-11-10 16:05         ` Christoph Hellwig
  2014-11-11 17:09           ` Paolo Bonzini
  0 siblings, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2014-11-10 16:05 UTC (permalink / raw)
  To: Ming Lei; +Cc: Paolo Bonzini, James Bottomley, Jens Axboe, Linux SCSI List

On Mon, Nov 10, 2014 at 10:25:43PM +0800, Ming Lei wrote:
> It depends on how long the transition period is, and other drivers
> with multi-hwq support might be in the same situation as virtio-scsi.  If
> the period is long, it will be painful for those drivers to support
> both blk-mq and non-mq.

If it's too long we need to find better ways to control the behavior.  But
honestly I'd rather keep selected drivers on the old code in that case than
the other way around.

We currently still have two big blockers for having any sort of real
blk-mq by default decision for scsi:

 a) there is no multipath support for it, and we simply can't break existing
    setups that use it.
 b) there is no support for I/O schedulers at all.  This might be okay
    for virtio-scsi, where in general you have a host scheduler, but for
    real hardware that still has to deal with more spinning rust than
    solid state it's a blocker.  Fortunately Jens is working on that one.

> Also, with force_blk_mq in virtio-scsi, people can test scsi-mq in the
> default configuration, which should help collect more test reports. :-)

I think between the config and the module/boot option we've already got
ways to get reports.  Improvements welcome.


* Re: [PATCH 2/2] virtio_scsi: support multi hw queue of blk-mq
  2014-11-10 16:05         ` Christoph Hellwig
@ 2014-11-11 17:09           ` Paolo Bonzini
  0 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2014-11-11 17:09 UTC (permalink / raw)
  To: Christoph Hellwig, Ming Lei; +Cc: James Bottomley, Jens Axboe, Linux SCSI List



On 10/11/2014 17:05, Christoph Hellwig wrote:
> 
>  a) there is no multipath support for it, and we simply can't break existing
>     setups that use it.
>  b) there is no support for I/O schedulers at all.  This might be okay
>     for virtio-scsi, where in general you have a host scheduler, but for
>     real hardware that still has to deal with more spinning rust than
>     solid state it's a blocker.  Fortunately Jens is working on that one.

It is sort-of okay for virtio-scsi (and virtio-blk).  Deadline provides
a good speedup over noop for virtual disks, even in the presence of a
host scheduler, though nobody has ever benchmarked why.

Paolo

