From: Tyrel Datwyler <tyreld@linux.ibm.com>
To: james.bottomley@hansenpartnership.com
Cc: Tyrel Datwyler <tyreld@linux.ibm.com>,
martin.petersen@oracle.com, linux-scsi@vger.kernel.org,
linux-kernel@vger.kernel.org,
Brian King <brking@linux.vnet.ibm.com>,
brking@linux.ibm.com, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v5 15/21] ibmvfc: send commands down HW Sub-CRQ when channelized
Date: Thu, 14 Jan 2021 14:31:42 -0600
Message-ID: <20210114203148.246656-16-tyreld@linux.ibm.com>
In-Reply-To: <20210114203148.246656-1-tyreld@linux.ibm.com>
When the client has negotiated the use of channels, all vfcFrames are
required to go down a Sub-CRQ channel; sending them on the primary CRQ
is a protocol violation. If the adapter state is channelized, submit
vfcFrames to the appropriate Sub-CRQ with the new ibmvfc_send_sub_crq()
helper, which wraps the H_SEND_SUB_CRQ hcall.
Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com>
Reviewed-by: Brian King <brking@linux.vnet.ibm.com>
---
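For review, a minimal standalone sketch of the routing rule this patch adds.
The enum, struct, and helper names below (q_fmt, struct queue, send_crq,
send_sub_crq, send_event, pick_channel) are illustrative stand-ins, not the
driver's or hypervisor's real interfaces: once the adapter is channelized,
ibmvfc_send_event() sends the frame through the Sub-CRQ hcall wrapper, and
ibmvfc_queuecommand() selects the Sub-CRQ as hwq modulo the number of active
queues. Built as ordinary userspace C, it just prints which hcall path a
frame would take.

/*
 * Standalone illustration of the routing rule in this patch; the types and
 * helpers here are stand-ins, not the real ibmvfc or hypervisor interfaces.
 */
#include <stdint.h>
#include <stdio.h>

enum q_fmt { CRQ_FMT, SUB_CRQ_FMT };

struct queue {
        enum q_fmt fmt;
        uint64_t vios_cookie;   /* Sub-CRQ handle registered with the VIOS */
};

/* Stand-ins for the H_SEND_CRQ / H_SEND_SUB_CRQ hcall wrappers. */
static int send_crq(uint64_t w1, uint64_t w2)
{
        printf("H_SEND_CRQ     %llx %llx\n",
               (unsigned long long)w1, (unsigned long long)w2);
        return 0;
}

static int send_sub_crq(uint64_t cookie, uint64_t w1, uint64_t w2)
{
        printf("H_SEND_SUB_CRQ cookie=%llx %llx %llx\n",
               (unsigned long long)cookie,
               (unsigned long long)w1, (unsigned long long)w2);
        return 0;
}

/*
 * Mirrors the branch added to ibmvfc_send_event(): frames on a channelized
 * queue must use the Sub-CRQ path, everything else stays on the primary CRQ.
 */
static int send_event(const struct queue *q, uint64_t w1, uint64_t w2)
{
        if (q->fmt == SUB_CRQ_FMT)
                return send_sub_crq(q->vios_cookie, w1, w2);
        return send_crq(w1, w2);
}

/* Mirrors the hwq -> Sub-CRQ selection in ibmvfc_queuecommand(). */
static unsigned int pick_channel(unsigned int hwq, unsigned int active_queues)
{
        return hwq % active_queues;
}

int main(void)
{
        struct queue scrq = { SUB_CRQ_FMT, 0x1234 };
        struct queue crq  = { CRQ_FMT, 0 };

        printf("hwq 5 with 4 channels -> channel %u\n", pick_channel(5, 4));
        send_event(&scrq, 0xaa, 0xbb);  /* channelized: Sub-CRQ path  */
        send_event(&crq,  0xaa, 0xbb);  /* legacy: primary CRQ path   */
        return 0;
}
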
drivers/scsi/ibmvscsi/ibmvfc.c | 39 ++++++++++++++++++++++++++++------
1 file changed, 33 insertions(+), 6 deletions(-)
diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
index 3f3cc37a263f..865b87881d86 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.c
+++ b/drivers/scsi/ibmvscsi/ibmvfc.c
@@ -704,6 +704,15 @@ static int ibmvfc_send_crq(struct ibmvfc_host *vhost, u64 word1, u64 word2)
return plpar_hcall_norets(H_SEND_CRQ, vdev->unit_address, word1, word2);
}
+static int ibmvfc_send_sub_crq(struct ibmvfc_host *vhost, u64 cookie, u64 word1,
+ u64 word2, u64 word3, u64 word4)
+{
+ struct vio_dev *vdev = to_vio_dev(vhost->dev);
+
+ return plpar_hcall_norets(H_SEND_SUB_CRQ, vdev->unit_address, cookie,
+ word1, word2, word3, word4);
+}
+
/**
* ibmvfc_send_crq_init - Send a CRQ init message
* @vhost: ibmvfc host struct
@@ -1623,8 +1632,17 @@ static int ibmvfc_send_event(struct ibmvfc_event *evt,
mb();
- if ((rc = ibmvfc_send_crq(vhost, be64_to_cpu(crq_as_u64[0]),
- be64_to_cpu(crq_as_u64[1])))) {
+ if (evt->queue->fmt == IBMVFC_SUB_CRQ_FMT)
+ rc = ibmvfc_send_sub_crq(vhost,
+ evt->queue->vios_cookie,
+ be64_to_cpu(crq_as_u64[0]),
+ be64_to_cpu(crq_as_u64[1]),
+ 0, 0);
+ else
+ rc = ibmvfc_send_crq(vhost, be64_to_cpu(crq_as_u64[0]),
+ be64_to_cpu(crq_as_u64[1]));
+
+ if (rc) {
list_del(&evt->queue_list);
spin_unlock_irqrestore(&evt->queue->l_lock, flags);
del_timer(&evt->timer);
@@ -1842,6 +1860,7 @@ static int ibmvfc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmnd)
struct ibmvfc_event *evt;
u32 tag_and_hwq = blk_mq_unique_tag(cmnd->request);
u16 hwq = blk_mq_unique_tag_to_hwq(tag_and_hwq);
+ u16 scsi_channel;
int rc;
if (unlikely((rc = fc_remote_port_chkready(rport))) ||
@@ -1852,7 +1871,13 @@ static int ibmvfc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmnd)
}
cmnd->result = (DID_OK << 16);
- evt = ibmvfc_get_event(&vhost->crq);
+ if (vhost->using_channels) {
+ scsi_channel = hwq % vhost->scsi_scrqs.active_queues;
+ evt = ibmvfc_get_event(&vhost->scsi_scrqs.scrqs[scsi_channel]);
+ evt->hwq = hwq % vhost->scsi_scrqs.active_queues;
+ } else
+ evt = ibmvfc_get_event(&vhost->crq);
+
ibmvfc_init_event(evt, ibmvfc_scsi_done, IBMVFC_CMD_FORMAT);
evt->cmnd = cmnd;
@@ -1868,8 +1893,6 @@ static int ibmvfc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmnd)
}
vfc_cmd->correlation = cpu_to_be64((u64)evt);
- if (vhost->using_channels)
- evt->hwq = hwq % vhost->scsi_scrqs.active_queues;
if (likely(!(rc = ibmvfc_map_sg_data(cmnd, evt, vfc_cmd, vhost->dev))))
return ibmvfc_send_event(evt, vhost, 0);
@@ -2200,7 +2223,11 @@ static int ibmvfc_reset_device(struct scsi_device *sdev, int type, char *desc)
spin_lock_irqsave(vhost->host->host_lock, flags);
if (vhost->state == IBMVFC_ACTIVE) {
- evt = ibmvfc_get_event(&vhost->crq);
+ if (vhost->using_channels)
+ evt = ibmvfc_get_event(&vhost->scsi_scrqs.scrqs[0]);
+ else
+ evt = ibmvfc_get_event(&vhost->crq);
+
ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_CMD_FORMAT);
tmf = ibmvfc_init_vfc_cmd(evt, sdev);
iu = ibmvfc_get_fcp_iu(vhost, tmf);
--
2.27.0
Thread overview: 27+ messages
2021-01-14 20:31 [PATCH v5 00/21] ibmvfc: initial MQ development/enablement Tyrel Datwyler
2021-01-14 20:31 ` [PATCH v5 01/21] ibmvfc: add vhost fields and defaults for MQ enablement Tyrel Datwyler
2021-01-14 20:31 ` [PATCH v5 02/21] ibmvfc: move event pool init/free routines Tyrel Datwyler
2021-01-14 20:31 ` [PATCH v5 03/21] ibmvfc: init/free event pool during queue allocation/free Tyrel Datwyler
2021-01-14 20:31 ` [PATCH v5 04/21] ibmvfc: add size parameter to ibmvfc_init_event_pool Tyrel Datwyler
2021-01-14 20:31 ` [PATCH v5 05/21] ibmvfc: define hcall wrapper for registering a Sub-CRQ Tyrel Datwyler
2021-01-14 20:31 ` [PATCH v5 06/21] ibmvfc: add Subordinate CRQ definitions Tyrel Datwyler
2021-01-14 20:31 ` [PATCH v5 07/21] ibmvfc: add alloc/dealloc routines for SCSI Sub-CRQ Channels Tyrel Datwyler
2021-01-14 20:31 ` [PATCH v5 08/21] ibmvfc: add Sub-CRQ IRQ enable/disable routine Tyrel Datwyler
2021-01-14 20:31 ` [PATCH v5 09/21] ibmvfc: add handlers to drain and complete Sub-CRQ responses Tyrel Datwyler
2021-01-14 20:31 ` [PATCH v5 10/21] ibmvfc: define Sub-CRQ interrupt handler routine Tyrel Datwyler
2021-01-14 20:31 ` [PATCH v5 11/21] ibmvfc: map/request irq and register Sub-CRQ interrupt handler Tyrel Datwyler
2021-01-14 20:31 ` [PATCH v5 12/21] ibmvfc: implement channel enquiry and setup commands Tyrel Datwyler
2021-01-14 20:31 ` [PATCH v5 13/21] ibmvfc: advertise client support for using hardware channels Tyrel Datwyler
2021-01-14 20:31 ` [PATCH v5 14/21] ibmvfc: set and track hw queue in ibmvfc_event struct Tyrel Datwyler
2021-01-14 20:31 ` Tyrel Datwyler [this message]
2021-01-14 20:31 ` [PATCH v5 16/21] ibmvfc: register Sub-CRQ handles with VIOS during channel setup Tyrel Datwyler
2021-01-14 20:31 ` [PATCH v5 17/21] ibmvfc: add cancel mad initialization helper Tyrel Datwyler
2021-01-14 20:31 ` [PATCH v5 18/21] ibmvfc: send Cancel MAD down each hw scsi channel Tyrel Datwyler
2021-01-14 22:42 ` Brian King
2021-01-14 20:31 ` [PATCH v5 19/21] ibmvfc: purge scsi channels after transport loss/reset Tyrel Datwyler
2021-01-14 20:31 ` [PATCH v5 20/21] ibmvfc: enable MQ and set reasonable defaults Tyrel Datwyler
2021-01-14 20:31 ` [PATCH v5 21/21] ibmvfc: provide modules parameters for MQ settings Tyrel Datwyler
2021-01-14 22:44 ` Brian King
2021-01-14 22:47 ` [PATCH v5 00/21] ibmvfc: initial MQ development/enablement Brian King
2021-01-15 3:31 ` Martin K. Petersen
2021-01-21 3:34 ` Martin K. Petersen