From: Brian King <brking@linux.vnet.ibm.com>
To: Tyrel Datwyler <tyreld@linux.ibm.com>,
james.bottomley@hansenpartnership.com
Cc: brking@linux.ibm.com, linuxppc-dev@lists.ozlabs.org,
linux-scsi@vger.kernel.org, martin.petersen@oracle.com,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 18/21] ibmvfc: send Cancel MAD down each hw scsi channel
Date: Tue, 12 Jan 2021 15:46:45 -0600
Message-ID: <27876949-1427-a0b6-277c-e21628669a36@linux.vnet.ibm.com>
In-Reply-To: <20210111231225.105347-19-tyreld@linux.ibm.com>
On 1/11/21 5:12 PM, Tyrel Datwyler wrote:
> diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
> index b0b0212344f3..24e1278acfeb 100644
> --- a/drivers/scsi/ibmvscsi/ibmvfc.c
> +++ b/drivers/scsi/ibmvscsi/ibmvfc.c
> @@ -2418,18 +2418,79 @@ static struct ibmvfc_event *ibmvfc_init_tmf(struct ibmvfc_queue *queue,
> return evt;
> }
>
> -/**
> - * ibmvfc_cancel_all - Cancel all outstanding commands to the device
> - * @sdev: scsi device to cancel commands
> - * @type: type of error recovery being performed
> - *
> - * This sends a cancel to the VIOS for the specified device. This does
> - * NOT send any abort to the actual device. That must be done separately.
> - *
> - * Returns:
> - * 0 on success / other on failure
> - **/
> -static int ibmvfc_cancel_all(struct scsi_device *sdev, int type)
> +static int ibmvfc_cancel_all_mq(struct scsi_device *sdev, int type)
> +{
> + struct ibmvfc_host *vhost = shost_priv(sdev->host);
> + struct ibmvfc_event *evt, *found_evt, *temp;
> + struct ibmvfc_queue *queues = vhost->scsi_scrqs.scrqs;
> + unsigned long flags;
> + int num_hwq, i;
> + LIST_HEAD(cancelq);
> + u16 status;
> +
> + ENTER;
> + spin_lock_irqsave(vhost->host->host_lock, flags);
> + num_hwq = vhost->scsi_scrqs.active_queues;
> + for (i = 0; i < num_hwq; i++) {
> + spin_lock(queues[i].q_lock);
> + spin_lock(&queues[i].l_lock);
> + found_evt = NULL;
> + list_for_each_entry(evt, &queues[i].sent, queue_list) {
> + if (evt->cmnd && evt->cmnd->device == sdev) {
> + found_evt = evt;
> + break;
> + }
> + }
> + spin_unlock(&queues[i].l_lock);
> +
I really don't like the way the first for loop grabs all the q_locks, and then
there is a second for loop that drops all these locks... I think this code would
be cleaner if it looked like:
		if (found_evt && vhost->logged_in) {
			evt = ibmvfc_init_tmf(&queues[i], sdev, type);
			evt->sync_iu = &queues[i].cancel_rsp;
			ibmvfc_send_event(evt, vhost, default_timeout);
			list_add_tail(&evt->cancel, &cancelq);
		}
		spin_unlock(queues[i].q_lock);
	}
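
To spell it out: everything then lives in a single loop, with the unlock at
the bottom of each iteration. Untested sketch, with the found_evt search left
as is:

	for (i = 0; i < num_hwq; i++) {
		spin_lock(queues[i].q_lock);
		spin_lock(&queues[i].l_lock);
		/* ... existing search of queues[i].sent for found_evt ... */
		spin_unlock(&queues[i].l_lock);

		if (found_evt && vhost->logged_in) {
			evt = ibmvfc_init_tmf(&queues[i], sdev, type);
			evt->sync_iu = &queues[i].cancel_rsp;
			ibmvfc_send_event(evt, vhost, default_timeout);
			list_add_tail(&evt->cancel, &cancelq);
		}
		spin_unlock(queues[i].q_lock);
	}

That gets rid of the second for loop that only exists to drop the q_locks,
each q_lock is held just for its own iteration, and host_lock is the only
lock held across all the queues.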
> + if (!found_evt)
> + continue;
> +
> + if (vhost->logged_in) {
> + evt = ibmvfc_init_tmf(&queues[i], sdev, type);
> + evt->sync_iu = &queues[i].cancel_rsp;
> + ibmvfc_send_event(evt, vhost, default_timeout);
> + list_add_tail(&evt->cancel, &cancelq);
> + }
> + }
> +
> + for (i = 0; i < num_hwq; i++)
> + spin_unlock(queues[i].q_lock);
> + spin_unlock_irqrestore(vhost->host->host_lock, flags);
> +
> + if (list_empty(&cancelq)) {
> + if (vhost->log_level > IBMVFC_DEFAULT_LOG_LEVEL)
> + sdev_printk(KERN_INFO, sdev, "No events found to cancel\n");
> + return 0;
> + }
> +
> + sdev_printk(KERN_INFO, sdev, "Cancelling outstanding commands.\n");
> +
> + list_for_each_entry_safe(evt, temp, &cancelq, cancel) {
> + wait_for_completion(&evt->comp);
> + status = be16_to_cpu(evt->queue->cancel_rsp.mad_common.status);
You probably want a list_del(&evt->cancel) here.
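i.e. something like:

		wait_for_completion(&evt->comp);
		status = be16_to_cpu(evt->queue->cancel_rsp.mad_common.status);
		list_del(&evt->cancel);	/* don't leave a freed event on cancelq */
		ibmvfc_free_event(evt);

so the event is off cancelq before it goes back to the event pool.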
> + ibmvfc_free_event(evt);
> +
> + if (status != IBMVFC_MAD_SUCCESS) {
> + sdev_printk(KERN_WARNING, sdev, "Cancel failed with rc=%x\n", status);
> + switch (status) {
> + case IBMVFC_MAD_DRIVER_FAILED:
> + case IBMVFC_MAD_CRQ_ERROR:
> + /* Host adapter most likely going through reset, return success so
> + * the caller will wait for the command being cancelled to get returned
> + */
> + break;
> + default:
> + break;
I think this default break needs to be a return -EIO.
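i.e.:

			switch (status) {
			case IBMVFC_MAD_DRIVER_FAILED:
			case IBMVFC_MAD_CRQ_ERROR:
				/* Host adapter most likely going through reset, return success so
				 * the caller will wait for the command being cancelled to get returned
				 */
				break;
			default:
				return -EIO;
			}

Otherwise a cancel that fails for some unexpected reason still gets reported
to the caller as success.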
> + }
> + }
> + }
> +
> + sdev_printk(KERN_INFO, sdev, "Successfully cancelled outstanding commands\n");
> + return 0;
> +}
> +
> +static int ibmvfc_cancel_all_sq(struct scsi_device *sdev, int type)
> {
> struct ibmvfc_host *vhost = shost_priv(sdev->host);
> struct ibmvfc_event *evt, *found_evt;
> @@ -2498,6 +2559,27 @@ static int ibmvfc_cancel_all(struct scsi_device *sdev, int type)
> return 0;
> }
>
--
Brian King
Power Linux I/O
IBM Linux Technology Center