From: Tyrel Datwyler <tyreld@linux.ibm.com>
To: Brian King <brking@linux.vnet.ibm.com>,
	james.bottomley@hansenpartnership.com
Cc: brking@linux.ibm.com, linuxppc-dev@lists.ozlabs.org,
	linux-scsi@vger.kernel.org, martin.petersen@oracle.com,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 04/13] ibmvfc: add alloc/dealloc routines for SCSI Sub-CRQ Channels
Date: Mon, 30 Nov 2020 09:26:05 -0800
Message-ID: <0eeac30f-07bd-d4c5-fe21-d5092ca3fd62@linux.ibm.com>
In-Reply-To: <0c308b76-c744-0257-d5ba-3ffd0e6073a3@linux.vnet.ibm.com>

On 11/27/20 9:46 AM, Brian King wrote:
> On 11/25/20 7:48 PM, Tyrel Datwyler wrote:
>> Allocate a set of Sub-CRQs in advance. During channel setup the client
>> and VIOS negotiate the number of queues the VIOS supports and the number
>> that the client desires to request. It's possible that the final channel
>> resources allocated are fewer than requested, but the client is still
>> responsible for sending handles for every queue it requested.
>>
>> Also, provide deallocation cleanup routines.
>>
>> Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com>
>> ---
>>  drivers/scsi/ibmvscsi/ibmvfc.c | 115 +++++++++++++++++++++++++++++++++
>>  drivers/scsi/ibmvscsi/ibmvfc.h |   1 +
>>  2 files changed, 116 insertions(+)
>>
>> diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
>> index 260b82e3cc01..571abdb48384 100644
>> --- a/drivers/scsi/ibmvscsi/ibmvfc.c
>> +++ b/drivers/scsi/ibmvscsi/ibmvfc.c
>> @@ -4983,6 +4983,114 @@ static int ibmvfc_init_crq(struct ibmvfc_host *vhost)
>>  	return retrc;
>>  }
>>  
>> +static int ibmvfc_register_scsi_channel(struct ibmvfc_host *vhost,
>> +				  int index)
>> +{
>> +	struct device *dev = vhost->dev;
>> +	struct vio_dev *vdev = to_vio_dev(dev);
>> +	struct ibmvfc_sub_queue *scrq = &vhost->scsi_scrqs.scrqs[index];
>> +	int rc = -ENOMEM;
>> +
>> +	ENTER;
>> +
>> +	scrq->msgs = (struct ibmvfc_sub_crq *)get_zeroed_page(GFP_KERNEL);
>> +	if (!scrq->msgs)
>> +		return rc;
>> +
>> +	scrq->size = PAGE_SIZE / sizeof(*scrq->msgs);
>> +	scrq->msg_token = dma_map_single(dev, scrq->msgs, PAGE_SIZE,
>> +					 DMA_BIDIRECTIONAL);
>> +
>> +	if (dma_mapping_error(dev, scrq->msg_token))
>> +		goto dma_map_failed;
>> +
>> +	rc = h_reg_sub_crq(vdev->unit_address, scrq->msg_token, PAGE_SIZE,
>> +			   &scrq->cookie, &scrq->hw_irq);
>> +
>> +	if (rc) {
>> +		dev_warn(dev, "Error registering sub-crq: %d\n", rc);
>> +		dev_warn(dev, "Firmware may not support MQ\n");
>> +		goto reg_failed;
>> +	}
>> +
>> +	scrq->hwq_id = index;
>> +	scrq->vhost = vhost;
>> +
>> +	LEAVE;
>> +	return 0;
>> +
>> +reg_failed:
>> +	dma_unmap_single(dev, scrq->msg_token, PAGE_SIZE, DMA_BIDIRECTIONAL);
>> +dma_map_failed:
>> +	free_page((unsigned long)scrq->msgs);
>> +	LEAVE;
>> +	return rc;
>> +}
>> +
>> +static void ibmvfc_deregister_scsi_channel(struct ibmvfc_host *vhost, int index)
>> +{
>> +	struct device *dev = vhost->dev;
>> +	struct vio_dev *vdev = to_vio_dev(dev);
>> +	struct ibmvfc_sub_queue *scrq = &vhost->scsi_scrqs.scrqs[index];
>> +	long rc;
>> +
>> +	ENTER;
>> +
>> +	do {
>> +		rc = plpar_hcall_norets(H_FREE_SUB_CRQ, vdev->unit_address,
>> +					scrq->cookie);
>> +	} while (rc == H_BUSY || H_IS_LONG_BUSY(rc));
>> +
>> +	if (rc)
>> +		dev_err(dev, "Failed to free sub-crq[%d]: rc=%ld\n", index, rc);
>> +
>> +	dma_unmap_single(dev, scrq->msg_token, PAGE_SIZE, DMA_BIDIRECTIONAL);
>> +	free_page((unsigned long)scrq->msgs);
>> +	LEAVE;
>> +}
>> +
>> +static int ibmvfc_init_sub_crqs(struct ibmvfc_host *vhost)
>> +{
>> +	int i, j;
>> +
>> +	ENTER;
>> +
>> +	vhost->scsi_scrqs.scrqs = kcalloc(vhost->client_scsi_channels,
>> +					  sizeof(*vhost->scsi_scrqs.scrqs),
>> +					  GFP_KERNEL);
>> +	if (!vhost->scsi_scrqs.scrqs)
>> +		return -1;
>> +
>> +	for (i = 0; i < vhost->client_scsi_channels; i++) {
>> +		if (ibmvfc_register_scsi_channel(vhost, i)) {
>> +			for (j = i; j > 0; j--)
>> +				ibmvfc_deregister_scsi_channel(vhost, j - 1);
>> +			kfree(vhost->scsi_scrqs.scrqs);
>> +			LEAVE;
>> +			return -1;
>> +		}
>> +	}
>> +
>> +	LEAVE;
>> +	return 0;
>> +}
>> +
>> +static void ibmvfc_release_sub_crqs(struct ibmvfc_host *vhost)
>> +{
>> +	int i;
>> +
>> +	ENTER;
>> +	if (!vhost->scsi_scrqs.scrqs)
>> +		return;
>> +
>> +	for (i = 0; i < vhost->client_scsi_channels; i++)
>> +		ibmvfc_deregister_scsi_channel(vhost, i);
>> +
>> +	vhost->scsi_scrqs.active_queues = 0;
>> +	kfree(vhost->scsi_scrqs.scrqs);
> 
> Do you want to NULL this out after you free it so you don't keep
> a reference to a freed page around?

This isn't actually a page, but a dynamically allocated array of
ibmvfc_sub_queues; it should be NULL'ed out regardless.
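
Concretely, the tail of ibmvfc_release_sub_crqs() would end up looking
something like this (a sketch of the intended fix, not the final patch):

	vhost->scsi_scrqs.active_queues = 0;
	kfree(vhost->scsi_scrqs.scrqs);
	/* Drop the stale pointer so later teardown paths see no queues. */
	vhost->scsi_scrqs.scrqs = NULL;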

-Tyrel

> 
>> +	LEAVE;
>> +}
>> +
>>  /**
>>   * ibmvfc_free_mem - Free memory for vhost
>>   * @vhost:	ibmvfc host struct
> 
> 
> 

