* Re: [PATCH 0/7] Add support for new IBM SAS controllers
From: Brian King @ 2012-11-20 21:58 UTC
To: wenxiong; +Cc: James.Bottomley, linux-scsi, klebers
On 11/19/2012 03:36 PM, wenxiong@linux.vnet.ibm.com wrote:
> Hi all,
>
> The new generation of IBM SAS controllers supports MSI-X interrupts and
> Distributed Completion Processing. The patches in this series add
> support for these new hardware features, along with performance
> improvements such as block iopoll and reduced lock contention in the
> ipr driver.
>
> Thanks for your help!
> Wendy
>
Ack patches 1-7.
Acked-by: Brian King <brking@linux.vnet.ibm.com>
Thanks,
Brian
--
Brian King
Power Linux I/O
IBM Linux Technology Center
* [PATCH 0/7] Add support for new IBM SAS controllers
From: wenxiong @ 2012-11-26 15:55 UTC
To: James.Bottomley; +Cc: linux-scsi, klebers, brking
The new generation of IBM SAS controllers supports MSI-X interrupts and
Distributed Completion Processing. The patches in this series add
support for these new hardware features, along with performance
improvements such as block iopoll and reduced lock contention in the
ipr driver.
Thanks for your help!
Wendy
--
* [PATCH 1/7] ipr: Add several new CCIN definitions for new adapter support
From: wenxiong @ 2012-11-26 15:55 UTC
To: James.Bottomley; +Cc: linux-scsi, klebers, brking, Wen Xiong
Add the appropriate definitions and table entries for new adapter support.
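For readers unfamiliar with the mechanism, here is what these table entries
buy us (a minimal sketch, not part of the patch): the PCI core binds ipr to
a device when an ipr_pci_table entry matches the device's IDs, so each new
subsystem ID above enables automatic binding to one new adapter variant.
ipr_id_matches() below is a made-up name for illustration; the real core
matching also honours PCI_ANY_ID wildcards and class masks.

	#include <linux/pci.h>

	/* Illustrative sketch only: ipr_id_matches() is not driver or
	 * PCI-core code, just a simplified view of the match against
	 * each ipr_pci_table entry. */
	static bool ipr_id_matches(const struct pci_device_id *id,
				   const struct pci_dev *dev)
	{
		return id->vendor == dev->vendor &&
		       id->device == dev->device &&
		       id->subvendor == dev->subsystem_vendor &&
		       id->subdevice == dev->subsystem_device;
	}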
Signed-off-by: Wen Xiong <wenxiong@linux.vnet.ibm.com>
---
drivers/scsi/ipr.c | 10 ++++++++++
drivers/scsi/ipr.h | 5 +++++
2 files changed, 15 insertions(+)
Index: b/drivers/scsi/ipr.c
===================================================================
--- a/drivers/scsi/ipr.c 2012-11-14 20:15:46.666405044 -0600
+++ b/drivers/scsi/ipr.c 2012-11-14 20:23:26.295466065 -0600
@@ -9279,6 +9279,8 @@ static struct pci_device_id ipr_pci_tabl
{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROC_FPGA_E2,
PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57B2, 0, 0, 0 },
{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROC_FPGA_E2,
+ PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57C0, 0, 0, 0 },
+ { PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROC_FPGA_E2,
PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57C3, 0, 0, 0 },
{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROC_FPGA_E2,
PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57C4, 0, 0, 0 },
@@ -9292,6 +9294,14 @@ static struct pci_device_id ipr_pci_tabl
PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57C8, 0, 0, 0 },
{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROCODILE,
PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57CE, 0, 0, 0 },
+ { PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROCODILE,
+ PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57D5, 0, 0, 0 },
+ { PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROCODILE,
+ PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57D6, 0, 0, 0 },
+ { PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROCODILE,
+ PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57D7, 0, 0, 0 },
+ { PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROCODILE,
+ PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57D8, 0, 0, 0 },
{ }
};
MODULE_DEVICE_TABLE(pci, ipr_pci_table);
Index: b/drivers/scsi/ipr.h
===================================================================
--- a/drivers/scsi/ipr.h 2012-11-14 20:15:46.666405044 -0600
+++ b/drivers/scsi/ipr.h 2012-11-14 20:23:23.404216172 -0600
@@ -82,6 +82,7 @@
#define IPR_SUBS_DEV_ID_57B4 0x033B
#define IPR_SUBS_DEV_ID_57B2 0x035F
+#define IPR_SUBS_DEV_ID_57C0 0x0352
#define IPR_SUBS_DEV_ID_57C3 0x0353
#define IPR_SUBS_DEV_ID_57C4 0x0354
#define IPR_SUBS_DEV_ID_57C6 0x0357
@@ -94,6 +95,10 @@
#define IPR_SUBS_DEV_ID_574D 0x0356
#define IPR_SUBS_DEV_ID_57C8 0x035D
+#define IPR_SUBS_DEV_ID_57D5 0x03FB
+#define IPR_SUBS_DEV_ID_57D6 0x03FC
+#define IPR_SUBS_DEV_ID_57D7 0x03FF
+#define IPR_SUBS_DEV_ID_57D8 0x03FE
#define IPR_NAME "ipr"
/*
--
* [PATCH 2/7] ipr: Handle ID memory allocation failure
From: wenxiong @ 2012-11-26 15:55 UTC
To: James.Bottomley; +Cc: linux-scsi, klebers, brking
Add code to handle memory allocation failures at module load time.
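For context, a minimal sketch of the idiom the fix applies (illustrative,
not the patch itself, though it uses the same field names as the driver):
check every allocation, and rely on kfree() accepting NULL so the error
path can free all three ID bitmaps unconditionally. alloc_ids() is a
made-up helper name.

	#include <linux/slab.h>

	/* Illustrative sketch of the check-and-unwind pattern. */
	static int alloc_ids(struct ipr_ioa_cfg *ioa_cfg, size_t sz)
	{
		ioa_cfg->target_ids = kzalloc(sz, GFP_KERNEL);
		ioa_cfg->array_ids = kzalloc(sz, GFP_KERNEL);
		ioa_cfg->vset_ids = kzalloc(sz, GFP_KERNEL);

		if (!ioa_cfg->target_ids || !ioa_cfg->array_ids ||
		    !ioa_cfg->vset_ids) {
			/* kfree(NULL) is a no-op, so these are safe even
			 * when only some of the allocations succeeded. */
			kfree(ioa_cfg->target_ids);
			kfree(ioa_cfg->array_ids);
			kfree(ioa_cfg->vset_ids);
			return -ENOMEM;
		}
		return 0;
	}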
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
---
drivers/scsi/ipr.c | 7 +++++++
1 file changed, 7 insertions(+)
Index: b/drivers/scsi/ipr.c
===================================================================
--- a/drivers/scsi/ipr.c 2012-11-14 23:07:30.067656109 -0600
+++ b/drivers/scsi/ipr.c 2012-11-14 23:09:49.484214019 -0600
@@ -8516,6 +8516,10 @@ static int __devinit ipr_alloc_mem(struc
BITS_TO_LONGS(ioa_cfg->max_devs_supported), GFP_KERNEL);
ioa_cfg->vset_ids = kzalloc(sizeof(unsigned long) *
BITS_TO_LONGS(ioa_cfg->max_devs_supported), GFP_KERNEL);
+
+ if (!ioa_cfg->target_ids || !ioa_cfg->array_ids
+ || !ioa_cfg->vset_ids)
+ goto out_free_res_entries;
}
for (i = 0; i < ioa_cfg->max_devs_supported; i++) {
@@ -8591,6 +8595,9 @@ out_free_vpd_cbs:
ioa_cfg->vpd_cbs, ioa_cfg->vpd_cbs_dma);
out_free_res_entries:
kfree(ioa_cfg->res_entries);
+ kfree(ioa_cfg->target_ids);
+ kfree(ioa_cfg->array_ids);
+ kfree(ioa_cfg->vset_ids);
goto out;
}
--
* [PATCH 3/7] ipr: Resource path error logging cleanup
From: wenxiong @ 2012-11-26 15:55 UTC
To: James.Bottomley; +Cc: linux-scsi, klebers, brking
The resource path displayed by the ipr driver is the string identifying
a device's location on the SAS fabric. This patch prefixes it with the
SCSI host number so that error logs can be correlated more easily in
configurations with multiple adapters.
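As an illustration of the resulting format (the host number and path
bytes here are made up): a sis64 resource path that previously logged as
"00-0E-01" now logs as "2/00-0E-01" on SCSI host 2. A minimal userspace
sketch of the composition:

	#include <stdio.h>

	/* Illustrative only: shows how the host number prefix composes
	 * with an already-formatted fabric path. */
	int main(void)
	{
		char buf[48];
		int n = snprintf(buf, sizeof(buf), "%d/", 2); /* host_no */

		snprintf(buf + n, sizeof(buf) - n, "00-0E-01");
		printf("Resource path: %s\n", buf); /* "2/00-0E-01" */
		return 0;
	}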
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
---
drivers/scsi/ipr.c | 78 ++++++++++++++++++++++++++++++++++++-----------------
drivers/scsi/ipr.h | 5 ++-
2 files changed, 56 insertions(+), 27 deletions(-)
Index: b/drivers/scsi/ipr.h
===================================================================
--- a/drivers/scsi/ipr.h 2012-11-14 23:07:25.986404583 -0600
+++ b/drivers/scsi/ipr.h 2012-11-14 23:11:26.756404525 -0600
@@ -409,7 +409,7 @@ struct ipr_config_table_entry64 {
__be64 dev_id;
__be64 lun;
__be64 lun_wwn[2];
-#define IPR_MAX_RES_PATH_LENGTH 24
+#define IPR_MAX_RES_PATH_LENGTH 48
__be64 res_path;
struct ipr_std_inq_data std_inq_data;
u8 reserved2[4];
@@ -1722,7 +1722,8 @@ struct ipr_ucode_image_header {
if (ipr_is_device(hostrcb)) { \
if ((hostrcb)->ioa_cfg->sis64) { \
printk(KERN_ERR IPR_NAME ": %s: " fmt, \
- ipr_format_res_path(hostrcb->hcam.u.error64.fd_res_path, \
+ ipr_format_res_path(hostrcb->ioa_cfg, \
+ hostrcb->hcam.u.error64.fd_res_path, \
hostrcb->rp_buffer, \
sizeof(hostrcb->rp_buffer)), \
__VA_ARGS__); \
Index: b/drivers/scsi/ipr.c
===================================================================
--- a/drivers/scsi/ipr.c 2012-11-14 23:09:49.484214019 -0600
+++ b/drivers/scsi/ipr.c 2012-11-14 23:28:24.724214331 -0600
@@ -1166,14 +1166,15 @@ static int ipr_is_same_device(struct ipr
}
/**
- * ipr_format_res_path - Format the resource path for printing.
+ * __ipr_format_res_path - Format the resource path for printing.
* @res_path: resource path
* @buf: buffer
+ * @len: length of buffer provided
*
* Return value:
* pointer to buffer
**/
-static char *ipr_format_res_path(u8 *res_path, char *buffer, int len)
+static char *__ipr_format_res_path(u8 *res_path, char *buffer, int len)
{
int i;
char *p = buffer;
@@ -1187,6 +1188,27 @@ static char *ipr_format_res_path(u8 *res
}
/**
+ * ipr_format_res_path - Format the resource path for printing.
+ * @ioa_cfg: ioa config struct
+ * @res_path: resource path
+ * @buf: buffer
+ * @len: length of buffer provided
+ *
+ * Return value:
+ * pointer to buffer
+ **/
+static char *ipr_format_res_path(struct ipr_ioa_cfg *ioa_cfg,
+ u8 *res_path, char *buffer, int len)
+{
+ char *p = buffer;
+
+ *p = '\0';
+ p += snprintf(p, buffer + len - p, "%d/", ioa_cfg->host->host_no);
+ __ipr_format_res_path(res_path, p, len - (buffer - p));
+ return buffer;
+}
+
+/**
* ipr_update_res_entry - Update the resource entry.
* @res: resource entry struct
* @cfgtew: config table entry wrapper struct
@@ -1226,8 +1248,8 @@ static void ipr_update_res_entry(struct
if (res->sdev && new_path)
sdev_printk(KERN_INFO, res->sdev, "Resource path: %s\n",
- ipr_format_res_path(res->res_path, buffer,
- sizeof(buffer)));
+ ipr_format_res_path(res->ioa_cfg,
+ res->res_path, buffer, sizeof(buffer)));
} else {
res->flags = cfgtew->u.cfgte->flags;
if (res->flags & IPR_IS_IOA_RESOURCE)
@@ -1613,8 +1635,8 @@ static void ipr_log_sis64_config_error(s
ipr_err_separator;
ipr_err("Device %d : %s", i + 1,
- ipr_format_res_path(dev_entry->res_path, buffer,
- sizeof(buffer)));
+ __ipr_format_res_path(dev_entry->res_path,
+ buffer, sizeof(buffer)));
ipr_log_ext_vpd(&dev_entry->vpd);
ipr_err("-----New Device Information-----\n");
@@ -1960,14 +1982,16 @@ static void ipr_log64_fabric_path(struct
ipr_hcam_err(hostrcb, "%s %s: Resource Path=%s\n",
path_active_desc[i].desc, path_state_desc[j].desc,
- ipr_format_res_path(fabric->res_path, buffer,
- sizeof(buffer)));
+ ipr_format_res_path(hostrcb->ioa_cfg,
+ fabric->res_path,
+ buffer, sizeof(buffer)));
return;
}
}
ipr_err("Path state=%02X Resource Path=%s\n", path_state,
- ipr_format_res_path(fabric->res_path, buffer, sizeof(buffer)));
+ ipr_format_res_path(hostrcb->ioa_cfg, fabric->res_path,
+ buffer, sizeof(buffer)));
}
static const struct {
@@ -2108,18 +2132,20 @@ static void ipr_log64_path_elem(struct i
ipr_hcam_err(hostrcb, "%s %s: Resource Path=%s, Link rate=%s, WWN=%08X%08X\n",
path_status_desc[j].desc, path_type_desc[i].desc,
- ipr_format_res_path(cfg->res_path, buffer,
- sizeof(buffer)),
- link_rate[cfg->link_rate & IPR_PHY_LINK_RATE_MASK],
- be32_to_cpu(cfg->wwid[0]), be32_to_cpu(cfg->wwid[1]));
+ ipr_format_res_path(hostrcb->ioa_cfg,
+ cfg->res_path, buffer, sizeof(buffer)),
+ link_rate[cfg->link_rate & IPR_PHY_LINK_RATE_MASK],
+ be32_to_cpu(cfg->wwid[0]),
+ be32_to_cpu(cfg->wwid[1]));
return;
}
}
ipr_hcam_err(hostrcb, "Path element=%02X: Resource Path=%s, Link rate=%s "
"WWN=%08X%08X\n", cfg->type_status,
- ipr_format_res_path(cfg->res_path, buffer, sizeof(buffer)),
- link_rate[cfg->link_rate & IPR_PHY_LINK_RATE_MASK],
- be32_to_cpu(cfg->wwid[0]), be32_to_cpu(cfg->wwid[1]));
+ ipr_format_res_path(hostrcb->ioa_cfg,
+ cfg->res_path, buffer, sizeof(buffer)),
+ link_rate[cfg->link_rate & IPR_PHY_LINK_RATE_MASK],
+ be32_to_cpu(cfg->wwid[0]), be32_to_cpu(cfg->wwid[1]));
}
/**
@@ -2182,7 +2208,8 @@ static void ipr_log_sis64_array_error(st
ipr_err("RAID %s Array Configuration: %s\n",
error->protection_level,
- ipr_format_res_path(error->last_res_path, buffer, sizeof(buffer)));
+ ipr_format_res_path(ioa_cfg, error->last_res_path,
+ buffer, sizeof(buffer)));
ipr_err_separator;
@@ -2203,11 +2230,12 @@ static void ipr_log_sis64_array_error(st
ipr_err("Array Member %d:\n", i);
ipr_log_ext_vpd(&array_entry->vpd);
ipr_err("Current Location: %s\n",
- ipr_format_res_path(array_entry->res_path, buffer,
- sizeof(buffer)));
+ ipr_format_res_path(ioa_cfg, array_entry->res_path,
+ buffer, sizeof(buffer)));
ipr_err("Expected Location: %s\n",
- ipr_format_res_path(array_entry->expected_res_path,
- buffer, sizeof(buffer)));
+ ipr_format_res_path(ioa_cfg,
+ array_entry->expected_res_path,
+ buffer, sizeof(buffer)));
ipr_err_separator;
}
@@ -4227,8 +4255,8 @@ static ssize_t ipr_show_resource_path(st
res = (struct ipr_resource_entry *)sdev->hostdata;
if (res && ioa_cfg->sis64)
len = snprintf(buf, PAGE_SIZE, "%s\n",
- ipr_format_res_path(res->res_path, buffer,
- sizeof(buffer)));
+ __ipr_format_res_path(res->res_path, buffer,
+ sizeof(buffer)));
else if (res)
len = snprintf(buf, PAGE_SIZE, "%d:%d:%d:%d\n", ioa_cfg->host->host_no,
res->bus, res->target, res->lun);
@@ -4556,8 +4584,8 @@ static int ipr_slave_configure(struct sc
scsi_adjust_queue_depth(sdev, 0, sdev->host->cmd_per_lun);
if (ioa_cfg->sis64)
sdev_printk(KERN_INFO, sdev, "Resource path: %s\n",
- ipr_format_res_path(res->res_path, buffer,
- sizeof(buffer)));
+ ipr_format_res_path(ioa_cfg,
+ res->res_path, buffer, sizeof(buffer)));
return 0;
}
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
--
* [PATCH 4/7] ipr: Add support for MSI-X and distributed completion
From: wenxiong @ 2012-11-26 15:55 UTC
To: James.Bottomley; +Cc: linux-scsi, klebers, brking, Wen Xiong
The new generation of IBM SAS controllers supports MSI-X interrupts and
Distributed Completion Processing. This patch adds support for these
features to the ipr device driver.
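A rough sketch of the dispatch model, as implemented by
ipr_get_hrrq_index() in the diff below: when multiple host
request/response queues (HRRQs) are available, queue 0 is reserved for
internal commands and normal I/O rotates round-robin over the remaining
queues, each of which can be serviced by its own MSI-X vector. The
vector count can be capped at load time via the new module parameter,
e.g. "modprobe ipr number_of_msix=4". pick_hrrq() is a made-up name for
this simplified restatement:

	/* Simplified restatement of ipr_get_hrrq_index() below. */
	static int pick_hrrq(struct ipr_ioa_cfg *ioa_cfg)
	{
		if (ioa_cfg->hrrq_num == 1)
			return 0;		/* single shared queue */
		if (++ioa_cfg->hrrq_index >= ioa_cfg->hrrq_num)
			ioa_cfg->hrrq_index = 1; /* queue 0 is internal-only */
		return ioa_cfg->hrrq_index;
	}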
Signed-off-by: Wen Xiong <wenxiong@linux.vnet.ibm.com>
---
drivers/scsi/ipr.c | 716 ++++++++++++++++++++++++++++++++++++++++-------------
drivers/scsi/ipr.h | 70 +++--
2 files changed, 593 insertions(+), 193 deletions(-)
Index: b/drivers/scsi/ipr.c
===================================================================
--- a/drivers/scsi/ipr.c 2012-11-14 23:28:24.724214331 -0600
+++ b/drivers/scsi/ipr.c 2012-11-15 20:48:21.684212986 -0600
@@ -98,6 +98,7 @@ static unsigned int ipr_transop_timeout
static unsigned int ipr_debug = 0;
static unsigned int ipr_max_devs = IPR_DEFAULT_SIS64_DEVS;
static unsigned int ipr_dual_ioa_raid = 1;
+static unsigned int ipr_number_of_msix = 2;
static DEFINE_SPINLOCK(ipr_driver_lock);
/* This table describes the differences between DMA controller chips */
@@ -215,6 +216,8 @@ MODULE_PARM_DESC(dual_ioa_raid, "Enable
module_param_named(max_devs, ipr_max_devs, int, 0);
MODULE_PARM_DESC(max_devs, "Specify the maximum number of physical devices. "
"[Default=" __stringify(IPR_DEFAULT_SIS64_DEVS) "]");
+module_param_named(number_of_msix, ipr_number_of_msix, int, 0);
+MODULE_PARM_DESC(number_of_msix, "Specify the number of MSIX interrupts to use on capable adapters (1 - 5). (default:2)");
MODULE_LICENSE("GPL");
MODULE_VERSION(IPR_DRIVER_VERSION);
@@ -595,8 +598,11 @@ static void ipr_reinit_ipr_cmnd(struct i
struct ipr_ioasa *ioasa = &ipr_cmd->s.ioasa;
struct ipr_ioasa64 *ioasa64 = &ipr_cmd->s.ioasa64;
dma_addr_t dma_addr = ipr_cmd->dma_addr;
+ int hrrq_id;
+ hrrq_id = ioarcb->cmd_pkt.hrrq_id;
memset(&ioarcb->cmd_pkt, 0, sizeof(struct ipr_cmd_pkt));
+ ioarcb->cmd_pkt.hrrq_id = hrrq_id;
ioarcb->data_transfer_length = 0;
ioarcb->read_data_transfer_length = 0;
ioarcb->ioadl_len = 0;
@@ -646,12 +652,16 @@ static void ipr_init_ipr_cmnd(struct ipr
* pointer to ipr command struct
**/
static
-struct ipr_cmnd *__ipr_get_free_ipr_cmnd(struct ipr_ioa_cfg *ioa_cfg)
+struct ipr_cmnd *__ipr_get_free_ipr_cmnd(struct ipr_hrr_queue *hrrq)
{
- struct ipr_cmnd *ipr_cmd;
+ struct ipr_cmnd *ipr_cmd = NULL;
+
+ if (likely(!list_empty(&hrrq->hrrq_free_q))) {
+ ipr_cmd = list_entry(hrrq->hrrq_free_q.next,
+ struct ipr_cmnd, queue);
+ list_del(&ipr_cmd->queue);
+ }
- ipr_cmd = list_entry(ioa_cfg->free_q.next, struct ipr_cmnd, queue);
- list_del(&ipr_cmd->queue);
return ipr_cmd;
}
@@ -666,7 +676,8 @@ struct ipr_cmnd *__ipr_get_free_ipr_cmnd
static
struct ipr_cmnd *ipr_get_free_ipr_cmnd(struct ipr_ioa_cfg *ioa_cfg)
{
- struct ipr_cmnd *ipr_cmd = __ipr_get_free_ipr_cmnd(ioa_cfg);
+ struct ipr_cmnd *ipr_cmd =
+ __ipr_get_free_ipr_cmnd(&ioa_cfg->hrrq[IPR_INIT_HRRQ]);
ipr_init_ipr_cmnd(ipr_cmd, ipr_lock_and_done);
return ipr_cmd;
}
@@ -761,13 +772,12 @@ static int ipr_set_pcix_cmd_reg(struct i
**/
static void ipr_sata_eh_done(struct ipr_cmnd *ipr_cmd)
{
- struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
struct ata_queued_cmd *qc = ipr_cmd->qc;
struct ipr_sata_port *sata_port = qc->ap->private_data;
qc->err_mask |= AC_ERR_OTHER;
sata_port->ioasa.status |= ATA_BUSY;
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
ata_qc_complete(qc);
}
@@ -783,14 +793,13 @@ static void ipr_sata_eh_done(struct ipr_
**/
static void ipr_scsi_eh_done(struct ipr_cmnd *ipr_cmd)
{
- struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
scsi_cmd->result |= (DID_ERROR << 16);
scsi_dma_unmap(ipr_cmd->scsi_cmd);
scsi_cmd->scsi_done(scsi_cmd);
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
}
/**
@@ -805,24 +814,30 @@ static void ipr_scsi_eh_done(struct ipr_
static void ipr_fail_all_ops(struct ipr_ioa_cfg *ioa_cfg)
{
struct ipr_cmnd *ipr_cmd, *temp;
+ struct ipr_hrr_queue *hrrq;
ENTER;
- list_for_each_entry_safe(ipr_cmd, temp, &ioa_cfg->pending_q, queue) {
- list_del(&ipr_cmd->queue);
+ for_each_hrrq(hrrq, ioa_cfg) {
+ list_for_each_entry_safe(ipr_cmd,
+ temp, &hrrq->hrrq_pending_q, queue) {
+ list_del(&ipr_cmd->queue);
+
+ ipr_cmd->s.ioasa.hdr.ioasc =
+ cpu_to_be32(IPR_IOASC_IOA_WAS_RESET);
+ ipr_cmd->s.ioasa.hdr.ilid =
+ cpu_to_be32(IPR_DRIVER_ILID);
- ipr_cmd->s.ioasa.hdr.ioasc = cpu_to_be32(IPR_IOASC_IOA_WAS_RESET);
- ipr_cmd->s.ioasa.hdr.ilid = cpu_to_be32(IPR_DRIVER_ILID);
-
- if (ipr_cmd->scsi_cmd)
- ipr_cmd->done = ipr_scsi_eh_done;
- else if (ipr_cmd->qc)
- ipr_cmd->done = ipr_sata_eh_done;
+ if (ipr_cmd->scsi_cmd)
+ ipr_cmd->done = ipr_scsi_eh_done;
+ else if (ipr_cmd->qc)
+ ipr_cmd->done = ipr_sata_eh_done;
- ipr_trc_hook(ipr_cmd, IPR_TRACE_FINISH, IPR_IOASC_IOA_WAS_RESET);
- del_timer(&ipr_cmd->timer);
- ipr_cmd->done(ipr_cmd);
+ ipr_trc_hook(ipr_cmd, IPR_TRACE_FINISH,
+ IPR_IOASC_IOA_WAS_RESET);
+ del_timer(&ipr_cmd->timer);
+ ipr_cmd->done(ipr_cmd);
+ }
}
-
LEAVE;
}
@@ -872,9 +887,7 @@ static void ipr_do_req(struct ipr_cmnd *
void (*done) (struct ipr_cmnd *),
void (*timeout_func) (struct ipr_cmnd *), u32 timeout)
{
- struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
-
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->pending_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_pending_q);
ipr_cmd->done = done;
@@ -975,6 +988,17 @@ static void ipr_send_blocking_cmd(struct
spin_lock_irq(ioa_cfg->host->host_lock);
}
+static int ipr_get_hrrq_index(struct ipr_ioa_cfg *ioa_cfg)
+{
+ if (ioa_cfg->hrrq_num == 1)
+ ioa_cfg->hrrq_index = 0;
+ else {
+ if (++ioa_cfg->hrrq_index >= ioa_cfg->hrrq_num)
+ ioa_cfg->hrrq_index = 1;
+ }
+ return ioa_cfg->hrrq_index;
+}
+
/**
* ipr_send_hcam - Send an HCAM to the adapter.
* @ioa_cfg: ioa config struct
@@ -996,7 +1020,7 @@ static void ipr_send_hcam(struct ipr_ioa
if (ioa_cfg->allow_cmds) {
ipr_cmd = ipr_get_free_ipr_cmnd(ioa_cfg);
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->pending_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_pending_q);
list_add_tail(&hostrcb->queue, &ioa_cfg->hostrcb_pending_q);
ipr_cmd->u.hostrcb = hostrcb;
@@ -1385,7 +1409,7 @@ static void ipr_process_ccn(struct ipr_c
u32 ioasc = be32_to_cpu(ipr_cmd->s.ioasa.hdr.ioasc);
list_del(&hostrcb->queue);
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
if (ioasc) {
if (ioasc != IPR_IOASC_IOA_WAS_RESET)
@@ -2437,7 +2461,7 @@ static void ipr_process_error(struct ipr
fd_ioasc = be32_to_cpu(hostrcb->hcam.u.error.fd_ioasc);
list_del(&hostrcb->queue);
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
if (!ioasc) {
ipr_handle_log_data(ioa_cfg, hostrcb);
@@ -4751,7 +4775,7 @@ static int ipr_device_reset(struct ipr_i
ipr_send_blocking_cmd(ipr_cmd, ipr_timeout, IPR_DEVICE_RESET_TIMEOUT);
ioasc = be32_to_cpu(ipr_cmd->s.ioasa.hdr.ioasc);
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
if (ipr_is_gata(res) && res->sata_port && ioasc != IPR_IOASC_IOA_WAS_RESET) {
if (ipr_cmd->ioa_cfg->sis64)
memcpy(&res->sata_port->ioasa, &ipr_cmd->s.ioasa64.u.gata,
@@ -4821,6 +4845,7 @@ static int __ipr_eh_dev_reset(struct scs
struct ipr_resource_entry *res;
struct ata_port *ap;
int rc = 0;
+ struct ipr_hrr_queue *hrrq;
ENTER;
ioa_cfg = (struct ipr_ioa_cfg *) scsi_cmd->device->host->hostdata;
@@ -4839,19 +4864,21 @@ static int __ipr_eh_dev_reset(struct scs
if (ioa_cfg->ioa_is_dead)
return FAILED;
- list_for_each_entry(ipr_cmd, &ioa_cfg->pending_q, queue) {
- if (ipr_cmd->ioarcb.res_handle == res->res_handle) {
- if (ipr_cmd->scsi_cmd)
- ipr_cmd->done = ipr_scsi_eh_done;
- if (ipr_cmd->qc)
- ipr_cmd->done = ipr_sata_eh_done;
- if (ipr_cmd->qc && !(ipr_cmd->qc->flags & ATA_QCFLAG_FAILED)) {
- ipr_cmd->qc->err_mask |= AC_ERR_TIMEOUT;
- ipr_cmd->qc->flags |= ATA_QCFLAG_FAILED;
+ for_each_hrrq(hrrq, ioa_cfg) {
+ list_for_each_entry(ipr_cmd, &hrrq->hrrq_pending_q, queue) {
+ if (ipr_cmd->ioarcb.res_handle == res->res_handle) {
+ if (ipr_cmd->scsi_cmd)
+ ipr_cmd->done = ipr_scsi_eh_done;
+ if (ipr_cmd->qc)
+ ipr_cmd->done = ipr_sata_eh_done;
+ if (ipr_cmd->qc &&
+ !(ipr_cmd->qc->flags & ATA_QCFLAG_FAILED)) {
+ ipr_cmd->qc->err_mask |= AC_ERR_TIMEOUT;
+ ipr_cmd->qc->flags |= ATA_QCFLAG_FAILED;
+ }
}
}
}
-
res->resetting_device = 1;
scmd_printk(KERN_ERR, scsi_cmd, "Resetting device\n");
@@ -4861,10 +4888,14 @@ static int __ipr_eh_dev_reset(struct scs
ata_std_error_handler(ap);
spin_lock_irq(scsi_cmd->device->host->host_lock);
- list_for_each_entry(ipr_cmd, &ioa_cfg->pending_q, queue) {
- if (ipr_cmd->ioarcb.res_handle == res->res_handle) {
- rc = -EIO;
- break;
+ for_each_hrrq(hrrq, ioa_cfg) {
+ list_for_each_entry(ipr_cmd,
+ &hrrq->hrrq_pending_q, queue) {
+ if (ipr_cmd->ioarcb.res_handle ==
+ res->res_handle) {
+ rc = -EIO;
+ break;
+ }
}
}
} else
@@ -4918,7 +4949,7 @@ static void ipr_bus_reset_done(struct ip
else
ipr_cmd->sibling->done(ipr_cmd->sibling);
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
LEAVE;
}
@@ -4979,6 +5010,7 @@ static int ipr_cancel_op(struct scsi_cmn
struct ipr_cmd_pkt *cmd_pkt;
u32 ioasc, int_reg;
int op_found = 0;
+ struct ipr_hrr_queue *hrrq;
ENTER;
ioa_cfg = (struct ipr_ioa_cfg *)scsi_cmd->device->host->hostdata;
@@ -5003,11 +5035,13 @@ static int ipr_cancel_op(struct scsi_cmn
if (!ipr_is_gscsi(res))
return FAILED;
- list_for_each_entry(ipr_cmd, &ioa_cfg->pending_q, queue) {
- if (ipr_cmd->scsi_cmd == scsi_cmd) {
- ipr_cmd->done = ipr_scsi_eh_done;
- op_found = 1;
- break;
+ for_each_hrrq(hrrq, ioa_cfg) {
+ list_for_each_entry(ipr_cmd, &hrrq->hrrq_pending_q, queue) {
+ if (ipr_cmd->scsi_cmd == scsi_cmd) {
+ ipr_cmd->done = ipr_scsi_eh_done;
+ op_found = 1;
+ break;
+ }
}
}
@@ -5035,7 +5069,7 @@ static int ipr_cancel_op(struct scsi_cmn
ipr_trace;
}
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ list_add_tail(&ipr_cmd->queue, &hrrq->hrrq_free_q);
if (!ipr_is_naca_model(res))
res->needs_sync_complete = 1;
@@ -5078,7 +5112,6 @@ static irqreturn_t ipr_handle_other_inte
{
irqreturn_t rc = IRQ_HANDLED;
u32 int_mask_reg;
-
int_mask_reg = readl(ioa_cfg->regs.sense_interrupt_mask_reg32);
int_reg &= ~int_mask_reg;
@@ -5127,6 +5160,9 @@ static irqreturn_t ipr_handle_other_inte
} else {
if (int_reg & IPR_PCII_IOA_UNIT_CHECKED)
ioa_cfg->ioa_unit_checked = 1;
+ else if (int_reg & IPR_PCII_NO_HOST_RRQ)
+ dev_err(&ioa_cfg->pdev->dev,
+ "No Host RRQ. 0x%08X\n", int_reg);
else
dev_err(&ioa_cfg->pdev->dev,
"Permanent IOA failure. 0x%08X\n", int_reg);
@@ -5137,7 +5173,6 @@ static irqreturn_t ipr_handle_other_inte
ipr_mask_and_clear_interrupts(ioa_cfg, ~0);
ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE);
}
-
return rc;
}
@@ -5149,10 +5184,10 @@ static irqreturn_t ipr_handle_other_inte
* Return value:
* none
**/
-static void ipr_isr_eh(struct ipr_ioa_cfg *ioa_cfg, char *msg)
+static void ipr_isr_eh(struct ipr_ioa_cfg *ioa_cfg, char *msg, u16 number)
{
ioa_cfg->errors_logged++;
- dev_err(&ioa_cfg->pdev->dev, "%s\n", msg);
+ dev_err(&ioa_cfg->pdev->dev, "%s %d\n", msg, number);
if (WAIT_FOR_DUMP == ioa_cfg->sdt_state)
ioa_cfg->sdt_state = GET_DUMP;
@@ -5160,6 +5195,51 @@ static void ipr_isr_eh(struct ipr_ioa_cf
ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE);
}
+static int __ipr_process_hrrq(struct ipr_hrr_queue *hrr_queue,
+ struct list_head *doneq)
+{
+ u32 ioasc;
+ u16 cmd_index;
+ struct ipr_cmnd *ipr_cmd;
+ struct ipr_ioa_cfg *ioa_cfg = hrr_queue->ioa_cfg;
+ int num_hrrq = 0;
+
+ /* If interrupts are disabled, ignore the interrupt */
+ if (!ioa_cfg->allow_interrupts)
+ return 0;
+
+ while ((be32_to_cpu(*hrr_queue->hrrq_curr) & IPR_HRRQ_TOGGLE_BIT) ==
+ hrr_queue->toggle_bit) {
+
+ cmd_index = (be32_to_cpu(*hrr_queue->hrrq_curr) &
+ IPR_HRRQ_REQ_RESP_HANDLE_MASK) >>
+ IPR_HRRQ_REQ_RESP_HANDLE_SHIFT;
+
+ if (unlikely(cmd_index > hrr_queue->max_cmd_id ||
+ cmd_index < hrr_queue->min_cmd_id)) {
+ ipr_isr_eh(ioa_cfg,
+ "Invalid response handle from IOA: ",
+ cmd_index);
+ break;
+ }
+
+ ipr_cmd = ioa_cfg->ipr_cmnd_list[cmd_index];
+ ioasc = be32_to_cpu(ipr_cmd->s.ioasa.hdr.ioasc);
+
+ ipr_trc_hook(ipr_cmd, IPR_TRACE_FINISH, ioasc);
+
+ list_move_tail(&ipr_cmd->queue, doneq);
+
+ if (hrr_queue->hrrq_curr < hrr_queue->hrrq_end) {
+ hrr_queue->hrrq_curr++;
+ } else {
+ hrr_queue->hrrq_curr = hrr_queue->hrrq_start;
+ hrr_queue->toggle_bit ^= 1u;
+ }
+ num_hrrq++;
+ }
+ return num_hrrq;
+}
/**
* ipr_isr - Interrupt service routine
* @irq: irq number
@@ -5170,7 +5250,8 @@ static void ipr_isr_eh(struct ipr_ioa_cf
**/
static irqreturn_t ipr_isr(int irq, void *devp)
{
- struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)devp;
+ struct ipr_hrr_queue *hrrq = (struct ipr_hrr_queue *)devp;
+ struct ipr_ioa_cfg *ioa_cfg = hrrq->ioa_cfg;
unsigned long lock_flags = 0;
u32 int_reg = 0;
u32 ioasc;
@@ -5182,7 +5263,6 @@ static irqreturn_t ipr_isr(int irq, void
LIST_HEAD(doneq);
spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
-
/* If interrupts are disabled, ignore the interrupt */
if (!ioa_cfg->allow_interrupts) {
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
@@ -5192,20 +5272,22 @@ static irqreturn_t ipr_isr(int irq, void
while (1) {
ipr_cmd = NULL;
- while ((be32_to_cpu(*ioa_cfg->hrrq_curr) & IPR_HRRQ_TOGGLE_BIT) ==
- ioa_cfg->toggle_bit) {
+ while ((be32_to_cpu(*hrrq->hrrq_curr) & IPR_HRRQ_TOGGLE_BIT) ==
+ hrrq->toggle_bit) {
- cmd_index = (be32_to_cpu(*ioa_cfg->hrrq_curr) &
+ cmd_index = (be32_to_cpu(*hrrq->hrrq_curr) &
IPR_HRRQ_REQ_RESP_HANDLE_MASK) >> IPR_HRRQ_REQ_RESP_HANDLE_SHIFT;
- if (unlikely(cmd_index >= IPR_NUM_CMD_BLKS)) {
- ipr_isr_eh(ioa_cfg, "Invalid response handle from IOA");
+ if (unlikely(cmd_index > hrrq->max_cmd_id ||
+ cmd_index < hrrq->min_cmd_id)) {
+ ipr_isr_eh(ioa_cfg,
+ "Invalid response handle from IOA: ",
+ cmd_index);
rc = IRQ_HANDLED;
goto unlock_out;
}
ipr_cmd = ioa_cfg->ipr_cmnd_list[cmd_index];
-
ioasc = be32_to_cpu(ipr_cmd->s.ioasa.hdr.ioasc);
ipr_trc_hook(ipr_cmd, IPR_TRACE_FINISH, ioasc);
@@ -5214,11 +5296,11 @@ static irqreturn_t ipr_isr(int irq, void
rc = IRQ_HANDLED;
- if (ioa_cfg->hrrq_curr < ioa_cfg->hrrq_end) {
- ioa_cfg->hrrq_curr++;
+ if (hrrq->hrrq_curr < hrrq->hrrq_end) {
+ hrrq->hrrq_curr++;
} else {
- ioa_cfg->hrrq_curr = ioa_cfg->hrrq_start;
- ioa_cfg->toggle_bit ^= 1u;
+ hrrq->hrrq_curr = hrrq->hrrq_start;
+ hrrq->toggle_bit ^= 1u;
}
}
@@ -5239,7 +5321,7 @@ static irqreturn_t ipr_isr(int irq, void
irq_none++;
} else if (num_hrrq == IPR_MAX_HRRQ_RETRIES &&
int_reg & IPR_PCII_HRRQ_UPDATED) {
- ipr_isr_eh(ioa_cfg, "Error clearing HRRQ");
+ ipr_isr_eh(ioa_cfg, "Error clearing HRRQ: ", num_hrrq);
rc = IRQ_HANDLED;
goto unlock_out;
} else
@@ -5256,7 +5338,47 @@ unlock_out:
del_timer(&ipr_cmd->timer);
ipr_cmd->fast_done(ipr_cmd);
}
+ return rc;
+}
+
+/**
+ * ipr_isr_mhrrq - Interrupt service routine
+ * @irq: irq number
+ * @devp: pointer to ioa config struct
+ *
+ * Return value:
+ * IRQ_NONE / IRQ_HANDLED
+ **/
+static irqreturn_t ipr_isr_mhrrq(int irq, void *devp)
+{
+ struct ipr_hrr_queue *hrrq = (struct ipr_hrr_queue *)devp;
+ struct ipr_ioa_cfg *ioa_cfg = hrrq->ioa_cfg;
+ unsigned long lock_flags = 0;
+ struct ipr_cmnd *ipr_cmd, *temp;
+ irqreturn_t rc = IRQ_NONE;
+ LIST_HEAD(doneq);
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ /* If interrupts are disabled, ignore the interrupt */
+ if (!ioa_cfg->allow_interrupts) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return IRQ_NONE;
+ }
+
+ if ((be32_to_cpu(*hrrq->hrrq_curr) & IPR_HRRQ_TOGGLE_BIT) ==
+ hrrq->toggle_bit)
+
+ if (__ipr_process_hrrq(hrrq, &doneq))
+ rc = IRQ_HANDLED;
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ list_for_each_entry_safe(ipr_cmd, temp, &doneq, queue) {
+ list_del(&ipr_cmd->queue);
+ del_timer(&ipr_cmd->timer);
+ ipr_cmd->fast_done(ipr_cmd);
+ }
return rc;
}
@@ -5416,7 +5538,6 @@ static void ipr_erp_done(struct ipr_cmnd
{
struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
struct ipr_resource_entry *res = scsi_cmd->device->hostdata;
- struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
u32 ioasc = be32_to_cpu(ipr_cmd->s.ioasa.hdr.ioasc);
if (IPR_IOASC_SENSE_KEY(ioasc) > 0) {
@@ -5434,7 +5555,7 @@ static void ipr_erp_done(struct ipr_cmnd
res->in_erp = 0;
}
scsi_dma_unmap(ipr_cmd->scsi_cmd);
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
scsi_cmd->scsi_done(scsi_cmd);
}
@@ -5818,7 +5939,7 @@ static void ipr_erp_start(struct ipr_ioa
}
scsi_dma_unmap(ipr_cmd->scsi_cmd);
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
scsi_cmd->scsi_done(scsi_cmd);
}
@@ -5837,21 +5958,21 @@ static void ipr_scsi_done(struct ipr_cmn
struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
u32 ioasc = be32_to_cpu(ipr_cmd->s.ioasa.hdr.ioasc);
- unsigned long lock_flags;
+ unsigned long hrrq_flags;
scsi_set_resid(scsi_cmd, be32_to_cpu(ipr_cmd->s.ioasa.hdr.residual_data_len));
if (likely(IPR_IOASC_SENSE_KEY(ioasc) == 0)) {
scsi_dma_unmap(scsi_cmd);
- spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ spin_lock_irqsave(ioa_cfg->host->host_lock, hrrq_flags);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
scsi_cmd->scsi_done(scsi_cmd);
- spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, hrrq_flags);
} else {
- spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ spin_lock_irqsave(ioa_cfg->host->host_lock, hrrq_flags);
ipr_erp_start(ioa_cfg, ipr_cmd);
- spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, hrrq_flags);
}
}
@@ -5876,12 +5997,16 @@ static int ipr_queuecommand(struct Scsi_
struct ipr_cmnd *ipr_cmd;
unsigned long lock_flags;
int rc;
+ struct ipr_hrr_queue *hrrq;
+ int hrrq_id;
ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
spin_lock_irqsave(shost->host_lock, lock_flags);
scsi_cmd->result = (DID_OK << 16);
res = scsi_cmd->device->hostdata;
+ hrrq_id = ipr_get_hrrq_index(ioa_cfg);
+ hrrq = &ioa_cfg->hrrq[hrrq_id];
/*
* We are currently blocking all devices due to a host reset
@@ -5908,7 +6033,11 @@ static int ipr_queuecommand(struct Scsi_
return rc;
}
- ipr_cmd = __ipr_get_free_ipr_cmnd(ioa_cfg);
+ ipr_cmd = __ipr_get_free_ipr_cmnd(hrrq);
+ if (ipr_cmd == NULL) {
+ spin_unlock_irqrestore(shost->host_lock, lock_flags);
+ return SCSI_MLQUEUE_HOST_BUSY;
+ }
spin_unlock_irqrestore(shost->host_lock, lock_flags);
ipr_init_ipr_cmnd(ipr_cmd, ipr_scsi_done);
@@ -5930,8 +6059,9 @@ static int ipr_queuecommand(struct Scsi_
}
if (scsi_cmd->cmnd[0] >= 0xC0 &&
- (!ipr_is_gscsi(res) || scsi_cmd->cmnd[0] == IPR_QUERY_RSRC_STATE))
+ (!ipr_is_gscsi(res) || scsi_cmd->cmnd[0] == IPR_QUERY_RSRC_STATE)) {
ioarcb->cmd_pkt.request_type = IPR_RQTYPE_IOACMD;
+ }
if (ioa_cfg->sis64)
rc = ipr_build_ioadl64(ioa_cfg, ipr_cmd);
@@ -5940,7 +6070,7 @@ static int ipr_queuecommand(struct Scsi_
spin_lock_irqsave(shost->host_lock, lock_flags);
if (unlikely(rc || (!ioa_cfg->allow_cmds && !ioa_cfg->ioa_is_dead))) {
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ list_add_tail(&ipr_cmd->queue, &hrrq->hrrq_free_q);
spin_unlock_irqrestore(shost->host_lock, lock_flags);
if (!rc)
scsi_dma_unmap(scsi_cmd);
@@ -5948,7 +6078,7 @@ static int ipr_queuecommand(struct Scsi_
}
if (unlikely(ioa_cfg->ioa_is_dead)) {
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ list_add_tail(&ipr_cmd->queue, &hrrq->hrrq_free_q);
spin_unlock_irqrestore(shost->host_lock, lock_flags);
scsi_dma_unmap(scsi_cmd);
goto err_nodev;
@@ -5959,7 +6089,7 @@ static int ipr_queuecommand(struct Scsi_
ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_SYNC_COMPLETE;
res->needs_sync_complete = 0;
}
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->pending_q);
+ list_add_tail(&ipr_cmd->queue, &hrrq->hrrq_pending_q);
ipr_trc_hook(ipr_cmd, IPR_TRACE_START, IPR_GET_RES_PHYS_LOC(res));
ipr_send_command(ipr_cmd);
spin_unlock_irqrestore(shost->host_lock, lock_flags);
@@ -6099,6 +6229,7 @@ static void ipr_ata_post_internal(struct
struct ipr_sata_port *sata_port = qc->ap->private_data;
struct ipr_ioa_cfg *ioa_cfg = sata_port->ioa_cfg;
struct ipr_cmnd *ipr_cmd;
+ struct ipr_hrr_queue *hrrq;
unsigned long flags;
spin_lock_irqsave(ioa_cfg->host->host_lock, flags);
@@ -6108,10 +6239,12 @@ static void ipr_ata_post_internal(struct
spin_lock_irqsave(ioa_cfg->host->host_lock, flags);
}
- list_for_each_entry(ipr_cmd, &ioa_cfg->pending_q, queue) {
- if (ipr_cmd->qc == qc) {
- ipr_device_reset(ioa_cfg, sata_port->res);
- break;
+ for_each_hrrq(hrrq, ioa_cfg) {
+ list_for_each_entry(ipr_cmd, &hrrq->hrrq_pending_q, queue) {
+ if (ipr_cmd->qc == qc) {
+ ipr_device_reset(ioa_cfg, sata_port->res);
+ break;
+ }
}
}
spin_unlock_irqrestore(ioa_cfg->host->host_lock, flags);
@@ -6176,7 +6309,7 @@ static void ipr_sata_done(struct ipr_cmn
qc->err_mask |= __ac_err_mask(sata_port->ioasa.status);
else
qc->err_mask |= ac_err_mask(sata_port->ioasa.status);
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
ata_qc_complete(qc);
}
@@ -6287,11 +6420,16 @@ static unsigned int ipr_qc_issue(struct
struct ipr_cmnd *ipr_cmd;
struct ipr_ioarcb *ioarcb;
struct ipr_ioarcb_ata_regs *regs;
+ struct ipr_hrr_queue *hrrq;
+ int hrrq_id;
if (unlikely(!ioa_cfg->allow_cmds || ioa_cfg->ioa_is_dead))
return AC_ERR_SYSTEM;
- ipr_cmd = ipr_get_free_ipr_cmnd(ioa_cfg);
+ hrrq_id = ipr_get_hrrq_index(ioa_cfg);
+ hrrq = &ioa_cfg->hrrq[hrrq_id];
+ ipr_cmd = __ipr_get_free_ipr_cmnd(hrrq);
+ ipr_init_ipr_cmnd(ipr_cmd, ipr_lock_and_done);
ioarcb = &ipr_cmd->ioarcb;
if (ioa_cfg->sis64) {
@@ -6303,7 +6441,7 @@ static unsigned int ipr_qc_issue(struct
memset(regs, 0, sizeof(*regs));
ioarcb->add_cmd_parms_len = cpu_to_be16(sizeof(*regs));
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->pending_q);
+ list_add_tail(&ipr_cmd->queue, &hrrq->hrrq_pending_q);
ipr_cmd->qc = qc;
ipr_cmd->done = ipr_sata_done;
ipr_cmd->ioarcb.res_handle = res->res_handle;
@@ -6455,7 +6593,7 @@ static int ipr_ioa_bringdown_done(struct
ENTER;
ioa_cfg->in_reset_reload = 0;
ioa_cfg->reset_retries = 0;
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
wake_up_all(&ioa_cfg->reset_wait_q);
spin_unlock_irq(ioa_cfg->host->host_lock);
@@ -6510,7 +6648,7 @@ static int ipr_ioa_reset_done(struct ipr
dev_info(&ioa_cfg->pdev->dev, "IOA initialized.\n");
ioa_cfg->reset_retries = 0;
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
wake_up_all(&ioa_cfg->reset_wait_q);
spin_unlock(ioa_cfg->host->host_lock);
@@ -6588,9 +6726,11 @@ static int ipr_set_supported_devs(struct
if (!ioa_cfg->sis64)
ipr_cmd->job_step = ipr_set_supported_devs;
+ LEAVE;
return IPR_RC_JOB_RETURN;
}
+ LEAVE;
return IPR_RC_JOB_CONTINUE;
}
@@ -6848,7 +6988,7 @@ static int ipr_reset_cmd_failed(struct i
ipr_cmd->ioarcb.cmd_pkt.cdb[0], ioasc);
ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE);
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
return IPR_RC_JOB_RETURN;
}
@@ -7306,46 +7446,75 @@ static int ipr_ioafp_identify_hrrq(struc
{
struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+ struct ipr_hrr_queue *hrrq;
ENTER;
+ ipr_cmd->job_step = ipr_ioafp_std_inquiry;
dev_info(&ioa_cfg->pdev->dev, "Starting IOA initialization sequence.\n");
- ioarcb->cmd_pkt.cdb[0] = IPR_ID_HOST_RR_Q;
- ioarcb->res_handle = cpu_to_be32(IPR_IOA_RES_HANDLE);
+ if (ioa_cfg->hrrq_index < ioa_cfg->hrrq_num) {
+ hrrq = &ioa_cfg->hrrq[ioa_cfg->hrrq_index];
- ioarcb->cmd_pkt.request_type = IPR_RQTYPE_IOACMD;
- if (ioa_cfg->sis64)
- ioarcb->cmd_pkt.cdb[1] = 0x1;
- ioarcb->cmd_pkt.cdb[2] =
- ((u64) ioa_cfg->host_rrq_dma >> 24) & 0xff;
- ioarcb->cmd_pkt.cdb[3] =
- ((u64) ioa_cfg->host_rrq_dma >> 16) & 0xff;
- ioarcb->cmd_pkt.cdb[4] =
- ((u64) ioa_cfg->host_rrq_dma >> 8) & 0xff;
- ioarcb->cmd_pkt.cdb[5] =
- ((u64) ioa_cfg->host_rrq_dma) & 0xff;
- ioarcb->cmd_pkt.cdb[7] =
- ((sizeof(u32) * IPR_NUM_CMD_BLKS) >> 8) & 0xff;
- ioarcb->cmd_pkt.cdb[8] =
- (sizeof(u32) * IPR_NUM_CMD_BLKS) & 0xff;
+ ioarcb->cmd_pkt.cdb[0] = IPR_ID_HOST_RR_Q;
+ ioarcb->res_handle = cpu_to_be32(IPR_IOA_RES_HANDLE);
- if (ioa_cfg->sis64) {
- ioarcb->cmd_pkt.cdb[10] =
- ((u64) ioa_cfg->host_rrq_dma >> 56) & 0xff;
- ioarcb->cmd_pkt.cdb[11] =
- ((u64) ioa_cfg->host_rrq_dma >> 48) & 0xff;
- ioarcb->cmd_pkt.cdb[12] =
- ((u64) ioa_cfg->host_rrq_dma >> 40) & 0xff;
- ioarcb->cmd_pkt.cdb[13] =
- ((u64) ioa_cfg->host_rrq_dma >> 32) & 0xff;
- }
+ ioarcb->cmd_pkt.request_type = IPR_RQTYPE_IOACMD;
+ if (ioa_cfg->sis64)
+ ioarcb->cmd_pkt.cdb[1] = 0x1;
- ipr_cmd->job_step = ipr_ioafp_std_inquiry;
+ if (ioa_cfg->nvectors == 1)
+ ioarcb->cmd_pkt.cdb[1] &= ~IPR_ID_HRRQ_SELE_ENABLE;
+ else
+ ioarcb->cmd_pkt.cdb[1] |= IPR_ID_HRRQ_SELE_ENABLE;
- ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout, IPR_INTERNAL_TIMEOUT);
+ ioarcb->cmd_pkt.cdb[2] =
+ ((u64) hrrq->host_rrq_dma >> 24) & 0xff;
+ ioarcb->cmd_pkt.cdb[3] =
+ ((u64) hrrq->host_rrq_dma >> 16) & 0xff;
+ ioarcb->cmd_pkt.cdb[4] =
+ ((u64) hrrq->host_rrq_dma >> 8) & 0xff;
+ ioarcb->cmd_pkt.cdb[5] =
+ ((u64) hrrq->host_rrq_dma) & 0xff;
+ ioarcb->cmd_pkt.cdb[7] =
+ ((sizeof(u32) * hrrq->size) >> 8) & 0xff;
+ ioarcb->cmd_pkt.cdb[8] =
+ (sizeof(u32) * hrrq->size) & 0xff;
+
+ if (ioarcb->cmd_pkt.cdb[1] & IPR_ID_HRRQ_SELE_ENABLE)
+ ioarcb->cmd_pkt.cdb[9] = ioa_cfg->hrrq_index;
+
+ if (ioa_cfg->sis64) {
+ ioarcb->cmd_pkt.cdb[10] =
+ ((u64) hrrq->host_rrq_dma >> 56) & 0xff;
+ ioarcb->cmd_pkt.cdb[11] =
+ ((u64) hrrq->host_rrq_dma >> 48) & 0xff;
+ ioarcb->cmd_pkt.cdb[12] =
+ ((u64) hrrq->host_rrq_dma >> 40) & 0xff;
+ ioarcb->cmd_pkt.cdb[13] =
+ ((u64) hrrq->host_rrq_dma >> 32) & 0xff;
+ }
+
+ if (ioarcb->cmd_pkt.cdb[1] & IPR_ID_HRRQ_SELE_ENABLE)
+ ioarcb->cmd_pkt.cdb[14] = ioa_cfg->hrrq_index;
+
+ ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout,
+ IPR_INTERNAL_TIMEOUT);
+
+ if (++ioa_cfg->hrrq_index < ioa_cfg->hrrq_num)
+ ipr_cmd->job_step = ipr_ioafp_identify_hrrq;
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+
+ }
+
+ if (ioa_cfg->hrrq_num == 1)
+ ioa_cfg->hrrq_index = 0;
+ else
+ ioa_cfg->hrrq_index = 1;
LEAVE;
- return IPR_RC_JOB_RETURN;
+ return IPR_RC_JOB_CONTINUE;
}
/**
@@ -7393,13 +7562,16 @@ static void ipr_reset_timer_done(struct
static void ipr_reset_start_timer(struct ipr_cmnd *ipr_cmd,
unsigned long timeout)
{
- list_add_tail(&ipr_cmd->queue, &ipr_cmd->ioa_cfg->pending_q);
+
+ ENTER;
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_pending_q);
ipr_cmd->done = ipr_reset_ioa_job;
ipr_cmd->timer.data = (unsigned long) ipr_cmd;
ipr_cmd->timer.expires = jiffies + timeout;
ipr_cmd->timer.function = (void (*)(unsigned long))ipr_reset_timer_done;
add_timer(&ipr_cmd->timer);
+ LEAVE;
}
/**
@@ -7411,13 +7583,19 @@ static void ipr_reset_start_timer(struct
**/
static void ipr_init_ioa_mem(struct ipr_ioa_cfg *ioa_cfg)
{
- memset(ioa_cfg->host_rrq, 0, sizeof(u32) * IPR_NUM_CMD_BLKS);
+ struct ipr_hrr_queue *hrrq;
- /* Initialize Host RRQ pointers */
- ioa_cfg->hrrq_start = ioa_cfg->host_rrq;
- ioa_cfg->hrrq_end = &ioa_cfg->host_rrq[IPR_NUM_CMD_BLKS - 1];
- ioa_cfg->hrrq_curr = ioa_cfg->hrrq_start;
- ioa_cfg->toggle_bit = 1;
+ for_each_hrrq(hrrq, ioa_cfg) {
+ memset(hrrq->host_rrq, 0, sizeof(u32) * hrrq->size);
+
+ /* Initialize Host RRQ pointers */
+ hrrq->hrrq_start = hrrq->host_rrq;
+ hrrq->hrrq_end = &hrrq->host_rrq[hrrq->size - 1];
+ hrrq->hrrq_curr = hrrq->hrrq_start;
+ hrrq->toggle_bit = 1;
+ }
+
+ ioa_cfg->hrrq_index = 0;
/* Zero out config table */
memset(ioa_cfg->u.cfg_table, 0, ioa_cfg->cfg_table_size);
@@ -7474,7 +7652,8 @@ static int ipr_reset_next_stage(struct i
ipr_cmd->timer.function = (void (*)(unsigned long))ipr_oper_timeout;
ipr_cmd->done = ipr_reset_ioa_job;
add_timer(&ipr_cmd->timer);
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->pending_q);
+
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_pending_q);
return IPR_RC_JOB_RETURN;
}
@@ -7539,7 +7718,7 @@ static int ipr_reset_enable_ioa(struct i
ipr_cmd->timer.function = (void (*)(unsigned long))ipr_oper_timeout;
ipr_cmd->done = ipr_reset_ioa_job;
add_timer(&ipr_cmd->timer);
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->pending_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_pending_q);
LEAVE;
return IPR_RC_JOB_RETURN;
@@ -8106,7 +8285,8 @@ static void ipr_reset_ioa_job(struct ipr
* We are doing nested adapter resets and this is
* not the current reset job.
*/
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ list_add_tail(&ipr_cmd->queue,
+ &ipr_cmd->hrrq->hrrq_free_q);
return;
}
@@ -8218,7 +8398,7 @@ static int ipr_reset_freeze(struct ipr_c
{
/* Disallow new interrupts, avoid loop */
ipr_cmd->ioa_cfg->allow_interrupts = 0;
- list_add_tail(&ipr_cmd->queue, &ipr_cmd->ioa_cfg->pending_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_pending_q);
ipr_cmd->done = ipr_reset_ioa_job;
return IPR_RC_JOB_RETURN;
}
@@ -8338,7 +8518,6 @@ static int __devinit ipr_probe_ioa_part2
} else
_ipr_initiate_ioa_reset(ioa_cfg, ipr_reset_enable_ioa,
IPR_SHUTDOWN_NONE);
-
spin_unlock_irqrestore(ioa_cfg->host->host_lock, host_lock_flags);
wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
spin_lock_irqsave(ioa_cfg->host->host_lock, host_lock_flags);
@@ -8404,8 +8583,13 @@ static void ipr_free_mem(struct ipr_ioa_
pci_free_consistent(ioa_cfg->pdev, sizeof(struct ipr_misc_cbs),
ioa_cfg->vpd_cbs, ioa_cfg->vpd_cbs_dma);
ipr_free_cmd_blks(ioa_cfg);
- pci_free_consistent(ioa_cfg->pdev, sizeof(u32) * IPR_NUM_CMD_BLKS,
- ioa_cfg->host_rrq, ioa_cfg->host_rrq_dma);
+
+ for (i = 0; i < ioa_cfg->hrrq_num; i++)
+ pci_free_consistent(ioa_cfg->pdev,
+ sizeof(u32) * ioa_cfg->hrrq[i].size,
+ ioa_cfg->hrrq[i].host_rrq,
+ ioa_cfg->hrrq[i].host_rrq_dma);
+
pci_free_consistent(ioa_cfg->pdev, ioa_cfg->cfg_table_size,
ioa_cfg->u.cfg_table,
ioa_cfg->cfg_table_dma);
@@ -8436,8 +8620,20 @@ static void ipr_free_all_resources(struc
struct pci_dev *pdev = ioa_cfg->pdev;
ENTER;
- free_irq(pdev->irq, ioa_cfg);
- pci_disable_msi(pdev);
+ if (ioa_cfg->intr_flag == IPR_USE_MSI ||
+ ioa_cfg->intr_flag == IPR_USE_MSIX) {
+ int i;
+ for (i = 0; i < ioa_cfg->nvectors; i++)
+ free_irq(ioa_cfg->vectors_info[i].vec,
+ &ioa_cfg->hrrq[i]);
+ } else
+ free_irq(pdev->irq, &ioa_cfg->hrrq[0]);
+
+ if (ioa_cfg->intr_flag == IPR_USE_MSI)
+ pci_disable_msi(pdev);
+ else if (ioa_cfg->intr_flag == IPR_USE_MSIX)
+ pci_disable_msix(pdev);
+
iounmap(ioa_cfg->hdw_dma_regs);
pci_release_regions(pdev);
ipr_free_mem(ioa_cfg);
@@ -8458,7 +8654,7 @@ static int __devinit ipr_alloc_cmd_blks(
struct ipr_cmnd *ipr_cmd;
struct ipr_ioarcb *ioarcb;
dma_addr_t dma_addr;
- int i;
+ int i, entries_each_hrrq, hrrq_id = 0;
ioa_cfg->ipr_cmd_pool = pci_pool_create(IPR_NAME, ioa_cfg->pdev,
sizeof(struct ipr_cmnd), 512, 0);
@@ -8474,6 +8670,39 @@ static int __devinit ipr_alloc_cmd_blks(
return -ENOMEM;
}
+ for (i = 0; i < ioa_cfg->hrrq_num; i++) {
+ if (ioa_cfg->hrrq_num > 1) {
+ if (i == 0) {
+ entries_each_hrrq = IPR_NUM_INTERNAL_CMD_BLKS;
+ ioa_cfg->hrrq[i].min_cmd_id = 0;
+ ioa_cfg->hrrq[i].max_cmd_id =
+ (entries_each_hrrq - 1);
+ } else {
+ entries_each_hrrq =
+ IPR_NUM_BASE_CMD_BLKS/
+ (ioa_cfg->hrrq_num - 1);
+ ioa_cfg->hrrq[i].min_cmd_id =
+ IPR_NUM_INTERNAL_CMD_BLKS +
+ (i - 1) * entries_each_hrrq;
+ ioa_cfg->hrrq[i].max_cmd_id =
+ (IPR_NUM_INTERNAL_CMD_BLKS +
+ i * entries_each_hrrq - 1);
+ }
+ } else {
+ entries_each_hrrq = IPR_NUM_CMD_BLKS;
+ ioa_cfg->hrrq[i].min_cmd_id = 0;
+ ioa_cfg->hrrq[i].max_cmd_id = (entries_each_hrrq - 1);
+ }
+ ioa_cfg->hrrq[i].size = entries_each_hrrq;
+ }
+
+ i = IPR_NUM_CMD_BLKS -
+ ioa_cfg->hrrq[ioa_cfg->hrrq_num - 1].max_cmd_id - 1;
+ if (i > 0) {
+ ioa_cfg->hrrq[ioa_cfg->hrrq_num - 1].size += i;
+ ioa_cfg->hrrq[ioa_cfg->hrrq_num - 1].max_cmd_id += i;
+ }
+
for (i = 0; i < IPR_NUM_CMD_BLKS; i++) {
ipr_cmd = pci_pool_alloc(ioa_cfg->ipr_cmd_pool, GFP_KERNEL, &dma_addr);
@@ -8512,7 +8741,11 @@ static int __devinit ipr_alloc_cmd_blks(
ipr_cmd->sense_buffer_dma = dma_addr +
offsetof(struct ipr_cmnd, sense_buffer);
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ ipr_cmd->ioarcb.cmd_pkt.hrrq_id = hrrq_id;
+ ipr_cmd->hrrq = &ioa_cfg->hrrq[hrrq_id];
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
+ if (i >= ioa_cfg->hrrq[hrrq_id].max_cmd_id)
+ hrrq_id++;
}
return 0;
@@ -8562,15 +8795,29 @@ static int __devinit ipr_alloc_mem(struc
if (!ioa_cfg->vpd_cbs)
goto out_free_res_entries;
+ for (i = 0; i < ioa_cfg->hrrq_num; i++) {
+ INIT_LIST_HEAD(&ioa_cfg->hrrq[i].hrrq_free_q);
+ INIT_LIST_HEAD(&ioa_cfg->hrrq[i].hrrq_pending_q);
+ }
+
if (ipr_alloc_cmd_blks(ioa_cfg))
goto out_free_vpd_cbs;
- ioa_cfg->host_rrq = pci_alloc_consistent(ioa_cfg->pdev,
- sizeof(u32) * IPR_NUM_CMD_BLKS,
- &ioa_cfg->host_rrq_dma);
-
- if (!ioa_cfg->host_rrq)
- goto out_ipr_free_cmd_blocks;
+ for (i = 0; i < ioa_cfg->hrrq_num; i++) {
+ ioa_cfg->hrrq[i].host_rrq = pci_alloc_consistent(ioa_cfg->pdev,
+ sizeof(u32) * ioa_cfg->hrrq[i].size,
+ &ioa_cfg->hrrq[i].host_rrq_dma);
+
+ if (!ioa_cfg->hrrq[i].host_rrq) {
+ while (--i > 0)
+ pci_free_consistent(pdev,
+ sizeof(u32) * ioa_cfg->hrrq[i].size,
+ ioa_cfg->hrrq[i].host_rrq,
+ ioa_cfg->hrrq[i].host_rrq_dma);
+ goto out_ipr_free_cmd_blocks;
+ }
+ ioa_cfg->hrrq[i].ioa_cfg = ioa_cfg;
+ }
ioa_cfg->u.cfg_table = pci_alloc_consistent(ioa_cfg->pdev,
ioa_cfg->cfg_table_size,
@@ -8614,8 +8861,12 @@ out_free_hostrcb_dma:
ioa_cfg->u.cfg_table,
ioa_cfg->cfg_table_dma);
out_free_host_rrq:
- pci_free_consistent(pdev, sizeof(u32) * IPR_NUM_CMD_BLKS,
- ioa_cfg->host_rrq, ioa_cfg->host_rrq_dma);
+ for (i = 0; i < ioa_cfg->hrrq_num; i++) {
+ pci_free_consistent(pdev,
+ sizeof(u32) * ioa_cfg->hrrq[i].size,
+ ioa_cfg->hrrq[i].host_rrq,
+ ioa_cfg->hrrq[i].host_rrq_dma);
+ }
out_ipr_free_cmd_blocks:
ipr_free_cmd_blks(ioa_cfg);
out_free_vpd_cbs:
@@ -8673,15 +8924,11 @@ static void __devinit ipr_init_ioa_cfg(s
ioa_cfg->doorbell = IPR_DOORBELL;
sprintf(ioa_cfg->eye_catcher, IPR_EYECATCHER);
sprintf(ioa_cfg->trace_start, IPR_TRACE_START_LABEL);
- sprintf(ioa_cfg->ipr_free_label, IPR_FREEQ_LABEL);
- sprintf(ioa_cfg->ipr_pending_label, IPR_PENDQ_LABEL);
sprintf(ioa_cfg->cfg_table_start, IPR_CFG_TBL_START);
sprintf(ioa_cfg->resource_table_label, IPR_RES_TABLE_LABEL);
sprintf(ioa_cfg->ipr_hcam_label, IPR_HCAM_LABEL);
sprintf(ioa_cfg->ipr_cmd_label, IPR_CMD_LABEL);
- INIT_LIST_HEAD(&ioa_cfg->free_q);
- INIT_LIST_HEAD(&ioa_cfg->pending_q);
INIT_LIST_HEAD(&ioa_cfg->hostrcb_free_q);
INIT_LIST_HEAD(&ioa_cfg->hostrcb_pending_q);
INIT_LIST_HEAD(&ioa_cfg->free_res_q);
@@ -8759,6 +9006,88 @@ ipr_get_chip_info(const struct pci_devic
return NULL;
}
+static int __devinit ipr_enable_msix(struct ipr_ioa_cfg *ioa_cfg)
+{
+ struct msix_entry entries[IPR_MAX_MSIX_VECTORS];
+ int i, err, vectors;
+
+ for (i = 0; i < ARRAY_SIZE(entries); ++i)
+ entries[i].entry = i;
+
+ vectors = ipr_number_of_msix;
+
+ while ((err = pci_enable_msix(ioa_cfg->pdev, entries, vectors)) > 0)
+ vectors = err;
+
+ if (err < 0) {
+ pci_disable_msix(ioa_cfg->pdev);
+ return err;
+ }
+
+ if (!err) {
+ for (i = 0; i < vectors; i++)
+ ioa_cfg->vectors_info[i].vec = entries[i].vector;
+ ioa_cfg->nvectors = vectors;
+ }
+
+ return err;
+}
+
+static int __devinit ipr_enable_msi(struct ipr_ioa_cfg *ioa_cfg)
+{
+ int i, err, vectors;
+
+ vectors = ipr_number_of_msix;
+
+ while ((err = pci_enable_msi_block(ioa_cfg->pdev, vectors)) > 0)
+ vectors = err;
+
+ if (err < 0) {
+ pci_disable_msi(ioa_cfg->pdev);
+ return err;
+ }
+
+ if (!err) {
+ for (i = 0; i < vectors; i++)
+ ioa_cfg->vectors_info[i].vec = ioa_cfg->pdev->irq + i;
+ ioa_cfg->nvectors = vectors;
+ }
+
+ return err;
+}
+
+static void __devinit name_msi_vectors(struct ipr_ioa_cfg *ioa_cfg)
+{
+ int vec_idx, n = sizeof(ioa_cfg->vectors_info[0].desc) - 1;
+
+ for (vec_idx = 0; vec_idx < ioa_cfg->nvectors; vec_idx++) {
+ snprintf(ioa_cfg->vectors_info[vec_idx].desc, n,
+ "host%d-%d", ioa_cfg->host->host_no, vec_idx);
+ ioa_cfg->vectors_info[vec_idx].
+ desc[strlen(ioa_cfg->vectors_info[vec_idx].desc)] = 0;
+ }
+}
+
+static int __devinit ipr_request_other_msi_irqs(struct ipr_ioa_cfg *ioa_cfg)
+{
+ int i, rc;
+
+ for (i = 1; i < ioa_cfg->nvectors; i++) {
+ rc = request_irq(ioa_cfg->vectors_info[i].vec,
+ ipr_isr_mhrrq,
+ 0,
+ ioa_cfg->vectors_info[i].desc,
+ &ioa_cfg->hrrq[i]);
+ if (rc) {
+ while (--i >= 0)
+ free_irq(ioa_cfg->vectors_info[i].vec,
+ &ioa_cfg->hrrq[i]);
+ return rc;
+ }
+ }
+ return 0;
+}
+
/**
* ipr_test_intr - Handle the interrupt generated in ipr_test_msi().
* @pdev: PCI device struct
@@ -8775,6 +9104,7 @@ static irqreturn_t __devinit ipr_test_in
unsigned long lock_flags = 0;
irqreturn_t rc = IRQ_HANDLED;
+ dev_info(&ioa_cfg->pdev->dev, "Received IRQ : %d\n", irq);
spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
ioa_cfg->msi_received = 1;
@@ -8842,8 +9172,7 @@ static int __devinit ipr_test_msi(struct
return rc;
}
-/**
- * ipr_probe_ioa - Allocates memory and does first stage of initialization
+ /* ipr_probe_ioa - Allocates memory and does first stage of initialization
* @pdev: PCI device struct
* @dev_id: PCI device id struct
*
@@ -8954,17 +9283,56 @@ static int __devinit ipr_probe_ioa(struc
goto cleanup_nomem;
}
- /* Enable MSI style interrupts if they are supported. */
- if (ioa_cfg->ipr_chip->intr_type == IPR_USE_MSI && !pci_enable_msi(pdev)) {
+ if (ipr_number_of_msix > IPR_MAX_MSIX_VECTORS) {
+ dev_err(&pdev->dev, "The max number of MSIX is %d\n",
+ IPR_MAX_MSIX_VECTORS);
+ ipr_number_of_msix = IPR_MAX_MSIX_VECTORS;
+ }
+
+ if (ioa_cfg->ipr_chip->intr_type == IPR_USE_MSI &&
+ ipr_enable_msix(ioa_cfg) == 0)
+ ioa_cfg->intr_flag = IPR_USE_MSIX;
+ else if (ioa_cfg->ipr_chip->intr_type == IPR_USE_MSI &&
+ ipr_enable_msi(ioa_cfg) == 0)
+ ioa_cfg->intr_flag = IPR_USE_MSI;
+ else {
+ ioa_cfg->intr_flag = IPR_USE_LSI;
+ ioa_cfg->nvectors = 1;
+ dev_info(&pdev->dev, "Cannot enable MSI.\n");
+ }
+
+ if (ioa_cfg->intr_flag == IPR_USE_MSI ||
+ ioa_cfg->intr_flag == IPR_USE_MSIX) {
rc = ipr_test_msi(ioa_cfg, pdev);
- if (rc == -EOPNOTSUPP)
- pci_disable_msi(pdev);
+ if (rc == -EOPNOTSUPP) {
+ if (ioa_cfg->intr_flag == IPR_USE_MSI) {
+ ioa_cfg->intr_flag &= ~IPR_USE_MSI;
+ pci_disable_msi(pdev);
+ } else if (ioa_cfg->intr_flag == IPR_USE_MSIX) {
+ ioa_cfg->intr_flag &= ~IPR_USE_MSIX;
+ pci_disable_msix(pdev);
+ }
+
+ ioa_cfg->intr_flag = IPR_USE_LSI;
+ ioa_cfg->nvectors = 1;
+ }
else if (rc)
goto out_msi_disable;
- else
- dev_info(&pdev->dev, "MSI enabled with IRQ: %d\n", pdev->irq);
- } else if (ipr_debug)
- dev_info(&pdev->dev, "Cannot enable MSI.\n");
+ else {
+ if (ioa_cfg->intr_flag == IPR_USE_MSI)
+ dev_info(&pdev->dev,
+ "Request for %d MSIs succeeded with starting IRQ: %d\n",
+ ioa_cfg->nvectors, pdev->irq);
+ else if (ioa_cfg->intr_flag == IPR_USE_MSIX)
+ dev_info(&pdev->dev,
+ "Request for %d MSIXs succeeded.",
+ ioa_cfg->nvectors);
+ }
+ }
+
+ ioa_cfg->hrrq_num = min3(ioa_cfg->nvectors,
+ (unsigned int)num_online_cpus(),
+ (unsigned int)IPR_MAX_HRRQ_NUM);
/* Save away PCI config space for use following IOA reset */
rc = pci_save_state(pdev);
@@ -9012,10 +9380,21 @@ static int __devinit ipr_probe_ioa(struc
ioa_cfg->ioa_unit_checked = 1;
ipr_mask_and_clear_interrupts(ioa_cfg, ~IPR_PCII_IOA_TRANS_TO_OPER);
- rc = request_irq(pdev->irq, ipr_isr,
- ioa_cfg->msi_received ? 0 : IRQF_SHARED,
- IPR_NAME, ioa_cfg);
+ if (ioa_cfg->intr_flag == IPR_USE_MSI
+ || ioa_cfg->intr_flag == IPR_USE_MSIX) {
+ name_msi_vectors(ioa_cfg);
+ rc = request_irq(ioa_cfg->vectors_info[0].vec, ipr_isr,
+ 0,
+ ioa_cfg->vectors_info[0].desc,
+ &ioa_cfg->hrrq[0]);
+ if (!rc)
+ rc = ipr_request_other_msi_irqs(ioa_cfg);
+ } else {
+ rc = request_irq(pdev->irq, ipr_isr,
+ IRQF_SHARED,
+ IPR_NAME, &ioa_cfg->hrrq[0]);
+ }
if (rc) {
dev_err(&pdev->dev, "Couldn't register IRQ %d! rc=%d\n",
pdev->irq, rc);
@@ -9040,7 +9419,10 @@ out:
cleanup_nolog:
ipr_free_mem(ioa_cfg);
out_msi_disable:
- pci_disable_msi(pdev);
+ if (ioa_cfg->intr_flag == IPR_USE_MSI)
+ pci_disable_msi(pdev);
+ else if (ioa_cfg->intr_flag == IPR_USE_MSIX)
+ pci_disable_msix(pdev);
cleanup_nomem:
iounmap(ipr_regs);
out_release_regions:
@@ -9363,9 +9745,7 @@ static struct pci_driver ipr_driver = {
**/
static void ipr_halt_done(struct ipr_cmnd *ipr_cmd)
{
- struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
-
- list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
}
/**
Index: b/drivers/scsi/ipr.h
===================================================================
--- a/drivers/scsi/ipr.h 2012-11-14 23:11:26.756404525 -0600
+++ b/drivers/scsi/ipr.h 2012-11-15 20:46:59.507654256 -0600
@@ -303,6 +303,9 @@ IPR_PCII_NO_HOST_RRQ | IPR_PCII_IOARRIN_
* Misc literals
*/
#define IPR_NUM_IOADL_ENTRIES IPR_MAX_SGLIST
+#define IPR_MAX_MSIX_VECTORS 0x5
+#define IPR_MAX_HRRQ_NUM 0x10
+#define IPR_INIT_HRRQ 0x0
/*
* Adapter interface types
@@ -464,9 +467,36 @@ struct ipr_supported_device {
u8 reserved2[16];
}__attribute__((packed, aligned (4)));
+struct ipr_hrr_queue {
+ struct ipr_ioa_cfg *ioa_cfg;
+ __be32 *host_rrq;
+ dma_addr_t host_rrq_dma;
+#define IPR_HRRQ_REQ_RESP_HANDLE_MASK 0xfffffffc
+#define IPR_HRRQ_RESP_BIT_SET 0x00000002
+#define IPR_HRRQ_TOGGLE_BIT 0x00000001
+#define IPR_HRRQ_REQ_RESP_HANDLE_SHIFT 2
+#define IPR_ID_HRRQ_SELE_ENABLE 0x02
+ volatile __be32 *hrrq_start;
+ volatile __be32 *hrrq_end;
+ volatile __be32 *hrrq_curr;
+
+ struct list_head hrrq_free_q;
+ struct list_head hrrq_pending_q;
+
+ volatile u32 toggle_bit;
+ u32 size;
+ u32 min_cmd_id;
+ u32 max_cmd_id;
+};
+
+#define for_each_hrrq(hrrq, ioa_cfg) \
+ for (hrrq = (ioa_cfg)->hrrq; \
+ hrrq < ((ioa_cfg)->hrrq + (ioa_cfg)->hrrq_num); hrrq++)
+
/* Command packet structure */
struct ipr_cmd_pkt {
- __be16 reserved; /* Reserved by IOA */
+ u8 reserved; /* Reserved by IOA */
+ u8 hrrq_id;
u8 request_type;
#define IPR_RQTYPE_SCSICDB 0x00
#define IPR_RQTYPE_IOACMD 0x01
@@ -1322,6 +1352,7 @@ struct ipr_chip_t {
u16 intr_type;
#define IPR_USE_LSI 0x00
#define IPR_USE_MSI 0x01
+#define IPR_USE_MSIX 0x02
u16 sis_type;
#define IPR_SIS32 0x00
#define IPR_SIS64 0x01
@@ -1420,20 +1451,6 @@ struct ipr_ioa_cfg {
struct ipr_trace_entry *trace;
u32 trace_index:IPR_NUM_TRACE_INDEX_BITS;
- /*
- * Queue for free command blocks
- */
- char ipr_free_label[8];
-#define IPR_FREEQ_LABEL "free-q"
- struct list_head free_q;
-
- /*
- * Queue for command blocks outstanding to the adapter
- */
- char ipr_pending_label[8];
-#define IPR_PENDQ_LABEL "pend-q"
- struct list_head pending_q;
-
char cfg_table_start[8];
#define IPR_CFG_TBL_START "cfg"
union {
@@ -1457,16 +1474,9 @@ struct ipr_ioa_cfg {
struct list_head hostrcb_free_q;
struct list_head hostrcb_pending_q;
- __be32 *host_rrq;
- dma_addr_t host_rrq_dma;
-#define IPR_HRRQ_REQ_RESP_HANDLE_MASK 0xfffffffc
-#define IPR_HRRQ_RESP_BIT_SET 0x00000002
-#define IPR_HRRQ_TOGGLE_BIT 0x00000001
-#define IPR_HRRQ_REQ_RESP_HANDLE_SHIFT 2
- volatile __be32 *hrrq_start;
- volatile __be32 *hrrq_end;
- volatile __be32 *hrrq_curr;
- volatile u32 toggle_bit;
+ struct ipr_hrr_queue hrrq[IPR_MAX_HRRQ_NUM];
+ u32 hrrq_num;
+ u32 hrrq_index;
struct ipr_bus_attributes bus_attr[IPR_MAX_NUM_BUSES];
@@ -1512,6 +1522,15 @@ struct ipr_ioa_cfg {
u32 max_cmds;
struct ipr_cmnd **ipr_cmnd_list;
dma_addr_t *ipr_cmnd_list_dma;
+
+ u16 intr_flag;
+ unsigned int nvectors;
+
+ struct {
+ unsigned short vec;
+ char desc[22];
+ } vectors_info[IPR_MAX_MSIX_VECTORS];
+
}; /* struct ipr_ioa_cfg */
struct ipr_cmnd {
@@ -1549,6 +1568,7 @@ struct ipr_cmnd {
struct scsi_device *sdev;
} u;
+ struct ipr_hrr_queue *hrrq;
struct ipr_ioa_cfg *ioa_cfg;
};
--
* [PATCH 5/7] ipr: Reduce lock contention
2012-11-26 15:55 [PATCH 0/7] Add support for new IBM SAS controllers wenxiong
` (3 preceding siblings ...)
2012-11-26 15:55 ` [PATCH 4/7] ipr: Add support for MSI-X and distributed completion wenxiong
@ 2012-11-26 15:55 ` wenxiong
2012-11-26 15:55 ` [PATCH 6/7] ipr: Implement block iopoll wenxiong
2012-11-26 15:55 ` [PATCH 7/7] ipr: Driver version 2.6.0 wenxiong
6 siblings, 0 replies; 12+ messages in thread
From: wenxiong @ 2012-11-26 15:55 UTC (permalink / raw)
To: James.Bottomley; +Cc: linux-scsi, klebers, brking, Wen Xiong
[-- Attachment #1: hrrq_lock --]
[-- Type: text/plain, Size: 30572 bytes --]
This patch reduces lock contention when distributed completion processing
is in use: each host request/response queue (HRRQ) gets its own spinlock,
so completions on different queues no longer serialize on the SCSI host lock.
Signed-off-by: Wen Xiong <wenxiong@linux.vnet.ibm.com>
---
drivers/scsi/ipr.c | 322 +++++++++++++++++++++++++++++++++++++----------------
drivers/scsi/ipr.h | 21 +--
2 files changed, 239 insertions(+), 104 deletions(-)
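To summarize the locking change before the diff: each HRRQ now carries a
private spinlock plus a pointer selecting which lock its fast path takes,
with queue 0 still pointing at the SCSI host lock so the legacy
single-queue and error-handling paths are unchanged. A minimal sketch of
that scheme (the demo_* names are illustrative, not from the driver):

#include <linux/spinlock.h>

struct demo_hrrq {
	spinlock_t _lock;	/* private per-queue lock */
	spinlock_t *lock;	/* lock the fast path actually takes */
};

static void demo_init_hrrq_locks(struct demo_hrrq *hrrq, int num,
				 spinlock_t *host_lock)
{
	int i;

	for (i = 0; i < num; i++) {
		spin_lock_init(&hrrq[i]._lock);
		/* queue 0 keeps sharing the host lock; the others
		 * complete commands under their own lock */
		hrrq[i].lock = i ? &hrrq[i]._lock : host_lock;
	}
}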
Index: b/drivers/scsi/ipr.c
===================================================================
--- a/drivers/scsi/ipr.c 2012-11-15 20:48:21.000000000 -0600
+++ b/drivers/scsi/ipr.c 2012-11-18 22:35:29.254215152 -0600
@@ -552,7 +552,8 @@ static void ipr_trc_hook(struct ipr_cmnd
struct ipr_trace_entry *trace_entry;
struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
- trace_entry = &ioa_cfg->trace[ioa_cfg->trace_index++];
+ trace_entry = &ioa_cfg->trace[atomic_add_return
+ (1, &ioa_cfg->trace_index)%IPR_NUM_TRACE_ENTRIES];
trace_entry->time = jiffies;
trace_entry->op_code = ipr_cmd->ioarcb.cmd_pkt.cdb[0];
trace_entry->type = type;
@@ -563,6 +564,7 @@ static void ipr_trc_hook(struct ipr_cmnd
trace_entry->cmd_index = ipr_cmd->cmd_index & 0xff;
trace_entry->res_handle = ipr_cmd->ioarcb.res_handle;
trace_entry->u.add_data = add_data;
+ wmb();
}
#else
#define ipr_trc_hook(ipr_cmd, type, add_data) do { } while (0)
@@ -697,9 +699,15 @@ static void ipr_mask_and_clear_interrupt
u32 clr_ints)
{
volatile u32 int_reg;
+ int i;
/* Stop new interrupts */
- ioa_cfg->allow_interrupts = 0;
+ for (i = 0; i < ioa_cfg->hrrq_num; i++) {
+ spin_lock(&ioa_cfg->hrrq[i]._lock);
+ ioa_cfg->hrrq[i].allow_interrupts = 0;
+ spin_unlock(&ioa_cfg->hrrq[i]._lock);
+ }
+ wmb();
/* Set interrupt mask to stop all new interrupts */
if (ioa_cfg->sis64)
@@ -818,6 +826,7 @@ static void ipr_fail_all_ops(struct ipr_
ENTER;
for_each_hrrq(hrrq, ioa_cfg) {
+ spin_lock(&hrrq->_lock);
list_for_each_entry_safe(ipr_cmd,
temp, &hrrq->hrrq_pending_q, queue) {
list_del(&ipr_cmd->queue);
@@ -837,6 +846,7 @@ static void ipr_fail_all_ops(struct ipr_
del_timer(&ipr_cmd->timer);
ipr_cmd->done(ipr_cmd);
}
+ spin_unlock(&hrrq->_lock);
}
LEAVE;
}
@@ -991,12 +1001,9 @@ static void ipr_send_blocking_cmd(struct
static int ipr_get_hrrq_index(struct ipr_ioa_cfg *ioa_cfg)
{
if (ioa_cfg->hrrq_num == 1)
- ioa_cfg->hrrq_index = 0;
- else {
- if (++ioa_cfg->hrrq_index >= ioa_cfg->hrrq_num)
- ioa_cfg->hrrq_index = 1;
- }
- return ioa_cfg->hrrq_index;
+ return 0;
+ else
+ return (atomic_add_return(1, &ioa_cfg->hrrq_index) % (ioa_cfg->hrrq_num - 1)) + 1;
}
/**
@@ -1018,7 +1025,7 @@ static void ipr_send_hcam(struct ipr_ioa
struct ipr_cmnd *ipr_cmd;
struct ipr_ioarcb *ioarcb;
- if (ioa_cfg->allow_cmds) {
+ if (ioa_cfg->hrrq[IPR_INIT_HRRQ].allow_cmds) {
ipr_cmd = ipr_get_free_ipr_cmnd(ioa_cfg);
list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_pending_q);
list_add_tail(&hostrcb->queue, &ioa_cfg->hostrcb_pending_q);
@@ -2564,7 +2571,7 @@ static int ipr_reset_reload(struct ipr_i
/* If we got hit with a host reset while we were already resetting
the adapter for some reason, and the reset failed. */
- if (ioa_cfg->ioa_is_dead) {
+ if (ioa_cfg->hrrq[IPR_INIT_HRRQ].ioa_is_dead) {
ipr_trace;
return FAILED;
}
@@ -3205,7 +3212,8 @@ static void ipr_worker_thread(struct wor
restart:
do {
did_work = 0;
- if (!ioa_cfg->allow_cmds || !ioa_cfg->allow_ml_add_del) {
+ if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].allow_cmds ||
+ !ioa_cfg->allow_ml_add_del) {
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
return;
}
@@ -3453,7 +3461,7 @@ static ssize_t ipr_show_adapter_state(st
int len;
spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
- if (ioa_cfg->ioa_is_dead)
+ if (ioa_cfg->hrrq[IPR_INIT_HRRQ].ioa_is_dead)
len = snprintf(buf, PAGE_SIZE, "offline\n");
else
len = snprintf(buf, PAGE_SIZE, "online\n");
@@ -3479,14 +3487,20 @@ static ssize_t ipr_store_adapter_state(s
struct Scsi_Host *shost = class_to_shost(dev);
struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
unsigned long lock_flags;
- int result = count;
+ int result = count, i;
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
- if (ioa_cfg->ioa_is_dead && !strncmp(buf, "online", 6)) {
- ioa_cfg->ioa_is_dead = 0;
+ if (ioa_cfg->hrrq[IPR_INIT_HRRQ].ioa_is_dead &&
+ !strncmp(buf, "online", 6)) {
+ for (i = 0; i < ioa_cfg->hrrq_num; i++) {
+ spin_lock(&ioa_cfg->hrrq[i]._lock);
+ ioa_cfg->hrrq[i].ioa_is_dead = 0;
+ spin_unlock(&ioa_cfg->hrrq[i]._lock);
+ }
+ wmb();
ioa_cfg->reset_retries = 0;
ioa_cfg->in_ioa_bringdown = 0;
ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE);
@@ -4066,7 +4080,7 @@ static int ipr_alloc_dump(struct ipr_ioa
ioa_cfg->dump = dump;
ioa_cfg->sdt_state = WAIT_FOR_DUMP;
- if (ioa_cfg->ioa_is_dead && !ioa_cfg->dump_taken) {
+ if (ioa_cfg->hrrq[IPR_INIT_HRRQ].ioa_is_dead && !ioa_cfg->dump_taken) {
ioa_cfg->dump_taken = 1;
schedule_work(&ioa_cfg->work_q);
}
@@ -4861,10 +4875,11 @@ static int __ipr_eh_dev_reset(struct scs
*/
if (ioa_cfg->in_reset_reload)
return FAILED;
- if (ioa_cfg->ioa_is_dead)
+ if (ioa_cfg->hrrq[IPR_INIT_HRRQ].ioa_is_dead)
return FAILED;
for_each_hrrq(hrrq, ioa_cfg) {
+ spin_lock(&hrrq->_lock);
list_for_each_entry(ipr_cmd, &hrrq->hrrq_pending_q, queue) {
if (ipr_cmd->ioarcb.res_handle == res->res_handle) {
if (ipr_cmd->scsi_cmd)
@@ -4877,6 +4892,7 @@ static int __ipr_eh_dev_reset(struct scs
ipr_cmd->qc->flags |= ATA_QCFLAG_FAILED;
}
}
+ spin_unlock(&hrrq->_lock);
}
}
res->resetting_device = 1;
@@ -4889,6 +4905,7 @@ static int __ipr_eh_dev_reset(struct scs
spin_lock_irq(scsi_cmd->device->host->host_lock);
for_each_hrrq(hrrq, ioa_cfg) {
+ spin_lock(&hrrq->_lock);
list_for_each_entry(ipr_cmd,
&hrrq->hrrq_pending_q, queue) {
if (ipr_cmd->ioarcb.res_handle ==
@@ -4896,6 +4913,7 @@ static int __ipr_eh_dev_reset(struct scs
rc = -EIO;
break;
}
+ spin_unlock(&hrrq->_lock);
}
}
} else
@@ -5020,7 +5038,8 @@ static int ipr_cancel_op(struct scsi_cmn
* This will force the mid-layer to call ipr_eh_host_reset,
* which will then go to sleep and wait for the reset to complete
*/
- if (ioa_cfg->in_reset_reload || ioa_cfg->ioa_is_dead)
+ if (ioa_cfg->in_reset_reload ||
+ ioa_cfg->hrrq[IPR_INIT_HRRQ].ioa_is_dead)
return FAILED;
if (!res)
return FAILED;
@@ -5036,6 +5055,7 @@ static int ipr_cancel_op(struct scsi_cmn
return FAILED;
for_each_hrrq(hrrq, ioa_cfg) {
+ spin_lock(&hrrq->_lock);
list_for_each_entry(ipr_cmd, &hrrq->hrrq_pending_q, queue) {
if (ipr_cmd->scsi_cmd == scsi_cmd) {
ipr_cmd->done = ipr_scsi_eh_done;
@@ -5043,6 +5063,7 @@ static int ipr_cancel_op(struct scsi_cmn
break;
}
}
+ spin_unlock(&hrrq->_lock);
}
if (!op_found)
@@ -5112,6 +5133,7 @@ static irqreturn_t ipr_handle_other_inte
{
irqreturn_t rc = IRQ_HANDLED;
u32 int_mask_reg;
+
int_mask_reg = readl(ioa_cfg->regs.sense_interrupt_mask_reg32);
int_reg &= ~int_mask_reg;
@@ -5173,6 +5195,7 @@ static irqreturn_t ipr_handle_other_inte
ipr_mask_and_clear_interrupts(ioa_cfg, ~0);
ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE);
}
+
return rc;
}
@@ -5205,7 +5228,7 @@ static int __ipr_process_hrrq(struct ipr
int num_hrrq = 0;
/* If interrupts are disabled, ignore the interrupt */
- if (!ioa_cfg->allow_interrupts)
+ if (!hrr_queue->allow_interrupts)
return 0;
while ((be32_to_cpu(*hrr_queue->hrrq_curr) & IPR_HRRQ_TOGGLE_BIT) ==
@@ -5252,7 +5275,7 @@ static irqreturn_t ipr_isr(int irq, void
{
struct ipr_hrr_queue *hrrq = (struct ipr_hrr_queue *)devp;
struct ipr_ioa_cfg *ioa_cfg = hrrq->ioa_cfg;
- unsigned long lock_flags = 0;
+ unsigned long hrrq_flags = 0;
u32 int_reg = 0;
u32 ioasc;
u16 cmd_index;
@@ -5262,10 +5285,10 @@ static irqreturn_t ipr_isr(int irq, void
irqreturn_t rc = IRQ_NONE;
LIST_HEAD(doneq);
- spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ spin_lock_irqsave(hrrq->lock, hrrq_flags);
/* If interrupts are disabled, ignore the interrupt */
- if (!ioa_cfg->allow_interrupts) {
- spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ if (!hrrq->allow_interrupts) {
+ spin_unlock_irqrestore(hrrq->lock, hrrq_flags);
return IRQ_NONE;
}
@@ -5332,7 +5355,7 @@ static irqreturn_t ipr_isr(int irq, void
rc = ipr_handle_other_interrupt(ioa_cfg, int_reg);
unlock_out:
- spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ spin_unlock_irqrestore(hrrq->lock, hrrq_flags);
list_for_each_entry_safe(ipr_cmd, temp, &doneq, queue) {
list_del(&ipr_cmd->queue);
del_timer(&ipr_cmd->timer);
@@ -5352,17 +5375,16 @@ unlock_out:
static irqreturn_t ipr_isr_mhrrq(int irq, void *devp)
{
struct ipr_hrr_queue *hrrq = (struct ipr_hrr_queue *)devp;
- struct ipr_ioa_cfg *ioa_cfg = hrrq->ioa_cfg;
- unsigned long lock_flags = 0;
+ unsigned long hrrq_flags = 0;
struct ipr_cmnd *ipr_cmd, *temp;
irqreturn_t rc = IRQ_NONE;
LIST_HEAD(doneq);
- spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ spin_lock_irqsave(hrrq->lock, hrrq_flags);
/* If interrupts are disabled, ignore the interrupt */
- if (!ioa_cfg->allow_interrupts) {
- spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ if (!hrrq->allow_interrupts) {
+ spin_unlock_irqrestore(hrrq->lock, hrrq_flags);
return IRQ_NONE;
}
@@ -5372,7 +5394,7 @@ static irqreturn_t ipr_isr_mhrrq(int irq
if (__ipr_process_hrrq(hrrq, &doneq))
rc = IRQ_HANDLED;
- spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ spin_unlock_irqrestore(hrrq->lock, hrrq_flags);
list_for_each_entry_safe(ipr_cmd, temp, &doneq, queue) {
list_del(&ipr_cmd->queue);
@@ -5965,14 +5987,14 @@ static void ipr_scsi_done(struct ipr_cmn
if (likely(IPR_IOASC_SENSE_KEY(ioasc) == 0)) {
scsi_dma_unmap(scsi_cmd);
- spin_lock_irqsave(ioa_cfg->host->host_lock, hrrq_flags);
+ spin_lock_irqsave(ipr_cmd->hrrq->lock, hrrq_flags);
list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
scsi_cmd->scsi_done(scsi_cmd);
- spin_unlock_irqrestore(ioa_cfg->host->host_lock, hrrq_flags);
+ spin_unlock_irqrestore(ipr_cmd->hrrq->lock, hrrq_flags);
} else {
- spin_lock_irqsave(ioa_cfg->host->host_lock, hrrq_flags);
+ spin_lock_irqsave(ipr_cmd->hrrq->lock, hrrq_flags);
ipr_erp_start(ioa_cfg, ipr_cmd);
- spin_unlock_irqrestore(ioa_cfg->host->host_lock, hrrq_flags);
+ spin_unlock_irqrestore(ipr_cmd->hrrq->lock, hrrq_flags);
}
}
@@ -5995,26 +6017,34 @@ static int ipr_queuecommand(struct Scsi_
struct ipr_resource_entry *res;
struct ipr_ioarcb *ioarcb;
struct ipr_cmnd *ipr_cmd;
- unsigned long lock_flags;
+ unsigned long hrrq_flags, lock_flags;
int rc;
struct ipr_hrr_queue *hrrq;
int hrrq_id;
ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
- spin_lock_irqsave(shost->host_lock, lock_flags);
scsi_cmd->result = (DID_OK << 16);
res = scsi_cmd->device->hostdata;
+
+ if (ipr_is_gata(res) && res->sata_port) {
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ rc = ata_sas_queuecmd(scsi_cmd, res->sata_port->ap);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return rc;
+ }
+
hrrq_id = ipr_get_hrrq_index(ioa_cfg);
hrrq = &ioa_cfg->hrrq[hrrq_id];
+ spin_lock_irqsave(hrrq->lock, hrrq_flags);
/*
* We are currently blocking all devices due to a host reset
* We have told the host to stop giving us new requests, but
* ERP ops don't count. FIXME
*/
- if (unlikely(!ioa_cfg->allow_cmds && !ioa_cfg->ioa_is_dead)) {
- spin_unlock_irqrestore(shost->host_lock, lock_flags);
+ if (unlikely(!hrrq->allow_cmds && !hrrq->ioa_is_dead)) {
+ spin_unlock_irqrestore(hrrq->lock, hrrq_flags);
return SCSI_MLQUEUE_HOST_BUSY;
}
@@ -6022,23 +6052,17 @@ static int ipr_queuecommand(struct Scsi_
* FIXME - Create scsi_set_host_offline interface
* and the ioa_is_dead check can be removed
*/
- if (unlikely(ioa_cfg->ioa_is_dead || !res)) {
- spin_unlock_irqrestore(shost->host_lock, lock_flags);
+ if (unlikely(hrrq->ioa_is_dead || !res)) {
+ spin_unlock_irqrestore(hrrq->lock, hrrq_flags);
goto err_nodev;
}
- if (ipr_is_gata(res) && res->sata_port) {
- rc = ata_sas_queuecmd(scsi_cmd, res->sata_port->ap);
- spin_unlock_irqrestore(shost->host_lock, lock_flags);
- return rc;
- }
-
ipr_cmd = __ipr_get_free_ipr_cmnd(hrrq);
if (ipr_cmd == NULL) {
- spin_unlock_irqrestore(shost->host_lock, lock_flags);
+ spin_unlock_irqrestore(hrrq->lock, hrrq_flags);
return SCSI_MLQUEUE_HOST_BUSY;
}
- spin_unlock_irqrestore(shost->host_lock, lock_flags);
+ spin_unlock_irqrestore(hrrq->lock, hrrq_flags);
ipr_init_ipr_cmnd(ipr_cmd, ipr_scsi_done);
ioarcb = &ipr_cmd->ioarcb;
@@ -6068,18 +6092,18 @@ static int ipr_queuecommand(struct Scsi_
else
rc = ipr_build_ioadl(ioa_cfg, ipr_cmd);
- spin_lock_irqsave(shost->host_lock, lock_flags);
- if (unlikely(rc || (!ioa_cfg->allow_cmds && !ioa_cfg->ioa_is_dead))) {
+ spin_lock_irqsave(hrrq->lock, hrrq_flags);
+ if (unlikely(rc || (!hrrq->allow_cmds && !hrrq->ioa_is_dead))) {
list_add_tail(&ipr_cmd->queue, &hrrq->hrrq_free_q);
- spin_unlock_irqrestore(shost->host_lock, lock_flags);
+ spin_unlock_irqrestore(hrrq->lock, hrrq_flags);
if (!rc)
scsi_dma_unmap(scsi_cmd);
return SCSI_MLQUEUE_HOST_BUSY;
}
- if (unlikely(ioa_cfg->ioa_is_dead)) {
+ if (unlikely(hrrq->ioa_is_dead)) {
list_add_tail(&ipr_cmd->queue, &hrrq->hrrq_free_q);
- spin_unlock_irqrestore(shost->host_lock, lock_flags);
+ spin_unlock_irqrestore(hrrq->lock, hrrq_flags);
scsi_dma_unmap(scsi_cmd);
goto err_nodev;
}
@@ -6092,15 +6116,15 @@ static int ipr_queuecommand(struct Scsi_
list_add_tail(&ipr_cmd->queue, &hrrq->hrrq_pending_q);
ipr_trc_hook(ipr_cmd, IPR_TRACE_START, IPR_GET_RES_PHYS_LOC(res));
ipr_send_command(ipr_cmd);
- spin_unlock_irqrestore(shost->host_lock, lock_flags);
+ spin_unlock_irqrestore(hrrq->lock, hrrq_flags);
return 0;
err_nodev:
- spin_lock_irqsave(shost->host_lock, lock_flags);
+ spin_lock_irqsave(hrrq->lock, hrrq_flags);
memset(scsi_cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
scsi_cmd->result = (DID_NO_CONNECT << 16);
scsi_cmd->scsi_done(scsi_cmd);
- spin_unlock_irqrestore(shost->host_lock, lock_flags);
+ spin_unlock_irqrestore(hrrq->lock, hrrq_flags);
return 0;
}
@@ -6198,7 +6222,7 @@ static void ipr_ata_phy_reset(struct ata
spin_lock_irqsave(ioa_cfg->host->host_lock, flags);
}
- if (!ioa_cfg->allow_cmds)
+ if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].allow_cmds)
goto out_unlock;
rc = ipr_device_reset(ioa_cfg, res);
@@ -6240,12 +6264,14 @@ static void ipr_ata_post_internal(struct
}
for_each_hrrq(hrrq, ioa_cfg) {
+ spin_lock(&hrrq->_lock);
list_for_each_entry(ipr_cmd, &hrrq->hrrq_pending_q, queue) {
if (ipr_cmd->qc == qc) {
ipr_device_reset(ioa_cfg, sata_port->res);
break;
}
}
+ spin_unlock(&hrrq->_lock);
}
spin_unlock_irqrestore(ioa_cfg->host->host_lock, flags);
}
@@ -6294,6 +6320,7 @@ static void ipr_sata_done(struct ipr_cmn
struct ipr_resource_entry *res = sata_port->res;
u32 ioasc = be32_to_cpu(ipr_cmd->s.ioasa.hdr.ioasc);
+ spin_lock(&ipr_cmd->hrrq->_lock);
if (ipr_cmd->ioa_cfg->sis64)
memcpy(&sata_port->ioasa, &ipr_cmd->s.ioasa64.u.gata,
sizeof(struct ipr_ioasa_gata));
@@ -6310,6 +6337,7 @@ static void ipr_sata_done(struct ipr_cmn
else
qc->err_mask |= ac_err_mask(sata_port->ioasa.status);
list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
+ spin_unlock(&ipr_cmd->hrrq->_lock);
ata_qc_complete(qc);
}
@@ -6405,6 +6433,48 @@ static void ipr_build_ata_ioadl(struct i
}
/**
+ * ipr_qc_defer - Get a free ipr_cmd
+ * @qc: queued command
+ *
+ * Return value:
+ * 0 if success
+ **/
+static int ipr_qc_defer(struct ata_queued_cmd *qc)
+{
+ struct ata_port *ap = qc->ap;
+ struct ipr_sata_port *sata_port = ap->private_data;
+ struct ipr_ioa_cfg *ioa_cfg = sata_port->ioa_cfg;
+ struct ipr_cmnd *ipr_cmd;
+ struct ipr_hrr_queue *hrrq;
+ int hrrq_id;
+
+ hrrq_id = ipr_get_hrrq_index(ioa_cfg);
+ hrrq = &ioa_cfg->hrrq[hrrq_id];
+
+ qc->lldd_task = NULL;
+ spin_lock(&hrrq->_lock);
+ if (unlikely(hrrq->ioa_is_dead)){
+ spin_unlock(&hrrq->_lock);
+ return 0;
+ }
+
+ if (unlikely(!hrrq->allow_cmds)) {
+ spin_unlock(&hrrq->_lock);
+ return ATA_DEFER_LINK;
+ }
+
+ ipr_cmd = __ipr_get_free_ipr_cmnd(hrrq);
+ if (ipr_cmd == NULL) {
+ spin_unlock(&hrrq->_lock);
+ return ATA_DEFER_LINK;
+ }
+
+ qc->lldd_task = ipr_cmd;
+ spin_unlock(&hrrq->_lock);
+ return 0;
+}
+
+/**
* ipr_qc_issue - Issue a SATA qc to a device
* @qc: queued command
*
@@ -6420,15 +6490,23 @@ static unsigned int ipr_qc_issue(struct
struct ipr_cmnd *ipr_cmd;
struct ipr_ioarcb *ioarcb;
struct ipr_ioarcb_ata_regs *regs;
- struct ipr_hrr_queue *hrrq;
- int hrrq_id;
- if (unlikely(!ioa_cfg->allow_cmds || ioa_cfg->ioa_is_dead))
+ if (qc->lldd_task == NULL)
+ ipr_qc_defer(qc);
+
+ ipr_cmd = qc->lldd_task;
+ if (ipr_cmd == NULL)
return AC_ERR_SYSTEM;
- hrrq_id = ipr_get_hrrq_index(ioa_cfg);
- hrrq = &ioa_cfg->hrrq[hrrq_id];
- ipr_cmd = __ipr_get_free_ipr_cmnd(hrrq);
+ qc->lldd_task = NULL;
+ spin_lock(&ipr_cmd->hrrq->_lock);
+ if (unlikely(!ipr_cmd->hrrq->allow_cmds ||
+ ipr_cmd->hrrq->ioa_is_dead)) {
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
+ spin_unlock(&ipr_cmd->hrrq->_lock);
+ return AC_ERR_SYSTEM;
+ }
+
ipr_init_ipr_cmnd(ipr_cmd, ipr_lock_and_done);
ioarcb = &ipr_cmd->ioarcb;
@@ -6441,7 +6519,7 @@ static unsigned int ipr_qc_issue(struct
memset(regs, 0, sizeof(*regs));
ioarcb->add_cmd_parms_len = cpu_to_be16(sizeof(*regs));
- list_add_tail(&ipr_cmd->queue, &hrrq->hrrq_pending_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_pending_q);
ipr_cmd->qc = qc;
ipr_cmd->done = ipr_sata_done;
ipr_cmd->ioarcb.res_handle = res->res_handle;
@@ -6485,6 +6563,7 @@ static unsigned int ipr_qc_issue(struct
}
ipr_send_command(ipr_cmd);
+ spin_unlock(&ipr_cmd->hrrq->_lock);
return 0;
}
@@ -6523,6 +6602,7 @@ static struct ata_port_operations ipr_sa
.hardreset = ipr_sata_reset,
.post_internal_cmd = ipr_ata_post_internal,
.qc_prep = ata_noop_qc_prep,
+ .qc_defer = ipr_qc_defer,
.qc_issue = ipr_qc_issue,
.qc_fill_rtf = ipr_qc_fill_rtf,
.port_start = ata_sas_port_start,
@@ -6620,11 +6700,16 @@ static int ipr_ioa_reset_done(struct ipr
struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
struct ipr_resource_entry *res;
struct ipr_hostrcb *hostrcb, *temp;
- int i = 0;
+ int i = 0, j;
ENTER;
ioa_cfg->in_reset_reload = 0;
- ioa_cfg->allow_cmds = 1;
+ for (j = 0; j < ioa_cfg->hrrq_num; j++) {
+ spin_lock(&ioa_cfg->hrrq[j]._lock);
+ ioa_cfg->hrrq[j].allow_cmds = 1;
+ spin_unlock(&ioa_cfg->hrrq[j]._lock);
+ }
+ wmb();
ioa_cfg->reset_cmd = NULL;
ioa_cfg->doorbell |= IPR_RUNTIME_RESET;
@@ -6655,7 +6740,7 @@ static int ipr_ioa_reset_done(struct ipr
scsi_unblock_requests(ioa_cfg->host);
spin_lock(ioa_cfg->host->host_lock);
- if (!ioa_cfg->allow_cmds)
+ if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].allow_cmds)
scsi_block_requests(ioa_cfg->host);
LEAVE;
@@ -7452,8 +7537,8 @@ static int ipr_ioafp_identify_hrrq(struc
ipr_cmd->job_step = ipr_ioafp_std_inquiry;
dev_info(&ioa_cfg->pdev->dev, "Starting IOA initialization sequence.\n");
- if (ioa_cfg->hrrq_index < ioa_cfg->hrrq_num) {
- hrrq = &ioa_cfg->hrrq[ioa_cfg->hrrq_index];
+ if (ioa_cfg->identify_hrrq_index < ioa_cfg->hrrq_num) {
+ hrrq = &ioa_cfg->hrrq[ioa_cfg->identify_hrrq_index];
ioarcb->cmd_pkt.cdb[0] = IPR_ID_HOST_RR_Q;
ioarcb->res_handle = cpu_to_be32(IPR_IOA_RES_HANDLE);
@@ -7481,7 +7566,8 @@ static int ipr_ioafp_identify_hrrq(struc
(sizeof(u32) * hrrq->size) & 0xff;
if (ioarcb->cmd_pkt.cdb[1] & IPR_ID_HRRQ_SELE_ENABLE)
- ioarcb->cmd_pkt.cdb[9] = ioa_cfg->hrrq_index;
+ ioarcb->cmd_pkt.cdb[9] =
+ ioa_cfg->identify_hrrq_index;
if (ioa_cfg->sis64) {
ioarcb->cmd_pkt.cdb[10] =
@@ -7495,24 +7581,19 @@ static int ipr_ioafp_identify_hrrq(struc
}
if (ioarcb->cmd_pkt.cdb[1] & IPR_ID_HRRQ_SELE_ENABLE)
- ioarcb->cmd_pkt.cdb[14] = ioa_cfg->hrrq_index;
+ ioarcb->cmd_pkt.cdb[14] =
+ ioa_cfg->identify_hrrq_index;
ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout,
IPR_INTERNAL_TIMEOUT);
- if (++ioa_cfg->hrrq_index < ioa_cfg->hrrq_num)
- ipr_cmd->job_step = ipr_ioafp_identify_hrrq;
+ if (++ioa_cfg->identify_hrrq_index < ioa_cfg->hrrq_num)
+ ipr_cmd->job_step = ipr_ioafp_identify_hrrq;
LEAVE;
return IPR_RC_JOB_RETURN;
-
}
- if (ioa_cfg->hrrq_num == 1)
- ioa_cfg->hrrq_index = 0;
- else
- ioa_cfg->hrrq_index = 1;
-
LEAVE;
return IPR_RC_JOB_CONTINUE;
}
@@ -7571,7 +7652,6 @@ static void ipr_reset_start_timer(struct
ipr_cmd->timer.expires = jiffies + timeout;
ipr_cmd->timer.function = (void (*)(unsigned long))ipr_reset_timer_done;
add_timer(&ipr_cmd->timer);
- LEAVE;
}
/**
@@ -7586,6 +7666,7 @@ static void ipr_init_ioa_mem(struct ipr_
struct ipr_hrr_queue *hrrq;
for_each_hrrq(hrrq, ioa_cfg) {
+ spin_lock(&hrrq->_lock);
memset(hrrq->host_rrq, 0, sizeof(u32) * hrrq->size);
/* Initialize Host RRQ pointers */
@@ -7593,9 +7674,15 @@ static void ipr_init_ioa_mem(struct ipr_
hrrq->hrrq_end = &hrrq->host_rrq[hrrq->size - 1];
hrrq->hrrq_curr = hrrq->hrrq_start;
hrrq->toggle_bit = 1;
+ spin_unlock(&hrrq->_lock);
}
+ wmb();
- ioa_cfg->hrrq_index = 0;
+ ioa_cfg->identify_hrrq_index = 0;
+ if (ioa_cfg->hrrq_num == 1)
+ atomic_set(&ioa_cfg->hrrq_index, 0);
+ else
+ atomic_set(&ioa_cfg->hrrq_index, 1);
/* Zero out config table */
memset(ioa_cfg->u.cfg_table, 0, ioa_cfg->cfg_table_size);
@@ -7673,12 +7760,18 @@ static int ipr_reset_enable_ioa(struct i
struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
volatile u32 int_reg;
volatile u64 maskval;
+ int i;
ENTER;
ipr_cmd->job_step = ipr_ioafp_identify_hrrq;
ipr_init_ioa_mem(ioa_cfg);
- ioa_cfg->allow_interrupts = 1;
+ for (i = 0; i < ioa_cfg->hrrq_num; i++) {
+ spin_lock(&ioa_cfg->hrrq[i]._lock);
+ ioa_cfg->hrrq[i].allow_interrupts = 1;
+ spin_unlock(&ioa_cfg->hrrq[i]._lock);
+ }
+ wmb();
if (ioa_cfg->sis64) {
/* Set the adapter to the correct endian mode. */
writel(IPR_ENDIAN_SWAP_KEY, ioa_cfg->regs.endian_swap_reg);
@@ -8237,7 +8330,8 @@ static int ipr_reset_shutdown_ioa(struct
int rc = IPR_RC_JOB_CONTINUE;
ENTER;
- if (shutdown_type != IPR_SHUTDOWN_NONE && !ioa_cfg->ioa_is_dead) {
+ if (shutdown_type != IPR_SHUTDOWN_NONE &&
+ !ioa_cfg->hrrq[IPR_INIT_HRRQ].ioa_is_dead) {
ipr_cmd->ioarcb.res_handle = cpu_to_be32(IPR_IOA_RES_HANDLE);
ipr_cmd->ioarcb.cmd_pkt.request_type = IPR_RQTYPE_IOACMD;
ipr_cmd->ioarcb.cmd_pkt.cdb[0] = IPR_IOA_SHUTDOWN;
@@ -8321,9 +8415,15 @@ static void _ipr_initiate_ioa_reset(stru
enum ipr_shutdown_type shutdown_type)
{
struct ipr_cmnd *ipr_cmd;
+ int i;
ioa_cfg->in_reset_reload = 1;
- ioa_cfg->allow_cmds = 0;
+ for (i = 0; i < ioa_cfg->hrrq_num; i++) {
+ spin_lock(&ioa_cfg->hrrq[i]._lock);
+ ioa_cfg->hrrq[i].allow_cmds = 0;
+ spin_unlock(&ioa_cfg->hrrq[i]._lock);
+ }
+ wmb();
scsi_block_requests(ioa_cfg->host);
ipr_cmd = ipr_get_free_ipr_cmnd(ioa_cfg);
@@ -8349,7 +8449,9 @@ static void _ipr_initiate_ioa_reset(stru
static void ipr_initiate_ioa_reset(struct ipr_ioa_cfg *ioa_cfg,
enum ipr_shutdown_type shutdown_type)
{
- if (ioa_cfg->ioa_is_dead)
+ int i;
+
+ if (ioa_cfg->hrrq[IPR_INIT_HRRQ].ioa_is_dead)
return;
if (ioa_cfg->in_reset_reload) {
@@ -8364,7 +8466,12 @@ static void ipr_initiate_ioa_reset(struc
"IOA taken offline - error recovery failed\n");
ioa_cfg->reset_retries = 0;
- ioa_cfg->ioa_is_dead = 1;
+ for (i = 0; i < ioa_cfg->hrrq_num; i++) {
+ spin_lock(&ioa_cfg->hrrq[i]._lock);
+ ioa_cfg->hrrq[i].ioa_is_dead = 1;
+ spin_unlock(&ioa_cfg->hrrq[i]._lock);
+ }
+ wmb();
if (ioa_cfg->in_ioa_bringdown) {
ioa_cfg->reset_cmd = NULL;
@@ -8396,8 +8503,16 @@ static void ipr_initiate_ioa_reset(struc
*/
static int ipr_reset_freeze(struct ipr_cmnd *ipr_cmd)
{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ int i;
+
/* Disallow new interrupts, avoid loop */
- ipr_cmd->ioa_cfg->allow_interrupts = 0;
+ for (i = 0; i < ioa_cfg->hrrq_num; i++) {
+ spin_lock(&ioa_cfg->hrrq[i]._lock);
+ ioa_cfg->hrrq[i].allow_interrupts = 0;
+ spin_unlock(&ioa_cfg->hrrq[i]._lock);
+ }
+ wmb();
list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_pending_q);
ipr_cmd->done = ipr_reset_ioa_job;
return IPR_RC_JOB_RETURN;
@@ -8455,13 +8570,19 @@ static void ipr_pci_perm_failure(struct
{
unsigned long flags = 0;
struct ipr_ioa_cfg *ioa_cfg = pci_get_drvdata(pdev);
+ int i;
spin_lock_irqsave(ioa_cfg->host->host_lock, flags);
if (ioa_cfg->sdt_state == WAIT_FOR_DUMP)
ioa_cfg->sdt_state = ABORT_DUMP;
ioa_cfg->reset_retries = IPR_NUM_RESET_RELOAD_RETRIES;
ioa_cfg->in_ioa_bringdown = 1;
- ioa_cfg->allow_cmds = 0;
+ for (i = 0; i < ioa_cfg->hrrq_num; i++) {
+ spin_lock(&ioa_cfg->hrrq[i]._lock);
+ ioa_cfg->hrrq[i].allow_cmds = 0;
+ spin_unlock(&ioa_cfg->hrrq[i]._lock);
+ }
+ wmb();
ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE);
spin_unlock_irqrestore(ioa_cfg->host->host_lock, flags);
}
@@ -8522,7 +8643,7 @@ static int __devinit ipr_probe_ioa_part2
wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
spin_lock_irqsave(ioa_cfg->host->host_lock, host_lock_flags);
- if (ioa_cfg->ioa_is_dead) {
+ if (ioa_cfg->hrrq[IPR_INIT_HRRQ].ioa_is_dead) {
rc = -EIO;
} else if (ipr_invalid_adapter(ioa_cfg)) {
if (!ipr_testmode)
@@ -8629,10 +8750,13 @@ static void ipr_free_all_resources(struc
} else
free_irq(pdev->irq, &ioa_cfg->hrrq[0]);
- if (ioa_cfg->intr_flag == IPR_USE_MSI)
+ if (ioa_cfg->intr_flag == IPR_USE_MSI) {
pci_disable_msi(pdev);
- else if (ioa_cfg->intr_flag == IPR_USE_MSIX)
+ ioa_cfg->intr_flag &= ~IPR_USE_MSI;
+ } else if (ioa_cfg->intr_flag == IPR_USE_MSIX) {
pci_disable_msix(pdev);
+ ioa_cfg->intr_flag &= ~IPR_USE_MSIX;
+ }
iounmap(ioa_cfg->hdw_dma_regs);
pci_release_regions(pdev);
@@ -8798,6 +8922,11 @@ static int __devinit ipr_alloc_mem(struc
for (i = 0; i < ioa_cfg->hrrq_num; i++) {
INIT_LIST_HEAD(&ioa_cfg->hrrq[i].hrrq_free_q);
INIT_LIST_HEAD(&ioa_cfg->hrrq[i].hrrq_pending_q);
+ spin_lock_init(&ioa_cfg->hrrq[i]._lock);
+ if (i == 0)
+ ioa_cfg->hrrq[i].lock = ioa_cfg->host->host_lock;
+ else
+ ioa_cfg->hrrq[i].lock = &ioa_cfg->hrrq[i]._lock;
}
if (ipr_alloc_cmd_blks(ioa_cfg))
@@ -9153,9 +9282,9 @@ static int __devinit ipr_test_msi(struct
writel(IPR_PCII_IO_DEBUG_ACKNOWLEDGE, ioa_cfg->regs.sense_interrupt_reg32);
int_reg = readl(ioa_cfg->regs.sense_interrupt_reg);
wait_event_timeout(ioa_cfg->msi_wait_q, ioa_cfg->msi_received, HZ);
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
ipr_mask_and_clear_interrupts(ioa_cfg, ~IPR_PCII_IOA_TRANS_TO_OPER);
- spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
if (!ioa_cfg->msi_received) {
/* MSI test failed */
dev_info(&pdev->dev, "MSI test failed. Falling back to LSI.\n");
@@ -9188,6 +9317,7 @@ static int __devinit ipr_probe_ioa(struc
void __iomem *ipr_regs;
int rc = PCIBIOS_SUCCESSFUL;
volatile u32 mask, uproc, interrupts;
+ unsigned long lock_flags;
ENTER;
@@ -9290,10 +9420,10 @@ static int __devinit ipr_probe_ioa(struc
}
if (ioa_cfg->ipr_chip->intr_type == IPR_USE_MSI &&
- ipr_enable_msix(ioa_cfg) == 0)
+ ipr_enable_msix(ioa_cfg) == 0)
ioa_cfg->intr_flag = IPR_USE_MSIX;
else if (ioa_cfg->ipr_chip->intr_type == IPR_USE_MSI &&
- ipr_enable_msi(ioa_cfg) == 0)
+ ipr_enable_msi(ioa_cfg) == 0)
ioa_cfg->intr_flag = IPR_USE_MSI;
else {
ioa_cfg->intr_flag = IPR_USE_LSI;
@@ -9379,7 +9509,9 @@ static int __devinit ipr_probe_ioa(struc
if (interrupts & IPR_PCII_IOA_UNIT_CHECKED)
ioa_cfg->ioa_unit_checked = 1;
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
ipr_mask_and_clear_interrupts(ioa_cfg, ~IPR_PCII_IOA_TRANS_TO_OPER);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
if (ioa_cfg->intr_flag == IPR_USE_MSI
|| ioa_cfg->intr_flag == IPR_USE_MSIX) {
@@ -9767,7 +9899,7 @@ static int ipr_halt(struct notifier_bloc
list_for_each_entry(ioa_cfg, &ipr_ioa_head, queue) {
spin_lock_irqsave(ioa_cfg->host->host_lock, flags);
- if (!ioa_cfg->allow_cmds) {
+ if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].allow_cmds) {
spin_unlock_irqrestore(ioa_cfg->host->host_lock, flags);
continue;
}
Index: b/drivers/scsi/ipr.h
===================================================================
--- a/drivers/scsi/ipr.h 2012-11-15 20:46:59.000000000 -0600
+++ b/drivers/scsi/ipr.h 2012-11-18 22:34:23.557654419 -0600
@@ -482,17 +482,18 @@ struct ipr_hrr_queue {
struct list_head hrrq_free_q;
struct list_head hrrq_pending_q;
+ spinlock_t _lock;
+ spinlock_t *lock;
volatile u32 toggle_bit;
u32 size;
u32 min_cmd_id;
u32 max_cmd_id;
+ u8 allow_interrupts:1;
+ u8 ioa_is_dead:1;
+ u8 allow_cmds:1;
};
-#define for_each_hrrq(hrrq, ioa_cfg) \
- for (hrrq = (ioa_cfg)->hrrq; \
- hrrq < ((ioa_cfg)->hrrq + (ioa_cfg)->hrrq_num); hrrq++)
-
/* Command packet structure */
struct ipr_cmd_pkt {
u8 reserved; /* Reserved by IOA */
@@ -1057,6 +1058,10 @@ struct ipr_hostrcb64_fabric_desc {
struct ipr_hostrcb64_config_element elem[1];
}__attribute__((packed, aligned (8)));
+#define for_each_hrrq(hrrq, ioa_cfg) \
+ for (hrrq = (ioa_cfg)->hrrq; \
+ hrrq < ((ioa_cfg)->hrrq + (ioa_cfg)->hrrq_num); hrrq++)
+
#define for_each_fabric_cfg(fabric, cfg) \
for (cfg = (fabric)->elem; \
cfg < ((fabric)->elem + be16_to_cpu((fabric)->num_entries)); \
@@ -1411,13 +1416,10 @@ struct ipr_ioa_cfg {
struct list_head queue;
- u8 allow_interrupts:1;
u8 in_reset_reload:1;
u8 in_ioa_bringdown:1;
u8 ioa_unit_checked:1;
- u8 ioa_is_dead:1;
u8 dump_taken:1;
- u8 allow_cmds:1;
u8 allow_ml_add_del:1;
u8 needs_hard_reset:1;
u8 dual_raid:1;
@@ -1449,7 +1451,7 @@ struct ipr_ioa_cfg {
char trace_start[8];
#define IPR_TRACE_START_LABEL "trace"
struct ipr_trace_entry *trace;
- u32 trace_index:IPR_NUM_TRACE_INDEX_BITS;
+ atomic_t trace_index;
char cfg_table_start[8];
#define IPR_CFG_TBL_START "cfg"
@@ -1476,7 +1478,8 @@ struct ipr_ioa_cfg {
struct ipr_hrr_queue hrrq[IPR_MAX_HRRQ_NUM];
u32 hrrq_num;
- u32 hrrq_index;
+ atomic_t hrrq_index;
+ u16 identify_hrrq_index;
struct ipr_bus_attributes bus_attr[IPR_MAX_NUM_BUSES];
--
* [PATCH 6/7] ipr: Implement block iopoll
2012-11-26 15:55 [PATCH 0/7] Add support for new IBM SAS controllers wenxiong
` (4 preceding siblings ...)
2012-11-26 15:55 ` [PATCH 5/7] ipr: Reduce lock contention wenxiong
@ 2012-11-26 15:55 ` wenxiong
2012-12-17 10:16 ` James Bottomley
2012-11-26 15:55 ` [PATCH 7/7] ipr: Driver version 2.6.0 wenxiong
6 siblings, 1 reply; 12+ messages in thread
From: wenxiong @ 2012-11-26 15:55 UTC (permalink / raw)
To: James.Bottomley; +Cc: linux-scsi, klebers, brking, Wen Xiong
[-- Attachment #1: blk-iopoll --]
[-- Type: text/plain, Size: 11525 bytes --]
This patch implements blk-iopoll in the ipr driver to improve performance.
Signed-off-by: Wen Xiong <wenxiong@linux.vnet.ibm.com>
---
drivers/scsi/ipr.c | 218 +++++++++++++++++++++++++++++++++++++++++------------
drivers/scsi/ipr.h | 6 +
2 files changed, 175 insertions(+), 49 deletions(-)
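For background on the mechanism before the diff: blk-iopoll works much
like NAPI. The hard IRQ handler schedules a softirq poller instead of
reaping completions inline, and the poller calls blk_iopoll_complete()
to fall back to interrupt-driven completion once it drains the queue
under budget. A minimal sketch of that contract (the demo_* names,
including the demo_reap_completions() helper, are placeholders):

#include <linux/blk-iopoll.h>
#include <linux/interrupt.h>

/* hypothetical helper: reap up to 'budget' completions, return count */
static int demo_reap_completions(struct blk_iopoll *iop, int budget);

static int demo_poll(struct blk_iopoll *iop, int budget)
{
	int done = demo_reap_completions(iop, budget);

	if (done < budget)	/* queue drained: re-enable interrupts */
		blk_iopoll_complete(iop);
	return done;
}

static irqreturn_t demo_isr(int irq, void *data)
{
	struct blk_iopoll *iop = data;

	/* sched_prep returns 0 when we may schedule the softirq poller */
	if (!blk_iopoll_sched_prep(iop))
		blk_iopoll_sched(iop);
	return IRQ_HANDLED;
}

/* setup: blk_iopoll_init(iop, weight, demo_poll); blk_iopoll_enable(iop); */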
Index: b/drivers/scsi/ipr.c
===================================================================
--- a/drivers/scsi/ipr.c 2012-11-18 22:35:29.254215152 -0600
+++ b/drivers/scsi/ipr.c 2012-11-19 12:35:23.744213921 -0600
@@ -108,6 +108,7 @@ static const struct ipr_chip_cfg_t ipr_c
.max_cmds = 100,
.cache_line_size = 0x20,
.clear_isr = 1,
+ .iopoll_weight = 0,
{
.set_interrupt_mask_reg = 0x0022C,
.clr_interrupt_mask_reg = 0x00230,
@@ -132,6 +133,7 @@ static const struct ipr_chip_cfg_t ipr_c
.max_cmds = 100,
.cache_line_size = 0x20,
.clear_isr = 1,
+ .iopoll_weight = 0,
{
.set_interrupt_mask_reg = 0x00288,
.clr_interrupt_mask_reg = 0x0028C,
@@ -156,6 +158,7 @@ static const struct ipr_chip_cfg_t ipr_c
.max_cmds = 1000,
.cache_line_size = 0x20,
.clear_isr = 0,
+ .iopoll_weight = 64,
{
.set_interrupt_mask_reg = 0x00010,
.clr_interrupt_mask_reg = 0x00018,
@@ -3560,6 +3563,92 @@ static struct device_attribute ipr_ioa_r
.store = ipr_store_reset_adapter
};
+static int ipr_iopoll(struct blk_iopoll *iop, int budget);
+ /**
+ * ipr_show_iopoll_weight - Show ipr polling mode
+ * @dev: class device struct
+ * @buf: buffer
+ *
+ * Return value:
+ * number of bytes printed to buffer
+ **/
+static ssize_t ipr_show_iopoll_weight(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ unsigned long lock_flags = 0;
+ int len;
+
+ spin_lock_irqsave(shost->host_lock, lock_flags);
+ len = snprintf(buf, PAGE_SIZE, "%d\n", ioa_cfg->iopoll_weight);
+ spin_unlock_irqrestore(shost->host_lock, lock_flags);
+
+ return len;
+}
+
+/**
+ * ipr_store_iopoll_weight - Change the adapter's polling mode
+ * @dev: class device struct
+ * @buf: buffer
+ *
+ * Return value:
+ * number of bytes printed to buffer
+ **/
+static ssize_t ipr_store_iopoll_weight(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct Scsi_Host *shost = class_to_shost(dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ u32 user_iopoll_weight;
+ unsigned long lock_flags = 0;
+ int i;
+
+ if (!ioa_cfg->sis64) {
+ dev_info(&ioa_cfg->pdev->dev, "blk-iopoll not supported on this adapter\n");
+ return -EINVAL;
+ }
+ user_iopoll_weight = simple_strtoul(buf, NULL, 10);
+ if (user_iopoll_weight > 256) {
+ dev_info(&ioa_cfg->pdev->dev, "Invalid blk-iopoll weight. It must be less than 256\n");
+ return -EINVAL;
+ }
+ if (user_iopoll_weight == ioa_cfg->iopoll_weight) {
+ dev_info(&ioa_cfg->pdev->dev, "Current blk-iopoll weight has the same weight\n");
+ return strlen(buf);
+ }
+
+ if (blk_iopoll_enabled && ioa_cfg->iopoll_weight &&
+ ioa_cfg->sis64 && ioa_cfg->nvectors > 1) {
+ for (i = 1; i < ioa_cfg->hrrq_num; i++)
+ blk_iopoll_disable(&ioa_cfg->hrrq[i].iopoll);
+ }
+
+ spin_lock_irqsave(shost->host_lock, lock_flags);
+ ioa_cfg->iopoll_weight = user_iopoll_weight;
+ if (blk_iopoll_enabled && ioa_cfg->iopoll_weight &&
+ ioa_cfg->sis64 && ioa_cfg->nvectors > 1) {
+ for (i = 1; i < ioa_cfg->hrrq_num; i++) {
+ blk_iopoll_init(&ioa_cfg->hrrq[i].iopoll,
+ ioa_cfg->iopoll_weight, ipr_iopoll);
+ blk_iopoll_enable(&ioa_cfg->hrrq[i].iopoll);
+ }
+ }
+ spin_unlock_irqrestore(shost->host_lock, lock_flags);
+
+ return strlen(buf);
+}
+
+static struct device_attribute ipr_iopoll_weight_attr = {
+ .attr = {
+ .name = "iopoll_weight",
+ .mode = S_IRUGO | S_IWUSR,
+ },
+ .show = ipr_show_iopoll_weight,
+ .store = ipr_store_iopoll_weight
+};
+
/**
* ipr_alloc_ucode_buffer - Allocates a microcode download buffer
* @buf_len: buffer length
@@ -3928,6 +4017,7 @@ static struct device_attribute *ipr_ioa_
&ipr_ioa_reset_attr,
&ipr_update_fw_attr,
&ipr_ioa_fw_type_attr,
+ &ipr_iopoll_weight_attr,
NULL,
};
@@ -5218,7 +5308,7 @@ static void ipr_isr_eh(struct ipr_ioa_cf
ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE);
}
-static int __ipr_process_hrrq(struct ipr_hrr_queue *hrr_queue,
+static int ipr_process_hrrq(struct ipr_hrr_queue *hrr_queue, int budget,
struct list_head *doneq)
{
u32 ioasc;
@@ -5260,9 +5350,41 @@ static int __ipr_process_hrrq(struct ipr
hrr_queue->toggle_bit ^= 1u;
}
num_hrrq++;
+ if (budget > 0 && num_hrrq >= budget)
+ break;
}
+
return num_hrrq;
}
+
+static int ipr_iopoll(struct blk_iopoll *iop, int budget)
+{
+ struct ipr_ioa_cfg *ioa_cfg;
+ struct ipr_hrr_queue *hrrq;
+ struct ipr_cmnd *ipr_cmd, *temp;
+ unsigned long hrrq_flags;
+ int completed_ops;
+ LIST_HEAD(doneq);
+
+ hrrq = container_of(iop, struct ipr_hrr_queue, iopoll);
+ ioa_cfg = hrrq->ioa_cfg;
+
+ spin_lock_irqsave(hrrq->lock, hrrq_flags);
+ completed_ops = ipr_process_hrrq(hrrq, budget, &doneq);
+
+ if (completed_ops < budget)
+ blk_iopoll_complete(iop);
+ spin_unlock_irqrestore(hrrq->lock, hrrq_flags);
+
+ list_for_each_entry_safe(ipr_cmd, temp, &doneq, queue) {
+ list_del(&ipr_cmd->queue);
+ del_timer(&ipr_cmd->timer);
+ ipr_cmd->fast_done(ipr_cmd);
+ }
+
+ return completed_ops;
+}
+
/**
* ipr_isr - Interrupt service routine
* @irq: irq number
@@ -5277,8 +5399,6 @@ static irqreturn_t ipr_isr(int irq, void
struct ipr_ioa_cfg *ioa_cfg = hrrq->ioa_cfg;
unsigned long hrrq_flags = 0;
u32 int_reg = 0;
- u32 ioasc;
- u16 cmd_index;
int num_hrrq = 0;
int irq_none = 0;
struct ipr_cmnd *ipr_cmd, *temp;
@@ -5293,60 +5413,30 @@ static irqreturn_t ipr_isr(int irq, void
}
while (1) {
- ipr_cmd = NULL;
-
- while ((be32_to_cpu(*hrrq->hrrq_curr) & IPR_HRRQ_TOGGLE_BIT) ==
- hrrq->toggle_bit) {
-
- cmd_index = (be32_to_cpu(*hrrq->hrrq_curr) &
- IPR_HRRQ_REQ_RESP_HANDLE_MASK) >> IPR_HRRQ_REQ_RESP_HANDLE_SHIFT;
-
- if (unlikely(cmd_index > hrrq->max_cmd_id ||
- cmd_index < hrrq->min_cmd_id)) {
- ipr_isr_eh(ioa_cfg,
- "Invalid response handle from IOA: ",
- cmd_index);
- rc = IRQ_HANDLED;
- goto unlock_out;
- }
-
- ipr_cmd = ioa_cfg->ipr_cmnd_list[cmd_index];
- ioasc = be32_to_cpu(ipr_cmd->s.ioasa.hdr.ioasc);
-
- ipr_trc_hook(ipr_cmd, IPR_TRACE_FINISH, ioasc);
-
- list_move_tail(&ipr_cmd->queue, &doneq);
-
- rc = IRQ_HANDLED;
-
- if (hrrq->hrrq_curr < hrrq->hrrq_end) {
- hrrq->hrrq_curr++;
- } else {
- hrrq->hrrq_curr = hrrq->hrrq_start;
- hrrq->toggle_bit ^= 1u;
- }
- }
+ if (ipr_process_hrrq(hrrq, -1, &doneq)) {
+ rc = IRQ_HANDLED;
- if (ipr_cmd && !ioa_cfg->clear_isr)
- break;
+ if (!ioa_cfg->clear_isr)
+ break;
- if (ipr_cmd != NULL) {
/* Clear the PCI interrupt */
num_hrrq = 0;
do {
- writel(IPR_PCII_HRRQ_UPDATED, ioa_cfg->regs.clr_interrupt_reg32);
+ writel(IPR_PCII_HRRQ_UPDATED,
+ ioa_cfg->regs.clr_interrupt_reg32);
int_reg = readl(ioa_cfg->regs.sense_interrupt_reg32);
} while (int_reg & IPR_PCII_HRRQ_UPDATED &&
- num_hrrq++ < IPR_MAX_HRRQ_RETRIES);
+ num_hrrq++ < IPR_MAX_HRRQ_RETRIES);
} else if (rc == IRQ_NONE && irq_none == 0) {
int_reg = readl(ioa_cfg->regs.sense_interrupt_reg32);
irq_none++;
} else if (num_hrrq == IPR_MAX_HRRQ_RETRIES &&
int_reg & IPR_PCII_HRRQ_UPDATED) {
- ipr_isr_eh(ioa_cfg, "Error clearing HRRQ: ", num_hrrq);
+ ipr_isr_eh(ioa_cfg,
+ "Error clearing HRRQ: ", num_hrrq);
rc = IRQ_HANDLED;
- goto unlock_out;
+ break;
} else
break;
}
@@ -5354,7 +5444,6 @@ static irqreturn_t ipr_isr(int irq, void
if (unlikely(rc == IRQ_NONE))
rc = ipr_handle_other_interrupt(ioa_cfg, int_reg);
-unlock_out:
spin_unlock_irqrestore(hrrq->lock, hrrq_flags);
list_for_each_entry_safe(ipr_cmd, temp, &doneq, queue) {
list_del(&ipr_cmd->queue);
@@ -5375,6 +5464,7 @@ unlock_out:
static irqreturn_t ipr_isr_mhrrq(int irq, void *devp)
{
struct ipr_hrr_queue *hrrq = (struct ipr_hrr_queue *)devp;
+ struct ipr_ioa_cfg *ioa_cfg = hrrq->ioa_cfg;
unsigned long hrrq_flags = 0;
struct ipr_cmnd *ipr_cmd, *temp;
irqreturn_t rc = IRQ_NONE;
@@ -5388,11 +5478,22 @@ static irqreturn_t ipr_isr_mhrrq(int irq
return IRQ_NONE;
}
- if ((be32_to_cpu(*hrrq->hrrq_curr) & IPR_HRRQ_TOGGLE_BIT) ==
- hrrq->toggle_bit)
+ if (blk_iopoll_enabled && ioa_cfg->iopoll_weight &&
+ ioa_cfg->sis64 && ioa_cfg->nvectors > 1) {
+ if ((be32_to_cpu(*hrrq->hrrq_curr) & IPR_HRRQ_TOGGLE_BIT) ==
+ hrrq->toggle_bit) {
+ if (!blk_iopoll_sched_prep(&hrrq->iopoll))
+ blk_iopoll_sched(&hrrq->iopoll);
+ spin_unlock_irqrestore(hrrq->lock, hrrq_flags);
+ return IRQ_HANDLED;
+ }
+ } else {
+ if ((be32_to_cpu(*hrrq->hrrq_curr) & IPR_HRRQ_TOGGLE_BIT) ==
+ hrrq->toggle_bit)
- if (__ipr_process_hrrq(hrrq, &doneq))
- rc = IRQ_HANDLED;
+ if (ipr_process_hrrq(hrrq, -1, &doneq))
+ rc = IRQ_HANDLED;
+ }
spin_unlock_irqrestore(hrrq->lock, hrrq_flags);
@@ -9689,7 +9790,7 @@ static int __devinit ipr_probe(struct pc
const struct pci_device_id *dev_id)
{
struct ipr_ioa_cfg *ioa_cfg;
- int rc;
+ int rc, i;
rc = ipr_probe_ioa(pdev, dev_id);
@@ -9736,6 +9837,17 @@ static int __devinit ipr_probe(struct pc
scsi_add_device(ioa_cfg->host, IPR_IOA_BUS, IPR_IOA_TARGET, IPR_IOA_LUN);
ioa_cfg->allow_ml_add_del = 1;
ioa_cfg->host->max_channel = IPR_VSET_BUS;
+ ioa_cfg->iopoll_weight = ioa_cfg->chip_cfg->iopoll_weight;
+
+ if (blk_iopoll_enabled && ioa_cfg->iopoll_weight &&
+ ioa_cfg->sis64 && ioa_cfg->nvectors > 1) {
+ for (i = 1; i < ioa_cfg->hrrq_num; i++) {
+ blk_iopoll_init(&ioa_cfg->hrrq[i].iopoll,
+ ioa_cfg->iopoll_weight, ipr_iopoll);
+ blk_iopoll_enable(&ioa_cfg->hrrq[i].iopoll);
+ }
+ }
+
schedule_work(&ioa_cfg->work_q);
return 0;
}
@@ -9754,8 +9866,16 @@ static void ipr_shutdown(struct pci_dev
{
struct ipr_ioa_cfg *ioa_cfg = pci_get_drvdata(pdev);
unsigned long lock_flags = 0;
+ int i;
spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ if (blk_iopoll_enabled && ioa_cfg->iopoll_weight &&
+ ioa_cfg->sis64 && ioa_cfg->nvectors > 1) {
+ ioa_cfg->iopoll_weight = 0;
+ for (i = 1; i < ioa_cfg->hrrq_num; i++)
+ blk_iopoll_disable(&ioa_cfg->hrrq[i].iopoll);
+ }
+
while (ioa_cfg->in_reset_reload) {
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
Index: b/drivers/scsi/ipr.h
===================================================================
--- a/drivers/scsi/ipr.h 2012-11-18 22:34:23.557654419 -0600
+++ b/drivers/scsi/ipr.h 2012-11-19 12:18:58.263935158 -0600
@@ -32,6 +32,7 @@
#include <linux/libata.h>
#include <linux/list.h>
#include <linux/kref.h>
+#include <linux/blk-iopoll.h>
#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
@@ -492,6 +493,8 @@ struct ipr_hrr_queue {
u8 allow_interrupts:1;
u8 ioa_is_dead:1;
u8 allow_cmds:1;
+
+ struct blk_iopoll iopoll;
};
/* Command packet structure */
@@ -1348,6 +1351,7 @@ struct ipr_chip_cfg_t {
u16 max_cmds;
u8 cache_line_size;
u8 clear_isr;
+ u32 iopoll_weight;
struct ipr_interrupt_offsets regs;
};
@@ -1534,6 +1538,8 @@ struct ipr_ioa_cfg {
char desc[22];
} vectors_info[IPR_MAX_MSIX_VECTORS];
+ u32 iopoll_weight;
+
}; /* struct ipr_ioa_cfg */
struct ipr_cmnd {
--
* [PATCH 7/7] ipr: Driver version 2.6.0
2012-11-26 15:55 [PATCH 0/7] Add support for new IBM SAS controllers wenxiong
` (5 preceding siblings ...)
2012-11-26 15:55 ` [PATCH 6/7] ipr: Implement block iopoll wenxiong
@ 2012-11-26 15:55 ` wenxiong
6 siblings, 0 replies; 12+ messages in thread
From: wenxiong @ 2012-11-26 15:55 UTC (permalink / raw)
To: James.Bottomley; +Cc: linux-scsi, klebers, brking, Wen Xiong
[-- Attachment #1: version_bump --]
[-- Type: text/plain, Size: 666 bytes --]
Bump driver version.
Signed-off-by: Wen Xiong <wenxiong@linux.vnet.ibm.com>
---
drivers/scsi/ipr.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
Index: b/drivers/scsi/ipr.h
===================================================================
--- a/drivers/scsi/ipr.h 2012-11-19 12:18:58.263935158 -0600
+++ b/drivers/scsi/ipr.h 2012-11-19 12:35:31.355166443 -0600
@@ -39,8 +39,8 @@
/*
* Literals
*/
-#define IPR_DRIVER_VERSION "2.5.4"
-#define IPR_DRIVER_DATE "(July 11, 2012)"
+#define IPR_DRIVER_VERSION "2.6.0"
+#define IPR_DRIVER_DATE "(November 16, 2012)"
/*
* IPR_MAX_CMD_PER_LUN: This defines the maximum number of outstanding
--
* Re: [PATCH 2/7] ipr: Handler ID memory allocation failure
2012-11-26 15:55 ` [PATCH 2/7] ipr: Handler ID memory allocation failure wenxiong
@ 2012-11-30 15:30 ` James Bottomley
2012-12-03 21:25 ` wenxiong
0 siblings, 1 reply; 12+ messages in thread
From: James Bottomley @ 2012-11-30 15:30 UTC (permalink / raw)
To: wenxiong; +Cc: linux-scsi, klebers, brking
On Mon, 2012-11-26 at 09:55 -0600, wenxiong@linux.vnet.ibm.com wrote:
> plain text document attachment (fix_mem_alloc_fail)
> Add code to handle memory allocation failures at module load time.
>
> Reported-by: Fengguang Wu <fengguang.wu@intel.com>
>
> Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
The signoff chain on a lot of these patches is incorrect.
This patch only has Brian's signoff and it needs yours because you sent
it to me. If it is really Brian's patch, it needs a
From: Brian King <brking@linux.vnet.ibm.com>
To get the authorship right.
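Concretely, the body of the resent patch would then begin with lines
like these (illustrative):

From: Brian King <brking@linux.vnet.ibm.com>

Add code to handle memory allocation failures at module load time.

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Wen Xiong <wenxiong@linux.vnet.ibm.com>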
James
* Re: [PATCH 2/7] ipr: Handler ID memory allocation failure
2012-11-30 15:30 ` James Bottomley
@ 2012-12-03 21:25 ` wenxiong
0 siblings, 0 replies; 12+ messages in thread
From: wenxiong @ 2012-12-03 21:25 UTC (permalink / raw)
To: James Bottomley; +Cc: linux-scsi, klebers, brking
Quoting James Bottomley <James.Bottomley@hansenpartnership.com>:
> On Mon, 2012-11-26 at 09:55 -0600, wenxiong@linux.vnet.ibm.com wrote:
>> plain text document attachment (fix_mem_alloc_fail)
>> Add code to handle memory allocation failures at module load time.
>>
>> Reported-by: Fengguang Wu <fengguang.wu@intel.com>
>>
>> Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
>
> The signoff chain on a lot of these patches is incorrect.
>
> This patch only has Brian's signoff and it needs yours because you sent
> it to me. If it is really Brian's patch, it needs a
>
> From: Brian King <brking@linux.vnet.ibm.com>
>
> To get the authorship right.
>
> James
Hi James,
Thanks! I just re-sent the whole series with your suggestions applied.
Thanks,
Wendy
* Re: [PATCH 6/7] ipr: Implement block iopoll
2012-11-26 15:55 ` [PATCH 6/7] ipr: Implement block iopoll wenxiong
@ 2012-12-17 10:16 ` James Bottomley
0 siblings, 0 replies; 12+ messages in thread
From: James Bottomley @ 2012-12-17 10:16 UTC (permalink / raw)
To: wenxiong; +Cc: linux-scsi, klebers, brking
On Mon, 2012-11-26 at 09:55 -0600, wenxiong@linux.vnet.ibm.com wrote:
> +static ssize_t ipr_store_iopoll_weight(struct device *dev,
> + struct device_attribute *attr,
> + const char *buf, size_t count)
> +{
> + struct Scsi_Host *shost = class_to_shost(dev);
> + struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
> + u32 user_iopoll_weight;
> + unsigned long lock_flags = 0;
> + int i;
> +
> + if (!ioa_cfg->sis64) {
> + dev_info(&ioa_cfg->pdev->dev, "blk-iopoll not supported on this adapter\n");
> + return -EINVAL;
> + }
> + user_iopoll_weight = simple_strtoul(buf, NULL, 10);
Checkpatch should have told you not to do this:
WARNING: simple_strtoul is obsolete, use kstrtoul instead
#89: FILE: drivers/scsi/ipr.c:3612:
+ user_iopoll_weight = simple_strtoul(buf, NULL, 10);
Could you please resend with the corrected primitive.
Thanks,
James
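The kstrtoul-family replacement being requested would look roughly like
this (a sketch of the fix, not the actual resend):

	u32 user_iopoll_weight;
	int rc;

	rc = kstrtou32(buf, 10, &user_iopoll_weight);
	if (rc)
		return rc;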
Thread overview: 12+ messages
2012-11-26 15:55 [PATCH 0/7] Add support for new IBM SAS controllers wenxiong
2012-11-26 15:55 ` [PATCH 1/7] ipr: Add sereral new CCIN definitions for new adapters support wenxiong
2012-11-26 15:55 ` [PATCH 2/7] ipr: Handler ID memory allocation failure wenxiong
2012-11-30 15:30 ` James Bottomley
2012-12-03 21:25 ` wenxiong
2012-11-26 15:55 ` [PATCH 3/7] ipr: Resource path error logging cleanup wenxiong
2012-11-26 15:55 ` [PATCH 4/7] ipr: Add support for MSI-X and distributed completion wenxiong
2012-11-26 15:55 ` [PATCH 5/7] ipr: Reduce lock contention wenxiong
2012-11-26 15:55 ` [PATCH 6/7] ipr: Implement block iopoll wenxiong
2012-12-17 10:16 ` James Bottomley
2012-11-26 15:55 ` [PATCH 7/7] ipr: Driver version 2.6.0 wenxiong
[not found] <20121119213657.152897506@linux.vnet.ibm.com>
2012-11-20 21:58 ` [PATCH 0/7] Add support for new IBM SAS controllers Brian King