* [patch 00/10] zfcp updates for 2.6.36 merge window
@ 2010-07-16 13:37 Christof Schmitt
2010-07-16 13:37 ` [patch 01/10] zfcp: Use memdup_user and kstrdup Christof Schmitt
` (9 more replies)
0 siblings, 10 replies; 13+ messages in thread
From: Christof Schmitt @ 2010-07-16 13:37 UTC (permalink / raw)
To: James Bottomley; +Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens
James,
here is the series of zfcp updates for the 2.6.36 merge window. The
changes include code simplifications, cleanup, FC transport events,
experimental DIF/DIX support and logging improvements on errors.
The patches apply cleanly on top of the last series of fixes I sent
last week:
http://marc.info/?l=linux-scsi&m=127857607617662&w=2
The last patch in the series "zfcp: Trigger logging in the FCP channel
on qdio error conditions" depends on the patch "[S390] cio: CHSC SIOSL
Support" that is already part of the linux-next tree:
http://git.kernel.org/?p=linux/kernel/git/next/linux-next.git;a=commit;h=c5a0bc55f25e0002aaed211aef533e86c9aaff2a
Christof
* [patch 01/10] zfcp: Use memdup_user and kstrdup
2010-07-16 13:37 [patch 00/10] zfcp updates for 2.6.36 merge window Christof Schmitt
@ 2010-07-16 13:37 ` Christof Schmitt
2010-07-16 13:37 ` [patch 02/10] zfcp: Remove SCSI device when removing unit Christof Schmitt
` (8 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Christof Schmitt @ 2010-07-16 13:37 UTC (permalink / raw)
To: James Bottomley
Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens,
Christof Schmitt
[-- Attachment #1: 704-zfcp-dup.diff --]
[-- Type: text/plain, Size: 1909 bytes --]
From: Christof Schmitt <christof.schmitt@de.ibm.com>
Use the functions memdup_user and kstrdup to allocate memory and copy
the data in one step, saving some lines of code.
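For illustration, a minimal sketch of the two replacements (the function
names below are invented for the example and are not part of zfcp):

    #include <linux/err.h>
    #include <linux/slab.h>
    #include <linux/string.h>

    static void *dup_ioctl_arg(const void __user *ubuf, size_t len)
    {
            /* kmalloc() + copy_from_user() in one call; returns an
             * ERR_PTR() value on failure, check with IS_ERR()/PTR_ERR() */
            return memdup_user(ubuf, len);
    }

    static char *dup_device_string(const char *devstr)
    {
            /* kmalloc() + strcpy() in one call; NULL on allocation failure */
            return kstrdup(devstr, GFP_KERNEL);
    }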
Reviewed-by: Swen Schillig <swen@vnet.ibm.com>
Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
---
drivers/s390/scsi/zfcp_aux.c | 4 +---
drivers/s390/scsi/zfcp_cfdc.c | 12 +++---------
2 files changed, 4 insertions(+), 12 deletions(-)
diff -urpN linux-2.6/drivers/s390/scsi/zfcp_aux.c linux-2.6-patched/drivers/s390/scsi/zfcp_aux.c
--- linux-2.6/drivers/s390/scsi/zfcp_aux.c 2010-07-15 12:21:48.000000000 +0200
+++ linux-2.6-patched/drivers/s390/scsi/zfcp_aux.c 2010-07-15 12:22:18.000000000 +0200
@@ -98,13 +98,11 @@ static void __init zfcp_init_device_setu
u64 wwpn, lun;
/* duplicate devstr and keep the original for sysfs presentation*/
- str_saved = kmalloc(strlen(devstr) + 1, GFP_KERNEL);
+ str_saved = kstrdup(devstr, GFP_KERNEL);
str = str_saved;
if (!str)
return;
- strcpy(str, devstr);
-
token = strsep(&str, ",");
if (!token || strlen(token) >= ZFCP_BUS_ID_SIZE)
goto err_out;
diff -urpN linux-2.6/drivers/s390/scsi/zfcp_cfdc.c linux-2.6-patched/drivers/s390/scsi/zfcp_cfdc.c
--- linux-2.6/drivers/s390/scsi/zfcp_cfdc.c 2010-07-15 12:21:48.000000000 +0200
+++ linux-2.6-patched/drivers/s390/scsi/zfcp_cfdc.c 2010-07-15 12:22:18.000000000 +0200
@@ -189,18 +189,12 @@ static long zfcp_cfdc_dev_ioctl(struct f
if (!fsf_cfdc)
return -ENOMEM;
- data = kmalloc(sizeof(struct zfcp_cfdc_data), GFP_KERNEL);
- if (!data) {
- retval = -ENOMEM;
+ data = memdup_user(data_user, sizeof(*data_user));
+ if (IS_ERR(data)) {
+ retval = PTR_ERR(data);
goto no_mem_sense;
}
- retval = copy_from_user(data, data_user, sizeof(*data));
- if (retval) {
- retval = -EFAULT;
- goto free_buffer;
- }
-
if (data->signature != 0xCFDCACDF) {
retval = -EINVAL;
goto free_buffer;
* [patch 02/10] zfcp: Remove SCSI device when removing unit
2010-07-16 13:37 [patch 00/10] zfcp updates for 2.6.36 merge window Christof Schmitt
2010-07-16 13:37 ` [patch 01/10] zfcp: Use memdup_user and kstrdup Christof Schmitt
@ 2010-07-16 13:37 ` Christof Schmitt
2010-07-27 20:37 ` James Bottomley
2010-07-16 13:37 ` [patch 03/10] zfcp: Use correct width for timer_interval field Christof Schmitt
` (7 subsequent siblings)
9 siblings, 1 reply; 13+ messages in thread
From: Christof Schmitt @ 2010-07-16 13:37 UTC (permalink / raw)
To: James Bottomley
Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens,
Christof Schmitt
[-- Attachment #1: 705-zfcp-unit-removal.diff --]
[-- Type: text/plain, Size: 2043 bytes --]
From: Christof Schmitt <christof.schmitt@de.ibm.com>
Configuring a LUN in zfcp also creates a SCSI device. For
consistency, it makes sense to remove the SCSI device when the LUN is
deconfigured. Replace the flush_work call with a call to
scsi_remove_device: scsi_remove_device also takes the scan_mutex and
thereby synchronizes with any long-running device discovery.
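A sketch of the lookup-and-remove sequence this patch uses (error
handling trimmed, the helper name is invented for the example):

    #include <scsi/scsi_device.h>

    /* Remove the SCSI device belonging to a LUN that is being
     * deconfigured. scsi_remove_device() takes the scan_mutex internally,
     * so the removal is serialized with any running SCSI scan. */
    static void remove_sdev_for_lun(struct Scsi_Host *shost,
                                    unsigned int starget_id, unsigned int lun)
    {
            struct scsi_device *sdev;

            sdev = scsi_device_lookup(shost, 0, starget_id, lun);
            if (sdev) {                        /* lookup took a reference */
                    scsi_remove_device(sdev);
                    scsi_device_put(sdev);     /* drop that reference */
            }
    }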
Reviewed-by: Swen Schillig <swen@vnet.ibm.com>
Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
---
drivers/s390/scsi/zfcp_def.h | 1 +
drivers/s390/scsi/zfcp_scsi.c | 1 +
drivers/s390/scsi/zfcp_sysfs.c | 10 ++++++++--
3 files changed, 10 insertions(+), 2 deletions(-)
--- a/drivers/s390/scsi/zfcp_def.h
+++ b/drivers/s390/scsi/zfcp_def.h
@@ -212,6 +212,7 @@ struct zfcp_port {
struct work_struct test_link_work;
struct work_struct rport_work;
enum { RPORT_NONE, RPORT_ADD, RPORT_DEL } rport_task;
+ unsigned int starget_id;
};
struct zfcp_unit {
--- a/drivers/s390/scsi/zfcp_scsi.c
+++ b/drivers/s390/scsi/zfcp_scsi.c
@@ -564,6 +564,7 @@ static void zfcp_scsi_rport_register(str
rport->maxframe_size = port->maxframe_size;
rport->supported_classes = port->supported_classes;
port->rport = rport;
+ port->starget_id = rport->scsi_target_id;
zfcp_scsi_queue_unit_register(port);
}
--- a/drivers/s390/scsi/zfcp_sysfs.c
+++ b/drivers/s390/scsi/zfcp_sysfs.c
@@ -290,6 +290,7 @@ static ssize_t zfcp_sysfs_unit_remove_st
struct zfcp_unit *unit;
u64 fcp_lun;
int retval = -EINVAL;
+ struct scsi_device *sdev;
if (!(port && get_device(&port->dev)))
return -EBUSY;
@@ -303,8 +304,13 @@ static ssize_t zfcp_sysfs_unit_remove_st
else
retval = 0;
- /* wait for possible timeout during SCSI probe */
- flush_work(&unit->scsi_work);
+ sdev = scsi_device_lookup(port->adapter->scsi_host, 0,
+ port->starget_id,
+ scsilun_to_int((struct scsi_lun *)&fcp_lun));
+ if (sdev) {
+ scsi_remove_device(sdev);
+ scsi_device_put(sdev);
+ }
write_lock_irq(&port->unit_list_lock);
list_del(&unit->list);
* [patch 03/10] zfcp: Use correct width for timer_interval field
2010-07-16 13:37 [patch 00/10] zfcp updates for 2.6.36 merge window Christof Schmitt
2010-07-16 13:37 ` [patch 01/10] zfcp: Use memdup_user and kstrdup Christof Schmitt
2010-07-16 13:37 ` [patch 02/10] zfcp: Remove SCSI device when removing unit Christof Schmitt
@ 2010-07-16 13:37 ` Christof Schmitt
2010-07-16 13:37 ` [patch 04/10] zfcp: Cleanup function parameters for sbal value Christof Schmitt
` (6 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Christof Schmitt @ 2010-07-16 13:37 UTC (permalink / raw)
To: James Bottomley
Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens,
Christof Schmitt
[-- Attachment #1: 706-zfcp-timer-interval.diff --]
[-- Type: text/plain, Size: 1156 bytes --]
From: Christof Schmitt <christof.schmitt@de.ibm.com>
The timer_interval field is 14 bits wide. Introduce a define for
properly masking the value.
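The masking itself is a one-liner; as a sketch of the intent:

    #include <linux/types.h>

    #define ZFCP_FSF_TIMER_INT_MASK 0x3FFF   /* low 14 bits are defined */

    static inline u32 fsf_timer_ticks(u32 timer_interval)
    {
            /* ignore the undefined bits above the 14-bit field */
            return timer_interval & ZFCP_FSF_TIMER_INT_MASK;
    }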
Reviewed-by: Swen Schillig <swen@vnet.ibm.com>
Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
---
drivers/s390/scsi/zfcp_fsf.c | 2 +-
drivers/s390/scsi/zfcp_fsf.h | 2 ++
2 files changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -495,7 +495,7 @@ static int zfcp_fsf_exchange_config_eval
fc_host_supported_classes(shost) = FC_COS_CLASS2 | FC_COS_CLASS3;
adapter->hydra_version = bottom->adapter_type;
- adapter->timer_ticks = bottom->timer_interval;
+ adapter->timer_ticks = bottom->timer_interval & ZFCP_FSF_TIMER_INT_MASK;
adapter->stat_read_buf_num = max(bottom->status_read_buf_num,
(u16)FSF_STATUS_READS_RECOM);
--- a/drivers/s390/scsi/zfcp_fsf.h
+++ b/drivers/s390/scsi/zfcp_fsf.h
@@ -352,6 +352,8 @@ struct fsf_qtcb_bottom_support {
u8 els[256];
} __attribute__ ((packed));
+#define ZFCP_FSF_TIMER_INT_MASK 0x3FFF
+
struct fsf_qtcb_bottom_config {
u32 lic_version;
u32 feature_selection;
* [patch 04/10] zfcp: Cleanup function parameters for sbal value.
2010-07-16 13:37 [patch 00/10] zfcp updates for 2.6.36 merge window Christof Schmitt
` (2 preceding siblings ...)
2010-07-16 13:37 ` [patch 03/10] zfcp: Use correct width for timer_interval field Christof Schmitt
@ 2010-07-16 13:37 ` Christof Schmitt
2010-07-16 13:37 ` [patch 05/10] zfcp: Cleanup QDIO attachment and improve processing Christof Schmitt
` (5 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Christof Schmitt @ 2010-07-16 13:37 UTC (permalink / raw)
To: James Bottomley
Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens,
Swen Schillig, Christof Schmitt
[-- Attachment #1: 707-zfcp-sbal-cleanup.diff --]
[-- Type: text/plain, Size: 9044 bytes --]
From: Swen Schillig <swen@vnet.ibm.com>
Many functions take the number of SBALs as a parameter, although the
value is almost always the same. Remove this parameter and set the
SBAL limit explicitly only where a non-standard value is required. In
addition, the "Oversize data package" warning message is replaced with
a BUG_ON() statement asserting the limits defined and requested by
zfcp.
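The limit computation wraps around the circular request queue; a small
standalone sketch of the arithmetic (plain C, names invented for the
example, 128 stands in for QDIO_MAX_BUFFERS_PER_Q):

    #include <stdio.h>

    #define QUEUE_SIZE 128

    /* index of the last buffer a request may use, given the index of its
     * first buffer, the number of free buffers and the per-request cap */
    static int last_usable_index(int first, int free, int max_sbals)
    {
            int count = free < max_sbals ? free : max_sbals;

            return (first + count - 1) % QUEUE_SIZE;
    }

    int main(void)
    {
            /* a request starting at index 126 with a cap of 3 wraps to 0 */
            printf("%d\n", last_usable_index(126, 10, 3));
            return 0;
    }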
Signed-off-by: Swen Schillig <swen@vnet.ibm.com>
Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
---
drivers/s390/scsi/zfcp_ext.h | 2 +-
drivers/s390/scsi/zfcp_fsf.c | 40 ++++++++++++++--------------------------
drivers/s390/scsi/zfcp_fsf.h | 8 --------
drivers/s390/scsi/zfcp_qdio.c | 15 ++-------------
drivers/s390/scsi/zfcp_qdio.h | 28 ++++++++++++++++++++++++++++
drivers/s390/scsi/zfcp_scsi.c | 4 ++--
6 files changed, 47 insertions(+), 50 deletions(-)
--- a/drivers/s390/scsi/zfcp_ext.h
+++ b/drivers/s390/scsi/zfcp_ext.h
@@ -146,7 +146,7 @@ extern void zfcp_qdio_destroy(struct zfc
extern int zfcp_qdio_sbal_get(struct zfcp_qdio *);
extern int zfcp_qdio_send(struct zfcp_qdio *, struct zfcp_qdio_req *);
extern int zfcp_qdio_sbals_from_sg(struct zfcp_qdio *, struct zfcp_qdio_req *,
- struct scatterlist *, int);
+ struct scatterlist *);
extern int zfcp_qdio_open(struct zfcp_qdio *);
extern void zfcp_qdio_close(struct zfcp_qdio *);
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -959,8 +959,7 @@ static void zfcp_fsf_setup_ct_els_unchai
static int zfcp_fsf_setup_ct_els_sbals(struct zfcp_fsf_req *req,
struct scatterlist *sg_req,
- struct scatterlist *sg_resp,
- int max_sbals)
+ struct scatterlist *sg_resp)
{
struct zfcp_adapter *adapter = req->adapter;
u32 feat = adapter->adapter_features;
@@ -983,15 +982,14 @@ static int zfcp_fsf_setup_ct_els_sbals(s
return 0;
}
- bytes = zfcp_qdio_sbals_from_sg(adapter->qdio, &req->qdio_req,
- sg_req, max_sbals);
+ bytes = zfcp_qdio_sbals_from_sg(adapter->qdio, &req->qdio_req, sg_req);
if (bytes <= 0)
return -EIO;
req->qtcb->bottom.support.req_buf_length = bytes;
zfcp_qdio_skip_to_last_sbale(&req->qdio_req);
bytes = zfcp_qdio_sbals_from_sg(adapter->qdio, &req->qdio_req,
- sg_resp, max_sbals);
+ sg_resp);
req->qtcb->bottom.support.resp_buf_length = bytes;
if (bytes <= 0)
return -EIO;
@@ -1002,11 +1000,11 @@ static int zfcp_fsf_setup_ct_els_sbals(s
static int zfcp_fsf_setup_ct_els(struct zfcp_fsf_req *req,
struct scatterlist *sg_req,
struct scatterlist *sg_resp,
- int max_sbals, unsigned int timeout)
+ unsigned int timeout)
{
int ret;
- ret = zfcp_fsf_setup_ct_els_sbals(req, sg_req, sg_resp, max_sbals);
+ ret = zfcp_fsf_setup_ct_els_sbals(req, sg_req, sg_resp);
if (ret)
return ret;
@@ -1046,8 +1044,7 @@ int zfcp_fsf_send_ct(struct zfcp_fc_wka_
}
req->status |= ZFCP_STATUS_FSFREQ_CLEANUP;
- ret = zfcp_fsf_setup_ct_els(req, ct->req, ct->resp,
- ZFCP_FSF_MAX_SBALS_PER_REQ, timeout);
+ ret = zfcp_fsf_setup_ct_els(req, ct->req, ct->resp, timeout);
if (ret)
goto failed_send;
@@ -1143,7 +1140,10 @@ int zfcp_fsf_send_els(struct zfcp_adapte
}
req->status |= ZFCP_STATUS_FSFREQ_CLEANUP;
- ret = zfcp_fsf_setup_ct_els(req, els->req, els->resp, 2, timeout);
+
+ zfcp_qdio_sbal_limit(qdio, &req->qdio_req, 2);
+
+ ret = zfcp_fsf_setup_ct_els(req, els->req, els->resp, timeout);
if (ret)
goto failed_send;
@@ -2259,20 +2259,9 @@ int zfcp_fsf_send_fcp_command_task(struc
zfcp_fc_scsi_to_fcp(fcp_cmnd, scsi_cmnd);
real_bytes = zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req,
- scsi_sglist(scsi_cmnd),
- ZFCP_FSF_MAX_SBALS_PER_REQ);
- if (unlikely(real_bytes < 0)) {
- if (req->qdio_req.sbal_number >= ZFCP_FSF_MAX_SBALS_PER_REQ) {
- dev_err(&adapter->ccw_device->dev,
- "Oversize data package, unit 0x%016Lx "
- "on port 0x%016Lx closed\n",
- (unsigned long long)unit->fcp_lun,
- (unsigned long long)unit->port->wwpn);
- zfcp_erp_unit_shutdown(unit, 0, "fssfct1", req);
- retval = -EINVAL;
- }
+ scsi_sglist(scsi_cmnd));
+ if (unlikely(real_bytes < 0))
goto failed_scsi_cmnd;
- }
retval = zfcp_fsf_req_send(req);
if (unlikely(retval))
@@ -2391,9 +2380,8 @@ struct zfcp_fsf_req *zfcp_fsf_control_fi
bottom->operation_subtype = FSF_CFDC_OPERATION_SUBTYPE;
bottom->option = fsf_cfdc->option;
- bytes = zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req,
- fsf_cfdc->sg,
- ZFCP_FSF_MAX_SBALS_PER_REQ);
+ bytes = zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req, fsf_cfdc->sg);
+
if (bytes != ZFCP_CFDC_MAX_SIZE) {
zfcp_fsf_req_free(req);
goto out;
--- a/drivers/s390/scsi/zfcp_fsf.h
+++ b/drivers/s390/scsi/zfcp_fsf.h
@@ -151,14 +151,6 @@
/* fc service class */
#define FSF_CLASS_3 0x00000003
-/* SBAL chaining */
-#define ZFCP_FSF_MAX_SBALS_PER_REQ 36
-
-/* max. number of (data buffer) SBALEs in largest SBAL chain
- * request ID + QTCB in SBALE 0 + 1 of first SBAL in chain */
-#define ZFCP_FSF_MAX_SBALES_PER_REQ \
- (ZFCP_FSF_MAX_SBALS_PER_REQ * ZFCP_QDIO_MAX_SBALES_PER_SBAL - 2)
-
/* logging space behind QTCB */
#define FSF_QTCB_LOG_SIZE 1024
--- a/drivers/s390/scsi/zfcp_qdio.c
+++ b/drivers/s390/scsi/zfcp_qdio.c
@@ -141,15 +141,6 @@ static void zfcp_qdio_int_resp(struct cc
zfcp_qdio_resp_put_back(qdio, count);
}
-static void zfcp_qdio_sbal_limit(struct zfcp_qdio *qdio,
- struct zfcp_qdio_req *q_req, int max_sbals)
-{
- int count = atomic_read(&qdio->req_q.count);
- count = min(count, max_sbals);
- q_req->sbal_limit = (q_req->sbal_first + count - 1)
- % QDIO_MAX_BUFFERS_PER_Q;
-}
-
static struct qdio_buffer_element *
zfcp_qdio_sbal_chain(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req)
{
@@ -173,6 +164,7 @@ zfcp_qdio_sbal_chain(struct zfcp_qdio *q
/* keep this requests number of SBALs up-to-date */
q_req->sbal_number++;
+ BUG_ON(q_req->sbal_number > ZFCP_QDIO_MAX_SBALS_PER_REQ);
/* start at first SBALE of new SBAL */
q_req->sbale_curr = 0;
@@ -213,14 +205,11 @@ static void zfcp_qdio_undo_sbals(struct
* Returns: number of bytes, or error (negativ)
*/
int zfcp_qdio_sbals_from_sg(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req,
- struct scatterlist *sg, int max_sbals)
+ struct scatterlist *sg)
{
struct qdio_buffer_element *sbale;
int bytes = 0;
- /* figure out last allowed SBAL */
- zfcp_qdio_sbal_limit(qdio, q_req, max_sbals);
-
/* set storage-block type for this request */
sbale = zfcp_qdio_sbale_req(qdio, q_req);
sbale->flags |= q_req->sbtype;
--- a/drivers/s390/scsi/zfcp_qdio.h
+++ b/drivers/s390/scsi/zfcp_qdio.h
@@ -19,6 +19,14 @@
/* index of last SBALE (with respect to DMQ bug workaround) */
#define ZFCP_QDIO_LAST_SBALE_PER_SBAL (ZFCP_QDIO_MAX_SBALES_PER_SBAL - 1)
+/* Max SBALS for chaining */
+#define ZFCP_QDIO_MAX_SBALS_PER_REQ 36
+
+/* max. number of (data buffer) SBALEs in largest SBAL chain
+ * request ID + QTCB in SBALE 0 + 1 of first SBAL in chain */
+#define ZFCP_QDIO_MAX_SBALES_PER_REQ \
+ (ZFCP_QDIO_MAX_SBALS_PER_REQ * ZFCP_QDIO_MAX_SBALES_PER_SBAL - 2)
+
/**
* struct zfcp_qdio_queue - qdio queue buffer, zfcp index and free count
* @sbal: qdio buffers
@@ -134,10 +142,14 @@ void zfcp_qdio_req_init(struct zfcp_qdio
unsigned long req_id, u32 sbtype, void *data, u32 len)
{
struct qdio_buffer_element *sbale;
+ int count = min(atomic_read(&qdio->req_q.count),
+ ZFCP_QDIO_MAX_SBALS_PER_REQ);
q_req->sbal_first = q_req->sbal_last = qdio->req_q.first;
q_req->sbal_number = 1;
q_req->sbtype = sbtype;
+ q_req->sbal_limit = (q_req->sbal_first + count - 1)
+ % QDIO_MAX_BUFFERS_PER_Q;
sbale = zfcp_qdio_sbale_req(qdio, q_req);
sbale->addr = (void *) req_id;
@@ -210,4 +222,20 @@ void zfcp_qdio_skip_to_last_sbale(struct
q_req->sbale_curr = ZFCP_QDIO_LAST_SBALE_PER_SBAL;
}
+/**
+ * zfcp_qdio_sbal_limit - set the sbal limit for a request in q_req
+ * @qdio: pointer to struct zfcp_qdio
+ * @q_req: The current zfcp_qdio_req
+ * @max_sbals: maximum number of SBALs allowed
+ */
+static inline
+void zfcp_qdio_sbal_limit(struct zfcp_qdio *qdio,
+ struct zfcp_qdio_req *q_req, int max_sbals)
+{
+ int count = min(atomic_read(&qdio->req_q.count), max_sbals);
+
+ q_req->sbal_limit = (q_req->sbal_first + count - 1) %
+ QDIO_MAX_BUFFERS_PER_Q;
+}
+
#endif /* ZFCP_QDIO_H */
--- a/drivers/s390/scsi/zfcp_scsi.c
+++ b/drivers/s390/scsi/zfcp_scsi.c
@@ -701,11 +701,11 @@ struct zfcp_data zfcp_data = {
.eh_host_reset_handler = zfcp_scsi_eh_host_reset_handler,
.can_queue = 4096,
.this_id = -1,
- .sg_tablesize = ZFCP_FSF_MAX_SBALES_PER_REQ,
+ .sg_tablesize = ZFCP_QDIO_MAX_SBALES_PER_REQ,
.cmd_per_lun = 1,
.use_clustering = 1,
.sdev_attrs = zfcp_sysfs_sdev_attrs,
- .max_sectors = (ZFCP_FSF_MAX_SBALES_PER_REQ * 8),
+ .max_sectors = (ZFCP_QDIO_MAX_SBALES_PER_REQ * 8),
.dma_boundary = ZFCP_QDIO_SBALE_LEN - 1,
.shost_attrs = zfcp_sysfs_shost_attrs,
},
* [patch 05/10] zfcp: Cleanup QDIO attachment and improve processing.
2010-07-16 13:37 [patch 00/10] zfcp updates for 2.6.36 merge window Christof Schmitt
` (3 preceding siblings ...)
2010-07-16 13:37 ` [patch 04/10] zfcp: Cleanup function parameters for sbal value Christof Schmitt
@ 2010-07-16 13:37 ` Christof Schmitt
2010-07-16 13:37 ` [patch 06/10] zfcp: Post events through FC transport class Christof Schmitt
` (4 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Christof Schmitt @ 2010-07-16 13:37 UTC (permalink / raw)
To: James Bottomley
Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens,
Swen Schillig, Christof Schmitt
[-- Attachment #1: 709-zfcp-qdio-cleanup.diff --]
[-- Type: text/plain, Size: 16308 bytes --]
From: Swen Schillig <swen@vnet.ibm.com>
Some definitions and structures in the zfcp QDIO processing are
simplified by removing unneeded variables and processing steps. In
addition, some variables are renamed to make their purpose clearer.
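One example of the simplification, as a sketch based on the
response-handler hunk below (it uses the zfcp helpers visible in this
patch, so it is an illustration rather than a drop-in function):
processed inbound buffers are handed straight back to the device
instead of being accounted in a separate response-queue structure
first.

    static void zfcp_qdio_int_resp_sketch(struct zfcp_qdio *qdio,
                                          struct ccw_device *cdev,
                                          int idx, int count)
    {
            int sbal_no;

            /* evaluate every SBAL the adapter returned */
            for (sbal_no = 0; sbal_no < count; sbal_no++)
                    zfcp_fsf_reqid_check(qdio, (idx + sbal_no) %
                                               QDIO_MAX_BUFFERS_PER_Q);

            /* make the whole range available to the adapter again */
            if (do_QDIO(cdev, QDIO_FLAG_SYNC_INPUT, 0, idx, count))
                    zfcp_erp_adapter_reopen(qdio->adapter, 0, "qdires2",
                                            NULL);
    }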
Signed-off-by: Swen Schillig <swen@vnet.ibm.com>
Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
---
drivers/s390/scsi/zfcp_fsf.c | 10 +-
drivers/s390/scsi/zfcp_qdio.c | 141 ++++++++++++++----------------------------
drivers/s390/scsi/zfcp_qdio.h | 57 +++++-----------
3 files changed, 69 insertions(+), 139 deletions(-)
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -732,7 +732,7 @@ static int zfcp_fsf_req_send(struct zfcp
zfcp_reqlist_add(adapter->req_list, req);
- req->qdio_req.qdio_outb_usage = atomic_read(&qdio->req_q.count);
+ req->qdio_req.qdio_outb_usage = atomic_read(&qdio->req_q_free);
req->issued = get_clock();
if (zfcp_qdio_send(qdio, &req->qdio_req)) {
del_timer(&req->timer);
@@ -2025,7 +2025,7 @@ static void zfcp_fsf_req_trace(struct zf
blktrc.magic = ZFCP_BLK_DRV_DATA_MAGIC;
if (req->status & ZFCP_STATUS_FSFREQ_ERROR)
blktrc.flags |= ZFCP_BLK_REQ_ERROR;
- blktrc.inb_usage = req->qdio_req.qdio_inb_usage;
+ blktrc.inb_usage = 0;
blktrc.outb_usage = req->qdio_req.qdio_outb_usage;
if (req->adapter->adapter_features & FSF_FEATURE_MEASUREMENT_DATA &&
@@ -2207,7 +2207,7 @@ int zfcp_fsf_send_fcp_command_task(struc
return -EBUSY;
spin_lock(&qdio->req_q_lock);
- if (atomic_read(&qdio->req_q.count) <= 0) {
+ if (atomic_read(&qdio->req_q_free) <= 0) {
atomic_inc(&qdio->req_q_full);
goto out;
}
@@ -2407,7 +2407,7 @@ out:
void zfcp_fsf_reqid_check(struct zfcp_qdio *qdio, int sbal_idx)
{
struct zfcp_adapter *adapter = qdio->adapter;
- struct qdio_buffer *sbal = qdio->resp_q.sbal[sbal_idx];
+ struct qdio_buffer *sbal = qdio->res_q[sbal_idx];
struct qdio_buffer_element *sbale;
struct zfcp_fsf_req *fsf_req;
unsigned long req_id;
@@ -2428,8 +2428,6 @@ void zfcp_fsf_reqid_check(struct zfcp_qd
req_id, dev_name(&adapter->ccw_device->dev));
fsf_req->qdio_req.sbal_response = sbal_idx;
- fsf_req->qdio_req.qdio_inb_usage =
- atomic_read(&qdio->resp_q.count);
zfcp_fsf_req_complete(fsf_req);
if (likely(sbale->flags & SBAL_FLAGS_LAST_ENTRY))
--- a/drivers/s390/scsi/zfcp_qdio.c
+++ b/drivers/s390/scsi/zfcp_qdio.c
@@ -55,71 +55,46 @@ static void zfcp_qdio_zero_sbals(struct
static inline void zfcp_qdio_account(struct zfcp_qdio *qdio)
{
unsigned long long now, span;
- int free, used;
+ int used;
spin_lock(&qdio->stat_lock);
now = get_clock_monotonic();
span = (now - qdio->req_q_time) >> 12;
- free = atomic_read(&qdio->req_q.count);
- used = QDIO_MAX_BUFFERS_PER_Q - free;
+ used = QDIO_MAX_BUFFERS_PER_Q - atomic_read(&qdio->req_q_free);
qdio->req_q_util += used * span;
qdio->req_q_time = now;
spin_unlock(&qdio->stat_lock);
}
static void zfcp_qdio_int_req(struct ccw_device *cdev, unsigned int qdio_err,
- int queue_no, int first, int count,
+ int queue_no, int idx, int count,
unsigned long parm)
{
struct zfcp_qdio *qdio = (struct zfcp_qdio *) parm;
- struct zfcp_qdio_queue *queue = &qdio->req_q;
if (unlikely(qdio_err)) {
- zfcp_dbf_hba_qdio(qdio->adapter->dbf, qdio_err, first,
- count);
+ zfcp_dbf_hba_qdio(qdio->adapter->dbf, qdio_err, idx, count);
zfcp_qdio_handler_error(qdio, "qdireq1");
return;
}
/* cleanup all SBALs being program-owned now */
- zfcp_qdio_zero_sbals(queue->sbal, first, count);
+ zfcp_qdio_zero_sbals(qdio->req_q, idx, count);
zfcp_qdio_account(qdio);
- atomic_add(count, &queue->count);
+ atomic_add(count, &qdio->req_q_free);
wake_up(&qdio->req_q_wq);
}
-static void zfcp_qdio_resp_put_back(struct zfcp_qdio *qdio, int processed)
-{
- struct zfcp_qdio_queue *queue = &qdio->resp_q;
- struct ccw_device *cdev = qdio->adapter->ccw_device;
- u8 count, start = queue->first;
- unsigned int retval;
-
- count = atomic_read(&queue->count) + processed;
-
- retval = do_QDIO(cdev, QDIO_FLAG_SYNC_INPUT, 0, start, count);
-
- if (unlikely(retval)) {
- atomic_set(&queue->count, count);
- zfcp_erp_adapter_reopen(qdio->adapter, 0, "qdrpb_1", NULL);
- } else {
- queue->first += count;
- queue->first %= QDIO_MAX_BUFFERS_PER_Q;
- atomic_set(&queue->count, 0);
- }
-}
-
static void zfcp_qdio_int_resp(struct ccw_device *cdev, unsigned int qdio_err,
- int queue_no, int first, int count,
+ int queue_no, int idx, int count,
unsigned long parm)
{
struct zfcp_qdio *qdio = (struct zfcp_qdio *) parm;
int sbal_idx, sbal_no;
if (unlikely(qdio_err)) {
- zfcp_dbf_hba_qdio(qdio->adapter->dbf, qdio_err, first,
- count);
+ zfcp_dbf_hba_qdio(qdio->adapter->dbf, qdio_err, idx, count);
zfcp_qdio_handler_error(qdio, "qdires1");
return;
}
@@ -129,16 +104,16 @@ static void zfcp_qdio_int_resp(struct cc
* returned by QDIO layer
*/
for (sbal_no = 0; sbal_no < count; sbal_no++) {
- sbal_idx = (first + sbal_no) % QDIO_MAX_BUFFERS_PER_Q;
+ sbal_idx = (idx + sbal_no) % QDIO_MAX_BUFFERS_PER_Q;
/* go through all SBALEs of SBAL */
zfcp_fsf_reqid_check(qdio, sbal_idx);
}
/*
- * put range of SBALs back to response queue
- * (including SBALs which have already been free before)
+ * put SBALs back to response queue
*/
- zfcp_qdio_resp_put_back(qdio, count);
+ if (do_QDIO(cdev, QDIO_FLAG_SYNC_INPUT, 0, idx, count))
+ zfcp_erp_adapter_reopen(qdio->adapter, 0, "qdires2", NULL);
}
static struct qdio_buffer_element *
@@ -185,17 +160,6 @@ zfcp_qdio_sbale_next(struct zfcp_qdio *q
return zfcp_qdio_sbale_curr(qdio, q_req);
}
-static void zfcp_qdio_undo_sbals(struct zfcp_qdio *qdio,
- struct zfcp_qdio_req *q_req)
-{
- struct qdio_buffer **sbal = qdio->req_q.sbal;
- int first = q_req->sbal_first;
- int last = q_req->sbal_last;
- int count = (last - first + QDIO_MAX_BUFFERS_PER_Q) %
- QDIO_MAX_BUFFERS_PER_Q + 1;
- zfcp_qdio_zero_sbals(sbal, first, count);
-}
-
/**
* zfcp_qdio_sbals_from_sg - fill SBALs from scatter-gather list
* @qdio: pointer to struct zfcp_qdio
@@ -218,7 +182,8 @@ int zfcp_qdio_sbals_from_sg(struct zfcp_
sbale = zfcp_qdio_sbale_next(qdio, q_req);
if (!sbale) {
atomic_inc(&qdio->req_q_full);
- zfcp_qdio_undo_sbals(qdio, q_req);
+ zfcp_qdio_zero_sbals(qdio->req_q, q_req->sbal_first,
+ q_req->sbal_number);
return -EINVAL;
}
@@ -237,10 +202,8 @@ int zfcp_qdio_sbals_from_sg(struct zfcp_
static int zfcp_qdio_sbal_check(struct zfcp_qdio *qdio)
{
- struct zfcp_qdio_queue *req_q = &qdio->req_q;
-
spin_lock_bh(&qdio->req_q_lock);
- if (atomic_read(&req_q->count) ||
+ if (atomic_read(&qdio->req_q_free) ||
!(atomic_read(&qdio->adapter->status) & ZFCP_STATUS_ADAPTER_QDIOUP))
return 1;
spin_unlock_bh(&qdio->req_q_lock);
@@ -289,25 +252,25 @@ int zfcp_qdio_sbal_get(struct zfcp_qdio
*/
int zfcp_qdio_send(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req)
{
- struct zfcp_qdio_queue *req_q = &qdio->req_q;
- int first = q_req->sbal_first;
- int count = q_req->sbal_number;
int retval;
- unsigned int qdio_flags = QDIO_FLAG_SYNC_OUTPUT;
+ u8 sbal_number = q_req->sbal_number;
zfcp_qdio_account(qdio);
- retval = do_QDIO(qdio->adapter->ccw_device, qdio_flags, 0, first,
- count);
+ retval = do_QDIO(qdio->adapter->ccw_device, QDIO_FLAG_SYNC_OUTPUT, 0,
+ q_req->sbal_first, sbal_number);
+
if (unlikely(retval)) {
- zfcp_qdio_zero_sbals(req_q->sbal, first, count);
+ zfcp_qdio_zero_sbals(qdio->req_q, q_req->sbal_first,
+ sbal_number);
return retval;
}
/* account for transferred buffers */
- atomic_sub(count, &req_q->count);
- req_q->first += count;
- req_q->first %= QDIO_MAX_BUFFERS_PER_Q;
+ atomic_sub(sbal_number, &qdio->req_q_free);
+ qdio->req_q_idx += sbal_number;
+ qdio->req_q_idx %= QDIO_MAX_BUFFERS_PER_Q;
+
return 0;
}
@@ -329,8 +292,8 @@ static void zfcp_qdio_setup_init_data(st
id->input_handler = zfcp_qdio_int_resp;
id->output_handler = zfcp_qdio_int_req;
id->int_parm = (unsigned long) qdio;
- id->input_sbal_addr_array = (void **) (qdio->resp_q.sbal);
- id->output_sbal_addr_array = (void **) (qdio->req_q.sbal);
+ id->input_sbal_addr_array = (void **) (qdio->res_q);
+ id->output_sbal_addr_array = (void **) (qdio->req_q);
}
/**
@@ -343,8 +306,8 @@ static int zfcp_qdio_allocate(struct zfc
{
struct qdio_initialize init_data;
- if (zfcp_qdio_buffers_enqueue(qdio->req_q.sbal) ||
- zfcp_qdio_buffers_enqueue(qdio->resp_q.sbal))
+ if (zfcp_qdio_buffers_enqueue(qdio->req_q) ||
+ zfcp_qdio_buffers_enqueue(qdio->res_q))
return -ENOMEM;
zfcp_qdio_setup_init_data(&init_data, qdio);
@@ -358,34 +321,30 @@ static int zfcp_qdio_allocate(struct zfc
*/
void zfcp_qdio_close(struct zfcp_qdio *qdio)
{
- struct zfcp_qdio_queue *req_q;
- int first, count;
+ struct zfcp_adapter *adapter = qdio->adapter;
+ int idx, count;
- if (!(atomic_read(&qdio->adapter->status) & ZFCP_STATUS_ADAPTER_QDIOUP))
+ if (!(atomic_read(&adapter->status) & ZFCP_STATUS_ADAPTER_QDIOUP))
return;
/* clear QDIOUP flag, thus do_QDIO is not called during qdio_shutdown */
- req_q = &qdio->req_q;
spin_lock_bh(&qdio->req_q_lock);
- atomic_clear_mask(ZFCP_STATUS_ADAPTER_QDIOUP, &qdio->adapter->status);
+ atomic_clear_mask(ZFCP_STATUS_ADAPTER_QDIOUP, &adapter->status);
spin_unlock_bh(&qdio->req_q_lock);
wake_up(&qdio->req_q_wq);
- qdio_shutdown(qdio->adapter->ccw_device,
- QDIO_FLAG_CLEANUP_USING_CLEAR);
+ qdio_shutdown(adapter->ccw_device, QDIO_FLAG_CLEANUP_USING_CLEAR);
/* cleanup used outbound sbals */
- count = atomic_read(&req_q->count);
+ count = atomic_read(&qdio->req_q_free);
if (count < QDIO_MAX_BUFFERS_PER_Q) {
- first = (req_q->first + count) % QDIO_MAX_BUFFERS_PER_Q;
+ idx = (qdio->req_q_idx + count) % QDIO_MAX_BUFFERS_PER_Q;
count = QDIO_MAX_BUFFERS_PER_Q - count;
- zfcp_qdio_zero_sbals(req_q->sbal, first, count);
+ zfcp_qdio_zero_sbals(qdio->req_q, idx, count);
}
- req_q->first = 0;
- atomic_set(&req_q->count, 0);
- qdio->resp_q.first = 0;
- atomic_set(&qdio->resp_q.count, 0);
+ qdio->req_q_idx = 0;
+ atomic_set(&qdio->req_q_free, 0);
}
/**
@@ -397,10 +356,11 @@ int zfcp_qdio_open(struct zfcp_qdio *qdi
{
struct qdio_buffer_element *sbale;
struct qdio_initialize init_data;
- struct ccw_device *cdev = qdio->adapter->ccw_device;
+ struct zfcp_adapter *adapter = qdio->adapter;
+ struct ccw_device *cdev = adapter->ccw_device;
int cc;
- if (atomic_read(&qdio->adapter->status) & ZFCP_STATUS_ADAPTER_QDIOUP)
+ if (atomic_read(&adapter->status) & ZFCP_STATUS_ADAPTER_QDIOUP)
return -EIO;
zfcp_qdio_setup_init_data(&init_data, qdio);
@@ -412,19 +372,18 @@ int zfcp_qdio_open(struct zfcp_qdio *qdi
goto failed_qdio;
for (cc = 0; cc < QDIO_MAX_BUFFERS_PER_Q; cc++) {
- sbale = &(qdio->resp_q.sbal[cc]->element[0]);
+ sbale = &(qdio->res_q[cc]->element[0]);
sbale->length = 0;
sbale->flags = SBAL_FLAGS_LAST_ENTRY;
sbale->addr = NULL;
}
- if (do_QDIO(cdev, QDIO_FLAG_SYNC_INPUT, 0, 0,
- QDIO_MAX_BUFFERS_PER_Q))
+ if (do_QDIO(cdev, QDIO_FLAG_SYNC_INPUT, 0, 0, QDIO_MAX_BUFFERS_PER_Q))
goto failed_qdio;
/* set index of first avalable SBALS / number of available SBALS */
- qdio->req_q.first = 0;
- atomic_set(&qdio->req_q.count, QDIO_MAX_BUFFERS_PER_Q);
+ qdio->req_q_idx = 0;
+ atomic_set(&qdio->req_q_free, QDIO_MAX_BUFFERS_PER_Q);
return 0;
@@ -438,7 +397,6 @@ failed_establish:
void zfcp_qdio_destroy(struct zfcp_qdio *qdio)
{
- struct qdio_buffer **sbal_req, **sbal_resp;
int p;
if (!qdio)
@@ -447,12 +405,9 @@ void zfcp_qdio_destroy(struct zfcp_qdio
if (qdio->adapter->ccw_device)
qdio_free(qdio->adapter->ccw_device);
- sbal_req = qdio->req_q.sbal;
- sbal_resp = qdio->resp_q.sbal;
-
for (p = 0; p < QDIO_MAX_BUFFERS_PER_Q; p += QBUFF_PER_PAGE) {
- free_page((unsigned long) sbal_req[p]);
- free_page((unsigned long) sbal_resp[p]);
+ free_page((unsigned long) qdio->req_q[p]);
+ free_page((unsigned long) qdio->res_q[p]);
}
kfree(qdio);
--- a/drivers/s390/scsi/zfcp_qdio.h
+++ b/drivers/s390/scsi/zfcp_qdio.h
@@ -28,21 +28,11 @@
(ZFCP_QDIO_MAX_SBALS_PER_REQ * ZFCP_QDIO_MAX_SBALES_PER_SBAL - 2)
/**
- * struct zfcp_qdio_queue - qdio queue buffer, zfcp index and free count
- * @sbal: qdio buffers
- * @first: index of next free buffer in queue
- * @count: number of free buffers in queue
- */
-struct zfcp_qdio_queue {
- struct qdio_buffer *sbal[QDIO_MAX_BUFFERS_PER_Q];
- u8 first;
- atomic_t count;
-};
-
-/**
* struct zfcp_qdio - basic qdio data structure
- * @resp_q: response queue
+ * @res_q: response queue
* @req_q: request queue
+ * @req_q_idx: index of next free buffer
+ * @req_q_free: number of free buffers in queue
* @stat_lock: lock to protect req_q_util and req_q_time
* @req_q_lock: lock to serialize access to request queue
* @req_q_time: time of last fill level change
@@ -52,8 +42,10 @@ struct zfcp_qdio_queue {
* @adapter: adapter used in conjunction with this qdio structure
*/
struct zfcp_qdio {
- struct zfcp_qdio_queue resp_q;
- struct zfcp_qdio_queue req_q;
+ struct qdio_buffer *res_q[QDIO_MAX_BUFFERS_PER_Q];
+ struct qdio_buffer *req_q[QDIO_MAX_BUFFERS_PER_Q];
+ u8 req_q_idx;
+ atomic_t req_q_free;
spinlock_t stat_lock;
spinlock_t req_q_lock;
unsigned long long req_q_time;
@@ -73,7 +65,6 @@ struct zfcp_qdio {
* @sbale_curr: current sbale at creation of this request
* @sbal_response: sbal used in interrupt
* @qdio_outb_usage: usage of outbound queue
- * @qdio_inb_usage: usage of inbound queue
*/
struct zfcp_qdio_req {
u32 sbtype;
@@ -84,22 +75,9 @@ struct zfcp_qdio_req {
u8 sbale_curr;
u8 sbal_response;
u16 qdio_outb_usage;
- u16 qdio_inb_usage;
};
/**
- * zfcp_qdio_sbale - return pointer to sbale in qdio queue
- * @q: queue where to find sbal
- * @sbal_idx: sbal index in queue
- * @sbale_idx: sbale index in sbal
- */
-static inline struct qdio_buffer_element *
-zfcp_qdio_sbale(struct zfcp_qdio_queue *q, int sbal_idx, int sbale_idx)
-{
- return &q->sbal[sbal_idx]->element[sbale_idx];
-}
-
-/**
* zfcp_qdio_sbale_req - return pointer to sbale on req_q for a request
* @qdio: pointer to struct zfcp_qdio
* @q_rec: pointer to struct zfcp_qdio_req
@@ -108,7 +86,7 @@ zfcp_qdio_sbale(struct zfcp_qdio_queue *
static inline struct qdio_buffer_element *
zfcp_qdio_sbale_req(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req)
{
- return zfcp_qdio_sbale(&qdio->req_q, q_req->sbal_last, 0);
+ return &qdio->req_q[q_req->sbal_last]->element[0];
}
/**
@@ -120,8 +98,7 @@ zfcp_qdio_sbale_req(struct zfcp_qdio *qd
static inline struct qdio_buffer_element *
zfcp_qdio_sbale_curr(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req)
{
- return zfcp_qdio_sbale(&qdio->req_q, q_req->sbal_last,
- q_req->sbale_curr);
+ return &qdio->req_q[q_req->sbal_last]->element[q_req->sbale_curr];
}
/**
@@ -142,25 +119,25 @@ void zfcp_qdio_req_init(struct zfcp_qdio
unsigned long req_id, u32 sbtype, void *data, u32 len)
{
struct qdio_buffer_element *sbale;
- int count = min(atomic_read(&qdio->req_q.count),
+ int count = min(atomic_read(&qdio->req_q_free),
ZFCP_QDIO_MAX_SBALS_PER_REQ);
- q_req->sbal_first = q_req->sbal_last = qdio->req_q.first;
+ q_req->sbal_first = q_req->sbal_last = qdio->req_q_idx;
q_req->sbal_number = 1;
q_req->sbtype = sbtype;
+ q_req->sbale_curr = 1;
q_req->sbal_limit = (q_req->sbal_first + count - 1)
% QDIO_MAX_BUFFERS_PER_Q;
sbale = zfcp_qdio_sbale_req(qdio, q_req);
sbale->addr = (void *) req_id;
- sbale->flags |= SBAL_FLAGS0_COMMAND;
- sbale->flags |= sbtype;
+ sbale->flags = SBAL_FLAGS0_COMMAND | sbtype;
- q_req->sbale_curr = 1;
+ if (unlikely(!data))
+ return;
sbale++;
sbale->addr = data;
- if (likely(data))
- sbale->length = len;
+ sbale->length = len;
}
/**
@@ -232,7 +209,7 @@ static inline
void zfcp_qdio_sbal_limit(struct zfcp_qdio *qdio,
struct zfcp_qdio_req *q_req, int max_sbals)
{
- int count = min(atomic_read(&qdio->req_q.count), max_sbals);
+ int count = min(atomic_read(&qdio->req_q_free), max_sbals);
q_req->sbal_limit = (q_req->sbal_first + count - 1) %
QDIO_MAX_BUFFERS_PER_Q;
* [patch 06/10] zfcp: Post events through FC transport class
2010-07-16 13:37 [patch 00/10] zfcp updates for 2.6.36 merge window Christof Schmitt
` (4 preceding siblings ...)
2010-07-16 13:37 ` [patch 05/10] zfcp: Cleanup QDIO attachment and improve processing Christof Schmitt
@ 2010-07-16 13:37 ` Christof Schmitt
2010-07-16 13:37 ` [patch 07/10] zfcp: Prevent access on uninitialized memory Christof Schmitt
` (3 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Christof Schmitt @ 2010-07-16 13:37 UTC (permalink / raw)
To: James Bottomley
Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens, Sven Schuetz,
Christof Schmitt
[-- Attachment #1: 711-zfcp-post-fc-events.diff --]
[-- Type: text/plain, Size: 6051 bytes --]
From: Sven Schuetz <sven@linux.vnet.ibm.com>
Post FC transport class netlink events for use in userspace, e.g. by
HBAAPI. The supported events are those required for the polled events
in HBAAPI:
- link up
- link down
- incoming RSCN
(Events related to FC-AL are not supported, as zfcp has no FC-AL
support.)
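The delivery itself is a single call into the transport class; a sketch
(fc_get_event_number() and fc_host_post_event() are existing
scsi_transport_fc interfaces, the wrapper name is invented):

    #include <linux/types.h>
    #include <scsi/scsi_host.h>
    #include <scsi/scsi_transport_fc.h>

    /* Hand one FC host event to userspace listeners such as HBAAPI via
     * the fc_transport netlink interface. zfcp collects events in a list
     * and calls this from a work item rather than directly from the
     * interrupt path. */
    static void post_fc_event(struct Scsi_Host *shost,
                              enum fc_host_event_code code, u32 data)
    {
            fc_host_post_event(shost, fc_get_event_number(), code, data);
    }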
Signed-off-by: Sven Schuetz <sven@linux.vnet.ibm.com>
Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
---
drivers/s390/scsi/zfcp_aux.c | 4 +++
drivers/s390/scsi/zfcp_def.h | 2 +
drivers/s390/scsi/zfcp_ext.h | 3 ++
drivers/s390/scsi/zfcp_fc.c | 54 +++++++++++++++++++++++++++++++++++++++++++
drivers/s390/scsi/zfcp_fc.h | 24 +++++++++++++++++++
drivers/s390/scsi/zfcp_fsf.c | 3 ++
6 files changed, 90 insertions(+)
--- a/drivers/s390/scsi/zfcp_aux.c
+++ b/drivers/s390/scsi/zfcp_aux.c
@@ -524,6 +524,10 @@ struct zfcp_adapter *zfcp_adapter_enqueu
rwlock_init(&adapter->port_list_lock);
INIT_LIST_HEAD(&adapter->port_list);
+ INIT_LIST_HEAD(&adapter->events.list);
+ INIT_WORK(&adapter->events.work, zfcp_fc_post_event);
+ spin_lock_init(&adapter->events.list_lock);
+
init_waitqueue_head(&adapter->erp_ready_wq);
init_waitqueue_head(&adapter->erp_done_wqh);
--- a/drivers/s390/scsi/zfcp_def.h
+++ b/drivers/s390/scsi/zfcp_def.h
@@ -37,6 +37,7 @@
#include <asm/ebcdic.h>
#include <asm/sysinfo.h>
#include "zfcp_fsf.h"
+#include "zfcp_fc.h"
#include "zfcp_qdio.h"
struct zfcp_reqlist;
@@ -190,6 +191,7 @@ struct zfcp_adapter {
struct service_level service_level;
struct workqueue_struct *work_queue;
struct device_dma_parameters dma_parms;
+ struct zfcp_fc_events events;
};
struct zfcp_port {
--- a/drivers/s390/scsi/zfcp_ext.h
+++ b/drivers/s390/scsi/zfcp_ext.h
@@ -96,6 +96,9 @@ extern void zfcp_erp_adapter_access_chan
extern void zfcp_erp_timeout_handler(unsigned long);
/* zfcp_fc.c */
+extern void zfcp_fc_enqueue_event(struct zfcp_adapter *,
+ enum fc_host_event_code event_code, u32);
+extern void zfcp_fc_post_event(struct work_struct *);
extern void zfcp_fc_scan_ports(struct work_struct *);
extern void zfcp_fc_incoming_els(struct zfcp_fsf_req *);
extern void zfcp_fc_port_did_lookup(struct work_struct *);
--- a/drivers/s390/scsi/zfcp_fc.c
+++ b/drivers/s390/scsi/zfcp_fc.c
@@ -23,6 +23,58 @@ static u32 zfcp_fc_rscn_range_mask[] = {
[ELS_ADDR_FMT_FAB] = 0x000000,
};
+/**
+ * zfcp_fc_post_event - post event to userspace via fc_transport
+ * @work: work struct with enqueued events
+ */
+void zfcp_fc_post_event(struct work_struct *work)
+{
+ struct zfcp_fc_event *event = NULL, *tmp = NULL;
+ LIST_HEAD(tmp_lh);
+ struct zfcp_fc_events *events = container_of(work,
+ struct zfcp_fc_events, work);
+ struct zfcp_adapter *adapter = container_of(events, struct zfcp_adapter,
+ events);
+
+ spin_lock_bh(&events->list_lock);
+ list_splice_init(&events->list, &tmp_lh);
+ spin_unlock_bh(&events->list_lock);
+
+ list_for_each_entry_safe(event, tmp, &tmp_lh, list) {
+ fc_host_post_event(adapter->scsi_host, fc_get_event_number(),
+ event->code, event->data);
+ list_del(&event->list);
+ kfree(event);
+ }
+
+}
+
+/**
+ * zfcp_fc_enqueue_event - safely enqueue FC HBA API event from irq context
+ * @adapter: The adapter where to enqueue the event
+ * @event_code: The event code (as defined in fc_host_event_code in
+ * scsi_transport_fc.h)
+ * @event_data: The event data (e.g. n_port page in case of els)
+ */
+void zfcp_fc_enqueue_event(struct zfcp_adapter *adapter,
+ enum fc_host_event_code event_code, u32 event_data)
+{
+ struct zfcp_fc_event *event;
+
+ event = kmalloc(sizeof(struct zfcp_fc_event), GFP_ATOMIC);
+ if (!event)
+ return;
+
+ event->code = event_code;
+ event->data = event_data;
+
+ spin_lock(&adapter->events.list_lock);
+ list_add_tail(&event->list, &adapter->events.list);
+ spin_unlock(&adapter->events.list_lock);
+
+ queue_work(adapter->work_queue, &adapter->events.work);
+}
+
static int zfcp_fc_wka_port_get(struct zfcp_fc_wka_port *wka_port)
{
if (mutex_lock_interruptible(&wka_port->mutex))
@@ -148,6 +200,8 @@ static void zfcp_fc_incoming_rscn(struct
afmt = page->rscn_page_flags & ELS_RSCN_ADDR_FMT_MASK;
_zfcp_fc_incoming_rscn(fsf_req, zfcp_fc_rscn_range_mask[afmt],
page);
+ zfcp_fc_enqueue_event(fsf_req->adapter, FCH_EVT_RSCN,
+ *(u32 *)page);
}
queue_work(fsf_req->adapter->work_queue, &fsf_req->adapter->scan_work);
}
--- a/drivers/s390/scsi/zfcp_fc.h
+++ b/drivers/s390/scsi/zfcp_fc.h
@@ -30,6 +30,30 @@
#define ZFCP_FC_CTELS_TMO (2 * FC_DEF_R_A_TOV / 1000)
/**
+ * struct zfcp_fc_event - FC HBAAPI event for internal queueing from irq context
+ * @code: Event code
+ * @data: Event data
+ * @list: list_head for zfcp_fc_events list
+ */
+struct zfcp_fc_event {
+ enum fc_host_event_code code;
+ u32 data;
+ struct list_head list;
+};
+
+/**
+ * struct zfcp_fc_events - Infrastructure for posting FC events from irq context
+ * @list: List for queueing of events from irq context to workqueue
+ * @list_lock: Lock for event list
+ * @work: work_struct for forwarding events in workqueue
+*/
+struct zfcp_fc_events {
+ struct list_head list;
+ spinlock_t list_lock;
+ struct work_struct work;
+};
+
+/**
* struct zfcp_fc_gid_pn_req - container for ct header plus gid_pn request
* @ct_hdr: FC GS common transport header
* @gid_pn: GID_PN request
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -274,6 +274,7 @@ static void zfcp_fsf_status_read_handler
break;
case FSF_STATUS_READ_LINK_DOWN:
zfcp_fsf_status_read_link_down(req);
+ zfcp_fc_enqueue_event(adapter, FCH_EVT_LINKDOWN, 0);
break;
case FSF_STATUS_READ_LINK_UP:
dev_info(&adapter->ccw_device->dev,
@@ -286,6 +287,8 @@ static void zfcp_fsf_status_read_handler
ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED |
ZFCP_STATUS_COMMON_ERP_FAILED,
"fssrh_2", req);
+ zfcp_fc_enqueue_event(adapter, FCH_EVT_LINKUP, 0);
+
break;
case FSF_STATUS_READ_NOTIFICATION_LOST:
if (sr_buf->status_subtype & FSF_STATUS_READ_SUB_ACT_UPDATED)
* [patch 07/10] zfcp: Prevent access on uninitialized memory.
2010-07-16 13:37 [patch 00/10] zfcp updates for 2.6.36 merge window Christof Schmitt
` (5 preceding siblings ...)
2010-07-16 13:37 ` [patch 06/10] zfcp: Post events through FC transport class Christof Schmitt
@ 2010-07-16 13:37 ` Christof Schmitt
2010-07-16 13:37 ` [patch 08/10] zfcp: Enable data division support for FCP devices Christof Schmitt
` (2 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Christof Schmitt @ 2010-07-16 13:37 UTC (permalink / raw)
To: James Bottomley
Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens,
Swen Schillig, Christof Schmitt
[-- Attachment #1: 716-zfcp-kzalloc.diff --]
[-- Type: text/plain, Size: 899 bytes --]
From: Swen Schillig <swen@vnet.ibm.com>
Initialize the allocated memory to zero to prevent access to
uninitialized data in the error handling path.
Signed-off-by: Swen Schillig <swen@vnet.ibm.com>
Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
---
drivers/s390/scsi/zfcp_dbf.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff -urpN linux-2.6/drivers/s390/scsi/zfcp_dbf.c linux-2.6-patched/drivers/s390/scsi/zfcp_dbf.c
--- linux-2.6/drivers/s390/scsi/zfcp_dbf.c 2010-05-16 23:17:36.000000000 +0200
+++ linux-2.6-patched/drivers/s390/scsi/zfcp_dbf.c 2010-07-15 12:22:24.000000000 +0200
@@ -1005,7 +1005,7 @@ int zfcp_dbf_adapter_register(struct zfc
char dbf_name[DEBUG_MAX_NAME_LEN];
struct zfcp_dbf *dbf;
- dbf = kmalloc(sizeof(struct zfcp_dbf), GFP_KERNEL);
+ dbf = kzalloc(sizeof(struct zfcp_dbf), GFP_KERNEL);
if (!dbf)
return -ENOMEM;
* [patch 08/10] zfcp: Enable data division support for FCP devices
2010-07-16 13:37 [patch 00/10] zfcp updates for 2.6.36 merge window Christof Schmitt
` (6 preceding siblings ...)
2010-07-16 13:37 ` [patch 07/10] zfcp: Prevent access on uninitialized memory Christof Schmitt
@ 2010-07-16 13:37 ` Christof Schmitt
2010-07-16 13:37 ` [patch 09/10] zfcp: Introduce experimental support for DIF/DIX Christof Schmitt
2010-07-16 13:37 ` [patch 10/10] zfcp: Trigger logging in the FCP channel on qdio error conditions Christof Schmitt
9 siblings, 0 replies; 13+ messages in thread
From: Christof Schmitt @ 2010-07-16 13:37 UTC (permalink / raw)
To: James Bottomley
Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens,
Christof Schmitt
[-- Attachment #1: linux-2.6.34-qdio-zfcp-data.patch --]
[-- Type: text/plain, Size: 3955 bytes --]
From: Christof Schmitt <christof.schmitt@de.ibm.com>
Try to enable data division support for FCP devices and indicate in
the adapter status flag if it succeeded.
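A sketch of the check after qdio_establish() (it relies on
qdio_get_ssqd_desc() and the qdioac2 / CHSC_AC2_DATA_DIV_ENABLED
definitions added by this patch; the function name is invented):

    #include <linux/types.h>
    #include <asm/qdio.h>

    /* Data division was requested via QIB_RFLAGS_ENABLE_DATA_DIV before
     * qdio_establish(); afterwards the subchannel QDIO data tells us
     * whether the channel actually enabled it. */
    static bool data_div_enabled(struct ccw_device *cdev)
    {
            struct qdio_ssqd_desc ssqd;

            if (qdio_get_ssqd_desc(cdev, &ssqd))
                    return false;

            return ssqd.qdioac2 & CHSC_AC2_DATA_DIV_ENABLED;
    }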
Reviewed-by: Swen Schillig <swen@vnet.ibm.com>
Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
---
arch/s390/include/asm/qdio.h | 6 ++++++
drivers/s390/cio/qdio_setup.c | 2 ++
drivers/s390/scsi/zfcp_def.h | 1 +
drivers/s390/scsi/zfcp_qdio.c | 11 ++++++++++-
4 files changed, 19 insertions(+), 1 deletion(-)
--- a/arch/s390/include/asm/qdio.h
+++ b/arch/s390/include/asm/qdio.h
@@ -84,6 +84,7 @@ struct qdr {
#define QIB_AC_OUTBOUND_PCI_SUPPORTED 0x40
#define QIB_RFLAGS_ENABLE_QEBSM 0x80
+#define QIB_RFLAGS_ENABLE_DATA_DIV 0x02
/**
* struct qib - queue information block (QIB)
@@ -284,6 +285,9 @@ struct slsb {
u8 val[QDIO_MAX_BUFFERS_PER_Q];
} __attribute__ ((packed, aligned(256)));
+#define CHSC_AC2_DATA_DIV_AVAILABLE 0x0010
+#define CHSC_AC2_DATA_DIV_ENABLED 0x0002
+
struct qdio_ssqd_desc {
u8 flags;
u8:8;
@@ -332,6 +336,7 @@ typedef void qdio_handler_t(struct ccw_d
* @adapter_name: name for the adapter
* @qib_param_field_format: format for qib_parm_field
* @qib_param_field: pointer to 128 bytes or NULL, if no param field
+ * @qib_rflags: rflags to set
* @input_slib_elements: pointer to no_input_qs * 128 words of data or NULL
* @output_slib_elements: pointer to no_output_qs * 128 words of data or NULL
* @no_input_qs: number of input queues
@@ -348,6 +353,7 @@ struct qdio_initialize {
unsigned char adapter_name[8];
unsigned int qib_param_field_format;
unsigned char *qib_param_field;
+ unsigned char qib_rflags;
unsigned long *input_slib_elements;
unsigned long *output_slib_elements;
unsigned int no_input_qs;
--- a/drivers/s390/cio/qdio_setup.c
+++ b/drivers/s390/cio/qdio_setup.c
@@ -368,6 +368,8 @@ static void setup_qib(struct qdio_irq *i
if (qebsm_possible())
irq_ptr->qib.rflags |= QIB_RFLAGS_ENABLE_QEBSM;
+ irq_ptr->qib.rflags |= init_data->qib_rflags;
+
irq_ptr->qib.qfmt = init_data->q_format;
if (init_data->no_input_qs)
irq_ptr->qib.isliba =
--- a/drivers/s390/scsi/zfcp_def.h
+++ b/drivers/s390/scsi/zfcp_def.h
@@ -77,6 +77,7 @@ struct zfcp_reqlist;
#define ZFCP_STATUS_ADAPTER_HOST_CON_INIT 0x00000010
#define ZFCP_STATUS_ADAPTER_ERP_PENDING 0x00000100
#define ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED 0x00000200
+#define ZFCP_STATUS_ADAPTER_DATA_DIV_ENABLED 0x00000400
/* remote port status */
#define ZFCP_STATUS_PORT_PHYS_OPEN 0x00000001
--- a/drivers/s390/scsi/zfcp_qdio.c
+++ b/drivers/s390/scsi/zfcp_qdio.c
@@ -283,6 +283,7 @@ static void zfcp_qdio_setup_init_data(st
id->q_format = QDIO_ZFCP_QFMT;
memcpy(id->adapter_name, dev_name(&id->cdev->dev), 8);
ASCEBC(id->adapter_name, 8);
+ id->qib_rflags = QIB_RFLAGS_ENABLE_DATA_DIV;
id->qib_param_field_format = 0;
id->qib_param_field = NULL;
id->input_slib_elements = NULL;
@@ -294,8 +295,8 @@ static void zfcp_qdio_setup_init_data(st
id->int_parm = (unsigned long) qdio;
id->input_sbal_addr_array = (void **) (qdio->res_q);
id->output_sbal_addr_array = (void **) (qdio->req_q);
-
}
+
/**
* zfcp_qdio_allocate - allocate queue memory and initialize QDIO data
* @adapter: pointer to struct zfcp_adapter
@@ -358,6 +359,7 @@ int zfcp_qdio_open(struct zfcp_qdio *qdi
struct qdio_initialize init_data;
struct zfcp_adapter *adapter = qdio->adapter;
struct ccw_device *cdev = adapter->ccw_device;
+ struct qdio_ssqd_desc ssqd;
int cc;
if (atomic_read(&adapter->status) & ZFCP_STATUS_ADAPTER_QDIOUP)
@@ -368,6 +370,13 @@ int zfcp_qdio_open(struct zfcp_qdio *qdi
if (qdio_establish(&init_data))
goto failed_establish;
+ if (qdio_get_ssqd_desc(init_data.cdev, &ssqd))
+ goto failed_qdio;
+
+ if (ssqd.qdioac2 & CHSC_AC2_DATA_DIV_ENABLED)
+ atomic_set_mask(ZFCP_STATUS_ADAPTER_DATA_DIV_ENABLED,
+ &qdio->adapter->status);
+
if (qdio_activate(cdev))
goto failed_qdio;
* [patch 09/10] zfcp: Introduce experimental support for DIF/DIX
2010-07-16 13:37 [patch 00/10] zfcp updates for 2.6.36 merge window Christof Schmitt
` (7 preceding siblings ...)
2010-07-16 13:37 ` [patch 08/10] zfcp: Enable data division support for FCP devices Christof Schmitt
@ 2010-07-16 13:37 ` Christof Schmitt
2010-07-16 13:37 ` [patch 10/10] zfcp: Trigger logging in the FCP channel on qdio error conditions Christof Schmitt
9 siblings, 0 replies; 13+ messages in thread
From: Christof Schmitt @ 2010-07-16 13:37 UTC (permalink / raw)
To: James Bottomley
Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens, Felix Beck,
Christof Schmitt
[-- Attachment #1: linux-2.6.34-zfcp-san0616.3-e2e.diff --]
[-- Type: text/plain, Size: 14475 bytes --]
From: Felix Beck <felix.beck@de.ibm.com>
Introduce support for DIF/DIX in zfcp: report the capabilities via the
Scsi_Host, map the protection data when issuing I/O requests, and
handle the new error codes. Also add the FSF data_direction field to
the hba trace, since it is useful for debugging in this area.
This is an EXPERIMENTAL feature for now.
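A sketch of the Scsi_Host side of this (scsi_host_set_prot() and
scsi_host_set_guard() are the standard SCSI midlayer interfaces; the
two capability flags stand in for the FSF feature and data-division
checks done in zfcp_scsi_set_prot() below):

    #include <linux/types.h>
    #include <scsi/scsi_host.h>

    /* Advertise T10 DIF type 1 protection, and DIX type 1 with IP
     * checksum guard tags if the hardware can also separate data and
     * protection data. */
    static void set_prot_sketch(struct Scsi_Host *shost,
                                bool hw_dif, bool hw_dix)
    {
            unsigned int mask = 0;

            if (hw_dif)
                    mask |= SHOST_DIF_TYPE1_PROTECTION;

            if (hw_dif && hw_dix) {
                    mask |= SHOST_DIX_TYPE1_PROTECTION;
                    scsi_host_set_guard(shost, SHOST_DIX_GUARD_IP);
            }

            scsi_host_set_prot(shost, mask);
    }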
Signed-off-by: Felix Beck <felix.beck@de.ibm.com>
Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
---
drivers/s390/scsi/zfcp_dbf.c | 3 +
drivers/s390/scsi/zfcp_dbf.h | 1
drivers/s390/scsi/zfcp_ext.h | 2
drivers/s390/scsi/zfcp_fc.h | 3 +
drivers/s390/scsi/zfcp_fsf.c | 109 +++++++++++++++++++++++++++++++++---------
drivers/s390/scsi/zfcp_fsf.h | 24 ++++++++-
drivers/s390/scsi/zfcp_qdio.c | 4 -
drivers/s390/scsi/zfcp_qdio.h | 16 ++++++
drivers/s390/scsi/zfcp_scsi.c | 53 ++++++++++++++++++++
drivers/scsi/Kconfig | 4 +
10 files changed, 189 insertions(+), 30 deletions(-)
--- a/drivers/s390/scsi/zfcp_dbf.c
+++ b/drivers/s390/scsi/zfcp_dbf.c
@@ -155,6 +155,8 @@ void _zfcp_dbf_hba_fsf_response(const ch
if (scsi_cmnd) {
response->u.fcp.cmnd = (unsigned long)scsi_cmnd;
response->u.fcp.serial = scsi_cmnd->serial_number;
+ response->u.fcp.data_dir =
+ qtcb->bottom.io.data_direction;
}
break;
@@ -326,6 +328,7 @@ static void zfcp_dbf_hba_view_response(c
case FSF_QTCB_FCP_CMND:
if (r->fsf_req_status & ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT)
break;
+ zfcp_dbf_out(p, "data_direction", "0x%04x", r->u.fcp.data_dir);
zfcp_dbf_out(p, "scsi_cmnd", "0x%0Lx", r->u.fcp.cmnd);
zfcp_dbf_out(p, "scsi_serial", "0x%016Lx", r->u.fcp.serial);
*p += sprintf(*p, "\n");
--- a/drivers/s390/scsi/zfcp_dbf.h
+++ b/drivers/s390/scsi/zfcp_dbf.h
@@ -111,6 +111,7 @@ struct zfcp_dbf_hba_record_response {
struct {
u64 cmnd;
u64 serial;
+ u32 data_dir;
} fcp;
struct {
u64 wwpn;
--- a/drivers/s390/scsi/zfcp_ext.h
+++ b/drivers/s390/scsi/zfcp_ext.h
@@ -164,6 +164,8 @@ extern void zfcp_scsi_schedule_rport_blo
extern void zfcp_scsi_schedule_rports_block(struct zfcp_adapter *);
extern void zfcp_scsi_scan(struct zfcp_unit *);
extern void zfcp_scsi_scan_work(struct work_struct *);
+extern void zfcp_scsi_set_prot(struct zfcp_adapter *);
+extern void zfcp_scsi_dif_sense_error(struct scsi_cmnd *, int);
/* zfcp_sysfs.c */
extern struct attribute_group zfcp_sysfs_unit_attrs;
--- a/drivers/s390/scsi/zfcp_fc.h
+++ b/drivers/s390/scsi/zfcp_fc.h
@@ -220,6 +220,9 @@ void zfcp_fc_scsi_to_fcp(struct fcp_cmnd
memcpy(fcp->fc_cdb, scsi->cmnd, scsi->cmd_len);
fcp->fc_dl = scsi_bufflen(scsi);
+
+ if (scsi_get_prot_type(scsi) == SCSI_PROT_DIF_TYPE1)
+ fcp->fc_dl += fcp->fc_dl / scsi->device->sector_size * 8;
}
/**
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -526,6 +526,8 @@ static int zfcp_fsf_exchange_config_eval
return -EIO;
}
+ zfcp_scsi_set_prot(adapter);
+
return 0;
}
@@ -988,6 +990,7 @@ static int zfcp_fsf_setup_ct_els_sbals(s
bytes = zfcp_qdio_sbals_from_sg(adapter->qdio, &req->qdio_req, sg_req);
if (bytes <= 0)
return -EIO;
+ zfcp_qdio_set_sbale_last(adapter->qdio, &req->qdio_req);
req->qtcb->bottom.support.req_buf_length = bytes;
zfcp_qdio_skip_to_last_sbale(&req->qdio_req);
@@ -996,6 +999,7 @@ static int zfcp_fsf_setup_ct_els_sbals(s
req->qtcb->bottom.support.resp_buf_length = bytes;
if (bytes <= 0)
return -EIO;
+ zfcp_qdio_set_sbale_last(adapter->qdio, &req->qdio_req);
return 0;
}
@@ -2038,9 +2042,13 @@ static void zfcp_fsf_req_trace(struct zf
blktrc.fabric_lat = lat_in->fabric_lat * ticks;
switch (req->qtcb->bottom.io.data_direction) {
+ case FSF_DATADIR_DIF_READ_STRIP:
+ case FSF_DATADIR_DIF_READ_CONVERT:
case FSF_DATADIR_READ:
lat = &unit->latencies.read;
break;
+ case FSF_DATADIR_DIF_WRITE_INSERT:
+ case FSF_DATADIR_DIF_WRITE_CONVERT:
case FSF_DATADIR_WRITE:
lat = &unit->latencies.write;
break;
@@ -2081,6 +2089,21 @@ static void zfcp_fsf_send_fcp_command_ta
goto skip_fsfstatus;
}
+ switch (req->qtcb->header.fsf_status) {
+ case FSF_INCONSISTENT_PROT_DATA:
+ case FSF_INVALID_PROT_PARM:
+ set_host_byte(scpnt, DID_ERROR);
+ goto skip_fsfstatus;
+ case FSF_BLOCK_GUARD_CHECK_FAILURE:
+ zfcp_scsi_dif_sense_error(scpnt, 0x1);
+ goto skip_fsfstatus;
+ case FSF_APP_TAG_CHECK_FAILURE:
+ zfcp_scsi_dif_sense_error(scpnt, 0x2);
+ goto skip_fsfstatus;
+ case FSF_REF_TAG_CHECK_FAILURE:
+ zfcp_scsi_dif_sense_error(scpnt, 0x3);
+ goto skip_fsfstatus;
+ }
fcp_rsp = (struct fcp_resp_with_ext *) &req->qtcb->bottom.io.fcp_rsp;
zfcp_fc_eval_fcp_rsp(fcp_rsp, scpnt);
@@ -2190,6 +2213,44 @@ skip_fsfstatus:
}
}
+static int zfcp_fsf_set_data_dir(struct scsi_cmnd *scsi_cmnd, u32 *data_dir)
+{
+ switch (scsi_get_prot_op(scsi_cmnd)) {
+ case SCSI_PROT_NORMAL:
+ switch (scsi_cmnd->sc_data_direction) {
+ case DMA_NONE:
+ *data_dir = FSF_DATADIR_CMND;
+ break;
+ case DMA_FROM_DEVICE:
+ *data_dir = FSF_DATADIR_READ;
+ break;
+ case DMA_TO_DEVICE:
+ *data_dir = FSF_DATADIR_WRITE;
+ break;
+ case DMA_BIDIRECTIONAL:
+ return -EINVAL;
+ }
+ break;
+
+ case SCSI_PROT_READ_STRIP:
+ *data_dir = FSF_DATADIR_DIF_READ_STRIP;
+ break;
+ case SCSI_PROT_WRITE_INSERT:
+ *data_dir = FSF_DATADIR_DIF_WRITE_INSERT;
+ break;
+ case SCSI_PROT_READ_PASS:
+ *data_dir = FSF_DATADIR_DIF_READ_CONVERT;
+ break;
+ case SCSI_PROT_WRITE_PASS:
+ *data_dir = FSF_DATADIR_DIF_WRITE_CONVERT;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
/**
* zfcp_fsf_send_fcp_command_task - initiate an FCP command (for a SCSI command)
* @unit: unit where command is sent to
@@ -2201,9 +2262,10 @@ int zfcp_fsf_send_fcp_command_task(struc
struct zfcp_fsf_req *req;
struct fcp_cmnd *fcp_cmnd;
unsigned int sbtype = SBAL_FLAGS0_TYPE_READ;
- int real_bytes, retval = -EIO;
+ int real_bytes, retval = -EIO, dix_bytes = 0;
struct zfcp_adapter *adapter = unit->port->adapter;
struct zfcp_qdio *qdio = adapter->qdio;
+ struct fsf_qtcb_bottom_io *io;
if (unlikely(!(atomic_read(&unit->status) &
ZFCP_STATUS_COMMON_UNBLOCKED)))
@@ -2226,46 +2288,46 @@ int zfcp_fsf_send_fcp_command_task(struc
goto out;
}
+ scsi_cmnd->host_scribble = (unsigned char *) req->req_id;
+
+ io = &req->qtcb->bottom.io;
req->status |= ZFCP_STATUS_FSFREQ_CLEANUP;
req->unit = unit;
req->data = scsi_cmnd;
req->handler = zfcp_fsf_send_fcp_command_handler;
req->qtcb->header.lun_handle = unit->handle;
req->qtcb->header.port_handle = unit->port->handle;
- req->qtcb->bottom.io.service_class = FSF_CLASS_3;
- req->qtcb->bottom.io.fcp_cmnd_length = FCP_CMND_LEN;
+ io->service_class = FSF_CLASS_3;
+ io->fcp_cmnd_length = FCP_CMND_LEN;
- scsi_cmnd->host_scribble = (unsigned char *) req->req_id;
-
- /*
- * set depending on data direction:
- * data direction bits in SBALE (SB Type)
- * data direction bits in QTCB
- */
- switch (scsi_cmnd->sc_data_direction) {
- case DMA_NONE:
- req->qtcb->bottom.io.data_direction = FSF_DATADIR_CMND;
- break;
- case DMA_FROM_DEVICE:
- req->qtcb->bottom.io.data_direction = FSF_DATADIR_READ;
- break;
- case DMA_TO_DEVICE:
- req->qtcb->bottom.io.data_direction = FSF_DATADIR_WRITE;
- break;
- case DMA_BIDIRECTIONAL:
- goto failed_scsi_cmnd;
+ if (scsi_get_prot_op(scsi_cmnd) != SCSI_PROT_NORMAL) {
+ io->data_block_length = scsi_cmnd->device->sector_size;
+ io->ref_tag_value = scsi_get_lba(scsi_cmnd) & 0xFFFFFFFF;
}
+ zfcp_fsf_set_data_dir(scsi_cmnd, &io->data_direction);
+
get_device(&unit->dev);
fcp_cmnd = (struct fcp_cmnd *) &req->qtcb->bottom.io.fcp_cmnd;
zfcp_fc_scsi_to_fcp(fcp_cmnd, scsi_cmnd);
+ if (scsi_prot_sg_count(scsi_cmnd)) {
+ zfcp_qdio_set_data_div(qdio, &req->qdio_req,
+ scsi_prot_sg_count(scsi_cmnd));
+ dix_bytes = zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req,
+ scsi_prot_sglist(scsi_cmnd));
+ io->prot_data_length = dix_bytes;
+ }
+
real_bytes = zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req,
scsi_sglist(scsi_cmnd));
- if (unlikely(real_bytes < 0))
+
+ if (unlikely(real_bytes < 0) || unlikely(dix_bytes < 0))
goto failed_scsi_cmnd;
+ zfcp_qdio_set_sbale_last(adapter->qdio, &req->qdio_req);
+
retval = zfcp_fsf_req_send(req);
if (unlikely(retval))
goto failed_scsi_cmnd;
@@ -2389,6 +2451,7 @@ struct zfcp_fsf_req *zfcp_fsf_control_fi
zfcp_fsf_req_free(req);
goto out;
}
+ zfcp_qdio_set_sbale_last(adapter->qdio, &req->qdio_req);
zfcp_fsf_start_timer(req, ZFCP_FSF_REQUEST_TIMEOUT);
retval = zfcp_fsf_req_send(req);
--- a/drivers/s390/scsi/zfcp_fsf.h
+++ b/drivers/s390/scsi/zfcp_fsf.h
@@ -80,11 +80,15 @@
#define FSF_REQUEST_SIZE_TOO_LARGE 0x00000061
#define FSF_RESPONSE_SIZE_TOO_LARGE 0x00000062
#define FSF_SBAL_MISMATCH 0x00000063
+#define FSF_INCONSISTENT_PROT_DATA 0x00000070
+#define FSF_INVALID_PROT_PARM 0x00000071
+#define FSF_BLOCK_GUARD_CHECK_FAILURE 0x00000081
+#define FSF_APP_TAG_CHECK_FAILURE 0x00000082
+#define FSF_REF_TAG_CHECK_FAILURE 0x00000083
#define FSF_ADAPTER_STATUS_AVAILABLE 0x000000AD
#define FSF_UNKNOWN_COMMAND 0x000000E2
#define FSF_UNKNOWN_OP_SUBTYPE 0x000000E3
#define FSF_INVALID_COMMAND_OPTION 0x000000E5
-/* #define FSF_ERROR 0x000000FF */
#define FSF_PROT_STATUS_QUAL_SIZE 16
#define FSF_STATUS_QUALIFIER_SIZE 16
@@ -147,6 +151,13 @@
#define FSF_DATADIR_WRITE 0x00000001
#define FSF_DATADIR_READ 0x00000002
#define FSF_DATADIR_CMND 0x00000004
+#define FSF_DATADIR_DIF_WRITE_INSERT 0x00000009
+#define FSF_DATADIR_DIF_READ_STRIP 0x0000000a
+#define FSF_DATADIR_DIF_WRITE_CONVERT 0x0000000b
+#define FSF_DATADIR_DIF_READ_CONVERT 0X0000000c
+
+/* data protection control flags */
+#define FSF_APP_TAG_CHECK_ENABLE 0x10
/* fc service class */
#define FSF_CLASS_3 0x00000003
@@ -162,6 +173,8 @@
#define FSF_FEATURE_ELS_CT_CHAINED_SBALS 0x00000020
#define FSF_FEATURE_UPDATE_ALERT 0x00000100
#define FSF_FEATURE_MEASUREMENT_DATA 0x00000200
+#define FSF_FEATURE_DIF_PROT_TYPE1 0x00010000
+#define FSF_FEATURE_DIX_PROT_TCPIP 0x00020000
/* host connection features */
#define FSF_FEATURE_NPIV_MODE 0x00000001
@@ -316,9 +329,14 @@ struct fsf_qtcb_header {
struct fsf_qtcb_bottom_io {
u32 data_direction;
u32 service_class;
- u8 res1[8];
+ u8 res1;
+ u8 data_prot_flags;
+ u16 app_tag_value;
+ u32 ref_tag_value;
u32 fcp_cmnd_length;
- u8 res2[12];
+ u32 data_block_length;
+ u32 prot_data_length;
+ u8 res2[4];
u8 fcp_cmnd[FSF_FCP_CMND_SIZE];
u8 fcp_rsp[FSF_FCP_RSP_SIZE];
u8 res3[64];
--- a/drivers/s390/scsi/zfcp_qdio.c
+++ b/drivers/s390/scsi/zfcp_qdio.c
@@ -193,10 +193,6 @@ int zfcp_qdio_sbals_from_sg(struct zfcp_
bytes += sg->length;
}
- /* assume that no other SBALEs are to follow in the same SBAL */
- sbale = zfcp_qdio_sbale_curr(qdio, q_req);
- sbale->flags |= SBAL_FLAGS_LAST_ENTRY;
-
return bytes;
}
--- a/drivers/s390/scsi/zfcp_qdio.h
+++ b/drivers/s390/scsi/zfcp_qdio.h
@@ -215,4 +215,20 @@ void zfcp_qdio_sbal_limit(struct zfcp_qd
QDIO_MAX_BUFFERS_PER_Q;
}
+/**
+ * zfcp_qdio_set_data_div - set data division count
+ * @qdio: pointer to struct zfcp_qdio
+ * @q_req: The current zfcp_qdio_req
+ * @count: The data division count
+ */
+static inline
+void zfcp_qdio_set_data_div(struct zfcp_qdio *qdio,
+ struct zfcp_qdio_req *q_req, u32 count)
+{
+ struct qdio_buffer_element *sbale;
+
+ sbale = &qdio->req_q[q_req->sbal_first]->element[0];
+ sbale->length = count;
+}
+
#endif /* ZFCP_QDIO_H */
--- a/drivers/s390/scsi/zfcp_scsi.c
+++ b/drivers/s390/scsi/zfcp_scsi.c
@@ -12,6 +12,7 @@
#include <linux/types.h>
#include <linux/slab.h>
#include <scsi/fc/fc_fcp.h>
+#include <scsi/scsi_eh.h>
#include <asm/atomic.h>
#include "zfcp_ext.h"
#include "zfcp_dbf.h"
@@ -22,6 +23,13 @@ static unsigned int default_depth = 32;
module_param_named(queue_depth, default_depth, uint, 0600);
MODULE_PARM_DESC(queue_depth, "Default queue depth for new SCSI devices");
+static bool enable_dif;
+
+#ifdef CONFIG_ZFCP_DIF
+module_param_named(dif, enable_dif, bool, 0600);
+MODULE_PARM_DESC(dif, "Enable DIF/DIX data integrity support");
+#endif
+
static int zfcp_scsi_change_queue_depth(struct scsi_device *sdev, int depth,
int reason)
{
@@ -652,6 +660,51 @@ void zfcp_scsi_scan_work(struct work_str
put_device(&unit->dev);
}
+/**
+ * zfcp_scsi_set_prot - Configure DIF/DIX support in scsi_host
+ * @adapter: The adapter where to configure DIF/DIX for the SCSI host
+ */
+void zfcp_scsi_set_prot(struct zfcp_adapter *adapter)
+{
+ unsigned int mask = 0;
+ unsigned int data_div;
+ struct Scsi_Host *shost = adapter->scsi_host;
+
+ data_div = atomic_read(&adapter->status) &
+ ZFCP_STATUS_ADAPTER_DATA_DIV_ENABLED;
+
+ if (enable_dif &&
+ adapter->adapter_features & FSF_FEATURE_DIF_PROT_TYPE1)
+ mask |= SHOST_DIF_TYPE1_PROTECTION;
+
+ if (enable_dif && data_div &&
+ adapter->adapter_features & FSF_FEATURE_DIX_PROT_TCPIP) {
+ mask |= SHOST_DIX_TYPE1_PROTECTION;
+ scsi_host_set_guard(shost, SHOST_DIX_GUARD_IP);
+ shost->sg_tablesize = ZFCP_QDIO_MAX_SBALES_PER_REQ / 2;
+ shost->max_sectors = ZFCP_QDIO_MAX_SBALES_PER_REQ * 8 / 2;
+ }
+
+ scsi_host_set_prot(shost, mask);
+}
+
+/**
+ * zfcp_scsi_dif_sense_error - Report DIF/DIX error as driver sense error
+ * @scmd: The SCSI command to report the error for
+ * @ascq: The ASCQ to put in the sense buffer
+ *
+ * See the error handling in sd_done for the sense codes used here.
+ * Set DID_SOFT_ERROR to retry the request, if possible.
+ */
+void zfcp_scsi_dif_sense_error(struct scsi_cmnd *scmd, int ascq)
+{
+ scsi_build_sense_buffer(1, scmd->sense_buffer,
+ ILLEGAL_REQUEST, 0x10, ascq);
+ set_driver_byte(scmd, DRIVER_SENSE);
+ scmd->result |= SAM_STAT_CHECK_CONDITION;
+ set_host_byte(scmd, DID_SOFT_ERROR);
+}
+
struct fc_function_template zfcp_transport_functions = {
.show_starget_port_id = 1,
.show_starget_port_name = 1,
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -1847,6 +1847,10 @@ config ZFCP
called zfcp. If you want to compile it as a module, say M here
and read <file:Documentation/kbuild/modules.txt>.
+config ZFCP_DIF
+ tristate "T10 DIF/DIX support for the zfcp driver (EXPERIMENTAL)"
+ depends on ZFCP && EXPERIMENTAL
+
config SCSI_PMCRAID
tristate "PMC SIERRA Linux MaxRAID adapter support"
depends on PCI && SCSI
* [patch 10/10] zfcp: Trigger logging in the FCP channel on qdio error conditions
2010-07-16 13:37 [patch 00/10] zfcp updates for 2.6.36 merge window Christof Schmitt
` (8 preceding siblings ...)
2010-07-16 13:37 ` [patch 09/10] zfcp: Introduce experimental support for DIF/DIX Christof Schmitt
@ 2010-07-16 13:37 ` Christof Schmitt
9 siblings, 0 replies; 13+ messages in thread
From: Christof Schmitt @ 2010-07-16 13:37 UTC (permalink / raw)
To: James Bottomley
Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens,
Christof Schmitt
[-- Attachment #1: 700-zfcp-log-errors.diff --]
[-- Type: text/plain, Size: 5634 bytes --]
From: Christof Schmitt <christof.schmitt@de.ibm.com>
Exploit the cio siosl function to trigger logging in the FCP channel
on qdio error conditions. Add a helper function in zfcp_qdio to ensure
that tracing is only triggered once before calling qdio_shutdown.
Triggers in zfcp for hardware logging are:
- timeout for FSF requests to the FCP channel
- "no recommendation" status from FCP channel
- invalid FSF protocol status
- stalled outbound queue
- unknown request id on inbound queue
- QDIO_ERROR_SLSB_STATE
All of the above triggers run from the Linux qdio softirq context, so
no additional synchronization is necessary for the handling of the
ZFCP_STATUS_ADAPTER_SIOSL_ISSUED flag.
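For quick reference, here is a condensed sketch of the once-only trigger
pattern described above; it simply mirrors the zfcp_qdio_siosl helper added
in the zfcp_qdio.c hunk below and is not part of the applied patch:

	void zfcp_qdio_siosl(struct zfcp_adapter *adapter)
	{
		/* issue at most one SIOSL per qdio shutdown cycle */
		if (atomic_read(&adapter->status) &
		    ZFCP_STATUS_ADAPTER_SIOSL_ISSUED)
			return;

		/* ask the channel subsystem to collect the FCP channel log */
		if (!ccw_device_siosl(adapter->ccw_device))
			atomic_set_mask(ZFCP_STATUS_ADAPTER_SIOSL_ISSUED,
					&adapter->status);
	}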
Reviewed-by: Swen Schillig <swen@vnet.ibm.com>
Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
---
drivers/s390/scsi/zfcp_def.h | 1 +
drivers/s390/scsi/zfcp_ext.h | 1 +
drivers/s390/scsi/zfcp_fsf.c | 7 ++++++-
drivers/s390/scsi/zfcp_qdio.c | 35 ++++++++++++++++++++++++++++++++---
4 files changed, 40 insertions(+), 4 deletions(-)
--- a/drivers/s390/scsi/zfcp_def.h
+++ b/drivers/s390/scsi/zfcp_def.h
@@ -73,6 +73,7 @@ struct zfcp_reqlist;
/* adapter status */
#define ZFCP_STATUS_ADAPTER_QDIOUP 0x00000002
+#define ZFCP_STATUS_ADAPTER_SIOSL_ISSUED 0x00000004
#define ZFCP_STATUS_ADAPTER_XCONFIG_OK 0x00000008
#define ZFCP_STATUS_ADAPTER_HOST_CON_INIT 0x00000010
#define ZFCP_STATUS_ADAPTER_ERP_PENDING 0x00000100
--- a/drivers/s390/scsi/zfcp_ext.h
+++ b/drivers/s390/scsi/zfcp_ext.h
@@ -152,6 +152,7 @@ extern int zfcp_qdio_sbals_from_sg(struc
struct scatterlist *);
extern int zfcp_qdio_open(struct zfcp_qdio *);
extern void zfcp_qdio_close(struct zfcp_qdio *);
+extern void zfcp_qdio_siosl(struct zfcp_adapter *);
/* zfcp_scsi.c */
extern struct zfcp_data zfcp_data;
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -21,6 +21,7 @@
static void zfcp_fsf_request_timeout_handler(unsigned long data)
{
struct zfcp_adapter *adapter = (struct zfcp_adapter *) data;
+ zfcp_qdio_siosl(adapter);
zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED,
"fsrth_1", NULL);
}
@@ -326,6 +327,7 @@ static void zfcp_fsf_fsfstatus_qual_eval
dev_err(&req->adapter->ccw_device->dev,
"The FCP adapter reported a problem "
"that cannot be recovered\n");
+ zfcp_qdio_siosl(req->adapter);
zfcp_erp_adapter_shutdown(req->adapter, 0, "fsfsqe1", req);
break;
}
@@ -416,6 +418,7 @@ static void zfcp_fsf_protstatus_eval(str
dev_err(&adapter->ccw_device->dev,
"0x%x is not a valid transfer protocol status\n",
qtcb->prefix.prot_status);
+ zfcp_qdio_siosl(adapter);
zfcp_erp_adapter_shutdown(adapter, 0, "fspse_9", req);
}
req->status |= ZFCP_STATUS_FSFREQ_ERROR;
@@ -2485,13 +2488,15 @@ void zfcp_fsf_reqid_check(struct zfcp_qd
req_id = (unsigned long) sbale->addr;
fsf_req = zfcp_reqlist_find_rm(adapter->req_list, req_id);
- if (!fsf_req)
+ if (!fsf_req) {
/*
* Unknown request means that we have potentially memory
* corruption and must stop the machine immediately.
*/
+ zfcp_qdio_siosl(adapter);
panic("error: unknown req_id (%lx) on adapter %s.\n",
req_id, dev_name(&adapter->ccw_device->dev));
+ }
fsf_req->qdio_req.sbal_response = sbal_idx;
zfcp_fsf_req_complete(fsf_req);
--- a/drivers/s390/scsi/zfcp_qdio.c
+++ b/drivers/s390/scsi/zfcp_qdio.c
@@ -30,12 +30,15 @@ static int zfcp_qdio_buffers_enqueue(str
return 0;
}
-static void zfcp_qdio_handler_error(struct zfcp_qdio *qdio, char *id)
+static void zfcp_qdio_handler_error(struct zfcp_qdio *qdio, char *id,
+ unsigned int qdio_err)
{
struct zfcp_adapter *adapter = qdio->adapter;
dev_warn(&adapter->ccw_device->dev, "A QDIO problem occurred\n");
+ if (qdio_err & QDIO_ERROR_SLSB_STATE)
+ zfcp_qdio_siosl(adapter);
zfcp_erp_adapter_reopen(adapter,
ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED |
ZFCP_STATUS_COMMON_ERP_FAILED, id, NULL);
@@ -74,7 +77,7 @@ static void zfcp_qdio_int_req(struct ccw
if (unlikely(qdio_err)) {
zfcp_dbf_hba_qdio(qdio->adapter->dbf, qdio_err, idx, count);
- zfcp_qdio_handler_error(qdio, "qdireq1");
+ zfcp_qdio_handler_error(qdio, "qdireq1", qdio_err);
return;
}
@@ -95,7 +98,7 @@ static void zfcp_qdio_int_resp(struct cc
if (unlikely(qdio_err)) {
zfcp_dbf_hba_qdio(qdio->adapter->dbf, qdio_err, idx, count);
- zfcp_qdio_handler_error(qdio, "qdires1");
+ zfcp_qdio_handler_error(qdio, "qdires1", qdio_err);
return;
}
@@ -361,6 +364,9 @@ int zfcp_qdio_open(struct zfcp_qdio *qdi
if (atomic_read(&adapter->status) & ZFCP_STATUS_ADAPTER_QDIOUP)
return -EIO;
+ atomic_clear_mask(ZFCP_STATUS_ADAPTER_SIOSL_ISSUED,
+ &qdio->adapter->status);
+
zfcp_qdio_setup_init_data(&init_data, qdio);
if (qdio_establish(&init_data))
@@ -440,3 +446,26 @@ int zfcp_qdio_setup(struct zfcp_adapter
return 0;
}
+/**
+ * zfcp_qdio_siosl - Trigger logging in FCP channel
+ * @adapter: The zfcp_adapter where to trigger logging
+ *
+ * Call the cio siosl function to trigger hardware logging. This
+ * wrapper function sets a flag to ensure hardware logging is only
+ * triggered once before going through qdio shutdown.
+ *
+ * The triggers are always run from qdio tasklet context, so no
+ * additional synchronization is necessary.
+ */
+void zfcp_qdio_siosl(struct zfcp_adapter *adapter)
+{
+ int rc;
+
+ if (atomic_read(&adapter->status) & ZFCP_STATUS_ADAPTER_SIOSL_ISSUED)
+ return;
+
+ rc = ccw_device_siosl(adapter->ccw_device);
+ if (!rc)
+ atomic_set_mask(ZFCP_STATUS_ADAPTER_SIOSL_ISSUED,
+ &adapter->status);
+}
* Re: [patch 02/10] zfcp: Remove SCSI device when removing unit
2010-07-16 13:37 ` [patch 02/10] zfcp: Remove SCSI device when removing unit Christof Schmitt
@ 2010-07-27 20:37 ` James Bottomley
2010-07-28 8:35 ` Christof Schmitt
0 siblings, 1 reply; 13+ messages in thread
From: James Bottomley @ 2010-07-27 20:37 UTC (permalink / raw)
To: Christof Schmitt; +Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens
On Fri, 2010-07-16 at 15:37 +0200, Christof Schmitt wrote:
> plain text document attachment (705-zfcp-unit-removal.diff)
> From: Christof Schmitt <christof.schmitt@de.ibm.com>
>
> Configuring a LUN in zfcp, also creates a SCSI device. For
> consistency, it makes sense to remove the SCSI device when the LUN is
> deconfigured. Replace the flush_work with the call to
> scsi_remove_device: scsi_remove_device also takes the scan_mutex that
> synchronizes itself with any long running device discovery.
This one's rejecting:
jejb@mulgrave:~/git/scsi-misc-2.6> patch -p1 < ~/tmp/tmp/02
patching file drivers/s390/scsi/zfcp_def.h
patching file drivers/s390/scsi/zfcp_scsi.c
Hunk #1 FAILED at 564.
1 out of 1 hunk FAILED -- saving rejects to file
drivers/s390/scsi/zfcp_scsi.c.rej
It looks like a missing patch somewhere.
Could you check the current state of scsi-misc-2.6 (it includes
scsi-rc-fixes-2.6) and see if I've lost something?
Thanks,
James
* Re: [patch 02/10] zfcp: Remove SCSI device when removing unit
2010-07-27 20:37 ` James Bottomley
@ 2010-07-28 8:35 ` Christof Schmitt
0 siblings, 0 replies; 13+ messages in thread
From: Christof Schmitt @ 2010-07-28 8:35 UTC (permalink / raw)
To: James Bottomley; +Cc: linux-scsi, linux-s390, schwidefsky, heiko.carstens
On Tue, Jul 27, 2010 at 03:37:55PM -0500, James Bottomley wrote:
> On Fri, 2010-07-16 at 15:37 +0200, Christof Schmitt wrote:
> > plain text document attachment (705-zfcp-unit-removal.diff)
> > From: Christof Schmitt <christof.schmitt@de.ibm.com>
> >
> > Configuring a LUN in zfcp, also creates a SCSI device. For
> > consistency, it makes sense to remove the SCSI device when the LUN is
> > deconfigured. Replace the flush_work with the call to
> > scsi_remove_device: scsi_remove_device also takes the scan_mutex that
> > synchronizes itself with any long running device discovery.
>
> This one's rejecting:
>
> jejb@mulgrave:~/git/scsi-misc-2.6> patch -p1 < ~/tmp/tmp/02
> patching file drivers/s390/scsi/zfcp_def.h
> patching file drivers/s390/scsi/zfcp_scsi.c
> Hunk #1 FAILED at 564.
> 1 out of 1 hunk FAILED -- saving rejects to file
> drivers/s390/scsi/zfcp_scsi.c.rej
>
> It looks like a missing patch somewhere.
>
> Could you check the current state of scsi-misc-2.6 (it includes
> scsi-rc-fixes-2.6) and see if I've lost something?
scsi-misc and scsi-rc-fixes have the three fixes I sent for 2.6.35. I
sent them again with six additional fixes here:
http://marc.info/?l=linux-scsi&m=127857607617662&w=2
The six additional patches are missing, which seems to cause the
reject. On top of scsi-misc, these patches apply cleanly in this
order:
From http://marc.info/?l=linux-scsi&m=127857607617662&w=2
zfcp: Do not unblock rport from REOPEN_PORT_FORCED
zfcp: Do not try "forced close" when port is already closed
zfcp: Register SCSI devices after successful fc_remote_port_add
zfcp: Use forced_reopen in terminate_rport_io callback
zfcp: Fail erp after timeout
zfcp: Fix retry after failed "open port" erp action
From http://marc.info/?l=linux-scsi&m=127928780100336&w=2
zfcp: Use memdup_user and kstrdup
zfcp: Remove SCSI device when removing unit
zfcp: Use correct width for timer_interval field
zfcp: Cleanup function parameters for sbal value.
zfcp: Cleanup QDIO attachment and improve processing.
zfcp: Post events through FC transport class
zfcp: Prevent access on uninitialized memory.
zfcp: Enable data division support for FCP devices
zfcp: Introduce experimental support for DIF/DIX
zfcp: Trigger logging in the FCP channel on qdio error conditions
I can also resend them in one series if this makes things easier.
Christof