* [PATCH v2 0/8] scaling and bug fixes for 2.6.38
@ 2011-01-05 20:35 David Dillow
2011-01-05 20:35 ` [PATCH v2 1/8] IB/srp: allow task management without a previous request David Dillow
` (3 more replies)
0 siblings, 4 replies; 17+ messages in thread
From: David Dillow @ 2011-01-05 20:35 UTC (permalink / raw)
To: linux-rdma; +Cc: linux-scsi, bvanassche
Updated patches to reflect the list's review and to better reflect Bart's
role in preparing them. Once he's had a chance to look over them again,
I'll push these to my repo and send a pull request.
For convenience, the diff from the results of the first series to the
results of this series is below.
Bart Van Assche (6):
IB/srp: consolidate state change code
IB/srp: allow lockless work posting
IB/srp: don't move active requests to their own list
IB/srp: reduce local coverage for command submission and EH
IB/srp: reduce lock coverage of command completion
IB/srp: stop sharing the host lock with SCSI
David Dillow (2):
IB/srp: allow task management without a previous request
IB/srp: consolidate hot-path variables into cache lines
drivers/infiniband/ulp/srp/ib_srp.c | 392 ++++++++++++++++-------------------
drivers/infiniband/ulp/srp/ib_srp.h | 46 +++--
2 files changed, 206 insertions(+), 232 deletions(-)
--
diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index cef6191..7dcefe4 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -445,7 +445,7 @@ static bool srp_change_state(struct srp_target_port *target,
enum srp_target_state old,
enum srp_target_state new)
{
- int changed = false;
+ bool changed = false;
spin_lock_irq(&target->lock);
if (target->state == old) {
@@ -558,7 +558,7 @@ static void srp_remove_req(struct srp_target_port *target,
spin_lock_irqsave(&target->lock, flags);
target->req_lim += req_lim_delta;
req->scmnd = NULL;
- list_move_tail(&req->list, &target->free_reqs);
+ list_add_tail(&req->list, &target->free_reqs);
spin_unlock_irqrestore(&target->lock, flags);
}
@@ -609,7 +609,7 @@ static int srp_reconnect_target(struct srp_target_port *target)
list_del_init(&target->free_tx);
for (i = 0; i < SRP_SQ_SIZE; ++i)
- list_move(&target->tx_ring[i]->list, &target->free_tx);
+ list_add(&target->tx_ring[i]->list, &target->free_tx);
target->qp_in_error = 0;
ret = srp_connect_target(target);
@@ -871,7 +871,7 @@ static struct srp_iu *__srp_get_tx_iu(struct srp_target_port *target,
}
iu = list_first_entry(&target->free_tx, struct srp_iu, list);
- list_del_init(&iu->list);
+ list_del(&iu->list);
return iu;
}
@@ -916,8 +916,13 @@ static void srp_process_rsp(struct srp_target_port *target, struct srp_rsp *rsp)
{
struct srp_request *req;
struct scsi_cmnd *scmnd;
+ unsigned long flags;
if (unlikely(rsp->tag & SRP_TAG_TSK_MGMT)) {
+ spin_lock_irqsave(&target->lock, flags);
+ target->req_lim += be32_to_cpu(rsp->req_lim_delta);
+ spin_unlock_irqrestore(&target->lock, flags);
+
target->tsk_mgmt_status = -1;
if (be32_to_cpu(rsp->resp_data_len) >= 4)
target->tsk_mgmt_status = rsp->data[3];
@@ -1126,13 +1131,11 @@ static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd)
}
spin_lock_irqsave(&target->lock, flags);
- /* This goes away once the scsi_eh routines stop testing it. */
- scsi_cmd_get_serial(shost, scmnd);
iu = __srp_get_tx_iu(target, SRP_IU_CMD);
if (iu) {
req = list_first_entry(&target->free_reqs, struct srp_request,
list);
- list_del_init(&req->list);
+ list_del(&req->list);
}
spin_unlock_irqrestore(&target->lock, flags);
@@ -1207,7 +1210,6 @@ static int srp_alloc_iu_bufs(struct srp_target_port *target)
if (!target->tx_ring[i])
goto err;
- INIT_LIST_HEAD(&target->tx_ring[i]->list);
list_add(&target->tx_ring[i]->list, &target->free_tx);
}
@@ -1461,10 +1463,10 @@ static int srp_send_tsk_mgmt(struct srp_target_port *target,
memset(tsk_mgmt, 0, sizeof *tsk_mgmt);
tsk_mgmt->opcode = SRP_TSK_MGMT;
- tsk_mgmt->lun = cpu_to_be64((u64) lun << 48);
- tsk_mgmt->tag = req_tag | SRP_TAG_TSK_MGMT;
+ tsk_mgmt->lun = cpu_to_be64((u64) lun << 48);
+ tsk_mgmt->tag = req_tag | SRP_TAG_TSK_MGMT;
tsk_mgmt->tsk_mgmt_func = func;
- tsk_mgmt->task_tag = req_tag;
+ tsk_mgmt->task_tag = req_tag;
ib_dma_sync_single_for_device(dev, iu->dma, sizeof *tsk_mgmt,
DMA_TO_DEVICE);
diff --git a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h
index 43f9129..9dc6fc3 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.h
+++ b/drivers/infiniband/ulp/srp/ib_srp.h
@@ -66,8 +66,8 @@ enum {
SRP_TSK_MGMT_SQ_SIZE = 1,
SRP_CMD_SQ_SIZE = SRP_REQ_SQ_SIZE - SRP_TSK_MGMT_SQ_SIZE,
- SRP_TAG_NO_REQ = ~0UL,
- SRP_TAG_TSK_MGMT = 1UL << 31,
+ SRP_TAG_NO_REQ = ~0U,
+ SRP_TAG_TSK_MGMT = 1U << 31,
SRP_FMR_SIZE = 256,
SRP_FMR_POOL_SIZE = 1024,
@@ -124,7 +124,7 @@ struct srp_target_port {
s32 req_lim;
/* These are read-only in the hot path */
- struct ib_cq *send_cq ____cacheline_aligned;
+ struct ib_cq *send_cq ____cacheline_aligned_in_smp;
struct ib_cq *recv_cq;
struct ib_qp *qp;
u32 lkey;
^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 1/8] IB/srp: allow task management without a previous request
  2011-01-05 20:35 [PATCH v2 0/8] scaling and bug fixes for 2.6.38 David Dillow
@ 2011-01-05 20:35 ` David Dillow
  2011-01-05 20:35 ` [PATCH v2 3/8] IB/srp: allow lockless work posting David Dillow
  ` (2 subsequent siblings)
  3 siblings, 0 replies; 17+ messages in thread
From: David Dillow @ 2011-01-05 20:35 UTC (permalink / raw)
  To: linux-rdma; +Cc: linux-scsi, bvanassche

We can only have one task management command outstanding, so move the
completion and status to the target port. This allows us to handle
resets of a LUN without a corresponding request having been sent.
Meanwhile, we don't need to play games with host_scribble, just use it
as the pointer it is.

This fixes a crash when we issue a bus reset using sg_reset.

Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=13893
Reported-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: David Dillow <dillowda@ornl.gov>
---
 drivers/infiniband/ulp/srp/ib_srp.c | 90 ++++++++++++----------------------
 drivers/infiniband/ulp/srp/ib_srp.h | 10 ++--
 2 files changed, 37 insertions(+), 63 deletions(-)

diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index 1e1e347..29429a1 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -542,6 +542,7 @@ static void srp_unmap_data(struct scsi_cmnd *scmnd,
 static void srp_remove_req(struct srp_target_port *target, struct srp_request *req)
 {
 	srp_unmap_data(req->scmnd, target, req);
+	req->scmnd = NULL;
 	list_move_tail(&req->list, &target->free_reqs);
 }
 
@@ -925,15 +926,13 @@ static void srp_process_rsp(struct srp_target_port *target, struct srp_rsp *rsp)
 
 	target->req_lim += delta;
 
-	req = &target->req_ring[rsp->tag & ~SRP_TAG_TSK_MGMT];
-
 	if (unlikely(rsp->tag & SRP_TAG_TSK_MGMT)) {
-		if (be32_to_cpu(rsp->resp_data_len) < 4)
-			req->tsk_status = -1;
-		else
-			req->tsk_status = rsp->data[3];
-		complete(&req->done);
+		target->tsk_mgmt_status = -1;
+		if (be32_to_cpu(rsp->resp_data_len) >= 4)
+			target->tsk_mgmt_status = rsp->data[3];
+		complete(&target->tsk_mgmt_done);
 	} else {
+		req = &target->req_ring[rsp->tag];
 		scmnd = req->scmnd;
 		if (!scmnd)
 			shost_printk(KERN_ERR, target->scsi_host,
@@ -953,13 +952,9 @@ static void srp_process_rsp(struct srp_target_port *target, struct srp_rsp *rsp)
 		else if (rsp->flags & (SRP_RSP_FLAG_DIOVER | SRP_RSP_FLAG_DIUNDER))
 			scsi_set_resid(scmnd, be32_to_cpu(rsp->data_in_res_cnt));
 
-		if (!req->tsk_mgmt) {
-			scmnd->host_scribble = (void *) -1L;
-			scmnd->scsi_done(scmnd);
-
-			srp_remove_req(target, req);
-		} else
-			req->cmd_done = 1;
+		scmnd->host_scribble = NULL;
+		scmnd->scsi_done(scmnd);
+		srp_remove_req(target, req);
 	}
 
 	spin_unlock_irqrestore(target->scsi_host->host_lock, flags);
@@ -1155,7 +1150,7 @@ static int srp_queuecommand_lck(struct scsi_cmnd *scmnd,
 
 	scmnd->scsi_done     = done;
 	scmnd->result        = 0;
-	scmnd->host_scribble = (void *) (long) req->index;
+	scmnd->host_scribble = (void *) req;
 
 	cmd = iu->buf;
 	memset(cmd, 0, sizeof *cmd);
@@ -1167,8 +1162,6 @@ static int srp_queuecommand_lck(struct scsi_cmnd *scmnd,
 
 	req->scmnd    = scmnd;
 	req->cmd      = iu;
-	req->cmd_done = 0;
-	req->tsk_mgmt = NULL;
 
 	len = srp_map_data(scmnd, target, req);
 	if (len < 0) {
@@ -1442,7 +1435,7 @@ static int srp_cm_handler(struct ib_cm_id *cm_id, struct ib_cm_event *event)
 }
 
 static int srp_send_tsk_mgmt(struct srp_target_port *target,
-			     struct srp_request *req, u8 func)
+			     u64 req_tag, unsigned int lun, u8 func)
 {
 	struct ib_device *dev = target->srp_host->srp_dev->dev;
 	struct srp_iu *iu;
@@ -1451,12 +1444,10 @@ static int srp_send_tsk_mgmt(struct srp_target_port *target,
 	spin_lock_irq(target->scsi_host->host_lock);
 
 	if (target->state == SRP_TARGET_DEAD ||
-	    target->state == SRP_TARGET_REMOVED) {
-		req->scmnd->result = DID_BAD_TARGET << 16;
+	    target->state == SRP_TARGET_REMOVED)
 		goto out;
-	}
 
-	init_completion(&req->done);
+	init_completion(&target->tsk_mgmt_done);
 
 	iu = __srp_get_tx_iu(target, SRP_IU_TSK_MGMT);
 	if (!iu)
@@ -1468,21 +1459,19 @@ static int srp_send_tsk_mgmt(struct srp_target_port *target,
 	memset(tsk_mgmt, 0, sizeof *tsk_mgmt);
 
 	tsk_mgmt->opcode 	= SRP_TSK_MGMT;
-	tsk_mgmt->lun 		= cpu_to_be64((u64) req->scmnd->device->lun << 48);
-	tsk_mgmt->tag 		= req->index | SRP_TAG_TSK_MGMT;
+	tsk_mgmt->lun 		= cpu_to_be64((u64) lun << 48);
+	tsk_mgmt->tag 		= req_tag | SRP_TAG_TSK_MGMT;
 	tsk_mgmt->tsk_mgmt_func = func;
-	tsk_mgmt->task_tag 	= req->index;
+	tsk_mgmt->task_tag 	= req_tag;
 
 	ib_dma_sync_single_for_device(dev, iu->dma, sizeof *tsk_mgmt,
 				      DMA_TO_DEVICE);
 	if (__srp_post_send(target, iu, sizeof *tsk_mgmt))
 		goto out;
 
-	req->tsk_mgmt = iu;
-
 	spin_unlock_irq(target->scsi_host->host_lock);
 
-	if (!wait_for_completion_timeout(&req->done,
+	if (!wait_for_completion_timeout(&target->tsk_mgmt_done,
 					 msecs_to_jiffies(SRP_ABORT_TIMEOUT_MS)))
 		return -1;
 
@@ -1493,43 +1482,29 @@ out:
 	return -1;
 }
 
-static int srp_find_req(struct srp_target_port *target,
-			struct scsi_cmnd *scmnd,
-			struct srp_request **req)
-{
-	if (scmnd->host_scribble == (void *) -1L)
-		return -1;
-
-	*req = &target->req_ring[(long) scmnd->host_scribble];
-
-	return 0;
-}
-
 static int srp_abort(struct scsi_cmnd *scmnd)
 {
 	struct srp_target_port *target = host_to_target(scmnd->device->host);
-	struct srp_request *req;
+	struct srp_request *req = (struct srp_request *) scmnd->host_scribble;
 	int ret = SUCCESS;
 
 	shost_printk(KERN_ERR, target->scsi_host, "SRP abort called\n");
 
-	if (target->qp_in_error)
-		return FAILED;
-	if (srp_find_req(target, scmnd, &req))
+	if (!req || target->qp_in_error)
 		return FAILED;
-	if (srp_send_tsk_mgmt(target, req, SRP_TSK_ABORT_TASK))
+	if (srp_send_tsk_mgmt(target, req->index, scmnd->device->lun,
+			      SRP_TSK_ABORT_TASK))
 		return FAILED;
 
 	spin_lock_irq(target->scsi_host->host_lock);
-	if (req->cmd_done) {
-		srp_remove_req(target, req);
-		scmnd->scsi_done(scmnd);
-	} else if (!req->tsk_status) {
-		srp_remove_req(target, req);
-		scmnd->result = DID_ABORT << 16;
-	} else
-		ret = FAILED;
+	if (req->scmnd) {
+		if (!target->tsk_mgmt_status) {
+			srp_remove_req(target, req);
+			scmnd->result = DID_ABORT << 16;
+		} else
+			ret = FAILED;
+	}
 	spin_unlock_irq(target->scsi_host->host_lock);
 
@@ -1545,17 +1520,16 @@ static int srp_reset_device(struct scsi_cmnd *scmnd)
 
 	if (target->qp_in_error)
 		return FAILED;
-	if (srp_find_req(target, scmnd, &req))
-		return FAILED;
-	if (srp_send_tsk_mgmt(target, req, SRP_TSK_LUN_RESET))
+	if (srp_send_tsk_mgmt(target, SRP_TAG_NO_REQ, scmnd->device->lun,
+			      SRP_TSK_LUN_RESET))
 		return FAILED;
-	if (req->tsk_status)
+	if (target->tsk_mgmt_status)
 		return FAILED;
 
 	spin_lock_irq(target->scsi_host->host_lock);
 	list_for_each_entry_safe(req, tmp, &target->req_queue, list)
-		if (req->scmnd->device == scmnd->device)
+		if (req->scmnd && req->scmnd->device == scmnd->device)
 			srp_reset_req(target, req);
 	spin_unlock_irq(target->scsi_host->host_lock);
 
diff --git a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h
index ed0dce9..f8b689a 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.h
+++ b/drivers/infiniband/ulp/srp/ib_srp.h
@@ -68,7 +68,8 @@ enum {
 	SRP_TSK_MGMT_SQ_SIZE = 1,
 	SRP_CMD_SQ_SIZE = SRP_REQ_SQ_SIZE - SRP_TSK_MGMT_SQ_SIZE,
 
-	SRP_TAG_TSK_MGMT = 1 << (SRP_RQ_SHIFT + 1),
+	SRP_TAG_NO_REQ = ~0U,
+	SRP_TAG_TSK_MGMT = 1U << 31,
 
 	SRP_FMR_SIZE = 256,
 	SRP_FMR_POOL_SIZE = 1024,
@@ -113,12 +114,8 @@ struct srp_request {
 	struct list_head list;
 	struct scsi_cmnd *scmnd;
 	struct srp_iu *cmd;
-	struct srp_iu *tsk_mgmt;
 	struct ib_pool_fmr *fmr;
-	struct completion done;
 	short index;
-	u8 cmd_done;
-	u8 tsk_status;
 };
 
 struct srp_target_port {
@@ -165,6 +162,9 @@ struct srp_target_port {
 	int status;
 	enum srp_target_state state;
 	int qp_in_error;
+
+	struct completion tsk_mgmt_done;
+	u8 tsk_mgmt_status;
 };
 
 struct srp_iu {
-- 
1.7.3.4

^ permalink raw reply related	[flat|nested] 17+ messages in thread
* [PATCH v2 3/8] IB/srp: allow lockless work posting
  2011-01-05 20:35 [PATCH v2 0/8] scaling and bug fixes for 2.6.38 David Dillow
  2011-01-05 20:35 ` [PATCH v2 1/8] IB/srp: allow task management without a previous request David Dillow
@ 2011-01-05 20:35 ` David Dillow
  [not found]   ` <1294259716-30706-4-git-send-email-dillowda-1Heg1YXhbW8@public.gmane.org>
  2011-01-05 20:35 ` [PATCH v2 8/8] IB/srp: consolidate hot-path variables into cache lines David Dillow
  [not found] ` <1294259716-30706-1-git-send-email-dillowda-1Heg1YXhbW8@public.gmane.org>
  3 siblings, 1 reply; 17+ messages in thread
From: David Dillow @ 2011-01-05 20:35 UTC (permalink / raw)
  To: linux-rdma; +Cc: linux-scsi, bvanassche

From: Bart Van Assche <bvanassche@acm.org>

Only one CPU at a time will own an RX IU, so using the address of the IU
as the work request cookie allows us to avoid taking a lock. We can
similarly prepare the TX path for lockless posting by moving the free TX
IUs to a list. This also removes the requirement that the queue sizes be
a power of 2.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
[ broken out, small cleanups, and modified to avoid needing an extra
  field in the IU by David Dillow ]
Signed-off-by: David Dillow <dillowda@ornl.gov>
---
 drivers/infiniband/ulp/srp/ib_srp.c | 65 ++++++++++++++---------------------
 drivers/infiniband/ulp/srp/ib_srp.h | 7 +---
 2 files changed, 28 insertions(+), 44 deletions(-)

diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index def9e6b..aa78d26 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -568,7 +568,7 @@ static int srp_reconnect_target(struct srp_target_port *target)
 	struct ib_qp_attr qp_attr;
 	struct srp_request *req, *tmp;
 	struct ib_wc wc;
-	int ret;
+	int i, ret;
 
 	if (!srp_change_state(target, SRP_TARGET_LIVE, SRP_TARGET_CONNECTING))
 		return -EAGAIN;
@@ -601,9 +601,9 @@ static int srp_reconnect_target(struct srp_target_port *target)
 		srp_reset_req(target, req);
 	spin_unlock_irq(target->scsi_host->host_lock);
 
-	target->rx_head	 = 0;
-	target->tx_head	 = 0;
-	target->tx_tail	 = 0;
+	list_del_init(&target->free_tx);
+	for (i = 0; i < SRP_SQ_SIZE; ++i)
+		list_move(&target->tx_ring[i]->list, &target->free_tx);
 
 	target->qp_in_error = 0;
 	ret = srp_connect_target(target);
@@ -817,7 +817,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_target_port *target,
 
 /*
  * Must be called with target->scsi_host->host_lock held to protect
- * req_lim and tx_head.  Lock cannot be dropped between call here and
+ * req_lim and free_tx.  Lock cannot be dropped between call here and
  * call to __srp_post_send().
 *
 * Note:
@@ -837,7 +837,7 @@ static struct srp_iu *__srp_get_tx_iu(struct srp_target_port *target,
 		srp_send_completion(target->send_cq, target);
 
-	if (target->tx_head - target->tx_tail >= SRP_SQ_SIZE)
+	if (list_empty(&target->free_tx))
 		return NULL;
 
 	/* Initiator responses to target requests do not consume credits */
@@ -846,14 +846,14 @@ static struct srp_iu *__srp_get_tx_iu(struct srp_target_port *target,
 		return NULL;
 	}
 
-	iu = target->tx_ring[target->tx_head & SRP_SQ_MASK];
+	iu = list_first_entry(&target->free_tx, struct srp_iu, list);
 	iu->type = iu_type;
 	return iu;
 }
 
 /*
  * Must be called with target->scsi_host->host_lock held to protect
- * req_lim and tx_head.
+ * req_lim and free_tx.
  */
 static int __srp_post_send(struct srp_target_port *target,
 			   struct srp_iu *iu, int len)
@@ -867,7 +867,7 @@ static int __srp_post_send(struct srp_target_port *target,
 	list.lkey   = target->srp_host->srp_dev->mr->lkey;
 
 	wr.next       = NULL;
-	wr.wr_id      = target->tx_head & SRP_SQ_MASK;
+	wr.wr_id      = (uintptr_t) iu;
 	wr.sg_list    = &list;
 	wr.num_sge    = 1;
 	wr.opcode     = IB_WR_SEND;
@@ -876,7 +876,7 @@ static int __srp_post_send(struct srp_target_port *target,
 	ret = ib_post_send(target->qp, &wr, &bad_wr);
 
 	if (!ret) {
-		++target->tx_head;
+		list_del(&iu->list);
 		if (iu->type != SRP_IU_RSP)
 			--target->req_lim;
 	}
@@ -884,36 +884,21 @@ static int __srp_post_send(struct srp_target_port *target,
 	return ret;
 }
 
-static int srp_post_recv(struct srp_target_port *target)
+static int srp_post_recv(struct srp_target_port *target, struct srp_iu *iu)
 {
-	unsigned long flags;
-	struct srp_iu *iu;
-	struct ib_sge list;
 	struct ib_recv_wr wr, *bad_wr;
-	unsigned int next;
-	int ret;
-
-	spin_lock_irqsave(target->scsi_host->host_lock, flags);
-
-	next	 = target->rx_head & SRP_RQ_MASK;
-	wr.wr_id = next;
-	iu	 = target->rx_ring[next];
+	struct ib_sge list;
 
 	list.addr   = iu->dma;
 	list.length = iu->size;
 	list.lkey   = target->srp_host->srp_dev->mr->lkey;
 
 	wr.next     = NULL;
+	wr.wr_id    = (uintptr_t) iu;
 	wr.sg_list  = &list;
 	wr.num_sge  = 1;
 
-	ret = ib_post_recv(target->qp, &wr, &bad_wr);
-	if (!ret)
-		++target->rx_head;
-
-	spin_unlock_irqrestore(target->scsi_host->host_lock, flags);
-
-	return ret;
+	return ib_post_recv(target->qp, &wr, &bad_wr);
 }
 
 static void srp_process_rsp(struct srp_target_port *target, struct srp_rsp *rsp)
@@ -1030,14 +1015,11 @@ static void srp_process_aer_req(struct srp_target_port *target,
 
 static void srp_handle_recv(struct srp_target_port *target, struct ib_wc *wc)
 {
-	struct ib_device *dev;
-	struct srp_iu *iu;
+	struct ib_device *dev = target->srp_host->srp_dev->dev;
+	struct srp_iu *iu = (struct srp_iu *) wc->wr_id;
 	int res;
 	u8 opcode;
 
-	iu = target->rx_ring[wc->wr_id];
-
-	dev = target->srp_host->srp_dev->dev;
 	ib_dma_sync_single_for_cpu(dev, iu->dma, target->max_ti_iu_len,
 				   DMA_FROM_DEVICE);
@@ -1078,7 +1060,7 @@ static void srp_handle_recv(struct srp_target_port *target, struct ib_wc *wc)
 	ib_dma_sync_single_for_device(dev, iu->dma, target->max_ti_iu_len,
 				      DMA_FROM_DEVICE);
 
-	res = srp_post_recv(target);
+	res = srp_post_recv(target, iu);
 	if (res != 0)
 		shost_printk(KERN_ERR, target->scsi_host,
 			     PFX "Recv failed with error code %d\n", res);
@@ -1107,6 +1089,7 @@ static void srp_send_completion(struct ib_cq *cq, void *target_ptr)
 {
 	struct srp_target_port *target = target_ptr;
 	struct ib_wc wc;
+	struct srp_iu *iu;
 
 	while (ib_poll_cq(cq, 1, &wc) > 0) {
 		if (wc.status) {
@@ -1117,7 +1100,8 @@ static void srp_send_completion(struct ib_cq *cq, void *target_ptr)
 			break;
 		}
 
-		++target->tx_tail;
+		iu = (struct srp_iu *) wc.wr_id;
+		list_add(&iu->list, &target->free_tx);
 	}
 }
 
@@ -1212,6 +1196,8 @@ static int srp_alloc_iu_bufs(struct srp_target_port *target)
 					      GFP_KERNEL, DMA_TO_DEVICE);
 		if (!target->tx_ring[i])
 			goto err;
+
+		list_add(&target->tx_ring[i]->list, &target->free_tx);
 	}
 
 	return 0;
@@ -1373,7 +1359,8 @@ static int srp_cm_handler(struct ib_cm_id *cm_id, struct ib_cm_event *event)
 			break;
 
 		for (i = 0; i < SRP_RQ_SIZE; i++) {
-			target->status = srp_post_recv(target);
+			struct srp_iu *iu = target->rx_ring[i];
+			target->status = srp_post_recv(target, iu);
 			if (target->status)
 				break;
 		}
@@ -1965,6 +1952,7 @@ static ssize_t srp_create_target(struct device *dev,
 	target->scsi_host  = target_host;
 	target->srp_host   = host;
 
+	INIT_LIST_HEAD(&target->free_tx);
 	INIT_LIST_HEAD(&target->free_reqs);
 	INIT_LIST_HEAD(&target->req_queue);
 	for (i = 0; i < SRP_CMD_SQ_SIZE; ++i) {
@@ -2235,8 +2223,7 @@ static int __init srp_init_module(void)
 {
 	int ret;
 
-	BUILD_BUG_ON_NOT_POWER_OF_2(SRP_SQ_SIZE);
-	BUILD_BUG_ON_NOT_POWER_OF_2(SRP_RQ_SIZE);
+	BUILD_BUG_ON(FIELD_SIZEOF(struct ib_wc, wr_id) < sizeof(void *));
 
 	if (srp_sg_tablesize > 255) {
 		printk(KERN_WARNING PFX "Clamping srp_sg_tablesize to 255\n");
diff --git a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h
index f8b689a..41ecb46 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.h
+++ b/drivers/infiniband/ulp/srp/ib_srp.h
@@ -59,10 +59,8 @@ enum {
 
 	SRP_RQ_SHIFT	= 6,
 	SRP_RQ_SIZE	= 1 << SRP_RQ_SHIFT,
-	SRP_RQ_MASK	= SRP_RQ_SIZE - 1,
 
 	SRP_SQ_SIZE	= SRP_RQ_SIZE,
-	SRP_SQ_MASK	= SRP_SQ_SIZE - 1,
 	SRP_RSP_SQ_SIZE	= 1,
 	SRP_REQ_SQ_SIZE	= SRP_SQ_SIZE - SRP_RSP_SQ_SIZE,
 	SRP_TSK_MGMT_SQ_SIZE = 1,
@@ -144,11 +142,9 @@ struct srp_target_port {
 
 	int			zero_req_lim;
 
-	unsigned		rx_head;
 	struct srp_iu	       *rx_ring[SRP_RQ_SIZE];
 
-	unsigned		tx_head;
-	unsigned		tx_tail;
+	struct list_head	free_tx;
 	struct srp_iu	       *tx_ring[SRP_SQ_SIZE];
 
 	struct list_head	free_reqs;
@@ -168,6 +164,7 @@ struct srp_target_port {
 };
 
 struct srp_iu {
+	struct list_head	list;
 	u64			dma;
 	void		       *buf;
 	size_t			size;
-- 
1.7.3.4

^ permalink raw reply related	[flat|nested] 17+ messages in thread
[parent not found: <1294259716-30706-4-git-send-email-dillowda-1Heg1YXhbW8@public.gmane.org>]
* Re: [PATCH v2 3/8] IB/srp: allow lockless work posting
  [not found] ` <1294259716-30706-4-git-send-email-dillowda-1Heg1YXhbW8@public.gmane.org>
@ 2011-01-09 15:59   ` Bart Van Assche
  [not found]     ` <AANLkTinExFSLHAo97S0Q8tbfUb5Lc5h06kHXyyzj-2Cj-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 17+ messages in thread
From: Bart Van Assche @ 2011-01-09 15:59 UTC (permalink / raw)
  To: David Dillow; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Wed, Jan 5, 2011 at 9:35 PM, David Dillow <dillowda-1Heg1YXhbW8@public.gmane.org> wrote:
> From: Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org>
>
> Only one CPU at a time will own an RX IU, so using the address of the IU
> as the work request cookie allows us to avoid taking a lock. We can
> similarly prepare the TX path for lockless posting by moving the free TX
> IUs to a list. This also removes the requirement that the queue sizes be
> a power of 2.
>
> [ ... ]
>
> @@ -601,9 +601,9 @@ static int srp_reconnect_target(struct srp_target_port *target)
>  		srp_reset_req(target, req);
>  	spin_unlock_irq(target->scsi_host->host_lock);
>
> -	target->rx_head	 = 0;
> -	target->tx_head	 = 0;
> -	target->tx_tail	 = 0;
> +	list_del_init(&target->free_tx);
> +	for (i = 0; i < SRP_SQ_SIZE; ++i)
> +		list_move(&target->tx_ring[i]->list, &target->free_tx);
>
>  	target->qp_in_error = 0;
>  	ret = srp_connect_target(target);

Hello Dave,

Sorry that I hadn't noticed this before: invoking list_del_init() on
&target->free_tx seems strange to me since &target->free_tx is a list
head and not a list element. While list_del_init() will probably work
fine here, using INIT_LIST_HEAD() here seems more appropriate to me.

Bart.
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 17+ messages in thread
[parent not found: <AANLkTinExFSLHAo97S0Q8tbfUb5Lc5h06kHXyyzj-2Cj-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>]
* Re: [PATCH v2 3/8] IB/srp: allow lockless work posting
  [not found]   ` <AANLkTinExFSLHAo97S0Q8tbfUb5Lc5h06kHXyyzj-2Cj-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2011-01-09 16:35     ` David Dillow
  [not found]       ` <1294590920.3071.4.camel-1q1vX8mYZiGLUyTwlgNVppKKF0rrzTr+@public.gmane.org>
  0 siblings, 1 reply; 17+ messages in thread
From: David Dillow @ 2011-01-09 16:35 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Sun, 2011-01-09 at 16:59 +0100, Bart Van Assche wrote:
> On Wed, Jan 5, 2011 at 9:35 PM, David Dillow <dillowda-1Heg1YXhbW8@public.gmane.org> wrote:
> > From: Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org>
> > +	list_del_init(&target->free_tx);
> > +	for (i = 0; i < SRP_SQ_SIZE; ++i)
> > +		list_move(&target->tx_ring[i]->list, &target->free_tx);
> >
> > 	target->qp_in_error = 0;
> > 	ret = srp_connect_target(target);

> Sorry that I hadn't noticed this before: invoking list_del_init() on
> &target->free_tx seems strange to me since &target->free_tx is a list
> head and not a list element. While list_del_init() will probably work
> fine here, using INIT_LIST_HEAD() here seems more appropriate to me.

Yeah, it works, but INIT_LIST_HEAD() would be more appropriate. I'll fix
that when I put this in my repository.

Thanks!
--
Dave Dillow
National Center for Computational Science
Oak Ridge National Laboratory
(865) 241-6602 office

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 17+ messages in thread
[parent not found: <1294590920.3071.4.camel-1q1vX8mYZiGLUyTwlgNVppKKF0rrzTr+@public.gmane.org>]
* Re: [PATCH v2 3/8] IB/srp: allow lockless work posting
  [not found]     ` <1294590920.3071.4.camel-1q1vX8mYZiGLUyTwlgNVppKKF0rrzTr+@public.gmane.org>
@ 2011-01-10 22:37       ` David Dillow
  0 siblings, 0 replies; 17+ messages in thread
From: David Dillow @ 2011-01-10 22:37 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Sun, 2011-01-09 at 11:35 -0500, David Dillow wrote:
> On Sun, 2011-01-09 at 16:59 +0100, Bart Van Assche wrote:
> > On Wed, Jan 5, 2011 at 9:35 PM, David Dillow <dillowda-1Heg1YXhbW8@public.gmane.org> wrote:
> > > From: Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org>
> > >
> > > +	list_del_init(&target->free_tx);
> > > +	for (i = 0; i < SRP_SQ_SIZE; ++i)
> > > +		list_move(&target->tx_ring[i]->list, &target->free_tx);
> > >
> > > 	target->qp_in_error = 0;
> > > 	ret = srp_connect_target(target);
>
> > Sorry that I hadn't noticed this before: invoking list_del_init() on
> > &target->free_tx seems strange to me since &target->free_tx is a list
> > head and not a list element. While list_del_init() will probably work
> > fine here, using INIT_LIST_HEAD() here seems more appropriate to me.
>
> Yeah, it works, but INIT_LIST_HEAD() would be more appropriate. I'll fix
> that when I put this in my repository.

Actually, using INIT_LIST_HEAD() here on a non-empty free_tx corrupts
things when used with list_move(). That's not a problem in the next
patch, where we transition to list_add(), so I made the change in patch
4.

Thinking about it, we could just do an INIT_LIST_HEAD() here on both
free_tx and req_queue, and move to list_add() here. However, I've
already retested the change in patch 4 and pushed to git, so I'm going
to leave this be.
--
Dave Dillow
National Center for Computational Science
Oak Ridge National Laboratory
(865) 241-6602 office

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 17+ messages in thread
* [PATCH v2 8/8] IB/srp: consolidate hot-path variables into cache lines
  2011-01-05 20:35 [PATCH v2 0/8] scaling and bug fixes for 2.6.38 David Dillow
  2011-01-05 20:35 ` [PATCH v2 1/8] IB/srp: allow task management without a previous request David Dillow
  2011-01-05 20:35 ` [PATCH v2 3/8] IB/srp: allow lockless work posting David Dillow
@ 2011-01-05 20:35 ` David Dillow
  [not found] ` <1294259716-30706-1-git-send-email-dillowda-1Heg1YXhbW8@public.gmane.org>
  3 siblings, 0 replies; 17+ messages in thread
From: David Dillow @ 2011-01-05 20:35 UTC (permalink / raw)
  To: linux-rdma; +Cc: linux-scsi, bvanassche

Put the variables accessed together in the hot-path into common
cachelines, and separate them by RW vs RO to avoid false dirtying. We
keep a local copy of the lkey and rkey in the target to avoid traversing
pointers (and associated cache lines) to find them.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: David Dillow <dillowda@ornl.gov>
---
 drivers/infiniband/ulp/srp/ib_srp.c | 12 +++++++-----
 drivers/infiniband/ulp/srp/ib_srp.h | 31 +++++++++++++++++++------------
 2 files changed, 26 insertions(+), 17 deletions(-)

diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index 2f62367..7dcefe4 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -768,7 +768,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_target_port *target,
 		struct srp_direct_buf *buf = (void *) cmd->add_data;
 
 		buf->va  = cpu_to_be64(ib_sg_dma_address(ibdev, scat));
-		buf->key = cpu_to_be32(dev->mr->rkey);
+		buf->key = cpu_to_be32(target->rkey);
 		buf->len = cpu_to_be32(ib_sg_dma_len(ibdev, scat));
 	} else if (srp_map_fmr(target, scat, count, req,
 			       (void *) cmd->add_data)) {
@@ -793,7 +793,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_target_port *target,
 			buf->desc_list[i].va  =
 				cpu_to_be64(ib_sg_dma_address(ibdev, sg));
 			buf->desc_list[i].key =
-				cpu_to_be32(dev->mr->rkey);
+				cpu_to_be32(target->rkey);
 			buf->desc_list[i].len = cpu_to_be32(dma_len);
 			datalen += dma_len;
 		}
@@ -806,7 +806,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_target_port *target,
 		buf->table_desc.va  =
 			cpu_to_be64(req->cmd->dma + sizeof *cmd + sizeof *buf);
 		buf->table_desc.key =
-			cpu_to_be32(target->srp_host->srp_dev->mr->rkey);
+			cpu_to_be32(target->rkey);
 		buf->table_desc.len =
 			cpu_to_be32(count * sizeof (struct srp_direct_buf));
 
@@ -883,7 +883,7 @@ static int srp_post_send(struct srp_target_port *target,
 
 	list.addr   = iu->dma;
 	list.length = len;
-	list.lkey   = target->srp_host->srp_dev->mr->lkey;
+	list.lkey   = target->lkey;
 
 	wr.next       = NULL;
 	wr.wr_id      = (uintptr_t) iu;
@@ -902,7 +902,7 @@ static int srp_post_recv(struct srp_target_port *target, struct srp_iu *iu)
 
 	list.addr   = iu->dma;
 	list.length = iu->size;
-	list.lkey   = target->srp_host->srp_dev->mr->lkey;
+	list.lkey   = target->lkey;
 
 	wr.next     = NULL;
 	wr.wr_id    = (uintptr_t) iu;
@@ -1955,6 +1955,8 @@ static ssize_t srp_create_target(struct device *dev,
 	target->io_class   = SRP_REV16A_IB_IO_CLASS;
 	target->scsi_host  = target_host;
 	target->srp_host   = host;
+	target->lkey	   = host->srp_dev->mr->lkey;
+	target->rkey	   = host->srp_dev->mr->rkey;
 
 	spin_lock_init(&target->lock);
 	INIT_LIST_HEAD(&target->free_tx);
diff --git a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h
index acb435d..9dc6fc3 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.h
+++ b/drivers/infiniband/ulp/srp/ib_srp.h
@@ -117,6 +117,24 @@ struct srp_request {
 };
 
 struct srp_target_port {
+	/* These are RW in the hot path, and commonly used together */
+	struct list_head	free_tx;
+	struct list_head	free_reqs;
+	spinlock_t		lock;
+	s32			req_lim;
+
+	/* These are read-only in the hot path */
+	struct ib_cq	       *send_cq ____cacheline_aligned_in_smp;
+	struct ib_cq	       *recv_cq;
+	struct ib_qp	       *qp;
+	u32			lkey;
+	u32			rkey;
+	enum srp_target_state	state;
+
+	/* Everything above this point is used in the hot path of
+	 * command processing. Try to keep them packed into cachelines.
+	 */
+
 	__be64			id_ext;
 	__be64			ioc_guid;
 	__be64			service_id;
@@ -133,23 +151,13 @@ struct srp_target_port {
 	int			path_query_id;
 
 	struct ib_cm_id	       *cm_id;
-	struct ib_cq	       *recv_cq;
-	struct ib_cq	       *send_cq;
-	struct ib_qp	       *qp;
 
 	int			max_ti_iu_len;
-	s32			req_lim;
 
 	int			zero_req_lim;
 
-	struct srp_iu	       *rx_ring[SRP_RQ_SIZE];
-
-	spinlock_t		lock;
-
-	struct list_head	free_tx;
 	struct srp_iu	       *tx_ring[SRP_SQ_SIZE];
-
-	struct list_head	free_reqs;
+	struct srp_iu	       *rx_ring[SRP_RQ_SIZE];
 	struct srp_request	req_ring[SRP_CMD_SQ_SIZE];
 
 	struct work_struct	work;
@@ -157,7 +165,6 @@ struct srp_target_port {
 	struct list_head	list;
 	struct completion	done;
 	int			status;
-	enum srp_target_state	state;
 	int			qp_in_error;
 
 	struct completion	tsk_mgmt_done;
-- 
1.7.3.4

^ permalink raw reply related	[flat|nested] 17+ messages in thread
* [PATCH v2 2/8] IB/srp: consolidate state change code [not found] ` <1294259716-30706-1-git-send-email-dillowda-1Heg1YXhbW8@public.gmane.org> @ 2011-01-05 20:35 ` David Dillow [not found] ` <1294259716-30706-3-git-send-email-dillowda-1Heg1YXhbW8@public.gmane.org> 2011-01-05 20:35 ` [PATCH v2 4/8] IB/srp: don't move active requests to their own list David Dillow ` (4 subsequent siblings) 5 siblings, 1 reply; 17+ messages in thread From: David Dillow @ 2011-01-05 20:35 UTC (permalink / raw) To: linux-rdma-u79uwXL29TY76Z2rM5mHXA Cc: linux-scsi-u79uwXL29TY76Z2rM5mHXA, bvanassche-HInyCGIudOg From: Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org> Signed-off-by: Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org> [ broken out and small cleanups by David Dillow ] Signed-off-by: David Dillow <dillowda-1Heg1YXhbW8@public.gmane.org> --- drivers/infiniband/ulp/srp/ib_srp.c | 45 ++++++++++++++++++---------------- 1 files changed, 24 insertions(+), 21 deletions(-) diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c index 29429a1..def9e6b 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.c +++ b/drivers/infiniband/ulp/srp/ib_srp.c @@ -441,18 +441,28 @@ static void srp_disconnect_target(struct srp_target_port *target) wait_for_completion(&target->done); } +static bool srp_change_state(struct srp_target_port *target, + enum srp_target_state old, + enum srp_target_state new) +{ + bool changed = false; + + spin_lock_irq(target->scsi_host->host_lock); + if (target->state == old) { + target->state = new; + changed = true; + } + spin_unlock_irq(target->scsi_host->host_lock); + return changed; +} + static void srp_remove_work(struct work_struct *work) { struct srp_target_port *target = container_of(work, struct srp_target_port, work); - spin_lock_irq(target->scsi_host->host_lock); - if (target->state != SRP_TARGET_DEAD) { - spin_unlock_irq(target->scsi_host->host_lock); + if (!srp_change_state(target, SRP_TARGET_DEAD, SRP_TARGET_REMOVED)) 
return; - } - target->state = SRP_TARGET_REMOVED; - spin_unlock_irq(target->scsi_host->host_lock); spin_lock(&target->srp_host->target_lock); list_del(&target->list); @@ -560,13 +570,8 @@ static int srp_reconnect_target(struct srp_target_port *target) struct ib_wc wc; int ret; - spin_lock_irq(target->scsi_host->host_lock); - if (target->state != SRP_TARGET_LIVE) { - spin_unlock_irq(target->scsi_host->host_lock); + if (!srp_change_state(target, SRP_TARGET_LIVE, SRP_TARGET_CONNECTING)) return -EAGAIN; - } - target->state = SRP_TARGET_CONNECTING; - spin_unlock_irq(target->scsi_host->host_lock); srp_disconnect_target(target); /* @@ -605,13 +610,8 @@ static int srp_reconnect_target(struct srp_target_port *target) if (ret) goto err; - spin_lock_irq(target->scsi_host->host_lock); - if (target->state == SRP_TARGET_CONNECTING) { - ret = 0; - target->state = SRP_TARGET_LIVE; - } else + if (!srp_change_state(target, SRP_TARGET_CONNECTING, SRP_TARGET_LIVE)) ret = -EAGAIN; - spin_unlock_irq(target->scsi_host->host_lock); return ret; @@ -621,9 +621,12 @@ err: /* * We couldn't reconnect, so kill our target port off. - * However, we have to defer the real removal because we might - * be in the context of the SCSI error handler now, which - * would deadlock if we call scsi_remove_host(). + * However, we have to defer the real removal because we + * are in the context of the SCSI error handler now, which + * will deadlock if we call scsi_remove_host(). + * + * Schedule our work inside the lock to avoid a race with + * the flush_scheduled_work() in srp_remove_one(). */ spin_lock_irq(target->scsi_host->host_lock); if (target->state == SRP_TARGET_CONNECTING) { -- 1.7.3.4 -- To unsubscribe from this list: send the line "unsubscribe linux-rdma" in the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH v2 2/8] IB/srp: consolidate state change code [not found] ` <1294259716-30706-3-git-send-email-dillowda-1Heg1YXhbW8@public.gmane.org> @ 2011-01-09 15:43 ` Bart Van Assche [not found] ` <AANLkTim8Ru5AerQev33rVxQQ1i6m69WSBRU9YtWdQvqN-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org> 0 siblings, 1 reply; 17+ messages in thread From: Bart Van Assche @ 2011-01-09 15:43 UTC (permalink / raw) To: David Dillow; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA On Wed, Jan 5, 2011 at 9:35 PM, David Dillow <dillowda-1Heg1YXhbW8@public.gmane.org> wrote: > > From: Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org> > > Signed-off-by: Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org> > [ broken out and small cleanups by David Dillow ] > Signed-off-by: David Dillow <dillowda-1Heg1YXhbW8@public.gmane.org> > --- [ ... ] Hello David, The patch itself looks fine to me, but it seems like the patch description fell off? Bart.
* Re: [PATCH v2 2/8] IB/srp: consolidate state change code [not found] ` <AANLkTim8Ru5AerQev33rVxQQ1i6m69WSBRU9YtWdQvqN-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org> @ 2011-01-09 16:33 ` David Dillow 0 siblings, 0 replies; 17+ messages in thread From: David Dillow @ 2011-01-09 16:33 UTC (permalink / raw) To: Bart Van Assche; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA On Sun, 2011-01-09 at 16:43 +0100, Bart Van Assche wrote: > On Wed, Jan 5, 2011 at 9:35 PM, David Dillow <dillowda-1Heg1YXhbW8@public.gmane.org> wrote: > > > > From: Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org> > > > > Signed-off-by: Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org> > > [ broken out and small cleanups by David Dillow ] > > Signed-off-by: David Dillow <dillowda-1Heg1YXhbW8@public.gmane.org> > > --- > [ ... ] > > Hello David, > > The patch itself looks fine to me, but it seems like the patch > description fell off ? The old description was just a credit for your work, now covered by the author field. The one-liner pretty much covers it otherwise. -- Dave Dillow National Center for Computational Science Oak Ridge National Laboratory (865) 241-6602 office
* [PATCH v2 4/8] IB/srp: don't move active requests to their own list [not found] ` <1294259716-30706-1-git-send-email-dillowda-1Heg1YXhbW8@public.gmane.org> 2011-01-05 20:35 ` [PATCH v2 2/8] IB/srp: consolidate state change code David Dillow @ 2011-01-05 20:35 ` David Dillow 2011-01-05 20:35 ` [PATCH v2 5/8] IB/srp: reduce local coverage for command submission and EH David Dillow ` (3 subsequent siblings) 5 siblings, 0 replies; 17+ messages in thread From: David Dillow @ 2011-01-05 20:35 UTC (permalink / raw) To: linux-rdma-u79uwXL29TY76Z2rM5mHXA Cc: linux-scsi-u79uwXL29TY76Z2rM5mHXA, bvanassche-HInyCGIudOg From: Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org> We use req->scmnd != NULL to indicate an active request, so there's no need to keep a separate list for them. We can afford the array iteration during error handling, and dropping it gives us one less item that needs lock protection. Signed-off-by: Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org> [ broken out and small cleanups by David Dillow ] Signed-off-by: David Dillow <dillowda-1Heg1YXhbW8@public.gmane.org> --- drivers/infiniband/ulp/srp/ib_srp.c | 21 ++++++++++++--------- drivers/infiniband/ulp/srp/ib_srp.h | 1 - 2 files changed, 12 insertions(+), 10 deletions(-) diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c index aa78d26..6f36dbe 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.c +++ b/drivers/infiniband/ulp/srp/ib_srp.c @@ -553,7 +553,7 @@ static void srp_remove_req(struct srp_target_port *target, struct srp_request *r { srp_unmap_data(req->scmnd, target, req); req->scmnd = NULL; - list_move_tail(&req->list, &target->free_reqs); + list_add_tail(&req->list, &target->free_reqs); } static void srp_reset_req(struct srp_target_port *target, struct srp_request *req) @@ -566,7 +566,6 @@ static void srp_reset_req(struct srp_target_port *target, struct srp_request *re static int srp_reconnect_target(struct srp_target_port *target) { struct ib_qp_attr 
qp_attr; - struct srp_request *req, *tmp; struct ib_wc wc; int i, ret; @@ -597,13 +596,16 @@ static int srp_reconnect_target(struct srp_target_port *target) ; /* nothing */ spin_lock_irq(target->scsi_host->host_lock); - list_for_each_entry_safe(req, tmp, &target->req_queue, list) - srp_reset_req(target, req); + for (i = 0; i < SRP_CMD_SQ_SIZE; ++i) { + struct srp_request *req = &target->req_ring[i]; + if (req->scmnd) + srp_reset_req(target, req); + } spin_unlock_irq(target->scsi_host->host_lock); list_del_init(&target->free_tx); for (i = 0; i < SRP_SQ_SIZE; ++i) - list_move(&target->tx_ring[i]->list, &target->free_tx); + list_add(&target->tx_ring[i]->list, &target->free_tx); target->qp_in_error = 0; ret = srp_connect_target(target); @@ -1165,7 +1167,7 @@ static int srp_queuecommand_lck(struct scsi_cmnd *scmnd, goto err_unmap; } - list_move_tail(&req->list, &target->req_queue); + list_del(&req->list); return 0; @@ -1504,7 +1506,7 @@ static int srp_abort(struct scsi_cmnd *scmnd) static int srp_reset_device(struct scsi_cmnd *scmnd) { struct srp_target_port *target = host_to_target(scmnd->device->host); - struct srp_request *req, *tmp; + int i; shost_printk(KERN_ERR, target->scsi_host, "SRP reset_device called\n"); @@ -1518,9 +1520,11 @@ static int srp_reset_device(struct scsi_cmnd *scmnd) spin_lock_irq(target->scsi_host->host_lock); - list_for_each_entry_safe(req, tmp, &target->req_queue, list) + for (i = 0; i < SRP_CMD_SQ_SIZE; ++i) { + struct srp_request *req = &target->req_ring[i]; if (req->scmnd && req->scmnd->device == scmnd->device) srp_reset_req(target, req); + } spin_unlock_irq(target->scsi_host->host_lock); @@ -1954,7 +1958,6 @@ static ssize_t srp_create_target(struct device *dev, INIT_LIST_HEAD(&target->free_tx); INIT_LIST_HEAD(&target->free_reqs); - INIT_LIST_HEAD(&target->req_queue); for (i = 0; i < SRP_CMD_SQ_SIZE; ++i) { target->req_ring[i].index = i; list_add_tail(&target->req_ring[i].list, &target->free_reqs); diff --git 
a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h index 41ecb46..924d8e9 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.h +++ b/drivers/infiniband/ulp/srp/ib_srp.h @@ -148,7 +148,6 @@ struct srp_target_port { struct srp_iu *tx_ring[SRP_SQ_SIZE]; struct list_head free_reqs; - struct list_head req_queue; struct srp_request req_ring[SRP_CMD_SQ_SIZE]; struct work_struct work; -- 1.7.3.4 -- To unsubscribe from this list: send the line "unsubscribe linux-rdma" in the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH v2 5/8] IB/srp: reduce local coverage for command submission and EH [not found] ` <1294259716-30706-1-git-send-email-dillowda-1Heg1YXhbW8@public.gmane.org> 2011-01-05 20:35 ` [PATCH v2 2/8] IB/srp: consolidate state change code David Dillow 2011-01-05 20:35 ` [PATCH v2 4/8] IB/srp: don't move active requests to their own list David Dillow @ 2011-01-05 20:35 ` David Dillow 2011-01-05 20:35 ` [PATCH v2 6/8] IB/srp: reduce lock coverage of command completion David Dillow ` (2 subsequent siblings) 5 siblings, 0 replies; 17+ messages in thread From: David Dillow @ 2011-01-05 20:35 UTC (permalink / raw) To: linux-rdma-u79uwXL29TY76Z2rM5mHXA Cc: linux-scsi-u79uwXL29TY76Z2rM5mHXA, bvanassche-HInyCGIudOg From: Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org> We only need locks to protect our lists and number of credits available. By pre-consuming the credit for the request, we can reduce our lock coverage to just those areas. If we don't actually send the request, we'll need to put the credit back into the pool. 
Signed-off-by: Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org> [ broken out and small cleanups by David Dillow ] Signed-off-by: David Dillow <dillowda-1Heg1YXhbW8@public.gmane.org> --- drivers/infiniband/ulp/srp/ib_srp.c | 124 +++++++++++++++++++---------------- drivers/infiniband/ulp/srp/ib_srp.h | 1 - 2 files changed, 67 insertions(+), 58 deletions(-) diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c index 6f36dbe..d320dd4 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.c +++ b/drivers/infiniband/ulp/srp/ib_srp.c @@ -818,9 +818,24 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_target_port *target, } /* + * Return an IU and possible credit to the free pool + */ +static void srp_put_tx_iu(struct srp_target_port *target, struct srp_iu *iu, + enum srp_iu_type iu_type) +{ + unsigned long flags; + + spin_lock_irqsave(target->scsi_host->host_lock, flags); + list_add(&iu->list, &target->free_tx); + if (iu_type != SRP_IU_RSP) + ++target->req_lim; + spin_unlock_irqrestore(target->scsi_host->host_lock, flags); +} + +/* * Must be called with target->scsi_host->host_lock held to protect - * req_lim and free_tx. Lock cannot be dropped between call here and - * call to __srp_post_send(). + * req_lim and free_tx. If IU is not sent, it must be returned using + * srp_put_tx_iu(). 
* * Note: * An upper limit for the number of allocated information units for each @@ -843,26 +858,25 @@ static struct srp_iu *__srp_get_tx_iu(struct srp_target_port *target, return NULL; /* Initiator responses to target requests do not consume credits */ - if (target->req_lim <= rsv && iu_type != SRP_IU_RSP) { - ++target->zero_req_lim; - return NULL; + if (iu_type != SRP_IU_RSP) { + if (target->req_lim <= rsv) { + ++target->zero_req_lim; + return NULL; + } + + --target->req_lim; } iu = list_first_entry(&target->free_tx, struct srp_iu, list); - iu->type = iu_type; + list_del(&iu->list); return iu; } -/* - * Must be called with target->scsi_host->host_lock held to protect - * req_lim and free_tx. - */ -static int __srp_post_send(struct srp_target_port *target, - struct srp_iu *iu, int len) +static int srp_post_send(struct srp_target_port *target, + struct srp_iu *iu, int len) { struct ib_sge list; struct ib_send_wr wr, *bad_wr; - int ret = 0; list.addr = iu->dma; list.length = len; @@ -875,15 +889,7 @@ static int __srp_post_send(struct srp_target_port *target, wr.opcode = IB_WR_SEND; wr.send_flags = IB_SEND_SIGNALED; - ret = ib_post_send(target->qp, &wr, &bad_wr); - - if (!ret) { - list_del(&iu->list); - if (iu->type != SRP_IU_RSP) - --target->req_lim; - } - - return ret; + return ib_post_send(target->qp, &wr, &bad_wr); } static int srp_post_recv(struct srp_target_port *target, struct srp_iu *iu) @@ -953,34 +959,33 @@ static void srp_process_rsp(struct srp_target_port *target, struct srp_rsp *rsp) static int srp_response_common(struct srp_target_port *target, s32 req_delta, void *rsp, int len) { - struct ib_device *dev; + struct ib_device *dev = target->srp_host->srp_dev->dev; unsigned long flags; struct srp_iu *iu; - int err = 1; - - dev = target->srp_host->srp_dev->dev; + int err; spin_lock_irqsave(target->scsi_host->host_lock, flags); target->req_lim += req_delta; - iu = __srp_get_tx_iu(target, SRP_IU_RSP); + spin_unlock_irqrestore(target->scsi_host->host_lock, 
flags); + if (!iu) { shost_printk(KERN_ERR, target->scsi_host, PFX "no IU available to send response\n"); - goto out; + return 1; } ib_dma_sync_single_for_cpu(dev, iu->dma, len, DMA_TO_DEVICE); memcpy(iu->buf, rsp, len); ib_dma_sync_single_for_device(dev, iu->dma, len, DMA_TO_DEVICE); - err = __srp_post_send(target, iu, len); - if (err) + err = srp_post_send(target, iu, len); + if (err) { shost_printk(KERN_ERR, target->scsi_host, PFX "unable to post response: %d\n", err); + srp_put_tx_iu(target, iu, SRP_IU_RSP); + } -out: - spin_unlock_irqrestore(target->scsi_host->host_lock, flags); return err; } @@ -1107,14 +1112,14 @@ static void srp_send_completion(struct ib_cq *cq, void *target_ptr) } } -static int srp_queuecommand_lck(struct scsi_cmnd *scmnd, - void (*done)(struct scsi_cmnd *)) +static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd) { - struct srp_target_port *target = host_to_target(scmnd->device->host); + struct srp_target_port *target = host_to_target(shost); struct srp_request *req; struct srp_iu *iu; struct srp_cmd *cmd; struct ib_device *dev; + unsigned long flags; int len; if (target->state == SRP_TARGET_CONNECTING) @@ -1123,11 +1128,19 @@ static int srp_queuecommand_lck(struct scsi_cmnd *scmnd, if (target->state == SRP_TARGET_DEAD || target->state == SRP_TARGET_REMOVED) { scmnd->result = DID_BAD_TARGET << 16; - done(scmnd); + scmnd->scsi_done(scmnd); return 0; } + spin_lock_irqsave(shost->host_lock, flags); iu = __srp_get_tx_iu(target, SRP_IU_CMD); + if (iu) { + req = list_first_entry(&target->free_reqs, struct srp_request, + list); + list_del(&req->list); + } + spin_unlock_irqrestore(shost->host_lock, flags); + if (!iu) goto err; @@ -1135,9 +1148,6 @@ static int srp_queuecommand_lck(struct scsi_cmnd *scmnd, ib_dma_sync_single_for_cpu(dev, iu->dma, srp_max_iu_len, DMA_TO_DEVICE); - req = list_first_entry(&target->free_reqs, struct srp_request, list); - - scmnd->scsi_done = done; scmnd->result = 0; scmnd->host_scribble = (void *) 
req; @@ -1156,30 +1166,33 @@ static int srp_queuecommand_lck(struct scsi_cmnd *scmnd, if (len < 0) { shost_printk(KERN_ERR, target->scsi_host, PFX "Failed to map data\n"); - goto err; + goto err_iu; } ib_dma_sync_single_for_device(dev, iu->dma, srp_max_iu_len, DMA_TO_DEVICE); - if (__srp_post_send(target, iu, len)) { + if (srp_post_send(target, iu, len)) { shost_printk(KERN_ERR, target->scsi_host, PFX "Send failed\n"); goto err_unmap; } - list_del(&req->list); - return 0; err_unmap: srp_unmap_data(scmnd, target, req); +err_iu: + srp_put_tx_iu(target, iu, SRP_IU_CMD); + + spin_lock_irqsave(shost->host_lock, flags); + list_add(&req->list, &target->free_reqs); + spin_unlock_irqrestore(shost->host_lock, flags); + err: return SCSI_MLQUEUE_HOST_BUSY; } -static DEF_SCSI_QCMD(srp_queuecommand) - static int srp_alloc_iu_bufs(struct srp_target_port *target) { int i; @@ -1433,17 +1446,18 @@ static int srp_send_tsk_mgmt(struct srp_target_port *target, struct srp_iu *iu; struct srp_tsk_mgmt *tsk_mgmt; - spin_lock_irq(target->scsi_host->host_lock); - if (target->state == SRP_TARGET_DEAD || target->state == SRP_TARGET_REMOVED) - goto out; + return -1; init_completion(&target->tsk_mgmt_done); + spin_lock_irq(target->scsi_host->host_lock); iu = __srp_get_tx_iu(target, SRP_IU_TSK_MGMT); + spin_unlock_irq(target->scsi_host->host_lock); + if (!iu) - goto out; + return -1; ib_dma_sync_single_for_cpu(dev, iu->dma, sizeof *tsk_mgmt, DMA_TO_DEVICE); @@ -1458,20 +1472,16 @@ static int srp_send_tsk_mgmt(struct srp_target_port *target, ib_dma_sync_single_for_device(dev, iu->dma, sizeof *tsk_mgmt, DMA_TO_DEVICE); - if (__srp_post_send(target, iu, sizeof *tsk_mgmt)) - goto out; - - spin_unlock_irq(target->scsi_host->host_lock); + if (srp_post_send(target, iu, sizeof *tsk_mgmt)) { + srp_put_tx_iu(target, iu, SRP_IU_TSK_MGMT); + return -1; + } if (!wait_for_completion_timeout(&target->tsk_mgmt_done, msecs_to_jiffies(SRP_ABORT_TIMEOUT_MS))) return -1; return 0; - -out: - 
spin_unlock_irq(target->scsi_host->host_lock); - return -1; } static int srp_abort(struct scsi_cmnd *scmnd) diff --git a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h index 924d8e9..81686ee 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.h +++ b/drivers/infiniband/ulp/srp/ib_srp.h @@ -168,7 +168,6 @@ struct srp_iu { void *buf; size_t size; enum dma_data_direction direction; - enum srp_iu_type type; }; #endif /* IB_SRP_H */ -- 1.7.3.4 -- To unsubscribe from this list: send the line "unsubscribe linux-rdma" in the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH v2 6/8] IB/srp: reduce lock coverage of command completion [not found] ` <1294259716-30706-1-git-send-email-dillowda-1Heg1YXhbW8@public.gmane.org> ` (2 preceding siblings ...) 2011-01-05 20:35 ` [PATCH v2 5/8] IB/srp: reduce local coverage for command submission and EH David Dillow @ 2011-01-05 20:35 ` David Dillow 2011-01-05 20:35 ` [PATCH v2 7/8] IB/srp: stop sharing the host lock with SCSI David Dillow 2011-01-09 16:01 ` [PATCH v2 0/8] scaling and bug fixes for 2.6.38 Bart Van Assche 5 siblings, 0 replies; 17+ messages in thread From: David Dillow @ 2011-01-05 20:35 UTC (permalink / raw) To: linux-rdma-u79uwXL29TY76Z2rM5mHXA Cc: linux-scsi-u79uwXL29TY76Z2rM5mHXA, bvanassche-HInyCGIudOg From: Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org> We only need the lock to cover list and credit manipulations, so push those into srp_remove_req() and update the call chains. We reorder the request removal and command completion in srp_process_rsp() to avoid the SCSI mid-layer sending another command before we've released our request and added any credits returned by the target. This prevents us from returning HOST_BUSY unnecessarily.
Signed-off-by: Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org> [ broken out, small cleanups, and modified to avoid potential extraneous HOST_BUSY returns by David Dillow ] Signed-off-by: David Dillow <dillowda-1Heg1YXhbW8@public.gmane.org> --- drivers/infiniband/ulp/srp/ib_srp.c | 37 +++++++++++++--------------------- 1 files changed, 14 insertions(+), 23 deletions(-) diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c index d320dd4..15c3ae3 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.c +++ b/drivers/infiniband/ulp/srp/ib_srp.c @@ -549,18 +549,24 @@ static void srp_unmap_data(struct scsi_cmnd *scmnd, scsi_sg_count(scmnd), scmnd->sc_data_direction); } -static void srp_remove_req(struct srp_target_port *target, struct srp_request *req) +static void srp_remove_req(struct srp_target_port *target, + struct srp_request *req, s32 req_lim_delta) { + unsigned long flags; + srp_unmap_data(req->scmnd, target, req); + spin_lock_irqsave(target->scsi_host->host_lock, flags); + target->req_lim += req_lim_delta; req->scmnd = NULL; list_add_tail(&req->list, &target->free_reqs); + spin_unlock_irqrestore(target->scsi_host->host_lock, flags); } static void srp_reset_req(struct srp_target_port *target, struct srp_request *req) { req->scmnd->result = DID_RESET << 16; req->scmnd->scsi_done(req->scmnd); - srp_remove_req(target, req); + srp_remove_req(target, req, 0); } static int srp_reconnect_target(struct srp_target_port *target) @@ -595,13 +601,11 @@ static int srp_reconnect_target(struct srp_target_port *target) while (ib_poll_cq(target->send_cq, 1, &wc) > 0) ; /* nothing */ - spin_lock_irq(target->scsi_host->host_lock); for (i = 0; i < SRP_CMD_SQ_SIZE; ++i) { struct srp_request *req = &target->req_ring[i]; if (req->scmnd) srp_reset_req(target, req); } - spin_unlock_irq(target->scsi_host->host_lock); list_del_init(&target->free_tx); for (i = 0; i < SRP_SQ_SIZE; ++i) @@ -914,15 +918,12 @@ static void srp_process_rsp(struct srp_target_port 
*target, struct srp_rsp *rsp) struct srp_request *req; struct scsi_cmnd *scmnd; unsigned long flags; - s32 delta; - - delta = (s32) be32_to_cpu(rsp->req_lim_delta); - - spin_lock_irqsave(target->scsi_host->host_lock, flags); - - target->req_lim += delta; if (unlikely(rsp->tag & SRP_TAG_TSK_MGMT)) { + spin_lock_irqsave(target->scsi_host->host_lock, flags); + target->req_lim += be32_to_cpu(rsp->req_lim_delta); + spin_unlock_irqrestore(target->scsi_host->host_lock, flags); + target->tsk_mgmt_status = -1; if (be32_to_cpu(rsp->resp_data_len) >= 4) target->tsk_mgmt_status = rsp->data[3]; @@ -948,12 +949,10 @@ static void srp_process_rsp(struct srp_target_port *target, struct srp_rsp *rsp) else if (rsp->flags & (SRP_RSP_FLAG_DIOVER | SRP_RSP_FLAG_DIUNDER)) scsi_set_resid(scmnd, be32_to_cpu(rsp->data_in_res_cnt)); + srp_remove_req(target, req, be32_to_cpu(rsp->req_lim_delta)); scmnd->host_scribble = NULL; scmnd->scsi_done(scmnd); - srp_remove_req(target, req); } - - spin_unlock_irqrestore(target->scsi_host->host_lock, flags); } static int srp_response_common(struct srp_target_port *target, s32 req_delta, @@ -1498,18 +1497,14 @@ static int srp_abort(struct scsi_cmnd *scmnd) SRP_TSK_ABORT_TASK)) return FAILED; - spin_lock_irq(target->scsi_host->host_lock); - if (req->scmnd) { if (!target->tsk_mgmt_status) { - srp_remove_req(target, req); + srp_remove_req(target, req, 0); scmnd->result = DID_ABORT << 16; } else ret = FAILED; } - spin_unlock_irq(target->scsi_host->host_lock); - return ret; } @@ -1528,16 +1523,12 @@ static int srp_reset_device(struct scsi_cmnd *scmnd) if (target->tsk_mgmt_status) return FAILED; - spin_lock_irq(target->scsi_host->host_lock); - for (i = 0; i < SRP_CMD_SQ_SIZE; ++i) { struct srp_request *req = &target->req_ring[i]; if (req->scmnd && req->scmnd->device == scmnd->device) srp_reset_req(target, req); } - spin_unlock_irq(target->scsi_host->host_lock); - return SUCCESS; } -- 1.7.3.4 -- To unsubscribe from this list: send the line "unsubscribe 
linux-rdma" in the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH v2 7/8] IB/srp: stop sharing the host lock with SCSI [not found] ` <1294259716-30706-1-git-send-email-dillowda-1Heg1YXhbW8@public.gmane.org> ` (3 preceding siblings ...) 2011-01-05 20:35 ` [PATCH v2 6/8] IB/srp: reduce lock coverage of command completion David Dillow @ 2011-01-05 20:35 ` David Dillow 2011-01-09 16:01 ` [PATCH v2 0/8] scaling and bug fixes for 2.6.38 Bart Van Assche 5 siblings, 0 replies; 17+ messages in thread From: David Dillow @ 2011-01-05 20:35 UTC (permalink / raw) To: linux-rdma-u79uwXL29TY76Z2rM5mHXA Cc: linux-scsi-u79uwXL29TY76Z2rM5mHXA, bvanassche-HInyCGIudOg From: Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org> We don't need protection against the SCSI stack, so use our own lock to allow parallel progress on separate CPUs. Signed-off-by: Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org> [ broken out and small cleanups by David Dillow ] Signed-off-by: David Dillow <dillowda-1Heg1YXhbW8@public.gmane.org> --- drivers/infiniband/ulp/srp/ib_srp.c | 46 +++++++++++++++++----------------- drivers/infiniband/ulp/srp/ib_srp.h | 2 + 2 files changed, 25 insertions(+), 23 deletions(-) diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c index 15c3ae3..2f62367 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.c +++ b/drivers/infiniband/ulp/srp/ib_srp.c @@ -447,12 +447,12 @@ static bool srp_change_state(struct srp_target_port *target, { bool changed = false; - spin_lock_irq(target->scsi_host->host_lock); + spin_lock_irq(&target->lock); if (target->state == old) { target->state = new; changed = true; } - spin_unlock_irq(target->scsi_host->host_lock); + spin_unlock_irq(&target->lock); return changed; } @@ -555,11 +555,11 @@ static void srp_remove_req(struct srp_target_port *target, unsigned long flags; srp_unmap_data(req->scmnd, target, req); - spin_lock_irqsave(target->scsi_host->host_lock, flags); + spin_lock_irqsave(&target->lock, flags); target->req_lim += req_lim_delta; req->scmnd = NULL; 
list_add_tail(&req->list, &target->free_reqs); - spin_unlock_irqrestore(target->scsi_host->host_lock, flags); + spin_unlock_irqrestore(&target->lock, flags); } static void srp_reset_req(struct srp_target_port *target, struct srp_request *req) @@ -634,13 +634,13 @@ err: * Schedule our work inside the lock to avoid a race with * the flush_scheduled_work() in srp_remove_one(). */ - spin_lock_irq(target->scsi_host->host_lock); + spin_lock_irq(&target->lock); if (target->state == SRP_TARGET_CONNECTING) { target->state = SRP_TARGET_DEAD; INIT_WORK(&target->work, srp_remove_work); schedule_work(&target->work); } - spin_unlock_irq(target->scsi_host->host_lock); + spin_unlock_irq(&target->lock); return ret; } @@ -829,17 +829,16 @@ static void srp_put_tx_iu(struct srp_target_port *target, struct srp_iu *iu, { unsigned long flags; - spin_lock_irqsave(target->scsi_host->host_lock, flags); + spin_lock_irqsave(&target->lock, flags); list_add(&iu->list, &target->free_tx); if (iu_type != SRP_IU_RSP) ++target->req_lim; - spin_unlock_irqrestore(target->scsi_host->host_lock, flags); + spin_unlock_irqrestore(&target->lock, flags); } /* - * Must be called with target->scsi_host->host_lock held to protect - * req_lim and free_tx. If IU is not sent, it must be returned using - * srp_put_tx_iu(). + * Must be called with target->lock held to protect req_lim and free_tx. + * If IU is not sent, it must be returned using srp_put_tx_iu(). 
* * Note: * An upper limit for the number of allocated information units for each @@ -920,9 +919,9 @@ static void srp_process_rsp(struct srp_target_port *target, struct srp_rsp *rsp) unsigned long flags; if (unlikely(rsp->tag & SRP_TAG_TSK_MGMT)) { - spin_lock_irqsave(target->scsi_host->host_lock, flags); + spin_lock_irqsave(&target->lock, flags); target->req_lim += be32_to_cpu(rsp->req_lim_delta); - spin_unlock_irqrestore(target->scsi_host->host_lock, flags); + spin_unlock_irqrestore(&target->lock, flags); target->tsk_mgmt_status = -1; if (be32_to_cpu(rsp->resp_data_len) >= 4) @@ -963,10 +962,10 @@ static int srp_response_common(struct srp_target_port *target, s32 req_delta, struct srp_iu *iu; int err; - spin_lock_irqsave(target->scsi_host->host_lock, flags); + spin_lock_irqsave(&target->lock, flags); target->req_lim += req_delta; iu = __srp_get_tx_iu(target, SRP_IU_RSP); - spin_unlock_irqrestore(target->scsi_host->host_lock, flags); + spin_unlock_irqrestore(&target->lock, flags); if (!iu) { shost_printk(KERN_ERR, target->scsi_host, PFX @@ -1131,14 +1130,14 @@ static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd) return 0; } - spin_lock_irqsave(shost->host_lock, flags); + spin_lock_irqsave(&target->lock, flags); iu = __srp_get_tx_iu(target, SRP_IU_CMD); if (iu) { req = list_first_entry(&target->free_reqs, struct srp_request, list); list_del(&req->list); } - spin_unlock_irqrestore(shost->host_lock, flags); + spin_unlock_irqrestore(&target->lock, flags); if (!iu) goto err; @@ -1184,9 +1183,9 @@ err_unmap: err_iu: srp_put_tx_iu(target, iu, SRP_IU_CMD); - spin_lock_irqsave(shost->host_lock, flags); + spin_lock_irqsave(&target->lock, flags); list_add(&req->list, &target->free_reqs); - spin_unlock_irqrestore(shost->host_lock, flags); + spin_unlock_irqrestore(&target->lock, flags); err: return SCSI_MLQUEUE_HOST_BUSY; @@ -1451,9 +1450,9 @@ static int srp_send_tsk_mgmt(struct srp_target_port *target, init_completion(&target->tsk_mgmt_done); - 
spin_lock_irq(target->scsi_host->host_lock); + spin_lock_irq(&target->lock); iu = __srp_get_tx_iu(target, SRP_IU_TSK_MGMT); - spin_unlock_irq(target->scsi_host->host_lock); + spin_unlock_irq(&target->lock); if (!iu) return -1; @@ -1957,6 +1956,7 @@ static ssize_t srp_create_target(struct device *dev, target->scsi_host = target_host; target->srp_host = host; + spin_lock_init(&target->lock); INIT_LIST_HEAD(&target->free_tx); INIT_LIST_HEAD(&target->free_reqs); for (i = 0; i < SRP_CMD_SQ_SIZE; ++i) { @@ -2186,9 +2186,9 @@ static void srp_remove_one(struct ib_device *device) */ spin_lock(&host->target_lock); list_for_each_entry(target, &host->target_list, list) { - spin_lock_irq(target->scsi_host->host_lock); + spin_lock_irq(&target->lock); target->state = SRP_TARGET_REMOVED; - spin_unlock_irq(target->scsi_host->host_lock); + spin_unlock_irq(&target->lock); } spin_unlock(&host->target_lock); diff --git a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h index 81686ee..acb435d 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.h +++ b/drivers/infiniband/ulp/srp/ib_srp.h @@ -144,6 +144,8 @@ struct srp_target_port { struct srp_iu *rx_ring[SRP_RQ_SIZE]; + spinlock_t lock; + struct list_head free_tx; struct srp_iu *tx_ring[SRP_SQ_SIZE]; -- 1.7.3.4 -- To unsubscribe from this list: send the line "unsubscribe linux-rdma" in the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH v2 0/8] scaling and bug fixes for 2.6.38
  [not found] ` <1294259716-30706-1-git-send-email-dillowda-1Heg1YXhbW8@public.gmane.org>
                   ` (4 preceding siblings ...)
  2011-01-05 20:35 ` [PATCH v2 7/8] IB/srp: stop sharing the host lock with SCSI David Dillow
@ 2011-01-09 16:01 ` Bart Van Assche
  [not found]   ` <AANLkTikKQRC=VrdqqNW6Swu6i1XPG5baoLvmvK_oZNVs-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  5 siblings, 1 reply; 17+ messages in thread
From: Bart Van Assche @ 2011-01-09 16:01 UTC (permalink / raw)
  To: David Dillow; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Wed, Jan 5, 2011 at 9:35 PM, David Dillow <dillowda-1Heg1YXhbW8@public.gmane.org> wrote:
> Updated patches to reflect the list's review, and better reflect Bart's
> role in preparing them. Once he's had a chance to look over them again,
> I'll push these to my repo and send a pull request.

Hello Dave,

Thanks for all the work you have done on this patch series. All tests
I have run on these patches ran fine so far. The only general remark I
have is that, as far as I know, Roland has done reviewing work on this
patch series too but has not yet been credited for that work?

Bart.

^ permalink raw reply	[flat|nested] 17+ messages in thread
[parent not found: <AANLkTikKQRC=VrdqqNW6Swu6i1XPG5baoLvmvK_oZNVs-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>]
* Re: [PATCH v2 0/8] scaling and bug fixes for 2.6.38
  [not found]   ` <AANLkTikKQRC=VrdqqNW6Swu6i1XPG5baoLvmvK_oZNVs-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2011-01-09 16:38     ` David Dillow
  2011-01-10  5:59     ` Roland Dreier
  1 sibling, 0 replies; 17+ messages in thread
From: David Dillow @ 2011-01-09 16:38 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Sun, 2011-01-09 at 17:01 +0100, Bart Van Assche wrote:
> On Wed, Jan 5, 2011 at 9:35 PM, David Dillow <dillowda-1Heg1YXhbW8@public.gmane.org> wrote:
> > Updated patches to reflect the list's review, and better reflect Bart's
> > role in preparing them. Once he's had a chance to look over them again,
> > I'll push these to my repo and send a pull request.
>
> Hello Dave,
>
> Thanks for all the work you have done on this patch series. All tests
> I have run on these patches ran fine so far. The only general remark I
> have is that as far as I know Roland has done reviewing work on this
> patch series too but has not been credited yet for that work ?

I expect that to be covered when he adds his Signed-off-by to the series
when he pulls my tree. Roland, if you'd rather I put in the Reviewed-by
tags, please let me know. I'll expect to send the pull request late
Wednesday, or earlier if I hear back.

Thanks!
--
Dave Dillow
National Center for Computational Science
Oak Ridge National Laboratory
(865) 241-6602 office

^ permalink raw reply	[flat|nested] 17+ messages in thread
* Re: [PATCH v2 0/8] scaling and bug fixes for 2.6.38
  [not found]   ` <AANLkTikKQRC=VrdqqNW6Swu6i1XPG5baoLvmvK_oZNVs-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2011-01-09 16:38     ` David Dillow
@ 2011-01-10  5:59     ` Roland Dreier
  1 sibling, 0 replies; 17+ messages in thread
From: Roland Dreier @ 2011-01-10 5:59 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: David Dillow, linux-rdma-u79uwXL29TY76Z2rM5mHXA

> Thanks for all the work you have done on this patch series. All tests
> I have run on these patches ran fine so far. The only general remark I
> have is that as far as I know Roland has done reviewing work on this
> patch series too but has not been credited yet for that work ?

I really just read the descriptions and barely skimmed over the code,
so there is no need to give me any credit at all. As a general rule I
don't add Reviewed-by tags unless someone explicitly replies with such
a tag or posts a really in-depth review of a patch. Similarly, I don't
add Acked-by unless there is such a tag in a reply or at least a very
explicit "this looks good to me."

 - R.

^ permalink raw reply	[flat|nested] 17+ messages in thread
end of thread, other threads:[~2011-01-10 22:37 UTC | newest]
Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-01-05 20:35 [PATCH v2 0/8] scaling and bug fixes for 2.6.38 David Dillow
2011-01-05 20:35 ` [PATCH v2 1/8] IB/srp: allow task management without a previous request David Dillow
2011-01-05 20:35 ` [PATCH v2 3/8] IB/srp: allow lockless work posting David Dillow
[not found] ` <1294259716-30706-4-git-send-email-dillowda-1Heg1YXhbW8@public.gmane.org>
2011-01-09 15:59 ` Bart Van Assche
[not found] ` <AANLkTinExFSLHAo97S0Q8tbfUb5Lc5h06kHXyyzj-2Cj-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2011-01-09 16:35 ` David Dillow
[not found] ` <1294590920.3071.4.camel-1q1vX8mYZiGLUyTwlgNVppKKF0rrzTr+@public.gmane.org>
2011-01-10 22:37 ` David Dillow
2011-01-05 20:35 ` [PATCH v2 8/8] IB/srp: consolidate hot-path variables into cache lines David Dillow
[not found] ` <1294259716-30706-1-git-send-email-dillowda-1Heg1YXhbW8@public.gmane.org>
2011-01-05 20:35 ` [PATCH v2 2/8] IB/srp: consolidate state change code David Dillow
[not found] ` <1294259716-30706-3-git-send-email-dillowda-1Heg1YXhbW8@public.gmane.org>
2011-01-09 15:43 ` Bart Van Assche
[not found] ` <AANLkTim8Ru5AerQev33rVxQQ1i6m69WSBRU9YtWdQvqN-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2011-01-09 16:33 ` David Dillow
2011-01-05 20:35 ` [PATCH v2 4/8] IB/srp: don't move active requests to their own list David Dillow
2011-01-05 20:35 ` [PATCH v2 5/8] IB/srp: reduce local coverage for command submission and EH David Dillow
2011-01-05 20:35 ` [PATCH v2 6/8] IB/srp: reduce lock coverage of command completion David Dillow
2011-01-05 20:35 ` [PATCH v2 7/8] IB/srp: stop sharing the host lock with SCSI David Dillow
2011-01-09 16:01 ` [PATCH v2 0/8] scaling and bug fixes for 2.6.38 Bart Van Assche
[not found] ` <AANLkTikKQRC=VrdqqNW6Swu6i1XPG5baoLvmvK_oZNVs-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2011-01-09 16:38 ` David Dillow
2011-01-10 5:59 ` Roland Dreier