* [PATCH 0/2] target/iscsi: Convert to RX context task mapping+queueing
@ 2011-06-04 6:01 Nicholas A. Bellinger
2011-06-04 6:01 ` [PATCH 1/2] target: Add transport_handle_cdb_direct optimization Nicholas A. Bellinger
2011-06-04 6:01 ` [PATCH 2/2] iscsi-target: Convert to cmdsn_mutex and transport_handle_cdb_direct usage Nicholas A. Bellinger
0 siblings, 2 replies; 9+ messages in thread
From: Nicholas A. Bellinger @ 2011-06-04 6:01 UTC (permalink / raw)
To: Andy Grover, Christoph Hellwig
Cc: target-devel, linux-scsi, Nicholas Bellinger
From: Nicholas Bellinger <nab@linux-iscsi.org>
Hi again Andy and Christoph,
These are two additional patches I am pushing into lio-core-2.6.git/master
for LIO v4.1 to properly enable the RX side direct queueing optimization with
iSCSI READs and non-immediate iSCSI WRITE payloads. Note that the logic to
enable this for all cases was not merged originally in Patch #39 from Andy's
series. So for the moment, in Patch #1 the optimization has been added as a
separate new caller, transport_handle_cdb_direct(), and transport_generic_handle_cdb()
has been left as-is for existing fabric code (namely tcm_fc, ib_srpt and ibmvscsis)
that has not been tested with these changes so far.
The other changes in Patch #2 are specific to iscsi-target for enabling
transport_handle_cdb_direct() usage. This involves converting to a sleeping
sess->cmdsn_mutex for protecting the session wide CmdSN list in the path:
iscsit_sequence_cmd() ->
iscsit_execute_cmd() ->
transport_handle_cdb_direct() ->
transport_generic_new_cmd()
This is required because iscsit_execute_cmd() is called with the CmdSN
lock held, and spinning there caused issues during testing, as we may
end up sleeping inside transport_generic_new_cmd() while performing
allocations in transport_new_cmd_obj() -> transport_allocate_tasks().
Please have a look,
Thanks!
--nab
Nicholas Bellinger (2):
target: Add transport_handle_cdb_direct optimization
iscsi-target: Convert to cmdsn_mutex and transport_handle_cdb_direct
usage
drivers/target/iscsi/iscsi_target_core.h | 2 +-
drivers/target/iscsi/iscsi_target_device.c | 4 ++--
drivers/target/iscsi/iscsi_target_erl1.c | 18 +++++++++---------
drivers/target/iscsi/iscsi_target_erl2.c | 4 ++--
drivers/target/iscsi/iscsi_target_login.c | 2 +-
drivers/target/iscsi/iscsi_target_nego.c | 4 ++--
drivers/target/iscsi/iscsi_target_util.c | 4 ++--
drivers/target/target_core_transport.c | 24 ++++++++++++++++++++++++
include/target/target_core_transport.h | 1 +
9 files changed, 44 insertions(+), 19 deletions(-)
--
1.7.2.5
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH 1/2] target: Add transport_handle_cdb_direct optimization
2011-06-04 6:01 [PATCH 0/2] target/iscsi: Convert to RX context task mapping+queueing Nicholas A. Bellinger
@ 2011-06-04 6:01 ` Nicholas A. Bellinger
2011-06-04 14:03 ` Christoph Hellwig
2011-06-04 6:01 ` [PATCH 2/2] iscsi-target: Convert to cmdsn_mutex and transport_handle_cdb_direct usage Nicholas A. Bellinger
1 sibling, 1 reply; 9+ messages in thread
From: Nicholas A. Bellinger @ 2011-06-04 6:01 UTC (permalink / raw)
To: Andy Grover, Christoph Hellwig
Cc: target-devel, linux-scsi, Nicholas Bellinger
From: Nicholas Bellinger <nab@linux-iscsi.org>
This patch adds a transport_handle_cdb_direct() optimization for mapping
and queueing tasks directly from within fabric processing context by calling
the newly exported transport_generic_new_cmd(). This currently expects to
be called from process context only, and will fail if called within interrupt
context.
This patch also leaves transport_generic_handle_cdb() unmodified for the
moment to function as expected with existing tcm_fc and ib_srpt fabrics,
and will be removed once these have been converted and tested with v4.1
code using transport_handle_cdb_direct().
Based on Andy's original patch here:
[PATCH 39/42] target: Call transport_new_cmd instead of adding to cmd queue
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
drivers/target/target_core_transport.c | 24 ++++++++++++++++++++++++
include/target/target_core_transport.h | 1 +
2 files changed, 25 insertions(+), 0 deletions(-)
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index 95082b0..53a951e 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -1847,12 +1847,36 @@ int transport_generic_handle_cdb(
printk(KERN_ERR "cmd->se_lun is NULL\n");
return -EINVAL;
}
+
transport_add_cmd_to_queue(cmd, TRANSPORT_NEW_CMD);
return 0;
}
EXPORT_SYMBOL(transport_generic_handle_cdb);
/*
+ * Used by fabric module frontends to queue tasks directly.
+ * May only be used from process context.
+ */
+int transport_handle_cdb_direct(
+ struct se_cmd *cmd)
+{
+ if (!cmd->se_lun) {
+ dump_stack();
+ printk(KERN_ERR "cmd->se_lun is NULL\n");
+ return -EINVAL;
+ }
+ if (in_interrupt()) {
+ dump_stack();
+ printk(KERN_ERR "transport_handle_cdb_direct cannot be called"
+ " from interrupt context\n");
+ return -EINVAL;
+ }
+
+ return transport_generic_new_cmd(cmd);
+}
+EXPORT_SYMBOL(transport_handle_cdb_direct);
+
+/*
* Used by fabric module frontends defining a TFO->new_cmd_map() caller
* to queue up a newly setup se_cmd w/ TRANSPORT_NEW_CMD_MAP in order to
* complete setup in TCM process context w/ TFO->new_cmd_map().
diff --git a/include/target/target_core_transport.h b/include/target/target_core_transport.h
index 604e669..2aae764 100644
--- a/include/target/target_core_transport.h
+++ b/include/target/target_core_transport.h
@@ -166,6 +166,7 @@ extern void transport_init_se_cmd(struct se_cmd *,
extern void transport_free_se_cmd(struct se_cmd *);
extern int transport_generic_allocate_tasks(struct se_cmd *, unsigned char *);
extern int transport_generic_handle_cdb(struct se_cmd *);
+extern int transport_handle_cdb_direct(struct se_cmd *);
extern int transport_generic_handle_cdb_map(struct se_cmd *);
extern int transport_generic_handle_data(struct se_cmd *);
extern void transport_new_cmd_failure(struct se_cmd *);
--
1.7.2.5
* [PATCH 2/2] iscsi-target: Convert to cmdsn_mutex and transport_handle_cdb_direct usage
2011-06-04 6:01 [PATCH 0/2] target/iscsi: Convert to RX context task mapping+queueing Nicholas A. Bellinger
2011-06-04 6:01 ` [PATCH 1/2] target: Add transport_handle_cdb_direct optimization Nicholas A. Bellinger
@ 2011-06-04 6:01 ` Nicholas A. Bellinger
1 sibling, 0 replies; 9+ messages in thread
From: Nicholas A. Bellinger @ 2011-06-04 6:01 UTC (permalink / raw)
To: Andy Grover, Christoph Hellwig
Cc: target-devel, linux-scsi, Nicholas Bellinger
From: Nicholas Bellinger <nab@linux-iscsi.org>
This patch converts iscsi-target from a spinning sess->cmdsn_lock to a
sleeping sess->cmdsn_mutex, in order to allow direct RX thread context
task queueing using the new transport_handle_cdb_direct() caller into
target core.
This conversion was necessary in order to queue tasks directly with
transport_handle_cdb_direct() -> transport_generic_new_cmd(), which
may end up sleeping in the following execution path:
iscsit_handle_scsi_cmd() ->
iscsit_sequence_cmd() ->
iscsit_execute_cmd() ->
transport_handle_cdb_direct() ->
transport_generic_new_cmd()
This patch converts iscsit_execute_cmd() to be protected by a sleeping
->cmdsn_mutex so that transport_generic_new_cmd() can be called while
protecting the session wide CmdSN list.
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
drivers/target/iscsi/iscsi_target_core.h | 2 +-
drivers/target/iscsi/iscsi_target_device.c | 4 ++--
drivers/target/iscsi/iscsi_target_erl1.c | 18 +++++++++---------
drivers/target/iscsi/iscsi_target_erl2.c | 4 ++--
drivers/target/iscsi/iscsi_target_login.c | 2 +-
drivers/target/iscsi/iscsi_target_nego.c | 4 ++--
drivers/target/iscsi/iscsi_target_util.c | 4 ++--
7 files changed, 19 insertions(+), 19 deletions(-)
diff --git a/drivers/target/iscsi/iscsi_target_core.h b/drivers/target/iscsi/iscsi_target_core.h
index 4aef392..5bf2f7a 100644
--- a/drivers/target/iscsi/iscsi_target_core.h
+++ b/drivers/target/iscsi/iscsi_target_core.h
@@ -613,7 +613,7 @@ struct iscsi_session {
u32 cmdsn_window;
/* protects cmdsn values */
- spinlock_t cmdsn_lock;
+ struct mutex cmdsn_mutex;
/* session wide counter: expected command sequence number */
u32 exp_cmd_sn;
/* session wide counter: maximum allowed command sequence number */
diff --git a/drivers/target/iscsi/iscsi_target_device.c b/drivers/target/iscsi/iscsi_target_device.c
index 3693258..b855c68 100644
--- a/drivers/target/iscsi/iscsi_target_device.c
+++ b/drivers/target/iscsi/iscsi_target_device.c
@@ -81,8 +81,8 @@ void iscsit_increment_maxcmdsn(struct iscsi_cmd *cmd, struct iscsi_session *sess
cmd->maxcmdsn_inc = 1;
- spin_lock(&sess->cmdsn_lock);
+ mutex_lock(&sess->cmdsn_mutex);
sess->max_cmd_sn += 1;
TRACE(TRACE_ISCSI, "Updated MaxCmdSN to 0x%08x\n", sess->max_cmd_sn);
- spin_unlock(&sess->cmdsn_lock);
+ mutex_unlock(&sess->cmdsn_mutex);
}
diff --git a/drivers/target/iscsi/iscsi_target_erl1.c b/drivers/target/iscsi/iscsi_target_erl1.c
index 9d66ada..c4242d8 100644
--- a/drivers/target/iscsi/iscsi_target_erl1.c
+++ b/drivers/target/iscsi/iscsi_target_erl1.c
@@ -803,7 +803,7 @@ static struct iscsi_ooo_cmdsn *iscsit_allocate_ooo_cmdsn(void)
}
/*
- * Called with sess->cmdsn_lock held.
+ * Called with sess->cmdsn_mutex held.
*/
static int iscsit_attach_ooo_cmdsn(
struct iscsi_session *sess,
@@ -850,7 +850,7 @@ static int iscsit_attach_ooo_cmdsn(
/*
* Removes an struct iscsi_ooo_cmdsn from a session's list,
- * called with struct iscsi_session->cmdsn_lock held.
+ * called with struct iscsi_session->cmdsn_mutex held.
*/
void iscsit_remove_ooo_cmdsn(
struct iscsi_session *sess,
@@ -865,18 +865,18 @@ void iscsit_clear_ooo_cmdsns_for_conn(struct iscsi_conn *conn)
struct iscsi_ooo_cmdsn *ooo_cmdsn;
struct iscsi_session *sess = conn->sess;
- spin_lock(&sess->cmdsn_lock);
+ mutex_lock(&sess->cmdsn_mutex);
list_for_each_entry(ooo_cmdsn, &sess->sess_ooo_cmdsn_list, ooo_list) {
if (ooo_cmdsn->cid != conn->cid)
continue;
ooo_cmdsn->cmd = NULL;
}
- spin_unlock(&sess->cmdsn_lock);
+ mutex_unlock(&sess->cmdsn_mutex);
}
/*
- * Called with sess->cmdsn_lock held.
+ * Called with sess->cmdsn_mutex held.
*/
int iscsit_execute_ooo_cmdsns(struct iscsi_session *sess)
{
@@ -917,7 +917,7 @@ int iscsit_execute_ooo_cmdsns(struct iscsi_session *sess)
/*
* Called either:
*
- * 1. With sess->cmdsn_lock held from iscsi_execute_ooo_cmdsns()
+ * 1. With sess->cmdsn_mutex held from iscsi_execute_ooo_cmdsns()
* or iscsi_check_received_cmdsn().
* 2. With no locks held directly from iscsi_handle_XXX_pdu() functions
* for immediate commands.
@@ -1012,7 +1012,7 @@ int iscsit_execute_cmd(struct iscsi_cmd *cmd, int ooo)
iscsit_start_dataout_timer(cmd, cmd->conn);
spin_unlock_bh(&cmd->dataout_timeout_lock);
}
- return transport_generic_handle_cdb(&cmd->se_cmd);
+ return transport_handle_cdb_direct(&cmd->se_cmd);
case ISCSI_OP_NOOP_OUT:
case ISCSI_OP_TEXT:
@@ -1062,14 +1062,14 @@ void iscsit_free_all_ooo_cmdsns(struct iscsi_session *sess)
{
struct iscsi_ooo_cmdsn *ooo_cmdsn, *ooo_cmdsn_tmp;
- spin_lock(&sess->cmdsn_lock);
+ mutex_lock(&sess->cmdsn_mutex);
list_for_each_entry_safe(ooo_cmdsn, ooo_cmdsn_tmp,
&sess->sess_ooo_cmdsn_list, ooo_list) {
list_del(&ooo_cmdsn->ooo_list);
kmem_cache_free(lio_ooo_cache, ooo_cmdsn);
}
- spin_unlock(&sess->cmdsn_lock);
+ mutex_unlock(&sess->cmdsn_mutex);
}
int iscsit_handle_ooo_cmdsn(
diff --git a/drivers/target/iscsi/iscsi_target_erl2.c b/drivers/target/iscsi/iscsi_target_erl2.c
index ad7d7ce..a810e59 100644
--- a/drivers/target/iscsi/iscsi_target_erl2.c
+++ b/drivers/target/iscsi/iscsi_target_erl2.c
@@ -300,7 +300,7 @@ int iscsit_discard_unacknowledged_ooo_cmdsns_for_conn(struct iscsi_conn *conn)
struct iscsi_ooo_cmdsn *ooo_cmdsn, *ooo_cmdsn_tmp;
struct iscsi_session *sess = conn->sess;
- spin_lock(&sess->cmdsn_lock);
+ mutex_lock(&sess->cmdsn_mutex);
list_for_each_entry_safe(ooo_cmdsn, ooo_cmdsn_tmp,
&sess->sess_ooo_cmdsn_list, ooo_list) {
@@ -313,7 +313,7 @@ int iscsit_discard_unacknowledged_ooo_cmdsns_for_conn(struct iscsi_conn *conn)
ooo_cmdsn->cmdsn, conn->cid);
iscsit_remove_ooo_cmdsn(sess, ooo_cmdsn);
}
- spin_unlock(&sess->cmdsn_lock);
+ mutex_unlock(&sess->cmdsn_mutex);
spin_lock_bh(&conn->cmd_lock);
list_for_each_entry_safe(cmd, cmd_tmp, &conn->conn_cmd_list, i_list) {
diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
index 44642ff..1b2ffe0 100644
--- a/drivers/target/iscsi/iscsi_target_login.c
+++ b/drivers/target/iscsi/iscsi_target_login.c
@@ -240,7 +240,7 @@ static int iscsi_login_zero_tsih_s1(
init_completion(&sess->reinstatement_comp);
init_completion(&sess->session_wait_comp);
init_completion(&sess->session_waiting_on_uc_comp);
- spin_lock_init(&sess->cmdsn_lock);
+ mutex_init(&sess->cmdsn_mutex);
spin_lock_init(&sess->conn_lock);
spin_lock_init(&sess->cr_a_lock);
spin_lock_init(&sess->cr_i_lock);
diff --git a/drivers/target/iscsi/iscsi_target_nego.c b/drivers/target/iscsi/iscsi_target_nego.c
index ab387c9..565dbea 100644
--- a/drivers/target/iscsi/iscsi_target_nego.c
+++ b/drivers/target/iscsi/iscsi_target_nego.c
@@ -368,10 +368,10 @@ static int iscsi_target_do_tx_login_io(struct iscsi_conn *conn, struct iscsi_log
login_rsp->tsih = be16_to_cpu(login_rsp->tsih);
login_rsp->itt = be32_to_cpu(login_rsp->itt);
login_rsp->statsn = be32_to_cpu(login_rsp->statsn);
- spin_lock(&sess->cmdsn_lock);
+ mutex_lock(&sess->cmdsn_mutex);
login_rsp->exp_cmdsn = be32_to_cpu(sess->exp_cmd_sn);
login_rsp->max_cmdsn = be32_to_cpu(sess->max_cmd_sn);
- spin_unlock(&sess->cmdsn_lock);
+ mutex_unlock(&sess->cmdsn_mutex);
return 0;
}
diff --git a/drivers/target/iscsi/iscsi_target_util.c b/drivers/target/iscsi/iscsi_target_util.c
index 7eeddd2..e677fa0 100644
--- a/drivers/target/iscsi/iscsi_target_util.c
+++ b/drivers/target/iscsi/iscsi_target_util.c
@@ -435,7 +435,7 @@ int iscsit_sequence_cmd(
int ret;
int cmdsn_ret;
- spin_lock(&conn->sess->cmdsn_lock);
+ mutex_lock(&conn->sess->cmdsn_mutex);
cmdsn_ret = iscsit_check_received_cmdsn(conn->sess, cmdsn);
switch (cmdsn_ret) {
@@ -456,7 +456,7 @@ int iscsit_sequence_cmd(
ret = cmdsn_ret;
break;
}
- spin_unlock(&conn->sess->cmdsn_lock);
+ mutex_unlock(&conn->sess->cmdsn_mutex);
return ret;
}
--
1.7.2.5
* Re: [PATCH 1/2] target: Add transport_handle_cdb_direct optimization
2011-06-04 6:01 ` [PATCH 1/2] target: Add transport_handle_cdb_direct optimization Nicholas A. Bellinger
@ 2011-06-04 14:03 ` Christoph Hellwig
2011-06-16 19:13 ` Andy Grover
0 siblings, 1 reply; 9+ messages in thread
From: Christoph Hellwig @ 2011-06-04 14:03 UTC (permalink / raw)
To: Nicholas A. Bellinger
Cc: Andy Grover, Christoph Hellwig, target-devel, linux-scsi
> + */
> +int transport_handle_cdb_direct(
> + struct se_cmd *cmd)
> +{
> + if (!cmd->se_lun) {
> + dump_stack();
> + printk(KERN_ERR "cmd->se_lun is NULL\n");
> + return -EINVAL;
> + }
> + if (in_interrupt()) {
> + dump_stack();
> + printk(KERN_ERR "transport_generic_handle_cdb cannot be called"
> + " from interrupt context\n");
> + return -EINVAL;
> + }
> +
> + return transport_generic_new_cmd(cmd);
I can't really see any reason to add this helper. It just adds rather
pointless debug checks for cases that already will blow up "properly" with
the current code. Let's keep the callchain lean and just leave it out.
* Re: [PATCH 1/2] target: Add transport_handle_cdb_direct optimization
2011-06-04 14:03 ` Christoph Hellwig
@ 2011-06-16 19:13 ` Andy Grover
2011-06-16 19:22 ` Christoph Hellwig
0 siblings, 1 reply; 9+ messages in thread
From: Andy Grover @ 2011-06-16 19:13 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Nicholas A. Bellinger, target-devel, linux-scsi
On 06/04/2011 07:03 AM, Christoph Hellwig wrote:
>> + */
>> +int transport_handle_cdb_direct(
>> + struct se_cmd *cmd)
>> +{
>> + if (!cmd->se_lun) {
>> + dump_stack();
>> + printk(KERN_ERR "cmd->se_lun is NULL\n");
>> + return -EINVAL;
>> + }
>> + if (in_interrupt()) {
>> + dump_stack();
>> + printk(KERN_ERR "transport_generic_handle_cdb cannot be called"
>> + " from interrupt context\n");
>> + return -EINVAL;
>> + }
>> +
>> + return transport_generic_new_cmd(cmd);
>
> I can't really see any reason to add this helper. It just adds rather
> pointless debug checks for cases that already will blow up "properly" with
> the current code. Let's keep the callchain lean and just leave it out.
I agree. Also, I don't think there should be a handle_cdb_direct, because
I think handle_cdb should call transport_generic_new_cmd; we don't need
a "direct" version. transport_generic_new_cmd should be safe (or made
safe) for calling from interrupt context. There's nothing in it that
demands it be executed in the backstore thread's context, and doing so
just incurs a two-context-switch latency penalty.
Regards -- Andy
* Re: [PATCH 1/2] target: Add transport_handle_cdb_direct optimization
2011-06-16 19:13 ` Andy Grover
@ 2011-06-16 19:22 ` Christoph Hellwig
2011-06-16 22:29 ` Andy Grover
0 siblings, 1 reply; 9+ messages in thread
From: Christoph Hellwig @ 2011-06-16 19:22 UTC (permalink / raw)
To: Andy Grover
Cc: Christoph Hellwig, Nicholas A. Bellinger, target-devel,
linux-scsi
On Thu, Jun 16, 2011 at 12:13:26PM -0700, Andy Grover wrote:
> I agree. Also, I don't think there should be a handle_cdb_direct because
> I think handle_cdb should call transport_generic_new_cmd, we don't need
> a "direct" version. transport_generic_new_cmd should be safe (or made
> safe) for calling from interrupt context. There's nothing in it that
> demands it be executed in the backstore thread's context, and doing so
> just incurs a two context switch latency penalty.
I think the whole target context/threading model needs a makeover. As
we need to switch to a usermode context for just about any operation,
we should just declare that any fabric module must call all entry points
from user context. The fabric modules can trivially do that using the
concurrency managed workqueues if necessary. At that point the target core
can just go ahead by itself without bothering with context switches,
using another workqueue only where it might need additional concurrency
(e.g. executing multiple tasks in parallel for the file backend).
* Re: [PATCH 1/2] target: Add transport_handle_cdb_direct optimization
2011-06-16 19:22 ` Christoph Hellwig
@ 2011-06-16 22:29 ` Andy Grover
2011-06-17 12:01 ` Christoph Hellwig
0 siblings, 1 reply; 9+ messages in thread
From: Andy Grover @ 2011-06-16 22:29 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Nicholas A. Bellinger, target-devel, linux-scsi
On 06/16/2011 12:22 PM, Christoph Hellwig wrote:
> On Thu, Jun 16, 2011 at 12:13:26PM -0700, Andy Grover wrote:
>> I agree. Also, I don't think there should be a handle_cdb_direct because
>> I think handle_cdb should call transport_generic_new_cmd, we don't need
>> a "direct" version. transport_generic_new_cmd should be safe (or made
>> safe) for calling from interrupt context. There's nothing in it that
>> demands it be executed in the backstore thread's context, and doing so
>> just incurs a two context switch latency penalty.
>
> I think the whole target context/threading model needs a makeover. As
> we need to switch to a usermode context for just about any operation
> we should just declare that any fabrict module must call all entry points
> from user context. The fabric modules can trivially do that using the
> concurrency managed workqueues if nessecary. At that point the target core
> can just go ahead by itself without bothering with context switches, and
> just using another workqueue where it might need additional concurreny
> (e.g. executing multiple task is parallel for the file backend)
I don't think we should be making it easy on the core, I think we should
be making it easy on the *fabrics*, if for no other reason than that there
are >1 of them, but only one core. Less code duplicated.
Furthermore, many commands can be handled and generate a response
without submitting a read/write request to the backstores, which is the
only part that may need to sleep. If we decide all fabric calls to the
core must be from a thread, then all fabrics that could've gotten a
response to a command without context switching would then be the ones
taking unneeded context switches.
It's not clear to me yet what the barriers to fabrics calling core APIs
from interrupt are at this point. Do we just need to avoid GFP_KERNEL
allocs? The architecture as-is may be pretty close to doing this, and
then we'd be close to a reasonable framework.
Regards -- Andy
* Re: [PATCH 1/2] target: Add transport_handle_cdb_direct optimization
2011-06-16 22:29 ` Andy Grover
@ 2011-06-17 12:01 ` Christoph Hellwig
2011-06-17 14:41 ` Christoph Hellwig
0 siblings, 1 reply; 9+ messages in thread
From: Christoph Hellwig @ 2011-06-17 12:01 UTC (permalink / raw)
To: Andy Grover; +Cc: Nicholas A. Bellinger, target-devel, linux-scsi
On Thu, Jun 16, 2011 at 03:29:30PM -0700, Andy Grover wrote:
> I don't think we should be making it easy on the core, I think we should
> be making it easy on the *fabrics*, if for no other reason that there
> are >1 of them, but only one core. Less code duplicated.
The userspace offload really is just a call to the workqueue code. I
can't see how it can be made easier by moving it into the core, but if
there's a way to make it easier, I'm certainly not against it.
> Furthermore, many commands can be handled and generate a response
> without submitting a read/write request to the backstores, which is the
> only part that may need to sleep. If we decide all fabric calls to the
> core must be from a thread, then all fabrics that could've gotten a
> response to a command without context switching would then be the ones
> taking unneeded context switches.
There are actually a lot of those commands, but all of them are in the
slow path, that is discovery, error recovery, failover, and misc administration.
All the data path commands, and that includes cache flushes and discards in
addition to reads and writes in various forms, actually need to do I/O and
allocations.
> It's not clear to me yet what the barriers to fabrics calling core APIs
> from interrupt are at this point. Do we just need to avoid GFP_KERNEL
> allocs? The architecture as-is may be pretty close to doing this, and
> then we'd be close to a reasonable framework.
It just means a lot of irqsave locking, GFP_ATOMIC or conditional allocations
and generally a lot of pain. If it would actually help with performance
that might be a worthwhile tradeoff, but in the end we do require process
context for all data path commands anyway. Adding a new special case for
slow path commands doesn't seem like a worthwhile tradeoff to me.
* Re: [PATCH 1/2] target: Add transport_handle_cdb_direct optimization
2011-06-17 12:01 ` Christoph Hellwig
@ 2011-06-17 14:41 ` Christoph Hellwig
0 siblings, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2011-06-17 14:41 UTC (permalink / raw)
To: Andy Grover; +Cc: Nicholas A. Bellinger, target-devel, linux-scsi
On Fri, Jun 17, 2011 at 02:01:39PM +0200, Christoph Hellwig wrote:
> On Thu, Jun 16, 2011 at 03:29:30PM -0700, Andy Grover wrote:
> > I don't think we should be making it easy on the core, I think we should
> > be making it easy on the *fabrics*, if for no other reason that there
> > are >1 of them, but only one core. Less code duplicated.
>
> The userspace offload really is just a call to the workqueue code. I
s/userspace/user context/
end of thread, other threads:[~2011-06-17 14:42 UTC | newest]
Thread overview: 9+ messages
2011-06-04 6:01 [PATCH 0/2] target/iscsi: Convert to RX context task mapping+queueing Nicholas A. Bellinger
2011-06-04 6:01 ` [PATCH 1/2] target: Add transport_handle_cdb_direct optimization Nicholas A. Bellinger
2011-06-04 14:03 ` Christoph Hellwig
2011-06-16 19:13 ` Andy Grover
2011-06-16 19:22 ` Christoph Hellwig
2011-06-16 22:29 ` Andy Grover
2011-06-17 12:01 ` Christoph Hellwig
2011-06-17 14:41 ` Christoph Hellwig
2011-06-04 6:01 ` [PATCH 2/2] iscsi-target: Convert to cmdsn_mutex and transport_handle_cdb_direct usage Nicholas A. Bellinger