public inbox for linux-scsi@vger.kernel.org
From: Christoph Hellwig <hch@lst.de>
To: Andy Grover <agrover@redhat.com>
Cc: "Nicholas A. Bellinger" <nab@linux-iscsi.org>,
	target-devel <target-devel@vger.kernel.org>,
	linux-scsi <linux-scsi@vger.kernel.org>
Subject: Re: [PATCH 1/2] target: Add transport_handle_cdb_direct optimization
Date: Fri, 17 Jun 2011 14:01:39 +0200	[thread overview]
Message-ID: <20110617120139.GA28409@lst.de> (raw)
In-Reply-To: <4DFA83CA.60600@redhat.com>

On Thu, Jun 16, 2011 at 03:29:30PM -0700, Andy Grover wrote:
> I don't think we should be making it easy on the core, I think we should
> be making it easy on the *fabrics*, if for no other reason that there
> are >1 of them, but only one core. Less code duplicated.

The userspace offload really is just a call to the workqueue code.  I
can't see how it can be made easier by moving it into the core, but if
there's a way to make it easier I'm certainly not against it.
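Roughly, the workqueue pattern in question looks like the sketch below.
This is illustrative only, not code from the patch; names like
fabric_cmd and fabric_queue_cmd are made up, though
transport_handle_cdb_direct and the workqueue calls are the real APIs:

```c
/* Hypothetical per-command state for a fabric driver. */
struct fabric_cmd {
	struct se_cmd se_cmd;
	struct work_struct work;
};

static void fabric_submit_work(struct work_struct *work)
{
	struct fabric_cmd *cmd = container_of(work, struct fabric_cmd, work);

	/* Runs in process context: blocking I/O and GFP_KERNEL are fine. */
	transport_handle_cdb_direct(&cmd->se_cmd);
}

/* Called from the fabric's receive path, possibly in atomic context. */
static void fabric_queue_cmd(struct fabric_cmd *cmd)
{
	INIT_WORK(&cmd->work, fabric_submit_work);
	queue_work(system_wq, &cmd->work);
}
```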

> Furthermore, many commands can be handled and generate a response
> without submitting a read/write request to the backstores, which is the
> only part that may need to sleep. If we decide all fabric calls to the
> core must be from a thread, then all fabrics that could've gotten a
> response to a command without context switching would then be the ones
> taking unneeded context switches.

There are actually a lot of those commands, but all of them are in the
slow path, that is discovery, error recovery, failover and miscellaneous
administration.  All the data path commands, and that includes cache flushes
and discards in addition to reads and writes in various forms, actually
need to do I/O and allocations.

> It's not clear to me yet what the barriers to fabrics calling core APIs
> from interrupt are at this point. Do we just need to avoid GFP_KERNEL
> allocs? The architecture as-is may be pretty close to doing this, and
> then we'd be close to a reasonable framework.

It just means a lot of irqsave locking, GFP_ATOMIC or conditional
allocations, and generally a lot of pain.  If it actually helped with
performance that might be a worthwhile tradeoff, but in the end we require
process context for all data path commands anyway.  Adding a new special
case for slow path commands doesn't seem worth it to me.
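To make the contrast concrete, here is a schematic sketch (not code from
the target core; mydev is a made-up struct) of what allowing callers in
atomic context would force everywhere, versus what process context permits:

```c
/* Atomic-context pattern: irqsave locking, non-blocking allocation. */
static void *alloc_from_irq_context(struct mydev *dev, size_t len)
{
	unsigned long flags;
	void *buf;

	spin_lock_irqsave(&dev->lock, flags);
	buf = kmalloc(len, GFP_ATOMIC);	/* cannot sleep, may well fail */
	spin_unlock_irqrestore(&dev->lock, flags);
	return buf;
}

/* Process-context pattern: allocation may block and wait for reclaim. */
static void *alloc_from_process_context(size_t len)
{
	return kmalloc(len, GFP_KERNEL);
}
```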



Thread overview: 9+ messages
2011-06-04  6:01 [PATCH 0/2] target/iscsi: Convert to RX context task mapping+queueing Nicholas A. Bellinger
2011-06-04  6:01 ` [PATCH 1/2] target: Add transport_handle_cdb_direct optimization Nicholas A. Bellinger
2011-06-04 14:03   ` Christoph Hellwig
2011-06-16 19:13     ` Andy Grover
2011-06-16 19:22       ` Christoph Hellwig
2011-06-16 22:29         ` Andy Grover
2011-06-17 12:01           ` Christoph Hellwig [this message]
2011-06-17 14:41             ` Christoph Hellwig
2011-06-04  6:01 ` [PATCH 2/2] iscsi-target: Convert to cmdsn_mutex and transport_handle_cdb_direct usage Nicholas A. Bellinger
