From: Christoph Hellwig <hch@lst.de>
To: "Nicholas A. Bellinger" <nab@linux-iscsi.org>
Cc: Christoph Hellwig <hch@lst.de>,
"Nicholas A. Bellinger" <nab@daterainc.com>,
target-devel <target-devel@vger.kernel.org>,
linux-scsi <linux-scsi@vger.kernel.org>,
linux-kernel <linux-kernel@vger.kernel.org>,
Hannes Reinecke <hare@suse.de>,
Sagi Grimberg <sagig@mellanox.com>
Subject: Re: [RFC 0/2] target: Add TFO->complete_irq queue_work bypass
Date: Tue, 9 Jun 2015 09:19:21 +0200 [thread overview]
Message-ID: <20150609071921.GA10590@lst.de> (raw)
In-Reply-To: <1433401569.18125.112.camel@haakon3.risingtidesystems.com>
On Thu, Jun 04, 2015 at 12:06:09AM -0700, Nicholas A. Bellinger wrote:
> So I've been using tcm_loop + RAMDISK backends for prototyping, but this
> patch is intended for vhost-scsi so it can avoid the unnecessary
> queue_work() context switch within target_complete_cmd() for all backend
> driver types.
>
> This is because vhost_work_queue() is just updating vhost_dev->work_list
> and immediately wake_up_process() into a different vhost_worker()
> process context. For heavy small block workloads into fast IBLOCK
> backends, avoiding this extra context switch should be a nice efficiency
> win.
How about trying to merge the two workers instead?
> Perhaps tcm_loop LLD code should just be limited to RAMDISK here..?
I'd prefer not to do it, especially for the loopback code, as that
should serve as a simple example.  But before making further judgement
I'd really like to see the numbers.
Note that something that might help much more is getting rid of
the remaining irq- or bh-disabling spinlocks in the target core,
as those tend to introduce a lot of additional latency.  Moving
additional code into hardirq context is diametrically opposed to
that design goal.
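The tension described here can be shown with a kernel-style fragment. This is illustrative pseudocode only, not a patch; the device structure and lock names are invented, though `spin_lock_irqsave()` and `spin_lock()` are the real kernel primitives being contrasted:

```c
/* Illustrative kernel-style pseudocode; struct and field names are
 * hypothetical. */
static void example_complete(struct example_dev *dev)
{
	unsigned long flags;

	/* irq-off critical section: required if this lock can also be
	 * taken from hardirq context, but every cycle spent with
	 * interrupts disabled adds latency for the whole CPU. */
	spin_lock_irqsave(&dev->queue_lock, flags);
	/* ... move the command to the completion list ... */
	spin_unlock_irqrestore(&dev->queue_lock, flags);

	/* A plain spinlock is cheaper, but is only safe once nothing
	 * takes the lock from hardirq context -- which is why moving
	 * more completion work *into* hardirq context works against
	 * removing the irq-disabling variants from the target core. */
	spin_lock(&dev->done_lock);
	/* ... run the completion ... */
	spin_unlock(&dev->done_lock);
}
```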
Thread overview: 10+ messages
2015-05-22 7:57 [RFC 0/2] target: Add TFO->complete_irq queue_work bypass Nicholas A. Bellinger
2015-05-22 7:57 ` [RFC 1/2] target: Add support for fabric IRQ completion Nicholas A. Bellinger
2015-06-09 7:27 ` Christoph Hellwig
2015-06-29 9:51 ` Sagi Grimberg
2015-05-22 7:57 ` [RFC 2/2] loopback: Enable TFO->complete_irq for fast-path ->scsi_done Nicholas A. Bellinger
2015-06-03 12:57 ` [RFC 0/2] target: Add TFO->complete_irq queue_work bypass Christoph Hellwig
2015-06-04 7:06 ` Nicholas A. Bellinger
2015-06-04 17:01 ` Sagi Grimberg
2015-06-09 7:19 ` Christoph Hellwig [this message]
2015-06-10 7:10 ` Nicholas A. Bellinger