From: Li Feng <lifeng1519@gmail.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@fb.com>,
Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
"open list:NVM EXPRESS DRIVER" <linux-nvme@lists.infradead.org>,
linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v2] nvme/tcp: Add support to set the tcp worker cpu affinity
Date: Mon, 17 Apr 2023 15:50:46 +0800
Message-ID: <D8FFAFCB-5486-4211-9AC8-2779AE368183@gmail.com>
In-Reply-To: <ZDz3TlFUxMxaO1W4@ovpn-8-16.pek2.redhat.com>
> On Apr 17, 2023, at 3:37 PM, Ming Lei <ming.lei@redhat.com> wrote:
>
> On Thu, Apr 13, 2023 at 09:29:41PM +0800, Li Feng wrote:
>> The default worker affinity policy is to use all online CPUs, e.g. from 0
>> to N-1. However, if some CPUs are busy with other jobs, nvme-tcp will
>> have bad performance.
>
> Can you explain in detail how nvme-tcp performs worse in this situation?
>
> If some CPUs are known to be busy, you can submit the nvme-tcp I/O jobs
> on other, non-busy CPUs via taskset, or the scheduler is supposed to choose
> proper CPUs for you. And usually an nvme-tcp device should be saturated
> with limited I/O depth or jobs/CPUs.
>
>
> Thanks,
> Ming
>
Taskset can't work on nvme-tcp io-queues, because the worker CPU is decided at the nvme-tcp 'connect' stage,
not at the I/O submission stage. Assume there is only one io-queue: its bound CPU is CPU0, no matter which
CPUs the I/O jobs run on.
Thread overview: 28+ messages
[not found] <20230413062339.2454616-1-fengli@smartx.com>
2023-04-13 6:33 ` [PATCH] nvme/tcp: Add support to set the tcp worker cpu affinity Li Feng
2023-04-13 12:53 ` kernel test robot
2023-04-17 13:45 ` Sagi Grimberg
2023-04-18 3:39 ` Li Feng
2023-04-19 9:32 ` Sagi Grimberg
2023-04-25 8:32 ` Li Feng
2023-04-26 11:31 ` Hannes Reinecke
2023-04-27 12:21 ` Sagi Grimberg
2023-04-27 14:36 ` Ming Lei
2023-04-27 12:11 ` Sagi Grimberg
2023-04-18 3:58 ` Chaitanya Kulkarni
2023-04-18 4:21 ` Li Feng
2023-04-18 9:20 ` Li Feng
2023-04-13 13:29 ` [PATCH v2] " Li Feng
2023-04-14 8:36 ` Hannes Reinecke
2023-04-14 9:35 ` Li Feng
2023-04-15 20:21 ` Chaitanya Kulkarni
2023-04-15 21:06 ` David Laight
2023-04-17 3:31 ` Li Feng
2023-04-17 6:27 ` Hannes Reinecke
2023-04-17 8:32 ` Li Feng
2023-04-17 7:37 ` Ming Lei
2023-04-17 7:50 ` Li Feng [this message]
2023-04-17 8:05 ` Ming Lei
2023-04-17 13:33 ` Sagi Grimberg
2023-04-18 3:29 ` Li Feng
2023-04-18 4:33 ` Ming Lei
2023-04-18 9:32 ` Sagi Grimberg