Linux-NVME Archive on lore.kernel.org
From: Hannes Reinecke <hare@suse.de>
To: Ping Gan <jacky_gam_2001@163.com>,
	sagi@grimberg.me, hch@lst.de, kch@nvidia.com,
	linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: ping.gan@dell.com
Subject: Re: [PATCH 0/2] nvmet: support polling task for RDMA and TCP
Date: Tue, 16 Jul 2024 12:36:47 +0200
Message-ID: <af243508-e5f3-4835-8d8e-c1bb741e22f3@suse.de>
In-Reply-To: <20240704081015.63584-1-jacky_gam_2001@163.com>

On 7/4/24 10:10, Ping Gan wrote:
>> On 02/07/2024 13:02, Ping Gan wrote:
[ .. ]
>>> And the bandwidth of a node is only 3100 MB/s. When we used the patch
>>> and enabled 6 polling tasks, the bandwidth reached 4000 MB/s, which is
>>> a good improvement.
>>
>> I think you will see similar performance with an unbound workqueue and
>> RPS (Receive Packet Steering).
> 
> Yes, I reworked the nvmet-tcp/nvmet-rdma code to support an unbound
> workqueue, ran the tests under the same conditions as above, and
> compared the unbound workqueue against the polling-mode task. The
> unbound workqueue performed well: for nvmet-tcp we got 3850 MB/s per
> node, almost equal to the polling task, and for nvmet-rdma we got
> 5100 MB/s per node versus 5600 MB/s for the polling task, so the
> difference is quite small. Anyway, your advice was good. Do you think
> we should submit the unbound workqueue patches for nvmet-tcp and
> nvmet-rdma to upstream nvmet?

Please do. I have been using pretty much the same patch during
development of my nvme-tcp scalability patchset, and using WQ_UNBOUND
definitely improves the situation here.
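
For concreteness, a minimal sketch of that change, assuming it amounts
to adding WQ_UNBOUND where the target allocates its I/O workqueue; the
workqueue name and the companion flags below are illustrative, not the
actual patch:

  /* Allocate the target's I/O workqueue as WQ_UNBOUND so work items
   * are no longer pinned to the CPU that queued them. */
  nvmet_tcp_wq = alloc_workqueue("nvmet_tcp_wq",
                  WQ_UNBOUND | WQ_HIGHPRI | WQ_MEM_RECLAIM, 0);

With WQ_UNBOUND the scheduler is free to run the work on any CPU, which
helps when the CPU receiving the network traffic is saturated.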

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
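
The "polling task" numbers quoted above come from the patchset this
thread discusses, which (roughly) dedicates kernel threads to busy-poll
for completions instead of handling them from interrupt context. A
minimal, hypothetical sketch of that idea for the RDMA side; the
function name, budget, and setup are illustrative, not the actual
patches:

  #include <linux/kthread.h>
  #include <linux/sched.h>
  #include <rdma/ib_verbs.h>

  /* Busy-poll a CQ created with IB_POLL_DIRECT; yield the CPU whenever
   * no completions are pending. */
  static int nvmet_rdma_poll_thread(void *data)
  {
          struct ib_cq *cq = data;

          while (!kthread_should_stop()) {
                  /* Reap up to a small budget of completions per pass. */
                  if (ib_process_cq_direct(cq, 16) <= 0)
                          cond_resched();
          }
          return 0;
  }

The trade-off driving the thread: such polling threads burn dedicated
CPU to cut latency, while WQ_UNBOUND recovers most of the same
throughput without reserving cores.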



Thread overview: 17+ messages
2024-06-26  8:28 [PATCH 0/2] nvmet: support polling task for RDMA and TCP Ping Gan
2024-06-26  8:28 ` [PATCH 1/2] nvmet-rdma: add polling cq task for nvmet-rdma Ping Gan
2024-06-26  8:28 ` [PATCH 2/2] nvmet-tcp: add polling task for nvmet-tcp Ping Gan
2024-06-30  8:58 ` [PATCH 0/2] nvmet: support polling task for RDMA and TCP Sagi Grimberg
2024-07-01  7:42   ` Ping Gan
2024-07-01  7:42     ` Ping Gan
2024-07-01  8:22     ` Sagi Grimberg
2024-07-02 10:02       ` Ping Gan
2024-07-02 10:02         ` Ping Gan
2024-07-03 19:58         ` Sagi Grimberg
2024-07-04  8:10           ` Ping Gan
2024-07-04  8:40             ` Sagi Grimberg
2024-07-04 10:35               ` Ping Gan
2024-07-05  5:59                 ` Sagi Grimberg
2024-07-05  6:28                   ` Ping Gan
2024-07-16 10:36             ` Hannes Reinecke [this message]
2024-07-17  0:53               ` Ping Gan

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=af243508-e5f3-4835-8d8e-c1bb741e22f3@suse.de \
    --to=hare@suse.de \
    --cc=hch@lst.de \
    --cc=jacky_gam_2001@163.com \
    --cc=kch@nvidia.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-nvme@lists.infradead.org \
    --cc=ping.gan@dell.com \
    --cc=sagi@grimberg.me \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox; see mirroring instructions for how to clone and
mirror all data and code used for this inbox.