Linux-NVME Archive on lore.kernel.org
From: Geliang Tang <geliang@kernel.org>
To: lsf-pc@lists.linux-foundation.org,
	"Javier González" <javier.gonz@samsung.com>,
	"Nilay Shroff" <nilay@linux.ibm.com>,
	"Ming Lei" <ming.lei@redhat.com>,
	"Matthieu Baerts" <matttbe@kernel.org>,
	"Mat Martineau" <martineau@kernel.org>,
	"Paolo Abeni" <pabeni@redhat.com>,
	"Hannes Reinecke" <hare@suse.de>,
	"John Meneghini" <jmeneghi@redhat.com>,
	"Randy Jennings" <randyj@purestorage.com>
Cc: mptcp@lists.linux.dev, linux-nvme@lists.infradead.org
Subject: Re: [LSF/MM/BPF TOPIC] NVMe over MPTCP: Multi-Fold Acceleration for NVMe over TCP in Multi-NIC Environments
Date: Wed, 13 May 2026 18:04:52 +0800	[thread overview]
Message-ID: <6042b928aa8116d531c5e360ba19fa077af87bcc.camel@kernel.org> (raw)
In-Reply-To: <ffc23fb3bc82988608237be48a3eea7fdd3a4dd7.camel@kernel.org>

[-- Attachment #1: Type: text/plain, Size: 3494 bytes --]

Hello everyone,

Thank you for your interest in NVMe over MPTCP. I have attached the
slides from the presentation to this email.

Please note that the demo in the slides configured only a single NVMe
multipath. As a follow-up, I will post MPTCP performance test results
with several NVMe multipaths in this thread.
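
In the meantime, for anyone who wants to double-check that the NVMe/TCP
connection is really carried over MPTCP subflows while fio is running,
the following is a rough sketch; the exact counter names and output
depend on the kernel and iproute2 versions:

  # List the configured MPTCP endpoints and limits on the host.
  ip mptcp endpoint show
  ip mptcp limits show
  # Show MPTCP sockets and their subflows while the benchmark runs.
  ss -Mni
  # MPTCP MIB counters; growing data counters confirm subflow traffic.
  nstat | grep -i MPTcp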

Thanks,
-Geliang

On Thu, 2026-03-05 at 12:30 +0800, Geliang Tang wrote:
> Hi Nilay, Ming,
> 
> Thank you again for your interest in NVMe over MPTCP.
> 
> On Thu, 2026-02-26 at 17:54 +0800, Geliang Tang wrote:
> > Hi Nilay,
> > 
> > Thanks for your reply.
> > 
> > On Wed, 2026-02-25 at 20:37 +0530, Nilay Shroff wrote:
> > > 
> > > 
> > > On 1/29/26 9:43 AM, Geliang Tang wrote:
> > > > 3. Performance Benefits
> > > > 
> > > > This new feature has been evaluated in different environments:
> > > > 
> > > > I conducted 'NVMe over MPTCP' tests between two PCs, each
> > > > equipped with two Gigabit NICs and directly connected via
> > > > Ethernet cables. Using 'NVMe over TCP', the fio benchmark
> > > > showed a speed of approximately 100 MiB/s. In contrast, 'NVMe
> > > > over MPTCP' achieved about 200 MiB/s with fio, doubling the
> > > > throughput.
> > > > 
> > > > In a virtual machine test environment simulating four NICs on
> > > > both sides, 'NVMe over MPTCP' delivered bandwidth up to four
> > > > times that of standard TCP.
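
For completeness: in a two-NIC setup like the one described above, the
extra subflow is typically only created once the in-kernel path manager
has been told about the second interface. A minimal sketch, with
illustrative addresses and interface names:

  # Allow additional subflows per connection and advertise the
  # second NIC as an MPTCP subflow endpoint.
  ip mptcp limits set subflow 2 add_addr_accepted 2
  ip mptcp endpoint add 192.168.2.1 dev eth1 subflow
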
> > > 
> > > This is interesting. Did you try using an NVMe multipath
> > > iopolicy other than the default numa policy? Assuming both the
> > > host and target are multihomed, configuring round-robin or
> > > queue-depth may provide performance comparable to what you are
> > > seeing with MPTCP.
> > > 
> > > I think MPTCP would distribute traffic using transport-level
> > > metrics such as RTT, cwnd, and packet loss, whereas the NVMe
> > > multipath layer makes decisions based on ANA state, queue depth,
> > > and NUMA locality. In a setup with multiple active paths,
> > > switching the iopolicy from numa to round-robin or queue-depth
> > > could improve load distribution across controllers and thus
> > > improve performance.
> > > 
> > > IMO, it would be useful to test with those policies and compare
> > > the results against the MPTCP setup.
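
For reference, the iopolicy can also be changed at runtime through
sysfs; the subsystem instance number below is only an example and
depends on the setup:

  # Show the current multipath I/O policy, then switch to round-robin.
  cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
  echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
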
> > 
> > Ming Lei also made a similar comment. In my experiments, I didn't
> > set the multipath iopolicy, so I was using the default numa
> > policy. In the follow-up, I'll adjust it to round-robin or
> > queue-depth and rerun the experiments. I'll share the results in
> > this email thread.
> 
> Based on your feedback, I have added iopolicy support to the NVMe
> over MPTCP selftest script (see patch 8 in [1]). We can set the
> iopolicy to round-robin like this:
> 
>  # ./mptcp_nvme.sh mptcp round-robin
> 
> This demonstrates that "NVMe over MPTCP" and "NVMe multipath" can
> work simultaneously without conflict.
> 
> Using this test script, I compared three I/O policies: numa,
> round-robin, and queue-depth. The results for fio were very
> similar. It's possible that this test environment doesn't fully
> reflect the differences in I/O policies. I will continue to follow
> up with further tests.
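
For readers who want to reproduce this kind of comparison, an
illustrative fio invocation could look like the one below; the device
path and job parameters are only examples, not the exact job file used
in the runs above:

  # Illustrative large-block sequential read against the multipath
  # NVMe device; adjust device, block size, and queue depth as needed.
  fio --name=nvme-mptcp-seqread --filename=/dev/nvme0n1 \
      --rw=read --bs=128k --ioengine=libaio --iodepth=32 \
      --direct=1 --runtime=60 --time_based
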
> 
> Thanks,
> -Geliang
> 
> [1]
> NVME over MPTCP, v4
> https://patchwork.kernel.org/project/mptcp/cover/cover.1772683110.git.tanggeliang@kylinos.cn/
> 
> > 
> > Thanks,
> > -Geliang
> > 
> > > 
> > > Thanks,
> > > --Nilay


[-- Attachment #2: lsfmmbpf2026-nvme-over-mptcp.pdf --]
[-- Type: application/pdf, Size: 130018 bytes --]

Thread overview: 7+ messages
2026-01-29  4:13 [LSF/MM/BPF TOPIC] NVMe over MPTCP: Multi-Fold Acceleration for NVMe over TCP in Multi-NIC Environments Geliang Tang
2026-02-25  5:57 ` Ming Lei
2026-02-26  9:44   ` Geliang Tang
2026-02-25 15:07 ` Nilay Shroff
2026-02-26  9:54   ` Geliang Tang
2026-03-05  4:30     ` Geliang Tang
2026-05-13 10:04       ` Geliang Tang [this message]
