public inbox for linux-nvme@lists.infradead.org
From: Geliang Tang <geliang@kernel.org>
To: Nilay Shroff <nilay@linux.ibm.com>,
	lsf-pc@lists.linux-foundation.org,
	 linux-nvme@lists.infradead.org
Cc: mptcp@lists.linux.dev, Matthieu Baerts <matttbe@kernel.org>,
	Mat Martineau <martineau@kernel.org>,
	Paolo Abeni <pabeni@redhat.com>, Hannes Reinecke <hare@suse.de>
Subject: Re: [LSF/MM/BPF TOPIC] NVMe over MPTCP: Multi-Fold Acceleration for NVMe over TCP in Multi-NIC Environments
Date: Thu, 26 Feb 2026 17:54:29 +0800	[thread overview]
Message-ID: <ecb30d89b67b992bfc136186f18550f9fb974baa.camel@kernel.org> (raw)
In-Reply-To: <48e429f3-e29f-4eac-b4d3-3bf9e0d1c245@linux.ibm.com>

Hi Nilay,

Thanks for your reply.

On Wed, 2026-02-25 at 20:37 +0530, Nilay Shroff wrote:
> 
> 
> On 1/29/26 9:43 AM, Geliang Tang wrote:
> > 3. Performance Benefits
> > 
> > This new feature has been evaluated in different environments:
> > 
> > I conducted 'NVMe over MPTCP' tests between two PCs, each equipped
> > with two Gigabit NICs and directly connected via Ethernet cables.
> > Using 'NVMe over TCP', the fio benchmark showed a speed of
> > approximately 100 MiB/s. In contrast, 'NVMe over MPTCP' achieved
> > about 200 MiB/s with fio, doubling the throughput.
> > 
> > In a virtual machine test environment simulating four NICs on both
> > sides, 'NVMe over MPTCP' delivered bandwidth up to four times that
> > of standard TCP.
> 
> This is interesting. Did you try using an NVMe multipath iopolicy
> other than the default numa policy? Assuming both the host and target
> are multihomed, configuring round-robin or queue-depth may provide
> performance comparable to what you are seeing with MPTCP.
> 
> I think MPTCP distributes traffic using transport-level metrics such
> as RTT, cwnd, and packet loss, whereas the NVMe multipath layer makes
> decisions based on ANA state, queue depth, and NUMA locality. In a
> setup with multiple active paths, switching the iopolicy from numa to
> round-robin or queue-depth could improve load distribution across
> controllers and thus improve performance.
> 
> IMO, it would be useful to test with those policies and compare the
> results against the MPTCP setup.
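
As background for that comparison, the MPTCP side of such a test needs
the extra NICs advertised as subflow endpoints to the in-kernel path
manager. A minimal host-side sketch follows; the address 192.168.2.10
and device eth1 are placeholders, not values from the actual testbed:

```shell
# Allow up to 2 additional subflows per MPTCP connection, and accept
# up to 2 ADD_ADDR advertisements from the peer.
ip mptcp limits set subflow 2 add_addr_accepted 2

# Advertise the second NIC's address as an additional subflow path;
# 192.168.2.10 and eth1 are placeholders for the host's second interface.
ip mptcp endpoint add 192.168.2.10 dev eth1 subflow
```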

Ming Lei also made a similar comment. In my experiments, I didn't set
the multipath iopolicy, so I was using the default numa policy. As a
follow-up, I'll switch it to round-robin or queue-depth and rerun the
experiments. I'll share the results in this email thread.
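
For reference, the iopolicy change is done through sysfs; a sketch,
where nvme-subsys0 is a placeholder for the subsystem name on the
actual test host:

```shell
# Show the current I/O policy of each NVMe subsystem (default: "numa").
cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy

# Switch the subsystem under test to round-robin (or "queue-depth")
# before rerunning fio; nvme-subsys0 is a placeholder.
echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
```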

Thanks,
-Geliang

> 
> Thanks,
> --Nilay


Thread overview: 6+ messages
2026-01-29  4:13 [LSF/MM/BPF TOPIC] NVMe over MPTCP: Multi-Fold Acceleration for NVMe over TCP in Multi-NIC Environments Geliang Tang
2026-02-25  5:57 ` Ming Lei
2026-02-26  9:44   ` Geliang Tang
2026-02-25 15:07 ` Nilay Shroff
2026-02-26  9:54   ` Geliang Tang [this message]
2026-03-05  4:30     ` Geliang Tang
