From: Nilay Shroff <nilay@linux.ibm.com>
To: Geliang Tang <geliang@kernel.org>,
lsf-pc@lists.linux-foundation.org,
linux-nvme@lists.infradead.org
Cc: mptcp@lists.linux.dev, Matthieu Baerts <matttbe@kernel.org>,
Mat Martineau <martineau@kernel.org>,
Paolo Abeni <pabeni@redhat.com>, Hannes Reinecke <hare@suse.de>
Subject: Re: [LSF/MM/BPF TOPIC] NVMe over MPTCP: Multi-Fold Acceleration for NVMe over TCP in Multi-NIC Environments
Date: Wed, 25 Feb 2026 20:37:00 +0530 [thread overview]
Message-ID: <48e429f3-e29f-4eac-b4d3-3bf9e0d1c245@linux.ibm.com> (raw)
In-Reply-To: <a9f115aa5719e1088702a3fdeee766a3166611b1.camel@kernel.org>
On 1/29/26 9:43 AM, Geliang Tang wrote:
> 3. Performance Benefits
>
> This new feature has been evaluated in different environments:
>
> I conducted 'NVMe over MPTCP' tests between two PCs, each equipped with
> two Gigabit NICs and directly connected via Ethernet cables. Using
> 'NVMe over TCP', the fio benchmark showed a speed of approximately 100
> MiB/s. In contrast, 'NVMe over MPTCP' achieved about 200 MiB/s with
> fio, doubling the throughput.
>
> In a virtual machine test environment simulating four NICs on both
> sides, 'NVMe over MPTCP' delivered bandwidth up to four times that of
> standard TCP.
This is interesting. Did you try using an NVMe multipath iopolicy other
than the default numa policy? Assuming both the host and target are multihomed,
configuring round-robin or queue-depth may provide performance comparable
to what you are seeing with MPTCP.
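For instance, assuming the subsystem shows up as nvme-subsys0 (the name here
is only an example; check your own system), the iopolicy can be inspected and
switched at runtime via sysfs:

```shell
# Show the current I/O policy (numa, round-robin, or queue-depth)
cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy

# Switch to round-robin; queue-depth is set the same way
echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
```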
I think MPTCP distributes traffic using transport-level metrics such as
RTT, cwnd, and packet loss, whereas the NVMe multipath layer makes decisions
based on ANA state, queue depth, and NUMA locality. In a setup with multiple
active paths, switching the iopolicy from numa to round-robin or queue-depth
could improve load distribution across controllers and thus improve performance.
IMO, it would be useful to test with those policies and compare the results
against the MPTCP setup.
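As a rough sketch of such a comparison (device name and job parameters are
only examples; adjust them to match your earlier MPTCP runs), one fio
invocation per iopolicy setting would look like:

```shell
# Sequential-read throughput against the multipath NVMe device;
# repeat once for each iopolicy (numa, round-robin, queue-depth)
fio --name=iopolicy-test --filename=/dev/nvme0n1 \
    --rw=read --bs=128k --iodepth=32 --numjobs=4 \
    --ioengine=libaio --direct=1 \
    --runtime=30 --time_based --group_reporting
```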
Thanks,
--Nilay