From: "Toke Høiland-Jørgensen" <toke@redhat.com>
To: Federico Parola <federico.parola@polito.it>, xdp-newbies@vger.kernel.org
Subject: Re: XDP and AF_XDP performance comparison
Date: Thu, 22 Sep 2022 20:38:55 +0200 [thread overview]
Message-ID: <87r103tfsw.fsf@toke.dk> (raw)
In-Reply-To: <26480f7b-44b4-c6d3-2376-9b4be8781645@polito.it>
Federico Parola <federico.parola@polito.it> writes:
> Dear all,
> I would like to share with this community a draft I recently wrote [1]
> on the performance comparison of XDP and AF_XDP packet processing.
> In the paper we found some interesting and unexpected results
> (especially related to the impact of addressed memory on the performance
> of the two technologies) and tried to envision a combined use of the two
> technologies, especially to tackle the poor performance of re-injecting
> packets into the kernel from user space to leverage the TCP/IP stack.
> Any comments or suggestions from this community, or any kind of joint
> work/collaboration, would be very much appreciated.
Hi Federico,
Thank you for the link! All in all I thought it was a nicely done
performance comparison.
One thing that might be interesting would be to do the same comparison
on a different driver. Many of the performance details you're
uncovering in this paper boil down to how the driver data path is
implemented. For instance, it's an Intel-specific thing
that there's a whole separate path for zero-copy AF_XDP. Any plans to
replicate the study using, say, an mlx5-based NIC?
Also, a couple of comments on details:
- The performance delta you show in Figure 9 where AF_XDP is faster at
hair-pin forwarding than XDP was a bit puzzling; the two applications
should basically be doing the same thing. It seems to be because the
i40e driver converts the xdp_buff struct to an xdp_frame before
transmitting it out the interface again:
https://elixir.bootlin.com/linux/latest/source/drivers/net/ethernet/intel/i40e/i40e_txrx.c#L2280
- It's interesting that userspace seems to handle scattered memory
accesses over a large range better than the kernel does. It would be
interesting to know why; you mention you're leaving this to future
studies, any plans of following up and trying to figure this out? :)
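To make the hair-pin point above concrete, the extra work on the XDP side
looks roughly like this (a simplified pseudocode sketch of the i40e
XDP_TX path, not the actual kernel source):

```
/* Simplified sketch of the i40e XDP_TX path: before transmitting,
 * the driver converts the on-stack xdp_buff into an xdp_frame stored
 * in the packet's own headroom, an extra step the AF_XDP zero-copy
 * transmit path does not have to perform. */
i40e_xmit_xdp_tx_ring(xdp, ring):
    frame = xdp_convert_buff_to_frame(xdp)  /* write frame metadata into headroom */
    if frame == NULL:
        return DROP                          /* not enough headroom */
    return i40e_xmit_xdp_ring(frame, ring)   /* enqueue frame for transmission */
```

So the delta in Figure 9 is plausibly just the cost of that conversion
on every forwarded packet.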
Finally, since you seem to have your tests packaged up nicely, do you
think it would be possible to take (some of) them and turn them into a
kind of "performance CI" test suite, that can be run automatically, or
semi-automatically to catch future performance regressions in the XDP
stack? Such a test suite would be pretty great to have so we can avoid
the "death by a thousand paper cuts" type of gradual performance
degradation as we add new features...
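As a rough illustration of what one check in such a suite could look
like (the baseline value, the measured value, and the 5% threshold are
all hypothetical placeholders for whatever the real harness would
produce, not an existing tool):

```shell
#!/bin/sh
# Hypothetical regression check: compare a measured packets-per-second
# figure against a stored known-good baseline, and fail if throughput
# drops by more than 5%. In a real harness "measured" would come from
# running one of the paper's forwarding tests.
baseline=10000000          # pps from the last known-good run (example value)
measured=9800000           # pps reported by the current run (example value)

# Allow up to a 5% regression before flagging a failure.
threshold=$((baseline * 95 / 100))

if [ "$measured" -lt "$threshold" ]; then
    echo "REGRESSION: $measured pps is below $threshold pps (95% of baseline)"
    exit 1
fi
echo "OK: $measured pps is within 5% of baseline ($baseline pps)"
```

Even a semi-automatic version of this, run per driver and per test,
would catch a lot of slow drift.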
-Toke
2022-09-22 8:21 XDP and AF_XDP performance comparison Federico Parola
2022-09-22 18:38 ` Toke Høiland-Jørgensen [this message]
2022-09-23 13:11 ` Federico Parola
2022-12-16 15:11 ` Toke Høiland-Jørgensen