public inbox for dev@dpdk.org
From: Bruce Richardson <bruce.richardson@intel.com>
To: "Morten Brørup" <mb@smartsharesystems.com>
Cc: <dev@dpdk.org>,
	Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>,
	"Vipin Varghese" <vipin.varghese@amd.com>,
	Stephen Hemminger <stephen@networkplumber.org>,
	Liangxing Wang <wangliangxing@hygon.cn>,
	Thiyagarajan P <Thiyagarajan.P@amd.com>,
	Bala Murali Krishna <Bala.MuraliKrishna@amd.com>
Subject: Re: [PATCH v7] eal/x86: optimize memcpy of small sizes
Date: Wed, 11 Mar 2026 19:09:41 +0000	[thread overview]
Message-ID: <abG99TTs1W3-iX1e@bricha3-mobl1.ger.corp.intel.com> (raw)
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35F6577E@smartserver.smartshare.dk>

On Wed, Mar 11, 2026 at 07:29:38PM +0100, Morten Brørup wrote:
> > From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> > Sent: Wednesday, 11 March 2026 17.59
> > 
> > On Fri, Feb 20, 2026 at 11:08:24AM +0000, Morten Brørup wrote:
> > > The implementation for copying up to 64 bytes does not depend on
> > > address alignment with the size of the CPU's vector registers.
> > > Nonetheless, the exact same code for copying up to 64 bytes was
> > > present in both the aligned copy function and all the CPU vector
> > > register size specific variants of the unaligned copy functions.
> > > With this patch, the implementation for copying up to 64 bytes was
> > > consolidated into one instance, located in the common copy function,
> > > before checking alignment requirements.
> > > This provides three benefits:
> > > 1. No copy-paste in the source code.
> > > 2. A performance gain for copying up to 64 bytes, because the
> > > address alignment check is avoided in this case.
> > > 3. Reduced instruction memory footprint, because the compiler only
> > > generates one instance of the function for copying up to 64 bytes,
> > > instead of two instances (one in the unaligned copy function, and
> > > one in the aligned copy function).
> > >
> > > Furthermore, the function for copying less than 16 bytes was
> > > replaced with a smarter implementation using fewer branches and
> > > potentially fewer load/store operations.
> > > This function was also extended to handle copying of up to 16 bytes,
> > > instead of up to 15 bytes.
> > > This small extension reduces the code path, and thus improves the
> > > performance, for copying two pointers on 64-bit architectures and
> > > four pointers on 32-bit architectures.
> > >
> > > Also, __rte_restrict was added to source and destination addresses.
> > >
> > > And finally, the missing implementation of rte_mov48() was added.
> > >
> > > Regarding performance, the memcpy performance test showed
> > > cache-to-cache copying of up to 32 bytes now takes 2 cycles, versus
> > > ca. 6.5 cycles before this patch.
> > > Copying 64 bytes now takes 4 cycles, versus 7 cycles before.
> > >
> > > Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> > > ---
> > > v7:
> > > * Updated patch description. Mainly to clarify that the changes
> > >   related to copying up to 64 bytes simply replace multiple
> > >   instances of copy-pasted code with one common instance.
> > > * Fixed copy of build time known 16 bytes in rte_mov17_to_32().
> > >   (Vipin)
> > > * Rebased.
> > > v6:
> > > * Went back to using rte_uintN_alias structures for copying instead
> > >   of using memcpy(). They were there for a reason.
> > >   (Inspired by the discussion about optimizing the checksum
> > >   function.)
> > > * Removed note about copying uninitialized data.
> > > * Added __rte_restrict to source and destination addresses.
> > >   Updated function descriptions from "should" to "must" not overlap.
> > > * Changed rte_mov48() AVX implementation to copy 32+16 bytes instead
> > >   of copying 32 + 32 overlapping bytes. (Konstantin)
> > > * Ignoring "-Wstringop-overflow" is not needed, so it was removed.
> > > v5:
> > > * Reverted v4: Replace SSE2 _mm_loadu_si128() with SSE3
> > >   _mm_lddqu_si128(). It was slower.
> > > * Improved some comments. (Konstantin Ananyev)
> > > * Moved the size range 17..32 inside the size <= 64 branch, so when
> > >   building for SSE, the generated code can start copying the first
> > >   16 bytes before comparing if the size is greater than 32 or not.
> > > * Just require RTE_MEMCPY_AVX for using rte_mov32() in
> > >   rte_mov33_to_64().
> > > v4:
> > > * Replace SSE2 _mm_loadu_si128() with SSE3 _mm_lddqu_si128().
> > > v3:
> > > * Fixed typo in comment.
> > > v2:
> > > * Updated patch title to reflect that the performance is improved.
> > > * Use the design pattern of two overlapping stores for small copies
> > >   too.
> > > * Expanded first branch from size < 16 to size <= 16.
> > > * Handle more build time constant copy sizes.
> > > ---
> > >  lib/eal/x86/include/rte_memcpy.h | 526 ++++++++++++++++++++-----------
> > >  1 file changed, 348 insertions(+), 178 deletions(-)
> > >
> > 
> > I'm a little unhappy to see the amount of memcpy code growing rather
> > than shrinking, but since it improves performance I'm ok with it. We
> > should keep it under constant review though.
> 
> Agree!
> 
> I just counted; 149 of the added lines are for handling __rte_constant(n). So it's not as bad as it looks.
> But still growing, which was not the intention.
> When I started working on this patch, the intention was to consolidate the copy-pasted instances for handling up to 64 bytes into one instance. This should have reduced the amount of code.
> But then it somehow grew anyway.
> 
> > 
> > > diff --git a/lib/eal/x86/include/rte_memcpy.h b/lib/eal/x86/include/rte_memcpy.h
> > > index 46d34b8081..ed8e5f8dc4 100644
> > > --- a/lib/eal/x86/include/rte_memcpy.h
> > > +++ b/lib/eal/x86/include/rte_memcpy.h
> > > @@ -22,11 +22,6 @@
> > >  extern "C" {
> > >  #endif
> > >
> > > -#if defined(RTE_TOOLCHAIN_GCC) && (GCC_VERSION >= 100000)
> > > -#pragma GCC diagnostic push
> > > -#pragma GCC diagnostic ignored "-Wstringop-overflow"
> > > -#endif
> > > -
> > >  /*
> > >   * GCC older than version 11 doesn't compile AVX properly, so use SSE instead.
> > >   * There are no problems with AVX2.
> > > @@ -40,9 +35,6 @@ extern "C" {
> > >  /**
> > >   * Copy bytes from one location to another. The locations must not overlap.
> > >   *
> > > - * @note This is implemented as a macro, so it's address should not be taken
> > > - * and care is needed as parameter expressions may be evaluated multiple times.
> > > - *
> > 
> > I'd be wary about completely removing this comment, as we may well
> > want to go back to a macro in the future, e.g. if we decide to remove
> > the custom rte_memcpy altogether. Therefore, rather than removing the
> > comment, can we tweak it to say "This may be implemented as a
> > macro..."
> 
> The comment is still present in the "generic" header file used for the Doxygen documentation:
> https://elixir.bootlin.com/dpdk/v26.03-rc1/source/lib/eal/include/generic/rte_memcpy.h#L99
> 
> All other architectures rely on the "generic" header file, and have no Doxygen comments at all.
> We could also remove them from the x86 implementation.
> That would shrink the file even more. ;-)
> But I'd rather keep the comments - at least for now.
> 
> > 
> > 
> > Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> 
> Thank you for quick response, Bruce.
> 
> > 
> > PS: If we want a little further cleanup, I'd consider removing the
> > RTE_MEMCPY_AVX macro and replacing it with a straight check for
> > __AVX2__. CPUs with AVX2 were introduced in 2013, and checking Claude
> > and Wikipedia suggests that AMD parts started having it in 2015,
> > meaning that only a few CPU generations, more than 10 years old now,
> > had AVX but not AVX2. [There were later CPUs, e.g. lower-end parts,
> > which didn't have AVX2, but they didn't have AVX1 either, so SSE is
> > the only choice there.]
> > Not a big cleanup if we did remove it, but sometimes every little
> > helps!
> 
> Good idea. But let's not do it now.
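[Editorial note: the suggested cleanup would replace the project-level RTE_MEMCPY_AVX knob with the compiler's predefined target-feature macro. A rough sketch of the shape, not the actual header:]

```c
/* Sketch only: __AVX2__ is predefined by GCC/Clang when building with
 * -mavx2 or a -march that implies it, so the project-defined
 * RTE_MEMCPY_AVX macro could be dropped in favor of checking it
 * directly. */
#ifdef __AVX2__
/* use 32-byte (AVX2) copy routines */
#else
/* fall back to 16-byte (SSE) copy routines */
#endif
```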
>
Agree on all counts. 


Thread overview: 43+ messages
2025-11-20 11:45 [PATCH] eal/x86: reduce memcpy code duplication Morten Brørup
2025-11-21 10:35 ` [PATCH v2] eal/x86: optimize memcpy of small sizes Morten Brørup
2025-11-21 16:57   ` Stephen Hemminger
2025-11-21 17:02     ` Bruce Richardson
2025-11-21 17:11       ` Stephen Hemminger
2025-11-21 21:36         ` Morten Brørup
2025-11-21 10:40 ` Morten Brørup
2025-11-21 10:40 ` [PATCH v3] " Morten Brørup
2025-11-24 13:36   ` Morten Brørup
2025-11-24 15:46     ` Patrick Robb
2025-11-28 14:02   ` Konstantin Ananyev
2025-11-28 15:55     ` Morten Brørup
2025-11-28 18:10       ` Konstantin Ananyev
2025-11-29  2:17         ` Morten Brørup
2025-12-01  9:35           ` Konstantin Ananyev
2025-12-01 10:41             ` Morten Brørup
2025-11-24 20:31 ` [PATCH v4] " Morten Brørup
2025-11-25  8:19   ` Morten Brørup
2025-12-01 15:55 ` [PATCH v5] " Morten Brørup
2025-12-03 13:29   ` Morten Brørup
2026-01-03 17:53   ` Morten Brørup
2026-01-09 15:05     ` Varghese, Vipin
2026-01-11 15:52     ` Konstantin Ananyev
2026-01-11 16:01       ` Stephen Hemminger
2026-01-12  8:02       ` Morten Brørup
2026-01-12 16:00         ` Scott Mitchell
2026-01-13  0:39           ` Stephen Hemminger
2026-01-12 12:03 ` [PATCH v6] " Morten Brørup
2026-01-13 23:19   ` Stephen Hemminger
2026-01-20 11:00     ` Varghese, Vipin
2026-01-20 11:19       ` Varghese, Vipin
2026-01-20 11:22         ` Morten Brørup
2026-01-21 11:48           ` Varghese, Vipin
2026-01-22  6:59             ` Varghese, Vipin
2026-01-22  7:28               ` Liangxing Wang
2026-01-23  6:58               ` Varghese, Vipin
2026-02-20 11:08 ` [PATCH v7] " Morten Brørup
2026-03-11  7:28   ` Morten Brørup
2026-03-11 16:58   ` Bruce Richardson
2026-03-11 18:29     ` Morten Brørup
2026-03-11 19:09       ` Bruce Richardson [this message]
2026-03-12  8:33   ` Konstantin Ananyev
2026-03-19 15:55   ` Morten Brørup
