public inbox for linux-kernel@vger.kernel.org
From: David Laight <David.Laight@ACULAB.COM>
To: 'David Howells' <dhowells@redhat.com>,
	Linus Torvalds <torvalds@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>,
	kernel test robot <oliver.sang@intel.com>,
	"oe-lkp@lists.linux.dev" <oe-lkp@lists.linux.dev>,
	"lkp@intel.com" <lkp@intel.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Christian Brauner <brauner@kernel.org>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Jens Axboe <axboe@kernel.dk>, Christoph Hellwig <hch@lst.de>,
	Christian Brauner <christian@brauner.io>,
	Matthew Wilcox <willy@infradead.org>,
	"ying.huang@intel.com" <ying.huang@intel.com>,
	"feng.tang@intel.com" <feng.tang@intel.com>,
	"fengwei.yin@intel.com" <fengwei.yin@intel.com>
Subject: RE: [linus:master] [iov_iter] c9eec08bac: vm-scalability.throughput -16.9% regression
Date: Mon, 20 Nov 2023 16:09:02 +0000	[thread overview]
Message-ID: <ade6cd8de43b492589125295c3bc88d5@AcuMS.aculab.com> (raw)
In-Reply-To: <2284219.1700487177@warthog.procyon.org.uk>

From: David Howells 
> Sent: 20 November 2023 13:33
> 
> Linus Torvalds <torvalds@linux-foundation.org> wrote:
> 
> > So I don't think we should use either of these benchmarks as a "we
> > need to optimize for *this*", but it is another example of how much
> > memcpy() does matter. Even if the end result is then "but different
> > microarchitectures react so differently that we can't please
> > everybody".
> 
> So what, if anything, should I change?  Should I make it directly call
> __memcpy?  Or should we just leave it to the compiler?  I would prefer to
> leave memcpy_from_iter() and memcpy_to_iter() as __always_inline to eliminate
> the function pointer call we otherwise end up with and to eliminate the return
> value (which is always 0 in this case).

I'd have thought you'd just want to call memcpy() (or xxxx_memcpy()).
Anything that matters here is likely to make more difference elsewhere.

I wonder if the kernel ever uses the return value from memcpy().
I suspect it only exists for very historic reasons.

The wrapper:
#define memcpy(d, s, l) ({ \
	void *dd = (d); \
	memcpy_void(dd, (s), (l)); \
	dd; \
})
would save all the asm implementations from saving the result.

I did some more measurements over the weekend.
A quick summary - I've not quite finished (and need to find some
more test systems - newer ones and AMD).
I'm now thinking that the 5k clocks is a TLB miss; in any case it
is a feature of my test, not of the instruction.
I'm also subtracting off a baseline that has 'nop; nop' instead of
'rep movsb'.

I'm not entirely certain about the fractional clocks!
I'm counting 10 operations and getting pretty consistent counts;
I suspect they are end effects.

These measurements are also for 4k aligned src and dest.

An ivy bridge i7-3xxx seems to do:
       0   41.4 clocks
    1-64   31.5
  65-128   44.3
 129-191   55.1
     192   47.4
 193-255   58.8
then an extra 3 clocks every 64 bytes.

Whereas kaby lake i7-7xxx does:
       0   51.5 clocks
    1-64   22.9
   65-95   25.3
      96   30.5
  97-127   34.1
     128   31.5
then an extra clock every 32 bytes (if dest aligned).

Note that this system is somewhat slower if the destination
is less than (iirc) 48 bytes before the source (mod 4k).
There are several different slow speeds; the worst is about
half the full speed.

I might be able to find a newer system with fsrm.

I was going to measure orig_memcpy() and also see what I can write.
Both those cpus can do a read and a write every clock,
so a 64-bit copy loop can execute at 8 bytes/clock.
It should be possible to get a 2-clock loop copying 16 bytes,
but that will need a few instructions of setup.
You need to use negative offsets from the end of the buffers so
that only one register is changed and the 'add' sets Z for the jump.
It can be written in C - but gcc will pessimise it for you.
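A C sketch of such a loop (hypothetical helper name; as noted,
gcc may well rewrite it into something else):

```c
#include <stdint.h>
#include <string.h>

/* Index negatively from the end of both buffers so the loop body
 * updates a single register, and the 'add' that advances it also
 * sets Z for the branch (on x86).
 * Assumes len is a non-zero multiple of 16. */
static void copy16(void *dst, const void *src, size_t len)
{
	unsigned char *d = (unsigned char *)dst + len;
	const unsigned char *s = (const unsigned char *)src + len;
	intptr_t off = -(intptr_t)len;

	do {
		uint64_t a, b;

		memcpy(&a, s + off, 8);		/* two 8-byte reads */
		memcpy(&b, s + off + 8, 8);
		memcpy(d + off, &a, 8);		/* two 8-byte writes */
		memcpy(d + off + 8, &b, 8);
		off += 16;	/* the add sets Z when off reaches 0 */
	} while (off != 0);
}
```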

You also need a conditional branch for short copies (< 16 bytes)
that could easily be mispredicted pretty much 50% of the time.
(IIRC there is no static prediction on recent x86 cpus.)
And probably a separate test for zero length.
It is hard generating a sane clock count for short copies because
the mispredicted branches kill you.
The trouble is that any benchmark measurement is likely to train
the branch predictor.
It might actually be hard to reliably beat the ~20 clocks
for 'rep movsb' on kaby lake.

This graph is from the fsrm patch:

Time (cycles) for memmove() sizes 1..31 with neither source nor
destination in cache.

  1800 +-+-------+--------+---------+---------+---------+--------+-------+-+
       +         +        +         +         +         +        +         +
  1600 +-+                                          'memmove-fsrm' *******-+
       |   ######                                   'memmove-orig' ####### |
  1400 +-+ #     #####################                                   +-+
       |   #                          ############                         |
  1200 +-+#                                       ##################     +-+
       |  #                                                                |
  1000 +-+#                                                              +-+
       |  #                                                                |
       | #                                                                 |
   800 +-#                                                               +-+
       | #                                                                 |
   600 +-***********************                                         +-+
       |                        *****************************              |
   400 +-+                                                   *******     +-+
       |                                                                   |
   200 +-+                                                               +-+
       +         +        +         +         +         +        +         +
     0 +-+-------+--------+---------+---------+---------+--------+-------+-+
       0         5        10        15        20        25       30        35

I don't know what that was measured on.
600 clocks seems a lot - could be dominated by loading the cache.
I'd have thought short buffers are actually likely to be in the cache
and/or wanted in it.

There is also the lack of erms (fast 'rep movsb') on various cpus.

	David

	

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)



Thread overview: 49+ messages
2023-11-07  1:40 [linus:master] [iov_iter] c9eec08bac: vm-scalability.throughput -16.9% regression kernel test robot
2023-11-15 12:48 ` David Howells
2023-11-15 13:18 ` David Howells
2023-11-15 15:20 ` David Howells
2023-11-15 16:53   ` Linus Torvalds
2023-11-15 17:38     ` Linus Torvalds
2023-11-15 18:35       ` David Howells
2023-11-15 18:45         ` Linus Torvalds
2023-11-15 19:09           ` Linus Torvalds
2023-11-15 20:54           ` David Howells
2023-11-15 18:38       ` Linus Torvalds
2023-11-15 19:09         ` Borislav Petkov
2023-11-15 19:15           ` Linus Torvalds
2023-11-15 20:07             ` Linus Torvalds
2023-11-16 10:07               ` David Laight
2023-11-16 10:14                 ` David Howells
2023-11-16 11:38                   ` David Laight
2023-11-15 19:26           ` Linus Torvalds
2023-11-16 15:44             ` Borislav Petkov
2023-11-16 16:44               ` David Howells
2023-11-17 11:35                 ` Borislav Petkov
2023-11-17 14:12                   ` David Howells
2023-11-17 16:09                     ` Borislav Petkov
2023-11-17 16:32                       ` Linus Torvalds
2023-11-17 16:44                         ` Linus Torvalds
2023-11-17 19:12                           ` Borislav Petkov
2023-11-17 21:57                             ` Linus Torvalds
2023-11-20 13:32                               ` David Howells
2023-11-20 16:06                                 ` Linus Torvalds
2023-11-20 16:09                                 ` David Laight [this message]
2023-11-16 16:48               ` Linus Torvalds
2023-11-16 16:58                 ` David Laight
2023-11-17 11:44                 ` Borislav Petkov
2023-11-17 12:09                   ` Jakub Jelinek
2023-11-17 12:18                     ` Borislav Petkov
2023-11-17 13:09                   ` David Laight
2023-11-17 13:36                     ` Linus Torvalds
2023-11-17 15:20                       ` David Laight
2023-11-15 21:43         ` David Howells
2023-11-15 21:50           ` Linus Torvalds
2023-11-15 21:59             ` Borislav Petkov
2023-11-15 22:59             ` David Howells
2023-11-16  3:26               ` Linus Torvalds
2023-11-16 16:55                 ` David Laight
2023-11-16 17:24                   ` Linus Torvalds
2023-11-16 22:53                     ` David Laight
2023-11-16 21:09                 ` David Howells
2023-11-16 22:36                   ` Linus Torvalds
2023-11-20 11:52             ` Borislav Petkov
