From: David Laight <david.laight.linux@gmail.com>
To: Nicolas Pitre <nico@fluxnic.net>
Cc: Andrew Morton <akpm@linux-foundation.org>,
linux-kernel@vger.kernel.org, u.kleine-koenig@baylibre.com,
Oleg Nesterov <oleg@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Biju Das <biju.das.jz@bp.renesas.com>
Subject: Re: [PATCH v3 next 09/10] lib: mul_u64_u64_div_u64() Optimise the divide code
Date: Wed, 18 Jun 2025 23:26:48 +0100
Message-ID: <20250618232648.0f58a27f@pumpkin>
In-Reply-To: <0143891q-ssp6-83on-4o61-nrp2qn3678o1@syhkavp.arg>
On Wed, 18 Jun 2025 16:12:49 -0400 (EDT)
Nicolas Pitre <nico@fluxnic.net> wrote:
> On Wed, 18 Jun 2025, David Laight wrote:
>
> > On Wed, 18 Jun 2025 11:39:20 -0400 (EDT)
> > Nicolas Pitre <nico@fluxnic.net> wrote:
> >
> > > > > + q_digit = n_long / d_msig;
> > > >
> > > > I think you want to do the divide right at the top - maybe even if the
> > > > result isn't used!
> > > > All the shifts then happen while the divide instruction is in progress
> > > > (even without out-of-order execution).
>
> Well.... testing on my old Intel Core i7-4770R doesn't show a gain.
>
> With your proposed patch as is: ~34ns per call
>
> With my proposed changes: ~31ns per call
>
> With my changes but leaving the divide at the top of the loop: ~32ns per call
I wonder what makes the difference...
Is that random 64bit values (where you don't expect zero digits)
or values where there are likely to be small divisors and/or zero digits?
On x86 you can use the PERF_COUNT_HW_CPU_CYCLES counter to get pretty accurate
counts for a single call.
The 'trick' is to use syscall(__NR_perf_event_open,...) and pc = mmap() to get
the register number pc->index - 1.
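A minimal sketch of that setup (the function name is mine, error handling is trimmed, and x86 Linux with user-space rdpmc enabled is assumed):

```c
#include <linux/perf_event.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Open a hardware cycle counter and mmap its control page so that
 * pc->index - 1 gives the register number to pass to rdpmc.
 * Returns NULL if the event cannot be opened (e.g. no perf access). */
static struct perf_event_mmap_page *open_cycle_counter(void)
{
	struct perf_event_attr attr;
	void *pc;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.exclude_kernel = 1;

	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0)
		return NULL;

	pc = mmap(NULL, getpagesize(), PROT_READ, MAP_SHARED, fd, 0);
	return pc == MAP_FAILED ? NULL : pc;
}
```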
Then you want:
static inline unsigned int rdpmc(unsigned int counter)
{
	unsigned int low, high;

	asm volatile("rdpmc" : "=a" (low), "=d" (high) : "c" (counter));
	return low;
}
and do:
unsigned int start = rdpmc(pc->index - 1);
unsigned int zero = 0;
OPTIMISER_HIDE_VAR(zero);
q = mul_u64_add_u64_div_u64(a + (start & zero), b, c, d);
elapsed = rdpmc(pc->index - 1 + (q & zero)) - start;
That carefully forces the rdpmc to include the code being tested without
the massive penalty of lfence/mfence.
Do 10 calls and the last 8 will be pretty similar.
Lets you time cold-cache and branch mis-prediction effects.
> > Can you do accurate timings for arm64 or arm32?
>
> On a Broadcom BCM2712 (ARM Cortex-A76):
>
> With your proposed patch as is: ~20 ns per call
>
> With my proposed changes: ~19 ns per call
>
> With my changes but leaving the divide at the top of the loop: ~19 ns per call
Pretty much no difference.
Is that 64bit or 32bit (or the 16 bits per iteration on 64bit) ?
The shifts get more expensive on 32bit.
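For illustration, a 64-bit right shift on a 32-bit target has to be built from the two 32-bit halves of a register pair, roughly like this sketch (assuming 0 < n < 32; n >= 32 needs another branch):

```c
#include <stdint.h>

/* Sketch of what a 32-bit target must emit for a 64-bit right shift:
 * bits move from the high half into the low half, so one C shift
 * becomes several instructions. Assumes 0 < n < 32 for brevity. */
static uint64_t shr64(uint32_t hi, uint32_t lo, unsigned int n)
{
	uint32_t new_lo = (lo >> n) | (hi << (32 - n));
	uint32_t new_hi = hi >> n;

	return ((uint64_t)new_hi << 32) | new_lo;
}
```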
Have you timed the original code?
>
> Both CPUs have the same max CPU clock rate (2.4 GHz). These are obtained
> with clock_gettime(CLOCK_MONOTONIC) over 56000 calls. There is some
> noise in the results over multiple runs though but still.
That many loops definitely trains the branch predictor and ignores
any effects of loading the I-cache.
As Linus keeps saying, the kernel tends to be 'cold cache', code size
matters.
That also means that branches are 50% likely to be mis-predicted.
(Although working out what CPUs actually do is hard.)
>
> I could get cycle measurements on the RPi5 but that requires a kernel
> recompile.
Or a loadable module - shame there isn't a sysctl.
>
> > I've found a 2004 Arm book that includes several I-cache busting
> > divide algorithms.
> > But I'm sure this pi-5 has hardware divide.
>
> It does.
>
>
> Nicolas