public inbox for linux-kernel@vger.kernel.org
From: K Prateek Nayak <kprateek.nayak@amd.com>
To: John Stultz <jstultz@google.com>, LKML <linux-kernel@vger.kernel.org>
Cc: Joel Fernandes <joelaf@google.com>,
	Qais Yousef <qyousef@google.com>, Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Valentin Schneider <vschneid@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>,
	Zimuzo Ezeozue <zezeozue@google.com>,
	Youssef Esmat <youssefesmat@google.com>,
	Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Will Deacon <will@kernel.org>, Waiman Long <longman@redhat.com>,
	Boqun Feng <boqun.feng@gmail.com>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	Metin Kaya <Metin.Kaya@arm.com>,
	Xuewen Yan <xuewen.yan94@gmail.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	kernel-team@android.com
Subject: Re: [PATCH v9 0/7] Preparatory changes for Proxy Execution v9
Date: Mon, 25 Mar 2024 12:13:54 +0530
Message-ID: <218a5f34-dcca-ab0e-e098-807993ca3898@amd.com>
In-Reply-To: <20240315044007.2778856-1-jstultz@google.com>

Hello John,

On 3/15/2024 10:09 AM, John Stultz wrote:
> As mentioned last time[1], after previous submissions of the
> Proxy Execution series, I got feedback that the patch series was
> getting a bit unwieldy to review, and Qais suggested I break out
> just the cleanups/preparatory components of the patch series and
> submit them on their own in the hope we can start to merge the
> less complex bits and discussion can focus on the more
> complicated portions afterwards. This so far has not been very
> successful, with the submission & RESEND of the v8 preparatory
> changes not getting much in the way of review.
> 
> Nonetheless, for v9 of this series, I’m again only submitting
> those early cleanup/preparatory changes here (which have not
> changed since the v8 submissions, but to avoid confusion with the
> git branch names, I’m labeling it as v9). In the meantime, I’ve
> continued to put a lot of effort into the full series, mostly
> focused on polishing the series for correctness, and fixing some
> hard to trip races.
> 
> If you are interested, the full v9 series can be found here:
>   https://github.com/johnstultz-work/linux-dev/commits/proxy-exec-v9-6.8
>   https://github.com/johnstultz-work/linux-dev.git proxy-exec-v9-6.8

I have tested v9 of the series.

tl;dr

o I still see a small regression for hackbench. I'll grab perf
  profiles for it and post them in this thread soon (unfortunately
  I do not have them at the moment).

o There is a regression for some combinations in schbench. I'll
  have to recheck if I can consistently reproduce this or not and
  look at the perf profile to see if something is sticking out.

The rest of the benchmark results look good. I'll leave them below and
go digging into the regressions.

o System Details

- 3rd Generation EPYC System
- 2 x 64C/128T
- NPS1 mode

o Kernels

tip:			tip:sched/core at commit 8cec3dd9e593
			("sched/core: Simplify code by removing
			 duplicate #ifdefs")

proxy-exec-full:	tip + proxy execution commits from
			"proxy-exec-v9-6.8"

o Results

==================================================================
Test          : hackbench
Units         : Normalized time in seconds
Interpretation: Lower is better
Statistic     : AMean
==================================================================
Case:           tip[pct imp](CV)    proxy_exec_v9[pct imp](CV)
 1-groups     1.00 [ -0.00]( 1.80)     1.03 [ -2.88]( 2.71)
 2-groups     1.00 [ -0.00]( 1.76)     1.02 [ -2.32]( 1.71)
 4-groups     1.00 [ -0.00]( 1.82)     1.03 [ -2.79]( 0.84)
 8-groups     1.00 [ -0.00]( 1.40)     1.02 [ -1.89]( 0.89)
16-groups     1.00 [ -0.00]( 3.38)     1.01 [ -0.53]( 1.61)
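
For clarity, the "[pct imp](CV)" columns in these tables can be derived
from raw per-run samples along the lines of the following sketch (my own
illustration of the reporting convention; the actual test harness is not
shown here):

```python
import statistics

def summarize(samples, baseline_mean, lower_is_better=True):
    """Reduce raw per-run samples to the (normalized, pct imp, CV)
    triple used in the tables above.

    - normalized: sample mean divided by the baseline (tip) mean
    - pct imp:    percentage improvement over the baseline, signed
                  so that positive is always better
    - CV:         coefficient of variation (stddev / mean), in percent
    """
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples) if len(samples) > 1 else 0.0
    cv = stdev / mean * 100.0
    if lower_is_better:   # e.g. hackbench time, schbench latency
        pct_imp = (baseline_mean - mean) / baseline_mean * 100.0
    else:                 # e.g. tbench / netperf / stream throughput
        pct_imp = (mean - baseline_mean) / baseline_mean * 100.0
    return mean / baseline_mean, pct_imp, cv
```

So for a lower-is-better metric, a mean 3% above the tip baseline shows
up as roughly "1.03 [ -3.00]".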


==================================================================
Test          : tbench
Units         : Normalized throughput
Interpretation: Higher is better
Statistic     : AMean
==================================================================
Clients:           tip[pct imp](CV)    proxy_exec_v9[pct imp](CV)
    1     1.00 [  0.00]( 0.44)     0.99 [ -1.30]( 0.66)
    2     1.00 [  0.00]( 0.39)     0.98 [ -1.76]( 0.64)
    4     1.00 [  0.00]( 0.40)     0.99 [ -1.12]( 0.63)
    8     1.00 [  0.00]( 0.16)     0.97 [ -2.94]( 1.49)
   16     1.00 [  0.00]( 3.00)     1.01 [  0.92]( 2.18)
   32     1.00 [  0.00]( 0.84)     1.01 [  0.66]( 1.22)
   64     1.00 [  0.00]( 1.66)     1.00 [ -0.39]( 0.24)
  128     1.00 [  0.00]( 1.04)     0.99 [ -1.23]( 2.26)
  256     1.00 [  0.00]( 0.26)     1.02 [  1.92]( 1.09)
  512     1.00 [  0.00]( 0.15)     1.02 [  1.84]( 0.17)
 1024     1.00 [  0.00]( 0.20)     1.03 [  2.71]( 0.33)


==================================================================
Test          : stream-10
Units         : Normalized Bandwidth, MB/s
Interpretation: Higher is better
Statistic     : HMean
==================================================================
Test:           tip[pct imp](CV)    proxy_exec_v9[pct imp](CV)
 Copy     1.00 [  0.00]( 6.19)     1.11 [ 11.16]( 2.57)
Scale     1.00 [  0.00]( 6.47)     0.98 [ -2.43]( 7.68)
  Add     1.00 [  0.00]( 6.50)     0.99 [ -0.74]( 7.25)
Triad     1.00 [  0.00]( 5.70)     1.03 [  2.95]( 4.41)


==================================================================
Test          : stream-100
Units         : Normalized Bandwidth, MB/s
Interpretation: Higher is better
Statistic     : HMean
==================================================================
Test:           tip[pct imp](CV)    proxy_exec_v9[pct imp](CV)
 Copy     1.00 [  0.00]( 3.22)     1.04 [  4.29]( 3.02)
Scale     1.00 [  0.00]( 6.17)     1.02 [  1.97]( 1.55)
  Add     1.00 [  0.00]( 5.12)     1.02 [  2.48]( 1.55)
Triad     1.00 [  0.00]( 2.29)     1.01 [  1.06]( 1.49)


==================================================================
Test          : netperf
Units         : Normalized Throughput
Interpretation: Higher is better
Statistic     : AMean
==================================================================
Clients:           tip[pct imp](CV)    proxy_exec_v9[pct imp](CV)
 1-clients     1.00 [  0.00]( 0.17)     0.98 [ -1.99]( 0.24)
 2-clients     1.00 [  0.00]( 0.49)     0.98 [ -1.86]( 0.45)
 4-clients     1.00 [  0.00]( 0.65)     0.98 [ -1.65]( 0.30)
 8-clients     1.00 [  0.00]( 0.56)     0.98 [ -1.73]( 0.41)
16-clients     1.00 [  0.00]( 0.78)     0.98 [ -1.52]( 0.34)
32-clients     1.00 [  0.00]( 0.62)     0.98 [ -1.90]( 0.73)
64-clients     1.00 [  0.00]( 1.41)     0.99 [ -1.46]( 1.39)
128-clients    1.00 [  0.00]( 0.83)     0.98 [ -1.63]( 0.89)
256-clients    1.00 [  0.00]( 4.60)     1.01 [  1.47]( 2.12)
512-clients    1.00 [  0.00](54.18)     1.02 [  2.25](56.18)


==================================================================
Test          : schbench
Units         : Normalized 99th percentile latency in us
Interpretation: Lower is better
Statistic     : Median
==================================================================
#workers:           tip[pct imp](CV)    proxy_exec_v9[pct imp](CV)
  1     1.00 [ -0.00](34.63)     1.43 [-43.33]( 2.73)
  2     1.00 [ -0.00]( 2.70)     0.89 [ 10.81](23.82)
  4     1.00 [ -0.00]( 4.70)     1.04 [ -4.44](12.54)
  8     1.00 [ -0.00]( 5.09)     0.87 [ 13.21](14.08)
 16     1.00 [ -0.00]( 5.08)     1.03 [ -3.39]( 4.10)
 32     1.00 [ -0.00]( 2.91)     1.14 [-14.44]( 0.56)
 64     1.00 [ -0.00]( 2.73)     1.04 [ -4.17]( 2.77)
128     1.00 [ -0.00]( 7.89)     1.07 [ -7.14]( 2.83)
256     1.00 [ -0.00](28.55)     0.69 [ 31.37](19.96)
512     1.00 [ -0.00]( 2.11)     1.01 [ -1.20]( 1.07)
--

I'll post more test results on the thread as I get to them. It has been
a slightly busy season, so sorry about the delays.

> 
> 
> New in v9:
> (In the git tree. Again, none of the preparatory patches
> submitted here have changed since v8)

Since the changes in this preparatory series have remained the same,
please feel free to add:

Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>

> ---------
> * Change to force mutex lock handoff when we have a blocked donor
>   (preserves optimistic spinning elsewhere, but still prioritizes
>   donor when present on unlock)
> 
> * Do return migration whenever we’re not on the wake_cpu (should
>   address placement concerns brought up earlier by Xuewen Yan)
> 
> * Closed hole where we might mark a task as BO_RUNNABLE without
>   doing return migration
> 
> * Much improved handling of balance callbacks when we need to
>   pick_again
> 
> * Fixes for cases where we put_prev_task() but left a dangling
>   pointer to rq_selected() when deactivating a task (as it could
>   then be migrated away while we still have a reference to it),
>   by selecting idle before deactivating next.
> 
> * Fixes for dangling references to rq->curr (which had been
>   put_prev_task’ed) when we drop rq lock for proxy_migration
> 
> * Fixes for ttwu / find_proxy_task() races if the lock owner was
>   being return migrated, and ttwu hadn’t yet set_task_cpu() and
>   activated it, which allowed that task to be scheduled on two
>   cpus at the same time.
> 
> * Fix for live-lock between activate_blocked_tasks() and
>   proxy_enqueue_on_owner() if activated owner went right back to
>   sleep (which also simplifies the locking in
>   activate_blocked_tasks())
> 
> * Cleanups to avoid locked BO_WAKING->BO_RUNNABLE transition in
>   try_to_wake_up() if proxy execution isn't enabled
> 
> * Fix for psi_dequeue, as proxy changes assumptions around
>   voluntary sleeps.
> 
> * Numerous typos, comment improvements, and other fixups
>   suggested by Metin
> 
> * And more!
> 
> 
> Performance:
> ---------
> K Prateek Nayak provided some feedback on the v8 series here[2].
> Given the potential extra overhead of doing rq migrations/return
> migrations/etc for the proxy case, it’s not completely surprising
> a few of K Prateek’s test cases saw ~3-5% regressions, but I’m
> hoping to look into this soon to see if we can reduce those
> further. The donor mutex handoff in this revision may help some.
> 
> 
> Issues still to address:
> ---------
> * The chain migration functionality needs further iterations and
>   better validation to ensure it truly maintains the RT/DL load
>   balancing invariants.
> 
> * CFS load balancing. There was concern that blocked tasks may
>   carry forward load (PELT) to the lock owner's CPU, so the CPU
>   may look like it is overloaded. Needs investigation.
> 
> * The sleeping owner handling (where we deactivate waiting tasks
>   and enqueue them onto a list, then reactivate them when the
>   owner wakes up) doesn’t feel great. This is in part because
>   when we want to activate tasks, we’re already holding a
>   task.pi_lock and a rq_lock, just not the locks for the task
>   we’re activating, nor the rq we’re enqueuing it onto. So there
>   has to be a bit of lock juggling to drop and acquire the right
>   locks (in the right order). It feels like there’s got to be a
>   better way. Also needs some rework to get rid of the recursion.
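
The drop-and-reacquire juggling described above can be sketched, purely
illustratively and in user-space terms (the kernel of course uses raw
spinlocks and its own documented pi_lock/rq-lock nesting rules, not the
object-identity ordering used here):

```python
import threading

def relock_in_order(held, needed):
    """Given one lock already held and another still needed, drop the
    held lock and take both in a stable global order (here: by id())
    to avoid ABBA deadlocks. Because everything is briefly unlocked,
    the caller must revalidate its state afterwards -- the world may
    have changed in the window where no lock was held."""
    held.release()
    first, second = sorted((held, needed), key=id)
    first.acquire()
    second.acquire()
```

The revalidation step in the docstring is the painful part the cover
letter alludes to: after reacquiring, the task may have migrated or
changed state, so the whole decision may need to be redone.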
> 
> 
> Credit/Disclaimer:
> ---------
> As mentioned previously, this Proxy Execution series has a long
> history: First described in a paper[3] by Watkins, Straub,
> Niehaus, then from patches from Peter Zijlstra, extended with
> lots of work by Juri Lelli, Valentin Schneider, and Connor
> O'Brien. (and thank you to Steven Rostedt for providing
> additional details here!)
> 
> So again, many thanks to those above, as all the credit for this
> series really is due to them - while the mistakes are likely
> mine.
> 
> Thanks so much!
> -john
> 
> [1] https://lore.kernel.org/lkml/20240224001153.2584030-1-jstultz@google.com/
> [2] https://lore.kernel.org/lkml/c26251d2-e1bf-e5c7-0636-12ad886e1ea8@amd.com/
> [3] https://static.lwn.net/images/conf/rtlws11/papers/proc/p38.pdf
> 
>[..snip..]
> 

--
Thanks and Regards,
Prateek

Thread overview: 14+ messages
2024-03-15  4:39 [PATCH v9 0/7] Preparatory changes for Proxy Execution v9 John Stultz
2024-03-15  4:39 ` [PATCH v9 1/7] locking/mutex: Remove wakeups from under mutex::wait_lock John Stultz
2024-03-25 18:56   ` Davidlohr Bueso
2024-03-15  4:39 ` [PATCH v9 2/7] locking/mutex: Make mutex::wait_lock irq safe John Stultz
2024-03-15  4:39 ` [PATCH v9 3/7] locking/mutex: Expose __mutex_owner() John Stultz
2024-03-15  4:39 ` [PATCH v9 4/7] sched: Add do_push_task helper John Stultz
2024-03-15  4:39 ` [PATCH v9 5/7] sched: Consolidate pick_*_task to task_is_pushable helper John Stultz
2024-03-15  4:39 ` [PATCH v9 6/7] sched: Split out __schedule() deactivate task logic into a helper John Stultz
2024-03-15  4:39 ` [PATCH v9 7/7] sched: Split scheduler and execution contexts John Stultz
2024-03-18 15:06   ` Metin Kaya
2024-04-01 23:34     ` John Stultz
2024-03-18 15:05 ` [PATCH v9 0/7] Preparatory changes for Proxy Execution v9 Metin Kaya
2024-03-25  6:43 ` K Prateek Nayak [this message]
2024-04-01 21:28   ` John Stultz
