public inbox for linux-kernel@vger.kernel.org
From: Salvatore Dipietro <dipiets@amazon.it>
To: <mark.rutland@arm.com>, <andres@anarazel.de>
Cc: <abuehaze@amazon.com>, <alisaidi@amazon.com>,
	<bigeasy@linutronix.de>, <blakgeof@amazon.com>,
	<dipietro.salvatore@gmail.com>, <dipiets@amazon.it>,
	<linux-kernel@vger.kernel.org>, <peterz@infradead.org>,
	<tglx@kernel.org>, <vschneid@redhat.com>
Subject: Re: [PATCH 0/1] sched: Restore PREEMPT_NONE as default
Date: Wed, 8 Apr 2026 20:08:15 +0000	[thread overview]
Message-ID: <20260408200815.19050-1-dipiets@amazon.it> (raw)
In-Reply-To: <adToQS5dousG6UVZ@J2N7QTR9R3>

On 2026-04-04 17:42 UTC, Andres Freund wrote:
> Salvatore, could you repeat that benchmark in some variations?
> 1) Use huge pages

After enabling Transparent Huge Pages on the system and in the Postgres
configuration, the regression disappears and both systems reach a throughput in
the 185k tps range. Looking at /proc/vmstat, I can see a high minor page fault
rate that indicates memory pressure when THP is not enabled.

| Instance      | Arch | Baseline   | Preempt None | Ratio |
|---------------|------|------------|--------------|-------|
| m8g.24xlarge  | ARM  | 186,664.56 | 189,934.34   | 1.01x |
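As a rough sketch of how the fault rate was observed (assuming the usual sysfs
THP path; note that /proc/vmstat's pgfault counts all faults, so minor faults
are pgfault minus pgmajfault):

```shell
# Show the current THP policy (sysfs path may differ on some distros)
cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null

# Sample the cumulative minor-fault count one second apart
f1=$(awk '/^pgfault /{f=$2} /^pgmajfault /{m=$2} END{print f-m}' /proc/vmstat)
sleep 1
f2=$(awk '/^pgfault /{f=$2} /^pgmajfault /{m=$2} END{print f-m}' /proc/vmstat)
echo "minor faults/sec: $((f2 - f1))"
```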


On 2026-04-05  1:40 UTC, Andres Freund wrote:
> Now, this machine is smaller and a different arch, so who knows.

To compare results, I ran the same reproducer with huge pages off on instances
of different architectures and sizes. In most cases the regression is present.
In particular, on Graviton, increasing the instance size also increases the
regression, since larger instances create more contention on the resources.

| Instance     | Arch | Baseline  | Preempt None | Ratio |
|--------------|------|-----------|--------------|-------|
| m8g.2xlarge  | ARM  | 23,438.98 | 21,378.73    | 0.91x |
| m8g.4xlarge  | ARM  | 40,843.86 | 42,496.78    | 1.04x |
| m8g.8xlarge  | ARM  | 49,096.64 | 85,796.66    | 1.75x |
|              |      |           |              |       |
| m7i.2xlarge  | x86  | 16,615.54 | 23,381.16    | 1.40x |
| m7i.4xlarge  | x86  | 28,759.26 | 32,758.62    | 1.14x |
| m7i.8xlarge  | x86  | 73,456.28 | 83,419.36    | 1.14x |
| m7i.24xlarge | x86  | 63,489.67 | 67,314.40    | 1.06x |



On 2026-04-05  1:40 UTC, Andres Freund wrote:
> Could you run something like the following while the benchmark is running:
>  SELECT backend_type, wait_event_type, wait_event, state, count(*) FROM pg_stat_activity where wait_event_type NOT IN ('Activity') GROUP BY backend_type, wait_event_type, wait_event, state order by count(*) desc \watch 1
> and show what you see at the time your profile shows the bad contention?

On the baseline, SpinDelay is consistently the top record, with a significantly
higher count than the other wait event types, while with the patch WALWrite is
consistently first.
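For readability, Andres' query reformatted (same semantics as the one-liner
quoted above):

```sql
SELECT backend_type, wait_event_type, wait_event, state, count(*)
FROM pg_stat_activity
WHERE wait_event_type NOT IN ('Activity')
GROUP BY backend_type, wait_event_type, wait_event, state
ORDER BY count(*) DESC;
-- in psql, replace the ";" with "\watch 1" to re-run it every second
```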

Baseline:

  backend_type  | wait_event_type |      wait_event      |        state        | count
----------------+-----------------+----------------------+---------------------+-------
 client backend | Timeout         | SpinDelay            | active              |   838
 client backend | LWLock          | WALWrite             | idle in transaction |    10
 client backend | Client          | ClientRead           | idle in transaction |     4
 client backend | LWLock          | WALWrite             | active              |     3
 client backend | Timeout         | SpinDelay            | idle                |     2
 client backend | Client          | ClientRead           | idle                |     1
 client backend | Client          | ClientRead           | active              |     1
 client backend | IO              | WalSync              | idle in transaction |     1
 checkpointer   | Timeout         | CheckpointWriteDelay |                     |     1
(9 rows)


With patch (PREEMPT_NONE):


  backend_type  | wait_event_type |      wait_event      |        state        | count
----------------+-----------------+----------------------+---------------------+-------
 client backend | LWLock          | WALWrite             | active              |   922
 client backend | IPC             | ProcarrayGroupUpdate | active              |    26
 client backend | Client          | ClientRead           | active              |    24
 client backend | IO              | DataFileRead         | active              |    11
 client backend | LWLock          | WALWrite             | idle                |     5
 client backend | Timeout         | SpinDelay            | active              |     4
 client backend | IO              | DataFileWrite        | active              |     3
 client backend | IO              | WalSync              | active              |     2
 client backend | LWLock          | WALWrite             | idle in transaction |     1
 walwriter      | LWLock          | WALWrite             |                     |     1
 checkpointer   | IO              | DataFileSync         |                     |     1
 client backend | IO              | DataFileRead         | idle                |     1
(12 rows)




On 2026-04-05 14:44 UTC, Mitsumasa KONDO wrote:
> That said, this change is likely to cause similar breakage in other
> user-space applications beyond PostgreSQL that rely on lightweight
> spin loops on arm64. So I agree that the patch to retain PREEMPT_NONE
> is the right approach. At the same time, this is also something that
> distributions can resolve by patching their default kernel configuration.

That's correct in my view. PostgreSQL is where we first noticed the regression,
but it is likely not limited to this application.
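For distributions that want to restore the old behavior via configuration, a
sketch of the relevant Kconfig fragment (option names as in recent mainline;
exact set depends on kernel version):

```
# Kernel config fragment: select the old default preemption model
CONFIG_PREEMPT_NONE=y
# CONFIG_PREEMPT_VOLUNTARY is not set
# CONFIG_PREEMPT is not set
# CONFIG_PREEMPT_LAZY is not set
```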


On 2026-04-06  1:46 UTC, Mitsumasa KONDO wrote:
> Also worth noting: Salvatore's environment is an EC2 instance
> (m8g.24xlarge), not bare metal. Hypervisor-level vCPU scheduling
> adds another layer on top of PREEMPT_LAZY -- a lock holder can be
> descheduled not only by the kernel scheduler but also by the
> hypervisor, and the guest kernel has no visibility into this. This
> could amplify the regression in ways that are not reproducible on
> bare-metal systems, regardless of architecture.

I ran it against a bare-metal system of the same instance size
(m8g.metal-24xl) and the results are similar. This suggests the hypervisor
does not add significant overhead to the regression on single-socket
benchmarks.


| Instance        | Arch | Baseline  | Preempt None | Ratio |
|-----------------|------|-----------|--------------|-------|
| m8g.metal-24xl  | ARM  | 61,489.83 | 90,225.66    | 1.47x |



On 2026-04-07 11:19 UTC, Mark Rutland wrote:
> Salvatore, was there a specific reason to test with PG_HUGE_PAGES=off
> rather than PG_HUGE_PAGES=try?

We test with various configurations to ensure customers don't encounter 
regressions regardless of their setup choices, even if some configurations 
aren't optimal for maximum performance.
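For reference, the setting under discussion corresponds to the huge_pages GUC
in postgresql.conf (values per the PostgreSQL documentation; "try" is the
upstream default):

```
# postgresql.conf: huge page policy used in the benchmark runs
huge_pages = off    # tested configuration; 'try' falls back to normal pages
                    # if huge page allocation fails, 'on' fails hard at start
```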




AMAZON DEVELOPMENT CENTER ITALY SRL, viale Monte Grappa 3/5, 20124 Milano, Italia, Registro delle Imprese di Milano Monza Brianza Lodi REA n. 2504859, Capitale Sociale: 10.000 EUR i.v., Cod. Fisc. e P.IVA 10100050961, Societa con Socio Unico





Thread overview: 23+ messages
2026-04-03 19:19 [PATCH 0/1] sched: Restore PREEMPT_NONE as default Salvatore Dipietro
2026-04-03 19:19 ` [PATCH 1/1] " Salvatore Dipietro
2026-04-03 21:32 ` [PATCH 0/1] " Peter Zijlstra
2026-04-04 17:42   ` Andres Freund
2026-04-05  1:40     ` Andres Freund
2026-04-05  4:21       ` Andres Freund
2026-04-05  6:08         ` Ritesh Harjani
2026-04-05 14:09           ` Andres Freund
2026-04-05 14:44             ` Andres Freund
2026-04-07  8:29               ` Peter Zijlstra
2026-04-07  8:27             ` Peter Zijlstra
2026-04-07 10:17             ` David Laight
2026-04-07  8:20           ` Peter Zijlstra
2026-04-07  9:07             ` Peter Zijlstra
2026-04-07 11:19         ` Mark Rutland
2026-04-08 20:08           ` Salvatore Dipietro [this message]
2026-04-08 20:51             ` Andres Freund
2026-04-10 15:38               ` Mitsumasa KONDO
2026-04-07  8:49     ` Peter Zijlstra
2026-04-06  0:43   ` Qais Yousef
2026-04-05 14:44 ` Mitsumasa KONDO
2026-04-05 16:43   ` Andres Freund
2026-04-06  1:46     ` Mitsumasa KONDO
