From: Salvatore Dipietro
Subject: Re: [PATCH 0/1] sched: Restore PREEMPT_NONE as default
Date: Wed, 8 Apr 2026 20:08:15 +0000
Message-ID: <20260408200815.19050-1-dipiets@amazon.it>
X-Mailer: git-send-email 2.47.3
X-Mailing-List: linux-kernel@vger.kernel.org

On 2026-04-04 17:42 UTC, Andres Freund wrote:
> Salvatore, could you repeat that benchmark in some variations?
> 1) Use huge pages

After enabling Transparent Huge Pages both on the system and in the Postgres configuration, the regression disappears and both systems reach a throughput in the 185k tps range. Looking at /proc/vmstat, I can see a high minor page fault rate, which indicates memory pressure when huge pages are not enabled.

| Instance     | Arch | Baseline   | Preempt None | Ratio |
| ------------ | ---- | ---------- | ------------ | ----- |
| m8g.24xlarge | ARM  | 186,664.56 | 189,934.34   | 1.01x |

On 2026-04-05 1:40 UTC, Andres Freund wrote:
> Now, this machine is smaller and a different arch, so who knows.

To compare the results, I ran the same reproducer with huge pages off on instances of different architectures and sizes. In most cases the regression is present. For Graviton in particular, increasing the instance size increases the regression as well, since larger instances create more contention on the shared resources.
| Instance     | Arch | Baseline  | Preempt None | Ratio |
| ------------ | ---- | --------- | ------------ | ----- |
| m8g.2xlarge  | ARM  | 23,438.98 | 21,378.73    | 0.91x |
| m8g.4xlarge  | ARM  | 40,843.86 | 42,496.78    | 1.04x |
| m8g.8xlarge  | ARM  | 49,096.64 | 85,796.66    | 1.75x |
|              |      |           |              |       |
| m7i.2xlarge  | x86  | 16,615.54 | 23,381.16    | 1.40x |
| m7i.4xlarge  | x86  | 28,759.26 | 32,758.62    | 1.14x |
| m7i.8xlarge  | x86  | 73,456.28 | 83,419.36    | 1.14x |
| m7i.24xlarge | x86  | 63,489.67 | 67,314.40    | 1.06x |

On 2026-04-05 1:40 UTC, Andres Freund wrote:
> Could you run something like the following while the benchmark is running:
>
> SELECT backend_type, wait_event_type, wait_event, state, count(*)
>   FROM pg_stat_activity
>   where wait_event_type NOT IN ('Activity')
>   GROUP BY backend_type, wait_event_type, wait_event, state
>   order by count(*) desc \watch 1
>
> and show what you see at the time your profile shows the bad contention?

On baseline, SpinDelay is consistently the top record, with a significantly higher count than the other wait event types, while with the patch WALWrite is consistently the top record.
Baseline:

  backend_type  | wait_event_type |      wait_event      |        state        | count
----------------+-----------------+----------------------+---------------------+-------
 client backend | Timeout         | SpinDelay            | active              |   838
 client backend | LWLock          | WALWrite             | idle in transaction |    10
 client backend | Client          | ClientRead           | idle in transaction |     4
 client backend | LWLock          | WALWrite             | active              |     3
 client backend | Timeout         | SpinDelay            | idle                |     2
 client backend | Client          | ClientRead           | idle                |     1
 client backend | Client          | ClientRead           | active              |     1
 client backend | IO              | WalSync              | idle in transaction |     1
 checkpointer   | Timeout         | CheckpointWriteDelay |                     |     1
(9 rows)

With patch (PREEMPT_NONE):

  backend_type  | wait_event_type |      wait_event      |        state        | count
----------------+-----------------+----------------------+---------------------+-------
 client backend | LWLock          | WALWrite             | active              |   922
 client backend | IPC             | ProcarrayGroupUpdate | active              |    26
 client backend | Client          | ClientRead           | active              |    24
 client backend | IO              | DataFileRead         | active              |    11
 client backend | LWLock          | WALWrite             | idle                |     5
 client backend | Timeout         | SpinDelay            | active              |     4
 client backend | IO              | DataFileWrite       | active              |     3
 client backend | IO              | WalSync              | active              |     2
 client backend | LWLock          | WALWrite             | idle in transaction |     1
 walwriter      | LWLock          | WALWrite             |                     |     1
 checkpointer   | IO              | DataFileSync         |                     |     1
 client backend | IO              | DataFileRead         | idle                |     1
(12 rows)

On 2026-04-05 14:44 UTC, Mitsumasa KONDO wrote:
> That said, this change is likely to cause similar breakage in other
> user-space applications beyond PostgreSQL that rely on lightweight
> spin loops on arm64. So I agree that the patch to retain PREEMPT_NONE
> is the right approach. At the same time, this is also something that
> distributions can resolve by patching their default kernel configuration.

That matches my view. PostgreSQL is where we first noticed the regression, but it is probably not limited to this application.
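For readers who want a concrete picture of the "lightweight spin loop" pattern under discussion, here is a minimal illustrative sketch. This is NOT PostgreSQL's actual s_lock implementation; the type and function names are made up, and the spin/yield threshold is arbitrary. It only shows why such loops are sensitive to the preemption model: if the lock holder is descheduled (by the kernel scheduler or, in a guest, by the hypervisor), every waiter burns CPU in this loop, which is roughly what the SpinDelay wait event above reflects.

```c
#include <stdatomic.h>
#include <sched.h>

/* Illustrative test-and-set spinlock (hypothetical, not PostgreSQL code). */
typedef struct {
    atomic_flag locked;
} spinlock_t;

static void spin_lock(spinlock_t *l)
{
    int spins = 0;
    /* Busy-wait until the flag was previously clear. */
    while (atomic_flag_test_and_set_explicit(&l->locked,
                                             memory_order_acquire)) {
        /* After some busy spinning, back off by yielding the CPU.
         * A descheduled holder makes waiters reach this path often;
         * how soon the holder gets preempted (PREEMPT_NONE vs.
         * PREEMPT_LAZY) changes how long they spin here. */
        if (++spins > 1000) {
            sched_yield();
            spins = 0;
        }
    }
}

static void spin_unlock(spinlock_t *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```

Uncontended, the loop exits on the first test-and-set; the pathological case is precisely the one measured above, where many backends hit the backoff path at once.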
On 2026-04-06 1:46 UTC, Mitsumasa KONDO wrote:
> Also worth noting: Salvatore's environment is an EC2 instance
> (m8g.24xlarge), not bare metal. Hypervisor-level vCPU scheduling
> adds another layer on top of PREEMPT_LAZY -- a lock holder can be
> descheduled not only by the kernel scheduler but also by the
> hypervisor, and the guest kernel has no visibility into this. This
> could amplify the regression in ways that are not reproducible on
> bare-metal systems, regardless of architecture.

I ran it against a bare-metal system of the same instance size (m8g.metal-24xl) and the results are similar. This suggests that the hypervisor does not add significant overhead to the regression on single-socket benchmarks.

| Instance       | Arch | Baseline  | Preempt None | Ratio |
| -------------- | ---- | --------- | ------------ | ----- |
| m8g.metal-24xl | ARM  | 61,489.83 | 90,225.66    | 1.47x |

On 2026-04-07 11:19 UTC, Mark Rutland wrote:
> Salvatore, was there a specific reason to test with PG_HUGE_PAGES=off
> rather than PG_HUGE_PAGES=try?

We test with various configurations to ensure customers don't encounter regressions regardless of their setup choices, even if some configurations aren't optimal for maximum performance.

AMAZON DEVELOPMENT CENTER ITALY SRL, viale Monte Grappa 3/5, 20124 Milano, Italia, Registro delle Imprese di Milano Monza Brianza Lodi REA n. 2504859, Capitale Sociale: 10.000 EUR i.v., Cod. Fisc. e P.IVA 10100050961, Società con Socio Unico