From: Waiman Long <longman@redhat.com>
To: Jesper Dangaard Brouer, Yosry Ahmed, Johannes Weiner
Cc: Tejun Heo, Jesper Dangaard Brouer, "David S. Miller",
 Sebastian Andrzej Siewior, Shakeel Butt, Arnaldo Carvalho de Melo,
 Daniel Bristot de Oliveira, kernel-team, cgroups@vger.kernel.org,
 Linux-MM, Netdev, bpf, LKML, Ivan Babrou
Subject: Re: Advice on cgroup rstat lock
Date: Tue, 9 Apr 2024 11:37:31 -0400
Message-ID: <96728c6d-3863-48c7-986b-b0b37689849e@redhat.com>

On 4/9/24 07:08, Jesper Dangaard Brouer wrote:
> Let's move this discussion upstream.
>
> On 22/03/2024 19.32, Yosry Ahmed wrote:
>> [..]
>>>> There were a couple of series that made all calls to
>>>> cgroup_rstat_flush() sleepable, which allows the lock to be dropped
>>>> (and IRQs enabled) in between CPU iterations.
>>>> This fixed a similar problem that we used to face (except in our
>>>> case, we saw hard lockups in extreme scenarios):
>>>> https://lore.kernel.org/linux-mm/20230330191801.1967435-1-yosryahmed@google.com/
>>>> https://lore.kernel.org/lkml/20230421174020.2994750-1-yosryahmed@google.com/
>>>
>>> I've only done the 6.6 backport, and these were in 6.5/6.6.
>
> Given that I have these in my 6.6 kernel, you are basically saying I
> should be able to avoid IRQ-disable for the lock, right?
>
> My main problem with the global cgroup_rstat_lock [3] is that it
> disables IRQs and (thereby also) BH/softirq (spin_lock_irq). This causes
> production issues elsewhere, e.g. we are seeing network softirq
> "not-able-to-run" latency issues (debugged via softirq_net_latency.bt [5]).
>
>   [3] https://elixir.bootlin.com/linux/v6.9-rc3/source/kernel/cgroup/rstat.c#L10
>   [5] https://github.com/xdp-project/xdp-project/blob/master/areas/latency/softirq_net_latency.bt
>
>>> And between 6.1 and 6.6 we did observe an improvement in this area.
>>> (Maybe I don't have to do the 6.1 backport if the 6.6 release plan
>>> progresses.)
>>>
>>> I've had a chance to get the 6.6 backport running in prod.
>>> As you can see in the attached grafana heatmap pictures, we do observe
>>> an improved/reduced softirq wait time.
>>> These softirq "not-able-to-run" outliers are *one* of the prod issues
>>> we observed. As you can see, I still have other areas to improve/fix.
>>
>> I am not very familiar with such heatmaps, but I am glad there is an
>> improvement with 6.6 and the backports. Let me know if there is
>> anything I could do to help with your effort.
>
> The heatmaps give me an overview, but I needed a debugging tool, so I
> developed some bpftrace scripts [1][2] that I'm running in production
> to measure how long we hold the cgroup rstat lock (results below).
> Adding ACME and Daniel, as I hope there is an easier way to measure lock
> hold time and contention. Notice the tricky release/yield in
> cgroup_rstat_flush_locked [4].
>
> My production results on 6.6 with backported patches (below signature)
> vs. our normal 6.6 kernel, with script [2]: the `@lock_time_hist_ns`
> histogram shows how long the lock+IRQs were disabled (taking into
> account that the lock can be released inside the loop [4]).
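
For readers without the source handy, the release/yield that [4] refers to
looks roughly like the sketch below (simplified and abbreviated, not the
exact v6.6 code); it is why one long flush can show up as several shorter
lock-hold samples in the histograms below:

	/*
	 * Simplified sketch of cgroup_rstat_flush_locked() in
	 * kernel/cgroup/rstat.c (illustrative only): the global lock and
	 * IRQs may be dropped and re-taken between per-CPU iterations.
	 */
	static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
	{
		int cpu;

		lockdep_assert_held(&cgroup_rstat_lock);

		for_each_possible_cpu(cpu) {
			/* ... flush this CPU's updated-cgroup list ... */

			/* play nice and yield when the flush runs long */
			if (need_resched() ||
			    spin_needbreak(&cgroup_rstat_lock)) {
				spin_unlock_irq(&cgroup_rstat_lock);
				if (!cond_resched())
					cpu_relax();
				spin_lock_irq(&cgroup_rstat_lock);
			}
		}
	}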
>
> Patched kernel:
>
> 21:49:02  time elapsed: 43200 sec
> @lock_time_hist_ns:
> [2K, 4K)              61 |                                                    |
> [4K, 8K)             734 |                                                    |
> [8K, 16K)         121500 |@@@@@@@@@@@@@@@@                                    |
> [16K, 32K)        385714 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
> [32K, 64K)        145600 |@@@@@@@@@@@@@@@@@@@                                 |
> [64K, 128K)       156873 |@@@@@@@@@@@@@@@@@@@@@                               |
> [128K, 256K)      261027 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                 |
> [256K, 512K)      291986 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@             |
> [512K, 1M)        101859 |@@@@@@@@@@@@@                                       |
> [1M, 2M)           19866 |@@                                                  |
> [2M, 4M)           10146 |@                                                   |
> [4M, 8M)           30633 |@@@@                                                |
> [8M, 16M)          40365 |@@@@@                                               |
> [16M, 32M)         21650 |@@                                                  |
> [32M, 64M)          5842 |                                                    |
> [64M, 128M)            8 |                                                    |
>
> And normal 6.6 kernel:
>
> 21:48:32  time elapsed: 43200 sec
> @lock_time_hist_ns:
> [1K, 2K)              25 |                                                    |
> [2K, 4K)            1146 |                                                    |
> [4K, 8K)           59397 |@@@@                                                |
> [8K, 16K)         571528 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@             |
> [16K, 32K)        542648 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@               |
> [32K, 64K)        202810 |@@@@@@@@@@@@@                                       |
> [64K, 128K)       134564 |@@@@@@@@@                                           |
> [128K, 256K)       72870 |@@@@@                                               |
> [256K, 512K)       56914 |@@@                                                 |
> [512K, 1M)         83140 |@@@@@                                               |
> [1M, 2M)          170514 |@@@@@@@@@@@                                         |
> [2M, 4M)          396304 |@@@@@@@@@@@@@@@@@@@@@@@@@@@                         |
> [4M, 8M)          755537 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
> [8M, 16M)         231222 |@@@@@@@@@@@@@@@                                     |
> [16M, 32M)         76370 |@@@@@                                               |
> [32M, 64M)          1043 |                                                    |
> [64M, 128M)           12 |                                                    |
>
> For the unpatched kernel, we see more events in the 4 ms to 8 ms bucket
> than in any other bucket.
> For the patched kernel, we clearly see a significant reduction of events
> in the 4 ms to 64 ms area, but we still have some events there. I'm very
> happy to see that these patches improve the situation. But for network
> processing I'm not happy to see events in the 16 ms to 128 ms area. If
> we can just avoid disabling IRQs/softirqs for the lock, I would be happy.
>
> How far can we go... could cgroup_rstat_lock be converted to a mutex?

The cgroup_rstat_lock was originally a mutex. It was converted to a
spinlock in commit 0fa294fb1985 ("cgroup: Replace cgroup_rstat_mutex with
a spinlock"), and IRQs were disabled so that it could be taken from atomic
context. Since commit 0a2dc6ac3329 ("cgroup: remove
cgroup_rstat_flush_atomic()"), the rstat API is no longer called from
atomic context. Theoretically, we could change it back to a mutex or stop
disabling interrupts. That would require that the API not be called from
atomic context going forward.

Cheers,
Longman
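
For illustration only, a minimal sketch of what converting
cgroup_rstat_lock back into a mutex could look like, assuming every
remaining caller can sleep (a sketch of the idea, not a proposed patch):

	/* kernel/cgroup/rstat.c -- illustrative sketch, not a proposed patch */
	static DEFINE_MUTEX(cgroup_rstat_lock);	/* was: DEFINE_SPINLOCK() */

	void cgroup_rstat_flush(struct cgroup *cgrp)
	{
		might_sleep();	/* no atomic-context callers allowed */

		/* was: spin_lock_irq(); IRQs and softirqs now stay enabled */
		mutex_lock(&cgroup_rstat_lock);
		cgroup_rstat_flush_locked(cgrp);
		mutex_unlock(&cgroup_rstat_lock);
	}

Contended flushers would then sleep instead of spinning with IRQs off,
which is the property the network softirq latency issue above is asking for.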