Date: Wed, 25 Jan 2023 15:22:00 -0300
From: Marcelo Tosatti <mtosatti@redhat.com>
To: Leonardo Brás
Cc: Michal Hocko, Johannes Weiner, Roman Gushchin, Shakeel Butt,
	Muchun Song, Andrew Morton, cgroups@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 0/5] Introduce memcg_stock_pcp remote draining
In-Reply-To: <9e61ab53e1419a144f774b95230b789244895424.camel@redhat.com>
References: <20230125073502.743446-1-leobras@redhat.com>
 <9e61ab53e1419a144f774b95230b789244895424.camel@redhat.com>

On Wed, Jan 25, 2023 at 08:06:46AM -0300, Leonardo Brás wrote:
> On Wed, 2023-01-25 at 09:33 +0100, Michal Hocko wrote:
> > On Wed 25-01-23 04:34:57, Leonardo Bras wrote:
> > > Disclaimer:
> > > a - The cover letter got bigger than expected, so I had to split it
> > >     into sections to organize it better. I am not very comfortable
> > >     with that.
> > > b - The performance numbers below do not include patch 5/5 (Remove
> > >     flags from memcg_stock_pcp), which could further improve
> > >     performance for drain_all_stock(), but I only noticed that
> > >     optimization at the last minute.
> > >
> > >
> > > 0 - Motivation:
> > > In the current codebase, when drain_all_stock() is run, it schedules
> > > a drain_local_stock() on each cpu that has a percpu stock associated
> > > with a descendant of the given root_memcg.
> > >
> > > This happens even on 'isolated cpus', a feature commonly used by
> > > workloads that are sensitive to interruption and context switching,
> > > such as vRAN and Industrial Control Systems.
> > >
> > > Since this scheduling behavior is a problem for those workloads, the
> > > proposal is to replace the current local_lock + schedule_work_on()
> > > solution with a per-cpu spinlock.
> >
> > IIRC we have also discussed that isolated CPUs could simply opt out of
> > the pcp caching, and the problem would then be avoided altogether
> > without changes to the locking scheme. I do not see anything regarding
> > that in this submission. Could you elaborate on why you have abandoned
> > this option?
>
> Hello Michal,
>
> I understand pcp caching is a nice-to-have.
> So while I kept the idea of disabling pcp caching in mind as an option,
> I first tried to understand what kind of impact we would see when
> changing the locking scheme.

Remote draining reduces interruptions whether a CPU is marked as
isolated or not:

- It allows isolated CPUs to keep benefiting from pcp caching.
- It removes the interruption to non-isolated CPUs. See for example
  https://lkml.org/lkml/2022/6/13/2769

(Rough sketches of both the remote-draining change and the opt-out
alternative are appended at the end of this mail.)

"Minchan Kim tested this independently and reported;

 My workload does not run on NOHZ CPUs, but it runs apps under heavy
 memory pressure, so they go into direct reclaim and get stuck on
 drain_all_pages until the work on the workqueue runs.

 unit: nanosecond
   max(dur)      avg(dur)              count(dur)
   166713013     487511.77786438033    1283

 From the traces, the system hit drain_all_pages 1283 times; the worst
 case was 166ms and the average was 487us.

 The other problem was alloc_contig_range in CMA. The PCP draining
 sometimes takes several hundred milliseconds even though there is no
 memory pressure, or only a few pages need to be migrated out, but the
 CPUs were fully booked.

 Your patch perfectly removed that wasted time."

> After I gathered the data in the cover letter, I found that the
> performance impact appears not to be that big. So in order to keep the
> pcp cache active on isolated cpus, I decided to focus my effort on the
> locking scheme change.
>
> I mean, my rationale is: if there is an inexpensive way of keeping the
> feature, why should we abandon it?
>
> Best regards,
> Leo
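
For anyone following along, here is a rough before/after sketch of the
locking change the cover letter describes. This is an illustration
only, not the actual patches: the function bodies are heavily
simplified, and stock_has_descendant() is a made-up helper standing in
for the real "is the cached memcg a descendant of root_memcg" check in
mm/memcontrol.c.

/*
 * Today (simplified sketch): the pcp stock is protected by a
 * local_lock, so it can only be drained on its own CPU, and
 * drain_all_stock() has to queue a work item there -- which is an
 * interruption of whatever that CPU is running.
 */
static void drain_all_stock_today(struct mem_cgroup *root_memcg)
{
	int cpu;

	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);

		if (stock_has_descendant(stock, root_memcg) &&
		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags))
			schedule_work_on(cpu, &stock->work); /* runs on 'cpu' */
	}
}

/*
 * Proposed direction (sketch): with a per-cpu spinlock instead of the
 * local_lock, the CPU issuing the drain can take any CPU's lock and
 * drain the stock itself -- no work item, no interruption of the
 * target CPU.
 */
static void drain_all_stock_remote(struct mem_cgroup *root_memcg)
{
	int cpu;

	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
		unsigned long flags;

		if (!stock_has_descendant(stock, root_memcg))
			continue;

		spin_lock_irqsave(&stock->stock_lock, flags);
		drain_stock(stock);	/* executed remotely, on this CPU */
		spin_unlock_irqrestore(&stock->stock_lock, flags);
	}
}

The cost is that every fast-path access to the stock now takes a
spinlock instead of a local_lock, which is what the performance numbers
in the cover letter try to quantify.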
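
And for contrast, the opt-out alternative Michal mentions would look
more like the sketch below: simply bypass the pcp cache on isolated
CPUs, so there is never anything to drain there. Again an assumption,
not real code -- cpu_is_isolated() stands in for whatever
isolation/housekeeping predicate would actually be used, and the body
of consume_stock() is reduced to its essentials (the real one also
takes the stock lock).

/* charge fast path: try to consume from the per-cpu cached stock */
static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
{
	struct memcg_stock_pcp *stock;
	bool ret = false;

	/* isolated CPUs skip the cache and always take the slow path */
	if (cpu_is_isolated(smp_processor_id()))
		return false;

	stock = this_cpu_ptr(&memcg_stock);
	if (memcg == stock->cached && stock->nr_pages >= nr_pages) {
		stock->nr_pages -= nr_pages;
		ret = true;
	}
	return ret;
}

The trade-off between the two sketches is exactly the one discussed
above: the opt-out is a much smaller change, but isolated CPUs lose the
pcp cache entirely.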