From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 26 Feb 2026 12:49:18 -0300
From: Marcelo Tosatti
To: Leonardo Bras
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org,
	Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Muchun Song, Andrew Morton, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Vlastimil Babka,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, Leonardo Bras,
	Thomas Gleixner, Waiman Long, Boqun Feng
Subject: Re: [PATCH 3/4] swap: apply new queue_percpu_work_on() interface
Message-ID:
References: <20260206143430.021026873@redhat.com>
 <20260206143741.589656953@redhat.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:

On Fri, Feb 06, 2026 at 10:06:28PM -0300, Leonardo Bras wrote:
> > +	cpu = smp_processor_id();
>
> Wondering if for these cases it would make sense to have something like:
>
> qpw_get_local_cpu() and
> qpw_put_local_cpu()
>
> so we could encapsulate these migrate_{en,dis}able()
> and the smp_processor_id().
>
> Or even,
>
> int qpw_local_lock() {
> 	migrate_disable();
> 	cpu = smp_processor_id();
> 	qpw_lock(..., cpu);
>
> 	return cpu;
> }
>
> and
>
> qpw_local_unlock(cpu) {
> 	qpw_unlock(..., cpu);
> 	migrate_enable();
> }
>
> so it's more direct to convert the local-only cases.
>
> What do you think?

Switched to the local_qpw_lock variants.

> > {
> > -	local_lock(&cpu_fbatches.lock);
> > -	lru_add_drain_cpu(smp_processor_id());
>
> and here?

Fixed the missing migrate_disable/migrate_enable, thanks!

> > @@ -950,7 +954,7 @@ void lru_cache_disable(void)
> >  #ifdef CONFIG_SMP
> >  	__lru_add_drain_all(true);
> >  #else
> > -	lru_add_mm_drain();
>
> and here, I wonder

This is !CONFIG_SMP, so smp_processor_id() is always 0.

> > 	drain_pages(cpu);
> >
> > 	/*
>
> TBH, I am still trying to understand if we need the migrate_{en,dis}able():
> - There is a data dependency between cpu being filled and being used.
> - If we get the cpu, and then migrate to a different cpu, the operation
>   will still be executed with the data from that starting cpu.

Yes, but it would then execute on a remote CPU. What would prevent the
original CPU from accessing its per-CPU data at the same time, racing
with the code executing on the remote CPU?

> - But maybe the compiler tries to optimize this because the processor number
>   can be in a register and of easy access, which would break this.
>
> Maybe a READ_ONCE() on smp_processor_id() should suffice?
>
> Other than that, all the conversions done look correct.
>
> That being said, I understand very little about mm code, so let's hope we
> get proper feedback from those who do :)
>
> Thanks!
> Leo