From: Frederic Weisbecker
To: LKML
Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton, Bjorn Helgaas,
 Catalin Marinas, Chen Ridong, Danilo Krummrich, David S. Miller,
 Eric Dumazet, Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar,
 Jakub Kicinski, Jens Axboe, Johannes Weiner, Lai Jiangshan,
 Marco Crivellari, Michal Hocko, Muchun Song, Paolo Abeni,
 Peter Zijlstra, Phil Auld, Rafael J. Wysocki, Roman Gushchin,
 Shakeel Butt, Simon Horman, Tejun Heo, Thomas Gleixner,
 Vlastimil Babka, Waiman Long, Will Deacon, cgroups@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-block@vger.kernel.org,
 linux-mm@kvack.org, linux-pci@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH 01/33] PCI: Prepare to protect against concurrent isolated cpuset change
Date: Wed, 24 Dec 2025 14:44:48 +0100
Message-ID: <20251224134520.33231-2-frederic@kernel.org>
X-Mailer: git-send-email 2.51.1
In-Reply-To: <20251224134520.33231-1-frederic@kernel.org>
References: <20251224134520.33231-1-frederic@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

HK_TYPE_DOMAIN will soon integrate cpuset isolated partitions and
therefore be made modifiable at runtime. Synchronize against the cpumask
update using RCU.

The RCU locked section includes both the housekeeping CPU target
election for the PCI probe work and the work enqueue. This way the
housekeeping update side will simply need to flush the pending related
works after updating the housekeeping mask in order to make sure that no
PCI work ever executes on an isolated CPU. This part will be handled in
a subsequent patch.
Signed-off-by: Frederic Weisbecker
---
 drivers/pci/pci-driver.c | 47 ++++++++++++++++++++++++++++++++--------
 1 file changed, 38 insertions(+), 9 deletions(-)

diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index 7c2d9d596258..786d6ce40999 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -302,9 +302,8 @@ struct drv_dev_and_id {
 	const struct pci_device_id *id;
 };
 
-static long local_pci_probe(void *_ddi)
+static int local_pci_probe(struct drv_dev_and_id *ddi)
 {
-	struct drv_dev_and_id *ddi = _ddi;
 	struct pci_dev *pci_dev = ddi->dev;
 	struct pci_driver *pci_drv = ddi->drv;
 	struct device *dev = &pci_dev->dev;
@@ -338,6 +337,19 @@ static long local_pci_probe(void *_ddi)
 	return 0;
 }
 
+struct pci_probe_arg {
+	struct drv_dev_and_id *ddi;
+	struct work_struct work;
+	int ret;
+};
+
+static void local_pci_probe_callback(struct work_struct *work)
+{
+	struct pci_probe_arg *arg = container_of(work, struct pci_probe_arg, work);
+
+	arg->ret = local_pci_probe(arg->ddi);
+}
+
 static bool pci_physfn_is_probed(struct pci_dev *dev)
 {
 #ifdef CONFIG_PCI_IOV
@@ -362,34 +374,51 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
 	dev->is_probed = 1;
 
 	cpu_hotplug_disable();
-
 	/*
 	 * Prevent nesting work_on_cpu() for the case where a Virtual Function
 	 * device is probed from work_on_cpu() of the Physical device.
 	 */
 	if (node < 0 || node >= MAX_NUMNODES || !node_online(node) ||
 	    pci_physfn_is_probed(dev)) {
-		cpu = nr_cpu_ids;
+		error = local_pci_probe(&ddi);
 	} else {
 		cpumask_var_t wq_domain_mask;
+		struct pci_probe_arg arg = { .ddi = &ddi };
+
+		INIT_WORK_ONSTACK(&arg.work, local_pci_probe_callback);
 
 		if (!zalloc_cpumask_var(&wq_domain_mask, GFP_KERNEL)) {
 			error = -ENOMEM;
 			goto out;
 		}
+
+		/*
+		 * The target election and the enqueue of the work must be within
+		 * the same RCU read side section so that when the workqueue pool
+		 * is flushed after a housekeeping cpumask update, further readers
+		 * are guaranteed to queue the probing work to the appropriate
+		 * targets.
+		 */
+		rcu_read_lock();
 		cpumask_and(wq_domain_mask,
 			    housekeeping_cpumask(HK_TYPE_WQ),
 			    housekeeping_cpumask(HK_TYPE_DOMAIN));
 		cpu = cpumask_any_and(cpumask_of_node(node),
 				      wq_domain_mask);
+		if (cpu < nr_cpu_ids) {
+			schedule_work_on(cpu, &arg.work);
+			rcu_read_unlock();
+			flush_work(&arg.work);
+			error = arg.ret;
+		} else {
+			rcu_read_unlock();
+			error = local_pci_probe(&ddi);
+		}
+
		free_cpumask_var(wq_domain_mask);
+		destroy_work_on_stack(&arg.work);
 	}
-
-	if (cpu < nr_cpu_ids)
-		error = work_on_cpu(cpu, local_pci_probe, &ddi);
-	else
-		error = local_pci_probe(&ddi);
 out:
 	dev->is_probed = 0;
 	cpu_hotplug_enable();
-- 
2.51.1