Subject: Re: [PATCH net-next v5 2/2] page_pool: Track DMA-mapped pages and
 unmap them when destroying the pool
From: Yunsheng Lin
Date: Tue, 1 Apr 2025 17:24:43 +0800
To: Zhu Yanjun, Alexander Lobakin, Toke Høiland-Jørgensen
CC: "David S. Miller", Jakub Kicinski, Jesper Dangaard Brouer,
 Saeed Mahameed, Leon Romanovsky, Tariq Toukan, Andrew Lunn,
 Eric Dumazet, Paolo Abeni, Ilias Apalodimas, Simon Horman,
 Andrew Morton, Mina Almasry, Yonglong Liu, Pavel Begunkov,
 Matthew Wilcox, Qiuling Ren, Yuying Ma
List-Id: linux-rdma@vger.kernel.org
References: <20250328-page-pool-track-dma-v5-0-55002af683ad@redhat.com>
 <20250328-page-pool-track-dma-v5-2-55002af683ad@redhat.com>

On 2025/4/1 1:27, Zhu Yanjun wrote:
> On 2025/3/31 18:35, Alexander Lobakin wrote:
>> From: Toke Høiland-Jørgensen
>> Date: Fri, 28 Mar 2025 13:19:09 +0100
>>
>>> When enabling DMA mapping in page_pool, pages are kept DMA mapped until
>>> they are released from the pool, to avoid the overhead of re-mapping the
>>> pages every time they are used.
>>> This causes resource leaks and/or
>>> crashes when there are pages still outstanding while the device is torn
>>> down, because page_pool will attempt an unmap through a non-existent DMA
>>> device on the subsequent page return.
>>
>> [...]
>>
>>> @@ -173,10 +212,10 @@ struct page_pool {
>>>       int cpuid;
>>>       u32 pages_state_hold_cnt;
>>>
>>> -    bool has_init_callback:1;    /* slow::init_callback is set */
>>> +    bool dma_sync;            /* Perform DMA sync for device */
>>
>> Have you seen my comment under v3 (sorry but I missed that there was v4
>> already)? Can't we just test the bit atomically?
>
> Perhaps the test_bit family of functions can test the bit atomically.
> There may be better options for testing the bit atomically, but
> test_bit should do the job.

There are two readings of dma_sync in this patch: the first is done
outside the RCU read lock and without READ_ONCE(); the second is done
under the RCU read lock and with READ_ONCE().

The first one seems to be an optimization to avoid taking the RCU read
lock, which might need READ_ONCE() to make KCSAN happy, if we do care
about making KCSAN happy.

The second one does not seem to need the atomicity provided by
READ_ONCE(), as it is always under an RCU read lock (implicit or
explicit), and there is an RCU sync after the clearing of that bit.
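
For reference, a minimal sketch of the kind of atomic-bit approach
suggested above using test_bit(); the struct, bit and helper names
below are made up for illustration and are not taken from the patch:

#include <linux/bitops.h>

#define PP_DMA_SYNC_BIT		0

struct page_pool_sketch {
	unsigned long flags;	/* replaces the bool bitfields */
};

/* test_bit() is a single read of the bit, so no non-atomic RMW on the
 * containing word as with adjacent bool bitfields
 */
static inline bool pp_dma_sync_enabled(const struct page_pool_sketch *pool)
{
	return test_bit(PP_DMA_SYNC_BIT, &pool->flags);
}

/* clear_bit() is an atomic RMW, safe against concurrent bitops */
static inline void pp_dma_sync_disable(struct page_pool_sketch *pool)
{
	clear_bit(PP_DMA_SYNC_BIT, &pool->flags);
}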
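
To make the point above concrete, here is a stand-alone sketch of the
two read patterns and the teardown ordering; dma_sync_flag and the
function names are illustrative, not the actual page_pool code:

#include <linux/rcupdate.h>
#include <linux/compiler.h>

static bool dma_sync_flag = true;

static bool reader_outside_rcu(void)
{
	/* Lockless fast path: READ_ONCE() marks the racy read for
	 * KCSAN, but by itself gives no ordering against teardown.
	 */
	return READ_ONCE(dma_sync_flag);
}

static bool reader_under_rcu(void)
{
	bool val;

	rcu_read_lock();
	/* The synchronize_rcu() in the writer guarantees that this
	 * critical section either finishes before teardown proceeds,
	 * or observes the cleared flag; READ_ONCE() adds nothing to
	 * that guarantee.
	 */
	val = READ_ONCE(dma_sync_flag);
	rcu_read_unlock();
	return val;
}

static void disable_dma_sync(void)
{
	WRITE_ONCE(dma_sync_flag, false);
	/* Wait until all pre-existing RCU readers have finished. */
	synchronize_rcu();
}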