From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 23 Apr 2026 19:43:54 +0530
From: Vinod Koul
To: "Garg, Shivank"
Cc: lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, vbabka@kernel.org,
	willy@infradead.org, rppt@kernel.org, surenb@google.com, mhocko@suse.com,
	ziy@nvidia.com, matthew.brost@intel.com, joshua.hahnjy@gmail.com,
	rakie.kim@sk.com, byungchul@sk.com, gourry@gourry.net,
	ying.huang@linux.alibaba.com, apopple@nvidia.com, dave@stgolabs.net,
	Jonathan.Cameron@huawei.com, rkodsara@amd.com, bharata@amd.com,
	sj@kernel.org, weixugc@google.com, dan.j.williams@intel.com,
	rientjes@google.com, xuezhengchu@huawei.com, yiannis@zptcorp.com,
	dave.hansen@intel.com, hannes@cmpxchg.org, jhubbard@nvidia.com,
	peterx@redhat.com, riel@surriel.com, shakeel.butt@linux.dev,
	stalexan@redhat.com, tj@kernel.org, nifan.cxl@gmail.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	akpm@linux-foundation.org, david@kernel.org
Subject: Re: [RFC PATCH v4 5/6] drivers/migrate_offload: add DMA batch copy driver (dcbm)
References: <20260309120725.308854-3-shivankg@amd.com>
	<20260309120725.308854-14-shivankg@amd.com>
	<396b4be1-376b-4aac-bd1e-2854c88b3757@amd.com>
In-Reply-To: <396b4be1-376b-4aac-bd1e-2854c88b3757@amd.com>

On 23-04-26, 17:40, Garg, Shivank wrote:
> Hi Vinod,
> 
> Following your suggestion at the Kernel meetup in Bangalore (11 Apr 2026)
> to check 0cae04373b ("dmaengine: remove DMA_MEMCPY_SG once again") and use
> DMA_MEMCPY_SG / dmaengine_prep_dma_memcpy_sg(), I added a
> device_prep_dma_memcpy_sg hook in drivers/dma/amd/ptdma/ptdma-dmaengine.c
> for this experiment (not posted).
> I ran an A/B comparison against the existing DCBM path that uses
> dmaengine_prep_dma_memcpy() in a loop over mapped SGL segments.
> 
> I'm using the move_pages() workload to move 1 GB of data per run. I do not
> see a significant performance difference; the results are broadly within
> each other's noise bands.
> 
> Throughput (GB/s, mean ± SD), ITERATIONS=10:
> 
> Page   nr_dma_chan=1              nr_dma_chan=4               nr_dma_chan=8               nr_dma_chan=16
> order  dcbm          dcbm_sg      dcbm          dcbm_sg       dcbm          dcbm_sg       dcbm          dcbm_sg
> -----  ------------  -----------  ------------  ------------  ------------  ------------  ------------  ------------
> 0      2.33 ± 0.17   2.26 ± 0.19  3.24 ± 0.21   3.18 ± 0.23   3.29 ± 0.10   3.45 ± 0.10   3.29 ± 0.13   3.49 ± 0.22
> 4      2.77 ± 0.21   2.99 ± 0.18  6.26 ± 0.99   6.75 ± 0.12   8.01 ± 0.58   7.70 ± 0.64   8.22 ± 0.89   8.72 ± 0.87
> 8      4.57 ± 0.70   4.75 ± 0.83  10.64 ± 1.97  10.94 ± 3.52  10.30 ± 1.22  10.36 ± 1.24  11.27 ± 1.21  12.47 ± 1.66
> 9      12.71 ± 0.09  12.68 ± 0.08 27.13 ± 0.15  26.89 ± 0.27  46.50 ± 0.73  45.17 ± 2.46  67.25 ± 1.42  62.78 ± 8.24
> 
> Notes: order 0/4/8/9 = 4K / 64K / 1M / 2M folios
>        dcbm    = per-segment dmaengine_prep_dma_memcpy
>        dcbm_sg = DMA_MEMCPY_SG / dmaengine_prep_dma_memcpy_sg
> 
> > +
> > +static int submit_dma_transfers(struct dma_work *work)
> > +{
> > +	struct scatterlist *sg_src, *sg_dst;
> > +	struct dma_async_tx_descriptor *tx;
> > +	unsigned long flags = DMA_CTRL_ACK;
> > +	dma_cookie_t cookie;
> > +	int i;
> > +
> > +	atomic_set(&work->pending, 1);
> > +
> > +	sg_src = work->src_sgt->sgl;
> > +	sg_dst = work->dst_sgt->sgl;
> > +	for_each_sgtable_dma_sg(work->src_sgt, sg_src, i) {
> > +		if (i == work->src_sgt->nents - 1)
> > +			flags |= DMA_PREP_INTERRUPT;
> > +
> > +		tx = dmaengine_prep_dma_memcpy(work->chan,
> > +					       sg_dma_address(sg_dst),
> > +					       sg_dma_address(sg_src),
> > +					       sg_dma_len(sg_src), flags);
> > +		if (!tx) {
> > +			atomic_set(&work->pending, 0);
> > +			return -EIO;
> > +		}
> > +
> > +		if (i == work->src_sgt->nents - 1) {
> > +			tx->callback = dma_completion_callback;
> > +			tx->callback_param = work;
> > +		}
> > +
> > +		cookie = dmaengine_submit(tx);
> > +		if (dma_submit_error(cookie)) {
> > +			atomic_set(&work->pending, 0);
> > +			return -EIO;
> > +		}
> > +		sg_dst = sg_next(sg_dst);
> > +	}
> > +	return 0;
> > +}
> 
> static int submit_dma_transfers(struct dma_work *work)
> {
> 	struct dma_async_tx_descriptor *tx;
> 	unsigned long flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
> 	dma_cookie_t cookie;
> 
> 	tx = dmaengine_prep_dma_memcpy_sg(work->chan,
> 			work->dst_sgt->sgl, work->dst_sgt->nents,
> 			work->src_sgt->sgl, work->src_sgt->nents,
> 			flags);
> 	if (!tx)
> 		return -EIO;
> 
> 	atomic_set(&work->pending, 1);
> 	tx->callback = dma_completion_callback;
> 	tx->callback_param = work;
> 
> 	cookie = dmaengine_submit(tx);
> 	if (dma_submit_error(cookie)) {
> 		atomic_set(&work->pending, 0);
> 		return -EIO;
> 	}
> 	return 0;
> }
> 
> The memcpy_sg version does simplify submit_dma_transfers()
> (one dmaengine_prep_dma_memcpy_sg + one dmaengine_submit vs a loop).

Right

> My current DCBM path issues dmaengine_prep_dma_memcpy() +
> dmaengine_submit() per mapped SG segment and sets DMA_PREP_INTERRUPT +
> callback only on the last one, so the IRQ/callback cost is already one
> per batch.
> 
> My understanding is that switching to dmaengine_prep_dma_memcpy_sg()
> mainly saves the per-segment prep/submit calls and hands the provider a
> single multi-segment TX to program.

Right, but the analysis you showed indicated that the DMA setup cost was
significant, so moving from N transfers to a single one should have saved
a bit more...

> Please correct me if the benefit you had in mind is something stronger.
> Thanks for the suggestion and guidance.

I still feel this looks like the better version... Can you compare the
setup time between the two, please?

Thanks

-- 
~Vinod