From: Christoph Hellwig
To: Jason Gunthorpe
Cc: Daniel Vetter, Matthew Wilcox, nvdimm@lists.linux.dev,
	linux-rdma@vger.kernel.org, John Hubbard, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org, Ming Lei, linux-block@vger.kernel.org,
	linux-mm@kvack.org, netdev@vger.kernel.org, Joao Martins,
	Logan Gunthorpe, Christoph Hellwig
Subject: Re: Phyr Starter
Date: Thu, 20 Jan 2022 15:09:39 +0100
Message-ID: <20220120140939.GA11707@lst.de>
References: <20220111004126.GJ2328285@nvidia.com>
	<20220111202648.GP2328285@nvidia.com>
In-Reply-To: <20220111202648.GP2328285@nvidia.com>

On Tue, Jan 11, 2022 at 04:26:48PM -0400, Jason Gunthorpe wrote:
> What I did in RDMA was make an iterator, rdma_umem_for_each_dma_block().
>
> The driver passes in the page size it wants and the iterator breaks up
> the SGL into that size.
>
> So, e.g. on a 16k page size system the SGL would be full of 16K entries,
> but the driver only supports 4k, so the iterator hands out four pages
> for each SGL entry.
>
> All the drivers use this to build their DMA lists and tables; it works
> really well.

The block layer also has the equivalent functionality: setting the
virt_boundary value in the queue_limits. This is needed for NVMe PRPs
and RDMA drivers.
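
To make the quoted iterator concrete, a driver consumes it roughly like
this. rdma_umem_for_each_dma_block(), rdma_block_iter_dma_address() and
ib_umem_find_best_pgsz() are the real helpers from
include/rdma/ib_umem.h; my_device, supported_pgsz_bitmap and
program_pte() are made-up stand-ins for the driver side:

	#include <rdma/ib_umem.h>

	static void build_hw_page_table(struct my_device *dev,
					struct ib_umem *umem)
	{
		struct ib_block_iter biter;
		unsigned long pgsz;

		/* Pick the best page size the HW supports for this mapping. */
		pgsz = ib_umem_find_best_pgsz(umem, dev->supported_pgsz_bitmap,
					      umem->iova);

		/*
		 * Walk the umem's SGL in pgsz-sized blocks: a 16K SGL entry
		 * is handed out as four blocks when pgsz is 4K.
		 * program_pte() is a stand-in for writing one device PTE.
		 */
		rdma_umem_for_each_dma_block(umem, &biter, pgsz)
			program_pte(dev, rdma_block_iter_dma_address(&biter));
	}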
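
The block-layer side is a one-time setting at queue setup. A minimal
sketch, mirroring the NVMe host driver's use of NVME_CTRL_PAGE_SIZE in
drivers/nvme/host/core.c (the function name here is hypothetical):

	#include <linux/blkdev.h>
	#include <linux/sizes.h>

	static void my_set_queue_limits(struct request_queue *q)
	{
		/*
		 * Gaps between bio vectors are only allowed on 4K
		 * boundaries, so every request maps directly onto a
		 * PRP-style page list without splitting in the driver.
		 */
		blk_queue_virt_boundary(q, SZ_4K - 1);
	}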