From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 22 Mar 2024 15:43:30 -0300
From: Jason Gunthorpe <jgg@ziepe.ca>
To: Christoph Hellwig
Cc: Leon Romanovsky, Robin Murphy, Marek Szyprowski, Joerg Roedel,
	Will Deacon, Chaitanya Kulkarni, Jonathan Corbet, Jens Axboe,
	Keith Busch, Sagi Grimberg, Yishai Hadas, Shameer Kolothum,
	Kevin Tian, Alex Williamson, Jérôme Glisse, Andrew Morton,
	Bart Van Assche, Damien Le Moal, Amir Goldstein,
	josef@toxicpanda.com, "Martin K. Petersen", daniel@iogearbox.net,
	Dan Williams, jack@suse.com, Zhu Yanjun,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
	iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
	kvm@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC RESEND 00/16] Split IOMMU DMA mapping operation to two steps
Message-ID: <20240322184330.GL66976@ziepe.ca>
In-Reply-To: <20240321223910.GA22663@lst.de>
References: <20240306174456.GO9225@ziepe.ca>
	<20240306221400.GA8663@lst.de>
	<20240307000036.GP9225@ziepe.ca>
	<20240307150505.GA28978@lst.de>
	<20240307210116.GQ9225@ziepe.ca>
	<20240308164920.GA17991@lst.de>
	<20240308202342.GZ9225@ziepe.ca>
	<20240309161418.GA27113@lst.de>
	<20240319153620.GB66976@ziepe.ca>
	<20240321223910.GA22663@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
On Thu, Mar 21, 2024 at 11:39:10PM +0100, Christoph Hellwig wrote:
> On Tue, Mar 19, 2024 at 12:36:20PM -0300, Jason Gunthorpe wrote:
> > I kind of understand your thinking on the DMA side, but I don't see
> > how this is good for users of the API beyond BIO.
> >
> > How will this make RDMA better? We have one MR, the MR has pages, the
> > HW doesn't care about the SW distinction of p2p, swiotlb, direct,
> > encrypted, iommu, etc. It needs to create one HW page list for
> > whatever user VA range was given.
>
> Well, the hardware (as in the PCIe card) never cares. But the setup
> path for the IOMMU does, and something in the OS needs to know about
> it. So unless we want to stash away an 'is this P2P' flag in every
> page / SG entry / bvec, or do a lookup to find that out for each
> of them, we need to manage chunks at these boundaries. And that's
> what I'm proposing.

Okay, if we look at the struct-page-less world (which we want for
DMABUF) then we need to keep track for sure. What I had drafted was to
keep track in the new "per-SG entry" because that seemed easiest to
migrate existing code into.

Though the data structure could also be written as a list of uniform
memory types and then a list of SG entries (more like how bio is
organized). No idea right now which is better, and I'm happy to make it
go either way.

But Leon's series is not quite getting to this: it is still struct page
based, and struct page itself has all the metadata - though, as you say,
it is a bit expensive to access.
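To make that second layout concrete, here is a rough C sketch - every name in it is made up for illustration, it is not Leon's series nor any proposed kernel API. The idea is a short list of segments, each naming one memory type and covering a run of entries in a flat range array, so a mapping loop tests the type once per segment instead of once per entry:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical memory-type tags; not an actual kernel enum. */
enum dma_mem_type { MEM_NORMAL, MEM_P2P, MEM_ENCRYPTED };

struct dma_range {		/* one contiguous chunk, like an SG entry */
	uint64_t addr;
	uint32_t len;
};

struct dma_segment {		/* run of ranges sharing one memory type */
	enum dma_mem_type type;
	unsigned int nranges;	/* entries covered in the flat range list */
};

/*
 * Append a range, merging into the last segment when the memory type
 * matches. Returns the number of segments in use. The caller supplies
 * fixed-size arrays here; a real implementation would grow them.
 */
static unsigned int dma_append_range(struct dma_segment *segs,
				     unsigned int nsegs,
				     struct dma_range *ranges,
				     unsigned int *nranges,
				     enum dma_mem_type type,
				     uint64_t addr, uint32_t len)
{
	ranges[(*nranges)++] = (struct dma_range){ .addr = addr, .len = len };
	if (nsegs && segs[nsegs - 1].type == type) {
		segs[nsegs - 1].nranges++;
		return nsegs;
	}
	segs[nsegs] = (struct dma_segment){ .type = type, .nranges = 1 };
	return nsegs + 1;
}
```

For a transaction that is entirely one type the segment list collapses to a single entry, which is the common case Christoph is optimizing for; a hodgepodge DMABUF just grows more segments.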
> > Or worse, whatever thing is inside a DMABUF from a DRM
> > driver. DMABUF's can have a (dynamic!) mixture of P2P and regular,
> > AFAIK based on the GPU's migration behavior.
>
> And that's fine. We just need to track it efficiently.

Right, DMABUF/etc will return something that has a list of physical
addresses and some metadata to indicate the "p2p memory provider" for
the P2P part. Perhaps it could be as simple as 1 bit in the physical
address/length and a global "P2P memory provider" pointer for the
entire DMABUF. Unclear to me right now, but sure.

> > Or triple worse, ODP can dynamically change on a page by page basis
> > the type depending on what hmm_range_fault() returns.
>
> Same. If this changes all the time you need to track it. And we
> should find a way to share the code if we have multiple users for it.

ODP (for at least the foreseeable future) is simpler because it is
always struct page based, so we don't need more metadata if we pay the
cost to reach into the struct page. I suspect that is the right trade
off for hmm_range_fault users.

> But most DMA API consumers will never see P2P, and when they see it
> it will be static. So don't build the DMA API to automatically do
> the (not exactly super cheap) checks and add complexity for it.

Okay, I think I get what you'd like to see. If we are going to make
caller-provided uniformity a requirement, let's imagine a formal memory
type idea to help keep this a little abstracted:

 DMA_MEMORY_TYPE_NORMAL
 DMA_MEMORY_TYPE_P2P_NOT_ACS
 DMA_MEMORY_TYPE_ENCRYPTED
 DMA_MEMORY_TYPE_BOUNCE_BUFFER // ??
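Spelled as C, the tags are just an enum plus a dispatch helper; this is a sketch only - the enum values come from the list above, but the helper and the path names are invented here, not proposed API:

```c
/* The type tags proposed above, written out as a C enum. */
enum dma_memory_type {
	DMA_MEMORY_TYPE_NORMAL,
	DMA_MEMORY_TYPE_P2P_NOT_ACS,
	DMA_MEMORY_TYPE_ENCRYPTED,
	DMA_MEMORY_TYPE_BOUNCE_BUFFER,
};

/* Hypothetical: the three mapping strategies a driver would pick from. */
enum dma_map_path {
	MAP_PATH_IOMMU_BATCH,	/* one IOVA range for the whole transaction */
	MAP_PATH_P2P,		/* per-range P2P translation */
	MAP_PATH_CPU_PAGE,	/* per-range CPU page mapping */
};

/*
 * Hypothetical helper: because the caller guarantees the transaction is
 * uniform, the path can be chosen once up front rather than per entry.
 */
static enum dma_map_path dma_pick_path(enum dma_memory_type type,
				       int has_iommu)
{
	if (type == DMA_MEMORY_TYPE_NORMAL && has_iommu)
		return MAP_PATH_IOMMU_BATCH;
	if (type == DMA_MEMORY_TYPE_P2P_NOT_ACS)
		return MAP_PATH_P2P;
	return MAP_PATH_CPU_PAGE;
}
```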
Then maybe the driver flow looks like:

	if (transaction.memory_type == DMA_MEMORY_TYPE_NORMAL &&
	    dma_api_has_iommu(dev)) {
		struct dma_api_iommu_state state;

		dma_api_iommu_start(&state, transaction.num_pages);
		for_each_range(transaction, range)
			dma_api_iommu_map_range(&state, range.start_page,
						range.length);
		num_hwsgls = 1;
		hwsgl.addr = state.iova;
		hwsgl.length = transaction.length;
		dma_api_iommu_batch_done(&state);
	} else if (transaction.memory_type == DMA_MEMORY_TYPE_P2P_NOT_ACS) {
		num_hwsgls = transaction.num_sgls;
		for_each_range(transaction, range) {
			hwsgl[i].addr = dma_api_p2p_not_acs_map(
				range.start_physical, range.length,
				p2p_memory_provider);
			hwsgl[i].len = range.size;
		}
	} else {
		/* Must be DMA_MEMORY_TYPE_NORMAL (without an iommu),
		 * DMA_MEMORY_TYPE_ENCRYPTED, DMA_MEMORY_TYPE_BOUNCE_BUFFER? */
		num_hwsgls = transaction.num_sgls;
		for_each_range(transaction, range) {
			hwsgl[i].addr = dma_api_map_cpu_page(range.start_page,
							     range.length);
			hwsgl[i].len = range.size;
		}
	}

And the hmm_range_fault case is sort of like:

	struct dma_api_iommu_state state;

	dma_api_iommu_start(&state, mr.num_pages);
	[..]
	hmm_range_fault(...)
	if (present)
		dma_link_page(&state, faulting_address_offset, page);
	else
		dma_unlink_page(&state, faulting_address_offset, page);

Is this looking closer?

> > So I take it as a requirement that RDMA MUST make single MR's out of a
> > hodgepodge of page types. RDMA MRs cannot be split. Multiple MR's are
> > not a functional replacement for a single MR.
>
> But MRs consolidate multiple dma addresses anyway.

I'm not sure I understand this?
> > Go back to the start of what we are trying to do here:
> >
> > 1) Make a DMA API that can support hmm_range_fault() users in a
> >    sensible and performant way
> > 2) Make a DMA API that can support RDMA MRs backed by DMABUF's, and
> >    user VA's without restriction
> > 3) Allow to remove scatterlist from BIO paths
> > 4) Provide a DMABUF API that is not scatterlist that can feed into
> >    the new DMA API - again supporting DMABUF's hodgepodge of types.
> >
> > I'd like to do all of these things. I know 3 is your highest priority,
> > but it is my lowest :)
>
> Well, 3 and 4. And 3 is not just limited to bio, but all the other
> pointless scatterlist uses.

Well, I didn't write a "5) remove all the other pointless scatterlist
cases" :)

Anyhow, I think we all agree on the high-level objective; we just need
to get to an API that fuses all of these goals together.

To go back to my main thesis: I would like a high-performance, low-level
DMA API that is capable enough that it could implement scatterlist
dma_map_sg(), and thus also implement any future scatterlist_v2, bio,
hmm_range_fault, or any other thing we come up with on top of it. This
is broadly what I thought we agreed to at LSF last year.

Jason