From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 13 Nov 2023 18:05:54 -0500
From: Jakub Kicinski
To: Mina Almasry, Yunsheng Lin
Cc: davem@davemloft.net, pabeni@redhat.com, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, Willem de Bruijn, Kaiyuan Zhang,
 Jesper Dangaard Brouer, Ilias Apalodimas, Eric Dumazet
Subject: Re: [PATCH RFC 3/8] memory-provider: dmabuf devmem memory provider
Message-ID: <20231113180554.1d1c6b1a@kernel.org>
References: <20231113130041.58124-1-linyunsheng@huawei.com>
 <20231113130041.58124-4-linyunsheng@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 13 Nov 2023 05:42:16 -0800 Mina Almasry wrote:
> You're doing exactly what I think you're doing, and what was nacked in
> RFC v1.
>
> You've converted 'struct page_pool_iov' to essentially become a
> duplicate of 'struct page'. Then, you're casting page_pool_iov* into
> struct page* in mp_dmabuf_devmem_alloc_pages(), and then you're
> calling mm APIs like page_ref_*() on the page_pool_iov* because
> you've fooled the mm stack into thinking dma-buf memory is a struct
> page.
>
> RFC v1 was almost exactly the same, except that instead of creating a
> duplicate definition of struct page, it just allocated 'struct page'
> directly, rather than allocating another struct identical to struct
> page and casting it into struct page.
>
> I don't think what you're doing here reverses the nacks I got in RFC
> v1. You also did not CC any dma-buf or mm people on this proposal who
> would bring up these concerns again.

Right, but the mirror struct has some appeal to a non-mm person like
myself. The problem, IIUC, is that this patch is the wrong way around:
we should be converting everyone who can deal with non-host memory to
struct page_pool_iov. Using page_address() on a ppiov, which hns3
seems to do in this series, does not compute for me.

Then we can turn the existing non-iov helpers into thin wrappers:
just a cast from struct page to struct page_pool_iov, and a call to
the iov helper. Again - never cast the other way around.

Also, I think this conversion can be done completely separately from
the mem provider changes. Just add struct page_pool_iov and start
using it.

Does that make more sense?