From: Jesper Dangaard Brouer
Date: Tue, 20 Jun 2023 17:12:41 +0200
Subject: Re: Memory providers multiplexing (Was: [PATCH net-next v4 4/5] page_pool: remove PP_FLAG_PAGE_FRAG flag)
To: Jakub Kicinski, Jesper Dangaard Brouer
Cc: brouer@redhat.com, Alexander Duyck, Yunsheng Lin, davem@davemloft.net,
 pabeni@redhat.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 Lorenzo Bianconi, Yisen Zhuang, Salil Mehta, Eric Dumazet, Sunil Goutham,
 Geetha sowjanya, Subbaraya Sundeep, hariprasad, Saeed Mahameed,
 Leon Romanovsky, Felix Fietkau, Ryder Lee, Shayne Chen, Sean Wang,
 Kalle Valo, Matthias Brugger, AngeloGioacchino Del Regno,
 Jesper Dangaard Brouer, Ilias Apalodimas, linux-rdma@vger.kernel.org,
 linux-wireless@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-mediatek@lists.infradead.org, Jonathan Lemon
Message-ID: <6909d28b-0ffc-a02a-235b-7bdce594965d@redhat.com>
In-Reply-To: <20230619110705.106ec599@kernel.org>
References: <20230612130256.4572-1-linyunsheng@huawei.com>
 <20230612130256.4572-5-linyunsheng@huawei.com>
 <20230614101954.30112d6e@kernel.org>
 <8c544cd9-00a3-2f17-bd04-13ca99136750@huawei.com>
 <20230615095100.35c5eb10@kernel.org>
 <908b8b17-f942-f909-61e6-276df52a5ad5@huawei.com>
 <72ccf224-7b45-76c5-5ca9-83e25112c9c6@redhat.com>
 <20230616122140.6e889357@kernel.org>
 <20230619110705.106ec599@kernel.org>

On 19/06/2023 20.07, Jakub Kicinski wrote:
> On Fri, 16 Jun 2023 22:42:35 +0200 Jesper Dangaard Brouer wrote:
>>> Former is better for huge pages, latter is better for IO mem
>>> (peer-to-peer DMA). I wonder if you have a different use case which
>>> requires a different model :(
>>
>> I want the network stack SKBs (and XDP) to support different memory
>> types for the "head" frame and the "data-frags". Eric has described
>> this idea before: hardware will do header-split, so the TCP data part
>> lands in a separate page/frag, making it faster for TCP streams, but
>> this can be used for much more.
>>
>> My proposed use-cases involve more than TCP. We can easily imagine
>> NVMe protocol header-split, where the data-frag could be a mem_type
>> that actually belongs to the hard disk (maybe the CPU cannot even read
>> it). The same scenario goes for GPU memory, which is the AI use-case.
>> IIRC, Jonathan has previously sent patches for the GPU use-case.
>>
>> I really hope we can work in this direction together,
>
> Perfect, that's also the use case I had in mind. The huge page thing
> was just a quick thing to implement as a PoC (although useful in its
> own right, one day I'll find the time to finish it, sigh).
>
> That said, I couldn't convince myself that for a peer-to-peer setup we
> have enough space in struct page to store all the information we need.
> Or that we'd get a struct page at all, and not just a region of memory
> with no struct page * allocated :S

Big ideas are good, but I think we should start smaller and evolve.

>
> That'd require serious surgery on the page pool's fast paths to work
> around.
>
> I haven't dug into the details, tho. If you think we can use page pool
> as a frontend for iouring and/or p2p memory that'd be awesome!
>

Hmm... I don't like the sound of this. My point is that we should
create a more pluggable memory system for the netstack, and NOT try to
extend page_pool to cover all use-cases.

> The workaround solution I had in mind would be to create a narrower API
> for just data pages. Since we'd need to sprinkle ifs anyway, pull them
> up close to the call site, allowing page pool to be switched for a
> completely different implementation, like the one Jonathan coded up for
> iouring. Basically
>
> $name_alloc_page(queue)
> {
>         if (queue->pp)
>                 return page_pool_dev_alloc_pages(queue->pp);
>         else if (queue->iouring..)
>                 ...
> }

Yes, this is more the direction I'm thinking of. In many cases you
don't need this if-statement helper in the driver, as the driver's
RX-side code will know the API used up front. The TX completion side
will need this kind of multiplexing return helper, to return the pages
to the correct memory allocator type (page_pool being one of them).
See the concept in __xdp_return() [1].

Performance-wise, function pointers are slow due to RETPOLINE, but
switch-case statements (below a certain size) become a jump table,
which is fast. See [1].

[1] https://elixir.bootlin.com/linux/v6.4-rc7/source/net/core/xdp.c#L377

Regarding room in "struct page", notice that page->pp_magic has plenty
of room for e.g. storing xdp_mem_type or even xdp_mem_info (which also
contains an ID).

--Jesper
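
For readers who want to see what the switch-based return-path
multiplexing described above could look like, here is a minimal sketch
modelled loosely on __xdp_return() [1]. The names netmem_type,
netmem_return_page() and the NETMEM_* constants are hypothetical and
only parallel the existing MEM_TYPE_* values in enum xdp_mem_type; the
NETMEM_OTHER_PROVIDER case is a placeholder for a possible io_uring or
p2p provider, not an existing API.

/*
 * Illustrative sketch only, not in-tree code: a TX-completion return
 * helper that multiplexes on a memory-type tag, in the spirit of
 * __xdp_return() in net/core/xdp.c [1].
 */
#include <linux/mm.h>
#include <net/page_pool.h>

enum netmem_type {                   /* hypothetical name */
	NETMEM_PAGE_ORDER0,          /* plain pages, released via put_page() */
	NETMEM_PAGE_POOL,            /* pages owned by a page_pool instance */
	NETMEM_OTHER_PROVIDER,       /* placeholder for a future provider */
};

static void netmem_return_page(struct page *page, enum netmem_type type,
			       bool napi_direct)
{
	/* A small switch compiles to a jump table, so the return fast
	 * path avoids an indirect call behind a retpoline.
	 */
	switch (type) {
	case NETMEM_PAGE_POOL:
		/* pp pages carry a back-pointer to their owning pool */
		page_pool_put_full_page(page->pp, page, napi_direct);
		break;
	case NETMEM_PAGE_ORDER0:
		put_page(page);
		break;
	case NETMEM_OTHER_PROVIDER:
		/* would call into that provider's own release hook */
		break;
	}
}

The switch here is the point Jesper makes above: the driver's RX side
can call its allocator directly, while the shared completion path only
needs one small, predictable branch per page to route it back to the
right provider.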