Subject: Re: [RFC PATCH v3 09/12] net: add support for skbs with unreadable frags
From: Stanislav Fomichev
To: Mina Almasry
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org,
    dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
    "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    Jesper Dangaard Brouer, Ilias Apalodimas, Arnd Bergmann, David Ahern,
    Willem de Bruijn, Shuah Khan, Sumit Semwal, Christian König,
    Shakeel Butt, Jeroen de Borst, Praveen Kaligineedi, Willem de Bruijn,
    Kaiyuan Zhang
Date: Mon, 6 Nov 2023 12:56:39 -0800
Message-ID: (unknown)
In-Reply-To: <20231106024413.2801438-10-almasrymina@google.com>
References: <20231106024413.2801438-1-almasrymina@google.com>
 <20231106024413.2801438-10-almasrymina@google.com>
List-ID: linux-kernel@vger.kernel.org

On 11/05, Mina Almasry wrote:
> For device memory TCP, we expect the skb headers to be available in host
> memory for access, and we expect the skb frags to be in device memory
> and inaccessible to the host. We expect no mixing and matching of device
> memory frags (inaccessible) with host memory frags (accessible) in the
> same skb.
>
> Add a skb->devmem flag which indicates whether the frags in this skb
> are device memory frags or not.
>
> __skb_fill_page_desc() now checks frags added to skbs for page_pool_iovs,
> and marks the skb as skb->devmem accordingly.
>
> Add checks through the network stack to avoid accessing the frags of
> devmem skbs and avoid coalescing devmem skbs with non-devmem skbs.
>
> Signed-off-by: Willem de Bruijn
> Signed-off-by: Kaiyuan Zhang
> Signed-off-by: Mina Almasry

[..]

> -	snaplen = skb->len;
> +	snaplen = skb_frags_not_readable(skb) ? skb_headlen(skb) : skb->len;
>
> 	res = run_filter(skb, sk, snaplen);
> 	if (!res)
> @@ -2279,7 +2279,7 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
> 		}
> 	}
>
> -	snaplen = skb->len;
> +	snaplen = skb_frags_not_readable(skb) ? skb_headlen(skb) : skb->len;
>
> 	res = run_filter(skb, sk, snaplen);
> 	if (!res)

Not sure it covers 100% of bpf. We might need to double-check
bpf_xdp_copy_buf, which has its own non-skb shinfo and frags. And in
general, XDP can reference those shinfo frags early... (the XDP part
happens before we create an skb with the devmem association.)