Date: Sat, 18 May 2024 11:46:09 -0700
Subject: Re: [PATCH net-next v9 04/14] netdev: support binding dma-buf to netdevice
From: David Wei
To: Mina Almasry, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-alpha@vger.kernel.org,
	linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
	linux-arch@vger.kernel.org, bpf@vger.kernel.org,
	linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org,
	dri-devel@lists.freedesktop.org
Cc: Donald Hunter, Jakub Kicinski, "David S. Miller", Eric Dumazet,
	Paolo Abeni, Jonathan Corbet, Richard Henderson, Ivan Kokshaysky,
	Matt Turner, Thomas Bogendoerfer, "James E.J. Bottomley",
	Helge Deller, Andreas Larsson, Jesper Dangaard Brouer,
	Ilias Apalodimas, Steven Rostedt, Masami Hiramatsu,
	Mathieu Desnoyers, Arnd Bergmann, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
	Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Steffen Klassert, Herbert Xu, David Ahern, Willem de Bruijn,
	Shuah Khan, Sumit Semwal, Christian König, Pavel Begunkov,
	Jason Gunthorpe, Yunsheng Lin, Shailend Chand,
	Harshitha Ramamurthy, Shakeel Butt, Jeroen de Borst,
	Praveen Kaligineedi, Kaiyuan Zhang
References: <20240510232128.1105145-1-almasrymina@google.com>
	<20240510232128.1105145-5-almasrymina@google.com>
In-Reply-To: <20240510232128.1105145-5-almasrymina@google.com>

On 2024-05-10 16:21, Mina Almasry wrote:
> +void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
> +{
> +	struct netdev_rx_queue *rxq;
> +	unsigned long xa_idx;
> +	unsigned int rxq_idx;
> +
> +	if (!binding)
> +		return;
> +
> +	if (binding->list.next)
> +		list_del(&binding->list);
> +
> +	xa_for_each(&binding->bound_rxq_list, xa_idx, rxq) {
> +		if (rxq->mp_params.mp_priv == binding) {
> +			/* We hold the rtnl_lock while binding/unbinding
> +			 * dma-buf, so we can't race with another thread that
> +			 * is also modifying this value. However, the page_pool
> +			 * may read this config while it's creating its
> +			 * rx-queues. WRITE_ONCE() here to match the
> +			 * READ_ONCE() in the page_pool.
> +			 */
> +			WRITE_ONCE(rxq->mp_params.mp_ops, NULL);
> +			WRITE_ONCE(rxq->mp_params.mp_priv, NULL);
> +
> +			rxq_idx = get_netdev_rx_queue_index(rxq);
> +
> +			netdev_rx_queue_restart(binding->dev, rxq_idx);

What if netdev_rx_queue_restart() fails? Depending on where it failed, a
queue might still be filled from the struct net_devmem_dmabuf_binding.
This is one downside of the current situation, with
netdev_rx_queue_restart() needing to do allocations each time. Perhaps
fall back to a full device reset if an individual queue restart fails?