From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 18 May 2024 11:46:09 -0700
From: David Wei
Subject: Re: [PATCH net-next v9 04/14] netdev: support binding dma-buf to netdevice
To: Mina Almasry, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-alpha@vger.kernel.org,
 linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
 sparclinux@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
 linux-arch@vger.kernel.org, bpf@vger.kernel.org,
 linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org
Cc: Donald Hunter, Jakub Kicinski, "David S. Miller", Eric Dumazet,
 Paolo Abeni, Jonathan Corbet, Richard Henderson, Ivan Kokshaysky,
 Matt Turner, Thomas Bogendoerfer, "James E.J. Bottomley", Helge Deller,
 Andreas Larsson, Jesper Dangaard Brouer, Ilias Apalodimas,
 Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers, Arnd Bergmann,
 Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
 Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
 John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
 Steffen Klassert, Herbert Xu, David Ahern, Willem de Bruijn,
 Shuah Khan, Sumit Semwal, Christian König, Pavel Begunkov,
 Jason Gunthorpe, Yunsheng Lin, Shailend Chand, Harshitha Ramamurthy,
 Shakeel Butt, Jeroen de Borst, Praveen Kaligineedi,
 Willem de Bruijn, Kaiyuan Zhang
References: <20240510232128.1105145-1-almasrymina@google.com>
 <20240510232128.1105145-5-almasrymina@google.com>
In-Reply-To: <20240510232128.1105145-5-almasrymina@google.com>

On 2024-05-10 16:21, Mina Almasry wrote:
> +void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
> +{
> +	struct netdev_rx_queue *rxq;
> +	unsigned long xa_idx;
> +	unsigned int rxq_idx;
> +
> +	if (!binding)
> +		return;
> +
> +	if (binding->list.next)
> +		list_del(&binding->list);
> +
> +	xa_for_each(&binding->bound_rxq_list, xa_idx, rxq) {
> +		if (rxq->mp_params.mp_priv == binding) {
> +			/* We hold the rtnl_lock while binding/unbinding
> +			 * dma-buf, so we can't race with another thread that
> +			 * is also modifying this value. However, the page_pool
> +			 * may read this config while it's creating its
> +			 * rx-queues. WRITE_ONCE() here to match the
> +			 * READ_ONCE() in the page_pool.
> +			 */
> +			WRITE_ONCE(rxq->mp_params.mp_ops, NULL);
> +			WRITE_ONCE(rxq->mp_params.mp_priv, NULL);
> +
> +			rxq_idx = get_netdev_rx_queue_index(rxq);
> +
> +			netdev_rx_queue_restart(binding->dev, rxq_idx);

What if netdev_rx_queue_restart() fails? Depending on where it failed, a
queue might still be filled from struct net_devmem_dmabuf_binding.
This is one downside of the current situation, where
netdev_rx_queue_restart() needs to do allocations each time and so can
fail partway through. Perhaps fall back to a full device reset if an
individual queue restart fails?