Date: Wed, 23 Apr 2025 14:24:42 -0400
From: "Michael S. Tsirkin"
To: Mina Almasry
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org, io-uring@vger.kernel.org,
	virtualization@lists.linux.dev, kvm@vger.kernel.org,
	linux-kselftest@vger.kernel.org, Donald Hunter, Jakub Kicinski,
	"David S. Miller", Eric Dumazet, Paolo Abeni, Simon Horman,
	Jonathan Corbet, Andrew Lunn, Jeroen de Borst, Harshitha Ramamurthy,
	Kuniyuki Iwashima, Willem de Bruijn, Jens Axboe, Pavel Begunkov,
	David Ahern, Neal Cardwell, Stefan Hajnoczi, Stefano Garzarella,
	Jason Wang, Xuan Zhuo, Eugenio Pérez, Shuah Khan, sdf@fomichev.me,
	dw@davidwei.uk, Jamal Hadi Salim, Victor Nogueira, Pedro Tammela,
	Samiullah Khawaja, Kaiyuan Zhang
Subject: Re: [PATCH net-next v10 4/9] net: devmem: Implement TX path
Message-ID: <20250423140931-mutt-send-email-mst@kernel.org>
References: <20250423031117.907681-1-almasrymina@google.com>
 <20250423031117.907681-5-almasrymina@google.com>
In-Reply-To: <20250423031117.907681-5-almasrymina@google.com>

some nits

On Wed, Apr 23, 2025 at 03:11:11AM +0000, Mina Almasry wrote:
> @@ -189,43 +200,44 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
> 	}
>
> 	binding->dev = dev;
> -
> -	err = xa_alloc_cyclic(&net_devmem_dmabuf_bindings, &binding->id,
> -			      binding, xa_limit_32b, &id_alloc_next,
> -			      GFP_KERNEL);
> -	if (err < 0)
> -		goto err_free_binding;
> -
> 	xa_init_flags(&binding->bound_rxqs, XA_FLAGS_ALLOC);
> -
> 	refcount_set(&binding->ref, 1);
> -
> 	binding->dmabuf = dmabuf;

given you keep iterating, don't tweak whitespace in the same patch -
it will make the review a tiny bit easier.
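(e.g. the blank-line removals around xa_init_flags()/refcount_set() above
are pure churn here. just a suggestion: `git add -p` lets you stage only
the functional hunks, and the whitespace fixes can then go in a separate
trivial cleanup patch.)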
> 	binding->attachment = dma_buf_attach(binding->dmabuf, dev->dev.parent);
> 	if (IS_ERR(binding->attachment)) {
> 		err = PTR_ERR(binding->attachment);
> 		NL_SET_ERR_MSG(extack, "Failed to bind dmabuf to device");
> -		goto err_free_id;
> +		goto err_free_binding;
> 	}
>
> 	binding->sgt = dma_buf_map_attachment_unlocked(binding->attachment,
> -						       DMA_FROM_DEVICE);
> +						       direction);
> 	if (IS_ERR(binding->sgt)) {
> 		err = PTR_ERR(binding->sgt);
> 		NL_SET_ERR_MSG(extack, "Failed to map dmabuf attachment");
> 		goto err_detach;
> 	}
>
> +	if (direction == DMA_TO_DEVICE) {
> +		binding->tx_vec = kvmalloc_array(dmabuf->size / PAGE_SIZE,
> +						 sizeof(struct net_iov *),
> +						 GFP_KERNEL);
> +		if (!binding->tx_vec) {
> +			err = -ENOMEM;
> +			goto err_unmap;
> +		}
> +	}
> +
> 	/* For simplicity we expect to make PAGE_SIZE allocations, but the
> 	 * binding can be much more flexible than that. We may be able to
> 	 * allocate MTU sized chunks here. Leave that for future work...
> 	 */
> -	binding->chunk_pool =
> -		gen_pool_create(PAGE_SHIFT, dev_to_node(&dev->dev));
> +	binding->chunk_pool = gen_pool_create(PAGE_SHIFT,
> +					      dev_to_node(&dev->dev));
> 	if (!binding->chunk_pool) {
> 		err = -ENOMEM;
> -		goto err_unmap;
> +		goto err_tx_vec;
> 	}
>
> 	virtual = 0;
> @@ -270,24 +282,34 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
> 			niov->owner = &owner->area;
> 			page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov),
> 						      net_devmem_get_dma_addr(niov));
> +			if (direction == DMA_TO_DEVICE)
> +				binding->tx_vec[owner->area.base_virtual / PAGE_SIZE + i] = niov;
> 		}
>
> 		virtual += len;
> 	}
>
> +	err = xa_alloc_cyclic(&net_devmem_dmabuf_bindings, &binding->id,
> +			      binding, xa_limit_32b, &id_alloc_next,
> +			      GFP_KERNEL);
> +	if (err < 0)
> +		goto err_free_id;
> +
> 	return binding;
>
> +err_free_id:
> +	xa_erase(&net_devmem_dmabuf_bindings, binding->id);
> err_free_chunks:
> 	gen_pool_for_each_chunk(binding->chunk_pool,
> 				net_devmem_dmabuf_free_chunk_owner, NULL);
> 	gen_pool_destroy(binding->chunk_pool);
> +err_tx_vec:
> +	kvfree(binding->tx_vec);
> err_unmap:
> 	dma_buf_unmap_attachment_unlocked(binding->attachment, binding->sgt,
> 					  DMA_FROM_DEVICE);
> err_detach:
> 	dma_buf_detach(dmabuf, binding->attachment);
> -err_free_id:
> -	xa_erase(&net_devmem_dmabuf_bindings, binding->id);
> err_free_binding:
> 	kfree(binding);
> err_put_dmabuf:
> @@ -295,6 +317,21 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
> 	return ERR_PTR(err);
> }
>
> +struct net_devmem_dmabuf_binding *net_devmem_lookup_dmabuf(u32 id)
> +{
> +	struct net_devmem_dmabuf_binding *binding;
> +
> +	rcu_read_lock();
> +	binding = xa_load(&net_devmem_dmabuf_bindings, id);
> +	if (binding) {
> +		if (!net_devmem_dmabuf_binding_get(binding))
> +			binding = NULL;
> +	}
> +	rcu_read_unlock();
> +
> +	return binding;
> +}
> +
> void net_devmem_get_net_iov(struct net_iov *niov)
> {
> 	net_devmem_dmabuf_binding_get(net_devmem_iov_binding(niov));
> @@ -305,6 +342,53 @@ void net_devmem_put_net_iov(struct net_iov *niov)
> 	net_devmem_dmabuf_binding_put(net_devmem_iov_binding(niov));
> }
>
> +struct net_devmem_dmabuf_binding *net_devmem_get_binding(struct sock *sk,
> +							 unsigned int dmabuf_id)
> +{
> +	struct net_devmem_dmabuf_binding *binding;
> +	struct dst_entry *dst = __sk_dst_get(sk);
> +	int err = 0;
> +
> +	binding = net_devmem_lookup_dmabuf(dmabuf_id);

why not initialize binding together with the declaration?
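i.e. something like this (untested, just to illustrate the nit; the rest
of the function is unchanged):

	/* look up the binding right at the declaration */
	struct net_devmem_dmabuf_binding *binding =
			net_devmem_lookup_dmabuf(dmabuf_id);
	struct dst_entry *dst = __sk_dst_get(sk);
	int err = 0;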
> +	if (!binding || !binding->tx_vec) {
> +		err = -EINVAL;
> +		goto out_err;
> +	}
> +
> +	/* The dma-addrs in this binding are only reachable to the corresponding
> +	 * net_device.
> +	 */
> +	if (!dst || !dst->dev || dst->dev->ifindex != binding->dev->ifindex) {
> +		err = -ENODEV;
> +		goto out_err;
> +	}
> +
> +	return binding;
> +
> +out_err:
> +	if (binding)
> +		net_devmem_dmabuf_binding_put(binding);
> +
> +	return ERR_PTR(err);
> +}
> +
> +struct net_iov *
> +net_devmem_get_niov_at(struct net_devmem_dmabuf_binding *binding,
> +		       size_t virt_addr, size_t *off, size_t *size)
> +{
> +	size_t idx;
> +
> +	if (virt_addr >= binding->dmabuf->size)
> +		return NULL;
> +
> +	idx = virt_addr / PAGE_SIZE;

init this where it's declared? or where it's used? (a sketch of the
first option is at the end of this mail.)

> +
> +	*off = virt_addr % PAGE_SIZE;
> +	*size = PAGE_SIZE - *off;
> +
> +	return binding->tx_vec[idx];
> +}
> +
> /*** "Dmabuf devmem memory provider" ***/
>
> int mp_dmabuf_devmem_init(struct page_pool *pool)

-- 
MST
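P.S. the sketch promised above for the net_devmem_get_niov_at() nit
(untested, just to show the declaration-time variant):

	struct net_iov *
	net_devmem_get_niov_at(struct net_devmem_dmabuf_binding *binding,
			       size_t virt_addr, size_t *off, size_t *size)
	{
		/* idx is plain arithmetic on virt_addr, so initializing it
		 * at the declaration, before the bounds check, is harmless.
		 */
		size_t idx = virt_addr / PAGE_SIZE;

		if (virt_addr >= binding->dmabuf->size)
			return NULL;

		*off = virt_addr % PAGE_SIZE;
		*size = PAGE_SIZE - *off;

		return binding->tx_vec[idx];
	}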