From: Jesper Dangaard Brouer
Subject: Re: [net-next V4 PATCH 3/5] bpf: cpumap xdp_buff to skb conversion and allocation
Date: Fri, 6 Oct 2017 14:11:40 +0200
To: Daniel Borkmann
Cc: netdev@vger.kernel.org, jakub.kicinski@netronome.com,
 "Michael S. Tsirkin", pavel.odintsov@gmail.com, Jason Wang,
 mchan@broadcom.com, John Fastabend, peter.waskiewicz.jr@intel.com,
 Daniel Borkmann, Alexei Starovoitov, Andy Gospodarek, brouer@redhat.com

On Thu, 05 Oct 2017 12:22:43 +0200
Daniel Borkmann wrote:

> On 10/04/2017 02:03 PM, Jesper Dangaard Brouer wrote:
> [...]
> >  static int cpu_map_kthread_run(void *data)
> >  {
> >  	struct bpf_cpu_map_entry *rcpu = data;
> >
> >  	set_current_state(TASK_INTERRUPTIBLE);
> >  	while (!kthread_should_stop()) {
> > +		unsigned int processed = 0, drops = 0;
> >  		struct xdp_pkt *xdp_pkt;
> >
> > -		schedule();
> > -		/* Do work */
> > -		while ((xdp_pkt = ptr_ring_consume(rcpu->queue))) {
> > -			/* For now just "refcnt-free" */
> > -			page_frag_free(xdp_pkt);
> > +		/* Release CPU reschedule checks */
> > +		if (__ptr_ring_empty(rcpu->queue)) {
> > +			schedule();
> > +		} else {
> > +			cond_resched();
> > +		}
> > +
> > +		/* Process packets in rcpu->queue */
> > +		local_bh_disable();
> > +		/*
> > +		 * The bpf_cpu_map_entry is single consumer, with this
> > +		 * kthread CPU pinned. Lockless access to ptr_ring
> > +		 * consume side valid as no-resize allowed of queue.
> > +		 */
> > +		while ((xdp_pkt = __ptr_ring_consume(rcpu->queue))) {
> > +			struct sk_buff *skb;
> > +			int ret;
> > +
> > +			skb = cpu_map_build_skb(rcpu, xdp_pkt);
> > +			if (!skb) {
> > +				page_frag_free(xdp_pkt);
> > +				continue;
> > +			}
> > +
> > +			/* Inject into network stack */
> > +			ret = netif_receive_skb_core(skb);
>
> Don't we need to hold RCU read lock for above netif_receive_skb_core()?

Yes, I guess we do, but I'll place it in netif_receive_skb_core(), before
invoking __netif_receive_skb_core(), like netif_receive_skb() does around
__netif_receive_skb().

It looks like the RCU section protects:

 rx_handler = rcu_dereference(skb->dev->rx_handler);

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
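
For reference, the idea discussed above amounts to roughly the following
in net/core/dev.c -- a minimal, untested sketch rather than the posted
patch; the pfmemalloc=false argument and the EXPORT_SYMBOL() line are
assumptions here, not something confirmed by this thread:

 /* Sketch: mirror what netif_receive_skb() does around
  * __netif_receive_skb() -- take the RCU read lock inside the
  * helper, so the rcu_dereference(skb->dev->rx_handler) done in
  * __netif_receive_skb_core() is always covered, and callers
  * such as the cpumap kthread do not have to remember it.
  */
 int netif_receive_skb_core(struct sk_buff *skb)
 {
 	int ret;

 	rcu_read_lock();
 	ret = __netif_receive_skb_core(skb, false); /* false: not a pfmemalloc skb */
 	rcu_read_unlock();

 	return ret;
 }
 EXPORT_SYMBOL(netif_receive_skb_core);

With that shape, the cpumap kthread can keep calling
netif_receive_skb_core() directly from its ptr_ring consume loop without
taking the RCU read lock itself.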