From: Stephen Hemminger
Subject: Re: Locking in network code
Date: Mon, 7 May 2018 07:48:41 -0700
Message-ID: <20180507074841.6fbdcfac@xeon-e3>
References: <1525614224.300611.1362511632.7D50FB8C@webmail.messagingengine.com>
To: Alexander Duyck
Cc: "Jacob S. Moroni", Netdev

On Sun, 6 May 2018 09:16:26 -0700
Alexander Duyck wrote:

> On Sun, May 6, 2018 at 6:43 AM, Jacob S. Moroni wrote:
> > Hello,
> >
> > I have a stupid question regarding which variant of spin_lock to use
> > throughout the network stack, and inside RX handlers specifically.
> >
> > It's my understanding that skbuffs are normally passed into the stack
> > from soft IRQ context if the device is using NAPI, and hard IRQ
> > context if it's not using NAPI (and I guess process context too if the
> > driver does its own workqueue thing).
> >
> > So, that means that handlers registered with netdev_rx_handler_register
> > may end up being called from any context.
>
> I am pretty sure the Rx handlers are all called from softirq context.
> The hard IRQ will just call netif_rx, which will queue the packet up to
> be handled in the soft IRQ later.

The only exception is the netpoll code, which runs the stack in hardirq context.

> > However, the RX handler in the macvlan code calls ip_check_defrag,
> > which could eventually lead to a call to ip_defrag, which ends
> > up taking a regular spin_lock around the call to ip_frag_queue.
> >
> > Is this a risk of deadlock, and if not, why?
> >
> > What if you're running a system with one CPU and a packet fragment
> > arrives on a NAPI interface, then, while the spin_lock is held,
> > another fragment somehow arrives on another interface which does
> > its processing in hard IRQ context?
> >
> > --
> > Jacob S. Moroni
> > mail@jakemoroni.com
>
> Take a look at the netif_rx code and it should answer most of your
> questions. Basically everything is handed off from the hard IRQ to the
> soft IRQ via a backlog queue and then handled in net_rx_action.
>
> - Alex