From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Jacob S. Moroni"
Subject: Locking in network code
Date: Sun, 06 May 2018 09:43:44 -0400
Message-ID: <1525614224.300611.1362511632.7D50FB8C@webmail.messagingengine.com>
To: netdev@vger.kernel.org

Hello,

I have a stupid question regarding which variant of spin_lock to use
throughout the network stack, and inside RX handlers specifically.

It's my understanding that skbuffs are normally passed into the stack
from soft IRQ context if the device is using NAPI, and from hard IRQ
context if it's not (and, I guess, from process context too if the
driver does its own workqueue thing). So handlers registered with
netdev_rx_handler_register may end up being called from any of these
contexts.

However, the RX handler in the macvlan code calls ip_check_defrag,
which can eventually lead to a call to ip_defrag, which takes a
regular spin_lock around the call to ip_frag_queue.

Is this a deadlock risk, and if not, why not? What if you're running a
system with one CPU and a packet fragment arrives on a NAPI interface,
and then, while that spin_lock is held, another fragment arrives on a
different interface that does its processing in hard IRQ context?

-- 
Jacob S. Moroni
mail@jakemoroni.com
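[For illustration, the scenario described above can be sketched roughly as follows. This is a hypothetical fragment, not code from the stack itself; the lock and function names are made up. It shows why a plain spin_lock() is only safe if every path that takes the lock runs in the same (or a softer) context:]

```c
#include <linux/spinlock.h>
#include <linux/skbuff.h>

/* Hypothetical lock protecting a fragment queue. */
static DEFINE_SPINLOCK(frag_lock);

/*
 * Called from softirq context (e.g. the NAPI poll path). With a plain
 * spin_lock(), a hard IRQ arriving on the same CPU while the lock is
 * held would spin forever on it if its handler also tried to take
 * frag_lock -- the single-CPU deadlock described above.
 */
static void frag_enqueue_softirq(struct sk_buff *skb)
{
	spin_lock(&frag_lock);
	/* ... enqueue fragment ... */
	spin_unlock(&frag_lock);
}

/*
 * If a hard-IRQ path really could take the same lock, every taker
 * would need the IRQ-disabling variant instead (or the hard-IRQ
 * possibility would have to be ruled out by design):
 */
static void frag_enqueue_any_context(struct sk_buff *skb)
{
	unsigned long flags;

	spin_lock_irqsave(&frag_lock, flags);
	/* ... enqueue fragment ... */
	spin_unlock_irqrestore(&frag_lock, flags);
}
```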