From: Stephen Hemminger <stephen@networkplumber.org>
To: Alexander Duyck <alexander.duyck@gmail.com>
Cc: "Jacob S. Moroni" <mail@jakemoroni.com>, Netdev <netdev@vger.kernel.org>
Subject: Re: Locking in network code
Date: Mon, 7 May 2018 07:48:41 -0700	[thread overview]
Message-ID: <20180507074841.6fbdcfac@xeon-e3> (raw)
In-Reply-To: <CAKgT0UddSGs-d0cbQV4YN8RLEqa478C7eG3HNFf1Y-yivWPUFw@mail.gmail.com>

On Sun, 6 May 2018 09:16:26 -0700
Alexander Duyck <alexander.duyck@gmail.com> wrote:

> On Sun, May 6, 2018 at 6:43 AM, Jacob S. Moroni <mail@jakemoroni.com> wrote:
> > Hello,
> >
> > I have a stupid question regarding which variant of spin_lock to use
> > throughout the network stack, and inside RX handlers specifically.
> >
> > It's my understanding that skbuffs are normally passed into the stack
> > from soft IRQ context if the device is using NAPI, and hard IRQ
> > context if it's not using NAPI (and I guess process context too if the
> > driver does its own workqueue thing).
> >
> > So, that means that handlers registered with netdev_rx_handler_register
> > may end up being called from any context.  
> 
> I am pretty sure the Rx handlers are all called from softirq context.
> The hard IRQ will just call netif_rx which will queue the packet up to
> be handled in the soft IRQ later.
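
Right -- the handoff boils down to "queue the skb with IRQs disabled,
raise the softirq, return".  A condensed sketch of that pattern (made-up
names, not the literal netif_rx()/enqueue_to_backlog() code; the real
backlog queue is per-CPU inside struct softnet_data):

  #include <linux/interrupt.h>
  #include <linux/skbuff.h>

  static struct sk_buff_head backlog_queue;    /* per-CPU in reality */

  static void example_netif_rx(struct sk_buff *skb)
  {
          unsigned long flags;

          /* Callable from hardirq: queue the skb and poke the softirq.
           * No protocol processing happens here; that all waits for
           * net_rx_action() to run in NET_RX_SOFTIRQ context.
           */
          local_irq_save(flags);
          __skb_queue_tail(&backlog_queue, skb);
          __raise_softirq_irqoff(NET_RX_SOFTIRQ);
          local_irq_restore(flags);
  }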

The only exception is the netpoll code, which runs the stack in hardirq context.
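
So anything reachable through netpoll cannot assume softirq context and
has to use the IRQ-disabling lock variant.  A sketch of the defensive
pattern (example_lock and example_handler are hypothetical names):

  #include <linux/spinlock.h>

  static DEFINE_SPINLOCK(example_lock);        /* hypothetical lock */

  static void example_handler(void)
  {
          unsigned long flags;

          /* Safe from any context, including hardirq via netpoll.
           * A plain spin_lock() here could deadlock on a single CPU
           * if a hardirq re-entered this path while the lock was
           * already held below it.
           */
          spin_lock_irqsave(&example_lock, flags);
          /* ... touch the shared state ... */
          spin_unlock_irqrestore(&example_lock, flags);
  }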

> > However, the RX handler in the macvlan code calls ip_check_defrag,
> > which could eventually lead to a call to ip_defrag, which ends
> > up taking a regular spin_lock around the call to ip_frag_queue.
> >
> > Is there a risk of deadlock here, and if not, why not?
> >
> > What if you're running a system with one CPU and a packet fragment
> > arrives on a NAPI interface, then, while the spin_lock is held,
> > another fragment somehow arrives on another interface which does
> > its processing in hard IRQ context?
> >
> > --
> >   Jacob S. Moroni
> >   mail@jakemoroni.com  
> 
> Take a look at the netif_rx code and it should answer most of your
> questions. Basically everything is handed off from the hard IRQ to the
> soft IRQ via a backlog queue and then handled in net_rx_action.
> 
> - Alex
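
That is also the answer to the one-CPU fragment scenario: by the time
either fragment reaches ip_defrag(), both are being processed out of
net_rx_action() in softirq context -- the hardirq path only queues the
skb and returns, it never takes the frag-queue lock.  A sketch of the
resulting rule of thumb (hypothetical names, not the actual ip_defrag()
code):

  #include <linux/spinlock.h>

  static DEFINE_SPINLOCK(frag_lock);           /* hypothetical */

  /* Only ever called from softirq context.  Softirqs do not preempt
   * each other on a given CPU and no hardirq path takes this lock,
   * so a plain spin_lock() is enough.
   */
  static void update_from_softirq(void)
  {
          spin_lock(&frag_lock);
          /* ... modify the fragment queue ... */
          spin_unlock(&frag_lock);
  }

  /* Called from process context: disable BHs first so a softirq on
   * this CPU cannot interrupt us and spin forever on the lock we
   * already hold.
   */
  static void update_from_process(void)
  {
          spin_lock_bh(&frag_lock);
          /* ... */
          spin_unlock_bh(&frag_lock);
  }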

Thread overview: 3 messages

2018-05-06 13:43 Locking in network code  Jacob S. Moroni
2018-05-06 16:16 ` Alexander Duyck
2018-05-07 14:48   ` Stephen Hemminger [this message]