From: Heiner Kallweit <hkallweit1@gmail.com>
To: Vladimir Oltean <olteanv@gmail.com>
Cc: Jakub Kicinski <kuba@kernel.org>,
David Miller <davem@davemloft.net>,
Realtek linux nic maintainers <nic_swsd@realtek.com>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>
Subject: Re: [PATCH net-next] r8169: set IRQF_NO_THREAD if MSI(X) is enabled
Date: Mon, 2 Nov 2020 16:18:07 +0100
Message-ID: <e67de3a4-d65d-0bbc-d644-25d212c04fdd@gmail.com>
In-Reply-To: <20201102124159.hw6iry2wg4ibcggc@skbuf>
On 02.11.2020 13:41, Vladimir Oltean wrote:
> On Mon, Nov 02, 2020 at 09:01:00AM +0100, Heiner Kallweit wrote:
>> As mentioned by Eric, it doesn't make sense to run the minimal hard irq
>> handlers used with NAPI in a thread. That contributes more to the problem
>> than to the solution. The change here reflects this.
>
> When you say that "it doesn't make sense", is there something that is
> actually measurably worse when the hardirq handler gets force-threaded?
> Rephrased, is it something that doesn't make sense in principle, or in
> practice?
>
> My understanding is that this is not where the bulk of the NAPI poll
> processing is done anyway, so it should not have a severe negative
> impact on performance in any case.
>
> On the other hand, moving as much code as possible outside interrupt
> context (be it hardirq or softirq) is beneficial to some use cases,
> because the scheduler is not in control of that code's runtime unless it
> is in a thread.
>
According to my understanding, the point is that executing the simple
hard irq handler of a NAPI driver doesn't cost significantly more than
executing the default hard irq handler (irq_default_primary_handler).
Therefore threadifying it adds little more than overhead.

forced threading:
1. irq_default_primary_handler, wakes the irq thread
2. threadified driver hard irq handler (basically just calling napi_schedule)
3. NAPI processing

IRQF_NO_THREAD:
1. driver hard irq handler, scheduling NAPI
2. NAPI processing
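Roughly, such a minimal hard irq handler boils down to the following
sketch (hypothetical names, not the actual r8169 code):

	struct example_priv {
		struct pci_dev *pdev;
		struct napi_struct napi;
		/* device state, rings, ... */
	};

	static irqreturn_t example_isr(int irq, void *dev_id)
	{
		struct example_priv *priv = dev_id;

		/* a real handler would also ack/mask the chip's irq source
		 * here, then hand all rx/tx work over to NAPI
		 */
		napi_schedule(&priv->napi);

		return IRQ_HANDLED;
	}

	/* in probe, after NAPI setup; IRQF_NO_THREAD keeps forced threading
	 * (threadirqs) from adding the extra wakeup of steps 1+2 above
	 */
	err = request_irq(irq_num, example_isr, IRQF_NO_THREAD, "example", priv);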
>> The actual discussion would be how to make the NAPI processing a thread
>> (instead of a softirq).
>
> I don't get it, so you prefer the hardirq handler to consume CPU time
> which is not accounted for by the scheduler, but for the NAPI poll, you
> do want the scheduler to account for it? So why one but not the other?
>
The CPU time for scheduling NAPI is negligible, but doing all the rx and
tx work in the NAPI poll is a significant effort.
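For comparison, this is roughly where that effort ends up (simplified
sketch with hypothetical helpers, not the r8169 code):

	static int example_poll(struct napi_struct *napi, int budget)
	{
		struct example_priv *priv = container_of(napi, struct example_priv, napi);
		int work_done;

		/* the bulk of the work: reclaim completed tx descriptors and
		 * process up to budget rx packets
		 */
		example_tx_clean(priv);
		work_done = example_rx(priv, budget);

		if (work_done < budget && napi_complete_done(napi, work_done))
			example_irq_enable(priv);	/* re-enable device interrupts */

		return work_done;
	}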
>> For using napi_schedule_irqoff we most likely need something like
>>
>> 	if (pci_dev_msi_enabled(pdev))
>> 		napi_schedule_irqoff(napi);
>> 	else
>> 		napi_schedule(napi);
>>
>> and I doubt that's worth it.
>
> Yes, probably not, hence my question.
>
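To spell out why the check would be needed, as far as I can see:
napi_schedule() disables local interrupts itself before raising the
softirq, while napi_schedule_irqoff() relies on the caller already
running with hard interrupts disabled. That only holds in real hardirq
context, i.e. MSI(X) plus IRQF_NO_THREAD; a force-threaded INTx handler
runs with interrupts enabled. In a hypothetical handler (again not the
r8169 code) that would look like:

	static irqreturn_t example_isr(int irq, void *dev_id)
	{
		struct example_priv *priv = dev_id;

		if (pci_dev_msi_enabled(priv->pdev))
			/* true hardirq context, interrupts are off */
			napi_schedule_irqoff(&priv->napi);
		else
			/* may be force-threaded for legacy INTx, then
			 * interrupts are enabled here
			 */
			napi_schedule(&priv->napi);

		return IRQ_HANDLED;
	}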