From: Nicolas Schichan <nschichan@freebox.fr>
To: Francois Romieu <romieu@fr.zoreil.com>
Cc: "David S. Miller" <davem@davemloft.net>,
Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v1] mv643xx_eth: only account for work done in rxq_process in poll callback.
Date: Fri, 06 Mar 2015 17:28:00 +0100 [thread overview]
Message-ID: <54F9D590.5000406@freebox.fr> (raw)
In-Reply-To: <20150304213235.GA20596@electric-eye.fr.zoreil.com>
On 03/04/2015 10:32 PM, Francois Romieu wrote:
> Nicolas Schichan <nschichan@freebox.fr> :
>> @@ -1050,7 +1049,7 @@ static int txq_reclaim(struct tx_queue *txq, int budget, int force)
>> __netif_tx_lock_bh(nq);
>>
>> reclaimed = 0;
>> - while (reclaimed < budget && txq->tx_desc_count > 0) {
>> + while (txq->tx_desc_count > 0) {
>> int tx_index;
>> struct tx_desc *desc;
>> u32 cmd_sts;
>
> You may use a local 'int count = txq->tx_desc_count' variable then
> perform a single update at the end of the locked section.
> txq->tx_used_desc could be reworked in a similar way.
Hello Francois,
I was trying to minimize the code changes wrt the current source, but if you
want that change to be in the same patch, I can certainly respin a V2 with it.
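For reference, a minimal userspace sketch of the rework you are suggesting
(the struct here is a hypothetical cut-down stand-in, not the real
struct tx_queue, which carries many more fields):

```c
#include <assert.h>

/* Hypothetical, simplified stand-in for the driver's tx queue state. */
struct tx_queue {
	int tx_desc_count;	/* descriptors currently in use */
	int tx_used_desc;	/* next descriptor to reclaim */
	int tx_ring_size;	/* ring size, for wrap-around */
};

/*
 * Sketch of the suggested rework: work on local copies inside the
 * (elsewhere locked) section and write the shared fields back once
 * at the end, instead of updating them on every loop iteration.
 */
static int txq_reclaim_sketch(struct tx_queue *txq)
{
	int count = txq->tx_desc_count;
	int used = txq->tx_used_desc;
	int reclaimed = 0;

	while (count > 0) {
		/* ... process descriptor at 'used', free the skb ... */
		used = (used + 1) % txq->tx_ring_size;
		count--;
		reclaimed++;
	}

	/* Single update of the shared queue state. */
	txq->tx_desc_count = count;
	txq->tx_used_desc = used;
	return reclaimed;
}
```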
>> @@ -1105,8 +1104,7 @@ static int txq_reclaim(struct tx_queue *txq, int budget, int force)
>>
>> __netif_tx_unlock_bh(nq);
>>
>> - if (reclaimed < budget)
>> - mp->work_tx &= ~(1 << txq->index);
>> + mp->work_tx &= ~(1 << txq->index);
>>
>> return reclaimed;
>> }
>
> work_tx is also updated in irq context. I'd rather see "clear_flag() then
> reclaim()" than "reclaim() then clear_flag()" in a subsequent patch.
Just to be sure that I understand the issue here: under normal conditions,
work_tx is updated in irq context via mv643xx_eth_collect_events(), then the
mv643xx interrupts are masked and napi_schedule() is called. Only once all
the work has been completed in the poll callback and the work flags have been
cleared are the interrupts unmasked and napi_complete() called. As far as I
can see, there should be no issue here.
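In pseudo-C, the normal sequencing I am describing looks roughly like this
(the function names in the comments come from the driver; everything else is
a simplified model, not the actual code):

```c
#include <stdbool.h>

/*
 * Simplified model of the normal (non-OOM) sequencing: work flags are
 * only set while interrupts are unmasked, and only cleared while they
 * are masked, so setter and clearer never run concurrently.
 */
static unsigned int work_tx_flags;
static bool irq_masked;

/* irq context */
static void irq_model(unsigned int events)
{
	work_tx_flags |= events;	/* mv643xx_eth_collect_events() */
	irq_masked = true;		/* chip interrupts masked ... */
	/* ... then napi_schedule() */
}

/* napi poll callback */
static void poll_model(void)
{
	work_tx_flags = 0;		/* all work done, flags cleared ... */
	irq_masked = false;		/* ... only then unmask + napi_complete() */
}
```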
The only problem I can see is under OOM conditions, when napi_schedule() is
called from a timer callback (oom_timer_wrapper()): the poll callback then
runs with the interrupts unmasked, and if an interrupt fires at the wrong
time (possibly on another CPU), mv643xx_eth_collect_events() will race with
the flag clearing in txq_reclaim, rxq_process and rxq_refill.
In that case, wouldn't something like clear_bit()/set_bit() be preferable to
the direct bitwise operations?
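To illustrate what I mean: the kernel's set_bit()/clear_bit() are atomic
read-modify-write operations on a single bit, whereas the plain
'mp->work_tx &= ~(1 << txq->index)' is a non-atomic read-modify-write that
can lose a bit set concurrently from irq context. A userspace analogue
using C11 atomics (hypothetical names, just mimicking the work_tx usage):

```c
#include <stdatomic.h>

/* Toy model of mp->work_tx as an atomic word. */
static atomic_uint work_tx;

/* irq side: mv643xx_eth_collect_events() setting a work bit. */
static void collect_events_set(unsigned int index)
{
	/* Atomic OR: cannot lose a concurrent clear of another bit. */
	atomic_fetch_or(&work_tx, 1u << index);
}

/* napi side: txq_reclaim() & friends clearing their work bit. */
static void reclaim_clear(unsigned int index)
{
	/* Atomic AND-NOT: cannot lose a concurrent set of another bit,
	 * unlike the plain 'work_tx &= ~(1u << index)' sequence. */
	atomic_fetch_and(&work_tx, ~(1u << index));
}

static unsigned int work_tx_read(void)
{
	return atomic_load(&work_tx);
}
```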
Regards,
--
Nicolas Schichan
Freebox SAS
Thread overview: 4+ messages
2015-03-04 18:02 [PATCH v1] mv643xx_eth: only account for work done in rxq_process in poll callback Nicolas Schichan
2015-03-04 21:32 ` Francois Romieu
2015-03-06 16:28 ` Nicolas Schichan [this message]
2015-03-06 21:49 ` Francois Romieu