From: Stephen Hemminger <stephen@networkplumber.org>
To: Yunsheng Lin <linyunsheng@huawei.com>
Cc: <davem@davemloft.net>, <hkallweit1@gmail.com>,
<f.fainelli@gmail.com>, <netdev@vger.kernel.org>,
<linux-kernel@vger.kernel.org>, <linuxarm@huawei.com>
Subject: Re: [PATCH net-next] net: link_watch: prevent starvation when processing linkwatch wq
Date: Mon, 27 May 2019 18:17:44 -0700 [thread overview]
Message-ID: <20190527181744.289c4b2f@hermes.lan> (raw)
In-Reply-To: <a0fe690b-2bfa-7d1a-40c5-5fb95cf57d0b@huawei.com>
On Tue, 28 May 2019 09:04:18 +0800
Yunsheng Lin <linyunsheng@huawei.com> wrote:
> On 2019/5/27 22:58, Stephen Hemminger wrote:
> > On Mon, 27 May 2019 09:47:54 +0800
> > Yunsheng Lin <linyunsheng@huawei.com> wrote:
> >
> >> When a user has configured a large number of virtual netdevs, such
> >> as 4K vlans, a carrier on/off operation on the real netdev
> >> also causes its virtual netdevs' link state to be processed
> >> in linkwatch. Currently, the processing is done in a work queue,
> >> which may cause a worker starvation problem for other work queues.
> >>
> >> This patch releases the cpu when the link watch worker has processed
> >> a fixed number of netdevs' link watch events, and schedules the
> >> work queue again when link watch events still remain.
> >>
> >> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> >
> > Why not put link watch in its own workqueue so it is scheduled
> > separately from the system workqueue?
>
> From testing and debugging, a work item runs on the cpu where it
> was scheduled when using a normal workqueue, even when using its
> own workqueue instead of the system workqueue. So if the cpu is busy
> processing the linkwatch events, it is not able to process other
> workqueues' work scheduled on the same cpu.
>
> Using an unbound workqueue may solve the cpu starvation problem.
> But __linkwatch_run_queue is called with rtnl_lock held, so if it
> takes a long time to process, others that need to take rtnl_lock may
> not be able to make progress.
Agree with the starvation issue. My concern is that a large number of
events that end up being delayed would impact things that are actually
watching for link events (like routing daemons).

It probably would not be accepted to do rtnl_unlock/sched_yield/rtnl_lock
in the loop, but that is another alternative.
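[Editorial sketch: the lock-dropping alternative mentioned above, simulated in userspace with a pthread mutex standing in for rtnl_lock. The name drain_with_lock_breaks is illustrative; the pattern is to release the lock and yield between bounded batches so other lock waiters can make progress, at the cost of the lock-drop churn that would likely make it unpopular.]

```c
#include <pthread.h>
#include <sched.h>

/* Stand-in for the RTNL mutex. */
static pthread_mutex_t rtnl = PTHREAD_MUTEX_INITIALIZER;

/* Process 'total' events in batches of at most 'batch', dropping the
 * lock and yielding between batches so that other threads waiting on
 * the lock are not starved for its whole duration. Returns the number
 * of events processed. */
static int drain_with_lock_breaks(int total, int batch)
{
    int done = 0;

    pthread_mutex_lock(&rtnl);
    while (done < total) {
        int n = 0;

        while (done < total && n < batch) {
            done++;  /* handle one event under the lock */
            n++;
        }
        if (done < total) {
            pthread_mutex_unlock(&rtnl);
            sched_yield();             /* let lock waiters run */
            pthread_mutex_lock(&rtnl);
        }
    }
    pthread_mutex_unlock(&rtnl);
    return done;
}
```

The trade-off this illustrates: correctness requires that per-event state remain consistent across each unlock/relock window, which is exactly why the pattern tends to be rejected for rtnl_lock.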
Thread overview: 11+ messages
2019-05-27 1:47 [PATCH net-next] net: link_watch: prevent starvation when processing linkwatch wq Yunsheng Lin
2019-05-27 14:58 ` Stephen Hemminger
2019-05-28 1:04 ` Yunsheng Lin
2019-05-28 1:17 ` Stephen Hemminger [this message]
2019-05-28 1:48 ` Yunsheng Lin
2019-05-29 8:12 ` Salil Mehta
2019-05-29 8:41 ` Yunsheng Lin
2019-05-29 6:58 ` David Miller
2019-05-29 8:59 ` Yunsheng Lin
2019-06-25 2:28 ` Yunsheng Lin
2019-06-27 18:17 ` David Miller