From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
To: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: gregkh@suse.de,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Felipe Balbi <balbi@ti.com>,
Linus Torvalds <torvalds@linux-foundation.org>,
Tejun Heo <tj@kernel.org>
Subject: Re: tty breakage in X (Was: tty vs workqueue oddities)
Date: Fri, 03 Jun 2011 16:56:29 +1000 [thread overview]
Message-ID: <1307084189.23876.19.camel@pasglop> (raw)
In-Reply-To: <1307081874.23876.14.camel@pasglop>
On Fri, 2011-06-03 at 16:17 +1000, Benjamin Herrenschmidt wrote:
> Some more data: It -looks- like what happens is that the flush_to_ldisc
> work queue entry constantly re-queues itself (because the PTY is full ?)
> and the workqueue thread will basically loop forever calling it without
> ever scheduling, thus starving the consumer process that could have
> emptied the PTY.
>
> At least that's a semi half-assed theory. If I add a schedule() to
> process_one_work() after dropping the lock, the problem disappears.
>
> So there's a combination of things here that are quite interesting:
>
> - A lot of work queued for the kworker will essentially go on without
> scheduling for as long as it takes to empty all work items. That doesn't
> sound very nice latency-wise. At least on a non-PREEMPT kernel.
>
> - flush_to_ldisc seems to be nasty and requeues itself over and over
> again from what I can tell, when it can't push the data out, in this
> case, I suspect because the PTY is full but I don't know for sure yet.
Interesting results from x86. I could not initially reproduce there at
all on my little Atom board (the one from kernel summit last year).
Eventually I looked at the kernel config, switched off PREEMPT_VOLUNTARY
and I can now reproduce on x86 too. Again, with both threads of the core
running, the problem isn't as visible (you do see "hiccups" when cat'ing
a large file; the Atom is slow enough, I suppose).
But take a CPU offline (leave only one up) and cat a large file (dmesg is
enough for me to trigger it) and you see the hangs.
So I think my theory stands that flush_to_ldisc constantly reschedules
itself, causing the worker thread to eat all the CPU and starve the
consumer of the PTY. I won't have time to dig much deeper today, nor
probably this weekend, so I'm sending this email for others who want to
look.
Cheers,
Ben.
Thread overview: 17+ messages
2011-06-02 7:17 tty vs workqueue oddities Benjamin Herrenschmidt
2011-06-02 8:37 ` tty breakage in X (Was: tty vs workqueue oddities) Benjamin Herrenschmidt
2011-06-02 9:30 ` Andreas Schwab
2011-06-02 10:07 ` Alan Cox
2011-06-03 0:56 ` Benjamin Herrenschmidt
2011-06-03 6:17 ` Benjamin Herrenschmidt
2011-06-03 6:56 ` Benjamin Herrenschmidt [this message]
2011-06-03 9:36 ` Linus Torvalds
2011-06-05 14:37 ` Guillaume Chazarain
2011-06-06 14:24 ` Guillaume Chazarain
2011-06-08 2:44 ` Linus Torvalds
2011-06-08 3:31 ` Linus Torvalds
2011-06-08 8:31 ` Guillaume Chazarain
2011-06-08 8:28 ` Felipe Balbi
2011-06-08 9:04 ` Alan Cox
2011-06-02 10:03 ` tty vs workqueue oddities Alan Cox
2011-06-03 10:23 tty breakage in X (Was: tty vs workqueue oddities) Milton Miller