From: "Jérôme Pouiller" <jerome.pouiller@silabs.com>
To: Marc Zyngier <maz@kernel.org>
Cc: Marc Dorval <marc.dorval@silabs.com>,
Chen-Yu Tsai <wens@csie.org>,
Thomas Gleixner <tglx@linutronix.de>,
Maxime Ripard <maxime@cerno.tech>,
linux-arm-kernel@lists.infradead.org
Subject: Re: Possible race while masking IRQ on Allwinner A20
Date: Thu, 21 May 2020 16:08:38 +0200
Message-ID: <5660213.hxA4Uj6jp3@pc-42>
In-Reply-To: <faca3f8ee1269b70b46a271dbdf00265@kernel.org>
On Thursday 21 May 2020 15:39:09 CEST Marc Zyngier wrote:
> On 2020-05-21 14:28, Jérôme Pouiller wrote:
> > On Thursday 21 May 2020 10:02:48 CEST Marc Zyngier wrote:
> >> On 2020-05-21 08:26, Maxime Ripard wrote:
> >> > On Tue, May 19, 2020 at 10:59:26AM +0200, Jérôme Pouiller wrote:
> > [...]
> >> >> Never mind; I tried to use a level-triggered IRQ (and my question
> >> >> is about this part). As you can see in the wfx driver (in
> >> >> bus_sdio.c and bh.c), I use a threaded IRQ for that. Unfortunately,
> >> >> I receive some IRQs twice. I traced the problem, and I get:
> >> >>
> >> >> QSGRenderThread-981 [000] d.h. 247.485524: irq_handler_entry: irq=80 name=wfx
> >> >> QSGRenderThread-981 [000] d.h. 247.485547: irq_handler_exit: irq=80 ret=handled
> >> >> QSGRenderThread-981 [000] d.h. 247.485600: irq_handler_entry: irq=80 name=wfx
> >> >> QSGRenderThread-981 [000] d.h. 247.485606: irq_handler_exit: irq=80 ret=handled
> >> >> irq/80-wfx-260 [001] .... 247.485828: io_read32: CONTROL: 0000f046
> >> >> irq/80-wfx-260 [001] .... 247.486072: io_read32: CONTROL: 0000f046
> >> >> kworker/1:1H-116 [001] .... 247.486214: io_read: QUEUE: 8b 00 84 18 00 00 00 00 01 00 15 82 2b 48 01 1e 88 42 30 00 08 6b d7 c3 53 e0 28 80 88 67 32 af ... (192 bytes)
> >> >> kworker/1:1H-116 [001] .... 247.493097: io_read: QUEUE: 00 00 00 00 00 00 00 00 06 06 00 6a 3f 95 00 60 00 00 00 00 08 62 00 00 01 00 5e 00 00 07 28 80 ... (192 bytes)
> >> >> [...]
> >> >>
> >> >> On this trace, we can see:
> >> >> - the hard IRQ handler
> >> >> - the IRQ acknowledgement from the thread irq/80-wfx-260
> >> >> - the access to the data from kworker/1:1H-116
> >> >>
> >> >> As far as I understand, the first call to the IRQ handler (at
> >> >> 247.485524) should mask IRQ 80. So, the second IRQ (at 247.485600)
> >> >> should not happen, and the thread irq/80 should run only once.
> >> >>
> >> >> Do you have any idea what is going wrong with this IRQ?
> >> >
> >> > That's pretty weird indeed. My first guess was that you weren't using
> >> > IRQF_ONESHOT, but it looks like you are. My next lead would be to see
> >> > if the mask / unmask hooks in the pinctrl driver are properly called
> >> > (and actually do what they are supposed to do). I'm not sure we have
> >> > any in-tree user of a threaded IRQ attached to the pinctrl driver, so
> >> > it might have been broken for quite some time.
> >>
> >> What is certainly puzzling is that this driver doesn't seem to use
> >> threaded IRQs at all. Instead, it uses its own workqueue that seems
> >> to bypass the core IRQ subsystem altogether. So any guarantee we'd
> >> expect goes out of the window.
> >>
> >> It is also pretty unclear to me whether the HW supports switching
> >> from edge to level signalling. The request_irq() call definitely asks
> >> for edge, and I don't know how you'd instruct the HW to change its
> >> signalling method (in general, it isn't possible).
> >
> > Are you talking about the wfx driver? Be sure you read the right
> > version of the driver. The ability to use a level-triggered IRQ does
> > not exist in the stable tree. You have to check the "staging-next"
> > tree from Greg[1].
>
> Right. It still remains that, in this (new) code, the threaded handler
> seems to kick a workqueue and return saying "I'm done". With a
> level-triggered interrupt, this is likely to result in an interrupt
> storm if nothing masks the interrupt.
The core of the threaded IRQ handler is in bh.c/wfx_bh_request_rx():

    control_reg_read(wdev, &cur);                 /* ack IRQ, get data length */
    prev = atomic_xchg(&wdev->hif.ctrl_reg, cur); /* publish new register value */
    complete(&wdev->hif.ctrl_ready);              /* wake waiters on the register */
    queue_work(system_highpri_wq, &wdev->hif.bh); /* defer data read to a workqueue */
The call to control_reg_read() acknowledges the IRQ (and retrieves the
length of the data to read). Once this function returns, the IRQ line is
down (the data itself is then read from a workqueue, as noted above).

Concerning the hard IRQ handler, we use the default one, which just
returns IRQ_WAKE_THREAD. Since we also specify IRQF_ONESHOT, the IRQ
should stay masked until the threaded handler has finished.

(You may wonder why the driver does not call wfx_bh_request_rx() from a
regular IRQ handler. It is because control_reg_read() is not atomic.)
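
For reference, the IRQ is requested along these lines (a sketch, not a
verbatim copy of bus_sdio.c; the thread function name and the dev_id
pointer are made up here):

    /* A NULL primary handler makes the core install its default one,
     * which only returns IRQ_WAKE_THREAD. Combined with IRQF_ONESHOT,
     * the line stays masked from the hard handler onward until the
     * thread function (the hypothetical wfx_irq_thread()) has returned. */
    ret = request_threaded_irq(irq, NULL, wfx_irq_thread,
                               IRQF_TRIGGER_LOW | IRQF_ONESHOT,
                               "wfx", wdev);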
--
Jérôme Pouiller