From: Grygorii Strashko <grygorii.strashko@ti.com>
To: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Kishon Vijay Abraham I <kishon@ti.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>,
	tony@atomide.com, nsekhar@ti.com, linux-omap@vger.kernel.org,
	linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	linux-arm <linux-arm-kernel@lists.infradead.org>,
	Felipe Balbi <balbi@ti.com>
Subject: Re: [PATCH] PCI: dra7xx: mark dra7xx_pcie_msi irq as IRQF_NO_THREAD
Date: Fri, 13 Nov 2015 20:48:11 +0200	[thread overview]
Message-ID: <5646306B.9040805@ti.com> (raw)
In-Reply-To: <5644599C.8030905@linutronix.de>

On 11/12/2015 11:19 AM, Sebastian Andrzej Siewior wrote:
> On 11/06/2015 08:59 PM, Grygorii Strashko wrote:
>> Hi Sebastian,
> 
> Hi Grygorii,
> 
>> - IRQF_NO_THREAD is the first option considered for this kind of issue.
>>    But: in the kernel tree there are currently ~60 occurrences of IRQF_NO_THREAD - most
>>    of them in arch code; it is used by only 6 drivers (drivers/*) [Addendum 2].
>>    During the past year, I've found only two threads related to IRQF_NO_THREAD,
>>    and in both cases IRQF_NO_THREAD was added for arch-specific IRQs which
>>    can't be threaded (https://lists.ozlabs.org/pipermail/linuxppc-dev/2014-November/122659.html,
>>    https://lkml.org/lkml/2015/4/21/404).
> 
> That powerpc patch you reference is doing the same thing you are doing
> here.

Probably. I don't know this hardware, so my assumption was based on the commit descriptions.

> 
>> - ARM UP system: TI's am437xx SoCs, for example.
>>     Here everything starts from drivers/irqchip/irq-gic.c -> gic_handle_irq()
>>
> 
>> The GIC IRQ handler gic_handle_irq() may process more than one IRQ without leaving HW IRQ mode
>> (during my experiments I saw up to 6 IRQs processed in one pass).
> 
> Not only the GIC. But then what good does it do if it leaves and returns
> immediately back to the routine?
> 
>> As a result, it was concluded that, with the current HW/SW and all IRQs force-threaded [1],
>> it is potentially possible to predict system behavior, because gic_handle_irq() will
>> do the same things for most of the processed IRQs.
>> But once there are chained [2] or IRQF_NO_THREAD [3] IRQs - complete unpredictability.
> 
> I would not go as far as "complete unpredictability". What you do (or
> should do) is test the system for a longer period of time with
> different behavior in order to estimate the worst case.
> You can't predict the system anyway since it is way too complex. Just
> try something that ensures that cyclictest is no longer cache hot and
> see what happens then.

I understand that. That's the current plan, and work is in progress.
The nearest target is to get rid of all -RT-specific backtraces and
ensure the TI -RT kernel supports the same functionality as the non-RT one.
The next step is to try to optimize.

> 
>> So, the goal selected was to have all PPI IRQs (force-)threaded. And if someone
>> requires faster IRQ handling, IRQF_NO_THREAD can always be added, but it will
>> be a custom solution then.
>>
>> I'd appreciate your comments - or, if the above problem is not a problem:
>> good - IRQF_NO_THREAD forever!
> 
> Yes, we try to avoid IRQF_NO_THREAD under all circumstances. However it
> is required for low-level arch code. This includes basically
> everything that is using raw locks, which includes interrupt controllers
> (the "real" ones like GIC, or cascading ones like MSI or GPIO).
> Here it is simple - you have a cascading MSI interrupt controller and
> as such it should be marked IRQF_NO_THREAD.
> The latency spikes in the worst case are not huge, as explained earlier: the
> only thing your cascading controller is allowed to do is to mark the child
> interrupt as pending (which, with threaded interrupts, is just a task
> wakeup).
> And this is not an -RT-only problem: it is broken in vanilla Linux with
> threaded interrupts as well.
> 

Ok, I've got it. IRQF_NO_THREAD will be the solution for reference code and
for issues like this. I understand that each real -RT-based solution is
unique and needs to be specifically tuned, so if someone has a problem
with IRQF_NO_THREAD, it can easily be removed and replaced with any sort of
custom hacks/improvements.

Thanks a lot for your comments.
I'll apply your previous comments and re-send.

-- 
regards,
-grygorii

Thread overview: 5+ messages
2015-11-05 18:43 [PATCH] PCI: dra7xx: mark dra7xx_pcie_msi irq as IRQF_NO_THREAD Grygorii Strashko
2015-11-06  8:53 ` Sebastian Andrzej Siewior
2015-11-06 19:59   ` Grygorii Strashko
2015-11-12  9:19     ` Sebastian Andrzej Siewior
2015-11-13 18:48       ` Grygorii Strashko [this message]
