From: Marc Zyngier <marc.zyngier@arm.com>
To: Duc Dang <dhdang@apm.com>
Cc: Feng Kan <fkan@apm.com>, Arnd Bergmann <arnd@arndb.de>,
"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
Liviu Dudau <Liviu.Dudau@arm.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"grant.likely@linaro.org" <grant.likely@linaro.org>,
Tanmay Inamdar <tinamdar@apm.com>,
Bjorn Helgaas <bhelgaas@google.com>,
linux-arm <linux-arm-kernel@lists.infradead.org>,
Loc Ho <lho@apm.com>
Subject: Re: [PATCH v3 1/4] PCI: X-Gene: Add the APM X-Gene v1 PCIe MSI/MSIX termination driver
Date: Fri, 17 Apr 2015 13:45:04 +0100 [thread overview]
Message-ID: <55310050.7000003@arm.com> (raw)
In-Reply-To: <CADaLNDkAu4o6YQkw528+1npWAa_CeX-ZfFED0zWRsz+WAfwf6A@mail.gmail.com>
On 17/04/15 13:37, Duc Dang wrote:
> On Fri, Apr 17, 2015 at 3:17 AM, Marc Zyngier <marc.zyngier@arm.com> wrote:
>> On 17/04/15 11:00, Duc Dang wrote:
>>> On Wed, Apr 15, 2015 at 1:16 AM, Marc Zyngier <marc.zyngier@arm.com> wrote:
>>>> On Tue, 14 Apr 2015 19:20:19 +0100
>>>> Duc Dang <dhdang@apm.com> wrote:
>>>>
>>>>> On Sat, Apr 11, 2015 at 5:06 AM, Marc Zyngier <marc.zyngier@arm.com>
>>>>> wrote:
>>>>>> On 2015-04-11 00:42, Duc Dang wrote:
>>>>>>>
>>>>>>> On Fri, Apr 10, 2015 at 10:20 AM, Marc Zyngier
>>>>>>> <marc.zyngier@arm.com> wrote:
>>>>>>>>
>>>>>>>> On 09/04/15 18:05, Duc Dang wrote:
>>>>>>>>>
>>>>>>>>> X-Gene v1 SoC supports a total of 2688 MSI/MSIX vectors coalesced into
>>>>>>>>> 16 HW IRQ lines.
>>>>>>>>>
>>>>>>>>> Signed-off-by: Duc Dang <dhdang@apm.com>
>>>>>>>>> Signed-off-by: Tanmay Inamdar <tinamdar@apm.com>
>>>>>>>>> ---
>>>>>>>>> drivers/pci/host/Kconfig | 6 +
>>>>>>>>> drivers/pci/host/Makefile | 1 +
>>>>>>>>> drivers/pci/host/pci-xgene-msi.c | 407
>>>>>>>>> +++++++++++++++++++++++++++++++++++++++
>>>>>>>>> drivers/pci/host/pci-xgene.c | 21 ++
>>>>>>>>> 4 files changed, 435 insertions(+)
>>>>>>>>> create mode 100644 drivers/pci/host/pci-xgene-msi.c
>>>>>>>>>
>>>>>>>>> diff --git a/drivers/pci/host/Kconfig b/drivers/pci/host/Kconfig
>>>>>>>>> index 7b892a9..c9b61fa 100644
>>>>>>>>> --- a/drivers/pci/host/Kconfig
>>>>>>>>> +++ b/drivers/pci/host/Kconfig
>>>>>>>>> @@ -89,11 +89,17 @@ config PCI_XGENE
>>>>>>>>> depends on ARCH_XGENE
>>>>>>>>> depends on OF
>>>>>>>>> select PCIEPORTBUS
>>>>>>>>> + select PCI_MSI_IRQ_DOMAIN if PCI_MSI
>>>>>>>>> + select PCI_XGENE_MSI if PCI_MSI
>>>>>>>>> help
>>>>>>>>> Say Y here if you want internal PCI support on APM
>>>>>>>>> X-Gene SoC. There are 5 internal PCIe ports available. Each port
>>>>>>>>> is GEN3 capable
>>>>>>>>> and have varied lanes from x1 to x8.
>>>>>>>>>
>>>>>>>>> +config PCI_XGENE_MSI
>>>>>>>>> + bool "X-Gene v1 PCIe MSI feature"
>>>>>>>>> + depends on PCI_XGENE && PCI_MSI
>>>>>>>>> +
>>>>>>>>> config PCI_LAYERSCAPE
>>>>>>>>> bool "Freescale Layerscape PCIe controller"
>>>>>>>>> depends on OF && ARM
>>>>>>>>> diff --git a/drivers/pci/host/Makefile
>>>>>>>>> b/drivers/pci/host/Makefile index e61d91c..f39bde3 100644
>>>>>>>>> --- a/drivers/pci/host/Makefile
>>>>>>>>> +++ b/drivers/pci/host/Makefile
>>>>>>>>> @@ -11,5 +11,6 @@ obj-$(CONFIG_PCIE_SPEAR13XX) +=
>>>>>>>>> pcie-spear13xx.o obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone-dw.o
>>>>>>>>> pci-keystone.o obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o
>>>>>>>>> obj-$(CONFIG_PCI_XGENE) += pci-xgene.o
>>>>>>>>> +obj-$(CONFIG_PCI_XGENE_MSI) += pci-xgene-msi.o
>>>>>>>>> obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o
>>>>>>>>> obj-$(CONFIG_PCI_VERSATILE) += pci-versatile.o
>>>>>>>>> diff --git a/drivers/pci/host/pci-xgene-msi.c
>>>>>>>>> b/drivers/pci/host/pci-xgene-msi.c
>>>>>>>>> new file mode 100644
>>>>>>>>> index 0000000..4f0ff42
>>>>>>>>> --- /dev/null
>>>>>>>>> +++ b/drivers/pci/host/pci-xgene-msi.c
>>>>>>>>> @@ -0,0 +1,407 @@
>>>>>>>>> +/*
>>>>>>>>> + * APM X-Gene MSI Driver
>>>>>>>>> + *
>>>>>>>>> + * Copyright (c) 2014, Applied Micro Circuits Corporation
>>>>>>>>> + * Author: Tanmay Inamdar <tinamdar@apm.com>
>>>>>>>>> + * Duc Dang <dhdang@apm.com>
>>>>>>>>> + *
>>>>>>>>> + * This program is free software; you can redistribute it
>>>>>>>>> and/or modify it
>>>>>>>>> + * under the terms of the GNU General Public License as
>>>>>>>>> published by the
>>>>>>>>> + * Free Software Foundation; either version 2 of the License,
>>>>>>>>> or (at your
>>>>>>>>> + * option) any later version.
>>>>>>>>> + *
>>>>>>>>> + * This program is distributed in the hope that it will be
>>>>>>>>> useful,
>>>>>>>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty
>>>>>>>>> of
>>>>>>>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>>>>>>>>> + * GNU General Public License for more details.
>>>>>>>>> + */
>>>>>>>>> +#include <linux/interrupt.h>
>>>>>>>>> +#include <linux/module.h>
>>>>>>>>> +#include <linux/msi.h>
>>>>>>>>> +#include <linux/of_irq.h>
>>>>>>>>> +#include <linux/pci.h>
>>>>>>>>> +#include <linux/platform_device.h>
>>>>>>>>> +#include <linux/of_pci.h>
>>>>>>>>> +
>>>>>>>>> +#define MSI_INDEX0 0x000000
>>>>>>>>> +#define MSI_INT0 0x800000
>>>>>>>>> +
>>>>>>>>> +struct xgene_msi_settings {
>>>>>>>>> + u32 index_per_group;
>>>>>>>>> + u32 irqs_per_index;
>>>>>>>>> + u32 nr_msi_vec;
>>>>>>>>> + u32 nr_hw_irqs;
>>>>>>>>> +};
>>>>>>>>> +
>>>>>>>>> +struct xgene_msi {
>>>>>>>>> + struct device_node *node;
>>>>>>>>> + struct msi_controller mchip;
>>>>>>>>> + struct irq_domain *domain;
>>>>>>>>> + struct xgene_msi_settings *settings;
>>>>>>>>> + u32 msi_addr_lo;
>>>>>>>>> + u32 msi_addr_hi;
>>>>>>>>
>>>>>>>>
>>>>>>>> I'd rather see the mailbox address directly, and only do the
>>>>>>>> split when assigning it to the message (you seem to play all kind
>>>>>>>> of tricks on the address anyway).
>>>>>>>
>>>>>>>
>>>>>>> msi_addr_lo and msi_addr_hi store the physical base address of MSI
>>>>>>> controller registers. I will add a comment to clarify this.
>>>>>>
>>>>>>
>>>>>> What I mean is that there is no point in keeping this around as a
>>>>>> pair of 32bit variables. You'd better keep it as a single 64bit,
>>>>>> and do the split when assigning it to the MSI message.
>>>>>
>>>>> Hi Marc,
>>>>>
>>>>> These came from device-tree (which describes 64-bit address number as
>>>>> 2 32-bit words).
>>>>
>>>> ... and converted to a resource as a 64bit word, on which you apply
>>>> {upper,lower}_32_bit(). So much for DT...
>>>>
>>>>> If I store them this way, I don't need CPU cycles to do the split
>>>>> every time they are assigned to the MSI message. Please let me know
>>>>> what you think.
>>>>
>>>> This is getting absolutely silly.
>>>>
>>>> How many cycles does it take to execute "lsr x1, x0, #32" on X-Gene? If
>>>> it takes so long that it is considered to be a bottleneck, I suggest
>>>> you go and design a better CPU (hint: the answer is probably 1 cycle
>>>> absolutely everywhere).
>>>>
>>>> How often are you configuring MSIs in the face of what is happening in
>>>> the rest of the kernel? Almost never!
>>>>
>>>> So, given that "never" times 1 is still never, I'll consider that
>>>> readability of the code trumps it anytime (I can't believe we're having
>>>> that kind of conversation...).
>>>>
>>> I changed to use u64 for msi_addr and split it when composing MSI messages.
>>> The change is in v4 of the patch set that I just posted.
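For reference, the u64-plus-split approach under discussion can be sketched in standalone C. Here `upper_32_bits()`/`lower_32_bits()` are local stand-ins for the kernel helpers of the same name, and `struct msi_msg` is a simplified version of the kernel's; this is an illustration, not the driver's actual code:

```c
#include <stdint.h>

/* Local stand-ins for the kernel's upper_32_bits()/lower_32_bits() helpers. */
static inline uint32_t upper_32_bits(uint64_t n) { return (uint32_t)(n >> 32); }
static inline uint32_t lower_32_bits(uint64_t n) { return (uint32_t)n; }

/* Simplified version of the kernel's struct msi_msg. */
struct msi_msg {
	uint32_t address_hi;
	uint32_t address_lo;
	uint32_t data;
};

/*
 * Keep the MSI termination address as a single u64 and derive the two
 * 32-bit halves only when composing the message -- the split is one
 * shift, done on the (rare) configuration path, so readability wins.
 */
static void compose_msi_msg(uint64_t msi_addr, uint32_t data,
			    struct msi_msg *msg)
{
	msg->address_hi = upper_32_bits(msi_addr);
	msg->address_lo = lower_32_bits(msi_addr);
	msg->data = data;
}
```

The point of the sketch is that the two u32 fields in the driver state buy nothing: the split happens once per MSI configuration, not per interrupt.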
>>>>>>
>>>>>> [...]
>>>>>>
>>>>>>>>> +static int xgene_msi_set_affinity(struct irq_data *irq_data,
>>>>>>>>> + const struct cpumask *mask,
>>>>>>>>> bool force)
>>>>>>>>> +{
>>>>>>>>> + struct xgene_msi *msi =
>>>>>>>>> irq_data_get_irq_chip_data(irq_data);
>>>>>>>>> + unsigned int gic_irq;
>>>>>>>>> + int ret;
>>>>>>>>> +
>>>>>>>>> + gic_irq = msi->msi_virqs[irq_data->hwirq %
>>>>>>>>> msi->settings->nr_hw_irqs];
>>>>>>>>> + ret = irq_set_affinity(gic_irq, mask);
>>>>>>>>
>>>>>>>>
>>>>>>>> Erm... This as the effect of moving *all* the MSIs hanging off
>>>>>>>> this interrupt to another CPU. I'm not sure that's an acceptable
>>>>>>>> effect... What if another MSI requires a different affinity?
>>>>>>>
>>>>>>>
>>>>>>> We have 16 'real' hardware IRQs. Each of these has multiple MSIs
>>>>>>> attached to it.
>>>>>>> So this will move all MSIs hanging off this interrupt to another
>>>>>>> CPU; and we don't support different affinity settings for
>>>>>>> different MSIs that are attached to the same hardware IRQ.
>>>>>>
>>>>>>
>>>>>> Well, that's a significant departure from the expected behaviour.
>>>>>> In other words, I wonder how useful this is. Could you instead
>>>>>> reconfigure the MSI itself to hit the right CPU (assuming you don't
>>>>>> have more than 16 CPUs and if
>>>>>> that's only for XGene-1, this will only be 8 at most)? This would
>>>>>> reduce your number of possible MSIs, but at least the semantics of
>>>>>> set_affinity would be preserved.
>>>>>
>>>>> X-Gene-1 supports 2688 MSIs that are divided into 16 groups, each
>>>>> group has 168 MSIs that are mapped to 1 hardware GIC IRQ (so we have
>>>>> 16 hardware GIC IRQs for 2688 MSIs).
>>>>
>>>> We've already established that.
>>>>
>>>>> Setting the affinity of a single MSI to deliver it to a target CPU will move
>>>>> all the MSIs mapped to the same GIC IRQ to that CPU as well. This is
>>>>> not a standard behavior, but limiting the total number of MSIs will
>>>>> cause a lot of devices to fall back to INTx (with huge performance
>>>>> penalty) or even fail to load their driver as these devices request
>>>>> more than 16 MSIs during driver initialization.
>>>>
>>>> No, I think you got it wrong. If you have 168 MSIs per GIC IRQ, and
>>>> provided that you have 8 CPUs (XGene-1), you end up with 336 MSIs per
>>>> CPU (having statically assigned 2 IRQs per CPU).
>>>>
>>>> Assuming you adopt my scheme, you still have a grand total of 336 MSIs
>>>> that can be freely moved around without breaking any userspace
>>>> expectations.
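The arithmetic of the scheme Marc proposes can be made concrete with a small standalone sketch (hypothetical illustration only, not the driver's code): 16 hw GIC IRQs carrying 168 MSIs each, with the IRQ groups statically pinned round-robin across X-Gene 1's 8 CPUs:

```c
/*
 * Hypothetical sketch of the static pinning scheme for X-Gene 1:
 * 2688 MSIs coalesce into 16 hw GIC IRQs; pinning those 16 groups
 * round-robin across 8 CPUs gives 2 groups (336 MSIs) per CPU.
 */
enum {
	NR_HW_IRQS     = 16,	/* GIC IRQ lines the MSIs coalesce into */
	NR_MSI_PER_IRQ = 168,	/* MSIs hanging off each hw IRQ */
	NR_CPUS_XGENE1 = 8,	/* X-Gene 1 core count */
};

/* CPU that services a given hw IRQ group under round-robin pinning. */
static int group_to_cpu(unsigned int group)
{
	return (int)(group % NR_CPUS_XGENE1);
}

/* MSIs that can be targeted at any one CPU under this scheme. */
static int msis_per_cpu(void)
{
	return (NR_HW_IRQS / NR_CPUS_XGENE1) * NR_MSI_PER_IRQ;
}
```

Under this scheme, moving an MSI to a CPU means re-composing its message so it lands in one of that CPU's groups, which is why set_affinity keeps its usual per-interrupt semantics.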
>>>>
>>> Thanks Marc. This is a very good idea.
>>>
>>> But to move MSIs around, I need to change the MSI termination address and
>>> data and write them to device configuration space. Could this cause
>>> problems if the device fires an interrupt at the same time as the config
>>> write?
>>>
>>> What is your opinion here?
>>
>> There is an inherent race when changing the affinity of any interrupt,
>> whether that's an MSI or not. The kernel is perfectly prepared to handle
>> such a situation (to be honest, the kernel doesn't really care).
>>
>> The write to the config space shouldn't be a concern. You will either
>> hit the old *or* the new CPU, but that race is only during the write
>> itself (you can read back from the config space if you're really
>> paranoid). By the time you return from this read/write, the device will
>> be reconfigured.
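The write-then-read-back sequence described above can be sketched against a mocked config space. The `cfg_*` functions here are stand-ins for real PCI config accessors (nothing below is the actual driver or PCI API); the sketch only shows the ordering argument:

```c
#include <stdint.h>

/* Simplified MSI message and a mock of the MSI address/data registers
 * in a device's config space; stand-ins for real config accessors. */
struct msi_msg  { uint32_t address_hi, address_lo, data; };
struct fake_dev { struct msi_msg cfg; };

static void cfg_write_msi(struct fake_dev *d, const struct msi_msg *m)
{
	d->cfg = *m;	/* device may still raise the old vector up to here */
}

static void cfg_read_msi(const struct fake_dev *d, struct msi_msg *out)
{
	*out = d->cfg;	/* config read completes after the write is visible */
}

/*
 * Retarget an MSI: the only racy window is during the write itself
 * (the interrupt hits either the old or the new CPU); after the
 * read-back returns, the device uses the new address/data.
 */
static int retarget_msi(struct fake_dev *d, const struct msi_msg *new_msg)
{
	struct msi_msg chk;

	cfg_write_msi(d, new_msg);
	cfg_read_msi(d, &chk);	/* "paranoid" read-back */
	return chk.address_hi == new_msg->address_hi &&
	       chk.address_lo == new_msg->address_lo &&
	       chk.data       == new_msg->data;
}
```

Either CPU's handler must be able to service the interrupt during the window, which the kernel already guarantees for any affinity change.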
>>
>>>> I think that 336 MSIs is a fair number (nowhere near the 16 you claim).
>>>> Most platforms are doing quite well with that kind of number. Also,
>>>> you don't have to allocate all the MSIs a device can possibly claim (up
>>>> to 2048 MSI-X per device), as they are all perfectly capable of using
>>>> fewer MSIs without having to fall back to INTx.
>>>>
>>>>> I can document the affinity-setting limitation of the X-Gene-1 MSI
>>>>> driver so it hopefully doesn't surprise people, and keep the total
>>>>> number of supported MSIs at 2688 so that we can support as many cards
>>>>> that require MSI/MSI-X as possible.
>>>>
>>>> I don't think this is a valid approach. This breaks userspace (think of
>>>> things like irqbalance), and breaks the SMP affinity model that Linux
>>>> uses. No amount of documentation is going to solve it, so I think you
>>>> just have to admit that the HW is mis-designed and do the best you can
>>>> to make it work like Linux expects it to work.
>>>>
>>>> The alternative would be to disable affinity setting altogether instead of
>>>> introducing these horrible side effects.
>>>>
>>> I have it disabled (set_affinity does nothing) in my v4 patch.
>>
>> It would be good if you could try the above approach. It shouldn't be
>> hard to write, and it would be a lot better than just ignoring the problem.
>>
> Yes. I am working on this change.
In which case, I'll wait for a v5. No need to spend time on something
that is going to change anyway.
Thanks,
M.
--
Jazz is not dead. It just smells funny...