From: Ming Lei <ming.lei@redhat.com>
To: John Garry <john.garry@huawei.com>
Cc: Robin Murphy <robin.murphy@arm.com>,
	linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
	iommu@lists.linux-foundation.org, Will Deacon <will@kernel.org>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [bug report] iommu_dma_unmap_sg() is very slow then running IO from remote numa node
Date: Thu, 22 Jul 2021 23:54:08 +0800	[thread overview]
Message-ID: <YPmUoBk9u+tU2rbS@T590> (raw)
In-Reply-To: <fc552129-e89d-74ad-9e57-30e3ffe4cf5d@huawei.com>

On Thu, Jul 22, 2021 at 12:12:05PM +0100, John Garry wrote:
> On 22/07/2021 11:19, Ming Lei wrote:
> > > If you check below, you can see that cpu4 services an NVMe irq. From
> > > checking htop, during the test that cpu is at 100% load, which I put the
> > > performance drop (vs cpu0) down to.
> > nvme.poll_queues is 2 in my test, and no irq is involved. But the irq mode
> > fio test is still as bad as io_uring.
> > 
> 
> I tried that:
> 
> dmesg | grep -i nvme
> [    0.000000] Kernel command line: BOOT_IMAGE=/john/Image rdinit=/init
> crashkernel=256M@32M console=ttyAMA0,115200 earlycon acpi=force
> pcie_aspm=off noinitrd root=/dev/sda1 rw log_buf_len=16M user_debug=1
> iommu.strict=1 nvme.use_threaded_interrupts=0 irqchip.gicv3_pseudo_nmi=1
> nvme.poll_queues=2
> 
> [   30.531989] megaraid_sas 0000:08:00.0: NVMe passthru support : Yes
> [   30.615336] megaraid_sas 0000:08:00.0: NVME page size   : (4096)
> [   52.035895] nvme 0000:81:00.0: Adding to iommu group 5
> [   52.047732] nvme nvme0: pci function 0000:81:00.0
> [   52.067216] nvme nvme0: 22/0/2 default/read/poll queues
> [   52.087318]  nvme0n1: p1
> 
> So I get these results:
> cpu0 335K
> cpu32 346K
> cpu64 300K
> cpu96 300K
> 
> So still not massive changes.

In your last email, the results with irq-mode io_uring were the following:

 cpu0  497K
 cpu4  307K
 cpu32 566K
 cpu64 488K
 cpu96 508K

So it looks like you get a much worse result with real io_polling?
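
For reference, something like the following fio invocation should exercise
the poll queues (a minimal sketch: the device path, CPU and job parameters
are only examples, and --hipri is what makes the io_uring engine submit
polled IO; dropping it falls back to irq mode):

    fio --name=poll-test --filename=/dev/nvme0n1 --direct=1 \
        --rw=randread --bs=4k --iodepth=32 --numjobs=1 \
        --ioengine=io_uring --hipri \
        --cpus_allowed=0 --runtime=30 --time_based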

> 
> > > Here's some system info:
> > > 
> > > HW queue irq affinities:
> > > PCI name is 81:00.0: nvme0n1
> > > -eirq 298, cpu list 67, effective list 67
> > > -eirq 299, cpu list 32-38, effective list 35
> > > -eirq 300, cpu list 39-45, effective list 39
> > > -eirq 301, cpu list 46-51, effective list 46
> > > -eirq 302, cpu list 52-57, effective list 52
> > > -eirq 303, cpu list 58-63, effective list 60
> > > -eirq 304, cpu list 64-69, effective list 68
> > > -eirq 305, cpu list 70-75, effective list 70
> > > -eirq 306, cpu list 76-80, effective list 76
> > > -eirq 307, cpu list 81-85, effective list 84
> > > -eirq 308, cpu list 86-90, effective list 86
> > > -eirq 309, cpu list 91-95, effective list 92
> > > -eirq 310, cpu list 96-101, effective list 100
> > > -eirq 311, cpu list 102-107, effective list 102
> > > -eirq 312, cpu list 108-112, effective list 108
> > > -eirq 313, cpu list 113-117, effective list 116
> > > -eirq 314, cpu list 118-122, effective list 118
> > > -eirq 315, cpu list 123-127, effective list 124
> > > -eirq 316, cpu list 0-5, effective list 4
> > > -eirq 317, cpu list 6-11, effective list 6
> > > -eirq 318, cpu list 12-16, effective list 12
> > > -eirq 319, cpu list 17-21, effective list 20
> > > -eirq 320, cpu list 22-26, effective list 22
> > > -eirq 321, cpu list 27-31, effective list 28
> > > 
> > > 
> > > john@ubuntu:~$ lscpu | grep NUMA
> > > NUMA node(s):  4
> > > NUMA node0 CPU(s):   0-31
> > > NUMA node1 CPU(s):   32-63
> > > NUMA node2 CPU(s):   64-95
> > > NUMA node3 CPU(s):   96-127
> > > 
> > > john@ubuntu:~$ lspci | grep -i non
> > > 81:00.0 Non-Volatile memory controller: Huawei Technologies Co., Ltd. Device
> > > 0123 (rev 45)
> > > 
> > > cat /sys/block/nvme0n1/device/device/numa_node
> > > 2
> > BTW, the nvme driver doesn't apply the PCI NUMA node, and I guess the
> > following patch is needed:
> > 
> > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > index 11779be42186..3c5e10e8b0c2 100644
> > --- a/drivers/nvme/host/core.c
> > +++ b/drivers/nvme/host/core.c
> > @@ -4366,7 +4366,11 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
> >   	ctrl->dev = dev;
> >   	ctrl->ops = ops;
> >   	ctrl->quirks = quirks;
> > +#ifdef CONFIG_NUMA
> > +	ctrl->numa_node = dev->numa_node;
> > +#else
> >   	ctrl->numa_node = NUMA_NO_NODE;
> > +#endif
> 
> From a quick look at the code, is this then later set for the PCI device in
> nvme_pci_configure_admin_queue()?

Yeah, you are right, the PCI NUMA node is already applied there.
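
If I am reading drivers/nvme/host/pci.c correctly (5.13-era tree), the
NUMA_NO_NODE default is overridden with the device node when the admin
queue is configured, something like:

	/* drivers/nvme/host/pci.c: nvme_pci_configure_admin_queue() */
	dev->ctrl.numa_node = dev_to_node(dev->dev);

so the patch above shouldn't be needed.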

> 
> >   	INIT_WORK(&ctrl->scan_work, nvme_scan_work);
> >   	INIT_WORK(&ctrl->async_event_work, nvme_async_event_work);
> >   	INIT_WORK(&ctrl->fw_act_work, nvme_fw_act_work);
> > 
> > > [   52.968495] nvme 0000:81:00.0: Adding to iommu group 5
> > > [   52.980484] nvme nvme0: pci function 0000:81:00.0
> > > [   52.999881] nvme nvme0: 23/0/0 default/read/poll queues
> > Looks like you didn't enable polling. In irq mode, it isn't strange
> > to observe an IOPS difference when running fio on different CPUs.
> 
> If you are still keen to investigate more, then you can try either of these:
> 
> - add iommu.strict=0 to the cmdline
> 
> - use perf record+annotate to find the hotspot
>   - For this you need to enable pseudo-NMI with two steps:
>     CONFIG_ARM64_PSEUDO_NMI=y in defconfig
>     Add irqchip.gicv3_pseudo_nmi=1
> 
>     See https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/Kconfig#n1745
>     Your kernel log should show:
>     [    0.000000] GICv3: Pseudo-NMIs enabled using forced ICC_PMR_EL1
> synchronisation

OK, will try the above tomorrow.
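
Roughly like this, I guess (a sketch: the CPU number is just an example and
<symbol> is whatever shows up as the hotspot):

    # while fio runs pinned to cpu96
    perf record -C 96 -g -- sleep 30
    # find the hotspot, then annotate it
    perf report --no-children
    perf annotate <symbol>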

> 
> But my impression is that this may be a HW implementation issue, considering
> we don't see such a huge drop off on our HW.

Besides the Ampere Mt. Jade, we saw bad nvme performance on ThunderX2® CN99XX
too, but I don't have a CN99XX system at hand to check whether the issue is
the same as this one.


Thanks,
Ming

