From: Ming Lei <ming.lei@redhat.com>
To: "Russell King (Oracle)" <linux@armlinux.org.uk>
Cc: iommu@lists.linux-foundation.org, Will Deacon <will@kernel.org>,
linux-arm-kernel@lists.infradead.org,
linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [bug report] iommu_dma_unmap_sg() is very slow when running IO from remote numa node
Date: Fri, 9 Jul 2021 22:21:39 +0800 [thread overview]
Message-ID: <YOhbc5C47IzC893B@T590> (raw)
In-Reply-To: <20210709101614.GZ22278@shell.armlinux.org.uk>
On Fri, Jul 09, 2021 at 11:16:14AM +0100, Russell King (Oracle) wrote:
> On Fri, Jul 09, 2021 at 04:38:09PM +0800, Ming Lei wrote:
> > I observed that NVMe performance is very bad when running fio on one
> > CPU (aarch64) in a remote NUMA node, compared with the NVMe PCI NUMA node.
>
> Have you checked the effect of running a memory-heavy process using
> memory from node 1 while being executed by CPUs in node 0?
1) aarch64
[root@ampere-mtjade-04 ~]# taskset -c 0 numactl -m 0 perf bench mem memcpy -s 4GB -f default
# Running 'mem/memcpy' benchmark:
# function 'default' (Default memcpy() provided by glibc)
# Copying 4GB bytes ...
11.511752 GB/sec
[root@ampere-mtjade-04 ~]# taskset -c 0 numactl -m 1 perf bench mem memcpy -s 4GB -f default
# Running 'mem/memcpy' benchmark:
# function 'default' (Default memcpy() provided by glibc)
# Copying 4GB bytes ...
3.084333 GB/sec
2) x86_64[1]
[root@hp-dl380g10-01 mingl]# taskset -c 0 numactl -m 0 perf bench mem memcpy -s 4GB -f default
# Running 'mem/memcpy' benchmark:
# function 'default' (Default memcpy() provided by glibc)
# Copying 4GB bytes ...
4.193927 GB/sec
[root@hp-dl380g10-01 mingl]# taskset -c 0 numactl -m 1 perf bench mem memcpy -s 4GB -f default
# Running 'mem/memcpy' benchmark:
# function 'default' (Default memcpy() provided by glibc)
# Copying 4GB bytes ...
3.553392 GB/sec
[1] On this x86_64 machine, IOPS can reach 680K in the same fio NVMe test.
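The fio command itself isn't repeated here; roughly, runs of the
following shape reproduce the local-vs-remote comparison (the device
path and all job options below are illustrative placeholders, not the
exact original command):

    # Pin fio to a CPU in the NVMe device's NUMA node, then to a CPU in
    # the remote node, and compare IOPS between the two runs.
    # /dev/nvme0n1 and the job parameters are placeholders.
    taskset -c 0 fio --name=test --filename=/dev/nvme0n1 --direct=1 \
        --rw=randread --bs=4k --ioengine=libaio --iodepth=64 \
        --numjobs=1 --runtime=30 --time_based --group_reporting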
Thanks,
Ming