From: Sinan Kaya <okaya@codeaurora.org>
To: Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org, timur@codeaurora.org,
linux-arm-msm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
open list <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] mm/dmapool: localize page allocations
Date: Thu, 17 May 2018 17:05:53 -0400 [thread overview]
Message-ID: <bbd1c867-7ca8-1364-cedb-39f52bb586d9@codeaurora.org> (raw)
In-Reply-To: <20180517204103.GJ26718@bombadil.infradead.org>
On 5/17/2018 4:41 PM, Matthew Wilcox wrote:
> Let's try a different example. I have a four-socket system with one
> NVMe device with lots of hardware queues. Each CPU has its own queue
> assigned to it. If I allocate all the PRP metadata on the socket with
> the NVMe device attached to it, I'm sending a lot of coherency traffic
> in the direction of that socket, in addition to the actual data. If the
> PRP lists are allocated randomly on the various sockets, the traffic
> is heading all over the fabric. If the PRP lists are allocated on the
> local socket, the only time those lists move off this node is when the
> device requests them.
So... your reasoning is that you actually want to keep the memory as close
as possible to the CPU rather than to the device itself. The CPU makes
frequent updates to the buffer up to the point where it hands the buffer
off to the hardware. The device then fetches the memory via coherency
traffic when it needs to consume the data, but that is a one-time penalty.
That sounds logical to me, though I was always told that you want to keep
buffers as close as possible to the device.
Maybe that makes sense for things the device needs frequent access to,
like receive buffers.
If the majority user is the CPU, then the buffer should be kept closer to
the CPU.
dma_alloc_coherent() is commonly used for receive buffer allocation in
network adapters. People allocate a chunk and then create a queue, owned
by the hardware, for dumping events and data.
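As a rough illustration of that pattern (all driver names and sizes here are
hypothetical, not from any real driver), a NIC driver typically carves a
hardware-owned descriptor ring out of a single coherent allocation:

```c
/* Hypothetical sketch of the common pattern: allocate one coherent
 * chunk, then hand the hardware a ring of descriptors inside it.
 * my_desc, my_ring, and MY_RING_ENTRIES are made up for illustration. */
#include <linux/dma-mapping.h>

#define MY_RING_ENTRIES 256

struct my_desc {
	__le64 addr;
	__le32 len;
	__le32 status;
};

struct my_ring {
	struct my_desc *desc;	/* CPU-side view of the descriptor ring */
	dma_addr_t dma;		/* bus address programmed into the device */
};

static int my_ring_alloc(struct device *dev, struct my_ring *ring)
{
	size_t size = MY_RING_ENTRIES * sizeof(struct my_desc);

	/* dma_alloc_coherent() decides placement internally today; once
	 * the ring is programmed, the hardware owns this memory and dumps
	 * events and received data into it, so device locality can matter
	 * here more than CPU locality. */
	ring->desc = dma_alloc_coherent(dev, size, &ring->dma, GFP_KERNEL);
	if (!ring->desc)
		return -ENOMEM;
	return 0;
}
```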
Since the DMA pool is a generic API, maybe we should let the caller
indicate which side it wants the buffers kept close to, and allocate
buffers from the appropriate NUMA node based on that.
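One possible shape for that, purely a sketch (the _node variant and its nid
parameter are hypothetical, not an existing dmapool API), would mirror how
kmalloc_node() extends kmalloc():

```c
/* Hypothetical: a node-aware variant of dma_pool_create(). The caller
 * states which NUMA node the pool's pages should come from, instead of
 * the pool choosing implicitly. */
struct dma_pool *dma_pool_create_node(const char *name, struct device *dev,
				      size_t size, size_t align,
				      size_t boundary, int nid);

/* A caller that wants CPU-local PRP lists could pass the node of the CPU
 * driving the queue; one that wants device-local receive buffers could
 * pass dev_to_node(dev) instead. */
pool = dma_pool_create_node("prp list", dev, 256, 256, 0,
			    cpu_to_node(raw_smp_processor_id()));
```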
--
Sinan Kaya
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies, Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project.
Thread overview: 7+ messages
2018-05-17 17:36 [PATCH] mm/dmapool: localize page allocations Sinan Kaya
2018-05-17 18:18 ` Matthew Wilcox
2018-05-17 19:37 ` Sinan Kaya
2018-05-17 19:46 ` Matthew Wilcox
2018-05-17 20:05 ` Sinan Kaya
2018-05-17 20:41 ` Matthew Wilcox
2018-05-17 21:05 ` Sinan Kaya [this message]