From: Leon Romanovsky <leon@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Jason Gunthorpe <jgg@nvidia.com>,
Keith Busch <kbusch@kernel.org>,
linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-nvme@lists.infradead.org, Sagi Grimberg <sagi@grimberg.me>
Subject: Re: [PATCH 4/4] nvme-pci: unmap MMIO pages with appropriate interface
Date: Wed, 15 Oct 2025 09:44:11 +0300
Message-ID: <20251015064411.GA6393@unreal>
In-Reply-To: <20251015042053.GC7073@lst.de>
On Wed, Oct 15, 2025 at 06:20:53AM +0200, Christoph Hellwig wrote:
> On Mon, Oct 13, 2025 at 06:34:12PM +0300, Leon Romanovsky wrote:
> > From: Leon Romanovsky <leonro@nvidia.com>
> >
> > The block layer maps MMIO memory through the dma_map_phys() interface
> > with the help of the DMA_ATTR_MMIO attribute. That memory needs to be
> > unmapped with the matching unmap function, which wasn't possible
> > before the new REQ attribute was added to the block layer in the
> > previous patch.
>
> This should go into the same patch that switches to dma_map_phys.
I don't think so. The dma_map_phys() patch [1] doesn't change any
behavior, and dma_map_page() is equivalent to dma_map_phys(..., attrs = 0).
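Roughly, for a regular page (a sketch only, assuming the helper
signatures from [1]):

	/* Sketch: for normal memory the two calls are expected to
	 * produce the same mapping. */
	dma_addr_t a = dma_map_page(dev, page, offset, size, dir);
	dma_addr_t b = dma_map_phys(dev, page_to_phys(page) + offset,
				    size, dir, 0 /* attrs */);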
> Unless I'm missing something it also misses passing the flag for
> the metadata mapping.
Yes, I didn't realize that the same request can have both metadata and
data payloads.
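The fix would be along these lines (a rough sketch, not the actual
patch; the IOD_MMIO flag and the iod fields here are illustrative):

	unsigned long attrs = (iod->flags & IOD_MMIO) ? DMA_ATTR_MMIO : 0;

	/* Both payloads were mapped with the same attribute, so both
	 * unmaps need it too. */
	dma_unmap_phys(dev, iod->dma_addr, iod->dma_len,
		       rq_dma_dir(req), attrs);
	if (blk_integrity_rq(req))
		dma_unmap_phys(dev, iod->meta_dma, iod->meta_len,
			       rq_dma_dir(req), attrs);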
>
> Btw, where is all this going? Are you trying to remove the auto
> detection of P2P in the low-level dma mapping routines? If so that
> should probably go into at the very least the cover letter, but probably also
> the commit logs.
It is an outcome of multiple things:
1. We missed setting the IOMMU_MMIO flag in the dma-iommu.c flow for p2p
pages, and for that we need some external indication, as the memory type
is already known to the callers (see the sketch below).
2. Robin expressed concerns about overloading DMA_ATTR_SKIP_CPU_SYNC [2].
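For point 1, the kind of change meant is roughly the following in
drivers/iommu/dma-iommu.c (a sketch against dma_info_to_prot(), assuming
the DMA_ATTR_MMIO attribute from [1]; not the actual patch):

	static int dma_info_to_prot(enum dma_data_direction dir, bool coherent,
				    unsigned long attrs)
	{
		int prot = coherent ? IOMMU_CACHE : 0;

		if (attrs & DMA_ATTR_PRIVILEGED)
			prot |= IOMMU_PRIV;
		/* New: the caller states the target is MMIO explicitly,
		 * instead of this layer auto-detecting p2p. */
		if (attrs & DMA_ATTR_MMIO)
			prot |= IOMMU_MMIO;

		switch (dir) {
		case DMA_BIDIRECTIONAL:
			return prot | IOMMU_READ | IOMMU_WRITE;
		case DMA_TO_DEVICE:
			return prot | IOMMU_READ;
		case DMA_FROM_DEVICE:
			return prot | IOMMU_WRITE;
		default:
			return 0;
		}
	}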
[1] https://lore.kernel.org/all/a40705f38a9f3c757f30228b9b848ce0a87cbcdd.1760369219.git.leon@kernel.org/
[2] https://lore.kernel.org/all/751e7ece-8640-4653-b308-96da6731b8e7@arm.com/
Thread overview: 14+ messages
2025-10-13 15:34 [PATCH 0/4] Properly take MMIO path Leon Romanovsky
2025-10-13 15:34 ` [PATCH 1/4] blk-mq-dma: migrate to dma_map_phys instead of map_page Leon Romanovsky
2025-10-15 4:16 ` Christoph Hellwig
2025-10-13 15:34 ` [PATCH 2/4] blk-mq-dma: unify DMA unmap routine Leon Romanovsky
2025-10-13 18:53 ` Keith Busch
2025-10-13 19:35 ` Leon Romanovsky
2025-10-13 15:34 ` [PATCH 3/4] block-dma: properly take MMIO path Leon Romanovsky
2025-10-13 19:01 ` Keith Busch
2025-10-13 19:29 ` Leon Romanovsky
2025-10-13 19:34 ` Keith Busch
2025-10-15 4:18 ` Christoph Hellwig
2025-10-13 15:34 ` [PATCH 4/4] nvme-pci: unmap MMIO pages with appropriate interface Leon Romanovsky
2025-10-15 4:20 ` Christoph Hellwig
2025-10-15 6:44 ` Leon Romanovsky [this message]