qemu-devel.nongnu.org archive mirror
From: Ethan Chen <ethan84@andestech.com>
To: Peter Xu <peterx@redhat.com>
Cc: qemu-devel@nongnu.org, "Paolo Bonzini" <pbonzini@redhat.com>,
	"David Hildenbrand" <david@redhat.com>,
	"Philippe Mathieu-Daudé" <philmd@linaro.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Igor Mammedov" <imammedo@redhat.com>
Subject: Re: [PATCH 2/6] system/physmem: IOMMU: Invoke the translate_size function if it is implemented
Date: Mon, 30 Oct 2023 14:00:54 +0800	[thread overview]
Message-ID: <ZT9GlTLtTOT3WUif@ethan84-VirtualBox> (raw)
In-Reply-To: <ZTvfECmO4JFZ/aIp@x1n>

On Fri, Oct 27, 2023 at 12:13:50PM -0400, Peter Xu wrote:
> Add cc list.
> 
> On Fri, Oct 27, 2023 at 12:02:24PM -0400, Peter Xu wrote:
> > On Fri, Oct 27, 2023 at 11:28:36AM +0800, Ethan Chen wrote:
> > > On Thu, Oct 26, 2023 at 10:20:41AM -0400, Peter Xu wrote:
> > > > Could you elaborate why is that important?  In what use case?
> > > I was not involved in the formulation of the IOPMP specification, but I'll try
> > > to explain my perspective. IOPMP uses the same idea as PMP: "The matching
> > > PMP entry must match all bytes of an access, or the access fails."
> > > 
> > > > 
> > > > Consider an IOVA mapped for address range iova=[0, 4K] only, and a
> > > > DMA request with range=[0, 8K].  My understanding is that what you want
> > > > to achieve is to not trigger the DMA to [0, 4K] and instead fail the
> > > > whole [0, 8K] request.
> > > > 
> > > > Can we just fail at the latter DMA [4K, 8K] when it happens?  After all,
> > > > IIUC a device can split the 0-8K DMA into two smaller DMAs, and the 1st
> > > > chunk can then succeed if it falls in 0-4K.  Some further explanation of
> > > > the failure use case could be helpful.
> > > 
> > > IOPMP can only detect a partial hit within a single access. A DMA device
> > > will split a large DMA transfer into smaller transfers based on the target
> > > and the DMA transfer width, so a partial-hit error only occurs when an
> > > access crosses an entry boundary. But ensuring that an access stays within
> > > one entry is still important. For example, an entry may represent the
> > > permissions of a device memory region. We do not want one DMA transfer to
> > > be able to access multiple devices, even though the DMA master holds
> > > permissions from multiple entries.
> > 
> > I was expecting that a DMA request could be fulfilled successfully as long as
> > the DMA translations are valid for the whole range of the request, even if the
> > requested range includes two or more separately translated targets, each
> > pointing to a different place (either RAM, or another device's MMIO regions).

IOPMP is used to check whether a DMA translation is valid or not. In the IOPMP
specification, a translation that accesses more than one entry is not valid.
Though it is not recommended, a user can create a single IOPMP entry that
contains multiple places to make this kind of translation valid.
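
To illustrate the full-match rule quoted above, here is a minimal sketch of
the containment check; the IopmpEntry layout and the iopmp_access_ok helper
are hypothetical, not taken from the patch series:

/* Editor's sketch of the IOPMP full-match rule: an access is valid only
 * if every byte of it falls inside ONE matching entry.  The struct and
 * helper are hypothetical, not QEMU or IOPMP specification code. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint64_t base;   /* first byte covered by the entry */
    uint64_t limit;  /* last byte covered by the entry (inclusive) */
    unsigned perms;  /* permission bits for this region */
} IopmpEntry;

/* Entries are assumed priority-ordered, like PMP: the first entry that
 * overlaps the access decides the outcome. */
static bool iopmp_access_ok(const IopmpEntry *e, size_t n,
                            uint64_t addr, uint64_t size, unsigned req)
{
    for (size_t i = 0; i < n; i++) {
        /* Full hit: all of [addr, addr + size) is inside this entry. */
        if (addr >= e[i].base && addr + size - 1 <= e[i].limit) {
            return (e[i].perms & req) == req;
        }
        /* Partial hit: the access overlaps the entry but crosses its
         * boundary, so the whole access fails. */
        if (addr <= e[i].limit && addr + size - 1 >= e[i].base) {
            return false;
        }
    }
    return false; /* no entry matched */
}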

> > 
> > AFAIK the current QEMU memory model will automatically split that large
> > request into two or more smaller requests, fulfill them separately with
> > two or more IOMMU translations, and dispatch the memory accesses to the
> > specific memory regions.

Because requests may be split, I need a method to pass the original request
information to the IOPMP.
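
As context, a minimal sketch of how a size-aware hook could be dispatched,
loosely modeled on QEMU's IOMMUMemoryRegionClass; the translate_size member
and its signature are assumptions based on the patch title, not the actual
patch contents:

/* Editor's sketch, not the actual patch.  IOMMUTLBEntry,
 * IOMMUMemoryRegion, IOMMUMemoryRegionClass, hwaddr and
 * IOMMUAccessFlags come from QEMU's include/exec/memory.h;
 * translate_size() is the hypothetical addition. */
#include "qemu/osdep.h"
#include "exec/memory.h"

static IOMMUTLBEntry iommu_do_translate(IOMMUMemoryRegion *iommu,
                                        hwaddr addr, hwaddr size,
                                        IOMMUAccessFlags flag, int iommu_idx)
{
    IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_GET_CLASS(iommu);

    if (imrc->translate_size) {
        /* Hand the IOMMU model the original request size so a checker
         * like IOPMP can apply its full-match rule before QEMU splits
         * the request into smaller per-region accesses. */
        return imrc->translate_size(iommu, addr, size, flag, iommu_idx);
    }
    /* Fall back to the existing per-address translate hook. */
    return imrc->translate(iommu, addr, flag, iommu_idx);
}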

> > 
> > The example you provided doesn't seem to be RISC-V specific.  Do you mean it
> > is a generic requirement from a PCI/PCIe POV, or is it only a restriction of
> > IOPMP?  If it's a valid PCI restriction, does it mean that all the other
> > IOMMU implementations in QEMU are currently broken?
> > 

It is only a restriction of IOPMP.

Thanks,
Ethan Chen



Thread overview: 33+ messages

2023-10-25  5:14 [PATCH 0/6] Support RISC-V IOPMP Ethan Chen
2023-10-25  5:14 ` [PATCH 1/6] exec/memory: Introduce the translate_size function within the IOMMU class Ethan Chen
2023-10-25 14:56   ` David Hildenbrand
2023-10-26  7:14     ` Ethan Chen
2023-10-26  7:26       ` David Hildenbrand
2023-10-25  5:14 ` [PATCH 2/6] system/physmem: IOMMU: Invoke the translate_size function if it is implemented Ethan Chen
2023-10-25 15:14   ` Peter Xu
2023-10-26  6:48     ` Ethan Chen
2023-10-26 14:20       ` Peter Xu
2023-10-27  3:28     ` Ethan Chen
2023-10-27 16:02       ` Peter Xu
2023-10-27 16:13         ` Peter Xu
2023-10-30  6:00         ` Ethan Chen [this message]
2023-10-30 15:02           ` Peter Xu
2023-10-31  8:52             ` Ethan Chen
2023-10-25  5:14 ` [PATCH 3/6] exec/memattrs: Add iopmp source id to MemTxAttrs Ethan Chen
2023-10-25  5:14 ` [PATCH 4/6] Add RISC-V IOPMP support Ethan Chen
2023-10-25  5:14 ` [PATCH 5/6] hw/dma: Add Andes ATCDMAC300 support Ethan Chen
2023-10-25  5:14 ` [PATCH 6/6] hw/riscv/virt: Add IOPMP support Ethan Chen
2023-10-26 12:02 ` [PATCH 0/6] Support RISC-V IOPMP Ethan Chen
