From: Peter Xu <peterx@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: qemu-devel@nongnu.org, tianyu.lan@intel.com,
kevin.tian@intel.com, mst@redhat.com, jan.kiszka@siemens.com,
alex.williamson@redhat.com, bd.aviv@gmail.com
Subject: Re: [Qemu-devel] [PATCH RFC v4 15/20] intel_iommu: provide its own replay() callback
Date: Sun, 22 Jan 2017 17:36:25 +0800 [thread overview]
Message-ID: <20170122093625.GD26526@pxdev.xzpeter.org> (raw)
In-Reply-To: <20170122085118.GA26526@pxdev.xzpeter.org>
On Sun, Jan 22, 2017 at 04:51:18PM +0800, Peter Xu wrote:
> On Sun, Jan 22, 2017 at 03:56:10PM +0800, Jason Wang wrote:
>
> [...]
>
> > >+/**
> > >+ * vtd_page_walk_level - walk over specific level for IOVA range
> > >+ *
> > >+ * @addr: base GPA addr to start the walk
> > >+ * @start: IOVA range start address
> > >+ * @end: IOVA range end address (start <= addr < end)
> > >+ * @hook_fn: hook func to be called when detected page
> > >+ * @private: private data to be passed into hook func
> > >+ * @read: whether parent level has read permission
> > >+ * @write: whether parent level has write permission
> > >+ * @skipped: accumulated skipped ranges
> >
> > What's the usage for this parameter? Looks like it was never used in this
> > series.
>
> This was for debugging purposes, and I kept it in case it can be
> used again one day, considering that it does not affect overall
> performance much.
>
> >
> > >+ * @notify_unmap: whether we should notify invalid entries
> > >+ */
> > >+static int vtd_page_walk_level(dma_addr_t addr, uint64_t start,
> > >+ uint64_t end, vtd_page_walk_hook hook_fn,
> > >+ void *private, uint32_t level,
> > >+ bool read, bool write, uint64_t *skipped,
> > >+ bool notify_unmap)
> > >+{
> > >+ bool read_cur, write_cur, entry_valid;
> > >+ uint32_t offset;
> > >+ uint64_t slpte;
> > >+ uint64_t subpage_size, subpage_mask;
> > >+ IOMMUTLBEntry entry;
> > >+ uint64_t iova = start;
> > >+ uint64_t iova_next;
> > >+ uint64_t skipped_local = 0;
> > >+ int ret = 0;
> > >+
> > >+ trace_vtd_page_walk_level(addr, level, start, end);
> > >+
> > >+ subpage_size = 1ULL << vtd_slpt_level_shift(level);
> > >+ subpage_mask = vtd_slpt_level_page_mask(level);
> > >+
> > >+ while (iova < end) {
> > >+ iova_next = (iova & subpage_mask) + subpage_size;
> > >+
> > >+ offset = vtd_iova_level_offset(iova, level);
> > >+ slpte = vtd_get_slpte(addr, offset);
> > >+
> > >+ /*
> > >+ * When one of the following case happens, we assume the whole
> > >+ * range is invalid:
> > >+ *
> > >+ * 1. read block failed
> >
> > Don't get the meaning (and don't see any code relate to this comment).
>
> I regard the vtd_get_slpte() above as a "read", so I meant that if
> we fail to read the SLPTE for some reason, we assume the range is
> invalid.
>
> >
> > >+ * 3. reserved area non-zero
> > >+ * 2. both read & write flag are not set
> >
> > Should be 1,2,3? And the above comment is conflict with the code at least
> > when notify_unmap is true.
>
> Yes, okay. I don't know why the 3 jumped ahead there. :(
>
> And yes, I should mention that "both read & write flags not set"
> only applies to page table entries here.
>
> >
> > >+ */
> > >+
> > >+ if (slpte == (uint64_t)-1) {
> >
> > If this is true, vtd_slpte_nonzero_rsvd(slpte) should be true too I think?
>
> Yes, but we are doing two checks here:
>
> - checking against -1 to make sure slpte read success
> - checking against nonzero reserved fields to make sure it follows spec
>
> IMHO we should not skip the first check here, unless one day
> removing it really matters (e.g., for performance reasons? I cannot
> think of one yet).
>
> >
> > >+ trace_vtd_page_walk_skip_read(iova, iova_next);
> > >+ skipped_local++;
> > >+ goto next;
> > >+ }
> > >+
> > >+ if (vtd_slpte_nonzero_rsvd(slpte, level)) {
> > >+ trace_vtd_page_walk_skip_reserve(iova, iova_next);
> > >+ skipped_local++;
> > >+ goto next;
> > >+ }
> > >+
> > >+ /* Permissions are stacked with parents' */
> > >+ read_cur = read && (slpte & VTD_SL_R);
> > >+ write_cur = write && (slpte & VTD_SL_W);
> > >+
> > >+ /*
> > >+ * As long as we have either read/write permission, this is
> > >+ * a valid entry. The rule works for both page or page tables.
> > >+ */
> > >+ entry_valid = read_cur | write_cur;
> > >+
> > >+ if (vtd_is_last_slpte(slpte, level)) {
> > >+ entry.target_as = &address_space_memory;
> > >+ entry.iova = iova & subpage_mask;
> > >+ /*
> > >+ * This might be meaningless addr if (!read_cur &&
> > >+ * !write_cur), but after all this field will be
> > >+ * meaningless in that case, so let's share the code to
> > >+ * generate the IOTLBs no matter it's an MAP or UNMAP
> > >+ */
> > >+ entry.translated_addr = vtd_get_slpte_addr(slpte);
> > >+ entry.addr_mask = ~subpage_mask;
> > >+ entry.perm = IOMMU_ACCESS_FLAG(read_cur, write_cur);
> > >+ if (!entry_valid && !notify_unmap) {
> > >+ trace_vtd_page_walk_skip_perm(iova, iova_next);
> > >+ skipped_local++;
> > >+ goto next;
> > >+ }
> >
> > Under which case should we care about unmap here (better with a comment I
> > think)?
>
> When the PSIs are for invalidations rather than newly mapped
> entries. In that case, notify_unmap will be true, and here we need
> to notify IOMMU_NONE to do the cache flush or unmap.
>
> (this page walk is not only for replaying; it is for cache flushing
> as well)
>
> Do you have a suggestion for the comments?
Besides this one, I tried to fix the comments in this function as
below; hope this is better (I removed the 1-3 list since I think
that's clearer from the code below):
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index e958f53..f3fe8c4 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -735,15 +735,6 @@ static int vtd_page_walk_level(dma_addr_t addr, uint64_t start,
         offset = vtd_iova_level_offset(iova, level);
         slpte = vtd_get_slpte(addr, offset);
 
-        /*
-         * When one of the following case happens, we assume the whole
-         * range is invalid:
-         *
-         * 1. read block failed
-         * 3. reserved area non-zero
-         * 2. both read & write flag are not set
-         */
-
         if (slpte == (uint64_t)-1) {
             trace_vtd_page_walk_skip_read(iova, iova_next);
             skipped_local++;
@@ -761,20 +752,16 @@ static int vtd_page_walk_level(dma_addr_t addr, uint64_t start,
         write_cur = write && (slpte & VTD_SL_W);
 
         /*
-         * As long as we have either read/write permission, this is
-         * a valid entry. The rule works for both page or page tables.
+         * As long as we have either read/write permission, this is a
+         * valid entry. The rule works for both page entries and page
+         * table entries.
          */
         entry_valid = read_cur | write_cur;
 
         if (vtd_is_last_slpte(slpte, level)) {
             entry.target_as = &address_space_memory;
             entry.iova = iova & subpage_mask;
-            /*
-             * This might be meaningless addr if (!read_cur &&
-             * !write_cur), but after all this field will be
-             * meaningless in that case, so let's share the code to
-             * generate the IOTLBs no matter it's an MAP or UNMAP
-             */
+            /* NOTE: this is only meaningful if entry_valid == true */
             entry.translated_addr = vtd_get_slpte_addr(slpte);
             entry.addr_mask = ~subpage_mask;
             entry.perm = IOMMU_ACCESS_FLAG(read_cur, write_cur);
Thanks,
-- peterx