* [PATCH 0/6] x86/mm: Minor non-functional cleanup
@ 2018-08-15 18:34 Andrew Cooper
2018-08-15 18:34 ` [PATCH 1/6] x86/mm: Use mfn_eq()/mfn_add() rather than opencoded variations Andrew Cooper
` (6 more replies)
0 siblings, 7 replies; 20+ messages in thread
From: Andrew Cooper @ 2018-08-15 18:34 UTC (permalink / raw)
To: Xen-devel
Cc: Wei Liu, George Dunlap, Andrew Cooper, Tim Deegan, Jan Beulich,
Roger Pau Monné
Minor cleanup which has accumulated while doing other work. No functional
change anywhere.
Andrew Cooper (6):
x86/mm: Use mfn_eq()/mfn_add() rather than opencoded variations
x86/shadow: Use more appropriate conversion functions
x86/shadow: Switch shadow_domain.has_fast_mmio_entries to bool
x86/shadow: Use MASK_* helpers for the MMIO fastpath PTE manipulation
x86/shadow: Clean up the MMIO fastpath helpers
x86/shadow: Use mfn_t in shadow_track_dirty_vram()
xen/arch/x86/cpu/mcheck/vmce.c | 2 +-
xen/arch/x86/domain_page.c | 2 +-
xen/arch/x86/mm/hap/hap.c | 3 ++-
xen/arch/x86/mm/mem_sharing.c | 4 ++--
xen/arch/x86/mm/p2m-pod.c | 2 +-
xen/arch/x86/mm/p2m.c | 4 ++--
xen/arch/x86/mm/shadow/common.c | 44 ++++++++++++++++++++---------------------
xen/arch/x86/mm/shadow/multi.c | 37 +++++++++++++++++-----------------
xen/arch/x86/mm/shadow/types.h | 27 +++++++++++++------------
xen/include/asm-x86/domain.h | 2 +-
10 files changed, 64 insertions(+), 63 deletions(-)
--
2.1.4
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH 1/6] x86/mm: Use mfn_eq()/mfn_add() rather than opencoded variations
2018-08-15 18:34 [PATCH 0/6] x86/mm: Minor non-functional cleanup Andrew Cooper
@ 2018-08-15 18:34 ` Andrew Cooper
2018-08-16 16:07 ` Roger Pau Monné
2018-08-17 12:54 ` Jan Beulich
2018-08-15 18:34 ` [PATCH 2/6] x86/shadow: Use more appropriate conversion functions Andrew Cooper
` (5 subsequent siblings)
6 siblings, 2 replies; 20+ messages in thread
From: Andrew Cooper @ 2018-08-15 18:34 UTC (permalink / raw)
To: Xen-devel
Cc: Wei Liu, George Dunlap, Andrew Cooper, Tim Deegan, Jan Beulich,
Roger Pau Monné
Use l1e_get_mfn() in place of l1e_get_pfn() when applicable, and fix up style
on affected lines.
For sh_remove_shadow_via_pointer(), map_domain_page() is guaranteed to succeed
so there is no need to ASSERT() its success. This allows the pointer
arithmetic to be folded into the previous expression, and for vaddr to be
properly typed as l1_pgentry_t, avoiding the cast in l1e_get_mfn().
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
CC: George Dunlap <george.dunlap@eu.citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
xen/arch/x86/cpu/mcheck/vmce.c | 2 +-
xen/arch/x86/domain_page.c | 2 +-
xen/arch/x86/mm/hap/hap.c | 3 ++-
xen/arch/x86/mm/mem_sharing.c | 4 ++--
xen/arch/x86/mm/p2m-pod.c | 2 +-
xen/arch/x86/mm/p2m.c | 4 ++--
xen/arch/x86/mm/shadow/common.c | 34 ++++++++++++++++------------------
xen/arch/x86/mm/shadow/multi.c | 23 +++++++++++++----------
8 files changed, 38 insertions(+), 36 deletions(-)
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index e07cd2f..ea37006 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -540,7 +540,7 @@ int unmmap_broken_page(struct domain *d, mfn_t mfn, unsigned long gfn)
r_mfn = get_gfn_query(d, gfn, &pt);
if ( p2m_to_mask(pt) & P2M_UNMAP_TYPES)
{
- ASSERT(mfn_x(r_mfn) == mfn_x(mfn));
+ ASSERT(mfn_eq(r_mfn, mfn));
rc = p2m_change_type_one(d, gfn, pt, p2m_ram_broken);
}
put_gfn(d, gfn);
diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index 0c24530..aee9a80 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -101,7 +101,7 @@ void *map_domain_page(mfn_t mfn)
ASSERT(idx < dcache->entries);
hashent->refcnt++;
ASSERT(hashent->refcnt);
- ASSERT(l1e_get_pfn(MAPCACHE_L1ENT(idx)) == mfn_x(mfn));
+ ASSERT(mfn_eq(l1e_get_mfn(MAPCACHE_L1ENT(idx)), mfn));
goto out;
}
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 812a840..d6449e6 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -729,7 +729,8 @@ hap_write_p2m_entry(struct domain *d, unsigned long gfn, l1_pgentry_t *p,
* unless the only change is an increase in access rights. */
mfn_t omfn = l1e_get_mfn(*p);
mfn_t nmfn = l1e_get_mfn(new);
- flush_nestedp2m = !( mfn_x(omfn) == mfn_x(nmfn)
+
+ flush_nestedp2m = !(mfn_eq(omfn, nmfn)
&& perms_strictly_increased(old_flags, l1e_get_flags(new)) );
}
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index fad8a9d..5c08adb 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -500,7 +500,7 @@ static int audit(void)
continue;
}
o_mfn = get_gfn_query_unlocked(d, g->gfn, &t);
- if ( mfn_x(o_mfn) != mfn_x(mfn) )
+ if ( !mfn_eq(o_mfn, mfn) )
{
MEM_SHARING_DEBUG("Incorrect P2M for d=%hu, PFN=%lx."
"Expecting MFN=%lx, got %lx\n",
@@ -904,7 +904,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
/* This tricky business is to avoid two callers deadlocking if
* grabbing pages in opposite client/source order */
- if( mfn_x(smfn) == mfn_x(cmfn) )
+ if ( mfn_eq(smfn, cmfn) )
{
/* The pages are already the same. We could return some
* kind of error here, but no matter how you look at it,
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 631e9ae..ba37344 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -75,7 +75,7 @@ p2m_pod_cache_add(struct p2m_domain *p2m,
{
struct domain * od;
- p = mfn_to_page(_mfn(mfn_x(mfn) + i));
+ p = mfn_to_page(mfn_add(mfn, i));
od = page_get_owner(p);
if ( od != d )
{
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 8e9fbb5..5c73ff8 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1104,7 +1104,7 @@ static int set_typed_p2m_entry(struct domain *d, unsigned long gfn_l,
for ( i = 0; i < (1UL << order); ++i )
{
- ASSERT(mfn_valid(_mfn(mfn_x(omfn) + i)));
+ ASSERT(mfn_valid(mfn_add(omfn, i)));
set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
}
}
@@ -1222,7 +1222,7 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn_l, mfn_t mfn,
"gfn_to_mfn failed! gfn=%08lx type:%d\n", gfn_l, t);
goto out;
}
- if ( mfn_x(mfn) != mfn_x(actual_mfn) )
+ if ( !mfn_eq(mfn, actual_mfn) )
gdprintk(XENLOG_WARNING,
"no mapping between mfn %08lx and gfn %08lx\n",
mfn_x(mfn), gfn_l);
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index fd42d73..8a7a2b0 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -564,10 +564,10 @@ void oos_audit_hash_is_present(struct domain *d, mfn_t gmfn)
{
oos = v->arch.paging.shadow.oos;
idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
- if ( mfn_x(oos[idx]) != mfn_x(gmfn) )
+ if ( !mfn_eq(oos[idx], gmfn) )
idx = (idx + 1) % SHADOW_OOS_PAGES;
- if ( mfn_x(oos[idx]) == mfn_x(gmfn) )
+ if ( mfn_eq(oos[idx], gmfn) )
return;
}
@@ -635,15 +635,15 @@ void oos_fixup_add(struct domain *d, mfn_t gmfn,
oos = v->arch.paging.shadow.oos;
oos_fixup = v->arch.paging.shadow.oos_fixup;
idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
- if ( mfn_x(oos[idx]) != mfn_x(gmfn) )
+ if ( !mfn_eq(oos[idx], gmfn) )
idx = (idx + 1) % SHADOW_OOS_PAGES;
- if ( mfn_x(oos[idx]) == mfn_x(gmfn) )
+ if ( mfn_eq(oos[idx], gmfn) )
{
int i;
for ( i = 0; i < SHADOW_OOS_FIXUPS; i++ )
{
if ( mfn_valid(oos_fixup[idx].smfn[i])
- && (mfn_x(oos_fixup[idx].smfn[i]) == mfn_x(smfn))
+ && mfn_eq(oos_fixup[idx].smfn[i], smfn)
&& (oos_fixup[idx].off[i] == off) )
return;
}
@@ -816,9 +816,9 @@ static void oos_hash_remove(struct domain *d, mfn_t gmfn)
{
oos = v->arch.paging.shadow.oos;
idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
- if ( mfn_x(oos[idx]) != mfn_x(gmfn) )
+ if ( !mfn_eq(oos[idx], gmfn) )
idx = (idx + 1) % SHADOW_OOS_PAGES;
- if ( mfn_x(oos[idx]) == mfn_x(gmfn) )
+ if ( mfn_eq(oos[idx], gmfn) )
{
oos[idx] = INVALID_MFN;
return;
@@ -841,9 +841,9 @@ mfn_t oos_snapshot_lookup(struct domain *d, mfn_t gmfn)
oos = v->arch.paging.shadow.oos;
oos_snapshot = v->arch.paging.shadow.oos_snapshot;
idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
- if ( mfn_x(oos[idx]) != mfn_x(gmfn) )
+ if ( !mfn_eq(oos[idx], gmfn) )
idx = (idx + 1) % SHADOW_OOS_PAGES;
- if ( mfn_x(oos[idx]) == mfn_x(gmfn) )
+ if ( mfn_eq(oos[idx], gmfn) )
{
return oos_snapshot[idx];
}
@@ -868,10 +868,10 @@ void sh_resync(struct domain *d, mfn_t gmfn)
oos_fixup = v->arch.paging.shadow.oos_fixup;
oos_snapshot = v->arch.paging.shadow.oos_snapshot;
idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
- if ( mfn_x(oos[idx]) != mfn_x(gmfn) )
+ if ( !mfn_eq(oos[idx], gmfn) )
idx = (idx + 1) % SHADOW_OOS_PAGES;
- if ( mfn_x(oos[idx]) == mfn_x(gmfn) )
+ if ( mfn_eq(oos[idx], gmfn) )
{
_sh_resync(v, gmfn, &oos_fixup[idx], oos_snapshot[idx]);
oos[idx] = INVALID_MFN;
@@ -2749,7 +2749,7 @@ static int sh_remove_shadow_via_pointer(struct domain *d, mfn_t smfn)
{
struct page_info *sp = mfn_to_page(smfn);
mfn_t pmfn;
- void *vaddr;
+ l1_pgentry_t *vaddr;
int rc;
ASSERT(sp->u.sh.type > 0);
@@ -2759,10 +2759,8 @@ static int sh_remove_shadow_via_pointer(struct domain *d, mfn_t smfn)
if (sp->up == 0) return 0;
pmfn = maddr_to_mfn(sp->up);
ASSERT(mfn_valid(pmfn));
- vaddr = map_domain_page(pmfn);
- ASSERT(vaddr);
- vaddr += sp->up & (PAGE_SIZE-1);
- ASSERT(l1e_get_pfn(*(l1_pgentry_t *)vaddr) == mfn_x(smfn));
+ vaddr = map_domain_page(pmfn) + (sp->up & (PAGE_SIZE - 1));
+ ASSERT(mfn_eq(l1e_get_mfn(*vaddr), smfn));
/* Is this the only reference to this shadow? */
rc = (sp->u.sh.count == 1) ? 1 : 0;
@@ -3646,7 +3644,7 @@ static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
{
if ( !npte
|| !p2m_is_ram(p2m_flags_to_type(l1e_get_flags(npte[i])))
- || l1e_get_pfn(npte[i]) != mfn_x(omfn) )
+ || !mfn_eq(l1e_get_mfn(npte[i]), omfn) )
{
/* This GFN->MFN mapping has gone away */
sh_remove_all_shadows_and_parents(d, omfn);
@@ -3654,7 +3652,7 @@ static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
_gfn(gfn + (i << PAGE_SHIFT))) )
cpumask_or(&flushmask, &flushmask, d->dirty_cpumask);
}
- omfn = _mfn(mfn_x(omfn) + 1);
+ omfn = mfn_add(omfn, 1);
}
flush_tlb_mask(&flushmask);
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 021ae25..0d74c01 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -960,7 +960,8 @@ static int shadow_set_l4e(struct domain *d,
{
/* We lost a reference to an old mfn. */
mfn_t osl3mfn = shadow_l4e_get_mfn(old_sl4e);
- if ( (mfn_x(osl3mfn) != mfn_x(shadow_l4e_get_mfn(new_sl4e)))
+
+ if ( mfn_eq(osl3mfn, shadow_l4e_get_mfn(new_sl4e))
|| !perms_strictly_increased(shadow_l4e_get_flags(old_sl4e),
shadow_l4e_get_flags(new_sl4e)) )
{
@@ -1006,7 +1007,8 @@ static int shadow_set_l3e(struct domain *d,
{
/* We lost a reference to an old mfn. */
mfn_t osl2mfn = shadow_l3e_get_mfn(old_sl3e);
- if ( (mfn_x(osl2mfn) != mfn_x(shadow_l3e_get_mfn(new_sl3e))) ||
+
+ if ( !mfn_eq(osl2mfn, shadow_l3e_get_mfn(new_sl3e)) ||
!perms_strictly_increased(shadow_l3e_get_flags(old_sl3e),
shadow_l3e_get_flags(new_sl3e)) )
{
@@ -1091,7 +1093,8 @@ static int shadow_set_l2e(struct domain *d,
{
/* We lost a reference to an old mfn. */
mfn_t osl1mfn = shadow_l2e_get_mfn(old_sl2e);
- if ( (mfn_x(osl1mfn) != mfn_x(shadow_l2e_get_mfn(new_sl2e))) ||
+
+ if ( !mfn_eq(osl1mfn, shadow_l2e_get_mfn(new_sl2e)) ||
!perms_strictly_increased(shadow_l2e_get_flags(old_sl2e),
shadow_l2e_get_flags(new_sl2e)) )
{
@@ -2447,7 +2450,7 @@ sh_map_and_validate(struct vcpu *v, mfn_t gmfn,
smfn2 = smfn;
guest_idx = guest_index(new_gp);
shadow_idx = shadow_index(&smfn2, guest_idx);
- if ( mfn_x(smfn2) != mfn_x(map_mfn) )
+ if ( !mfn_eq(smfn2, map_mfn) )
{
/* We have moved to another page of the shadow */
map_mfn = smfn2;
@@ -4272,7 +4275,7 @@ int sh_rm_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
sl1e = *sl1p;
if ( ((shadow_l1e_get_flags(sl1e) & (_PAGE_PRESENT|_PAGE_RW))
!= (_PAGE_PRESENT|_PAGE_RW))
- || (mfn_x(shadow_l1e_get_mfn(sl1e)) != mfn_x(gmfn)) )
+ || !mfn_eq(shadow_l1e_get_mfn(sl1e), gmfn) )
{
unmap_domain_page(sl1p);
goto fail;
@@ -4341,7 +4344,7 @@ static int sh_guess_wrmap(struct vcpu *v, unsigned long vaddr, mfn_t gmfn)
sl1e = *sl1p;
if ( ((shadow_l1e_get_flags(sl1e) & (_PAGE_PRESENT|_PAGE_RW))
!= (_PAGE_PRESENT|_PAGE_RW))
- || (mfn_x(shadow_l1e_get_mfn(sl1e)) != mfn_x(gmfn)) )
+ || !mfn_eq(shadow_l1e_get_mfn(sl1e), gmfn) )
return 0;
/* Found it! Need to remove its write permissions. */
@@ -4753,7 +4756,7 @@ int sh_audit_l1_table(struct vcpu *v, mfn_t sl1mfn, mfn_t x)
gfn = guest_l1e_get_gfn(*gl1e);
mfn = shadow_l1e_get_mfn(*sl1e);
gmfn = get_gfn_query_unlocked(v->domain, gfn_x(gfn), &p2mt);
- if ( !p2m_is_grant(p2mt) && mfn_x(gmfn) != mfn_x(mfn) )
+ if ( !p2m_is_grant(p2mt) && !mfn_eq(gmfn, mfn) )
AUDIT_FAIL(1, "bad translation: gfn %" SH_PRI_gfn
" --> %" PRI_mfn " != mfn %" PRI_mfn,
gfn_x(gfn), mfn_x(gmfn), mfn_x(mfn));
@@ -4827,7 +4830,7 @@ int sh_audit_l2_table(struct vcpu *v, mfn_t sl2mfn, mfn_t x)
: get_shadow_status(d,
get_gfn_query_unlocked(d, gfn_x(gfn),
&p2mt), SH_type_l1_shadow);
- if ( mfn_x(gmfn) != mfn_x(mfn) )
+ if ( !mfn_eq(gmfn, mfn) )
AUDIT_FAIL(2, "bad translation: gfn %" SH_PRI_gfn
" (--> %" PRI_mfn ")"
" --> %" PRI_mfn " != mfn %" PRI_mfn,
@@ -4882,7 +4885,7 @@ int sh_audit_l3_table(struct vcpu *v, mfn_t sl3mfn, mfn_t x)
&& (guest_index(gl3e) % 4) == 3)
? SH_type_l2h_shadow
: SH_type_l2_shadow);
- if ( mfn_x(gmfn) != mfn_x(mfn) )
+ if ( !mfn_eq(gmfn, mfn) )
AUDIT_FAIL(3, "bad translation: gfn %" SH_PRI_gfn
" --> %" PRI_mfn " != mfn %" PRI_mfn,
gfn_x(gfn), mfn_x(gmfn), mfn_x(mfn));
@@ -4927,7 +4930,7 @@ int sh_audit_l4_table(struct vcpu *v, mfn_t sl4mfn, mfn_t x)
gmfn = get_shadow_status(d, get_gfn_query_unlocked(
d, gfn_x(gfn), &p2mt),
SH_type_l3_shadow);
- if ( mfn_x(gmfn) != mfn_x(mfn) )
+ if ( !mfn_eq(gmfn, mfn) )
AUDIT_FAIL(4, "bad translation: gfn %" SH_PRI_gfn
" --> %" PRI_mfn " != mfn %" PRI_mfn,
gfn_x(gfn), mfn_x(gmfn), mfn_x(mfn));
--
2.1.4
* [PATCH 2/6] x86/shadow: Use more appropriate conversion functions
2018-08-15 18:34 [PATCH 0/6] x86/mm: Minor non-functional cleanup Andrew Cooper
2018-08-15 18:34 ` [PATCH 1/6] x86/mm: Use mfn_eq()/mfn_add() rather than opencoded variations Andrew Cooper
@ 2018-08-15 18:34 ` Andrew Cooper
2018-08-16 16:08 ` Roger Pau Monné
2018-08-21 10:02 ` Wei Liu
2018-08-15 18:34 ` [PATCH 3/6] x86/shadow: Switch shadow_domain.has_fast_mmio_entries to bool Andrew Cooper
` (4 subsequent siblings)
6 siblings, 2 replies; 20+ messages in thread
From: Andrew Cooper @ 2018-08-15 18:34 UTC (permalink / raw)
To: Xen-devel
Cc: Andrew Cooper, Tim Deegan, Wei Liu, Jan Beulich,
Roger Pau Monné
Replace pfn_to_paddr(mfn_x(...)) with mfn_to_maddr(), and replace an opencoded
gfn_to_gaddr().
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
xen/arch/x86/mm/shadow/multi.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 0d74c01..fbdbb7d 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -628,7 +628,7 @@ _sh_propagate(struct vcpu *v,
sflags |= get_pat_flags(v,
gflags,
gfn_to_paddr(target_gfn),
- pfn_to_paddr(mfn_x(target_mfn)),
+ mfn_to_maddr(target_mfn),
MTRR_TYPE_UNCACHABLE);
else if ( iommu_snoop )
sflags |= pat_type_2_pte_flags(PAT_TYPE_WRBACK);
@@ -636,7 +636,7 @@ _sh_propagate(struct vcpu *v,
sflags |= get_pat_flags(v,
gflags,
gfn_to_paddr(target_gfn),
- pfn_to_paddr(mfn_x(target_mfn)),
+ mfn_to_maddr(target_mfn),
NO_HARDCODE_MEM_TYPE);
}
}
@@ -1131,7 +1131,7 @@ static inline void shadow_vram_get_l1e(shadow_l1e_t new_sl1e,
if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
/* Initial guest reference, record it */
- dirty_vram->sl1ma[i] = pfn_to_paddr(mfn_x(sl1mfn))
+ dirty_vram->sl1ma[i] = mfn_to_maddr(sl1mfn)
| ((unsigned long)sl1e & ~PAGE_MASK);
}
}
@@ -1160,7 +1160,7 @@ static inline void shadow_vram_put_l1e(shadow_l1e_t old_sl1e,
unsigned long i = gfn - dirty_vram->begin_pfn;
struct page_info *page = mfn_to_page(mfn);
int dirty = 0;
- paddr_t sl1ma = pfn_to_paddr(mfn_x(sl1mfn))
+ paddr_t sl1ma = mfn_to_maddr(sl1mfn)
| ((unsigned long)sl1e & ~PAGE_MASK);
if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
@@ -2931,8 +2931,7 @@ static int sh_page_fault(struct vcpu *v,
{
/* Magic MMIO marker: extract gfn for MMIO address */
ASSERT(sh_l1e_is_mmio(sl1e));
- gpa = (((paddr_t)(gfn_x(sh_l1e_mmio_get_gfn(sl1e))))
- << PAGE_SHIFT)
+ gpa = gfn_to_gaddr(sh_l1e_mmio_get_gfn(sl1e))
| (va & ~PAGE_MASK);
}
perfc_incr(shadow_fault_fast_mmio);
--
2.1.4
* [PATCH 3/6] x86/shadow: Switch shadow_domain.has_fast_mmio_entries to bool
2018-08-15 18:34 [PATCH 0/6] x86/mm: Minor non-functional cleanup Andrew Cooper
2018-08-15 18:34 ` [PATCH 1/6] x86/mm: Use mfn_eq()/mfn_add() rather than opencoded variations Andrew Cooper
2018-08-15 18:34 ` [PATCH 2/6] x86/shadow: Use more appropriate conversion functions Andrew Cooper
@ 2018-08-15 18:34 ` Andrew Cooper
2018-08-16 16:09 ` Roger Pau Monné
2018-08-17 12:57 ` Jan Beulich
2018-08-15 18:34 ` [PATCH 4/6] x86/shadow: Use MASK_* helpers for the MMIO fastpath PTE manipulation Andrew Cooper
` (3 subsequent siblings)
6 siblings, 2 replies; 20+ messages in thread
From: Andrew Cooper @ 2018-08-15 18:34 UTC (permalink / raw)
To: Xen-devel
Cc: Andrew Cooper, Tim Deegan, Wei Liu, Jan Beulich,
Roger Pau Monné
Remove an unnecessary if().
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
xen/arch/x86/mm/shadow/common.c | 2 +-
xen/arch/x86/mm/shadow/multi.c | 3 +--
xen/include/asm-x86/domain.h | 2 +-
3 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 8a7a2b0..c9640b9 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3687,7 +3687,7 @@ shadow_write_p2m_entry(struct domain *d, unsigned long gfn,
if ( d->arch.paging.shadow.has_fast_mmio_entries )
{
shadow_blow_tables(d);
- d->arch.paging.shadow.has_fast_mmio_entries = 0;
+ d->arch.paging.shadow.has_fast_mmio_entries = false;
}
#endif
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index fbdbb7d..8f90c9f 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -563,8 +563,7 @@ _sh_propagate(struct vcpu *v,
{
/* Guest l1e maps emulated MMIO space */
*sp = sh_l1e_mmio(target_gfn, gflags);
- if ( !d->arch.paging.shadow.has_fast_mmio_entries )
- d->arch.paging.shadow.has_fast_mmio_entries = 1;
+ d->arch.paging.shadow.has_fast_mmio_entries = true;
goto done;
}
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 09f6b3d..3da2c68 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -113,7 +113,7 @@ struct shadow_domain {
bool_t hash_walking; /* Some function is walking the hash table */
/* Fast MMIO path heuristic */
- bool_t has_fast_mmio_entries;
+ bool has_fast_mmio_entries;
/* OOS */
bool_t oos_active;
--
2.1.4
* [PATCH 4/6] x86/shadow: Use MASK_* helpers for the MMIO fastpath PTE manipulation
2018-08-15 18:34 [PATCH 0/6] x86/mm: Minor non-functional cleanup Andrew Cooper
` (2 preceding siblings ...)
2018-08-15 18:34 ` [PATCH 3/6] x86/shadow: Switch shadow_domain.has_fast_mmio_entries to bool Andrew Cooper
@ 2018-08-15 18:34 ` Andrew Cooper
2018-08-16 16:12 ` Roger Pau Monné
2018-08-21 10:04 ` Wei Liu
2018-08-15 18:34 ` [PATCH 5/6] x86/shadow: Clean up the MMIO fastpath helpers Andrew Cooper
` (2 subsequent siblings)
6 siblings, 2 replies; 20+ messages in thread
From: Andrew Cooper @ 2018-08-15 18:34 UTC (permalink / raw)
To: Xen-devel
Cc: Andrew Cooper, Tim Deegan, Wei Liu, Jan Beulich,
Roger Pau Monné
Drop the now-unused SH_L1E_MMIO_GFN_SHIFT definition.
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
xen/arch/x86/mm/shadow/types.h | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/xen/arch/x86/mm/shadow/types.h b/xen/arch/x86/mm/shadow/types.h
index 0430628..8c0c802 100644
--- a/xen/arch/x86/mm/shadow/types.h
+++ b/xen/arch/x86/mm/shadow/types.h
@@ -317,12 +317,11 @@ static inline int sh_l1e_is_gnp(shadow_l1e_t sl1e)
#define SH_L1E_MMIO_MAGIC 0xffffffff00000001ULL
#define SH_L1E_MMIO_MAGIC_MASK 0xffffffff00000009ULL
#define SH_L1E_MMIO_GFN_MASK 0x00000000fffffff0ULL
-#define SH_L1E_MMIO_GFN_SHIFT 4
static inline shadow_l1e_t sh_l1e_mmio(gfn_t gfn, u32 gflags)
{
return (shadow_l1e_t) { (SH_L1E_MMIO_MAGIC
- | (gfn_x(gfn) << SH_L1E_MMIO_GFN_SHIFT)
+ | MASK_INSR(gfn_x(gfn), SH_L1E_MMIO_GFN_MASK)
| (gflags & (_PAGE_USER|_PAGE_RW))) };
}
@@ -333,7 +332,7 @@ static inline int sh_l1e_is_mmio(shadow_l1e_t sl1e)
static inline gfn_t sh_l1e_mmio_get_gfn(shadow_l1e_t sl1e)
{
- return _gfn((sl1e.l1 & SH_L1E_MMIO_GFN_MASK) >> SH_L1E_MMIO_GFN_SHIFT);
+ return _gfn(MASK_EXTR(sl1e.l1, SH_L1E_MMIO_GFN_MASK));
}
static inline u32 sh_l1e_mmio_get_flags(shadow_l1e_t sl1e)
--
2.1.4
* [PATCH 5/6] x86/shadow: Clean up the MMIO fastpath helpers
2018-08-15 18:34 [PATCH 0/6] x86/mm: Minor non-functional cleanup Andrew Cooper
` (3 preceding siblings ...)
2018-08-15 18:34 ` [PATCH 4/6] x86/shadow: Use MASK_* helpers for the MMIO fastpath PTE manipulation Andrew Cooper
@ 2018-08-15 18:34 ` Andrew Cooper
2018-08-16 16:16 ` Roger Pau Monné
2018-08-21 10:05 ` Wei Liu
2018-08-15 18:34 ` [PATCH 6/6] x86/shadow: Use mfn_t in shadow_track_dirty_vram() Andrew Cooper
2018-08-16 19:00 ` [PATCH 0/6] x86/mm: Minor non-functional cleanup Tim Deegan
6 siblings, 2 replies; 20+ messages in thread
From: Andrew Cooper @ 2018-08-15 18:34 UTC (permalink / raw)
To: Xen-devel
Cc: Andrew Cooper, Tim Deegan, Wei Liu, Jan Beulich,
Roger Pau Monné
Use bool where appropriate, remove extraneous brackets, and fix up comment
style.
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
xen/arch/x86/mm/shadow/types.h | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/xen/arch/x86/mm/shadow/types.h b/xen/arch/x86/mm/shadow/types.h
index 8c0c802..d509674 100644
--- a/xen/arch/x86/mm/shadow/types.h
+++ b/xen/arch/x86/mm/shadow/types.h
@@ -294,9 +294,9 @@ void sh_destroy_monitor_table(struct vcpu *v, mfn_t mmfn);
*/
#define SH_L1E_MAGIC 0xffffffff00000001ULL
-static inline int sh_l1e_is_magic(shadow_l1e_t sl1e)
+static inline bool sh_l1e_is_magic(shadow_l1e_t sl1e)
{
- return ((sl1e.l1 & SH_L1E_MAGIC) == SH_L1E_MAGIC);
+ return (sl1e.l1 & SH_L1E_MAGIC) == SH_L1E_MAGIC;
}
/* Guest not present: a single magic value */
@@ -305,15 +305,17 @@ static inline shadow_l1e_t sh_l1e_gnp(void)
return (shadow_l1e_t){ -1ULL };
}
-static inline int sh_l1e_is_gnp(shadow_l1e_t sl1e)
+static inline bool sh_l1e_is_gnp(shadow_l1e_t sl1e)
{
- return (sl1e.l1 == sh_l1e_gnp().l1);
+ return sl1e.l1 == sh_l1e_gnp().l1;
}
-/* MMIO: an invalid PTE that contains the GFN of the equivalent guest l1e.
+/*
+ * MMIO: an invalid PTE that contains the GFN of the equivalent guest l1e.
* We store 28 bits of GFN in bits 4:32 of the entry.
* The present bit is set, and the U/S and R/W bits are taken from the guest.
- * Bit 3 is always 0, to differentiate from gnp above. */
+ * Bit 3 is always 0, to differentiate from gnp above.
+ */
#define SH_L1E_MMIO_MAGIC 0xffffffff00000001ULL
#define SH_L1E_MMIO_MAGIC_MASK 0xffffffff00000009ULL
#define SH_L1E_MMIO_GFN_MASK 0x00000000fffffff0ULL
@@ -325,9 +327,9 @@ static inline shadow_l1e_t sh_l1e_mmio(gfn_t gfn, u32 gflags)
| (gflags & (_PAGE_USER|_PAGE_RW))) };
}
-static inline int sh_l1e_is_mmio(shadow_l1e_t sl1e)
+static inline bool sh_l1e_is_mmio(shadow_l1e_t sl1e)
{
- return ((sl1e.l1 & SH_L1E_MMIO_MAGIC_MASK) == SH_L1E_MMIO_MAGIC);
+ return (sl1e.l1 & SH_L1E_MMIO_MAGIC_MASK) == SH_L1E_MMIO_MAGIC;
}
static inline gfn_t sh_l1e_mmio_get_gfn(shadow_l1e_t sl1e)
@@ -335,9 +337,9 @@ static inline gfn_t sh_l1e_mmio_get_gfn(shadow_l1e_t sl1e)
return _gfn(MASK_EXTR(sl1e.l1, SH_L1E_MMIO_GFN_MASK));
}
-static inline u32 sh_l1e_mmio_get_flags(shadow_l1e_t sl1e)
+static inline uint32_t sh_l1e_mmio_get_flags(shadow_l1e_t sl1e)
{
- return (u32)((sl1e.l1 & (_PAGE_USER|_PAGE_RW)));
+ return sl1e.l1 & (_PAGE_USER | _PAGE_RW);
}
#else
--
2.1.4
* [PATCH 6/6] x86/shadow: Use mfn_t in shadow_track_dirty_vram()
2018-08-15 18:34 [PATCH 0/6] x86/mm: Minor non-functional cleanup Andrew Cooper
` (4 preceding siblings ...)
2018-08-15 18:34 ` [PATCH 5/6] x86/shadow: Clean up the MMIO fastpath helpers Andrew Cooper
@ 2018-08-15 18:34 ` Andrew Cooper
2018-08-16 16:18 ` Roger Pau Monné
2018-08-21 10:05 ` Wei Liu
2018-08-16 19:00 ` [PATCH 0/6] x86/mm: Minor non-functional cleanup Tim Deegan
6 siblings, 2 replies; 20+ messages in thread
From: Andrew Cooper @ 2018-08-15 18:34 UTC (permalink / raw)
To: Xen-devel
Cc: Andrew Cooper, Tim Deegan, Wei Liu, Jan Beulich,
Roger Pau Monné
... as the only user of sl1mfn would prefer it that way.
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
xen/arch/x86/mm/shadow/common.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index c9640b9..28d1dd4 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3834,7 +3834,7 @@ int shadow_track_dirty_vram(struct domain *d,
memcpy(dirty_bitmap, dirty_vram->dirty_bitmap, dirty_size);
else
{
- unsigned long map_mfn = mfn_x(INVALID_MFN);
+ mfn_t map_mfn = INVALID_MFN;
void *map_sl1p = NULL;
/* Iterate over VRAM to track dirty bits. */
@@ -3872,13 +3872,13 @@ int shadow_track_dirty_vram(struct domain *d,
/* Hopefully the most common case: only one mapping,
* whose dirty bit we can use. */
l1_pgentry_t *sl1e;
- unsigned long sl1mfn = paddr_to_pfn(sl1ma);
+ mfn_t sl1mfn = maddr_to_mfn(sl1ma);
- if ( sl1mfn != map_mfn )
+ if ( !mfn_eq(sl1mfn, map_mfn) )
{
if ( map_sl1p )
unmap_domain_page(map_sl1p);
- map_sl1p = map_domain_page(_mfn(sl1mfn));
+ map_sl1p = map_domain_page(sl1mfn);
map_mfn = sl1mfn;
}
sl1e = map_sl1p + (sl1ma & ~PAGE_MASK);
--
2.1.4
* Re: [PATCH 1/6] x86/mm: Use mfn_eq()/mfn_add() rather than opencoded variations
2018-08-15 18:34 ` [PATCH 1/6] x86/mm: Use mfn_eq()/mfn_add() rather than opencoded variations Andrew Cooper
@ 2018-08-16 16:07 ` Roger Pau Monné
2018-08-17 12:54 ` Jan Beulich
1 sibling, 0 replies; 20+ messages in thread
From: Roger Pau Monné @ 2018-08-16 16:07 UTC (permalink / raw)
To: Andrew Cooper; +Cc: George Dunlap, Wei Liu, Tim Deegan, Jan Beulich, Xen-devel
On Wed, Aug 15, 2018 at 07:34:32PM +0100, Andrew Cooper wrote:
> Use l1e_get_mfn() in place of l1e_get_pfn() when applicable, and fix up style
> on affected lines.
>
> For sh_remove_shadow_via_pointer(), map_domain_page() is guaranteed to succeed
> so there is no need to ASSERT() its success. This allows the pointer
> arithmetic to be folded into the previous expression, and for vaddr to be
> properly typed as l1_pgentry_t, avoiding the cast in l1e_get_mfn().
>
> No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
With one change:
> diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
> index 021ae25..0d74c01 100644
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -960,7 +960,8 @@ static int shadow_set_l4e(struct domain *d,
> {
> /* We lost a reference to an old mfn. */
> mfn_t osl3mfn = shadow_l4e_get_mfn(old_sl4e);
> - if ( (mfn_x(osl3mfn) != mfn_x(shadow_l4e_get_mfn(new_sl4e)))
> +
> + if ( mfn_eq(osl3mfn, shadow_l4e_get_mfn(new_sl4e))
I think this should be !mfn_eq.
Roger.
* Re: [PATCH 2/6] x86/shadow: Use more appropriate conversion functions
2018-08-15 18:34 ` [PATCH 2/6] x86/shadow: Use more appropriate conversion functions Andrew Cooper
@ 2018-08-16 16:08 ` Roger Pau Monné
2018-08-21 10:02 ` Wei Liu
1 sibling, 0 replies; 20+ messages in thread
From: Roger Pau Monné @ 2018-08-16 16:08 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Wei Liu, Tim Deegan, Jan Beulich, Xen-devel
On Wed, Aug 15, 2018 at 07:34:33PM +0100, Andrew Cooper wrote:
> Replace pfn_to_paddr(mfn_x(...)) with mfn_to_maddr(), and replace an opencoded
> gfn_to_gaddr().
>
> No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
* Re: [PATCH 3/6] x86/shadow: Switch shadow_domain.has_fast_mmio_entries to bool
2018-08-15 18:34 ` [PATCH 3/6] x86/shadow: Switch shadow_domain.has_fast_mmio_entries to bool Andrew Cooper
@ 2018-08-16 16:09 ` Roger Pau Monné
2018-08-17 12:57 ` Jan Beulich
1 sibling, 0 replies; 20+ messages in thread
From: Roger Pau Monné @ 2018-08-16 16:09 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Wei Liu, Tim Deegan, Jan Beulich, Xen-devel
On Wed, Aug 15, 2018 at 07:34:34PM +0100, Andrew Cooper wrote:
> Remove an unnecessary if().
>
> No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
* Re: [PATCH 4/6] x86/shadow: Use MASK_* helpers for the MMIO fastpath PTE manipulation
2018-08-15 18:34 ` [PATCH 4/6] x86/shadow: Use MASK_* helpers for the MMIO fastpath PTE manipulation Andrew Cooper
@ 2018-08-16 16:12 ` Roger Pau Monné
2018-08-21 10:04 ` Wei Liu
1 sibling, 0 replies; 20+ messages in thread
From: Roger Pau Monné @ 2018-08-16 16:12 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Wei Liu, Tim Deegan, Jan Beulich, Xen-devel
On Wed, Aug 15, 2018 at 07:34:35PM +0100, Andrew Cooper wrote:
> Drop the now-unused SH_L1E_MMIO_GFN_SHIFT definition.
>
> No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
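For readers outside the tree: the `MASK_*` helpers use the mask's lowest set bit as the scale factor, so a field can be extracted or inserted without a separate shift constant — which is what makes `SH_L1E_MMIO_GFN_SHIFT` droppable. A sketch paraphrased from Xen's `xen/include/xen/lib.h`, with an illustrative (not Xen's actual) PTE field mask:

```c
#include <stdint.h>

/* Divide/multiply by the mask's lowest set bit to shift the field
 * down/up, then mask; no explicit shift constant needed. */
#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))
#define MASK_INSR(v, m) (((v) * ((m) & -(m))) & (m))

/* Hypothetical mask for a GFN-style field in bits 12..51 of a PTE
 * (the real SH_L1E_MMIO_* layout lives in shadow/types.h). */
#define PTE_GFN_MASK 0x000ffffffffff000ULL

static uint64_t pte_set_gfn(uint64_t pte, uint64_t gfn)
{
    return (pte & ~PTE_GFN_MASK) | MASK_INSR(gfn, PTE_GFN_MASK);
}

static uint64_t pte_get_gfn(uint64_t pte)
{
    return MASK_EXTR(pte, PTE_GFN_MASK);
}
```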
* Re: [PATCH 5/6] x86/shadow: Clean up the MMIO fastpath helpers
2018-08-15 18:34 ` [PATCH 5/6] x86/shadow: Clean up the MMIO fastpath helpers Andrew Cooper
@ 2018-08-16 16:16 ` Roger Pau Monné
2018-08-21 10:05 ` Wei Liu
1 sibling, 0 replies; 20+ messages in thread
From: Roger Pau Monné @ 2018-08-16 16:16 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Wei Liu, Tim Deegan, Jan Beulich, Xen-devel
On Wed, Aug 15, 2018 at 07:34:36PM +0100, Andrew Cooper wrote:
> Use bool when appropriate, remove extraneous brackets and fix up comment
> style.
>
> No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Just one nit...
> -static inline u32 sh_l1e_mmio_get_flags(shadow_l1e_t sl1e)
> +static inline uint32_t sh_l1e_mmio_get_flags(shadow_l1e_t sl1e)
Is there any reason to use uint32_t instead of unsigned int?
Thanks, Roger.
* Re: [PATCH 6/6] x86/shadow: Use mfn_t in shadow_track_dirty_vram()
2018-08-15 18:34 ` [PATCH 6/6] x86/shadow: Use mfn_t in shadow_track_dirty_vram() Andrew Cooper
@ 2018-08-16 16:18 ` Roger Pau Monné
2018-08-21 10:05 ` Wei Liu
1 sibling, 0 replies; 20+ messages in thread
From: Roger Pau Monné @ 2018-08-16 16:18 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Wei Liu, Tim Deegan, Jan Beulich, Xen-devel
On Wed, Aug 15, 2018 at 07:34:37PM +0100, Andrew Cooper wrote:
> ... as the only user of sl1mfn would prefer it that way.
>
> No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
* Re: [PATCH 0/6] x86/mm: Minor non-functional cleanup
2018-08-15 18:34 [PATCH 0/6] x86/mm: Minor non-functional cleanup Andrew Cooper
` (5 preceding siblings ...)
2018-08-15 18:34 ` [PATCH 6/6] x86/shadow: Use mfn_t in shadow_track_dirty_vram() Andrew Cooper
@ 2018-08-16 19:00 ` Tim Deegan
6 siblings, 0 replies; 20+ messages in thread
From: Tim Deegan @ 2018-08-16 19:00 UTC (permalink / raw)
To: Andrew Cooper
Cc: George Dunlap, Roger Pau Monné, Wei Liu, Jan Beulich,
Xen-devel
At 19:34 +0100 on 15 Aug (1534361671), Andrew Cooper wrote:
> Minor cleanup which has accumulated while doing other work. No functional
> change anywhere.
>
> Andrew Cooper (6):
> x86/mm: Use mfn_eq()/mfn_add() rather than opencoded variations
> x86/shadow: Use more appropriate conversion functions
> x86/shadow: Switch shadow_domain.has_fast_mmio_entries to bool
> x86/shadow: Use MASK_* helpers for the MMIO fastpath PTE manipulation
> x86/shadow: Clean up the MMIO fastpath helpers
> x86/shadow: Use mfn_t in shadow_track_dirty_vram()
Reviewed-by: Tim Deegan <tim@xen.org>
(with the one correction that Roger asked for in patch 1/6)
* Re: [PATCH 1/6] x86/mm: Use mfn_eq()/mfn_add() rather than opencoded variations
2018-08-15 18:34 ` [PATCH 1/6] x86/mm: Use mfn_eq()/mfn_add() rather than opencoded variations Andrew Cooper
2018-08-16 16:07 ` Roger Pau Monné
@ 2018-08-17 12:54 ` Jan Beulich
1 sibling, 0 replies; 20+ messages in thread
From: Jan Beulich @ 2018-08-17 12:54 UTC (permalink / raw)
To: Andrew Cooper
Cc: George Dunlap, Tim Deegan, Xen-devel, Wei Liu, Roger Pau Monne
>>> On 15.08.18 at 20:34, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -729,7 +729,8 @@ hap_write_p2m_entry(struct domain *d, unsigned long gfn, l1_pgentry_t *p,
> * unless the only change is an increase in access rights. */
> mfn_t omfn = l1e_get_mfn(*p);
> mfn_t nmfn = l1e_get_mfn(new);
> - flush_nestedp2m = !( mfn_x(omfn) == mfn_x(nmfn)
> +
> + flush_nestedp2m = !(mfn_eq(omfn, nmfn)
> && perms_strictly_increased(old_flags, l1e_get_flags(new)) );
Seeing that you strip the stray leading space, could you strip the stray
trailing one as well, and move the && to its proper place?
With the one previously pointed out issue fixed
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan
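For context on the helpers named in the subject line: they replace open-coded comparisons and arithmetic on unwrapped frame numbers. A minimal sketch with simplified types (Xen's real versions are generated by the TYPE_SAFE machinery in `xen/include/xen/mm.h`):

```c
#include <stdbool.h>

typedef struct { unsigned long mfn; } mfn_t;

static unsigned long mfn_x(mfn_t m) { return m.mfn; }
static mfn_t _mfn(unsigned long m) { return (mfn_t){ m }; }

/* Replaces open-coded mfn_x(a) == mfn_x(b) comparisons, as in the
 * hap_write_p2m_entry() hunk quoted above. */
static bool mfn_eq(mfn_t a, mfn_t b) { return mfn_x(a) == mfn_x(b); }

/* Replaces open-coded _mfn(mfn_x(m) + n) arithmetic. */
static mfn_t mfn_add(mfn_t m, unsigned long n) { return _mfn(mfn_x(m) + n); }
```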
* Re: [PATCH 3/6] x86/shadow: Switch shadow_domain.has_fast_mmio_entries to bool
2018-08-15 18:34 ` [PATCH 3/6] x86/shadow: Switch shadow_domain.has_fast_mmio_entries to bool Andrew Cooper
2018-08-16 16:09 ` Roger Pau Monné
@ 2018-08-17 12:57 ` Jan Beulich
1 sibling, 0 replies; 20+ messages in thread
From: Jan Beulich @ 2018-08-17 12:57 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Tim Deegan, Xen-devel, Wei Liu, Roger Pau Monne
>>> On 15.08.18 at 20:34, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -563,8 +563,7 @@ _sh_propagate(struct vcpu *v,
> {
> /* Guest l1e maps emulated MMIO space */
> *sp = sh_l1e_mmio(target_gfn, gflags);
> - if ( !d->arch.paging.shadow.has_fast_mmio_entries )
> - d->arch.paging.shadow.has_fast_mmio_entries = 1;
> + d->arch.paging.shadow.has_fast_mmio_entries = true;
Are you sure the if() isn't intentionally there to avoid dirtying a
cacheline when the value is already as intended?
Jan
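Jan's point is about store avoidance: the guarded form skips the write when the flag is already set, so a hot path never dirties the cache line holding the struct. A hedged sketch of the two patterns under comparison (struct simplified; field name from the patch):

```c
#include <stdbool.h>

struct shadow_domain {
    bool has_fast_mmio_entries;
};

/* Form removed by the patch: test before store, so a repeat caller
 * finds the flag already set and performs no write at all. */
static void set_flag_guarded(struct shadow_domain *sd)
{
    if ( !sd->has_fast_mmio_entries )
        sd->has_fast_mmio_entries = true;
}

/* Form introduced by the patch: unconditional store; shorter code,
 * but every call dirties the containing cache line. */
static void set_flag_unconditional(struct shadow_domain *sd)
{
    sd->has_fast_mmio_entries = true;
}
```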
* Re: [PATCH 2/6] x86/shadow: Use more appropriate conversion functions
2018-08-15 18:34 ` [PATCH 2/6] x86/shadow: Use more appropriate conversion functions Andrew Cooper
2018-08-16 16:08 ` Roger Pau Monné
@ 2018-08-21 10:02 ` Wei Liu
1 sibling, 0 replies; 20+ messages in thread
From: Wei Liu @ 2018-08-21 10:02 UTC (permalink / raw)
To: Andrew Cooper
Cc: Wei Liu, Roger Pau Monné, Tim Deegan, Jan Beulich, Xen-devel
On Wed, Aug 15, 2018 at 07:34:33PM +0100, Andrew Cooper wrote:
> Replace pfn_to_paddr(mfn_x(...)) with mfn_to_maddr(), and replace an opencoded
> gfn_to_gaddr().
>
> No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
* Re: [PATCH 4/6] x86/shadow: Use MASK_* helpers for the MMIO fastpath PTE manipulation
2018-08-15 18:34 ` [PATCH 4/6] x86/shadow: Use MASK_* helpers for the MMIO fastpath PTE manipulation Andrew Cooper
2018-08-16 16:12 ` Roger Pau Monné
@ 2018-08-21 10:04 ` Wei Liu
1 sibling, 0 replies; 20+ messages in thread
From: Wei Liu @ 2018-08-21 10:04 UTC (permalink / raw)
To: Andrew Cooper
Cc: Wei Liu, Roger Pau Monné, Tim Deegan, Jan Beulich, Xen-devel
On Wed, Aug 15, 2018 at 07:34:35PM +0100, Andrew Cooper wrote:
> Drop the now-unused SH_L1E_MMIO_GFN_SHIFT definition.
>
> No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
* Re: [PATCH 5/6] x86/shadow: Clean up the MMIO fastpath helpers
2018-08-15 18:34 ` [PATCH 5/6] x86/shadow: Clean up the MMIO fastpath helpers Andrew Cooper
2018-08-16 16:16 ` Roger Pau Monné
@ 2018-08-21 10:05 ` Wei Liu
1 sibling, 0 replies; 20+ messages in thread
From: Wei Liu @ 2018-08-21 10:05 UTC (permalink / raw)
To: Andrew Cooper
Cc: Wei Liu, Roger Pau Monné, Tim Deegan, Jan Beulich, Xen-devel
On Wed, Aug 15, 2018 at 07:34:36PM +0100, Andrew Cooper wrote:
> Use bool when appropriate, remove extraneous brackets and fix up comment
> style.
>
> No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
* Re: [PATCH 6/6] x86/shadow: Use mfn_t in shadow_track_dirty_vram()
2018-08-15 18:34 ` [PATCH 6/6] x86/shadow: Use mfn_t in shadow_track_dirty_vram() Andrew Cooper
2018-08-16 16:18 ` Roger Pau Monné
@ 2018-08-21 10:05 ` Wei Liu
1 sibling, 0 replies; 20+ messages in thread
From: Wei Liu @ 2018-08-21 10:05 UTC (permalink / raw)
To: Andrew Cooper
Cc: Wei Liu, Roger Pau Monné, Tim Deegan, Jan Beulich, Xen-devel
On Wed, Aug 15, 2018 at 07:34:37PM +0100, Andrew Cooper wrote:
> ... as the only user of sl1mfn would prefer it that way.
>
> No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>