From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Julien Grall' <julien.grall@arm.com>,
"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Cc: Kevin Tian <kevin.tian@intel.com>,
Stefano Stabellini <sstabellini@kernel.org>,
Wei Liu <wei.liu2@citrix.com>,
Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
Razvan Cojocaru <rcojocaru@bitdefender.com>,
Jun Nakajima <jun.nakajima@intel.com>,
Andrew Cooper <Andrew.Cooper3@citrix.com>,
"Tim (Xen.org)" <tim@xen.org>,
George Dunlap <George.Dunlap@citrix.com>,
Tamas K Lengyel <tamas@tklengyel.com>,
Jan Beulich <jbeulich@suse.com>,
Shane Wang <shane.wang@intel.com>,
Ian Jackson <Ian.Jackson@citrix.com>,
Boris Ostrovsky <boris.ostrovsky@oracle.com>,
Gang Wei <gang.wei@intel.com>
Subject: Re: [PATCH v4 16/16] xen: Convert page_to_mfn and mfn_to_page to use typesafe MFN
Date: Wed, 21 Feb 2018 14:59:30 +0000
Message-ID: <ea8e0b0c23f44a679bd690b3c175fede@AMSPEX02CL03.citrite.net>
In-Reply-To: <20180221140259.29360-17-julien.grall@arm.com>
> -----Original Message-----
> From: Julien Grall [mailto:julien.grall@arm.com]
> Sent: 21 February 2018 14:03
> To: xen-devel@lists.xen.org
> Cc: Julien Grall <julien.grall@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Andrew Cooper <Andrew.Cooper3@citrix.com>;
> George Dunlap <George.Dunlap@citrix.com>; Ian Jackson
> <Ian.Jackson@citrix.com>; Jan Beulich <jbeulich@suse.com>; Konrad
> Rzeszutek Wilk <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>;
> Wei Liu <wei.liu2@citrix.com>; Razvan Cojocaru
> <rcojocaru@bitdefender.com>; Tamas K Lengyel <tamas@tklengyel.com>;
> Paul Durrant <Paul.Durrant@citrix.com>; Boris Ostrovsky
> <boris.ostrovsky@oracle.com>; Suravee Suthikulpanit
> <suravee.suthikulpanit@amd.com>; Jun Nakajima
> <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>; George
> Dunlap <George.Dunlap@citrix.com>; Gang Wei <gang.wei@intel.com>;
> Shane Wang <shane.wang@intel.com>
> Subject: [PATCH v4 16/16] xen: Convert page_to_mfn and mfn_to_page to
> use typesafe MFN
>
> Most of the users of page_to_mfn and mfn_to_page either override the
> macros to make them work with mfn_t or use mfn_x/_mfn because the rest
> of the function uses mfn_t.
>
> So make page_to_mfn and mfn_to_page return mfn_t by default. The __*
> versions are now dropped, as this patch converts all the remaining
> non-typesafe callers.
>
> Only reasonable clean-ups are done in this patch. The rest will use
> _mfn/mfn_x for the time being.
>
> Lastly, domain_page_map_to_mfn is also converted to use mfn_t, given
> that most of the callers are now switched to
> _mfn(domain_page_map_to_mfn(...)).
>
> Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
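
For anyone following the series who hasn't met the typesafe wrappers yet:
mfn_t is a single-member struct, so mixing up MFNs with GFNs/PFNs becomes a
compile-time error rather than a latent bug. A minimal sketch of what
TYPE_SAFE(unsigned long, mfn) expands to (paraphrased from
xen/include/xen/typesafe.h, not verbatim):

    /* An MFN is no longer silently interchangeable with unsigned long. */
    typedef struct { unsigned long mfn; } mfn_t;

    /* _mfn() wraps a raw frame number; mfn_x() unwraps it again. */
    static inline mfn_t _mfn(unsigned long n) { return (mfn_t) { n }; }
    static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }

With page_to_mfn() returning mfn_t directly, the _mfn(page_to_mfn(...)) and
mfn_to_page(mfn_x(...)) round-trips all collapse, which accounts for most of
the churn below.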
>
> ---
>
> Andrew suggested dropping IS_VALID_PAGE in xen/tmem_xen.h. His comment
> was:
>
> "/sigh This is tautological. The definition of a "valid mfn" in this
> case is one for which we have a frametable entry, and by having a struct
> page_info in our hands, this is by definition true (unless you have a
> wild pointer, at which point your bug is elsewhere).
>
> IS_VALID_PAGE() is only ever used in assertions and never usefully, so
> instead I would remove it entirely rather than trying to fix it up."
>
> I can remove the function in a separate patch at the beginning of the
> series if Konrad (TMEM maintainer) is happy with that.
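
For context, the macro under discussion is just a round-trip through the
frame table, something like this (quoting from memory, may not be verbatim):

    /* xen/include/xen/tmem_xen.h */
    #define IS_VALID_PAGE(_pi)  mfn_valid(page_to_mfn(_pi))

i.e. with a struct page_info already in hand it can only fail on a wild
pointer, so the assertion indeed buys nothing.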
>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <George.Dunlap@eu.citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Tim Deegan <tim@xen.org>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
> Cc: Tamas K Lengyel <tamas@tklengyel.com>
> Cc: Paul Durrant <paul.durrant@citrix.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> Cc: Jun Nakajima <jun.nakajima@intel.com>
> Cc: Kevin Tian <kevin.tian@intel.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Gang Wei <gang.wei@intel.com>
> Cc: Shane Wang <shane.wang@intel.com>
>
> Changes in v4:
> - Drop __page_to_mfn and __mfn_to_page. Reword the commit title/message to reflect that.
>
> Changes in v3:
> - Rebase on the latest staging and fix some conflicts. Tags haven't been retained.
> - Switch the printf format to PRI_mfn
>
> Changes in v2:
> - Some parts have been moved to separate patches
> - Remove one spurious comment
> - Convert domain_page_to_mfn to use mfn_t
> ---
> xen/arch/arm/domain_build.c | 2 --
> xen/arch/arm/kernel.c | 2 +-
> xen/arch/arm/mem_access.c | 2 +-
> xen/arch/arm/mm.c | 8 ++++----
> xen/arch/arm/p2m.c | 10 ++--------
> xen/arch/x86/cpu/vpmu.c | 4 ++--
> xen/arch/x86/domain.c | 21 +++++++++++----------
> xen/arch/x86/domain_page.c | 6 +++---
> xen/arch/x86/hvm/dm.c | 2 +-
> xen/arch/x86/hvm/dom0_build.c | 6 +++---
> xen/arch/x86/hvm/emulate.c | 6 +++---
> xen/arch/x86/hvm/hvm.c | 12 ++++++------
> xen/arch/x86/hvm/ioreq.c | 4 ++--
> xen/arch/x86/hvm/stdvga.c | 2 +-
> xen/arch/x86/hvm/svm/svm.c | 4 ++--
> xen/arch/x86/hvm/viridian.c | 6 +++---
> xen/arch/x86/hvm/vmx/vmcs.c | 2 +-
> xen/arch/x86/hvm/vmx/vmx.c | 10 +++++-----
> xen/arch/x86/hvm/vmx/vvmx.c | 6 +++---
> xen/arch/x86/mm.c | 4 ----
> xen/arch/x86/mm/guest_walk.c | 6 +++---
> xen/arch/x86/mm/hap/guest_walk.c | 2 +-
> xen/arch/x86/mm/hap/hap.c | 6 ------
> xen/arch/x86/mm/hap/nested_ept.c | 2 +-
> xen/arch/x86/mm/mem_sharing.c | 5 -----
> xen/arch/x86/mm/p2m-ept.c | 8 ++++----
> xen/arch/x86/mm/p2m-pod.c | 6 ------
> xen/arch/x86/mm/p2m.c | 6 ------
> xen/arch/x86/mm/paging.c | 6 ------
> xen/arch/x86/mm/shadow/private.h | 16 ++--------------
> xen/arch/x86/numa.c | 2 +-
> xen/arch/x86/physdev.c | 2 +-
> xen/arch/x86/pv/callback.c | 6 ------
> xen/arch/x86/pv/descriptor-tables.c | 6 ------
> xen/arch/x86/pv/dom0_build.c | 14 +++++++-------
> xen/arch/x86/pv/domain.c | 6 ------
> xen/arch/x86/pv/emul-gate-op.c | 6 ------
> xen/arch/x86/pv/emul-priv-op.c | 10 ----------
> xen/arch/x86/pv/grant_table.c | 6 ------
> xen/arch/x86/pv/ro-page-fault.c | 6 ------
> xen/arch/x86/pv/shim.c | 4 +---
> xen/arch/x86/smpboot.c | 6 ------
> xen/arch/x86/tboot.c | 4 ++--
> xen/arch/x86/traps.c | 4 ++--
> xen/arch/x86/x86_64/mm.c | 6 ------
> xen/common/domain.c | 4 ++--
> xen/common/grant_table.c | 6 ------
> xen/common/kimage.c | 6 ------
> xen/common/memory.c | 6 ------
> xen/common/page_alloc.c | 6 ------
> xen/common/tmem.c | 2 +-
> xen/common/tmem_xen.c | 4 ----
> xen/common/trace.c | 4 ++--
> xen/common/vmap.c | 6 +-----
> xen/common/xenoprof.c | 2 --
> xen/drivers/passthrough/amd/iommu_map.c | 12 ++++++------
> xen/drivers/passthrough/iommu.c | 2 +-
> xen/drivers/passthrough/x86/iommu.c | 2 +-
> xen/include/asm-arm/mm.h | 20 ++++++++++----------
> xen/include/asm-arm/p2m.h | 4 ++--
> xen/include/asm-x86/mm.h | 12 ++++++------
> xen/include/asm-x86/p2m.h | 2 +-
> xen/include/asm-x86/page.h | 32 +++++++++++++++-----------------
> xen/include/xen/domain_page.h | 8 ++++----
> xen/include/xen/mm.h | 5 -----
> xen/include/xen/tmem_xen.h | 2 +-
> 66 files changed, 132 insertions(+), 285 deletions(-)
>
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 155c952349..fd6c2482de 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -49,8 +49,6 @@ struct map_range_data
> /* Override macros from asm/page.h to make them work with mfn_t */
> #undef virt_to_mfn
> #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
>
> //#define DEBUG_11_ALLOCATION
> #ifdef DEBUG_11_ALLOCATION
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index 2fb0b9684d..8fdfd91543 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -286,7 +286,7 @@ static __init int kernel_decompress(struct bootmodule *mod)
> iounmap(input);
> return -ENOMEM;
> }
> - mfn = _mfn(page_to_mfn(pages));
> + mfn = page_to_mfn(pages);
> output = __vmap(&mfn, 1 << kernel_order_out, 1, 1, PAGE_HYPERVISOR,
> VMAP_DEFAULT);
>
> rc = perform_gunzip(output, input, size);
> diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c
> index 0f2cbb81d3..112e291cba 100644
> --- a/xen/arch/arm/mem_access.c
> +++ b/xen/arch/arm/mem_access.c
> @@ -210,7 +210,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
> if ( t != p2m_ram_rw )
> goto err;
>
> - page = mfn_to_page(mfn_x(mfn));
> + page = mfn_to_page(mfn);
>
> if ( unlikely(!get_page(page, v->domain)) )
> page = NULL;
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index db74466a16..510a5a2050 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -477,7 +477,7 @@ void unmap_domain_page(const void *va)
> local_irq_restore(flags);
> }
>
> -unsigned long domain_page_map_to_mfn(const void *ptr)
> +mfn_t domain_page_map_to_mfn(const void *ptr)
> {
> unsigned long va = (unsigned long)ptr;
> lpae_t *map = this_cpu(xen_dommap);
> @@ -485,12 +485,12 @@ unsigned long domain_page_map_to_mfn(const void *ptr)
> unsigned long offset = (va>>THIRD_SHIFT) & LPAE_ENTRY_MASK;
>
> if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
> - return __virt_to_mfn(va);
> + return virt_to_mfn(va);
>
> ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
> ASSERT(map[slot].pt.avail != 0);
>
> - return map[slot].pt.base + offset;
> + return _mfn(map[slot].pt.base + offset);
> }
> #endif
>
> @@ -1287,7 +1287,7 @@ int xenmem_add_to_physmap_one(
> return -EINVAL;
> }
>
> - mfn = _mfn(page_to_mfn(page));
> + mfn = page_to_mfn(page);
> t = p2m_map_foreign;
>
> rcu_unlock_domain(od);
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 65e8b9c6ea..8b16c8322d 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -37,12 +37,6 @@ static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>
> #define P2M_ROOT_PAGES (1<<P2M_ROOT_ORDER)
>
> -/* Override macros from asm/mm.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
> unsigned int __read_mostly p2m_ipa_bits;
>
> /* Helpers to lookup the properties of each level */
> @@ -90,8 +84,8 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
>
> printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
>
> - printk("P2M @ %p mfn:0x%lx\n",
> - p2m->root, __page_to_mfn(p2m->root));
> + printk("P2M @ %p mfn:%#"PRI_mfn"\n",
> + p2m->root, mfn_x(page_to_mfn(p2m->root)));
>
> dump_pt_walk(page_to_maddr(p2m->root), addr,
> P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
> diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
> index 7baf4614be..b978e05613 100644
> --- a/xen/arch/x86/cpu/vpmu.c
> +++ b/xen/arch/x86/cpu/vpmu.c
> @@ -653,7 +653,7 @@ static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
> {
> struct vcpu *v;
> struct vpmu_struct *vpmu;
> - uint64_t mfn;
> + mfn_t mfn;
> void *xenpmu_data;
>
> if ( (params->vcpu >= d->max_vcpus) || (d->vcpu[params->vcpu] == NULL) )
> @@ -675,7 +675,7 @@ static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
> if ( xenpmu_data )
> {
> mfn = domain_page_map_to_mfn(xenpmu_data);
> - ASSERT(mfn_valid(_mfn(mfn)));
> + ASSERT(mfn_valid(mfn));
> unmap_domain_page_global(xenpmu_data);
> put_page_and_type(mfn_to_page(mfn));
> }
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index f93327b0a2..44ba52b7ba 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -195,7 +195,7 @@ void dump_pageframe_info(struct domain *d)
> }
> }
> printk(" DomPage %p: caf=%08lx, taf=%" PRtype_info "\n",
> - _p(page_to_mfn(page)),
> + _p(mfn_x(page_to_mfn(page))),
> page->count_info, page->u.inuse.type_info);
> }
> spin_unlock(&d->page_alloc_lock);
> @@ -208,7 +208,7 @@ void dump_pageframe_info(struct domain *d)
> page_list_for_each ( page, &d->xenpage_list )
> {
> printk(" XenPage %p: caf=%08lx, taf=%" PRtype_info "\n",
> - _p(page_to_mfn(page)),
> + _p(mfn_x(page_to_mfn(page))),
> page->count_info, page->u.inuse.type_info);
> }
> spin_unlock(&d->page_alloc_lock);
> @@ -635,7 +635,8 @@ int arch_domain_soft_reset(struct domain *d)
> struct page_info *page = virt_to_page(d->shared_info), *new_page;
> int ret = 0;
> struct domain *owner;
> - unsigned long mfn, gfn;
> + mfn_t mfn;
> + unsigned long gfn;
> p2m_type_t p2mt;
> unsigned int i;
>
> @@ -669,7 +670,7 @@ int arch_domain_soft_reset(struct domain *d)
> ASSERT( owner == d );
>
> mfn = page_to_mfn(page);
> - gfn = mfn_to_gmfn(d, mfn);
> + gfn = mfn_to_gmfn(d, mfn_x(mfn));
>
> /*
> * gfn == INVALID_GFN indicates that the shared_info page was never mapped
> @@ -678,7 +679,7 @@ int arch_domain_soft_reset(struct domain *d)
> if ( gfn == gfn_x(INVALID_GFN) )
> goto exit_put_page;
>
> - if ( mfn_x(get_gfn_query(d, gfn, &p2mt)) != mfn )
> + if ( !mfn_eq(get_gfn_query(d, gfn, &p2mt), mfn) )
> {
> printk(XENLOG_G_ERR "Failed to get Dom%d's shared_info GFN
> (%lx)\n",
> d->domain_id, gfn);
> @@ -695,7 +696,7 @@ int arch_domain_soft_reset(struct domain *d)
> goto exit_put_gfn;
> }
>
> - ret = guest_physmap_remove_page(d, _gfn(gfn), _mfn(mfn), PAGE_ORDER_4K);
> + ret = guest_physmap_remove_page(d, _gfn(gfn), mfn, PAGE_ORDER_4K);
> if ( ret )
> {
> printk(XENLOG_G_ERR "Failed to remove Dom%d's shared_info frame
> %lx\n",
> @@ -704,7 +705,7 @@ int arch_domain_soft_reset(struct domain *d)
> goto exit_put_gfn;
> }
>
> - ret = guest_physmap_add_page(d, _gfn(gfn), _mfn(page_to_mfn(new_page)),
> + ret = guest_physmap_add_page(d, _gfn(gfn), page_to_mfn(new_page),
> PAGE_ORDER_4K);
> if ( ret )
> {
> @@ -1002,7 +1003,7 @@ int arch_set_info_guest(
> {
> if ( (page->u.inuse.type_info & PGT_type_mask) ==
> PGT_l4_page_table )
> - done = !fill_ro_mpt(_mfn(page_to_mfn(page)));
> + done = !fill_ro_mpt(page_to_mfn(page));
>
> page_unlock(page);
> }
> @@ -1131,7 +1132,7 @@ int arch_set_info_guest(
> l4_pgentry_t *l4tab;
>
> l4tab = map_domain_page(pagetable_get_mfn(v->arch.guest_table));
> - *l4tab = l4e_from_pfn(page_to_mfn(cr3_page),
> + *l4tab = l4e_from_mfn(page_to_mfn(cr3_page),
> _PAGE_PRESENT|_PAGE_RW|_PAGE_USER|_PAGE_ACCESSED);
> unmap_domain_page(l4tab);
> }
> @@ -1997,7 +1998,7 @@ int domain_relinquish_resources(struct domain *d)
> if ( d->arch.pirq_eoi_map != NULL )
> {
> unmap_domain_page_global(d->arch.pirq_eoi_map);
> - put_page_and_type(mfn_to_page(d->arch.pirq_eoi_map_mfn));
> + put_page_and_type(mfn_to_page(_mfn(d->arch.pirq_eoi_map_mfn)));
> d->arch.pirq_eoi_map = NULL;
> d->arch.auto_unmask = 0;
> }
> diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
> index 3432a854dd..88046b39c9 100644
> --- a/xen/arch/x86/domain_page.c
> +++ b/xen/arch/x86/domain_page.c
> @@ -331,13 +331,13 @@ void unmap_domain_page_global(const void *ptr)
> }
>
> /* Translate a map-domain-page'd address to the underlying MFN */
> -unsigned long domain_page_map_to_mfn(const void *ptr)
> +mfn_t domain_page_map_to_mfn(const void *ptr)
> {
> unsigned long va = (unsigned long)ptr;
> const l1_pgentry_t *pl1e;
>
> if ( va >= DIRECTMAP_VIRT_START )
> - return virt_to_mfn(ptr);
> + return _mfn(virt_to_mfn(ptr));
>
> if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
> {
> @@ -350,5 +350,5 @@ unsigned long domain_page_map_to_mfn(const void *ptr)
> pl1e = &__linear_l1_table[l1_linear_offset(va)];
> }
>
> - return l1e_get_pfn(*pl1e);
> + return l1e_get_mfn(*pl1e);
> }
> diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
> index 7788577a73..cf1e600998 100644
> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -193,7 +193,7 @@ static int modified_memory(struct domain *d,
> * These are most probably not page tables any more
> * don't take a long time and don't die either.
> */
> - sh_remove_shadows(d, _mfn(page_to_mfn(page)), 1, 0);
> + sh_remove_shadows(d, page_to_mfn(page), 1, 0);
> put_page(page);
> }
> }
> diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> index afebaec70b..717ffff584 100644
> --- a/xen/arch/x86/hvm/dom0_build.c
> +++ b/xen/arch/x86/hvm/dom0_build.c
> @@ -119,7 +119,7 @@ static int __init pvh_populate_memory_range(struct domain *d,
> continue;
> }
>
> - rc = guest_physmap_add_page(d, _gfn(start), _mfn(page_to_mfn(page)),
> + rc = guest_physmap_add_page(d, _gfn(start), page_to_mfn(page),
> order);
> if ( rc != 0 )
> {
> @@ -269,7 +269,7 @@ static int __init pvh_setup_vmx_realmode_helpers(struct domain *d)
> }
> write_32bit_pse_identmap(ident_pt);
> unmap_domain_page(ident_pt);
> - put_page(mfn_to_page(mfn_x(mfn)));
> + put_page(mfn_to_page(mfn));
> d->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] = gaddr;
> if ( pvh_add_mem_range(d, gaddr, gaddr + PAGE_SIZE, E820_RESERVED) )
> printk("Unable to set identity page tables as reserved in the memory
> map\n");
> @@ -287,7 +287,7 @@ static void __init pvh_steal_low_ram(struct domain *d, unsigned long start,
>
> for ( mfn = start; mfn < start + nr_pages; mfn++ )
> {
> - struct page_info *pg = mfn_to_page(mfn);
> + struct page_info *pg = mfn_to_page(_mfn(mfn));
> int rc;
>
> rc = unshare_xen_page_with_guest(pg, dom_io);
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index 3b824553ab..fdd7177303 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -591,7 +591,7 @@ static void *hvmemul_map_linear_addr(
> goto unhandleable;
> }
>
> - *mfn++ = _mfn(page_to_mfn(page));
> + *mfn++ = page_to_mfn(page);
>
> if ( p2m_is_discard_write(p2mt) )
> {
> @@ -623,7 +623,7 @@ static void *hvmemul_map_linear_addr(
> out:
> /* Drop all held references. */
> while ( mfn-- > hvmemul_ctxt->mfn )
> - put_page(mfn_to_page(mfn_x(*mfn)));
> + put_page(mfn_to_page(*mfn));
>
> return err;
> }
> @@ -649,7 +649,7 @@ static void hvmemul_unmap_linear_addr(
> {
> ASSERT(mfn_valid(*mfn));
> paging_mark_dirty(currd, *mfn);
> - put_page(mfn_to_page(mfn_x(*mfn)));
> + put_page(mfn_to_page(*mfn));
>
> *mfn++ = _mfn(0); /* Clean slot for map()'s error checking. */
> }
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 91bc3e8b27..74de968315 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -2247,7 +2247,7 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer)
> v->arch.guest_table = pagetable_from_page(page);
>
> HVM_DBG_LOG(DBG_LEVEL_VMMU, "Update CR3 value = %lx, mfn = %lx",
> - v->arch.hvm_vcpu.guest_cr[3], page_to_mfn(page));
> + v->arch.hvm_vcpu.guest_cr[3], mfn_x(page_to_mfn(page)));
> }
> }
> else if ( !(value & X86_CR0_PG) && (old_value & X86_CR0_PG) )
> @@ -2624,7 +2624,7 @@ void *hvm_map_guest_frame_ro(unsigned long gfn, bool_t permanent)
>
> void hvm_unmap_guest_frame(void *p, bool_t permanent)
> {
> - unsigned long mfn;
> + mfn_t mfn;
> struct page_info *page;
>
> if ( !p )
> @@ -2645,7 +2645,7 @@ void hvm_unmap_guest_frame(void *p, bool_t permanent)
> list_for_each_entry(track, &d->arch.hvm_domain.write_map.list, list)
> if ( track->page == page )
> {
> - paging_mark_dirty(d, _mfn(mfn));
> + paging_mark_dirty(d, mfn);
> list_del(&track->list);
> xfree(track);
> break;
> @@ -2662,7 +2662,7 @@ void hvm_mapped_guest_frames_mark_dirty(struct domain *d)
>
> spin_lock(&d->arch.hvm_domain.write_map.lock);
> list_for_each_entry(track, &d->arch.hvm_domain.write_map.list, list)
> - paging_mark_dirty(d, _mfn(page_to_mfn(track->page)));
> + paging_mark_dirty(d, page_to_mfn(track->page));
> spin_unlock(&d->arch.hvm_domain.write_map.lock);
> }
>
> @@ -3236,8 +3236,8 @@ static enum hvm_translation_result __hvm_copy(
>
> if ( xchg(&lastpage, gfn_x(gfn)) != gfn_x(gfn) )
> dprintk(XENLOG_G_DEBUG,
> - "%pv attempted write to read-only gfn %#lx (mfn=%#lx)\n",
> - v, gfn_x(gfn), page_to_mfn(page));
> + "%pv attempted write to read-only gfn %#lx
> (mfn=%#"PRI_mfn")\n",
> + v, gfn_x(gfn), mfn_x(page_to_mfn(page)));
> }
> else
> {
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index 7e66965bcd..a1c2218fdc 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -268,7 +268,7 @@ static void hvm_remove_ioreq_gfn(
> struct domain *d, struct hvm_ioreq_page *iorp)
> {
> if ( guest_physmap_remove_page(d, _gfn(iorp->gfn),
> - _mfn(page_to_mfn(iorp->page)), 0) )
> + page_to_mfn(iorp->page), 0) )
> domain_crash(d);
> clear_page(iorp->va);
> }
> @@ -281,7 +281,7 @@ static int hvm_add_ioreq_gfn(
> clear_page(iorp->va);
>
> rc = guest_physmap_add_page(d, _gfn(iorp->gfn),
> - _mfn(page_to_mfn(iorp->page)), 0);
> + page_to_mfn(iorp->page), 0);
> if ( rc == 0 )
> paging_mark_pfn_dirty(d, _pfn(iorp->gfn));
>
> diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
> index 088fbdf8ce..925bab2438 100644
> --- a/xen/arch/x86/hvm/stdvga.c
> +++ b/xen/arch/x86/hvm/stdvga.c
> @@ -590,7 +590,7 @@ void stdvga_init(struct domain *d)
> if ( pg == NULL )
> break;
> s->vram_page[i] = pg;
> - clear_domain_page(_mfn(page_to_mfn(pg)));
> + clear_domain_page(page_to_mfn(pg));
> }
>
> if ( i == ARRAY_SIZE(s->vram_page) )
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 9f58afc2d8..30e951039e 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1543,7 +1543,7 @@ static int svm_cpu_up_prepare(unsigned int cpu)
> if ( !pg )
> goto err;
>
> - clear_domain_page(_mfn(page_to_mfn(pg)));
> + clear_domain_page(page_to_mfn(pg));
> *this_hsa = page_to_maddr(pg);
> }
>
> @@ -1553,7 +1553,7 @@ static int svm_cpu_up_prepare(unsigned int cpu)
> if ( !pg )
> goto err;
>
> - clear_domain_page(_mfn(page_to_mfn(pg)));
> + clear_domain_page(page_to_mfn(pg));
> *this_vmcb = page_to_maddr(pg);
> }
>
> diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
> index 70aab520bc..d6aa89d0b7 100644
> --- a/xen/arch/x86/hvm/viridian.c
> +++ b/xen/arch/x86/hvm/viridian.c
> @@ -354,7 +354,7 @@ static void enable_hypercall_page(struct domain *d)
> if ( page )
> put_page(page);
> gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
> - gmfn, page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
> + gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
> return;
> }
>
> @@ -414,7 +414,7 @@ static void initialize_vp_assist(struct vcpu *v)
>
> fail:
> gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n", gmfn,
> - page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
> + mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
> }
>
> static void teardown_vp_assist(struct vcpu *v)
> @@ -492,7 +492,7 @@ static void update_reference_tsc(struct domain *d, bool_t initialize)
> if ( page )
> put_page(page);
> gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
> - gmfn, page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
> + gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
> return;
> }
>
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index e7818caed0..a5a160746a 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -1433,7 +1433,7 @@ int vmx_vcpu_enable_pml(struct vcpu *v)
>
> vmx_vmcs_enter(v);
>
> - __vmwrite(PML_ADDRESS, page_to_mfn(v->arch.hvm_vmx.pml_pg) << PAGE_SHIFT);
> + __vmwrite(PML_ADDRESS, page_to_maddr(v->arch.hvm_vmx.pml_pg));
> __vmwrite(GUEST_PML_INDEX, NR_PML_ENTRIES - 1);
>
> v->arch.hvm_vmx.secondary_exec_control |=
> SECONDARY_EXEC_ENABLE_PML;
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 5cd689e823..cf7f7e1bb7 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2968,7 +2968,7 @@ gp_fault:
> static int vmx_alloc_vlapic_mapping(struct domain *d)
> {
> struct page_info *pg;
> - unsigned long mfn;
> + mfn_t mfn;
>
> if ( !cpu_has_vmx_virtualize_apic_accesses )
> return 0;
> @@ -2977,10 +2977,10 @@ static int vmx_alloc_vlapic_mapping(struct domain *d)
> if ( !pg )
> return -ENOMEM;
> mfn = page_to_mfn(pg);
> - clear_domain_page(_mfn(mfn));
> + clear_domain_page(mfn);
> share_xen_page_with_guest(pg, d, XENSHARE_writable);
> - d->arch.hvm_domain.vmx.apic_access_mfn = mfn;
> - set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), _mfn(mfn),
> + d->arch.hvm_domain.vmx.apic_access_mfn = mfn_x(mfn);
> + set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), mfn,
> PAGE_ORDER_4K, p2m_get_hostp2m(d)->default_access);
>
> return 0;
> @@ -2991,7 +2991,7 @@ static void vmx_free_vlapic_mapping(struct domain *d)
> unsigned long mfn = d->arch.hvm_domain.vmx.apic_access_mfn;
>
> if ( mfn != 0 )
> - free_shared_domheap_page(mfn_to_page(mfn));
> + free_shared_domheap_page(mfn_to_page(_mfn(mfn)));
> }
>
> static void vmx_install_vlapic_mapping(struct vcpu *v)
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index dfe97b9705..db92ae6660 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -84,7 +84,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
> }
> v->arch.hvm_vmx.vmread_bitmap = vmread_bitmap;
>
> - clear_domain_page(_mfn(page_to_mfn(vmread_bitmap)));
> + clear_domain_page(page_to_mfn(vmread_bitmap));
>
> vmwrite_bitmap = alloc_domheap_page(NULL, 0);
> if ( !vmwrite_bitmap )
> @@ -1726,7 +1726,7 @@ int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
> nvcpu->nv_vvmcx = vvmcx;
> nvcpu->nv_vvmcxaddr = gpa;
> v->arch.hvm_vmx.vmcs_shadow_maddr =
> - pfn_to_paddr(domain_page_map_to_mfn(vvmcx));
> + mfn_to_maddr(domain_page_map_to_mfn(vvmcx));
> }
> else
> {
> @@ -1812,7 +1812,7 @@ int nvmx_handle_vmclear(struct cpu_user_regs *regs)
> {
> if ( writable )
> clear_vvmcs_launched(&nvmx->launched_list,
> - domain_page_map_to_mfn(vvmcs));
> + mfn_x(domain_page_map_to_mfn(vvmcs)));
> else
> rc = VMFAIL_VALID;
> hvm_unmap_guest_frame(vvmcs, 0);
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index d33b6bfa9d..eeaf03eb49 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -131,10 +131,6 @@
> #include "pv/mm.h"
>
> /* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> #undef virt_to_mfn
> #define virt_to_mfn(v) _mfn(__virt_to_mfn(v))
>
> diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
> index 6055fec1ad..f67aeda3d0 100644
> --- a/xen/arch/x86/mm/guest_walk.c
> +++ b/xen/arch/x86/mm/guest_walk.c
> @@ -469,20 +469,20 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
> if ( l3p )
> {
> unmap_domain_page(l3p);
> - put_page(mfn_to_page(mfn_x(gw->l3mfn)));
> + put_page(mfn_to_page(gw->l3mfn));
> }
> #endif
> #if GUEST_PAGING_LEVELS >= 3
> if ( l2p )
> {
> unmap_domain_page(l2p);
> - put_page(mfn_to_page(mfn_x(gw->l2mfn)));
> + put_page(mfn_to_page(gw->l2mfn));
> }
> #endif
> if ( l1p )
> {
> unmap_domain_page(l1p);
> - put_page(mfn_to_page(mfn_x(gw->l1mfn)));
> + put_page(mfn_to_page(gw->l1mfn));
> }
>
> return walk_ok;
> diff --git a/xen/arch/x86/mm/hap/guest_walk.c b/xen/arch/x86/mm/hap/guest_walk.c
> index c550017ba4..cb3f9cebe7 100644
> --- a/xen/arch/x86/mm/hap/guest_walk.c
> +++ b/xen/arch/x86/mm/hap/guest_walk.c
> @@ -83,7 +83,7 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
> *pfec &= ~PFEC_page_present;
> goto out_tweak_pfec;
> }
> - top_mfn = _mfn(page_to_mfn(top_page));
> + top_mfn = page_to_mfn(top_page);
>
> /* Map the top-level table and call the tree-walker */
> ASSERT(mfn_valid(top_mfn));
> diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
> index 003c2d8896..370a490aad 100644
> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -42,12 +42,6 @@
>
> #include "private.h"
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
> -#undef page_to_mfn
> -#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
> -
> /************************************************/
> /* HAP VRAM TRACKING SUPPORT */
> /************************************************/
> diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
> index 14b1bb01e9..1738df69f6 100644
> --- a/xen/arch/x86/mm/hap/nested_ept.c
> +++ b/xen/arch/x86/mm/hap/nested_ept.c
> @@ -173,7 +173,7 @@ nept_walk_tables(struct vcpu *v, unsigned long l2ga, ept_walk_t *gw)
> goto map_err;
> gw->lxe[lvl] = lxp[ept_lvl_table_offset(l2ga, lvl)];
> unmap_domain_page(lxp);
> - put_page(mfn_to_page(mfn_x(lxmfn)));
> + put_page(mfn_to_page(lxmfn));
>
> if ( nept_non_present_check(gw->lxe[lvl]) )
> goto non_present;
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index 57f54c55c8..fad8a9df13 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -152,11 +152,6 @@ static inline shr_handle_t get_next_handle(void)
> #define mem_sharing_enabled(d) \
> (is_hvm_domain(d) && (d)->arch.hvm_domain.mem_sharing_enabled)
>
> -#undef mfn_to_page
> -#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
> -#undef page_to_mfn
> -#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
> -
> static atomic_t nr_saved_mfns = ATOMIC_INIT(0);
> static atomic_t nr_shared_mfns = ATOMIC_INIT(0);
>
> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> index 66dbb3e83a..14b593923b 100644
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -74,13 +74,13 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
> goto out;
>
> rc = -ESRCH;
> - fdom = page_get_owner(mfn_to_page(new.mfn));
> + fdom = page_get_owner(mfn_to_page(_mfn(new.mfn)));
> if ( fdom == NULL )
> goto out;
>
> /* get refcount on the page */
> rc = -EBUSY;
> - if ( !get_page(mfn_to_page(new.mfn), fdom) )
> + if ( !get_page(mfn_to_page(_mfn(new.mfn)), fdom) )
> goto out;
> }
> }
> @@ -91,7 +91,7 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
> write_atomic(&entryptr->epte, new.epte);
>
> if ( unlikely(oldmfn != mfn_x(INVALID_MFN)) )
> - put_page(mfn_to_page(oldmfn));
> + put_page(mfn_to_page(_mfn(oldmfn)));
>
> rc = 0;
>
> @@ -270,7 +270,7 @@ static void ept_free_entry(struct p2m_domain *p2m, ept_entry_t *ept_entry, int l
> }
>
> p2m_tlb_flush_sync(p2m);
> - p2m_free_ptp(p2m, mfn_to_page(ept_entry->mfn));
> + p2m_free_ptp(p2m, mfn_to_page(_mfn(ept_entry->mfn)));
> }
>
> static bool_t ept_split_super_page(struct p2m_domain *p2m,
> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> index fa13e07f7c..631e9aec33 100644
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -29,12 +29,6 @@
>
> #include "mm-locks.h"
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
> -#undef page_to_mfn
> -#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
> -
> #define superpage_aligned(_x) (((_x)&(SUPERPAGE_PAGES-1))==0)
>
> /* Enforce lock ordering when grabbing the "external" page_alloc lock */
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 48e50fb5d8..9ce0a5c9e1 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -47,12 +47,6 @@ bool_t __initdata opt_hap_1gb = 1, __initdata opt_hap_2mb = 1;
> boolean_param("hap_1gb", opt_hap_1gb);
> boolean_param("hap_2mb", opt_hap_2mb);
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
> -#undef page_to_mfn
> -#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
> -
> DEFINE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
>
> /* Init the datastructures for later use by the p2m code */
> diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
> index 8a658b9118..2b0445ffe9 100644
> --- a/xen/arch/x86/mm/paging.c
> +++ b/xen/arch/x86/mm/paging.c
> @@ -47,12 +47,6 @@
> /* Per-CPU variable for enforcing the lock ordering */
> DEFINE_PER_CPU(int, mm_lock_level);
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
> -#undef page_to_mfn
> -#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
> -
> /************************************************/
> /* LOG DIRTY SUPPORT */
> /************************************************/
> diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
> index 845541fe8a..ea0ad28c05 100644
> --- a/xen/arch/x86/mm/shadow/private.h
> +++ b/xen/arch/x86/mm/shadow/private.h
> @@ -315,7 +315,7 @@ static inline int page_is_out_of_sync(struct page_info *p)
>
> static inline int mfn_is_out_of_sync(mfn_t gmfn)
> {
> - return page_is_out_of_sync(mfn_to_page(mfn_x(gmfn)));
> + return page_is_out_of_sync(mfn_to_page(gmfn));
> }
>
> static inline int page_oos_may_write(struct page_info *p)
> @@ -326,7 +326,7 @@ static inline int page_oos_may_write(struct page_info *p)
>
> static inline int mfn_oos_may_write(mfn_t gmfn)
> {
> - return page_oos_may_write(mfn_to_page(mfn_x(gmfn)));
> + return page_oos_may_write(mfn_to_page(gmfn));
> }
> #endif /* (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC) */
>
> @@ -465,18 +465,6 @@ void sh_reset_l3_up_pointers(struct vcpu *v);
> * MFN/page-info handling
> */
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
> -#undef page_to_mfn
> -#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
> -
> -/* Override pagetable_t <-> struct page_info conversions to work with mfn_t */
> -#undef pagetable_get_page
> -#define pagetable_get_page(x) mfn_to_page(pagetable_get_mfn(x))
> -#undef pagetable_from_page
> -#define pagetable_from_page(pg) pagetable_from_mfn(page_to_mfn(pg))
> -
> #define backpointer(sp) _mfn(pdx_to_pfn((unsigned long)(sp)->v.sh.back))
> static inline unsigned long __backpointer(const struct page_info *sp)
> {
> diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
> index 4fc967f893..a87987da6f 100644
> --- a/xen/arch/x86/numa.c
> +++ b/xen/arch/x86/numa.c
> @@ -430,7 +430,7 @@ static void dump_numa(unsigned char key)
> spin_lock(&d->page_alloc_lock);
> page_list_for_each(page, &d->page_list)
> {
> - i = phys_to_nid((paddr_t)page_to_mfn(page) << PAGE_SHIFT);
> + i = phys_to_nid(page_to_maddr(page));
> page_num_node[i]++;
> }
> spin_unlock(&d->page_alloc_lock);
> diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
> index 380d36f6b9..7bfa0f23f0 100644
> --- a/xen/arch/x86/physdev.c
> +++ b/xen/arch/x86/physdev.c
> @@ -239,7 +239,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
> }
>
> if ( cmpxchg(&currd->arch.pirq_eoi_map_mfn,
> - 0, page_to_mfn(page)) != 0 )
> + 0, mfn_x(page_to_mfn(page))) != 0 )
> {
> put_page_and_type(page);
> ret = -EBUSY;
> diff --git a/xen/arch/x86/pv/callback.c b/xen/arch/x86/pv/callback.c
> index 97d8438600..5957cb5085 100644
> --- a/xen/arch/x86/pv/callback.c
> +++ b/xen/arch/x86/pv/callback.c
> @@ -31,12 +31,6 @@
>
> #include <public/callback.h>
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
> static int register_guest_nmi_callback(unsigned long address)
> {
> struct vcpu *curr = current;
> diff --git a/xen/arch/x86/pv/descriptor-tables.c b/xen/arch/x86/pv/descriptor-tables.c
> index b418bbb581..71bf92713e 100644
> --- a/xen/arch/x86/pv/descriptor-tables.c
> +++ b/xen/arch/x86/pv/descriptor-tables.c
> @@ -25,12 +25,6 @@
> #include <asm/p2m.h>
> #include <asm/pv/mm.h>
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
> /*
> * Flush the LDT, dropping any typerefs. Returns a boolean indicating whether
> * mappings have been removed (i.e. a TLB flush is needed).
> diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
> index 0bd2f1bf90..5b4325b87f 100644
> --- a/xen/arch/x86/pv/dom0_build.c
> +++ b/xen/arch/x86/pv/dom0_build.c
> @@ -64,7 +64,7 @@ static __init void mark_pv_pt_pages_rdonly(struct domain *d,
> for ( count = 0; count < nr_pt_pages; count++ )
> {
> l1e_remove_flags(*pl1e, _PAGE_RW);
> - page = mfn_to_page(l1e_get_pfn(*pl1e));
> + page = mfn_to_page(l1e_get_mfn(*pl1e));
>
> /* Read-only mapping + PGC_allocated + page-table page. */
> page->count_info = PGC_allocated | 3;
> @@ -496,7 +496,7 @@ int __init dom0_construct_pv(struct domain *d,
> page = alloc_domheap_pages(d, order, 0);
> if ( page == NULL )
> panic("Not enough RAM for domain 0 allocation");
> - alloc_spfn = page_to_mfn(page);
> + alloc_spfn = mfn_x(page_to_mfn(page));
> alloc_epfn = alloc_spfn + d->tot_pages;
>
> if ( initrd_len )
> @@ -524,12 +524,12 @@ int __init dom0_construct_pv(struct domain *d,
> mpt_alloc = (paddr_t)initrd->mod_start << PAGE_SHIFT;
> init_domheap_pages(mpt_alloc,
> mpt_alloc + PAGE_ALIGN(initrd_len));
> - initrd->mod_start = initrd_mfn = page_to_mfn(page);
> + initrd->mod_start = initrd_mfn = mfn_x(page_to_mfn(page));
> }
> else
> {
> while ( count-- )
> - if ( assign_pages(d, mfn_to_page(mfn++), 0, 0) )
> + if ( assign_pages(d, mfn_to_page(_mfn(mfn++)), 0, 0) )
> BUG();
> }
> initrd->mod_end = 0;
> @@ -661,7 +661,7 @@ int __init dom0_construct_pv(struct domain *d,
> L1_PROT : COMPAT_L1_PROT));
> l1tab++;
>
> - page = mfn_to_page(mfn);
> + page = mfn_to_page(_mfn(mfn));
> if ( !page->u.inuse.type_info &&
> !get_page_and_type(page, d, PGT_writable_page) )
> BUG();
> @@ -801,7 +801,7 @@ int __init dom0_construct_pv(struct domain *d,
> si->nr_p2m_frames = d->tot_pages - count;
> page_list_for_each ( page, &d->page_list )
> {
> - mfn = page_to_mfn(page);
> + mfn = mfn_x(page_to_mfn(page));
> BUG_ON(SHARED_M2P(get_gpfn_from_mfn(mfn)));
> if ( get_gpfn_from_mfn(mfn) >= count )
> {
> @@ -826,7 +826,7 @@ int __init dom0_construct_pv(struct domain *d,
> panic("Not enough RAM for DOM0 reservation");
> while ( pfn < d->tot_pages )
> {
> - mfn = page_to_mfn(page);
> + mfn = mfn_x(page_to_mfn(page));
> #ifndef NDEBUG
> #define pfn (nr_pages - 1 - (pfn - (alloc_epfn - alloc_spfn)))
> #endif
> diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
> index 2c784fb3cc..5565e69f44 100644
> --- a/xen/arch/x86/pv/domain.c
> +++ b/xen/arch/x86/pv/domain.c
> @@ -11,12 +11,6 @@
>
> #include <asm/pv/domain.h>
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
> static void noreturn continue_nonidle_domain(struct vcpu *v)
> {
> check_wakeup_from_wait();
> diff --git a/xen/arch/x86/pv/emul-gate-op.c b/xen/arch/x86/pv/emul-gate-op.c
> index 14ce95e26e..810c4f7d8c 100644
> --- a/xen/arch/x86/pv/emul-gate-op.c
> +++ b/xen/arch/x86/pv/emul-gate-op.c
> @@ -41,12 +41,6 @@
>
> #include "emulate.h"
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
> static int read_gate_descriptor(unsigned int gate_sel,
> const struct vcpu *v,
> unsigned int *sel,
> diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
> index 17aaf97f10..b44b2415a6 100644
> --- a/xen/arch/x86/pv/emul-priv-op.c
> +++ b/xen/arch/x86/pv/emul-priv-op.c
> @@ -43,16 +43,6 @@
> #include "emulate.h"
> #include "mm.h"
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
> -/***********************
> - * I/O emulation support
> - */
> -
> struct priv_op_ctxt {
> struct x86_emulate_ctxt ctxt;
> struct {
> diff --git a/xen/arch/x86/pv/grant_table.c b/xen/arch/x86/pv/grant_table.c
> index 458085e1b6..6b7d855c8a 100644
> --- a/xen/arch/x86/pv/grant_table.c
> +++ b/xen/arch/x86/pv/grant_table.c
> @@ -27,12 +27,6 @@
>
> #include "mm.h"
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
> static unsigned int grant_to_pte_flags(unsigned int grant_flags,
> unsigned int cache_flags)
> {
> diff --git a/xen/arch/x86/pv/ro-page-fault.c b/xen/arch/x86/pv/ro-page-fault.c
> index 7e0e7e8dfc..17a08345d1 100644
> --- a/xen/arch/x86/pv/ro-page-fault.c
> +++ b/xen/arch/x86/pv/ro-page-fault.c
> @@ -33,12 +33,6 @@
> #include "emulate.h"
> #include "mm.h"
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
> /*********************
> * Writable Pagetables
> */
> diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
> index 534965c92a..519278f78c 100644
> --- a/xen/arch/x86/pv/shim.c
> +++ b/xen/arch/x86/pv/shim.c
> @@ -37,8 +37,6 @@
>
> #include <compat/grant_table.h>
>
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> #undef virt_to_mfn
> #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
>
> @@ -848,7 +846,7 @@ static unsigned long batch_memory_op(unsigned int cmd, unsigned int order,
> set_xen_guest_handle(xmr.extent_start, pfns);
> page_list_for_each ( pg, list )
> {
> - pfns[xmr.nr_extents++] = page_to_mfn(pg);
> + pfns[xmr.nr_extents++] = mfn_x(page_to_mfn(pg));
> if ( xmr.nr_extents == ARRAY_SIZE(pfns) || !page_list_next(pg, list) )
> {
> long nr = xen_hypercall_memory_op(cmd, &xmr);
> diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
> index 52ad43e0ed..48c8cdb734 100644
> --- a/xen/arch/x86/smpboot.c
> +++ b/xen/arch/x86/smpboot.c
> @@ -48,12 +48,6 @@
> #include <mach_wakecpu.h>
> #include <smpboot_hooks.h>
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
> #define setup_trampoline() (bootsym_phys(trampoline_realmode_entry))
>
> unsigned long __read_mostly trampoline_phys;
> diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
> index 71e757c553..fb4616ae83 100644
> --- a/xen/arch/x86/tboot.c
> +++ b/xen/arch/x86/tboot.c
> @@ -184,7 +184,7 @@ static void update_pagetable_mac(vmac_ctx_t *ctx)
>
> for ( mfn = 0; mfn < max_page; mfn++ )
> {
> - struct page_info *page = mfn_to_page(mfn);
> + struct page_info *page = mfn_to_page(_mfn(mfn));
>
> if ( !mfn_valid(_mfn(mfn)) )
> continue;
> @@ -276,7 +276,7 @@ static void tboot_gen_xenheap_integrity(const uint8_t key[TB_KEY_SIZE],
> vmac_set_key((uint8_t *)key, &ctx);
> for ( mfn = 0; mfn < max_page; mfn++ )
> {
> - struct page_info *page = __mfn_to_page(mfn);
> + struct page_info *page = mfn_to_page(_mfn(mfn));
>
> if ( !mfn_valid(_mfn(mfn)) )
> continue;
> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> index 27190e0423..571796b123 100644
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -834,8 +834,8 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
> }
>
> gdprintk(XENLOG_WARNING,
> - "Bad GMFN %lx (MFN %lx) to MSR %08x\n",
> - gmfn, page ? page_to_mfn(page) : -1UL, base);
> + "Bad GMFN %lx (MFN %#"PRI_mfn") to MSR %08x\n",
> + gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN), base);
> return 0;
> }
>
> diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
> index 6b679882d6..ff8a6de23f 100644
> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -40,12 +40,6 @@ asm(".file \"" __FILE__ "\"");
> #include <asm/mem_sharing.h>
> #include <public/memory.h>
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -
> unsigned int __read_mostly m2p_compat_vstart = __HYPERVISOR_COMPAT_VIRT_START;
>
> l2_pgentry_t *compat_idle_pg_table_l2;
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index e1c003d71e..e2acfeff80 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -1217,7 +1217,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
> }
>
> v->vcpu_info = new_info;
> - v->vcpu_info_mfn = _mfn(page_to_mfn(page));
> + v->vcpu_info_mfn = page_to_mfn(page);
>
> /* Set new vcpu_info pointer /before/ setting pending flags. */
> smp_wmb();
> @@ -1250,7 +1250,7 @@ void unmap_vcpu_info(struct vcpu *v)
>
> vcpu_info_reset(v); /* NB: Clobbers v->vcpu_info_mfn */
>
> - put_page_and_type(mfn_to_page(mfn_x(mfn)));
> + put_page_and_type(mfn_to_page(mfn));
> }
>
> int default_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index e9a81b66be..959b7c64b2 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -40,12 +40,6 @@
> #include <xsm/xsm.h>
> #include <asm/flushtlb.h>
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -
> /* Per-domain grant information. */
> struct grant_table {
> /*
> diff --git a/xen/common/kimage.c b/xen/common/kimage.c
> index afd8292cc1..210241dfb7 100644
> --- a/xen/common/kimage.c
> +++ b/xen/common/kimage.c
> @@ -23,12 +23,6 @@
>
> #include <asm/page.h>
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
> /*
> * When kexec transitions to the new kernel there is a one-to-one
> * mapping between physical and virtual addresses. On processors
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index 93d856df02..4c6b36b297 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -33,12 +33,6 @@
> #include <asm/guest.h>
> #endif
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -
> struct memop_args {
> /* INPUT */
> struct domain *domain; /* Domain to be affected. */
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index b0db41feea..7fa847c9a1 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -151,12 +151,6 @@
> #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
> #endif
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -
> /*
> * Comma-separated list of hexadecimal page numbers containing bad bytes.
> * e.g. 'badpage=0x3f45,0x8a321'.
> diff --git a/xen/common/tmem.c b/xen/common/tmem.c
> index 324f42a6f9..c077f87e77 100644
> --- a/xen/common/tmem.c
> +++ b/xen/common/tmem.c
> @@ -243,7 +243,7 @@ static void tmem_persistent_pool_page_put(void *page_va)
> struct page_info *pi;
>
> ASSERT(IS_PAGE_ALIGNED(page_va));
> - pi = mfn_to_page(virt_to_mfn(page_va));
> + pi = mfn_to_page(_mfn(virt_to_mfn(page_va)));
> ASSERT(IS_VALID_PAGE(pi));
> __tmem_free_page_thispool(pi);
> }
> diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
> index bd52e44faf..bf7b14f79a 100644
> --- a/xen/common/tmem_xen.c
> +++ b/xen/common/tmem_xen.c
> @@ -14,10 +14,6 @@
> #include <xen/cpu.h>
> #include <xen/init.h>
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
> bool __read_mostly opt_tmem;
> boolean_param("tmem", opt_tmem);
>
> diff --git a/xen/common/trace.c b/xen/common/trace.c
> index 2e18702317..1f19b7a604 100644
> --- a/xen/common/trace.c
> +++ b/xen/common/trace.c
> @@ -243,7 +243,7 @@ static int alloc_trace_bufs(unsigned int pages)
> /* Now share the trace pages */
> for ( i = 0; i < pages; i++ )
> {
> - pg = mfn_to_page(t_info_mfn_list[offset + i]);
> + pg = mfn_to_page(_mfn(t_info_mfn_list[offset + i]));
> share_xen_page_with_privileged_guests(pg, XENSHARE_writable);
> }
> }
> @@ -274,7 +274,7 @@ out_dealloc:
> uint32_t mfn = t_info_mfn_list[offset + i];
> if ( !mfn )
> break;
> - ASSERT(!(mfn_to_page(mfn)->count_info & PGC_allocated));
> + ASSERT(!(mfn_to_page(_mfn(mfn))->count_info & PGC_allocated));
> free_xenheap_pages(mfn_to_virt(mfn), 0);
> }
> }
> diff --git a/xen/common/vmap.c b/xen/common/vmap.c
> index 1f50c91789..6e295862fb 100644
> --- a/xen/common/vmap.c
> +++ b/xen/common/vmap.c
> @@ -9,10 +9,6 @@
> #include <xen/vmap.h>
> #include <asm/page.h>
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
> static DEFINE_SPINLOCK(vm_lock);
> static void *__read_mostly vm_base[VMAP_REGION_NR];
> #define vm_bitmap(x) ((unsigned long *)vm_base[x])
> @@ -274,7 +270,7 @@ static void *vmalloc_type(size_t size, enum vmap_region type)
>
> error:
> while ( i-- )
> - free_domheap_page(mfn_to_page(mfn_x(mfn[i])));
> + free_domheap_page(mfn_to_page(mfn[i]));
> xfree(mfn);
> return NULL;
> }
> diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
> index 5acdde5691..fecdfb3697 100644
> --- a/xen/common/xenoprof.c
> +++ b/xen/common/xenoprof.c
> @@ -22,8 +22,6 @@
> /* Override macros from asm/page.h to make them work with mfn_t */
> #undef virt_to_mfn
> #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
>
> /* Limit amount of pages used for shared buffer (per domain) */
> #define MAX_OPROF_SHARED_PAGES 32
> diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
> index fd2327d3e5..70b4345b37 100644
> --- a/xen/drivers/passthrough/amd/iommu_map.c
> +++ b/xen/drivers/passthrough/amd/iommu_map.c
> @@ -451,7 +451,7 @@ static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn,
> BUG_ON( table == NULL || level < IOMMU_PAGING_MODE_LEVEL_1 ||
> level > IOMMU_PAGING_MODE_LEVEL_6 );
>
> - next_table_mfn = page_to_mfn(table);
> + next_table_mfn = mfn_x(page_to_mfn(table));
>
> if ( level == IOMMU_PAGING_MODE_LEVEL_1 )
> {
> @@ -493,7 +493,7 @@ static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn,
> return 1;
> }
>
> - next_table_mfn = page_to_mfn(table);
> + next_table_mfn = mfn_x(page_to_mfn(table));
> set_iommu_pde_present((u32*)pde, next_table_mfn, next_level,
> !!IOMMUF_writable, !!IOMMUF_readable);
>
> @@ -520,7 +520,7 @@ static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn,
> unmap_domain_page(next_table_vaddr);
> return 1;
> }
> - next_table_mfn = page_to_mfn(table);
> + next_table_mfn = mfn_x(page_to_mfn(table));
> set_iommu_pde_present((u32*)pde, next_table_mfn, next_level,
> !!IOMMUF_writable, !!IOMMUF_readable);
> }
> @@ -577,7 +577,7 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
> }
>
> new_root_vaddr = __map_domain_page(new_root);
> - old_root_mfn = page_to_mfn(old_root);
> + old_root_mfn = mfn_x(page_to_mfn(old_root));
> set_iommu_pde_present(new_root_vaddr, old_root_mfn, level,
> !!IOMMUF_writable, !!IOMMUF_readable);
> level++;
> @@ -712,7 +712,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
> }
>
> /* Deallocate lower level page table */
> - free_amd_iommu_pgtable(mfn_to_page(pt_mfn[merge_level - 1]));
> + free_amd_iommu_pgtable(mfn_to_page(_mfn(pt_mfn[merge_level - 1])));
> }
>
> out:
> @@ -802,7 +802,7 @@ void amd_iommu_share_p2m(struct domain *d)
> mfn_t pgd_mfn;
>
> pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
> - p2m_table = mfn_to_page(mfn_x(pgd_mfn));
> + p2m_table = mfn_to_page(pgd_mfn);
>
> if ( hd->arch.root_table != p2m_table )
> {
> diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> index 1aecf7cf34..2c44fabf99 100644
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -184,7 +184,7 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
>
> page_list_for_each ( page, &d->page_list )
> {
> - unsigned long mfn = page_to_mfn(page);
> + unsigned long mfn = mfn_x(page_to_mfn(page));
> unsigned long gfn = mfn_to_gmfn(d, mfn);
> unsigned int mapping = IOMMUF_readable;
> int ret;
> diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
> index 0253823173..68182afd91 100644
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -58,7 +58,7 @@ int arch_iommu_populate_page_table(struct domain *d)
> if ( is_hvm_domain(d) ||
> (page->u.inuse.type_info & PGT_type_mask) == PGT_writable_page )
> {
> - unsigned long mfn = page_to_mfn(page);
> + unsigned long mfn = mfn_x(page_to_mfn(page));
> unsigned long gfn = mfn_to_gmfn(d, mfn);
>
> if ( gfn != gfn_x(INVALID_GFN) )
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index 023e2eb213..b1d94805d4 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -138,7 +138,7 @@ extern vaddr_t xenheap_virt_start;
> #endif
>
> #ifdef CONFIG_ARM_32
> -#define is_xen_heap_page(page) is_xen_heap_mfn(__page_to_mfn(page))
> +#define is_xen_heap_page(page) is_xen_heap_mfn(mfn_x(page_to_mfn(page)))
> #define is_xen_heap_mfn(mfn) ({ \
> unsigned long mfn_ = (mfn); \
> (mfn_ >= mfn_x(xenheap_mfn_start) && \
> @@ -147,7 +147,7 @@ extern vaddr_t xenheap_virt_start;
> #else
> #define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
> #define is_xen_heap_mfn(mfn) \
> - (mfn_valid(_mfn(mfn)) && is_xen_heap_page(__mfn_to_page(mfn)))
> + (mfn_valid(_mfn(mfn)) && is_xen_heap_page(mfn_to_page(_mfn(mfn))))
> #endif
>
> #define is_xen_fixed_mfn(mfn) \
> @@ -220,12 +220,14 @@ static inline void __iomem *ioremap_wc(paddr_t start, size_t len)
> })
>
> /* Convert between machine frame numbers and page-info structures. */
> -#define __mfn_to_page(mfn) (frame_table + (pfn_to_pdx(mfn) - frametable_base_pdx))
> -#define __page_to_mfn(pg) pdx_to_pfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
> +#define mfn_to_page(mfn) \
> + (frame_table + (mfn_to_pdx(mfn) - frametable_base_pdx))
> +#define page_to_mfn(pg) \
> + pdx_to_mfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
>
> /* Convert between machine addresses and page-info structures. */
> -#define maddr_to_page(ma) __mfn_to_page((ma) >> PAGE_SHIFT)
> -#define page_to_maddr(pg) ((paddr_t)__page_to_mfn(pg) << PAGE_SHIFT)
> +#define maddr_to_page(ma) mfn_to_page(maddr_to_mfn(ma))
> +#define page_to_maddr(pg) (mfn_to_maddr(page_to_mfn(pg)))
>
> /* Convert between frame number and address formats. */
> #define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)
> @@ -235,7 +237,7 @@ static inline void __iomem *ioremap_wc(paddr_t start, size_t len)
> #define gaddr_to_gfn(ga) _gfn(paddr_to_pfn(ga))
> #define mfn_to_maddr(mfn) pfn_to_paddr(mfn_x(mfn))
> #define maddr_to_mfn(ma) _mfn(paddr_to_pfn(ma))
> -#define vmap_to_mfn(va) paddr_to_pfn(virt_to_maddr((vaddr_t)va))
> +#define vmap_to_mfn(va) maddr_to_mfn(virt_to_maddr((vaddr_t)va))
> #define vmap_to_page(va) mfn_to_page(vmap_to_mfn(va))
>
> /* Page-align address and convert to frame number format */
> @@ -293,8 +295,6 @@ static inline uint64_t gvirt_to_maddr(vaddr_t va, paddr_t *pa,
> * These are overriden in various source files while underscored version
> * remain intact.
> */
> -#define mfn_to_page(mfn) __mfn_to_page(mfn)
> -#define page_to_mfn(pg) __page_to_mfn(pg)
> #define virt_to_mfn(va) __virt_to_mfn(va)
> #define mfn_to_virt(mfn) __mfn_to_virt(mfn)
>
> @@ -314,7 +314,7 @@ static inline struct page_info *virt_to_page(const void *v)
>
> static inline void *page_to_virt(const struct page_info *pg)
> {
> - return mfn_to_virt(page_to_mfn(pg));
> + return mfn_to_virt(mfn_x(page_to_mfn(pg)));
> }
>
> struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index a0abc84ed8..bcac141fd4 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -278,7 +278,7 @@ static inline struct page_info *get_page_from_gfn(
> {
> struct page_info *page;
> p2m_type_t p2mt;
> - unsigned long mfn = mfn_x(p2m_lookup(d, _gfn(gfn), &p2mt));
> + mfn_t mfn = p2m_lookup(d, _gfn(gfn), &p2mt);
>
> if (t)
> *t = p2mt;
> @@ -286,7 +286,7 @@ static inline struct page_info *get_page_from_gfn(
> if ( !p2m_is_any_ram(p2mt) )
> return NULL;
>
> - if ( !mfn_valid(_mfn(mfn)) )
> + if ( !mfn_valid(mfn) )
> return NULL;
> page = mfn_to_page(mfn);
>
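
The get_page_from_gfn() change shows the payoff nicely: the mfn_t from
p2m_lookup() now flows straight into mfn_valid() and mfn_to_page()
without any _mfn()/mfn_x() round-trip. A hypothetical caller, just to
illustrate the usage pattern (d and gfn assumed in scope):

    p2m_type_t p2mt;
    struct page_info *page = get_page_from_gfn(d, gfn, &p2mt, P2M_ALLOC);

    if ( page )
    {
        /* Reference taken by get_page_from_gfn(); drop it when done. */
        put_page(page);
    }
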
> diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
> index 3013c266fe..8dc3821e97 100644
> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -271,7 +271,7 @@ struct page_info
>
> #define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
> #define is_xen_heap_mfn(mfn) \
> - (__mfn_valid(mfn) && is_xen_heap_page(__mfn_to_page(mfn)))
> + (__mfn_valid(mfn) && is_xen_heap_page(mfn_to_page(_mfn(mfn))))
> #define is_xen_fixed_mfn(mfn) \
> ((((mfn) << PAGE_SHIFT) >= __pa(&_stext)) && \
> (((mfn) << PAGE_SHIFT) <= __pa(&__2M_rwdata_end)))
> @@ -384,7 +384,7 @@ void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner);
>
> static inline struct page_info *get_page_from_mfn(mfn_t mfn, struct domain *d)
> {
> - struct page_info *page = __mfn_to_page(mfn_x(mfn));
> + struct page_info *page = mfn_to_page(mfn);
>
> if ( unlikely(!mfn_valid(mfn)) || unlikely(!get_page(page, d)) )
> {
> @@ -478,10 +478,10 @@ extern paddr_t mem_hotplug;
> #define SHARED_M2P(_e) ((_e) == SHARED_M2P_ENTRY)
>
> #define compat_machine_to_phys_mapping ((unsigned int *)RDWR_COMPAT_MPT_VIRT_START)
> -#define _set_gpfn_from_mfn(mfn, pfn) ({ \
> - struct domain *d = page_get_owner(__mfn_to_page(mfn)); \
> - unsigned long entry = (d && (d == dom_cow)) ? \
> - SHARED_M2P_ENTRY : (pfn); \
> +#define _set_gpfn_from_mfn(mfn, pfn) ({ \
> + struct domain *d = page_get_owner(mfn_to_page(_mfn(mfn))); \
> + unsigned long entry = (d && (d == dom_cow)) ? \
> + SHARED_M2P_ENTRY : (pfn); \
> ((void)((mfn) >= (RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) / 4 || \
> (compat_machine_to_phys_mapping[(mfn)] = (unsigned int)(entry))), \
> machine_to_phys_mapping[(mfn)] = (entry)); \
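
Worth noting that _set_gpfn_from_mfn() still takes a raw mfn and boxes
it internally with _mfn(), so it marks one of the remaining
raw/typesafe boundaries, whereas get_page_from_mfn() is now typesafe
end to end. An illustrative (hypothetical) caller:

    mfn_t mfn = page_to_mfn(pg);
    struct page_info *page = get_page_from_mfn(mfn, d);

    if ( !page )
        return -EINVAL;
    /* ... use the page ... */
    put_page(page);
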
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 2e7aa8fc79..c486b6f8f0 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -488,7 +488,7 @@ static inline struct page_info *get_page_from_gfn(
> /* Non-translated guests see 1-1 RAM / MMIO mappings everywhere */
> if ( t )
> *t = likely(d != dom_io) ? p2m_ram_rw : p2m_mmio_direct;
> - page = __mfn_to_page(gfn);
> + page = mfn_to_page(_mfn(gfn));
> return mfn_valid(_mfn(gfn)) && get_page(page, d) ? page : NULL;
> }
>
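
For non-translated guests the frame numbers are identity-mapped, so
the gfn value really is the mfn; the explicit _mfn(gfn) makes that
reinterpretation visible at exactly one point instead of leaving it
implicit. In sketch form (my own illustration):

    /* Direct-mapped guest: one clearly marked gfn -> mfn conversion. */
    mfn_t mfn = _mfn(gfn);
    if ( mfn_valid(mfn) )
        page = mfn_to_page(mfn);
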
> diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
> index 45ca742678..7e2a1546c3 100644
> --- a/xen/include/asm-x86/page.h
> +++ b/xen/include/asm-x86/page.h
> @@ -88,10 +88,10 @@
> ((paddr_t)(((x).l4 & (PADDR_MASK&PAGE_MASK))))
>
> /* Get pointer to info structure of page mapped by pte (struct page_info *). */
> -#define l1e_get_page(x) (__mfn_to_page(l1e_get_pfn(x)))
> -#define l2e_get_page(x) (__mfn_to_page(l2e_get_pfn(x)))
> -#define l3e_get_page(x) (__mfn_to_page(l3e_get_pfn(x)))
> -#define l4e_get_page(x) (__mfn_to_page(l4e_get_pfn(x)))
> +#define l1e_get_page(x) mfn_to_page(l1e_get_mfn(x))
> +#define l2e_get_page(x) mfn_to_page(l2e_get_mfn(x))
> +#define l3e_get_page(x) mfn_to_page(l3e_get_mfn(x))
> +#define l4e_get_page(x) mfn_to_page(l4e_get_mfn(x))
>
> /* Get pte access flags (unsigned int). */
> #define l1e_get_flags(x) (get_pte_flags((x).l1))
> @@ -157,10 +157,10 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
> #define l4e_from_intpte(intpte) ((l4_pgentry_t) { (intpte_t)(intpte) })
>
> /* Construct a pte from a page pointer and access flags. */
> -#define l1e_from_page(page, flags) l1e_from_pfn(__page_to_mfn(page), (flags))
> -#define l2e_from_page(page, flags) l2e_from_pfn(__page_to_mfn(page), (flags))
> -#define l3e_from_page(page, flags) l3e_from_pfn(__page_to_mfn(page), (flags))
> -#define l4e_from_page(page, flags) l4e_from_pfn(__page_to_mfn(page), (flags))
> +#define l1e_from_page(page, flags) l1e_from_mfn(page_to_mfn(page), (flags))
> +#define l2e_from_page(page, flags) l2e_from_mfn(page_to_mfn(page), (flags))
> +#define l3e_from_page(page, flags) l3e_from_mfn(page_to_mfn(page), (flags))
> +#define l4e_from_page(page, flags) l4e_from_mfn(page_to_mfn(page), (flags))
>
> /* Add extra flags to an existing pte. */
> #define l1e_add_flags(x, flags) ((x).l1 |= put_pte_flags(flags))
> @@ -215,13 +215,13 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
> /* Page-table type. */
> typedef struct { u64 pfn; } pagetable_t;
> #define pagetable_get_paddr(x) ((paddr_t)(x).pfn << PAGE_SHIFT)
> -#define pagetable_get_page(x) __mfn_to_page((x).pfn)
> +#define pagetable_get_page(x) mfn_to_page(pagetable_get_mfn(x))
> #define pagetable_get_pfn(x) ((x).pfn)
> #define pagetable_get_mfn(x) _mfn(((x).pfn))
> #define pagetable_is_null(x) ((x).pfn == 0)
> #define pagetable_from_pfn(pfn) ((pagetable_t) { (pfn) })
> #define pagetable_from_mfn(mfn) ((pagetable_t) { mfn_x(mfn) })
> -#define pagetable_from_page(pg) pagetable_from_pfn(__page_to_mfn(pg))
> +#define pagetable_from_page(pg) pagetable_from_mfn(page_to_mfn(pg))
> #define pagetable_from_paddr(p) pagetable_from_pfn((p)>>PAGE_SHIFT)
> #define pagetable_null() pagetable_from_pfn(0)
>
> @@ -240,12 +240,12 @@ void copy_page_sse2(void *, const void *);
> #define __mfn_to_virt(mfn) (maddr_to_virt((paddr_t)(mfn) << PAGE_SHIFT))
>
> /* Convert between machine frame numbers and page-info structures. */
> -#define __mfn_to_page(mfn) (frame_table + pfn_to_pdx(mfn))
> -#define __page_to_mfn(pg) pdx_to_pfn((unsigned long)((pg) - frame_table))
> +#define mfn_to_page(mfn) (frame_table + mfn_to_pdx(mfn))
> +#define page_to_mfn(pg) pdx_to_mfn((unsigned long)((pg) - frame_table))
>
> /* Convert between machine addresses and page-info structures. */
> -#define __maddr_to_page(ma) __mfn_to_page((ma) >> PAGE_SHIFT)
> -#define __page_to_maddr(pg) ((paddr_t)__page_to_mfn(pg) << PAGE_SHIFT)
> +#define __maddr_to_page(ma) mfn_to_page(maddr_to_mfn(ma))
> +#define __page_to_maddr(pg) (mfn_to_maddr(page_to_mfn(pg)))
>
> /* Convert between frame number and address formats. */
> #define __pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)
> @@ -264,8 +264,6 @@ void copy_page_sse2(void *, const void *);
> #define mfn_to_virt(mfn) __mfn_to_virt(mfn)
> #define virt_to_maddr(va) __virt_to_maddr((unsigned long)(va))
> #define maddr_to_virt(ma) __maddr_to_virt((unsigned long)(ma))
> -#define mfn_to_page(mfn) __mfn_to_page(mfn)
> -#define page_to_mfn(pg) __page_to_mfn(pg)
> #define maddr_to_page(ma) __maddr_to_page(ma)
> #define page_to_maddr(pg) __page_to_maddr(pg)
> #define virt_to_page(va) __virt_to_page(va)
> @@ -273,7 +271,7 @@ void copy_page_sse2(void *, const void *);
> #define pfn_to_paddr(pfn) __pfn_to_paddr(pfn)
> #define paddr_to_pfn(pa) __paddr_to_pfn(pa)
> #define paddr_to_pdx(pa) pfn_to_pdx(paddr_to_pfn(pa))
> -#define vmap_to_mfn(va) l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va)))
> +#define vmap_to_mfn(va) _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va))))
> #define vmap_to_page(va) mfn_to_page(vmap_to_mfn(va))
>
> #endif /* !defined(__ASSEMBLY__) */
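
With the l?e_from_page()/l?e_get_page() conversions a page table entry
can now be built from a struct page_info without a raw frame number
ever appearing in the caller. A minimal sketch (the flag choice is
mine, purely for illustration):

    /* page_to_mfn() yields an mfn_t which l1e_from_mfn() consumes
     * directly; no pfn escapes into the caller. */
    l1_pgentry_t pte = l1e_from_page(pg, __PAGE_HYPERVISOR_RW);
    struct page_info *same_pg = l1e_get_page(pte);
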
> diff --git a/xen/include/xen/domain_page.h b/xen/include/xen/domain_page.h
> index 890bae5b9c..32669a3339 100644
> --- a/xen/include/xen/domain_page.h
> +++ b/xen/include/xen/domain_page.h
> @@ -34,7 +34,7 @@ void unmap_domain_page(const void *va);
> /*
> * Given a VA from map_domain_page(), return its underlying MFN.
> */
> -unsigned long domain_page_map_to_mfn(const void *va);
> +mfn_t domain_page_map_to_mfn(const void *va);
>
> /*
> * Similar to the above calls, except the mapping is accessible in all
> @@ -44,11 +44,11 @@ unsigned long domain_page_map_to_mfn(const void *va);
> void *map_domain_page_global(mfn_t mfn);
> void unmap_domain_page_global(const void *va);
>
> -#define __map_domain_page(pg) map_domain_page(_mfn(__page_to_mfn(pg)))
> +#define __map_domain_page(pg) map_domain_page(page_to_mfn(pg))
>
> static inline void *__map_domain_page_global(const struct page_info *pg)
> {
> - return map_domain_page_global(_mfn(__page_to_mfn(pg)));
> + return map_domain_page_global(page_to_mfn(pg));
> }
>
> #else /* !CONFIG_DOMAIN_PAGE */
> @@ -56,7 +56,7 @@ static inline void *__map_domain_page_global(const struct page_info *pg)
> #define map_domain_page(mfn) __mfn_to_virt(mfn_x(mfn))
> #define __map_domain_page(pg) page_to_virt(pg)
> #define unmap_domain_page(va) ((void)(va))
> -#define domain_page_map_to_mfn(va) virt_to_mfn((unsigned long)(va))
> +#define domain_page_map_to_mfn(va) _mfn(virt_to_mfn((unsigned long)(va)))
>
> static inline void *map_domain_page_global(mfn_t mfn)
> {
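
And with domain_page_map_to_mfn() returning mfn_t, a map/unmap round
trip can be sanity-checked with the typesafe comparator; a
hypothetical example of mine:

    void *va = __map_domain_page(pg);

    /* mfn_eq() now works directly on the returned value. */
    ASSERT(mfn_eq(domain_page_map_to_mfn(va), page_to_mfn(pg)));
    unmap_domain_page(va);
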
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index caad06e753..204dd9c48d 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -277,13 +277,8 @@ struct page_list_head
> # define PAGE_LIST_NULL ((typeof(((struct page_info){}).list.next))~0)
>
> # if !defined(pdx_to_page) && !defined(page_to_pdx)
> -# if defined(__page_to_mfn) || defined(__mfn_to_page)
> -# define page_to_pdx __page_to_mfn
> -# define pdx_to_page __mfn_to_page
> -# else
> # define page_to_pdx page_to_mfn
> # define pdx_to_page mfn_to_page
> -# endif
> # endif
>
> # define PAGE_LIST_HEAD_INIT(name) { NULL, NULL }
> diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
> index 542c0b3f20..8516a0b131 100644
> --- a/xen/include/xen/tmem_xen.h
> +++ b/xen/include/xen/tmem_xen.h
> @@ -25,7 +25,7 @@
> typedef uint32_t pagesize_t; /* like size_t, must handle largest PAGE_SIZE */
>
> #define IS_PAGE_ALIGNED(addr) IS_ALIGNED((unsigned long)(addr), PAGE_SIZE)
> -#define IS_VALID_PAGE(_pi) mfn_valid(_mfn(page_to_mfn(_pi)))
> +#define IS_VALID_PAGE(_pi) mfn_valid(page_to_mfn(_pi))
>
> extern struct page_list_head tmem_page_list;
> extern spinlock_t tmem_page_list_lock;
> --
> 2.11.0
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel