From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com"
	<stefano.stabellini@eu.citrix.com>
Subject: [HYBRID] : mapping IO mems in the EPT
Date: Thu, 14 Jun 2012 18:43:38 -0700	[thread overview]
Message-ID: <20120614184338.43fb879b@mantra.us.oracle.com> (raw)

Hi guys,

During my refresh to the latest linux, I noticed the direct mapping of
all non-RAM pages in xen_set_identity_and_release(). I currently don't
map them all up front, but only as needed, looking at the PAGE_IO bit in
the pte. One result of that is a minor change to a common code macro:

        __set_fixmap(idx, phys, PAGE_KERNEL_NOCACHE)
to
        __set_fixmap(idx, phys, PAGE_KERNEL_IO_NOCACHE)

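(For reference -- as far as I can tell from the 3.x x86 headers, the only
difference between the two protections is the Xen-specific _PAGE_IOMAP
software bit, i.e. the PAGE_IO bit mentioned above; roughly:)

        /* arch/x86/include/asm/pgtable_types.h (3.x era), roughly: the _IO
         * variants just OR in the Xen software bit _PAGE_IOMAP, which marks
         * the pte as an I/O (machine) address rather than a pseudo-physical
         * one.  set_fixmap_nocache() in fixmap.h is the common-code wrapper
         * that supplies the prot to __set_fixmap().
         */
        #define __PAGE_KERNEL_NOCACHE     (__PAGE_KERNEL | _PAGE_PCD | _PAGE_PWT)
        #define __PAGE_KERNEL_IO          (__PAGE_KERNEL | _PAGE_IOMAP)
        #define __PAGE_KERNEL_IO_NOCACHE  (__PAGE_KERNEL_NOCACHE | _PAGE_IOMAP)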

To avoid this change, and to keep all my changes limited to xen files
only, I thought I could just map all the non-RAM pages up front too. But
I am concerned the EPT may grow too large, especially when we get to
*really* large NUMA boxes. What do you guys think? Should I worry about
it?

thanks
Mukesh

E820 on my small box:
Xen: [mem 0x0000000000000000-0x000000000009cfff] usable
Xen: [mem 0x000000000009d800-0x00000000000fffff] reserved
Xen: [mem 0x0000000000100000-0x00000000bf30cfff] usable
Xen: [mem 0x00000000bf30d000-0x00000000bf38cfff] ACPI NVS
Xen: [mem 0x00000000bf38d000-0x00000000bf3a2fff] reserved
Xen: [mem 0x00000000bf3a3000-0x00000000bf3a3fff] ACPI NVS
Xen: [mem 0x00000000bf3a4000-0x00000000bf3b4fff] reserved
Xen: [mem 0x00000000bf3b5000-0x00000000bf3b7fff] ACPI NVS
Xen: [mem 0x00000000bf3b8000-0x00000000bf3defff] reserved
Xen: [mem 0x00000000bf3df000-0x00000000bf3dffff] usable
Xen: [mem 0x00000000bf3e0000-0x00000000bf3e0fff] ACPI NVS
Xen: [mem 0x00000000bf3e1000-0x00000000bf415fff] reserved
Xen: [mem 0x00000000bf416000-0x00000000bf41ffff] ACPI data
Xen: [mem 0x00000000bf420000-0x00000000bf420fff] ACPI NVS
Xen: [mem 0x00000000bf421000-0x00000000bf422fff] ACPI data
Xen: [mem 0x00000000bf423000-0x00000000bf42afff] ACPI NVS
Xen: [mem 0x00000000bf42b000-0x00000000bf453fff] reserved
Xen: [mem 0x00000000bf454000-0x00000000bf656fff] ACPI NVS
Xen: [mem 0x00000000bf657000-0x00000000bf7fffff] usable
Xen: [mem 0x00000000c0000000-0x00000000cfffffff] reserved
Xen: [mem 0x00000000fec00000-0x00000000fec02fff] reserved
Xen: [mem 0x00000000fec90000-0x00000000fec90fff] reserved
Xen: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Xen: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Xen: [mem 0x0000000100000000-0x00000002bfffffff] usable
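
FWIW, a back-of-the-envelope for the map-everything-up-front case,
assuming the EPT uses 4K entries only. The ranges below are just the two
big non-RAM chunks from the e820 above, lumped together for illustration
(not a real e820 parser):

/* Back-of-the-envelope only: EPT cost of mapping the non-RAM holes with
 * 4K entries.  One 4K leaf table covers 2MB of guest-physical space; the
 * few upper-level tables are ignored.
 */
#include <stdio.h>
#include <stdint.h>

struct range { uint64_t start, end; };          /* inclusive, e820 style */

int main(void)
{
        struct range holes[] = {
                { 0x00000000c0000000ULL, 0x00000000cfffffffULL }, /* reserved hole below 4G */
                { 0x00000000fec00000ULL, 0x00000000ffffffffULL }, /* fec00000..ffffffff region */
        };
        uint64_t bytes = 0;

        for (size_t i = 0; i < sizeof(holes) / sizeof(holes[0]); i++)
                bytes += holes[i].end - holes[i].start + 1;

        /* one 4K EPT leaf table per 2MB mapped at 4K granularity */
        uint64_t leaf_tables = (bytes + (2ULL << 20) - 1) / (2ULL << 20);

        printf("~%llu MB of MMIO -> ~%llu leaf tables -> ~%llu KB of EPT\n",
               (unsigned long long)(bytes >> 20),
               (unsigned long long)leaf_tables,
               (unsigned long long)(leaf_tables * 4));
        return 0;
}

On this box that is only ~276 MB of MMIO, i.e. roughly half a MB of EPT.
At 4K granularity the cost works out to about 2MB of tables per GB of
MMIO mapped (one 4K leaf table per 2MB), so around 0.2% of the area
covered, and less still if 2M/1G EPT entries can be used.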

Thread overview: 4+ messages
2012-06-15  1:43 Mukesh Rathor [this message]
2012-06-15 11:02 ` [HYBRID] : mapping IO mems in the EPT Stefano Stabellini
2012-06-18 18:35   ` Konrad Rzeszutek Wilk
2012-06-19  9:21     ` Jan Beulich
