From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
To: Tim Deegan <tim@xen.org>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: Accessing Dom0 Physical memory from xen, via direct mappings (PML4:262-271)
Date: Tue, 13 Mar 2012 09:32:28 -0700	[thread overview]
Message-ID: <CAP8mzPPJTK_tg5h2JUTKOHptrG1C2giok3Yy9OO+=zwWEKq4Pg@mail.gmail.com> (raw)
In-Reply-To: <20120313160817.GA71026@ocelot.phlegethon.org>



On Tue, Mar 13, 2012 at 9:08 AM, Tim Deegan <tim@xen.org> wrote:

> At 08:45 -0700 on 13 Mar (1331628358), Shriram Rajagopalan wrote:
> > In config.h (include/asm-x86/config.h) I found this:
> >
> > #if __x86_64__
> > ...
> >  *  0xffff830000000000 - 0xffff87ffffffffff [5TB, 5*2^40 bytes,
> PML4:262-271]
> >  *    1:1 direct mapping of all physical memory.
> > ...
> >
> > I was wondering if it's possible for dom0 to malloc a huge chunk of
> memory
> > and let xen know the starting address of this range.
> > Inside xen, I can translate dom0 virt address to a virt address in the
> above
> > range and access the entire chunk via these virtual addresses.
>
> Eh, maybe?  But it's harder than you'd think.  Memory malloc()ed in dom0
> may not be contiguous in PFN-space, and dom0 PFNs may not be contiguous
> in MFN-space, so you can't just translate the base of the buffer and use
> it with offsets, you have to translate again for every 4k page.  Also, you
> need to make sure dom0 doesn't page out or relocate your user-space
> buffer while Xen is accessing the MFNs.  mlock() might do what you need
> but AIUI there's no guarantee that it won't, e.g., move the buffer
> around to defragment memory.
>
>
Yep, I am aware of the above issues. As far as contiguity is concerned,
I was hoping (*naively/lazily*) that if I allocate a huge chunk (1GB or so)
using posix_memalign(), it would start at a page boundary and also be
contiguous *most* of the time. I only need this setup for some temporary
analysis, not for a production-quality system, and the machine has more
than enough RAM, with swap usage staying at zero the whole time.
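
(For concreteness, the dom0 side of what I had in mind is just standard
POSIX calls, roughly the sketch below; the 1GB size is purely illustrative,
and as you point out, mlock() only pins the pages -- it says nothing about
PFN/MFN contiguity.)

#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define BUF_SIZE (1UL << 30)   /* 1GB byte map -- size is illustrative */

int main(void)
{
    void *buf;
    long page_size = sysconf(_SC_PAGESIZE);

    /* Page-aligned, but only *virtually* contiguous. */
    int rc = posix_memalign(&buf, page_size, BUF_SIZE);
    if (rc != 0) {
        fprintf(stderr, "posix_memalign: %s\n", strerror(rc));
        return 1;
    }

    /* Fault the pages in and pin them so dom0 cannot page them out
     * while Xen is touching the underlying MFNs.  This does not stop
     * the allocation from being scattered in PFN/MFN space. */
    memset(buf, 0, BUF_SIZE);
    if (mlock(buf, BUF_SIZE) != 0) {
        perror("mlock");
        return 1;
    }

    printf("buffer at %p, %lu bytes pinned\n", buf, (unsigned long)BUF_SIZE);
    /* ... hand the address and size to Xen (hypothetical hypercall) ... */

    munlock(buf, BUF_SIZE);
    free(buf);
    return 0;
}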


> If you do handle all that, the correct way to get at the mappings in
> this range is with map_domain_page().  But remember, this only works
> like this on 64-bit Xen.  On 32-bit, only a certain amount of memory can
> be mapped at one time, so if the buffer is really big, you'll need to map
> and unmap parts of it on demand.
>
>
64-bit. The comments I pointed out were under the #if __x86_64__ region.
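
(Just to check that I understand the per-page translation correctly --
inside 64-bit Xen I was picturing something roughly like the sketch below.
The helper names -- gmfn_to_mfn(), mfn_valid(), map_domain_page() -- are
from memory and may not match the current tree exactly.)

/* Rough sketch: access one 4k page of the dom0 buffer through the
 * direct map.  'gfn' is the dom0 frame number for that page; every
 * page of the buffer needs its own translation, as you said. */
static int touch_dom0_page(struct domain *d, unsigned long gfn,
                           unsigned int offset, uint8_t val)
{
    unsigned long mfn = gmfn_to_mfn(d, gfn);   /* translate per 4k page */
    uint8_t *va;

    if ( !mfn_valid(mfn) )
        return -EINVAL;

    va = map_domain_page(mfn);        /* backed by the 1:1 map on x86-64 */
    va[offset & (PAGE_SIZE - 1)] = val;
    unmap_domain_page(va);

    return 0;
}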

> But maybe back up a bit: why do you want to do this?  What's the buffer
> for?  Is it something you could do more easily by having Xen allocate
> the buffer and let dom0 map it?
>
>
Well, the buffer acts as a huge log-dirty "byte" map (a byte per word).
I am skipping the reason for this huge byte map, for the sake of brevity.

Can I have Xen allocate this huge buffer? (A byte per 8-byte word means
about 128MB for a 1GB guest.) And if I were to keep this byte map per
vCPU, it would mean 512MB worth of RAM for a 4-vCPU guest.
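
(To spell out the sizing/indexing I have in mind -- a sketch, with the 1GB
guest size just an example:)

#include <stdint.h>

/* One dirty byte per 8-byte word of guest memory. */
#define GUEST_MEM_BYTES  (1ULL << 30)            /* 1GB guest (example)  */
#define BYTEMAP_BYTES    (GUEST_MEM_BYTES >> 3)  /* 2^27 bytes = 128MB   */
/* Per-vcpu maps: 4 vcpus * 128MB = 512MB of byte maps in total.         */

static inline uint64_t bytemap_index(uint64_t gpa)
{
    return gpa >> 3;    /* guest physical address -> byte-map slot */
}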

Is there a way I could increase the Xen heap size to be able to allocate
this much memory? And how do I map the Xen memory in dom0? I vaguely
remember seeing similar code in xentrace, but if you could point me in
the right direction, it would be great.
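
(From what I remember of xentrace, the dom0 side maps the Xen-allocated
trace buffers with something roughly like the call below -- the DOMID_XEN
plus xc_map_foreign_range() combination is my recollection, so treat the
exact libxc call and how the MFN reaches dom0 as assumptions.)

#include <stdio.h>
#include <sys/mman.h>
#include <xenctrl.h>

/* Sketch: map 'size' bytes of Xen-heap memory, starting at 'mfn',
 * into this dom0 process, xentrace-style (from memory). */
static void *map_xen_buffer(xc_interface *xch, unsigned long mfn, size_t size)
{
    void *buf = xc_map_foreign_range(xch, DOMID_XEN, size,
                                     PROT_READ | PROT_WRITE, mfn);
    if ( buf == NULL )
        fprintf(stderr, "xc_map_foreign_range failed\n");
    return buf;
}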


> > The catch here is that I want this virtual address range to be accessible
> > across all vcpu contexts in xen (whether it's servicing a hypercall from
> > dom0 or a vmx fault caused by Guest).
> >
> > So far, I have only been able to achieve the former. In the latter case,
> > where the "current" vcpu belongs to a guest (eg in a vmx fault handler),
> > I can't access this address range inside xen. Do I have to add EPT
> > mappings to guest's p2m to do this ? Or can I do something else ?
>
> If you really have got a pointer into the 1-1 mapping it should work
> from any vcpu.
> But again, that's not going to work on 32-bit Xen.
> There, you have to use map_domain_page_global() to get a mapping that
> persists across all vcpus, and that's even more limited in how much it
> can map at once.
>
> Cheers,
>
> Tim.
>
>
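
Got it. Just so I have it written down, the 32-bit variant would then be
roughly the sketch below (the map_domain_page_global()/
unmap_domain_page_global() names are from memory, and as you say the
global map space is far too small for a buffer of this size anyway):

/* Sketch: on 32-bit Xen, a mapping visible from every vcpu has to come
 * from the (small) global map area; map once, keep the pointer. */
static uint8_t *bytemap_va;              /* usable from any vcpu once set */

static int setup_bytemap_page(unsigned long mfn)
{
    bytemap_va = map_domain_page_global(mfn);
    return bytemap_va ? 0 : -ENOMEM;
}

static void teardown_bytemap_page(void)
{
    if ( bytemap_va )
    {
        unmap_domain_page_global(bytemap_va);
        bytemap_va = NULL;
    }
}
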
cheers
shriram




Thread overview: 7+ messages
2012-03-13 15:45 Accessing Dom0 Physical memory from xen, via direct mappings (PML4:262-271) Shriram Rajagopalan
2012-03-13 16:08 ` Tim Deegan
2012-03-13 16:32   ` Shriram Rajagopalan [this message]
2012-03-13 16:43     ` Tim Deegan
2012-03-14  3:04       ` Shriram Rajagopalan
2012-03-14  9:00         ` Tim Deegan
     [not found] <mailman.6174.1331656614.1471.xen-devel@lists.xen.org>
2012-03-13 18:13 ` Andres Lagar-Cavilla
