xen-devel.lists.xenproject.org archive mirror
From: sepanta s <sapanta992@gmail.com>
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: George Dunlap <dunlapg@umich.edu>,
	Razvan Cojocaru <rcojocaru@bitdefender.com>,
	"xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: monitor access to pages with a specific p2m_type_t
Date: Mon, 11 Jul 2016 16:55:09 +0430	[thread overview]
Message-ID: <CABaiLQ-OvCkn5VUHSCLNQFy1RwhYEYSBth5gWHn3padvKuWQVw@mail.gmail.com> (raw)
In-Reply-To: <CABaiLQ_q1UV8aqScMvZRXmK9Gmj_1BP=r9kTHeMFepbQ2w-X1A@mail.gmail.com>



On Sun, Jul 10, 2016 at 4:50 PM, sepanta s <sapanta992@gmail.com> wrote:

>
>
> On Sun, Jun 26, 2016 at 5:15 PM, sepanta s <sapanta992@gmail.com> wrote:
>
>>
>>
>>
>> On Fri, Jun 24, 2016 at 8:10 PM, Tamas K Lengyel <tamas@tklengyel.com>
>> wrote:
>>
>>>
>>> On Jun 24, 2016 05:19, "Razvan Cojocaru" <rcojocaru@bitdefender.com>
>>> wrote:
>>> >
>>> > On 06/24/2016 02:05 PM, George Dunlap wrote:
>>> > > On Wed, Jun 22, 2016 at 12:38 PM, sepanta s <sapanta992@gmail.com>
>>> wrote:
>>> > >> Hi,
>>> > >> Is it possible to monitor the access on the pages with p2m_type_t
>>> > >> p2m_ram_shared?
>>> > >
>>> > > cc'ing Tamas and Razvan
>>> >
>>> > Thanks for the CC. Judging by the "if ( npfec.write_access && (p2mt ==
>>> > p2m_ram_shared) )" line in hvm_hap_nested_page_fault() (from
>>> > xen/arch/x86/hvm/hvm.c), I'd say it certainly looks possible. But I
>>> > don't know what the context of the question is.
>>> >
>>> >
>>> > Thanks,
>>> > Razvan
>>>
>> The question is just about getting the gfn and mfn of the pages whose type
>> is p2m_ram_shared, to see which pages are written to and unshared.
>>
>>> Yes, p2m_ram_shared type pages can be monitored with mem_access just as
>>> normal pages. The only part that may be tricky is if you map the page into
>>> your monitoring application while the page is shared. Your handle will
>>> continue to be valid even if the page is unshared but it will continue to
>>> point to the shared page. However, even if you catch write access events to
>>> the shared page that will lead to unsharing, the mem_access notification is
>>> sent before unsharing. I just usually do unsharing myself in the mem_access
>>> callback manually for monitored pages for this reason. I might change the
>>> flow in 4.8 to send the notification after the unsharing happened to
>>> simplify this.
>>>
>>> Tamas
>>>
>> Thanks, but in mem_access, what APIs can be used to see such events?
>>
>
> Should I mark the shared pages as rx-only?
>
>
Hi,
Is there any sample code from which I can understand how to capture events on
gfns whose type is p2m_ram_shared? I couldn't find any.
I would be grateful for any help, as there is no documentation on the net for
this :(

Should I just set up the ring page, mark the shared pages as read-only, and
then capture the write events?
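
Something like the following is what I have in mind: a minimal sketch based
on the xen-access pattern, assuming libxc's xc_monitor_enable and
xc_set_mem_access (Xen 4.6+); error handling and the ring-draining loop are
omitted:

#include <xenctrl.h>

/* Restrict write access on a shared gfn so that guest writes raise
 * mem_access (VM_EVENT_REASON_MEM_ACCESS) requests on the monitor ring. */
static int monitor_shared_gfn(xc_interface *xch, uint32_t domid, uint64_t gfn)
{
    uint32_t port;
    void *ring_page;

    /* Map the monitor ring page and get its event channel port. */
    ring_page = xc_monitor_enable(xch, domid, &port);
    if ( !ring_page )
        return -1;

    /* Mark the gfn read/execute only: any guest write now produces a
     * mem_access request that an xen-access-style loop can consume. */
    return xc_set_mem_access(xch, domid, XENMEM_access_rx, gfn, 1);
}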


BTW, I added a function called mem_sharing_notify_unshare to mem_sharing.c
and call it from __mem_sharing_unshare_page at this point:

    if ( p2m_change_type_one(d, gfn, p2m_ram_shared, p2m_ram_rw) )
    {
        gdprintk(XENLOG_ERR, "Could not change p2m type d %hu gfn %lx.\n",
                 d->domain_id, gfn);
        BUG();
    }
    else
    {
        mem_sharing_notify_unshare(d, gfn, 0);
    }
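
For reference, the helper itself is roughly the following (a sketch modeled on
mem_sharing_notify_enomem in the same file; the vm_event field and helper
names are as of Xen 4.7 and may differ in other versions):

/* Queue a VM_EVENT_REASON_MEM_SHARING request on the sharing ring so the
 * listener in dom0 sees which gfn is being unshared. */
static int mem_sharing_notify_unshare(struct domain *d, unsigned long gfn,
                                      bool_t allow_sleep)
{
    struct vcpu *v = current;
    int rc;
    vm_event_request_t req = {
        .reason = VM_EVENT_REASON_MEM_SHARING,
        .vcpu_id = v->vcpu_id,
        .u.mem_sharing.gfn = gfn,
        .u.mem_sharing.p2mt = p2m_ram_shared
    };

    /* Make sure there is room on the sharing ring before queueing. */
    rc = __vm_event_claim_slot(d, &d->vm_event->share, allow_sleep);
    if ( rc < 0 )
        return rc;

    vm_event_put_request(d, &d->vm_event->share, &req);

    return 0;
}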

So by having a vm_event channel listening for unsharing events, I can see the
notification in xen-access. To do so, I have used vm_event_enable with
HVM_PARAM_SHARING_RING_PFN.
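
The only change relative to stock xen-access is which ring gets mapped,
roughly like this (assuming the libxc helper xc_vm_event_enable, which
xenctrl.h marks as internal use; otherwise the ring-mapping code from
xen-access's own vm_event_enable can be copied):

#include <xenctrl.h>
#include <xen/hvm/params.h>

/* Map the sharing ring (instead of the monitor ring that stock xen-access
 * maps), so the VM_EVENT_REASON_MEM_SHARING requests queued by
 * mem_sharing_notify_unshare above land on it. */
static void *enable_sharing_ring(xc_interface *xch, uint32_t domid,
                                 uint32_t *port)
{
    return xc_vm_event_enable(xch, domid, HVM_PARAM_SHARING_RING_PFN, port);
}
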
But when I used memshrtool to share all the pages of two VMs (my vm1 and its
clone vm2), about 900 MB of RAM was shared, yet a lot of unshare events were
reported.
When I do the sharing one more time, no pages appear to have been unshared,
since the total number of shared pages stays the same; so it seems no
unsharing actually happened even though the events fired.
Does a page fault trigger when a write operation is done on a page that is
shared between two VMs?



Thread overview: 14+ messages
2016-06-22 11:38 monitor access to pages with a specific p2m_type_t sepanta s
2016-06-24 11:05 ` George Dunlap
2016-06-24 11:20   ` Razvan Cojocaru
2016-06-24 15:40     ` Tamas K Lengyel
2016-06-26 12:45       ` sepanta s
2016-07-10 12:20         ` sepanta s
2016-07-11 12:25           ` sepanta s [this message]
2016-07-12 18:26             ` Tamas K Lengyel
2016-07-23 11:19               ` sepanta s
2016-08-02  6:19                 ` sepanta s
2016-08-02 15:53                   ` Tamas K Lengyel
2016-08-05 11:35                     ` sepanta s
2016-08-05 18:15                       ` Tamas K Lengyel
2016-08-05 18:26                         ` sepanta s
