xen-devel.lists.xenproject.org archive mirror
From: Lakshitha Harshan <harshan.dll@gmail.com>
To: xen-devel@lists.xensource.com
Subject: Memory Sharing
Date: Tue, 24 May 2011 12:47:39 +0530
Message-ID: <BANLkTi=Xg=0s4RjVYz9u1ViabrDWtEbWpQ@mail.gmail.com>



Hi,

I'm using Xen 4.1.1-rc1-pre on linux-xen-2.6.32.39-g057b171 and I'm trying to
use the memory-sharing functions. In my code I call:

uint64_t shandle, chandle;
xc_memshr_nominate_gfn(xc_handle, dom1, 4200, &shandle); /* dom1 & dom2 are
                                                            Debian Squeeze domUs */
xc_memshr_nominate_gfn(xc_handle, dom2, 4200, &chandle); /* gfn 4200 is a memory
                                                            page in the kernel code segment */
xc_memshr_share(xc_handle, shandle, chandle);
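For completeness, here is the same sequence wrapped in a function with return
values checked; this is only a sketch based on my reading of the xc_memshr_*
prototypes in xenctrl.h for 4.1 (the share_page() name and the assumption that
xc_handle is an xc_interface pointer are mine):

```c
/* Sketch only: nominate the same gfn in two domains, then share it.
 * Assumes a Xen 4.1 libxc build; xc_memshr_nominate_gfn() and
 * xc_memshr_share() prototypes are from xenctrl.h as I read them. */
#include <stdio.h>
#include <stdint.h>
#include <xenctrl.h>

static int share_page(xc_interface *xch, domid_t dom1, domid_t dom2,
                      unsigned long gfn)
{
    uint64_t shandle, chandle;

    if (xc_memshr_nominate_gfn(xch, dom1, gfn, &shandle) < 0) {
        fprintf(stderr, "nominate gfn %lx failed for dom %u\n", gfn, dom1);
        return -1;
    }
    if (xc_memshr_nominate_gfn(xch, dom2, gfn, &chandle) < 0) {
        fprintf(stderr, "nominate gfn %lx failed for dom %u\n", gfn, dom2);
        return -1;
    }
    if (xc_memshr_share(xch, shandle, chandle) < 0) {
        fprintf(stderr, "share failed\n");
        return -1;
    }
    return 0;
}
```

In my tests the calls themselves return success; the errors below only show up
in the hypervisor log afterwards.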

I checked "xl dmesg" and the following is the output:

(XEN) sh error: sh_remove_all_mappings(): can't find all mappings of mfn
42f01: c=8000000000000002 t=8400000000000001
(XEN) sh error: sh_remove_all_mappings(): can't find all mappings of mfn
42f01: c=8000000000000002 t=8400000000000001
(XEN) sh error: sh_remove_all_mappings(): can't find all mappings of mfn
9c502: c=8000000000000002 t=8400000000000001
(XEN) sh error: sh_remove_all_mappings(): can't find all mappings of mfn
42f02: c=8000000000000002 t=8400000000000001
(XEN) sh error: sh_remove_all_mappings(): can't find all mappings of mfn
42f02: c=8000000000000002 t=8400000000000001
(XEN) printk: 2695593 messages suppressed.
(XEN) mm.c:907:d1 Error getting mfn 9c625 (pfn fffffffffffffffe) from L1
entry 000000009c625021 for l1e_owner=1, pg_owner=1
(XEN) printk: 2768333 messages suppressed.

After these operations the CPU usage of the domUs goes to 97-100% and I can't
shut them down, so when I issue "xl destroy" the system reboots.

What could be the reason? Any help greatly appreciated.

Thanks,
Harshan



Thread overview: 21+ messages
2011-05-24  7:17 Lakshitha Harshan [this message]
2011-05-24  9:19 ` Memory Sharing Tim Deegan