* Immediate kernel panic using gntdev device
From: D Sundstrom @ 2012-11-11 23:35 UTC (permalink / raw)
To: xen-devel
Running the Debian wheezy kernel 3.2.0-3-686-pae (32-bit) as an HVM guest
under Xen 4.1.
The Linux PV drivers load and everything appears to be fine.
I want to use the gntalloc device to allocate a page of memory from Domain
A and then map that into Domain B using gntdev. Both are unprivileged
domains.
Using Daniel DeGraaf's test program here:
http://lists.xen.org/archives/html/xen-devel/2011-01/txtzDU6iZhTkB.txt
I can run the command to create a grant, but upon running the command to
map the grant (from either the same domain or another DomU), the kernel
immediately crashes with no diagnostic output.
Should I expect to be able to map grants in a DomU allocated in another
DomU?
Example of running the test:
$ xenstore-read domid
8
$ thetestprogram
src-add <domid> return gntref, address
map <domid> <ref> return index, address
src-del <gntref> no rv
gu <index> no rv
unmap <address> no rv
show print and change mapped items
This process bumps by 4000
src-add 8
src-add mapped 1372 at 0=0
show
00(-1217044480,0): current 4000 new 0
map 8 1372
(immediately crashes the VM)
Thanks,
David
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
* Re: Immediate kernel panic using gntdev device
From: Pablo Llopis @ 2012-11-12 11:36 UTC (permalink / raw)
To: D Sundstrom; +Cc: xen-devel
Hello David,
I am not a Xen developer, but I think I can help with your issue :)
On Mon, Nov 12, 2012 at 12:35 AM, D Sundstrom <sunds@peapod.net> wrote:
>
> Running under debian wheezy kernel 3.2.0-3-686-pae (32 bit), under Xen 4.1
> HVM.
>
> The linux PV drivers load and all appears to be fine.
>
> I want to use the gntalloc device to allocate a page of memory from Domain A
> and then map that into Domain B using gntdev. Both are unprivileged
> domains.
>
> Using Daniel DeGraaf's test program here:
> http://lists.xen.org/archives/html/xen-devel/2011-01/txtzDU6iZhTkB.txt
>
I am running a slightly different version of the tool:
http://lists.xen.org/archives/html/xen-devel/2011-02/msg00231.html
(I think this one is more up to date, though I have not checked whether
there is a more recent one.)
> I can run the command to create a grant, but upon running the command to map
> the grant (from either the same domain or another DomU), the kernel
> immediately crashes with no diagnostic output.
>
> Should I expect to be able to map grants in a DomU allocated in another
> DomU?
Yes, I think that is the main goal of grant references.
>
> Example of running the test:
>
> $ xenstore-read domid
> 8
>
> $ thetestprogram
>
> src-add <domid> return gntref, address
> map <domid> <ref> return index, address
> src-del <gntref> no rv
> gu <index> no rv
> unmap <address> no rv
> show print and change mapped items
> This process bumps by 4000
>
> src-add 8
>
> src-add mapped 1372 at 0=0
>
> show
>
> 00(-1217044480,0): current 4000 new 0
>
> map 8 1372
It is not clear from your output from which domain you are running
each command. It looks like you are trying to issue a grant and map it
from within the same domain. That's probably the reason it crashes.
You are supposed to run this tool from both domains, running the calls
which interface with gntalloc from one domain, and the calls which
interface with gntdev from the other domain.
In any case, the domid you specify in the map must be the domid of the
domain which issued the grant. In other words, when creating a grant, you
specify the domid which is granted access; when mapping a grant, you
specify the domid which issued the grant. (E.g. if you ran "src-add 8"
from dom0, you would run "map 0 1372" from domU 8.)
>
> (immediately crashes the VM)
I have also experienced this crash. The crash should probably not
happen, especially if it can be triggered from user-space, but I have
not looked into that.
* Re: Immediate kernel panic using gntdev device
From: D Sundstrom @ 2012-11-12 13:15 UTC (permalink / raw)
To: Pablo Llopis; +Cc: xen-devel
Thank you Pablo.
It makes no difference if I run both the src-add and map from the same
domain or from different DomU domains.
Whichever DomU I run the map function in crashes immediately.
You mention Dom0. I just want to be clear that I'd like to share
between two DomU domains. Have you gotten this to work?
I also tried the userspace APIs provided by Xen such as
xc_gnttab_map_grant_ref() and these also crash. Of course, these use
the same driver IOCTLs, so this isn't a surprise.
I'll need to see if I can get some debug info from the DomU kernel to
make progress.
If I can get this to work, are there any restrictions on sharing large
amounts of memory - say 160 MB? Or are grant tables intended for a
small number of pages?
Thanks,
David
* Re: Immediate kernel panic using gntdev device
From: Daniel De Graaf @ 2012-11-13 16:24 UTC (permalink / raw)
To: D Sundstrom; +Cc: Pablo Llopis, xen-devel
On 11/12/2012 08:15 AM, D Sundstrom wrote:
> Thank you Pablo.
>
> It makes no difference if I run both the src-add and map from the same
> domain or from different DomU domains.
> Whichever DomU I run the map function in crashes immediately.
Mapping your own grants (which is what the test run you showed did) might
cause problems - although it's a bug that needs to be fixed, if so. You
may want to try using the vchan-node2 tool (tools/libvchan) for testing
and as an example user.
> You mention Dom0. I just want to be clear that I'd like to share
> between two DomU domains. Have you gotten this to work?
That was the goal of gntalloc/libvchan - it should work (and has for me).
> I also tried the userspace APIs provided by Xen such as
> xc_gnttab_map_grant_ref() and these also crash. Of course, these use
> the same driver IOCTLs, so this isn't a surprise.
>
> I'll need to see if I can get some debug info from the DomU kernel to
> make progress.
You might want to try booting your domU with console=hvc0 and look at
xl console - that will usually give you useful backtraces. Without that,
it's rather difficult to tell what the problem is.
> If I can get this to work, are there any restrictions on sharing large
> amounts of memory? Say 160Mb? Or are grant tables intended for a
> small number of pages?
There are restrictions within the modules (the default is 1024 4K pages), and
in Xen itself on the number of grant table and maptrack pages - but I
think those can be adjusted via a boot parameter. The grant tables aren't
currently intended to share large amounts of memory, so you may run into
some inefficiencies when doing the map/unmap. If you're using an IOMMU for
one of the domUs, this may end up being especially costly.
--
Daniel De Graaf
National Security Agency
* Re: Immediate kernel panic using gntdev device
From: D Sundstrom @ 2012-12-04 0:49 UTC (permalink / raw)
To: Daniel De Graaf; +Cc: Pablo Llopis, xen-devel
The issue seems to be that my version of Xen (XenClient XT) does not
support balloon drivers. Any call to the memory_op hypercall to change the
reservation terminates my guest with extreme prejudice.
I'll take that one up with Citrix. However, can someone explain why
mapping a grant needs to manipulate the balloon reservation?
Specifically, in the 3.7-rc7 Linux kernel tree, in the file
drivers/xen/balloon.c:
At line 512 it tries to get a page out of the balloon. This returns NULL
(no page), so the "if (page)" test at line 513 evaluates to false, and at
line 518 the else branch calls decrease_reservation().
Thanks
David
* Re: Immediate kernel panic using gntdev device
From: Daniel De Graaf @ 2012-12-04 14:18 UTC (permalink / raw)
To: D Sundstrom; +Cc: Pablo Llopis, xen-devel
On 12/03/2012 07:49 PM, D Sundstrom wrote:
> The issue seems to be my version of Xen (XenClient XT) must not support
> ballon drivers. Any call to the memory_op hypercall to change the
> reservation terminates my guest with extreme prejudice.
>
> I'll take that one up with Citrix. However, can someone explain why
> mapping a grant needs to manipulate the balloon reservation?
>
> Specifically, in the 3.7-RC7 linux kernel tree, the file
> drivers/xen/balloon.c:
>
> At line 512 it tries to get a page out of the balloon. This returns null
> (no page).
> If page.... at line 513 evaluates to false
> At line 518 the else block calls decrease_reservation().
>
>
> Thanks
> David
>
The gntdev driver needs a GFN for the mapped page (this is a hard requirement
for HVM, and also makes PV in-kernel mapping of the page easier iirc), and this
GFN must be unused by the guest (no associated MFN - otherwise it may end up
leaking the MFN until the domain is shut down). Since ballooned-out pages satisfy
these requirements, the gntdev code uses the balloon pool instead of breaking
the GFN/MFN association itself or trying to use the pages beyond the last valid
GFN.
--
Daniel De Graaf
National Security Agency
* Re: Immediate kernel panic using gntdev device
From: D Sundstrom @ 2012-12-04 17:48 UTC (permalink / raw)
To: Daniel De Graaf; +Cc: Pablo Llopis, xen-devel
Thanks Daniel and Pablo.
Pablo please keep me advised if you solve any issues regarding granting
large amounts of memory. I have the same requirement.
Daniel, thanks for the explanation. Indeed, if I just allocate memory off
the heap everything works, but I'm "leaking" that memory.
I'll need an answer from Citrix as to why XenClient fails the memory_op
hypercall.
Is the intent of "decrease reservation" to pull more memory into the DomU?
I didn't quite understand the logic in this driver if it fails to find
memory already in the balloon list.
-David
* Re: Immediate kernel panic using gntdev device
From: Daniel De Graaf @ 2012-12-04 17:54 UTC (permalink / raw)
To: D Sundstrom; +Cc: Pablo Llopis, xen-devel
On 12/04/2012 12:48 PM, D Sundstrom wrote:
> Thanks Daniel and Pablo.
>
> Pablo please keep me advised if you solve any issues regarding granting
> large amounts of memory. I have the same requirement.
>
> Daniel, thanks for the explanation. Indeed, if I just allocate memory of
> the heap everything works, but I'm "leaking" that memory.
>
> I'll need an answer from Citrix as to why XenClient fails for the memory op
> hypercall.
>
> Is the intent of "decrease reservation" to pull more memory into the DomU?
> I didn't quite understand the logic in this driver if it fails to find
> memory already in the balloon list.
>
> -David
The decrease reservation actually removes memory from the DomU (lowers usage),
creating a free GFN. When the grant is later unmapped, the GFN will be passed
to increase reservation so that it's usable normally again. The overhead of
these extra two hypercalls is avoided if ballooned pages are already available,
which is why they aren't done at the same time as the grant map/unmap.
--
Daniel De Graaf
National Security Agency