xen-devel.lists.xenproject.org archive mirror
* VM save/restore
From: Junjie Wei @ 2012-08-17 21:28 UTC
  To: xen-devel


Hello,

There is a problem with VM save/restore in Xen 4.1.2 and earlier versions.
When a VM is configured with more than 64 VCPUs, it can be started and
stopped, but it cannot be saved. This happens with both PVM and HVM guests.

# xm vcpu-list 3 | grep OVM_OL5U7_X86_64_PVM_10GB | wc -l
65

# xm save 3 vm.save
Error: /usr/lib64/xen/bin/xc_save 27 3 0 0 0 failed

/var/log/xen/xend.log: INFO (XendCheckpoint:416)
xc: error: Too many VCPUS in guest!: Internal error

It was caused by a hard-coded limit in tools/libxc/xc_domain_save.c:

if ( info.max_vcpu_id >= 64 )
{
      ERROR("Too many VCPUS in guest!");
      goto out;
}

And also in tools/libxc/xc_domain_restore.c:

case XC_SAVE_ID_VCPU_INFO:
      buf->new_ctxt_format = 1;
      if ( RDEXACT(fd, &buf->max_vcpu_id, sizeof(buf->max_vcpu_id)) ||
          buf->max_vcpu_id >= 64 || RDEXACT(fd, &buf->vcpumap,
                                            sizeof(uint64_t)) ) {
          PERROR("Error when reading max_vcpu_id");
          return -1;
      }

The code above is in both xen-4.1.2 and xen-unstable.
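
For reference, the VCPU info chunk that these checks guard is laid out by
the save code roughly as follows (a sketch assembled from the
xc_domain_save.c excerpt; the struct name is made up for illustration):

struct vcpu_info_chunk {
    int      id;           /* XC_SAVE_ID_VCPU_INFO marker */
    int      max_vcpu_id;  /* highest VCPU ID in the guest */
    uint64_t vcpumap;      /* one bit per online VCPU: a single 64-bit
                              word, hence the rejection of IDs >= 64 */
};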

I think if a VM can be successfully started, then save/restore should
also work. So I made a patch and did some testing.

The above problem is gone but there are new ones.

Let me summarize the result here.

With the patch, save/restore works fine as long as the VM can be
started, except in two cases.

1) 32-bit guests can be configured with VCPUs > 32 and started,
    but the guest can only make use of 32 of them.

2) 32-bit PVM guests can be configured with VCPUs > 64 and started,
    but `xm save' does not work.

See the testing below for details. The 128-VCPU limit for HVM guests
is already taken into account.

Could you please review the patch and help with these two cases?


Thanks,
Junjie

-= Test environment =-

[root@ovs087 HVM_X86_64]# cat /etc/ovs-release
Oracle VM server release 3.2.1

[root@ovs087 HVM_X86_64]# uname -a
Linux ovs087 2.6.39-200.30.1.el5uek #1 SMP Thu Jul 12 21:47:09 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux

[root@ovs087 HVM_X86_64]# rpm -qa | grep xen
xenpvboot-0.1-8.el5
xen-devel-4.1.2-39
xen-tools-4.1.2-39
xen-4.1.2-39

-= PVM x86_64, 128 VCPUs =-

[root@ovs087 PVM_X86_64]# xm list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0   511     8 r-----   6916.9
OVM_OL5U7_X86_64_PVM_10GB                    9  2048   128 r-----     48.1

[root@ovs087 PVM_X86_64]# xm save 9 vm.save

[root@ovs087 PVM_X86_64]# xm restore vm.save

[root@ovs087 PVM_X86_64]# xm list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0   511     8 r-----   7076.7
OVM_OL5U7_X86_64_PVM_10GB                   10  2048   128 r-----     51.6

-= PVM x86_64, 256 VCPUs =-

[root@ovs087 PVM_X86_64]# xm list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0   511     8 r-----  10398.1
OVM_OL5U7_X86_64_PVM_10GB                   35  2048   256 r-----     30.4

[root@ovs087 PVM_X86_64]# xm save 35 vm.save

[root@ovs087 PVM_X86_64]# xm restore vm.save

[root@ovs087 PVM_X86_64]# xm list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0   511     8 r-----  10572.1
OVM_OL5U7_X86_64_PVM_10GB                   36  2048   256 r-----   1466.9

-= HVM x86_64, 128 VCPUs =-

[root@ovs087 HVM_X86_64]# xm list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0   511     8 r-----   8017.4
OVM_OL5U7_X86_64_PVHVM_10GB                 19  2048   128 r-----    343.7

[root@ovs087 HVM_X86_64]# xm save 19 vm.save

[root@ovs087 HVM_X86_64]# xm restore vm.save

[root@ovs087 HVM_X86_64]# xm list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0   511     8 r-----   8241.1
OVM_OL5U7_X86_64_PVHVM_10GB                 20  2048   128 r-----    121.7

-= PVM x86, 64 VCPUs =-

[root@ovs087 PVM_X86]# xm list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0   511     8 r-----  36798.0
OVM_OL5U7_X86_PVM_10GB                      54  2048    32 r-----     92.8

[root@ovs087 PVM_X86]# xm vcpu-list 54 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
64

[root@ovs087 PVM_X86]# xm save 54 vm.save

[root@ovs087 PVM_X86]# xm restore vm.save

[root@ovs087 PVM_X86]# xm list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0   511     8 r-----  36959.3
OVM_OL5U7_X86_PVM_10GB                      55  2048    32 r-----     51.0

[root@ovs087 PVM_X86]# xm vcpu-list 55 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
64

32-bit PVM, 65 VCPUs:

[root@ovs087 PVM_X86]# xm list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0   511     8 r-----  36975.9
OVM_OL5U7_X86_PVM_10GB                      56  2048    32 r-----      8.6

[root@ovs087 PVM_X86]# xm vcpu-list 56 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
65

[root@ovs087 PVM_X86]# xm list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0   511     8 r-----  36977.7
OVM_OL5U7_X86_PVM_10GB                      56  2048    32 r-----     24.8

[root@ovs087 PVM_X86]# xm vcpu-list 56 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
65

[root@ovs087 PVM_X86]# xm save 56 vm.save
Error: /usr/lib64/xen/bin/xc_save 26 56 0 0 0 failed

/var/log/xen/xend.log: INFO (XendCheckpoint:416)
xc: error: No context for VCPU64 (61 = No data available): Internal error

-= HVM x86, 64 VCPUs =-

[root@ovs087 HVM_X86]# xm list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0   511     8 r-----  36506.1
OVM_OL5U7_X86_PVHVM_10GB                    52  2048    32 r-----     68.6

[root@ovs087 HVM_X86]# xm vcpu-list 52 | grep OVM_OL5U7_X86_PVHVM_10GB | wc -l
64

[root@ovs087 HVM_X86]# xm save 52 vm.save

[root@ovs087 HVM_X86]# xm restore vm.save

[root@ovs087 HVM_X86]# xm list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0   511     8 r-----  36730.5
OVM_OL5U7_X86_PVHVM_10GB                    53  2048    32 r-----     19.8

[root@ovs087 HVM_X86]# xm vcpu-list 53 | grep OVM_OL5U7_X86_PVHVM_10GB | wc -l
64

-= HVM x86, 128 VCPUs =-

[root@ovs087 HVM_X86]# xm list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0   511     8 r-----  36261.1
OVM_OL5U7_X86_PVHVM_10GB                    50  2048    32 r-----     34.9

[root@ovs087 HVM_X86]# xm vcpu-list 50 | grep OVM_OL5U7_X86_PVHVM_10GB | wc -l
128

[root@ovs087 HVM_X86]# xm save 50 vm.save

[root@ovs087 HVM_X86]# xm restore vm.save

[root@ovs087 HVM_X86]# xm list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0   511     8 r-----  36480.5
OVM_OL5U7_X86_PVHVM_10GB                    51  2048    32 r-----     20.3

[root@ovs087 HVM_X86]# xm vcpu-list 51 | grep OVM_OL5U7_X86_PVHVM_10GB | wc -l
128

[-- Attachment #2: skip-max-vcpu-id-check.patch --]
[-- Type: text/x-patch, Size: 1212 bytes --]

Index: tools/libxc/xc_domain_restore.c
===================================================================
--- tools/libxc/xc_domain_restore.c	(revision 3415)
+++ tools/libxc/xc_domain_restore.c	(working copy)
@@ -771,8 +771,7 @@
     case XC_SAVE_ID_VCPU_INFO:
         buf->new_ctxt_format = 1;
         if ( RDEXACT(fd, &buf->max_vcpu_id, sizeof(buf->max_vcpu_id)) ||
-             buf->max_vcpu_id >= 64 || RDEXACT(fd, &buf->vcpumap,
-                                               sizeof(uint64_t)) ) {
+             RDEXACT(fd, &buf->vcpumap, sizeof(uint64_t)) ) {
             PERROR("Error when reading max_vcpu_id");
             return -1;
         }
Index: tools/libxc/xc_domain_save.c
===================================================================
--- tools/libxc/xc_domain_save.c	(revision 3415)
+++ tools/libxc/xc_domain_save.c	(working copy)
@@ -1566,12 +1566,6 @@
             uint64_t vcpumap;
         } chunk = { XC_SAVE_ID_VCPU_INFO, info.max_vcpu_id };
 
-        if ( info.max_vcpu_id >= 64 )
-        {
-            ERROR("Too many VCPUS in guest!");
-            goto out;
-        }
-
         for ( i = 1; i <= info.max_vcpu_id; i++ )
         {
             xc_vcpuinfo_t vinfo;



* Re: VM save/restore
From: Keir Fraser @ 2012-08-18  6:38 UTC
  To: Junjie Wei, xen-devel

On 17/08/2012 22:28, "Junjie Wei" <junjie.wei@oracle.com> wrote:

> Hello,
> 
> There is a problem with VM save/restore in Xen 4.1.2 and earlier versions.
> When a VM is configured with more than 64 VCPUs, it can be started and
> stopped, but it cannot be saved. This happens with both PVM and HVM guests.
> 
> # xm vcpu-list 3 | grep OVM_OL5U7_X86_64_PVM_10GB | wc -l
> 65
> 
> # xm save 3 vm.save
> Error: /usr/lib64/xen/bin/xc_save 27 3 0 0 0 failed
> 
> /var/log/xen/xend.log: INFO (XendCheckpoint:416)
> xc: error: Too many VCPUS in guest!: Internal error
> 
> It was caused by a hard-coded limit in tools/libxc/xc_domain_save.c:
> 
> if ( info.max_vcpu_id >= 64 )
> {
>       ERROR("Too many VCPUS in guest!");
>       goto out;
> }
> 
> And also in tools/libxc/xc_domain_restore.c:
> 
> case XC_SAVE_ID_VCPU_INFO:
>       buf->new_ctxt_format = 1;
>       if ( RDEXACT(fd, &buf->max_vcpu_id, sizeof(buf->max_vcpu_id)) ||
>           buf->max_vcpu_id >= 64 || RDEXACT(fd, &buf->vcpumap,
>                                             sizeof(uint64_t)) ) {
>           PERROR("Error when reading max_vcpu_id");
>           return -1;
>       }
> 
> The code above is in both xen-4.1.2 and xen-unstable.
> 
> I think if a VM can be successfully started, then save/restore should
> also work. So I made a patch and did some testing.

The check for 64 VCPUs is to cover the fact that we only save/restore a
64-bit vcpumap. That would surely need fixing too, or CPUs > 64 would be
offline after restore, I would imagine.
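
For illustration, a minimal sketch of such a wider, multi-word vcpumap,
using the word/bit split (word i/64, bit i%64) that the patch later in
this thread adopts; the helper names here are hypothetical:

#include <stdint.h>

#define MAX_VCPUS 4096                    /* assumed ceiling */

static uint64_t vcpumap[MAX_VCPUS / 64];  /* one bit per VCPU */

/* Mark VCPU i online: word i/64, bit i%64. */
static void vcpumap_set(unsigned int i)
{
    vcpumap[i / 64] |= 1ULL << (i % 64);
}

/* Test whether VCPU i is marked online. */
static int vcpumap_test(unsigned int i)
{
    return (int)((vcpumap[i / 64] >> (i % 64)) & 1);
}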

And what is a PVM guest?

 -- Keir


* Re: VM save/restore
From: Keir Fraser @ 2012-08-18  7:34 UTC
  To: Junjie Wei, xen-devel; +Cc: Jan Beulich


On 18/08/2012 07:38, "Keir Fraser" <keir.xen@gmail.com> wrote:

>
>> I think if a VM can be successfully started, then save/restore should
>> also work. So I made a patch and did some testing.
> 
> The check for 64 VCPUs is to cover the fact that we only save/restore a
> 64-bit vcpumap. That would surely need fixing too, or CPUs > 64 would be
> offline after restore, I would imagine.

How about the attached patch? It might actually work properly, unlike yours. ;)

>> The above problem is gone but there are new ones.
>> 
>> Let me summarize the result here.
>> 
>> With the patch, save/restore works fine as long as the VM can be
>> started, except in two cases.
>> 
>> 1) 32-bit guests can be configured with VCPUs > 32 and started,
>>     but the guest can only make use of 32 of them.

HVM guest? I don't know why this is. You will have to investigate some more
what has happened to the rest of your VCPUs! I think it should definitely
work. Cc Jan in case he has any thoughts.

>> 2) 32-bit PVM guests can be configured with VCPUs > 64 and started,
>>     but `xm save' does not work.

That's because your changes to the save/restore code were wrong. Try my
patch instead.
 
 -- Keir


[-- Attachment #2: 00-sr-extend-vcpus --]
[-- Type: application/octet-stream, Size: 6672 bytes --]

diff -r 64017d4df9da tools/libxc/xc_domain_restore.c
--- a/tools/libxc/xc_domain_restore.c	Fri Aug 17 11:36:08 2012 +0200
+++ b/tools/libxc/xc_domain_restore.c	Sat Aug 18 08:25:52 2012 +0100
@@ -462,7 +462,7 @@ static int dump_qemu(xc_interface *xch, 
 
 static int buffer_tail_hvm(xc_interface *xch, struct restore_ctx *ctx,
                            struct tailbuf_hvm *buf, int fd,
-                           unsigned int max_vcpu_id, uint64_t vcpumap,
+                           unsigned int max_vcpu_id, uint64_t *vcpumap,
                            int ext_vcpucontext,
                            int vcpuextstate, uint32_t vcpuextstate_size)
 {
@@ -530,7 +530,7 @@ static int buffer_tail_hvm(xc_interface 
 
 static int buffer_tail_pv(xc_interface *xch, struct restore_ctx *ctx,
                           struct tailbuf_pv *buf, int fd,
-                          unsigned int max_vcpu_id, uint64_t vcpumap,
+                          unsigned int max_vcpu_id, uint64_t *vcpumap,
                           int ext_vcpucontext,
                           int vcpuextstate,
                           uint32_t vcpuextstate_size)
@@ -563,8 +563,8 @@ static int buffer_tail_pv(xc_interface *
     /* VCPU contexts */
     buf->vcpucount = 0;
     for (i = 0; i <= max_vcpu_id; i++) {
-        // DPRINTF("vcpumap: %llx, cpu: %d, bit: %llu\n", vcpumap, i, (vcpumap % (1ULL << i)));
-        if ( (!(vcpumap & (1ULL << i))) )
+        // DPRINTF("vcpumap: %llx, cpu: %d, bit: %llu\n", vcpumap[i/64], i, (vcpumap[i/64] & (1ULL << (i%64))));
+        if ( (!(vcpumap[i/64] & (1ULL << (i%64)))) )
             continue;
         buf->vcpucount++;
     }
@@ -614,7 +614,7 @@ static int buffer_tail_pv(xc_interface *
 
 static int buffer_tail(xc_interface *xch, struct restore_ctx *ctx,
                        tailbuf_t *buf, int fd, unsigned int max_vcpu_id,
-                       uint64_t vcpumap, int ext_vcpucontext,
+                       uint64_t *vcpumap, int ext_vcpucontext,
                        int vcpuextstate, uint32_t vcpuextstate_size)
 {
     if ( buf->ishvm )
@@ -680,7 +680,7 @@ typedef struct {
 
     int new_ctxt_format;
     int max_vcpu_id;
-    uint64_t vcpumap;
+    uint64_t vcpumap[XC_SR_MAX_VCPUS/64];
     uint64_t identpt;
     uint64_t paging_ring_pfn;
     uint64_t access_ring_pfn;
@@ -745,12 +745,12 @@ static int pagebuf_get_one(xc_interface 
     case XC_SAVE_ID_VCPU_INFO:
         buf->new_ctxt_format = 1;
         if ( RDEXACT(fd, &buf->max_vcpu_id, sizeof(buf->max_vcpu_id)) ||
-             buf->max_vcpu_id >= 64 || RDEXACT(fd, &buf->vcpumap,
-                                               sizeof(uint64_t)) ) {
+             buf->max_vcpu_id >= XC_SR_MAX_VCPUS ||
+             RDEXACT(fd, buf->vcpumap, vcpumap_sz(buf->max_vcpu_id)) ) {
             PERROR("Error when reading max_vcpu_id");
             return -1;
         }
-        // DPRINTF("Max VCPU ID: %d, vcpumap: %llx\n", buf->max_vcpu_id, buf->vcpumap);
+        // DPRINTF("Max VCPU ID: %d, vcpumap: %llx\n", buf->max_vcpu_id, buf->vcpumap[0]);
         return pagebuf_get_one(xch, ctx, buf, fd, dom);
 
     case XC_SAVE_ID_HVM_IDENT_PT:
@@ -1366,7 +1366,7 @@ int xc_domain_restore(xc_interface *xch,
     struct mmuext_op pin[MAX_PIN_BATCH];
     unsigned int nr_pins;
 
-    uint64_t vcpumap = 1ULL;
+    uint64_t vcpumap[XC_SR_MAX_VCPUS/64] = { 1ULL };
     unsigned int max_vcpu_id = 0;
     int new_ctxt_format = 0;
 
@@ -1517,8 +1517,8 @@ int xc_domain_restore(xc_interface *xch,
         if ( j == 0 ) {
             /* catch vcpu updates */
             if (pagebuf.new_ctxt_format) {
-                vcpumap = pagebuf.vcpumap;
                 max_vcpu_id = pagebuf.max_vcpu_id;
+                memcpy(vcpumap, pagebuf.vcpumap, vcpumap_sz(max_vcpu_id));
             }
             /* should this be deferred? does it change? */
             if ( pagebuf.identpt )
@@ -1880,7 +1880,7 @@ int xc_domain_restore(xc_interface *xch,
     vcpup = tailbuf.u.pv.vcpubuf;
     for ( i = 0; i <= max_vcpu_id; i++ )
     {
-        if ( !(vcpumap & (1ULL << i)) )
+        if ( !(vcpumap[i/64] & (1ULL << (i%64))) )
             continue;
 
         memcpy(ctxt, vcpup, ((dinfo->guest_width == 8) ? sizeof(ctxt->x64)
diff -r 64017d4df9da tools/libxc/xc_domain_save.c
--- a/tools/libxc/xc_domain_save.c	Fri Aug 17 11:36:08 2012 +0200
+++ b/tools/libxc/xc_domain_save.c	Sat Aug 18 08:25:52 2012 +0100
@@ -855,7 +855,7 @@ int xc_domain_save(xc_interface *xch, in
     unsigned long needed_to_fix = 0;
     unsigned long total_sent    = 0;
 
-    uint64_t vcpumap = 1ULL;
+    uint64_t vcpumap[XC_SR_MAX_VCPUS/64] = { 1ULL };
 
     /* HVM: a buffer for holding HVM context */
     uint32_t hvm_buf_size = 0;
@@ -1581,13 +1581,13 @@ int xc_domain_save(xc_interface *xch, in
     }
 
     {
-        struct {
+        struct chunk {
             int id;
             int max_vcpu_id;
-            uint64_t vcpumap;
+            uint64_t vcpumap[XC_SR_MAX_VCPUS/64];
         } chunk = { XC_SAVE_ID_VCPU_INFO, info.max_vcpu_id };
 
-        if ( info.max_vcpu_id >= 64 )
+        if ( info.max_vcpu_id >= XC_SR_MAX_VCPUS )
         {
             ERROR("Too many VCPUS in guest!");
             goto out;
@@ -1598,11 +1598,12 @@ int xc_domain_save(xc_interface *xch, in
             xc_vcpuinfo_t vinfo;
             if ( (xc_vcpu_getinfo(xch, dom, i, &vinfo) == 0) &&
                  vinfo.online )
-                vcpumap |= 1ULL << i;
+                vcpumap[i/64] |= 1ULL << (i%64);
         }
 
-        chunk.vcpumap = vcpumap;
-        if ( wrexact(io_fd, &chunk, sizeof(chunk)) )
+        memcpy(chunk.vcpumap, vcpumap, vcpumap_sz(info.max_vcpu_id));
+        if ( wrexact(io_fd, &chunk, offsetof(struct chunk, vcpumap)
+                     + vcpumap_sz(info.max_vcpu_id)) )
         {
             PERROR("Error when writing to state file");
             goto out;
@@ -1878,7 +1879,7 @@ int xc_domain_save(xc_interface *xch, in
 
     for ( i = 0; i <= info.max_vcpu_id; i++ )
     {
-        if ( !(vcpumap & (1ULL << i)) )
+        if ( !(vcpumap[i/64] & (1ULL << (i%64))) )
             continue;
 
         if ( (i != 0) && xc_vcpu_getcontext(xch, dom, i, &ctxt) )
diff -r 64017d4df9da tools/libxc/xg_save_restore.h
--- a/tools/libxc/xg_save_restore.h	Fri Aug 17 11:36:08 2012 +0200
+++ b/tools/libxc/xg_save_restore.h	Sat Aug 18 08:25:52 2012 +0100
@@ -269,6 +269,9 @@
 /* When pinning page tables at the end of restore, we also use batching. */
 #define MAX_PIN_BATCH  1024
 
+/* Maximum #VCPUs currently supported for save/restore. */
+#define XC_SR_MAX_VCPUS 4096
+#define vcpumap_sz(max_id) (((max_id)/64+1)*sizeof(uint64_t))
 
 
 /*
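
As a quick sanity check of the new sizing macro (a hypothetical standalone
program, not part of the patch):

#include <stdio.h>
#include <stdint.h>

/* Copied from the xg_save_restore.h hunk above. */
#define vcpumap_sz(max_id) (((max_id)/64+1)*sizeof(uint64_t))

int main(void)
{
    /* max_vcpu_id is the highest ID, so an N-VCPU guest has max_id N-1. */
    printf("%zu\n", vcpumap_sz(63));   /* 64 VCPUs  ->  8 bytes (1 word)  */
    printf("%zu\n", vcpumap_sz(64));   /* 65 VCPUs  -> 16 bytes (2 words) */
    printf("%zu\n", vcpumap_sz(255));  /* 256 VCPUs -> 32 bytes (4 words) */
    return 0;
}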


* Re: VM save/restore
From: Junjie Wei @ 2012-08-20 20:54 UTC
  To: Keir Fraser; +Cc: Jan Beulich, xen-devel

On 08/18/2012 03:34 AM, Keir Fraser wrote:
> On 18/08/2012 07:38, "Keir Fraser" <keir.xen@gmail.com> wrote:
>
>>
>>> I think if a VM can be successfully started, then save/restore should
>>> also work. So I made a patch and did some testing.
>>
>> The check for 64 VCPUs is to cover the fact that we only save/restore a
>> 64-bit vcpumap. That would surely need fixing too, or CPUs > 64 would be
>> offline after restore, I would imagine.
>
> How about the attached patch? It might actually work properly, unlike yours.
> ;)
>
>>> The above problem is gone but there are new ones.
>>>
>>> Let me summarize the result here.
>>>
>>> With the patch, save/restore works fine as long as the VM can be
>>> started, except in two cases.
>>>
>>> 1) 32-bit guests can be configured with VCPUs > 32 and started,
>>>      but the guest can only make use of 32 of them.
>
> HVM guest? I don't know why this is. You will have to investigate some more
> what has happened to the rest of your VCPUs! I think it should definitely
> work. Cc Jan in case he has any thoughts.
>
>>> 2) 32-bit PVM guests can be configured with VCPUs > 64 and started,
>>>      but `xm save' does not work.
>
> That's because your changes to the save/restore code were wrong. Try my
> patch instead.
>
>   -- Keir
>

Tested. Your patch works perfectly for all cases. :)


Thanks,
Junjie


* Re: VM save/restore
From: Junjie Wei @ 2012-08-20 20:58 UTC
  To: Keir Fraser; +Cc: xen-devel

On 08/18/2012 02:38 AM, Keir Fraser wrote:
>> It was caused by a hard-coded limit in tools/libxc/xc_domain_save.c:
>>
>> if ( info.max_vcpu_id >= 64 )
>> {
>>        ERROR("Too many VCPUS in guest!");
>>        goto out;
>> }
>>
>> And also in tools/libxc/xc_domain_restore.c:
>>
>> case XC_SAVE_ID_VCPU_INFO:
>>        buf->new_ctxt_format = 1;
>>        if ( RDEXACT(fd, &buf->max_vcpu_id, sizeof(buf->max_vcpu_id)) ||
>>            buf->max_vcpu_id >= 64 || RDEXACT(fd, &buf->vcpumap,
>>                                              sizeof(uint64_t)) ) {
>>            PERROR("Error when reading max_vcpu_id");
>>            return -1;
>>        }
>>
>> The code above is in both xen-4.1.2 and xen-unstable.
>>
>> I think if a VM can be successfully started, then save/restore should
>> also work. So I made a patch and did some testing.
>
> The check for 64 VCPUs is to cover the fact that we only save/restore a
> 64-bit vcpumap. That would surely need fixing too, or CPUs > 64 would be
> offline after restore, I would imagine.
>
> And what is a PVM guest?
>
>   -- Keir
>

Paravirtualization / modified kernel. Am I using the wrong term "PVM"?

Thanks,
Junjie


* Re: VM save/restore
From: Junjie Wei @ 2012-08-20 21:05 UTC
  To: Keir Fraser; +Cc: Jan Beulich, xen-devel

On 08/18/2012 03:34 AM, Keir Fraser wrote:
>>>
>>> 1) 32-bit guests can be configured with VCPUs > 32 and started,
>>>      but the guest can only make use of 32 of them.
>
> HVM guest? I don't know why this is. You will have to investigate some more
> what has happened to the rest of your VCPUs! I think it should definitely
> work. Cc Jan in case he has any thoughts.
>

It looks like, for 32-bit guests, the VCPU-to-CPU mapping only works for
the first 32 VCPUs. This can be reliably reproduced.

Thanks,
Junjie


[root@ovs087 HVM_X86]# rpm -qa | grep xen
xenpvboot-0.1-8.el5
xen-tools-4.1.2-39
xen-devel-4.1.2-39
xen-4.1.2-39

[root@ovs087 HVM_X86]# uname -a
Linux ovs087 2.6.39-200.30.1.el5uek #1 SMP Thu Jul 12 21:47:09 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux

[root@ovs087 HVM_X86]# cat vm.cfg | grep vcpus
vcpus = 36

[root@ovs087 HVM_X86]# xm list 65
Name                                        ID   Mem VCPUs State   Time(s)
OVM_OL5U7_X86_PVHVM_10GB                    65  2048    32 r-----     33.3

[root@ovs087 HVM_X86]# xm vcpu-list 65
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
OVM_OL5U7_X86_PVHVM_10GB            65     0     4   -b-      10.7 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65     1     0   -b-       1.8 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65     2     4   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65     3     2   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65     4     7   -b-       1.1 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65     5     5   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65     6     0   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65     7     3   r--      10.6 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65     8     4   -b-       0.9 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65     9     7   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    10     7   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    11     2   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    12     6   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    13     4   -b-       0.9 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    14     7   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    15     5   -b-       0.9 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    16     0   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    17     6   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    18     4   -b-       0.9 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    19     5   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    20     7   -b-       0.9 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    21     0   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    22     5   -b-       0.9 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    23     4   -b-       0.9 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    24     4   -b-       0.9 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    25     0   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    26     2   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    27     6   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    28     4   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    29     7   -b-       0.9 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    30     2   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    31     0   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    32     -   --p       0.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    33     -   --p       0.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    34     -   --p       0.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    35     -   --p       0.0 any cpu


* Re: VM save/restore
From: Keir Fraser @ 2012-08-22 21:17 UTC
  To: Junjie Wei; +Cc: xen-devel

On 20/08/2012 21:58, "Junjie Wei" <junjie.wei@oracle.com> wrote:

>> The check for 64 VCPUs is to cover the fact that we only save/restore a
>> 64-bit vcpumap. That would surely need fixing too, or CPUs > 64 would be
>> offline after restore, I would imagine.
>> 
>> And what is a PVM guest?
>> 
>>   -- Keir
>> 
> 
> Paravirtualization / modified kernel. Am I using the wrong term "PVM"?

Ah, I realised that in the end. They're usually just called "PV".

 -- Keir

