xen-devel.lists.xenproject.org archive mirror
* [PATCH 00/17] xenpaging changes for xen-unstable
@ 2010-12-06 20:59 Olaf Hering
  2010-12-06 20:59 ` [PATCH 01/17] xenpaging: close xch handle in xenpaging_init error path Olaf Hering
                   ` (17 more replies)
  0 siblings, 18 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
  To: xen-devel


Here are some changes for xenpaging in xen-unstable.

Patches 1 to 7 are likely non-controversial and could be applied.
Of the later patches, 8-11 need more review, and 12 onwards need more work on my side.


This series uses the recently added wait_event feature. The __hvm_copy
patch crashes Xen with what looks like stack corruption after a few
populate/resume iterations. I added some printk calls to the
populate/resume functions and to wait.c; this leads to crashes. It
rarely prints a clean backtrace like the one shown below (most of the
time several CPUs crash at once).

If I leave my debug patch out, the console and the network are dead.
However, the remote power switch is able to inject a reboot event
into dom0. The shutdown scripts get up to the point where xend is
stopped; after a timeout, the system is hard rebooted.

(XEN) Debugging connection not set up.
(XEN) ----[ Xen-4.1.22459-20101206.184155  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    3
(XEN) RIP:    e008:[<ffff82c4801227d0>] check_lock+0x20/0x50
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000001   rbx: 0000000000000000   rcx: 0000000000000001
(XEN) rdx: 0000000000000086   rsi: 0000000000000040   rdi: 0000000000000004
(XEN) rbp: ffff83013e707d20   rsp: ffff83013e707d20   r8:  ffff83013f0667c8
(XEN) r9:  00000000deadbeef   r10: ffff82c480218960   r11: 0000000000000246
(XEN) r12: ffff82c4802ed8c0   r13: ffff82c4802ed8c0   r14: ffff8300bf74e000
(XEN) r15: ffff82c4802d2520   cr0: 0000000080050033   cr4: 00000000000026f0
(XEN) cr3: 0000000133429000   cr2: 0000000000000004
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83013e707d20:
(XEN)    ffff83013e707d38 ffff82c480122831 0000000000000007 ffff83013e707d98
(XEN)    ffff82c4801215ba ffffffffffffffff 0000000000000286 0000000000000202
(XEN)    0000000000000001 ffff8300bf2f2000 0000000000000000 ffff8300bf74e000
(XEN)    ffff83013e642190 ffff880001e53701 0000000000000001 ffff83013e707da8
(XEN)    ffff82c4801219f6 ffff83013e707dc8 ffff82c48015382f ffff83013e642000
(XEN)    0000000000000010 ffff83013e707dd8 ffff82c4801538bd ffff83013e707e08
(XEN)    ffff82c4801068e1 0000000000000296 ffff83013e642000 0000000000000010
(XEN)    ffff83013e642190 ffff83013e707e38 ffff82c480106da6 ffff83013e707ef8
(XEN)    ffff8300bf74a000 ffffffffffffffda ffff8800f3025db8 ffff83013e707ef8
(XEN)    ffff82c4801079e3 0000000700000005 0000000001306004 ffff83013e707e68
(XEN)    ffff82c480164b02 ffff83013e707f18 ffff8300bf74a000 0000000000a17b49
(XEN)    00000000008c3630 ffff83013e707ef8 ffff82c48020b000 0000000000000000
(XEN)    0000000000000246 ffffffff00000010 0000000000000100 00007fffb01ff93d
(XEN)    000000000000e033 ffff83013e707ed8 ffff8300bf74a000 0000000000000000
(XEN)    ffffffff8062bda0 ffff880001e53701 0000000000000001 00007cfec18f80c7
(XEN)    ffff82c480205ea2 ffffffff8000340a 0000000000000020 0000000000000001
(XEN)    ffff880001e53701 ffffffff8062bda0 0000000000000000 ffffffff80364fa0
(XEN)    0000000000000005 0000000000000246 ffff8800f420c778 ffffffff80364fa0
(XEN)    0000000000000200 0000000000000020 ffffffff8000340a 0000000000000000
(XEN)    ffff8800f3025db8 0000000000000004 0000010000000000 ffffffff8000340a
(XEN) Xen call trace:
(XEN)    [<ffff82c4801227d0>] check_lock+0x20/0x50
(XEN)    [<ffff82c480122831>] _spin_lock+0x11/0x5d
(XEN)    [<ffff82c4801215ba>] vcpu_wake+0x4b/0x43d
(XEN)    [<ffff82c4801219f6>] vcpu_unblock+0x4a/0x4c
(XEN)    [<ffff82c48015382f>] vcpu_kick+0x20/0x7f
(XEN)    [<ffff82c4801538bd>] vcpu_mark_events_pending+0x2f/0x33
(XEN)    [<ffff82c4801068e1>] evtchn_set_pending+0xc4/0x197
(XEN)    [<ffff82c480106da6>] evtchn_send+0x12a/0x14b
(XEN)    [<ffff82c4801079e3>] do_event_channel_op+0xc1c/0x1039
(XEN)    [<ffff82c480205ea2>] syscall_enter+0xf2/0x14c
(XEN)    
(XEN) Pagetable walk from 0000000000000004:
(XEN)  L4[0x000] = 000000013998f067 00000000000f3257
(XEN)  L3[0x000] = 0000000139988067 00000000000f325e
(XEN)  L2[0x000] = 0000000000000000 ffffffffffffffff 
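
For context (this reading is mine, not from the report): a fault at linear
address 0x4, as in the pagetable walk above, usually means a NULL structure
pointer was dereferenced at a member sitting 4 bytes into the struct.
A minimal standalone illustration with a hypothetical layout:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical layout: any struct whose second int member sits at byte
 * offset 4. Reading that member through a NULL pointer faults at linear
 * address 0x4, matching cr2 and the pagetable walk above. */
struct vcpu_like {
    int first;   /* offset 0 */
    int second;  /* offset 4 */
};
```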



Another crash; this one seems to happen even without my debugging printks:

..........

stein-schneider login: (XEN) memory.c:133:d0 Could not allocate order=9 extent: id=1 memflags=0 (3 of 4)
(XEN) memory.c:133:d0 Could not allocate order=9 extent: id=1 memflags=0 (0 of 3)
[  123.659052] device vif1.0 entered promiscuous mode
[  123.670991] br0: port 2(vif1.0) entering forwarding state
[  123.701379] (cdrom_add_media_watch() file=/usr/src/packages/BUILD/kernel-xen-2.6.32.24/linux-2.6.32/drivers/xen/blkback/cdrom.c, line=108) nodename:backend/vbd/1/768
[  123.733629] (cdrom_is_type() file=/usr/src/packages/BUILD/kernel-xen-2.6.32.24/linux-2.6.32/drivers/xen/blkback/cdrom.c, line=95) type:0
[  123.797533] (cdrom_add_media_watch() file=/usr/src/packages/BUILD/kernel-xen-2.6.32.24/linux-2.6.32/drivers/xen/blkback/cdrom.c, line=108) nodename:backend/vbd/1/5632
[  123.830026] (cdrom_is_type() file=/usr/src/packages/BUILD/kernel-xen-2.6.32.24/linux-2.6.32/drivers/xen/blkback/cdrom.c, line=95) type:1
[  123.857058] (cdrom_add_media_watch() file=/usr/src/packages/BUILD/kernel-xen-2.6.32.24/linux-2.6.32/drivers/xen/blkback/cdrom.c, line=110) is a cdrom
[  124.177892] (cdrom_add_media_watch() file=/usr/src/packages/BUILD/kernel-xen-2.6.32.24/linux-2.6.32/drivers/xen/blkback/cdrom.c, line=112) xenstore wrote OK
[  124.208728] (cdrom_is_type() file=/usr/src/packages/BUILD/kernel-xen-2.6.32.24/linux-2.6.32/drivers/xen/blkback/cdrom.c, line=95) type:1
[  124.454759] ip_tables: (C) 2000-2006 Netfilter Core Team
[  124.480436] OLH gntdev_open(449) xend[5286]->qemu-dm[5271] i ffff8800f2572720 f ffff8800f20fa9c0
[  124.796372] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
[  124.810546] (cdrom_add_media_watch() file=/usr/src/packages/BUILD/kernel-xen-2.6.32.24/linux-2.6.32/drivers/xen/blkback/cdrom.c, line=108) nodename:backend/vbd/1/832
[  124.842834] (cdrom_is_type() file=/usr/src/packages/BUILD/kernel-xen-2.6.32.24/linux-2.6.32/drivers/xen/blkback/cdrom.c, line=95) type:0
(XEN) HVM1: HVM Loader
(XEN) HVM1: Detected Xen v4.1.22459-20101206
(XEN) HVM1: CPU speed is 2667 MHz
(XEN) HVM1: Xenbus rings @0xfeffc000, event channel 2
(XEN) irq.c:258: Dom1 PCI link 0 changed 0 -> 5
(XEN) HVM1: PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:258: Dom1 PCI link 1 changed 0 -> 10
(XEN) HVM1: PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:258: Dom1 PCI link 2 changed 0 -> 11
(XEN) HVM1: PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:258: Dom1 PCI link 3 changed 0 -> 5
(XEN) HVM1: PCI-ISA link 3 routed to IRQ5
(XEN) HVM1: pci dev 01:3 INTA->IRQ10
(XEN) HVM1: pci dev 03:0 INTA->IRQ5
(XEN) HVM1: pci dev 02:0 bar 10 size 02000000: f0000008
(XEN) HVM1: pci dev 03:0 bar 14 size 01000000: f2000008
(XEN) HVM1: pci dev 02:0 bar 14 size 00001000: f3000000
(XEN) HVM1: pci dev 03:0 bar 10 size 00000100: 0000c001
(XEN) HVM1: pci dev 01:1 bar 20 size 00000010: 0000c101
(XEN) HVM1: Multiprocessor initialisation:
(XEN) HVM1:  - CPU0 ... 40-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM1: Testing HVM environment:
(XEN) HVM1:  - REP INSB across page boundaries ... passed
(XEN) HVM1:  - GS base MSRs and SWAPGS ... passed
(XEN) HVM1: Passed 2 of 2 tests
(XEN) HVM1: Writing SMBIOS tables ...
(XEN) HVM1: Loading ROMBIOS ...
(XEN) HVM1: 9660 bytes of ROMBIOS high-memory extensions:
(XEN) HVM1:   Relocating to 0xfc000000-0xfc0025bc ... done
(XEN) HVM1: Creating MP tables ...
(XEN) HVM1: Loading Cirrus VGABIOS ...
(XEN) HVM1: Loading ACPI ...
(XEN) HVM1:  - Lo data: 000ea020-000ea04f
(XEN) HVM1:  - Hi data: fc002800-fc01291f
(XEN) HVM1: vm86 TSS at fc012c00
(XEN) HVM1: BIOS map:
(XEN) HVM1:  c0000-c8fff: VGA BIOS
(XEN) HVM1:  eb000-eb164: SMBIOS tables
(XEN) HVM1:  f0000-fffff: Main BIOS
(XEN) HVM1: E820 table:
(XEN) HVM1:  [00]: 00000000:00000000 - 00000000:0009e000: RAM
(XEN) HVM1:  [01]: 00000000:0009e000 - 00000000:0009fc00: RESERVED
(XEN) HVM1:  [02]: 00000000:0009fc00 - 00000000:000a0000: RESERVED
(XEN) HVM1:  HOLE: 00000000:000a0000 - 00000000:000e0000
(XEN) HVM1:  [03]: 00000000:000e0000 - 00000000:00100000: RESERVED
(XEN) HVM1:  [04]: 00000000:00100000 - 00000000:40000000: RAM
(XEN) HVM1:  HOLE: 00000000:40000000 - 00000000:fc000000
(XEN) HVM1:  [05]: 00000000:fc000000 - 00000001:00000000: RESERVED
(XEN) HVM1: Invoking ROMBIOS ...
(XEN) HVM1: $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
(XEN) stdvga.c:147:d1 entering stdvga and caching modes
(XEN) HVM1: VGABios $Id: vgabios.c,v 1.67 2008/01/27 09:44:12 vruppert Exp $
(XEN) HVM1: Bochs BIOS - build: 06/23/99
(XEN) HVM1: $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
(XEN) HVM1: Options: apmbios pcibios eltorito PMM
(XEN) HVM1:
(XEN) HVM1: ata0-0: PCHS=8322/16/63 translation=lba LCHS=522/255/63
(XEN) HVM1: ata0 master: QEMU HARDDISK ATA-7 Hard-Disk (4096 MBytes)
(XEN) HVM1: ata0-1: PCHS=16383/16/63 translation=lba LCHS=1024/255/63
(XEN) HVM1: ata0  slave: QEMU HARDDISK ATA-7 Hard-Disk (43008 MBytes)
(XEN) HVM1: ata1 master: QEMU DVD-ROM ATAPI-4 CD-Rom/DVD-Rom
(XEN) HVM1: IDE time out
(XEN) HVM1:
(XEN) HVM1:
(XEN) HVM1:
(XEN) HVM1: Press F12 for boot menu.
(XEN) HVM1:
(XEN) HVM1: Booting from Hard Disk...
(XEN) HVM1: Booting from 0000:7c00
(XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=82
(XEN) HVM1: int13_harddisk: function 08, unmapped device for ELDL=82
(XEN) HVM1: *** int 15h function AX=00c0, BX=0000 not yet supported!
[  134.061361] vif1.0: no IPv6 routers present
(XEN) HVM1: *** int 15h function AX=ec00, BX=0002 not yet supported!
(XEN) HVM1: KBD: unsupported int 16h function 03
(XEN) HVM1: *** int 15h function AX=e980, BX=0000 not yet supported!
(XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=82
(XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=82
(XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=83
(XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=83
(XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=84
(XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=84
(XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=85
(XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=85
(XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=86
(XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=86
(XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=87
(XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=87
(XEN) HVM1: int13_harddisk: function 41, ELDL out of range 88
(XEN) HVM1: int13_harddisk: function 02, ELDL out of range 88
(XEN) HVM1: int13_harddisk: function 41, ELDL out of range 89
(XEN) HVM1: int13_harddisk: function 02, ELDL out of range 89
(XEN) HVM1: int13_harddisk: function 41, ELDL out of range 8a
(XEN) HVM1: int13_harddisk: function 02, ELDL out of range 8a
(XEN) HVM1: int13_harddisk: function 41, ELDL out of range 8b
(XEN) HVM1: int13_harddisk: function 02, ELDL out of range 8b
(XEN) HVM1: int13_harddisk: function 41, ELDL out of range 8c
(XEN) HVM1: int13_harddisk: function 02, ELDL out of range 8c
(XEN) HVM1: int13_harddisk: function 41, ELDL out of range 8d
(XEN) HVM1: int13_harddisk: function 02, ELDL out of range 8d
(XEN) HVM1: int13_harddisk: function 41, ELDL out of range 8e
(XEN) HVM1: int13_harddisk: function 02, ELDL out of range 8e
(XEN) HVM1: int13_harddisk: function 41, ELDL out of range 8f
(XEN) HVM1: int13_harddisk: function 02, ELDL out of range 8f
(XEN) Debugging connection not set up.
(XEN) Debugging connection not set up.
(XEN) Debugging connection not set up.
(XEN) ----[ Xen-4.1.22459-20101206.203922  x86_64  debug=y  Tainted:    C ]----
(XEN) ----[ Xen-4.1.22459-20101206.203922  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    6
(XEN) ----[ Xen-4.1.22459-20101206.203922  x86_64  debug=y  Tainted:    C ]----
(XEN) RIP:    e008:[<ffff82c480121ff0>]CPU:    0
(XEN)  check_lock+0x20/0x50CPU:    1
(XEN)
(XEN) RFLAGS: 0000000000010002   RIP:    e008:[<ffff82c480121ff0>]RIP:    e008:[<ffff82c480121ff0>]CONTEXT: hypervisor
(XEN)  check_lock+0x20/0x50rax: 0000000000000001   rbx: 0000000000000000   rcx: 0000000000000001
(XEN)
(XEN) RFLAGS: 0000000000010002   rdx: 0000000000000087   rsi: 0000000000000003   rdi: 0000000000000004
(XEN) CONTEXT: hypervisor
(XEN) rbp: ffff83013e65fd50   rsp: ffff83013e65fd50   r8:  0000000000000001
(XEN) rax: 0000000000000001   rbx: 0000000000000000   rcx: 0000000000000001
(XEN)  check_lock+0x20/0x50r9:  000000000000003f   r10: ffff8300bf750060   r11: 0000000000000246
(XEN) rdx: 0000000000000087   rsi: 0000000000000003   rdi: 0000000000000004
(XEN) r12: ffff83013e67d470   r13: ffff83013e77b810   r14: ffff83013e77b810
(XEN) rbp: ffff83013e737d28   rsp: ffff83013e737d28   r8:  0000000000000002
(XEN) r15: ffff82c4802e58c0   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN)
(XEN) RFLAGS: 0000000000010002   cr3: 00000001391c9000   cr2: 0000000000000004
(XEN) r9:  000000000000003e   r10: ffff8300bf2f6060   r11: 0f0f0f0f0f0f0f0f
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) r12: ffff83013e767870   r13: ffff83013e7678d0   r14: 0000000000000000
(XEN) CONTEXT: hypervisor
(XEN) Xen stack trace from rsp=ffff83013e65fd50:
(XEN)   rax: 0000000000000001   rbx: 0000000000000000   rcx: 0000000000000001
(XEN)  ffff83013e65fd68r15: ffff82c4802e58c0   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) rdx: 0000000000000087   rsi: 0000000000000003   rdi: 0000000000000004
(XEN)  ffff82c480122294cr3: 0000000138185000   cr2: 0000000000000004
(XEN) rbp: ffff82c480297d50   rsp: ffff82c480297d50   r8:  0000000000000001
(XEN)  0000000000000003ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN)  ffff83013e65fe38r9:  000000000000003f   r10: ffff8300bf4ec060   r11: 0000000000000246
(XEN)
(XEN)   Xen stack trace from rsp=ffff83013e737d28:
(XEN)    ffff82c48011881d ffff83013e737d40 ffff82c4802e58c0 ffff82c480122294 00000006bf750030 0000000000000003r12: ffff83013e77b810   r13: ffff83013e77b870   r14: 0000000000000000
(XEN)  ffff83013e737e10 ffff82c4802e58c0
(XEN)
(XEN)   r15: ffff82c4802e58c0   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN)  ffff82c4802e58c0cr3: 00000001391c9000   cr2: 0000000000000004
(XEN)  ffff82c48011881d ffff83013e65fde8ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN)  0000002842c7c815Xen stack trace from rsp=ffff82c480297d50:
(XEN)    ffff83013e65fe68 ffff82c4802e58c0
(XEN)    0000000180122294 ffff82c480297d68 ffff82c4802e58c0 0000000600000286
(XEN)    ffff83013e77b5c0 ffff82c480122294 0000000600000040 ffff82c4802e58c0 ffff83013e65fde8 ffff83013e737dc0 0000000000000003 0000002842c7c32f
(XEN)    ffff83013e737e40 ffff82c480297e38 ffc0000000000000
(XEN)    ffff83013e669060 00000001c18c8247 ffff83013e77b870
(XEN)    ffff83013e77b5c0 0000000000000030 000000013e765040
(XEN)    ffff83013e737dc0 0000000000000000
(XEN)    0000000000000000 ffff82c48011881d 0000000000000000 ffff82c4802e58c0 ffff8301389c4010 00000000bf4ec030 0000000000000082 0000000000000086
(XEN)    ffff82c4802e58c0 0000000000000000
(XEN)    ffff8300bf750000 ffff82c4802e58c0 ffff82c48025ab40 ffff83013e737dd0 ffff83013e669060 ffff82c480297de8
(XEN)    0000002842c7d0e2 00000000000000f1 ffff82c480297e68 ffff83013e669040
(XEN)    ffff83013e65feb8
(XEN)    ffff82c48012090c 0000000000000286 0000000000000000 ffff83013e77b5c0 0000000000000282 0000000000000001
(XEN)    ffff82c480297de8 0000002842c7c815
(XEN)    0000000000000000 0000000000000000 0000000000000000 ffffffff8059bf68 ffff82c4802e58e0 ffff83013e642000 0000000000000000 ffff82c480297df8
(XEN)    0000000000000082 00000000000000f0
(XEN)    ffff8300bf750000
(XEN)    ffff83013e65fea8 0000000000000000 0000000000000000 ffff82c48011e737 0000000000000000 aaaaaaaaaaaaaaaa ffff8300bf75a000 0000000000000000 ffff82c48025ab40
(XEN)    0000000000000082 0000000000000006
(XEN)    0000000000000006 0000000000000000 ffff82c4802b3f00 ffff83013e765060 ffff83013e65ff18
(XEN)
(XEN)    ffff8300bf4ec000 ffffffffffffffff ffff82c48025ab40 ffff83013e65fef8 ffff82c4802e58e0 ffff82c480121ef7 ffff83013e765040 ffff83013e669040 ffff83013e737e90
(XEN)
(XEN)    ffff8300bf750000 ffff82c4802e58c0 0000000000000000 ffff82c480297eb8 0000000000000000 ffff82c48012090c ffff82c48012090c 0000000000000000 ffff83013e737e40 0000000000000282
(XEN)
(XEN)    0000002842c7c32f 0000000000000000 ffff82c4801220c5 ffff83013e65ff08 ffff83013e7678e8 ffff82c480121f72 ffff83013e737e90 00007cfec19a00c7
(XEN)
(XEN)
(XEN)    0000002842c7d0e2 ffff82c480204f46 ffff82c480124d4c 0000000000000000 0000000000000000 ffff8800f4135f00 ffff83013e737ee0 0000000000000000 ffff83013e765100 0000000000000000 ffff83013e767a80
(XEN)    ffff83013e642000 0000000000000000
(XEN)    ffffffff8062bda0 0000000000000001 ffffffff8059bfd8
(XEN)    0000000000000246 0000000000000001
(XEN)    ffff8300bf4ec000 ffff82c4802b3f00 0000000000000000 ffff83013e737f18 00000000ffff7fd0 ffff82c480297ea8 0000000000000000
(XEN)    0000000000000000 ffffffffffffffff
(XEN)    ffff82c48011e737 ffffffff800033aa aaaaaaaaaaaaaaaa 00000000deadbeef
(XEN)    00000000deadbeef ffff83013e737ed0 0000000000000000 00000000deadbeef 0000000000000000
(XEN)    ffff82c480121ef7 0000010000000000 ffff82c4802b3f00 ffff82c4802b3f00 ffffffff800033aa
(XEN)    000000000000e033 ffff82c480297f18 0000000000000246
(XEN)
(XEN)    ffffffffffffffff ffffffff8059bf70 ffff83013e737f18 000000000000e02b ffff82c48025dce0 000000000000beef ffff82c480297ef8 ffff83013e737f18 000000000000beef 0000002842c73038 ffff82c480121ef7
(XEN)  ffff82c4802e58c0Xen call trace:
(XEN)
(XEN)   [<ffff82c480121ff0>] ffff8300bf4ec000 check_lock+0x20/0x50
(XEN)     0000000000000000[<ffff82c480122294>] 0000000000000000 _spin_trylock+0x11/0x4e
(XEN)     0000000000000000
(XEN)   [<ffff82c48011881d>]
(XEN)    csched_schedule+0x3cb/0x6c9
(XEN)     ffff83013e765040 0000000000000000[<ffff82c48012090c>] ffff82c480297f08 schedule+0x121/0x619
(XEN)     ffff82c480121f72 ffff83013e737ee0[<ffff82c480121ef7>] 00007d3b7fd680c7 __do_softirq+0x88/0x99
(XEN)
(XEN)   [<ffff82c480121f72>] ffff82c480204f46 ffff82c480121f72 do_softirq+0x6a/0x7a
(XEN)     0000000000000000
(XEN)  0000000000000000Pagetable walk from 0000000000000004:
(XEN)  ffff83013e737f10 0000000000000000
(XEN)    L4[0x000] = 0000000133425067 00000000000ecfbf
(XEN)
(XEN)    L3[0x000] = 00000001334a4067 00000000000ecf40
(XEN)  ffff82c480155409 L2[0x000] = 0000000000000000 ffffffffffffffff
(XEN)  0000000000000000
(XEN) ****************************************
(XEN)  ffff8300bf75a000Panic on CPU 6:
(XEN)  0000000000000001 0000000000000000FATAL PAGE FAULT
(XEN) [error_code=0000]
(XEN) Faulting linear address: 0000000000000004
(XEN)
(XEN)   ****************************************
(XEN)
(XEN)  ffff8300bf2f6000Reboot in five seconds...
(XEN)  ffff83013e737d38Debugging connection not set up.
(XEN)  0000000000000000 ffffffff8062bda0 0000000000000000 ffff8800f4135fd8
(XEN)    0000000000000000 0000000000000246 0000000000000000
(XEN)    ffffffff8062bda0 0000000000000000 ffff8800f4143fd8 00000000ffff7fd0
(XEN)    0000000000000246 0000000000000000 0000000000000000 0000000000000000 00000000ffff7fd0
(XEN)    0000000000000000 ffffffff800033aa
(XEN)    0000000000000000 00000000deadbeef ffffffff800033aa 00000000deadbeef 00000000deadbeef 00000000deadbeef 00000000deadbeef
(XEN)
(XEN)    00000000deadbeef 0000010000000000 0000010000000000 ffffffff800033aa ffffffff800033aa 000000000000e033 000000000000e033 0000000000000246
(XEN)
(XEN)   Xen call trace:
(XEN)     ffff8800f4135f08[<ffff82c480121ff0>] 000000000000e02b check_lock+0x20/0x50
(XEN)     000000000000beef[<ffff82c480122294>] 000000000000beef _spin_trylock+0x11/0x4e
(XEN)
(XEN) [<ffff82c48011881d>]Xen call trace:
(XEN)     csched_schedule+0x3cb/0x6c9
(XEN)    [<ffff82c480121ff0>][<ffff82c48012090c>] check_lock+0x20/0x50
(XEN)    [<ffff82c480122294>] schedule+0x121/0x619
(XEN)    [<ffff82c480121ef7>] _spin_trylock+0x11/0x4e
(XEN)    [<ffff82c48011881d>] __do_softirq+0x88/0x99
(XEN)    [<ffff82c480121f72>] csched_schedule+0x3cb/0x6c9
(XEN)    [<ffff82c48012090c>] do_softirq+0x6a/0x7a
(XEN)    [<ffff82c480155409>] schedule+0x121/0x619
(XEN)    [<ffff82c480121ef7>] idle_loop+0x64/0x66
(XEN)
(XEN)  __do_softirq+0x88/0x99
(XEN)    Pagetable walk from 0000000000000004:
(XEN) [<ffff82c480121f72>] L4[0x000] = 0000000138e03067 00000000000f29e1
(XEN)  do_softirq+0x6a/0x7a
(XEN)     L3[0x000] = 000000013982e067 00000000000f33b6
(XEN)
(XEN)  L2[0x000] = 0000000000000000 ffffffffffffffff
(XEN) Pagetable walk from 0000000000000004:
(XEN)
(XEN) ****************************************
(XEN)  L4[0x000] = 0000000133425067 00000000000ecfbf
(XEN) Panic on CPU 1:
(XEN)  L3[0x000] = 00000001334a4067 00000000000ecf40
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0000]
(XEN) Faulting linear address: 0000000000000004
(XEN)  L2[0x000] = 0000000000000000 ffffffffffffffff
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...
(XEN) Debugging connection not set up.
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0000]
(XEN) Faulting linear address: 0000000000000004
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...
(XEN) Debugging connection not set up.
............


Olaf


* [PATCH 01/17] xenpaging: close xch handle in xenpaging_init error path
  2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
  2010-12-14 18:52   ` Ian Jackson
  2010-12-06 20:59 ` [PATCH 02/17] xenpaging: remove perror usage " Olaf Hering
                   ` (16 subsequent siblings)
  17 siblings, 1 reply; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
  To: xen-devel

[-- Attachment #1: xen-unstable.xenpaging.xenpaging_init.xc_interface_close.patch --]
[-- Type: text/plain, Size: 553 bytes --]

Just for correctness, close the xch handle in the error path.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

---
 tools/xenpaging/xenpaging.c |    1 +
 1 file changed, 1 insertion(+)

--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/xenpaging.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/xenpaging.c
@@ -224,6 +224,7 @@ xenpaging_t *xenpaging_init(xc_interface
  err:
     if ( paging )
     {
+        xc_interface_close(xch);
         if ( paging->mem_event.shared_page )
         {
             munlock(paging->mem_event.shared_page, PAGE_SIZE);
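
The fix follows the usual goto-err cleanup idiom: everything acquired before
the failure point is released on the error path, including the handle itself.
A standalone sketch of the pattern (names are hypothetical, not the xenpaging
ones):

```c
#include <stdlib.h>

static int closed;                       /* counts close calls (illustration) */
static void fake_close(void *h) { free(h); closed++; }

struct ctx { void *handle; };

/* On success the context owns the handle; on any later failure the
 * error path must close the handle before returning NULL. */
static void *init_or_fail(int fail)
{
    void *handle = malloc(1);            /* stands in for xc_interface_open() */
    struct ctx *c;
    if (!handle)
        return NULL;
    c = fail ? NULL : malloc(sizeof *c); /* stands in for allocating paging */
    if (!c)
        goto err;
    c->handle = handle;
    return c;
 err:
    fake_close(handle);                  /* the call this patch adds */
    return NULL;
}
```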


* [PATCH 02/17] xenpaging: remove perror usage in xenpaging_init error path
  2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
  2010-12-06 20:59 ` [PATCH 01/17] xenpaging: close xch handle in xenpaging_init error path Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
  2010-12-06 20:59 ` [PATCH 03/17] xenpaging: print DPRINTF output if XENPAGING_DEBUG is in environment Olaf Hering
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
  To: xen-devel

[-- Attachment #1: xen-unstable.xenpaging.xenpaging_init.perror.patch --]
[-- Type: text/plain, Size: 730 bytes --]

Use the libxc ERROR macro to report errors if initialising xenpaging
fails. Also report the actual errno string.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

---
 tools/xenpaging/xenpaging.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/xenpaging.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/xenpaging.c
@@ -141,7 +141,7 @@ xenpaging_t *xenpaging_init(xc_interface
                 ERROR("EPT not supported for this guest");
                 break;
             default:
-                perror("Error initialising shared page");
+                ERROR("Error initialising shared page: %s", strerror(errno));
                 break;
         }
         goto err;


* [PATCH 03/17] xenpaging: print DPRINTF output if XENPAGING_DEBUG is in environment
  2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
  2010-12-06 20:59 ` [PATCH 01/17] xenpaging: close xch handle in xenpaging_init error path Olaf Hering
  2010-12-06 20:59 ` [PATCH 02/17] xenpaging: remove perror usage " Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
  2010-12-06 20:59 ` [PATCH 04/17] xenpaging: print number of evicted pages Olaf Hering
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
  To: xen-devel

[-- Attachment #1: xen-unstable.xenpaging.xenpaging_init.xentoollog_logger.patch --]
[-- Type: text/plain, Size: 1082 bytes --]

No DPRINTF output is logged because the default loglevel is too low in
libxc. Recognize the XENPAGING_DEBUG environment variable to change the
default loglevel at runtime.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

---
 tools/xenpaging/xenpaging.c |   11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/xenpaging.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/xenpaging.c
@@ -39,12 +39,6 @@
 #include "policy.h"
 #include "xenpaging.h"
 
-
-#if 0
-#undef DPRINTF
-#define DPRINTF(...) ((void)0)
-#endif
-
 static char filename[80];
 static int interrupted;
 static void close_handler(int sig)
@@ -83,9 +77,12 @@ xenpaging_t *xenpaging_init(xc_interface
 {
     xenpaging_t *paging;
     xc_interface *xch;
+    xentoollog_logger *dbg = NULL;
     int rc;
 
-    xch = xc_interface_open(NULL, NULL, 0);
+    if ( getenv("XENPAGING_DEBUG") )
+        dbg = (xentoollog_logger *)xtl_createlogger_stdiostream(stderr, XTL_DEBUG, 0);
+    xch = xc_interface_open(dbg, NULL, 0);
     if ( !xch )
         goto err_iface;


* [PATCH 04/17] xenpaging: print number of evicted pages
  2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
                   ` (2 preceding siblings ...)
  2010-12-06 20:59 ` [PATCH 03/17] xenpaging: print DPRINTF output if XENPAGING_DEBUG is in environment Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
  2010-12-06 20:59 ` [PATCH 05/17] xenpaging: remove duplicate xc_interface_close call Olaf Hering
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
  To: xen-devel

[-- Attachment #1: xen-unstable.xenpaging.xenpaging_init.num_pages.patch --]
[-- Type: text/plain, Size: 557 bytes --]

Print the number of evicted pages after the evict loop.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

---
 tools/xenpaging/xenpaging.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/xenpaging.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/xenpaging.c
@@ -583,7 +583,7 @@ int main(int argc, char *argv[])
             DPRINTF("%d pages evicted\n", i);
     }
 
-    DPRINTF("pages evicted\n");
+    DPRINTF("%d pages evicted. Done.\n", i);
 
     /* Swap pages in and out */
     while ( !interrupted )


* [PATCH 05/17] xenpaging: remove duplicate xc_interface_close call
  2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
                   ` (3 preceding siblings ...)
  2010-12-06 20:59 ` [PATCH 04/17] xenpaging: print number of evicted pages Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
  2010-12-06 20:59 ` [PATCH 06/17] xenpaging: do not use DPRINTF/ERROR if xch handle is unavailable Olaf Hering
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
  To: xen-devel

[-- Attachment #1: xen-unstable.xenpaging.xc_handle.double_free.patch --]
[-- Type: text/plain, Size: 571 bytes --]

Fix a double-free in xc_interface_close(): xenpaging_teardown() already
releases the xch handle. Remove the second xc_interface_close() call.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

---
 tools/xenpaging/xenpaging.c |    2 --
 1 file changed, 2 deletions(-)

--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/xenpaging.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/xenpaging.c
@@ -693,8 +693,6 @@ int main(int argc, char *argv[])
     if ( rc == 0 )
         rc = rc1;
 
-    xc_interface_close(xch);
-
     DPRINTF("xenpaging exit code %d\n", rc);
     return rc;
 }
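
The underlying rule here: the handle must have exactly one owner that closes
it. A defensive variant (hypothetical names, not the actual xenpaging code)
nulls the pointer in teardown so even a second teardown cannot close twice:

```c
#include <stdlib.h>

static int closes;                        /* counts close calls (illustration) */
static void fake_close(void *h) { free(h); closes++; }

struct paging_like { void *xch; };

/* Close the handle exactly once, even if teardown runs twice. */
static void teardown(struct paging_like *p)
{
    if (p->xch) {
        fake_close(p->xch);
        p->xch = NULL;
    }
}
```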


* [PATCH 06/17] xenpaging: do not use DPRINTF/ERROR if xch handle is unavailable
  2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
                   ` (4 preceding siblings ...)
  2010-12-06 20:59 ` [PATCH 05/17] xenpaging: remove duplicate xc_interface_close call Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
  2010-12-06 20:59 ` [PATCH 07/17] xenpaging: update xch usage Olaf Hering
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
  To: xen-devel

[-- Attachment #1: xen-unstable.xenpaging.xc_handle.dprintf.patch --]
[-- Type: text/plain, Size: 1208 bytes --]

Fix DPRINTF/ERROR usage. Both macros reference an xch variable in local
scope. If xc_interface_open() fails, and after xc_interface_close(), neither
can be used anymore; use plain fprintf() in those cases.

Remove the code that prints the exit value; it is not really useful and
is a left-over debugging aid from an earlier patch.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

---
 tools/xenpaging/xenpaging.c |    6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/xenpaging.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/xenpaging.c
@@ -539,7 +539,7 @@ int main(int argc, char *argv[])
     paging = xenpaging_init(&xch, domain_id);
     if ( paging == NULL )
     {
-        ERROR("Error initialising paging");
+        fprintf(stderr, "Error initialising paging\n");
         return 1;
     }
 
@@ -688,12 +688,10 @@ int main(int argc, char *argv[])
     /* Tear down domain paging */
     rc1 = xenpaging_teardown(xch, paging);
     if ( rc1 != 0 )
-        ERROR("Error tearing down paging");
+        fprintf(stderr, "Error tearing down paging\n");
 
     if ( rc == 0 )
         rc = rc1;
-
-    DPRINTF("xenpaging exit code %d\n", rc);
     return rc;
 }


* [PATCH 07/17] xenpaging: update xch usage
  2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
                   ` (5 preceding siblings ...)
  2010-12-06 20:59 ` [PATCH 06/17] xenpaging: do not use DPRINTF/ERROR if xch handle is unavailable Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
  2010-12-06 20:59 ` [PATCH 08/17] xenpaging: make vcpu_sleep_nosync() optional in mem_event_check_ring() Olaf Hering
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
  To: xen-devel

[-- Attachment #1: xen-unstable.xenpaging.xc_handle.xch.patch --]
[-- Type: text/plain, Size: 9129 bytes --]

Instead of passing xch around, use the handle from xenpaging_t.
In the updated functions, use a local xch variable.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

---
 tools/xenpaging/policy.h         |    3 --
 tools/xenpaging/policy_default.c |    4 +-
 tools/xenpaging/xenpaging.c      |   54 ++++++++++++++++++++-------------------
 3 files changed, 32 insertions(+), 29 deletions(-)

--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/policy.h
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/policy.h
@@ -29,8 +29,7 @@
 
 
 int policy_init(xenpaging_t *paging);
-int policy_choose_victim(xc_interface *xch,
-                         xenpaging_t *paging, domid_t domain_id,
+int policy_choose_victim(xenpaging_t *paging, domid_t domain_id,
                          xenpaging_victim_t *victim);
 void policy_notify_paged_out(domid_t domain_id, unsigned long gfn);
 void policy_notify_paged_in(domid_t domain_id, unsigned long gfn);
--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/policy_default.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/policy_default.c
@@ -67,10 +67,10 @@ int policy_init(xenpaging_t *paging)
     return rc;
 }
 
-int policy_choose_victim(xc_interface *xch,
-                         xenpaging_t *paging, domid_t domain_id,
+int policy_choose_victim(xenpaging_t *paging, domid_t domain_id,
                          xenpaging_victim_t *victim)
 {
+    struct xc_interface *xch = paging->xc_handle;
     unsigned long wrap = current_gfn;
     ASSERT(victim != NULL);
 
--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/xenpaging.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/xenpaging.c
@@ -73,7 +73,7 @@ static void *init_page(void)
     return NULL;
 }
 
-xenpaging_t *xenpaging_init(xc_interface **xch_r, domid_t domain_id)
+xenpaging_t *xenpaging_init(domid_t domain_id)
 {
     xenpaging_t *paging;
     xc_interface *xch;
@@ -87,7 +87,6 @@ xenpaging_t *xenpaging_init(xc_interface
         goto err_iface;
 
     DPRINTF("xenpaging init\n");
-    *xch_r = xch;
 
     /* Allocate memory */
     paging = malloc(sizeof(xenpaging_t));
@@ -125,7 +124,7 @@ xenpaging_t *xenpaging_init(xc_interface
     mem_event_ring_lock_init(&paging->mem_event);
     
     /* Initialise Xen */
-    rc = xc_mem_event_enable(paging->xc_handle, paging->mem_event.domain_id,
+    rc = xc_mem_event_enable(xch, paging->mem_event.domain_id,
                              paging->mem_event.shared_page,
                              paging->mem_event.ring_page);
     if ( rc != 0 )
@@ -172,7 +171,7 @@ xenpaging_t *xenpaging_init(xc_interface
         goto err;
     }
 
-    rc = xc_get_platform_info(paging->xc_handle, domain_id,
+    rc = xc_get_platform_info(xch, domain_id,
                               paging->platform_info);
     if ( rc != 1 )
     {
@@ -188,7 +187,7 @@ xenpaging_t *xenpaging_init(xc_interface
         goto err;
     }
 
-    rc = xc_domain_getinfolist(paging->xc_handle, domain_id, 1,
+    rc = xc_domain_getinfolist(xch, domain_id, 1,
                                paging->domain_info);
     if ( rc != 1 )
     {
@@ -244,15 +243,18 @@ xenpaging_t *xenpaging_init(xc_interface
     return NULL;
 }
 
-int xenpaging_teardown(xc_interface *xch, xenpaging_t *paging)
+int xenpaging_teardown(xenpaging_t *paging)
 {
     int rc;
+    struct xc_interface *xch;
 
     if ( paging == NULL )
         return 0;
 
+    xch = paging->xc_handle;
+    paging->xc_handle = NULL;
     /* Tear down domain paging in Xen */
-    rc = xc_mem_event_disable(paging->xc_handle, paging->mem_event.domain_id);
+    rc = xc_mem_event_disable(xch, paging->mem_event.domain_id);
     if ( rc != 0 )
     {
         ERROR("Error tearing down domain paging in xen");
@@ -275,12 +277,11 @@ int xenpaging_teardown(xc_interface *xch
     paging->mem_event.xce_handle = -1;
     
     /* Close connection to Xen */
-    rc = xc_interface_close(paging->xc_handle);
+    rc = xc_interface_close(xch);
     if ( rc != 0 )
     {
         ERROR("Error closing connection to xen");
     }
-    paging->xc_handle = NULL;
 
     return 0;
 
@@ -334,9 +335,10 @@ static int put_response(mem_event_t *mem
     return 0;
 }
 
-int xenpaging_evict_page(xc_interface *xch, xenpaging_t *paging,
+int xenpaging_evict_page(xenpaging_t *paging,
                          xenpaging_victim_t *victim, int fd, int i)
 {
+    struct xc_interface *xch = paging->xc_handle;
     void *page;
     unsigned long gfn;
     int ret;
@@ -346,7 +348,7 @@ int xenpaging_evict_page(xc_interface *x
     /* Map page */
     gfn = victim->gfn;
     ret = -EFAULT;
-    page = xc_map_foreign_pages(paging->xc_handle, victim->domain_id,
+    page = xc_map_foreign_pages(xch, victim->domain_id,
                                 PROT_READ | PROT_WRITE, &gfn, 1);
     if ( page == NULL )
     {
@@ -369,7 +371,7 @@ int xenpaging_evict_page(xc_interface *x
     munmap(page, PAGE_SIZE);
 
     /* Tell Xen to evict page */
-    ret = xc_mem_paging_evict(paging->xc_handle, paging->mem_event.domain_id,
+    ret = xc_mem_paging_evict(xch, paging->mem_event.domain_id,
                               victim->gfn);
     if ( ret != 0 )
     {
@@ -407,10 +409,10 @@ static int xenpaging_resume_page(xenpagi
     return ret;
 }
 
-static int xenpaging_populate_page(
-    xc_interface *xch, xenpaging_t *paging,
+static int xenpaging_populate_page(xenpaging_t *paging,
     uint64_t *gfn, int fd, int i)
 {
+    struct xc_interface *xch = paging->xc_handle;
     unsigned long _gfn;
     void *page;
     int ret;
@@ -420,7 +422,7 @@ static int xenpaging_populate_page(
     do
     {
         /* Tell Xen to allocate a page for the domain */
-        ret = xc_mem_paging_prep(paging->xc_handle, paging->mem_event.domain_id,
+        ret = xc_mem_paging_prep(xch, paging->mem_event.domain_id,
                                  _gfn);
         if ( ret != 0 )
         {
@@ -439,7 +441,7 @@ static int xenpaging_populate_page(
 
     /* Map page */
     ret = -EFAULT;
-    page = xc_map_foreign_pages(paging->xc_handle, paging->mem_event.domain_id,
+    page = xc_map_foreign_pages(xch, paging->mem_event.domain_id,
                                 PROT_READ | PROT_WRITE, &_gfn, 1);
     *gfn = _gfn;
     if ( page == NULL )
@@ -462,15 +464,16 @@ static int xenpaging_populate_page(
     return ret;
 }
 
-static int evict_victim(xc_interface *xch, xenpaging_t *paging, domid_t domain_id,
+static int evict_victim(xenpaging_t *paging, domid_t domain_id,
                         xenpaging_victim_t *victim, int fd, int i)
 {
+    struct xc_interface *xch = paging->xc_handle;
     int j = 0;
     int ret;
 
     do
     {
-        ret = policy_choose_victim(xch, paging, domain_id, victim);
+        ret = policy_choose_victim(paging, domain_id, victim);
         if ( ret != 0 )
         {
             if ( ret != -ENOSPC )
@@ -483,10 +486,10 @@ static int evict_victim(xc_interface *xc
             ret = -EINTR;
             goto out;
         }
-        ret = xc_mem_paging_nominate(paging->xc_handle,
+        ret = xc_mem_paging_nominate(xch,
                                      paging->mem_event.domain_id, victim->gfn);
         if ( ret == 0 )
-            ret = xenpaging_evict_page(xch, paging, victim, fd, i);
+            ret = xenpaging_evict_page(paging, victim, fd, i);
         else
         {
             if ( j++ % 1000 == 0 )
@@ -536,12 +539,13 @@ int main(int argc, char *argv[])
     srand(time(NULL));
 
     /* Initialise domain paging */
-    paging = xenpaging_init(&xch, domain_id);
+    paging = xenpaging_init(domain_id);
     if ( paging == NULL )
     {
         fprintf(stderr, "Error initialising paging");
         return 1;
     }
+    xch = paging->xc_handle;
 
     DPRINTF("starting %s %u %d\n", argv[0], domain_id, num_pages);
 
@@ -574,7 +578,7 @@ int main(int argc, char *argv[])
     memset(victims, 0, sizeof(xenpaging_victim_t) * num_pages);
     for ( i = 0; i < num_pages; i++ )
     {
-        rc = evict_victim(xch, paging, domain_id, &victims[i], fd, i);
+        rc = evict_victim(paging, domain_id, &victims[i], fd, i);
         if ( rc == -ENOSPC )
             break;
         if ( rc == -EINTR )
@@ -627,7 +631,7 @@ int main(int argc, char *argv[])
                 }
                 
                 /* Populate the page */
-                rc = xenpaging_populate_page(xch, paging, &req.gfn, fd, i);
+                rc = xenpaging_populate_page(paging, &req.gfn, fd, i);
                 if ( rc != 0 )
                 {
                     ERROR("Error populating page");
@@ -648,7 +652,7 @@ int main(int argc, char *argv[])
                 }
 
                 /* Evict a new page to replace the one we just paged in */
-                evict_victim(xch, paging, domain_id, &victims[i], fd, i);
+                evict_victim(paging, domain_id, &victims[i], fd, i);
             }
             else
             {
@@ -686,7 +690,7 @@ int main(int argc, char *argv[])
     free(victims);
 
     /* Tear down domain paging */
-    rc1 = xenpaging_teardown(xch, paging);
+    rc1 = xenpaging_teardown(paging);
     if ( rc1 != 0 )
         fprintf(stderr, "Error tearing down paging");


* [PATCH 08/17] xenpaging: make vcpu_sleep_nosync() optional in mem_event_check_ring()
  2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
                   ` (6 preceding siblings ...)
  2010-12-06 20:59 ` [PATCH 07/17] xenpaging: update xch usage Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
  2010-12-06 20:59 ` [PATCH 09/17] xenpaging: update machine_to_phys_mapping[] during page deallocation Olaf Hering
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
  To: xen-devel

[-- Attachment #1: xen-unstable.xenpaging.mem_event_check_ring.no_vcpu_sleep.patch --]
[-- Type: text/plain, Size: 2625 bytes --]

Add a new option to mem_event_check_ring() to skip the vcpu_sleep_nosync()
call. This is needed for an upcoming patch which sends a one-way request
to the pager.

Also add a micro-optimization: test ring_full first, because its value
has just been computed.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

---
 xen/arch/x86/mm/mem_event.c     |    4 ++--
 xen/arch/x86/mm/mem_sharing.c   |    2 +-
 xen/arch/x86/mm/p2m.c           |    2 +-
 xen/include/asm-x86/mem_event.h |    2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

--- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/mem_event.c
+++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/mem_event.c
@@ -143,7 +143,7 @@ void mem_event_unpause_vcpus(struct doma
             vcpu_wake(v);
 }
 
-int mem_event_check_ring(struct domain *d)
+int mem_event_check_ring(struct domain *d, int do_vcpu_sleep)
 {
     struct vcpu *curr = current;
     int free_requests;
@@ -159,7 +159,7 @@ int mem_event_check_ring(struct domain *
     }
     ring_full = free_requests < MEM_EVENT_RING_THRESHOLD;
 
-    if ( (curr->domain->domain_id == d->domain_id) && ring_full )
+    if ( ring_full && do_vcpu_sleep && (curr->domain->domain_id == d->domain_id) )
     {
         set_bit(_VPF_mem_event, &curr->pause_flags);
         vcpu_sleep_nosync(curr);
--- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/mem_sharing.c
+++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/mem_sharing.c
@@ -321,7 +321,7 @@ static struct page_info* mem_sharing_all
     }
         
     /* XXX: Need to reserve a request, not just check the ring! */
-    if(mem_event_check_ring(d)) return page;
+    if(mem_event_check_ring(d, 1)) return page;
 
     req.flags |= MEM_EVENT_FLAG_OUT_OF_MEM;
     req.gfn = gfn;
--- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/p2m.c
+++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/p2m.c
@@ -2758,7 +2758,7 @@ void p2m_mem_paging_populate(struct p2m_
     struct domain *d = p2m->domain;
 
     /* Check that there's space on the ring for this request */
-    if ( mem_event_check_ring(d) )
+    if ( mem_event_check_ring(d, 1) )
         return;
 
     memset(&req, 0, sizeof(req));
--- xen-unstable.hg-4.1.22459.orig/xen/include/asm-x86/mem_event.h
+++ xen-unstable.hg-4.1.22459/xen/include/asm-x86/mem_event.h
@@ -24,7 +24,7 @@
 #ifndef __MEM_EVENT_H__
 #define __MEM_EVENT_H__
 
-int mem_event_check_ring(struct domain *d);
+int mem_event_check_ring(struct domain *d, int do_vcpu_sleep);
 void mem_event_put_request(struct domain *d, mem_event_request_t *req);
 void mem_event_get_response(struct domain *d, mem_event_response_t *rsp);
 void mem_event_unpause_vcpus(struct domain *d);


* [PATCH 09/17] xenpaging: update machine_to_phys_mapping[] during page deallocation
  2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
                   ` (7 preceding siblings ...)
  2010-12-06 20:59 ` [PATCH 08/17] xenpaging: make vcpu_sleep_nosync() optional in mem_event_check_ring() Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
  2010-12-06 20:59 ` [PATCH 10/17] xenpaging: update machine_to_phys_mapping[] during page-in Olaf Hering
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
  To: xen-devel

[-- Attachment #1: xen-unstable.xenpaging.machine_to_phys_mapping.free_domheap_pages.patch --]
[-- Type: text/plain, Size: 2029 bytes --]

The machine_to_phys_mapping[] array needs updating during page
deallocation.  Otherwise, if that page is allocated again, a call to
get_gpfn_from_mfn() will still return a stale gfn from another guest.
This causes trouble because that gfn has no meaning, or a different
meaning, in the context of the current guest.

This happens when the entire guest RAM is paged out before
xen_vga_populate_vram() runs.  XENMEM_populate_physmap is then called
with gfn 0xff000, and a new page is allocated with alloc_domheap_pages.
This new page does not have a gfn yet.  However, in
guest_physmap_add_entry() the passed mfn still maps to an old gfn
(perhaps from another, older guest).  That old gfn is in paged-out state
in this guest's context and no longer has an mfn.  As a result, the ASSERT()
triggers, because p2m_is_ram() is true for the p2m_ram_paging* types.
If the machine_to_phys_mapping[] array is updated properly, both loops
in guest_physmap_add_entry() turn into no-ops for the new page, and the
mfn/gfn mapping is established at the end of the function.

If XENMEM_add_to_physmap is used with XENMAPSPACE_gmfn,
get_gpfn_from_mfn() will return an apparently valid gfn.  As a result,
guest_physmap_remove_page() is called.  The ASSERT() in p2m_remove_page()
then triggers because the passed mfn does not match the old mfn recorded
for the passed gfn.


Signed-off-by: Olaf Hering <olaf@aepfle.de>

---
 xen/common/page_alloc.c |    6 ++++++
 1 file changed, 6 insertions(+)

--- xen-unstable.hg-4.1.22459.orig/xen/common/page_alloc.c
+++ xen-unstable.hg-4.1.22459/xen/common/page_alloc.c
@@ -1200,9 +1200,15 @@ void free_domheap_pages(struct page_info
 {
     int            i, drop_dom_ref;
     struct domain *d = page_get_owner(pg);
+    unsigned long mfn;
 
     ASSERT(!in_irq());
 
+    /* this page is not a gfn anymore */
+    mfn = page_to_mfn(pg);
+    for ( i = 0; i < (1 << order); i++ )
+        set_gpfn_from_mfn(mfn + i, INVALID_M2P_ENTRY);
+
     if ( unlikely(is_xen_heap_page(pg)) )
     {
         /* NB. May recursively lock from relinquish_memory(). */


* [PATCH 10/17] xenpaging: update machine_to_phys_mapping[] during page-in
  2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
                   ` (8 preceding siblings ...)
  2010-12-06 20:59 ` [PATCH 09/17] xenpaging: update machine_to_phys_mapping[] during page deallocation Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
  2010-12-14 22:58   ` Olaf Hering
  2010-12-06 20:59 ` [PATCH 11/17] xenpaging: drop paged pages in guest_remove_page Olaf Hering
                   ` (7 subsequent siblings)
  17 siblings, 1 reply; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
  To: xen-devel

[-- Attachment #1: xen-unstable.xenpaging.machine_to_phys_mapping.patch --]
[-- Type: text/plain, Size: 642 bytes --]

Update the machine_to_phys_mapping[] array during page-in.  The gfn now
lives at a different mfn, while the array still holds INVALID_M2P_ENTRY
at that index.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

---
 xen/arch/x86/mm/p2m.c |    1 +
 1 file changed, 1 insertion(+)

--- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/p2m.c
+++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/p2m.c
@@ -2827,6 +2827,7 @@ void p2m_mem_paging_resume(struct p2m_do
     mfn = gfn_to_mfn(p2m, rsp.gfn, &p2mt);
     p2m_lock(p2m);
     set_p2m_entry(p2m, rsp.gfn, mfn, 0, p2m_ram_rw);
+    set_gpfn_from_mfn(mfn_x(mfn), gfn);
     audit_p2m(p2m, 1);
     p2m_unlock(p2m);


* [PATCH 11/17] xenpaging: drop paged pages in guest_remove_page
  2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
                   ` (9 preceding siblings ...)
  2010-12-06 20:59 ` [PATCH 10/17] xenpaging: update machine_to_phys_mapping[] during page-in Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
  2010-12-06 20:59 ` [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user Olaf Hering
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
  To: xen-devel

[-- Attachment #1: xen-unstable.xenpaging.guest_remove_page.patch --]
[-- Type: text/plain, Size: 7834 bytes --]

Simply drop paged-out pages in guest_remove_page(), and notify xenpaging
to drop its reference to the gfn. If the ring is full, the page will
remain in paged-out state in xenpaging. This is not an issue; it just
means this gfn will not be nominated again.

This patch depends on the earlier patch which adds an additional option
to mem_event_check_ring().

Signed-off-by: Olaf Hering <olaf@aepfle.de>

---
v3:
 send one-way notification to pager to release page
 use new mem_event_check_ring() feature to not pause vcpu when ring is full
v2:
 resume dropped page to unpause vcpus

 tools/xenpaging/xenpaging.c    |   46 ++++++++++++++++++++++------------
 xen/arch/x86/mm/p2m.c          |   54 +++++++++++++++++++++++++++++++----------
 xen/common/memory.c            |    6 ++++
 xen/include/asm-x86/p2m.h      |    4 +++
 xen/include/public/mem_event.h |    1 
 5 files changed, 83 insertions(+), 28 deletions(-)

--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/xenpaging.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/xenpaging.c
@@ -386,6 +386,12 @@ int xenpaging_evict_page(xenpaging_t *pa
     return ret;
 }
 
+static void xenpaging_drop_page(xenpaging_t *paging, unsigned long gfn)
+{
+    /* Notify policy of page being dropped */
+    policy_notify_paged_in(paging->mem_event.domain_id, gfn);
+}
+
 static int xenpaging_resume_page(xenpaging_t *paging, mem_event_response_t *rsp, int notify_policy)
 {
     int ret;
@@ -630,25 +636,33 @@ int main(int argc, char *argv[])
                     goto out;
                 }
                 
-                /* Populate the page */
-                rc = xenpaging_populate_page(paging, &req.gfn, fd, i);
-                if ( rc != 0 )
+                if ( req.flags & MEM_EVENT_FLAG_DROP_PAGE )
                 {
-                    ERROR("Error populating page");
-                    goto out;
+                    DPRINTF("Dropping page %"PRIx64"\n", req.gfn);
+                    xenpaging_drop_page(paging, req.gfn);
                 }
-
-                /* Prepare the response */
-                rsp.gfn = req.gfn;
-                rsp.p2mt = req.p2mt;
-                rsp.vcpu_id = req.vcpu_id;
-                rsp.flags = req.flags;
-
-                rc = xenpaging_resume_page(paging, &rsp, 1);
-                if ( rc != 0 )
+                else
                 {
-                    ERROR("Error resuming page");
-                    goto out;
+                    /* Populate the page */
+                    rc = xenpaging_populate_page(paging, &req.gfn, fd, i);
+                    if ( rc != 0 )
+                    {
+                        ERROR("Error populating page");
+                        goto out;
+                    }
+
+                    /* Prepare the response */
+                    rsp.gfn = req.gfn;
+                    rsp.p2mt = req.p2mt;
+                    rsp.vcpu_id = req.vcpu_id;
+                    rsp.flags = req.flags;
+
+                    rc = xenpaging_resume_page(paging, &rsp, 1);
+                    if ( rc != 0 )
+                    {
+                        ERROR("Error resuming page");
+                        goto out;
+                    }
                 }
 
                 /* Evict a new page to replace the one we just paged in */
--- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/p2m.c
+++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/p2m.c
@@ -2194,12 +2194,15 @@ p2m_remove_page(struct p2m_domain *p2m,
 
     P2M_DEBUG("removing gfn=%#lx mfn=%#lx\n", gfn, mfn);
 
-    for ( i = 0; i < (1UL << page_order); i++ )
+    if ( mfn_valid(_mfn(mfn)) )
     {
-        mfn_return = p2m->get_entry(p2m, gfn + i, &t, p2m_query);
-        if ( !p2m_is_grant(t) )
-            set_gpfn_from_mfn(mfn+i, INVALID_M2P_ENTRY);
-        ASSERT( !p2m_is_valid(t) || mfn + i == mfn_x(mfn_return) );
+        for ( i = 0; i < (1UL << page_order); i++ )
+        {
+            mfn_return = p2m->get_entry(p2m, gfn + i, &t, p2m_query);
+            if ( !p2m_is_grant(t) )
+                set_gpfn_from_mfn(mfn+i, INVALID_M2P_ENTRY);
+            ASSERT( !p2m_is_valid(t) || mfn + i == mfn_x(mfn_return) );
+        }
     }
     set_p2m_entry(p2m, gfn, _mfn(INVALID_MFN), page_order, p2m_invalid);
 }
@@ -2750,6 +2753,30 @@ int p2m_mem_paging_evict(struct p2m_doma
     return 0;
 }
 
+void p2m_mem_paging_drop_page(struct p2m_domain *p2m, unsigned long gfn)
+{
+    struct vcpu *v = current;
+    mem_event_request_t req;
+    struct domain *d = p2m->domain;
+
+    /* Check that there's space on the ring for this request */
+    if ( mem_event_check_ring(d, 0) )
+    {
+        /* This just means this gfn will not be paged again */
+        gdprintk(XENLOG_ERR, "dropped gfn %lx not released in xenpaging\n", gfn);
+    }
+    else
+    {
+        /* Send release notification to pager */
+        memset(&req, 0, sizeof(req));
+        req.flags |= MEM_EVENT_FLAG_DROP_PAGE;
+        req.gfn = gfn;
+        req.vcpu_id = v->vcpu_id;
+
+        mem_event_put_request(d, &req);
+    }
+}
+
 void p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn)
 {
     struct vcpu *v = current;
@@ -2823,13 +2850,16 @@ void p2m_mem_paging_resume(struct p2m_do
     /* Pull the response off the ring */
     mem_event_get_response(d, &rsp);
 
-    /* Fix p2m entry */
-    mfn = gfn_to_mfn(p2m, rsp.gfn, &p2mt);
-    p2m_lock(p2m);
-    set_p2m_entry(p2m, rsp.gfn, mfn, 0, p2m_ram_rw);
-    set_gpfn_from_mfn(mfn_x(mfn), gfn);
-    audit_p2m(p2m, 1);
-    p2m_unlock(p2m);
+    /* Fix p2m entry if the page was not dropped */
+    if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
+    {
+        mfn = gfn_to_mfn(p2m, rsp.gfn, &p2mt);
+        p2m_lock(p2m);
+        set_p2m_entry(p2m, rsp.gfn, mfn, 0, p2m_ram_rw);
+        set_gpfn_from_mfn(mfn_x(mfn), rsp.gfn);
+        audit_p2m(p2m, 1);
+        p2m_unlock(p2m);
+    }
 
     /* Unpause domain */
     if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
--- xen-unstable.hg-4.1.22459.orig/xen/common/memory.c
+++ xen-unstable.hg-4.1.22459/xen/common/memory.c
@@ -163,6 +163,12 @@ int guest_remove_page(struct domain *d,
 
 #ifdef CONFIG_X86
     mfn = mfn_x(gfn_to_mfn(p2m_get_hostp2m(d), gmfn, &p2mt)); 
+    if ( unlikely(p2m_is_paging(p2mt)) )
+    {
+        guest_physmap_remove_page(d, gmfn, mfn, 0);
+        p2m_mem_paging_drop_page(p2m_get_hostp2m(d), gmfn);
+        return 1;
+    }
 #else
     mfn = gmfn_to_mfn(d, gmfn);
 #endif
--- xen-unstable.hg-4.1.22459.orig/xen/include/asm-x86/p2m.h
+++ xen-unstable.hg-4.1.22459/xen/include/asm-x86/p2m.h
@@ -471,6 +471,8 @@ int set_shared_p2m_entry(struct p2m_doma
 int p2m_mem_paging_nominate(struct p2m_domain *p2m, unsigned long gfn);
 /* Evict a frame */
 int p2m_mem_paging_evict(struct p2m_domain *p2m, unsigned long gfn);
+/* Tell xenpaging to drop a paged out frame */
+void p2m_mem_paging_drop_page(struct p2m_domain *p2m, unsigned long gfn);
 /* Start populating a paged out frame */
 void p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn);
 /* Prepare the p2m for paging a frame in */
@@ -478,6 +480,8 @@ int p2m_mem_paging_prep(struct p2m_domai
 /* Resume normal operation (in case a domain was paused) */
 void p2m_mem_paging_resume(struct p2m_domain *p2m);
 #else
+static inline void p2m_mem_paging_drop_page(struct p2m_domain *p2m, unsigned long gfn)
+{ }
 static inline void p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn)
 { }
 #endif
--- xen-unstable.hg-4.1.22459.orig/xen/include/public/mem_event.h
+++ xen-unstable.hg-4.1.22459/xen/include/public/mem_event.h
@@ -37,6 +37,7 @@
 #define MEM_EVENT_FLAG_VCPU_PAUSED  (1 << 0)
 #define MEM_EVENT_FLAG_DOM_PAUSED   (1 << 1)
 #define MEM_EVENT_FLAG_OUT_OF_MEM   (1 << 2)
+#define MEM_EVENT_FLAG_DROP_PAGE    (1 << 3)
 
 
 typedef struct mem_event_shared_page {


* [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user
  2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
                   ` (10 preceding siblings ...)
  2010-12-06 20:59 ` [PATCH 11/17] xenpaging: drop paged pages in guest_remove_page Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
  2010-12-07  9:27   ` Jan Beulich
  2010-12-15 11:35   ` Keir Fraser
  2010-12-06 20:59 ` [PATCH 13/17] xenpaging: page only pagetables for debugging Olaf Hering
                   ` (5 subsequent siblings)
  17 siblings, 2 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
  To: xen-devel

[-- Attachment #1: xen-unstable.xenpaging.HVMCOPY_gfn_paged_out.patch --]
[-- Type: text/plain, Size: 7100 bytes --]

copy_from_user_hvm() can fail when __hvm_copy() returns
HVMCOPY_gfn_paged_out for a referenced gfn, for example during a guest's
pagetable walk.  This has to be handled in some way.

Use the recently added wait_queue feature to preempt the current vcpu
while a page is populated, then resume execution later once the page is
back.  This is only done if the active domain needs to access the
page, because in that case the vcpu would leave the active state anyway.

This patch adds a return code to p2m_mem_paging_populate() to indicate
to the caller that the page is ready, so it can retry the gfn_to_mfn call.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

---
 xen/arch/x86/hvm/hvm.c           |    3 ++-
 xen/arch/x86/mm/guest_walk.c     |    5 +++--
 xen/arch/x86/mm/hap/guest_walk.c |   10 ++++++----
 xen/arch/x86/mm/p2m.c            |   19 ++++++++++++++-----
 xen/common/domain.c              |    1 +
 xen/include/asm-x86/p2m.h        |    7 ++++---
 xen/include/xen/sched.h          |    3 +++
 7 files changed, 33 insertions(+), 15 deletions(-)

--- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/hvm/hvm.c
+++ xen-unstable.hg-4.1.22459/xen/arch/x86/hvm/hvm.c
@@ -1939,7 +1939,8 @@ static enum hvm_copy_result __hvm_copy(
 
         if ( p2m_is_paging(p2mt) )
         {
-            p2m_mem_paging_populate(p2m, gfn);
+            if ( p2m_mem_paging_populate(p2m, gfn) )
+                continue;
             return HVMCOPY_gfn_paged_out;
         }
         if ( p2m_is_shared(p2mt) )
--- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/guest_walk.c
+++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/guest_walk.c
@@ -93,11 +93,12 @@ static inline void *map_domain_gfn(struc
                                    uint32_t *rc) 
 {
     /* Translate the gfn, unsharing if shared */
+retry:
     *mfn = gfn_to_mfn_unshare(p2m, gfn_x(gfn), p2mt, 0);
     if ( p2m_is_paging(*p2mt) )
     {
-        p2m_mem_paging_populate(p2m, gfn_x(gfn));
-
+        if ( p2m_mem_paging_populate(p2m, gfn_x(gfn)) )
+            goto retry;
         *rc = _PAGE_PAGED;
         return NULL;
     }
--- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/hap/guest_walk.c
+++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/hap/guest_walk.c
@@ -46,12 +46,13 @@ unsigned long hap_gva_to_gfn(GUEST_PAGIN
     struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
 
     /* Get the top-level table's MFN */
+retry_cr3:
     cr3 = v->arch.hvm_vcpu.guest_cr[3];
     top_mfn = gfn_to_mfn_unshare(p2m, cr3 >> PAGE_SHIFT, &p2mt, 0);
     if ( p2m_is_paging(p2mt) )
     {
-        p2m_mem_paging_populate(p2m, cr3 >> PAGE_SHIFT);
-
+        if ( p2m_mem_paging_populate(p2m, cr3 >> PAGE_SHIFT) )
+            goto retry_cr3;
         pfec[0] = PFEC_page_paged;
         return INVALID_GFN;
     }
@@ -79,11 +80,12 @@ unsigned long hap_gva_to_gfn(GUEST_PAGIN
     if ( missing == 0 )
     {
         gfn_t gfn = guest_l1e_get_gfn(gw.l1e);
+retry_missing:
         gfn_to_mfn_unshare(p2m, gfn_x(gfn), &p2mt, 0);
         if ( p2m_is_paging(p2mt) )
         {
-            p2m_mem_paging_populate(p2m, gfn_x(gfn));
-
+            if ( p2m_mem_paging_populate(p2m, gfn_x(gfn)) )
+                goto retry_missing;
             pfec[0] = PFEC_page_paged;
             return INVALID_GFN;
         }
--- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/p2m.c
+++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/p2m.c
@@ -2777,16 +2777,17 @@ void p2m_mem_paging_drop_page(struct p2m
     }
 }
 
-void p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn)
+int p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn)
 {
     struct vcpu *v = current;
     mem_event_request_t req;
     p2m_type_t p2mt;
     struct domain *d = p2m->domain;
+    int ret = 0;
 
     /* Check that there's space on the ring for this request */
     if ( mem_event_check_ring(d, 1) )
-        return;
+        return ret;
 
     memset(&req, 0, sizeof(req));
 
@@ -2805,13 +2806,13 @@ void p2m_mem_paging_populate(struct p2m_
     /* Pause domain */
     if ( v->domain->domain_id == d->domain_id )
     {
-        vcpu_pause_nosync(v);
         req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
+        ret = 1;
     }
     else if ( p2mt != p2m_ram_paging_out && p2mt != p2m_ram_paged )
     {
         /* gfn is already on its way back and vcpu is not paused */
-        return;
+        goto populate_out;
     }
 
     /* Send request to pager */
@@ -2820,6 +2821,14 @@ void p2m_mem_paging_populate(struct p2m_
     req.vcpu_id = v->vcpu_id;
 
     mem_event_put_request(d, &req);
+
+    if ( req.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
+    {
+        wait_event(d->wq, mfn_valid(gfn_to_mfn(p2m, gfn, &p2mt)) && !p2m_is_paging(p2mt));
+    }
+
+populate_out:
+    return ret;
 }
 
 int p2m_mem_paging_prep(struct p2m_domain *p2m, unsigned long gfn)
@@ -2863,7 +2872,7 @@ void p2m_mem_paging_resume(struct p2m_do
 
     /* Unpause domain */
     if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
-        vcpu_unpause(d->vcpu[rsp.vcpu_id]);
+        wake_up(&d->wq);
 
     /* Unpause any domains that were paused because the ring was full */
     mem_event_unpause_vcpus(d);
--- xen-unstable.hg-4.1.22459.orig/xen/common/domain.c
+++ xen-unstable.hg-4.1.22459/xen/common/domain.c
@@ -244,6 +244,7 @@ struct domain *domain_create(
     spin_lock_init(&d->node_affinity_lock);
 
     spin_lock_init(&d->shutdown_lock);
+    init_waitqueue_head(&d->wq);
     d->shutdown_code = -1;
 
     if ( domcr_flags & DOMCRF_hvm )
--- xen-unstable.hg-4.1.22459.orig/xen/include/asm-x86/p2m.h
+++ xen-unstable.hg-4.1.22459/xen/include/asm-x86/p2m.h
@@ -474,7 +474,8 @@ int p2m_mem_paging_evict(struct p2m_doma
 /* Tell xenpaging to drop a paged out frame */
 void p2m_mem_paging_drop_page(struct p2m_domain *p2m, unsigned long gfn);
 /* Start populating a paged out frame */
-void p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn);
+/* retval 1 means the page is present on return */
+int p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn);
 /* Prepare the p2m for paging a frame in */
 int p2m_mem_paging_prep(struct p2m_domain *p2m, unsigned long gfn);
 /* Resume normal operation (in case a domain was paused) */
@@ -482,8 +483,8 @@ void p2m_mem_paging_resume(struct p2m_do
 #else
 static inline void p2m_mem_paging_drop_page(struct p2m_domain *p2m, unsigned long gfn)
 { }
-static inline void p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn)
-{ }
+static inline int p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn)
+{ return 0; }
 #endif
 
 struct page_info *p2m_alloc_ptp(struct p2m_domain *p2m, unsigned long type);
--- xen-unstable.hg-4.1.22459.orig/xen/include/xen/sched.h
+++ xen-unstable.hg-4.1.22459/xen/include/xen/sched.h
@@ -26,6 +26,7 @@
 #include <xen/cpumask.h>
 #include <xen/nodemask.h>
 #include <xen/multicall.h>
+#include <xen/wait.h>
 
 #ifdef CONFIG_COMPAT
 #include <compat/vcpu.h>
@@ -332,6 +333,8 @@ struct domain
     nodemask_t node_affinity;
     unsigned int last_alloc_node;
     spinlock_t node_affinity_lock;
+
+    struct waitqueue_head wq;
 };
 
 struct domain_setup_info

^ permalink raw reply	[flat|nested] 27+ messages in thread
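The hunks above are the core of the change: instead of vcpu_pause_nosync()/vcpu_unpause(), the faulting vcpu now sleeps on a per-domain wait queue until the pager has made the gfn present again, and p2m_mem_paging_resume() wakes it. A rough Python model of those wait_event()/wake_up() semantics (WaitQueue and the page dict are illustrative stand-ins, not Xen's API):

```python
import threading

class WaitQueue:
    """Toy stand-in for Xen's wait_event()/wake_up(): sleepers block
    until their predicate holds; wake_up() re-evaluates it for them."""
    def __init__(self):
        self.cond = threading.Condition()

    def wait_event(self, predicate, timeout=None):
        with self.cond:
            return self.cond.wait_for(predicate, timeout=timeout)

    def wake_up(self):
        with self.cond:
            self.cond.notify_all()

page = {"present": False}      # models the gfn's p2m state
wq = WaitQueue()

def pager():
    # models p2m_mem_paging_resume(): make the page present, then wake
    with wq.cond:
        page["present"] = True
    wq.wake_up()

t = threading.Thread(target=pager)
t.start()
woken = wq.wait_event(lambda: page["present"], timeout=5)
t.join()
```

The predicate is re-checked on every wakeup, so a sleeper only returns once the condition actually holds at that moment.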

* [PATCH 13/17] xenpaging: page only pagetables for debugging
  2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
                   ` (11 preceding siblings ...)
  2010-12-06 20:59 ` [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
  2010-12-06 20:59 ` [PATCH 14/17] xenpaging: prevent page-out of first 16MB Olaf Hering
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
  To: xen-devel

[-- Attachment #1: xen-unstable.xenpaging.page_pagetables.patch --]
[-- Type: text/plain, Size: 845 bytes --]

Page out only the pagetables of a Linux guest; this is needed to exercise the __hvm_copy code paths

---
 tools/xenpaging/policy_default.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/policy_default.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/policy_default.c
@@ -26,7 +26,7 @@
 #include "policy.h"
 
 
-#define MRU_SIZE (1024 * 16)
+#define MRU_SIZE (1 << 4)
 
 
 static unsigned long mru[MRU_SIZE];
@@ -60,8 +60,11 @@ int policy_init(xenpaging_t *paging)
     for ( i = 0; i < MRU_SIZE; i++ )
         mru[i] = INVALID_MFN;
 
-    /* Don't page out page 0 */
-    set_bit(0, bitmap);
+    /* Leave a hole for pagetables */
+    for ( i = 0; i < max_pages; i++ )
+        set_bit(i, bitmap);
+    for ( i = 0x1800; i < 0x18ff; i++ )
+        clear_bit(i, bitmap);
 
  out:
     return rc;


* [PATCH 14/17] xenpaging: prevent page-out of first 16MB
  2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
                   ` (12 preceding siblings ...)
  2010-12-06 20:59 ` [PATCH 13/17] xenpaging: page only pagetables for debugging Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
  2010-12-06 20:59 ` [PATCH 15/17] xenpaging: start xenpaging via config option Olaf Hering
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
  To: xen-devel

[-- Attachment #1: xen-unstable.xenpaging.blacklist.patch --]
[-- Type: text/plain, Size: 824 bytes --]

This is more a workaround than a bugfix:
Don't page out the first 16MB of guest memory.
If xenpaging removes pages while the BIOS is still initializing,
crashes occur because the BIOS code cannot cope with paged-out memory.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

---
 tools/xenpaging/policy_default.c |    4 ++++
 1 file changed, 4 insertions(+)

--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/policy_default.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/policy_default.c
@@ -60,6 +60,10 @@ int policy_init(xenpaging_t *paging)
     for ( i = 0; i < MRU_SIZE; i++ )
         mru[i] = INVALID_MFN;
 
+    /* Don't page out first 16MB */
+    for ( i = 0; i < ((16*1024*1024)/4096); i++ )
+        set_bit(i, bitmap);
+
     /* Leave a hole for pagetables */
     for ( i = 0; i < max_pages; i++ )
         set_bit(i, bitmap);

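With 4 KiB pages the loop above pins exactly (16 * 1024 * 1024) / 4096 = 4096 gfns (0 through 0xfff), which covers the memory the BIOS touches during initialization; a quick check of the arithmetic:

```python
PAGE_SIZE = 4096                      # 4 KiB guest pages
pinned = (16 * 1024 * 1024) // PAGE_SIZE   # gfns 0 .. 0xfff stay resident
```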

* [PATCH 15/17] xenpaging: start xenpaging via config option
  2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
                   ` (13 preceding siblings ...)
  2010-12-06 20:59 ` [PATCH 14/17] xenpaging: prevent page-out of first 16MB Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
  2010-12-06 20:59 ` [PATCH 16/17] xenpaging: add dynamic startup delay for xenpaging Olaf Hering
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
  To: xen-devel

[-- Attachment #1: xen-unstable.xenpaging.autostart.patch --]
[-- Type: text/plain, Size: 9729 bytes --]

Start xenpaging via config option.

TODO: make it actually work with xen-unstable ('None' is passed as size arg?)

TODO: add config option for different pagefile directory
TODO: add libxl support
TODO: parse config values like 42K, 42M, 42G, 42%

Signed-off-by: Olaf Hering <olaf@aepfle.de>

---
v2:
  unlink the logfile instead of truncating it;
  this allows hardlinking it for further inspection

 tools/examples/xmexample.hvm            |    3 +
 tools/python/README.XendConfig          |    1 
 tools/python/README.sxpcfg              |    1 
 tools/python/xen/xend/XendConfig.py     |    3 +
 tools/python/xen/xend/XendDomainInfo.py |    6 ++
 tools/python/xen/xend/image.py          |   91 ++++++++++++++++++++++++++++++++
 tools/python/xen/xm/create.py           |    5 +
 tools/python/xen/xm/xenapi_create.py    |    1 
 8 files changed, 111 insertions(+)

--- xen-unstable.hg-4.1.22459.orig/tools/examples/xmexample.hvm
+++ xen-unstable.hg-4.1.22459/tools/examples/xmexample.hvm
@@ -127,6 +127,9 @@ disk = [ 'file:/var/images/min-el3-i386.
 # Device Model to be used
 device_model = 'qemu-dm'
 
+# xenpaging, number of pages
+xenpaging = 42
+
 #-----------------------------------------------------------------------------
 # boot on floppy (a), hard disk (c), Network (n) or CD-ROM (d) 
 # default: hard disk, cd-rom, floppy
--- xen-unstable.hg-4.1.22459.orig/tools/python/README.XendConfig
+++ xen-unstable.hg-4.1.22459/tools/python/README.XendConfig
@@ -120,6 +120,7 @@ otherConfig
                                 image.vncdisplay
                                 image.vncunused
                                 image.hvm.device_model
+                                image.hvm.xenpaging
                                 image.hvm.display
                                 image.hvm.xauthority
                                 image.hvm.vncconsole
--- xen-unstable.hg-4.1.22459.orig/tools/python/README.sxpcfg
+++ xen-unstable.hg-4.1.22459/tools/python/README.sxpcfg
@@ -51,6 +51,7 @@ image
   - vncunused
   (HVM)
   - device_model
+  - xenpaging
   - display
   - xauthority
   - vncconsole
--- xen-unstable.hg-4.1.22459.orig/tools/python/xen/xend/XendConfig.py
+++ xen-unstable.hg-4.1.22459/tools/python/xen/xend/XendConfig.py
@@ -147,6 +147,7 @@ XENAPI_PLATFORM_CFG_TYPES = {
     'apic': int,
     'boot': str,
     'device_model': str,
+    'xenpaging': int,
     'loader': str,
     'display' : str,
     'fda': str,
@@ -508,6 +509,8 @@ class XendConfig(dict):
             self['platform']['nomigrate'] = 0
 
         if self.is_hvm():
+            if 'xenpaging' not in self['platform']:
+                self['platform']['xenpaging'] = None
             if 'timer_mode' not in self['platform']:
                 self['platform']['timer_mode'] = 1
             if 'viridian' not in self['platform']:
--- xen-unstable.hg-4.1.22459.orig/tools/python/xen/xend/XendDomainInfo.py
+++ xen-unstable.hg-4.1.22459/tools/python/xen/xend/XendDomainInfo.py
@@ -2390,6 +2390,7 @@ class XendDomainInfo:
 
         if self.image:
             self.image.createDeviceModel()
+            self.image.createXenPaging()
 
         #if have pass-through devs, need the virtual pci slots info from qemu
         self.pci_device_configure_boot()
@@ -2402,6 +2403,11 @@ class XendDomainInfo:
                 self.image.destroyDeviceModel()
             except Exception, e:
                 log.exception("Device model destroy failed %s" % str(e))
+            try:
+                log.debug("stopping xenpaging")
+                self.image.destroyXenPaging()
+            except Exception, e:
+                log.exception("stopping xenpaging failed %s" % str(e))
         else:
             log.debug("No device model")
 
--- xen-unstable.hg-4.1.22459.orig/tools/python/xen/xend/image.py
+++ xen-unstable.hg-4.1.22459/tools/python/xen/xend/image.py
@@ -122,12 +122,14 @@ class ImageHandler:
         self.vm.permissionsVm("image/cmdline", { 'dom': self.vm.getDomid(), 'read': True } )
 
         self.device_model = vmConfig['platform'].get('device_model')
+        self.xenpaging = vmConfig['platform'].get('xenpaging')
 
         self.display = vmConfig['platform'].get('display')
         self.xauthority = vmConfig['platform'].get('xauthority')
         self.vncconsole = int(vmConfig['platform'].get('vncconsole', 0))
         self.dmargs = self.parseDeviceModelArgs(vmConfig)
         self.pid = None
+        self.xenpaging_pid = None
         rtc_timeoffset = int(vmConfig['platform'].get('rtc_timeoffset', 0))
         if int(vmConfig['platform'].get('localtime', 0)):
             if time.localtime(time.time())[8]:
@@ -392,6 +394,95 @@ class ImageHandler:
         sentinel_fifos_inuse[sentinel_path_fifo] = 1
         self.sentinel_path_fifo = sentinel_path_fifo
 
+    def createXenPaging(self):
+        if self.xenpaging is None:
+            return
+        if self.xenpaging == 0:
+            return
+        if self.xenpaging_pid:
+            return
+        xenpaging_bin = auxbin.pathTo("xenpaging")
+        args = [xenpaging_bin]
+        args = args + ([ "%d" % self.vm.getDomid()])
+        args = args + ([ "%s" % self.xenpaging])
+        env = dict(os.environ)
+        self.xenpaging_logfile = "/var/log/xen/xenpaging-%s.log" %  str(self.vm.info['name_label'])
+        logfile_mode = os.O_WRONLY|os.O_CREAT|os.O_APPEND|os.O_TRUNC
+        null = os.open("/dev/null", os.O_RDONLY)
+        try:
+            os.unlink(self.xenpaging_logfile)
+        except:
+            pass
+        logfd = os.open(self.xenpaging_logfile, logfile_mode, 0644)
+        sys.stderr.flush()
+        contract = osdep.prefork("%s:%d" % (self.vm.getName(), self.vm.getDomid()))
+        xenpaging_pid = os.fork()
+        if xenpaging_pid == 0: #child
+            try:
+                xenpaging_dir = "/var/lib/xen/xenpaging"
+                osdep.postfork(contract)
+                os.dup2(null, 0)
+                os.dup2(logfd, 1)
+                os.dup2(logfd, 2)
+                try:
+                    os.mkdir(xenpaging_dir)
+                except:
+                    log.info("mkdir %s failed" % xenpaging_dir)
+                    pass
+                try:
+                    os.chdir(xenpaging_dir)
+                except:
+                    log.warn("chdir %s failed" % xenpaging_dir)
+                try:
+                    log.info("starting %s" % args)
+                    os.execve(xenpaging_bin, args, env)
+                except Exception, e:
+                    print >>sys.stderr, (
+                        'failed to execute xenpaging: %s: %s' %
+                        (xenpaging_bin, utils.exception_string(e)))
+                    os._exit(126)
+            except Exception, e:
+                log.warn("starting xenpaging in %s failed" % xenpaging_dir)
+                os._exit(127)
+        else:
+            osdep.postfork(contract, abandon=True)
+            self.xenpaging_pid = xenpaging_pid
+            os.close(null)
+            os.close(logfd)
+
+    def destroyXenPaging(self):
+        if self.xenpaging is None:
+            return
+        if self.xenpaging_pid:
+            try:
+                os.kill(self.xenpaging_pid, signal.SIGHUP)
+            except OSError, exn:
+                log.exception(exn)
+            for i in xrange(100):
+                try:
+                    (p, rv) = os.waitpid(self.xenpaging_pid, os.WNOHANG)
+                    if p == self.xenpaging_pid:
+                        break
+                except OSError:
+                    # This is expected if Xend has been restarted within
+                    # the life of this domain.  In this case, we can kill
+                    # the process, but we can't wait for it because it's
+                    # not our child. We continue this loop, and after it is
+                    # terminated make really sure the process is going away
+                    # (SIGKILL).
+                    pass
+                time.sleep(0.1)
+            else:
+                log.warning("xenpaging %d took more than 10s "
+                            "to terminate: sending SIGKILL" % self.xenpaging_pid)
+                try:
+                    os.kill(self.xenpaging_pid, signal.SIGKILL)
+                    os.waitpid(self.xenpaging_pid, 0)
+                except OSError:
+                    # This happens if the process doesn't exist.
+                    pass
+        self.xenpaging_pid = None
+
     def createDeviceModel(self, restore = False):
         if self.device_model is None:
             return
--- xen-unstable.hg-4.1.22459.orig/tools/python/xen/xm/create.py
+++ xen-unstable.hg-4.1.22459/tools/python/xen/xm/create.py
@@ -491,6 +491,10 @@ gopts.var('nfs_root', val="PATH",
           fn=set_value, default=None,
           use="Set the path of the root NFS directory.")
 
+gopts.var('xenpaging', val='NUM',
+          fn=set_int, default=None,
+          use="Number of pages to swap.")
+
 gopts.var('device_model', val='FILE',
           fn=set_value, default=None,
           use="Path to device model program.")
@@ -1076,6 +1080,7 @@ def configure_hvm(config_image, vals):
     args = [ 'acpi', 'apic',
              'boot',
              'cpuid', 'cpuid_check',
+             'xenpaging',
              'device_model', 'display',
              'fda', 'fdb',
              'gfx_passthru', 'guest_os_type',
--- xen-unstable.hg-4.1.22459.orig/tools/python/xen/xm/xenapi_create.py
+++ xen-unstable.hg-4.1.22459/tools/python/xen/xm/xenapi_create.py
@@ -1085,6 +1085,7 @@ class sxp2xml:
             'acpi',
             'apic',
             'boot',
+            'xenpaging',
             'device_model',
             'loader',
             'fda',

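Patch 15's createXenPaging()/destroyXenPaging() follow the classic helper-daemon pattern: fork, redirect stdio, execve, and on teardown signal the process and poll for it with WNOHANG before escalating to SIGKILL. A self-contained sketch of the teardown half (SIGHUP and the 10 second budget mirror the patch; the helper name is made up):

```python
import os, signal, subprocess, sys, time

def stop_helper(pid, grace=10.0, step=0.1):
    """Patch 15's teardown pattern: ask nicely (SIGHUP), poll with
    WNOHANG for up to `grace` seconds, then escalate to SIGKILL."""
    try:
        os.kill(pid, signal.SIGHUP)
    except OSError:
        return None                        # already gone
    deadline = time.monotonic() + grace
    while time.monotonic() < deadline:
        try:
            p, status = os.waitpid(pid, os.WNOHANG)
            if p == pid:
                return status              # reaped normally
        except OSError:
            pass                           # not our child (e.g. after a restart)
        time.sleep(step)
    os.kill(pid, signal.SIGKILL)           # took too long: last resort
    try:
        _, status = os.waitpid(pid, 0)
    except OSError:
        status = None                      # nothing left to reap
    return status

# Exercise it against a throwaway child process.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
status = stop_helper(child.pid)
```

The OSError branch in the poll loop matters for the same reason as the comment in the patch: after a xend restart the helper is no longer our child, so waitpid() fails even though kill() still works.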

* [PATCH 16/17] xenpaging: add dynamic startup delay for xenpaging
  2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
                   ` (14 preceding siblings ...)
  2010-12-06 20:59 ` [PATCH 15/17] xenpaging: start xenpaging via config option Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
  2010-12-06 20:59 ` [PATCH 17/17] xenpaging: (sparse) documenation Olaf Hering
  2010-12-06 21:16 ` [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
  17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
  To: xen-devel

[-- Attachment #1: xen-unstable.xenpaging.autostart_delay.patch --]
[-- Type: text/plain, Size: 4588 bytes --]

This is a debug helper. Since the xenpaging support is still fragile, start
xenpaging at different stages of the boot process; different delays will
trigger more bugs. This implementation starts without delay for 5 reboots,
then increments the delay by 0.1 seconds. It uses xenstore for persistent
storage of the delay values.

TODO: find the correct place to remove the xenstore directory when the guest is shut down or crashes

Signed-off-by: Olaf Hering <olaf@aepfle.de>

---
 tools/python/xen/xend/image.py |   32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

--- xen-unstable.hg-4.1.22459.orig/tools/python/xen/xend/image.py
+++ xen-unstable.hg-4.1.22459/tools/python/xen/xend/image.py
@@ -123,6 +123,18 @@ class ImageHandler:
 
         self.device_model = vmConfig['platform'].get('device_model')
         self.xenpaging = vmConfig['platform'].get('xenpaging')
+        self.xenpaging_delay = xstransact.Read("/local/domain/0/xenpaging/%s/xenpaging_delay" % self.vm.info['name_label'])
+        if self.xenpaging_delay == None:
+            log.warn("XXX creating /local/domain/0/xenpaging/%s" % self.vm.info['name_label'])
+            xstransact.Mkdir("/local/domain/0/xenpaging/%s" % self.vm.info['name_label'])
+            xstransact.Store("/local/domain/0/xenpaging/%s" % self.vm.info['name_label'], ('xenpaging_delay', '0.0'))
+            xstransact.Store("/local/domain/0/xenpaging/%s" % self.vm.info['name_label'], ('xenpaging_delay_inc', '0.1'))
+            xstransact.Store("/local/domain/0/xenpaging/%s" % self.vm.info['name_label'], ('xenpaging_delay_use', '5'))
+            xstransact.Store("/local/domain/0/xenpaging/%s" % self.vm.info['name_label'], ('xenpaging_delay_used', '0'))
+        self.xenpaging_delay = float(xstransact.Read("/local/domain/0/xenpaging/%s/xenpaging_delay" % self.vm.info['name_label']))
+        self.xenpaging_delay_inc = float(xstransact.Read("/local/domain/0/xenpaging/%s/xenpaging_delay_inc" % self.vm.info['name_label']))
+        self.xenpaging_delay_use = int(xstransact.Read("/local/domain/0/xenpaging/%s/xenpaging_delay_use" % self.vm.info['name_label']))
+        self.xenpaging_delay_used = int(xstransact.Read("/local/domain/0/xenpaging/%s/xenpaging_delay_used" % self.vm.info['name_label']))
 
         self.display = vmConfig['platform'].get('display')
         self.xauthority = vmConfig['platform'].get('xauthority')
@@ -401,6 +413,17 @@ class ImageHandler:
             return
         if self.xenpaging_pid:
             return
+        if self.xenpaging_delay_used < self.xenpaging_delay_use:
+            self.xenpaging_delay_used += 1
+        else:
+            self.xenpaging_delay_used = 0
+            self.xenpaging_delay += self.xenpaging_delay_inc
+        log.info("delay_used %s" % self.xenpaging_delay_used)
+        log.info("delay_use %s" % self.xenpaging_delay_use)
+        log.info("delay %s" % self.xenpaging_delay)
+        log.info("delay_inc %s" % self.xenpaging_delay_inc)
+        xstransact.Store("/local/domain/0/xenpaging/%s" % self.vm.info['name_label'], ('xenpaging_delay', self.xenpaging_delay))
+        xstransact.Store("/local/domain/0/xenpaging/%s" % self.vm.info['name_label'], ('xenpaging_delay_used', self.xenpaging_delay_used))
         xenpaging_bin = auxbin.pathTo("xenpaging")
         args = [xenpaging_bin]
         args = args + ([ "%d" % self.vm.getDomid()])
@@ -434,6 +457,9 @@ class ImageHandler:
                 except:
                     log.warn("chdir %s failed" % xenpaging_dir)
                 try:
+                    if self.xenpaging_delay != 0.0:
+                        log.info("delaying xenpaging startup %s seconds ..." % self.xenpaging_delay)
+                        time.sleep(self.xenpaging_delay)
                     log.info("starting %s" % args)
                     os.execve(xenpaging_bin, args, env)
                 except Exception, e:
@@ -449,10 +475,16 @@ class ImageHandler:
             self.xenpaging_pid = xenpaging_pid
             os.close(null)
             os.close(logfd)
+            if self.xenpaging_delay == 0.0:
+                log.warn("waiting for xenpaging ...")
+                time.sleep(22)
+                log.warn("waiting for xenpaging done.")
 
     def destroyXenPaging(self):
         if self.xenpaging is None:
             return
+        # FIXME find correct place for guest shutdown or crash
+        #xstransact.Remove("/local/domain/0/xenpaging/%s" % self.vm.info['name_label'])
         if self.xenpaging_pid:
             try:
                 os.kill(self.xenpaging_pid, signal.SIGHUP)

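Each delay value is reused for several boots before being incremented. A sketch of the counter logic carried in the xenstore keys (the dict stands in for xenpaging_delay / xenpaging_delay_used):

```python
def next_delay(state, inc=0.1, boots_per_step=5):
    """Patch 16's bookkeeping: keep the current delay for a few boots,
    then bump it by `inc` and restart the boot counter."""
    if state["used"] < boots_per_step:
        state["used"] += 1
    else:
        state["used"] = 0
        state["delay"] = round(state["delay"] + inc, 10)  # avoid fp drift
    return state["delay"]

state = {"delay": 0.0, "used": 0}
delays = [next_delay(state) for _ in range(12)]
```

With the patch's defaults this gives five boots with no delay, then 0.1 s for the next run of boots, and so on.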

* [PATCH 17/17] xenpaging: (sparse) documenation
  2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
                   ` (15 preceding siblings ...)
  2010-12-06 20:59 ` [PATCH 16/17] xenpaging: add dynamic startup delay for xenpaging Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
  2010-12-06 21:16 ` [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
  17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
  To: xen-devel

[-- Attachment #1: xen-unstable.xenpaging.doc.patch --]
[-- Type: text/plain, Size: 1775 bytes --]

Write up some sparse documentation about xenpaging usage.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

---
 docs/misc/xenpaging.txt |   48 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

--- /dev/null
+++ xen-unstable.hg-4.1.22459/docs/misc/xenpaging.txt
@@ -0,0 +1,48 @@
+Warning:
+
+The xenpaging code is new and not fully debugged.
+Usage of xenpaging can crash Xen or cause severe data corruption in the
+guest memory and its filesystems!
+
+Description:
+
+xenpaging writes memory pages of a given guest to a file and moves the
+pages back to the pool of available memory.  Once the guests wants to
+access the paged-out memory, the page is read from disk and placed into
+memory.  This allows the sum of all running guests to use more memory
+than physically available on the host.
+
+Usage:
+
+Once the guest is running, run xenpaging with the guest_id and the
+number of pages to page-out:
+
+  chdir /var/lib/xen/xenpaging
+  xenpaging <guest_id>  <number_of_pages>
+
+To obtain the guest_id, run 'xm list'.
+xenpaging will write the pagefile to the current directory.
+Example with 128MB pagefile on guest 1:
+
+  xenpaging 1 32768
+
+Caution: stopping xenpaging manually will cause the guest to stall or
+crash because the paged-out memory is not written back into the guest!
+
+After a reboot of a guest, its guest_id changes and the running
+xenpaging binary loses its target. To automate restarting of xenpaging
+after a guest reboot, specify the number of pages in the guest
+configuration file /etc/xen/vm/<guest_name>:
+
+xenpaging=32768
+
+Recreate the guest with 'xm create /etc/xen/vm/<guest_name>' to activate
+the changes.
+
+
+Todo:
+- implement stopping of xenpaging
+- implement/test live migration
+
+
+# vim: tw=72

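For reference, the 32768 in the example above is just the pagefile size divided by the 4 KiB page size (a sanity check of the arithmetic, not part of the patch):

```python
PAGE_SIZE = 4096                                 # 4 KiB pages
pages_for_128mb = (128 * 1024 * 1024) // PAGE_SIZE
```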

* Re: [PATCH 00/17] xenpaging changes for xen-unstable
  2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
                   ` (16 preceding siblings ...)
  2010-12-06 20:59 ` [PATCH 17/17] xenpaging: (sparse) documenation Olaf Hering
@ 2010-12-06 21:16 ` Olaf Hering
  17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 21:16 UTC (permalink / raw)
  To: xen-devel

On Mon, Dec 06, Olaf Hering wrote:

> This series uses the recently added wait_event feature. The __hvm_copy
> patch crashes Xen with what looks like stack corruption. After a few
> populate/resume iterations. I have added some printk to the
> populate/resume functions and also wait.c. This leads to crashes.  It
> rarely prints a clean backtrace like the one shown below (most of the
> time a few cpus crash at once).

I have to add that sometimes the ASSERT in do_softirq() triggers,
but I haven't had a chance to see which of the 3 checks actually fires.


Olaf


* Re: [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user
  2010-12-06 20:59 ` [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user Olaf Hering
@ 2010-12-07  9:27   ` Jan Beulich
  2010-12-07  9:45     ` Olaf Hering
  2010-12-15 11:35   ` Keir Fraser
  1 sibling, 1 reply; 27+ messages in thread
From: Jan Beulich @ 2010-12-07  9:27 UTC (permalink / raw)
  To: Olaf Hering; +Cc: xen-devel

>>> On 06.12.10 at 21:59, Olaf Hering <olaf@aepfle.de> wrote:
> --- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/guest_walk.c
> +++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/guest_walk.c
> @@ -93,11 +93,12 @@ static inline void *map_domain_gfn(struc
>                                     uint32_t *rc) 
>  {
>      /* Translate the gfn, unsharing if shared */
> +retry:
>      *mfn = gfn_to_mfn_unshare(p2m, gfn_x(gfn), p2mt, 0);
>      if ( p2m_is_paging(*p2mt) )
>      {
> -        p2m_mem_paging_populate(p2m, gfn_x(gfn));
> -
> +        if ( p2m_mem_paging_populate(p2m, gfn_x(gfn)) )
> +            goto retry;
>          *rc = _PAGE_PAGED;
>          return NULL;
>      }

Is this retry loop (and similar ones later in the patch) guaranteed
to be bounded in some way?

> --- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/p2m.c
> +++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/p2m.c
> @@ -2805,13 +2806,13 @@ void p2m_mem_paging_populate(struct p2m_
>      /* Pause domain */
>      if ( v->domain->domain_id == d->domain_id )
>      {
> -        vcpu_pause_nosync(v);
>          req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
> +        ret = 1;
>      }
>      else if ( p2mt != p2m_ram_paging_out && p2mt != p2m_ram_paged )
>      {
>          /* gfn is already on its way back and vcpu is not paused */
> -        return;
> +        goto populate_out;

Do you really need a goto here (i.e. are you foreseeing to get stuff
added between the label and the return below)?

>      }
>  
>      /* Send request to pager */
> @@ -2820,6 +2821,14 @@ void p2m_mem_paging_populate(struct p2m_
>      req.vcpu_id = v->vcpu_id;
>  
>      mem_event_put_request(d, &req);
> +
> +    if ( req.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
> +    {
> +        wait_event(d->wq, mfn_valid(gfn_to_mfn(p2m, gfn, &p2mt)) && !p2m_is_paging(p2mt));
> +    }
> +
> +populate_out:
> +    return ret;
>  }
>  
>  int p2m_mem_paging_prep(struct p2m_domain *p2m, unsigned long gfn)
> --- xen-unstable.hg-4.1.22459.orig/xen/include/asm-x86/p2m.h
> +++ xen-unstable.hg-4.1.22459/xen/include/asm-x86/p2m.h
> @@ -474,7 +474,8 @@ int p2m_mem_paging_evict(struct p2m_doma
>  /* Tell xenpaging to drop a paged out frame */
>  void p2m_mem_paging_drop_page(struct p2m_domain *p2m, unsigned long gfn);
>  /* Start populating a paged out frame */
> -void p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn);
> +/* retval 1 means the page is present on return */
> +int p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn);

Isn't this a case where you absolutely need the return value checked?
If so, you will want to add __must_check here.

Jan


* Re: [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user
  2010-12-07  9:27   ` Jan Beulich
@ 2010-12-07  9:45     ` Olaf Hering
  0 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-07  9:45 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel

On Tue, Dec 07, Jan Beulich wrote:

> >>> On 06.12.10 at 21:59, Olaf Hering <olaf@aepfle.de> wrote:
> > --- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/guest_walk.c
> > +++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/guest_walk.c
> > @@ -93,11 +93,12 @@ static inline void *map_domain_gfn(struc
> >                                     uint32_t *rc) 
> >  {
> >      /* Translate the gfn, unsharing if shared */
> > +retry:
> >      *mfn = gfn_to_mfn_unshare(p2m, gfn_x(gfn), p2mt, 0);
> >      if ( p2m_is_paging(*p2mt) )
> >      {
> > -        p2m_mem_paging_populate(p2m, gfn_x(gfn));
> > -
> > +        if ( p2m_mem_paging_populate(p2m, gfn_x(gfn)) )
> > +            goto retry;
> >          *rc = _PAGE_PAGED;
> >          return NULL;
> >      }
> 
> Is this retry loop (and similar ones later in the patch) guaranteed
> to be bounded in some way?

This needs to be fixed, yes.
For the plain __hvm_copy case, with nothing else being modified, the
'return HVMCOPY_gfn_paged_out' could be just a 'continue'. But even
then, something needs to break the loop.

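One possible shape for a bounded version of that loop, sketched in Python with hypothetical translate()/populate() helpers (this is just the structure Jan is asking about, not the actual fix):

```python
def map_gfn(translate, populate, max_tries=16):
    """Bounded retry around the paged-out case (hypothetical helpers:
    translate() returns an mfn, or None while the gfn is paged out;
    populate() returns True once it has waited for the page-in)."""
    for _ in range(max_tries):
        mfn = translate()
        if mfn is not None:
            return mfn               # page is present, done
        if not populate():
            return None              # could not wait (foreign vcpu); caller retries
    return None                      # cap reached: fail instead of spinning forever
```

Whatever the bound is, the caller still has to decide what failure means, e.g. propagating _PAGE_PAGED as before.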
> >          /* gfn is already on its way back and vcpu is not paused */
> > -        return;
> > +        goto populate_out;
> 
> Do you really need a goto here (i.e. are you foreseeing to get stuff
> added between the label and the return below)?

That's something for my debug patch; I have a trace_var at the end of
each function.

> > +/* retval 1 means the page is present on return */
> > +int p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn);
> 
> Isn't this a case where you absolutely need the return value checked?
> If so, you will want to add __must_check here.

Yes, that would be a good addition.
Maybe the wait_event/wake_up could be done unconditionally, independent
of whether the p2m domain differs from the vcpu domain.


Olaf

> 


* Re: [PATCH 01/17] xenpaging: close xch handle in xenpaging_init error path
  2010-12-06 20:59 ` [PATCH 01/17] xenpaging: close xch handle in xenpaging_init error path Olaf Hering
@ 2010-12-14 18:52   ` Ian Jackson
  0 siblings, 0 replies; 27+ messages in thread
From: Ian Jackson @ 2010-12-14 18:52 UTC (permalink / raw)
  To: Olaf Hering; +Cc: xen-devel

Olaf Hering writes ("[Xen-devel] [PATCH 01/17] xenpaging: close xch handle in xenpaging_init error path"):
> Just for correctness, close the xch handle in the error path.

Thanks, I have applied patches 1-7 (tools) and 17 (documentation) from
your series.

Ian.


* Re: [PATCH 10/17] xenpaging: update machine_to_phys_mapping[] during page-in
  2010-12-06 20:59 ` [PATCH 10/17] xenpaging: update machine_to_phys_mapping[] during page-in Olaf Hering
@ 2010-12-14 22:58   ` Olaf Hering
  2010-12-15 10:47     ` Tim Deegan
  0 siblings, 1 reply; 27+ messages in thread
From: Olaf Hering @ 2010-12-14 22:58 UTC (permalink / raw)
  To: xen-devel

On Mon, Dec 06, Olaf Hering wrote:

> Update the machine_to_phys_mapping[] array during page-in. The gfn is
> now at a different page and the array has still INVALID_M2P_ENTRY in the
> index.

Does anyone know what the "best" location for this array update is?
p2m_mem_paging_prep() allocates a new page for the guest and assigns a
gfn to that mfn. So in theory the array could be updated right away,
even if the gfn will still have a p2m_ram_paging_* type until
p2m_mem_paging_resume() is called.

Olaf

> --- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/p2m.c
> +++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/p2m.c
> @@ -2827,6 +2827,7 @@ void p2m_mem_paging_resume(struct p2m_do
>      mfn = gfn_to_mfn(p2m, rsp.gfn, &p2mt);
>      p2m_lock(p2m);
>      set_p2m_entry(p2m, rsp.gfn, mfn, 0, p2m_ram_rw);
> +    set_gpfn_from_mfn(mfn_x(mfn), gfn);
>      audit_p2m(p2m, 1);
>      p2m_unlock(p2m);
>  
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel
> 


* Re: [PATCH 10/17] xenpaging: update machine_to_phys_mapping[] during page-in
  2010-12-14 22:58   ` Olaf Hering
@ 2010-12-15 10:47     ` Tim Deegan
  0 siblings, 0 replies; 27+ messages in thread
From: Tim Deegan @ 2010-12-15 10:47 UTC (permalink / raw)
  To: Olaf Hering; +Cc: xen-devel@lists.xensource.com

At 22:58 +0000 on 14 Dec (1292367517), Olaf Hering wrote:
> On Mon, Dec 06, Olaf Hering wrote:
> 
> > Update the machine_to_phys_mapping[] array during page-in. The gfn is
> > now at a different page and the array has still INVALID_M2P_ENTRY in the
> > index.
> 
> Does anyone know what the "best" location for this array update is?
> p2m_mem_paging_prep() allocates a new page for the guest and assigns a
> gfn to that mfn. So in theory the array could be updated right away,
> even if the gfn will still have a p2m_ram_paging_* type until
> p2m_mem_paging_resume() is called.

I slightly prefer it where you have put it in this patch, but either
would probably be fine. 

Tim.

> Olaf
> 
> > --- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/p2m.c
> > +++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/p2m.c
> > @@ -2827,6 +2827,7 @@ void p2m_mem_paging_resume(struct p2m_do
> >      mfn = gfn_to_mfn(p2m, rsp.gfn, &p2mt);
> >      p2m_lock(p2m);
> >      set_p2m_entry(p2m, rsp.gfn, mfn, 0, p2m_ram_rw);
> > +    set_gpfn_from_mfn(mfn_x(mfn), gfn);
> >      audit_p2m(p2m, 1);
> >      p2m_unlock(p2m);
> >  
> > 
> > 

-- 
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user
  2010-12-06 20:59 ` [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user Olaf Hering
  2010-12-07  9:27   ` Jan Beulich
@ 2010-12-15 11:35   ` Keir Fraser
  2010-12-15 13:51     ` Olaf Hering
  1 sibling, 1 reply; 27+ messages in thread
From: Keir Fraser @ 2010-12-15 11:35 UTC (permalink / raw)
  To: Olaf Hering, xen-devel

On 06/12/2010 20:59, "Olaf Hering" <olaf@aepfle.de> wrote:

>      mem_event_put_request(d, &req);
> +
> +    if ( req.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
> +    {
> +        wait_event(d->wq, mfn_valid(gfn_to_mfn(p2m, gfn, &p2mt)) &&
> !p2m_is_paging(p2mt));
> +    }
> +

This I find interesting. Do you not race with the xenpaging daemon
satisfying your page-in request, but then very quickly paging it out
again? In that case you might never wake up!

I think the condition you wait on should be for a response to your paging
request. A wake_up() alone is not really sufficient; you need some kind of
explicit flagging to the vcpu too. Could the paging daemon stick a response
in a shared ring, or otherwise explicitly flag to this vcpu that its
request has been fully satisfied and it's time to wake up and retry its
operation? Well, really that's a rhetorical question, because that is
exactly what you need to implement for this waitqueue strategy to work
properly!

 -- Keir

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user
  2010-12-15 11:35   ` Keir Fraser
@ 2010-12-15 13:51     ` Olaf Hering
  2010-12-15 14:08       ` Keir Fraser
  0 siblings, 1 reply; 27+ messages in thread
From: Olaf Hering @ 2010-12-15 13:51 UTC (permalink / raw)
  To: Keir Fraser; +Cc: xen-devel

On Wed, Dec 15, Keir Fraser wrote:

> On 06/12/2010 20:59, "Olaf Hering" <olaf@aepfle.de> wrote:
> 
> >      mem_event_put_request(d, &req);
> > +
> > +    if ( req.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
> > +    {
> > +        wait_event(d->wq, mfn_valid(gfn_to_mfn(p2m, gfn, &p2mt)) &&
> > !p2m_is_paging(p2mt));
> > +    }
> > +
> 
> This I find interesting. Do you not race with the xenpaging daemon
> satisfying your page-in request, but then very quickly paging it out
> again? In that case you might never wake up!

That probably depends on the size of the MRU list in the xenpaging
policy. Right now a lot of page-out/page-in activity will happen before
the gfn is nominated again.

> I think the condition you wait on should be for a response to your paging
> request. A wake_up() alone is not really sufficient; you need some kind of
> explicit flagging to the vcpu too. Could the paging daemon stick a response
> in a shared ring, or otherwise explicitly flag to this vcpu that its
> request has been fully satisfied and it's time to wake up and retry its
> operation? Well, really that's a rhetorical question, because that is
> exactly what you need to implement for this waitqueue strategy to work
> properly!

Yes, there needs to be some reliable event which the vcpu has to pick up.
I will return to work on this issue, but most likely not until next year.


Olaf

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user
  2010-12-15 13:51     ` Olaf Hering
@ 2010-12-15 14:08       ` Keir Fraser
  0 siblings, 0 replies; 27+ messages in thread
From: Keir Fraser @ 2010-12-15 14:08 UTC (permalink / raw)
  To: Olaf Hering; +Cc: xen-devel

On 15/12/2010 13:51, "Olaf Hering" <olaf@aepfle.de> wrote:

>> I think the condition you wait on should be for a response to your paging
>> request. A wake_up() alone is not really sufficient; you need some kind of
>> explicit flagging to the vcpu too. Could the paging daemon stick a response
>> in a shared ring, or otherwise explicitly flag to this vcpu that its
>> request has been fully satisfied and it's time to wake up and retry its
>> operation? Well, really that's a rhetorical question, because that is
>> exactly what you need to implement for this waitqueue strategy to work
>> properly!
> 
> Yes, there needs to be some reliable event which the vcpu has to pick up.
> I will return to work on this issue, but most likely not this year anymore.

This is all bugfix stuff which can be slipped into 4.1 during feature
freeze. Also, what doesn't get done in time for 4.1.0 can go into 4.1.1
instead, which will likely be 6-8 weeks later.

 -- Keir

^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2010-12-15 14:08 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
2010-12-06 20:59 ` [PATCH 01/17] xenpaging: close xch handle in xenpaging_init error path Olaf Hering
2010-12-14 18:52   ` Ian Jackson
2010-12-06 20:59 ` [PATCH 02/17] xenpaging: remove perror usage " Olaf Hering
2010-12-06 20:59 ` [PATCH 03/17] xenpaging: print DPRINTF ouput if XENPAGING_DEBUG is in environment Olaf Hering
2010-12-06 20:59 ` [PATCH 04/17] xenpaging: print number of evicted pages Olaf Hering
2010-12-06 20:59 ` [PATCH 05/17] xenpaging: remove duplicate xc_interface_close call Olaf Hering
2010-12-06 20:59 ` [PATCH 06/17] xenpaging: do not use DPRINTF/ERROR if xch handle is unavailable Olaf Hering
2010-12-06 20:59 ` [PATCH 07/17] xenpaging: update xch usage Olaf Hering
2010-12-06 20:59 ` [PATCH 08/17] xenpaging: make vcpu_sleep_nosync() optional in mem_event_check_ring() Olaf Hering
2010-12-06 20:59 ` [PATCH 09/17] xenpaging: update machine_to_phys_mapping[] during page deallocation Olaf Hering
2010-12-06 20:59 ` [PATCH 10/17] xenpaging: update machine_to_phys_mapping[] during page-in Olaf Hering
2010-12-14 22:58   ` Olaf Hering
2010-12-15 10:47     ` Tim Deegan
2010-12-06 20:59 ` [PATCH 11/17] xenpaging: drop paged pages in guest_remove_page Olaf Hering
2010-12-06 20:59 ` [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user Olaf Hering
2010-12-07  9:27   ` Jan Beulich
2010-12-07  9:45     ` Olaf Hering
2010-12-15 11:35   ` Keir Fraser
2010-12-15 13:51     ` Olaf Hering
2010-12-15 14:08       ` Keir Fraser
2010-12-06 20:59 ` [PATCH 13/17] xenpaging: page only pagetables for debugging Olaf Hering
2010-12-06 20:59 ` [PATCH 14/17] xenpaging: prevent page-out of first 16MB Olaf Hering
2010-12-06 20:59 ` [PATCH 15/17] xenpaging: start xenpaging via config option Olaf Hering
2010-12-06 20:59 ` [PATCH 16/17] xenpaging: add dynamic startup delay for xenpaging Olaf Hering
2010-12-06 20:59 ` [PATCH 17/17] xenpaging: (sparse) documenation Olaf Hering
2010-12-06 21:16 ` [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).