xen-devel.lists.xenproject.org archive mirror
From: "Roger Pau Monné" <roger.pau@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [GIT PULL] (xen) stable/for-jens-3.10 xenwatch: page allocation failure: order:7, mode:0x10c0d0
Date: Thu, 25 Apr 2013 14:39:51 +0200	[thread overview]
Message-ID: <51792417.2090302@citrix.com> (raw)
In-Reply-To: <44354459.20130425143233@eikelenboom.it>

On 25/04/13 14:32, Sander Eikelenboom wrote:
> 
> Thursday, April 25, 2013, 10:43:33 AM, you wrote:
> 
>> On 25/04/13 10:35, Roger Pau Monné wrote:
>>> On 24/04/13 20:16, Sander Eikelenboom wrote:
>>>> Friday, April 19, 2013, 4:44:01 PM, you wrote:
>>>>
>>>>> Hey Jens,
>>>>
>>>>> Please in your spare time (if there is such a thing at a conference)
>>>>> pull this branch:
>>>>
>>>>>  git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-jens-3.10
>>>>
>>>>> for your v3.10 branch. Sorry for being so late with this.
>>>>
>>>> <big snip>
>>>>
>>>>> Anyhow, please pull and if possible include the nice overview I typed up in the
>>>>> merge commit.
>>>>
>>>>>  Documentation/ABI/stable/sysfs-bus-xen-backend |  18 +
>>>>>  drivers/block/xen-blkback/blkback.c            | 843 ++++++++++++++++---------
>>>>>  drivers/block/xen-blkback/common.h             | 145 ++++-
>>>>>  drivers/block/xen-blkback/xenbus.c             |  38 ++
>>>>>  drivers/block/xen-blkfront.c                   | 490 +++++++++++---
>>>>>  include/xen/interface/io/blkif.h               |  53 ++
>>>>>  6 files changed, 1188 insertions(+), 399 deletions(-)
>>>>
>>>>> Roger Pau Monne (7):
>>>>>       xen-blkback: print stats about persistent grants
>>>>>       xen-blkback: use balloon pages for all mappings
>>>>>       xen-blkback: implement LRU mechanism for persistent grants
>>>>>       xen-blkback: move pending handles list from blkbk to pending_req
>>>>>       xen-blkback: make the queue of free requests per backend
>>>>>       xen-blkback: expand map/unmap functions
>>>>>       xen-block: implement indirect descriptors
>>>>
>>>>
>>>> Hi Konrad / Roger,
>>>>
>>>> I tried this pull on top of Linus' latest linux-3.9 tree. Although it seems to boot and work fine at first, I run into trouble after it has been running for about a day.
>>>> Without this pull it runs fine for several days.
>>>>
>>>> Trying to start a new guest, I ended up with the splat below. In the output of xl dmesg I also seem to see more of these than before:
>>>> (XEN) [2013-04-24 14:37:40] grant_table.c:1250:d1 Expanding dom (1) grant table from (9) to (10) frames
>>>
>>> Hello Sander,
>>>
>>> Thanks for the report. More messages about grant table expansion are
>>> expected with this patch, since we now use up to 1056 persistent
>>> grants for each backend. Could you try lowering the maximum number of
>>> persistent grants to see if that prevents running out of memory:
>>>
>>> # echo 384 > /sys/module/xen_blkback/parameters/max_persistent_grants
> 
>> And the number of free pages kept in the blkback cache:
> 
> # echo 256 > /sys/module/xen_blkback/parameters/max_buffer_pages
> 
> With both set, it still bails out after some time when trying to start a new guest.
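For reference, the two sysfs writes above take effect immediately but do not survive a reboot. A hypothetical way to make them persistent, assuming xen-blkback is built as a module (the file name and path are illustrative, only the parameter names come from the sysfs paths above):

```
# /etc/modprobe.d/xen-blkback.conf (hypothetical file)
options xen-blkback max_persistent_grants=384 max_buffer_pages=256
```

If blkback is built into the kernel instead, the equivalent would be kernel command-line arguments of the form xen_blkback.max_persistent_grants=384.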

OK, I will work on a patch that splits the memory allocation instead of
doing it all in one big chunk.

> 
> 
> [ 9871.923198] Pid: 54, comm: xenwatch Not tainted 3.9.0-rc8-20130424-jens+ #1
> [ 9871.934278] Call Trace:
> [ 9871.945146]  [<ffffffff81100c51>] warn_alloc_failed+0xf1/0x140
> [ 9871.956094]  [<ffffffff811021f1>] ? __alloc_pages_direct_compact+0x211/0x230
> [ 9871.967048]  [<ffffffff811028af>] __alloc_pages_nodemask+0x69f/0x960
> [ 9871.978092]  [<ffffffff8113a161>] alloc_pages_current+0xb1/0x160
> [ 9871.989065]  [<ffffffff81100679>] __get_free_pages+0x9/0x40
> [ 9871.999999]  [<ffffffff81142af4>] __kmalloc+0x134/0x160
> [ 9872.010845]  [<ffffffff815832d0>] xen_blkbk_probe+0x170/0x2f0
> [ 9872.021667]  [<ffffffff81474ce7>] xenbus_dev_probe+0x77/0x130
> [ 9872.032542]  [<ffffffff8156a390>] ? __driver_attach+0xa0/0xa0
> [ 9872.043453]  [<ffffffff8156a151>] driver_probe_device+0x81/0x220
> [ 9872.054115]  [<ffffffff8198198c>] ? klist_next+0x8c/0x110
> [ 9872.064454]  [<ffffffff8156a390>] ? __driver_attach+0xa0/0xa0
> [ 9872.074610]  [<ffffffff8156a3db>] __device_attach+0x4b/0x50
> [ 9872.084541]  [<ffffffff815684e8>] bus_for_each_drv+0x68/0x90
> [ 9872.094282]  [<ffffffff8156a0c9>] device_attach+0x89/0x90
> [ 9872.103751]  [<ffffffff81569258>] bus_probe_device+0xa8/0xd0
> [ 9872.113158]  [<ffffffff81567c80>] device_add+0x650/0x720
> [ 9872.122379]  [<ffffffff81573103>] ? device_pm_sleep_init+0x43/0x70
> [ 9872.131304]  [<ffffffff81567d69>] device_register+0x19/0x20
> [ 9872.139948]  [<ffffffff8147495b>] xenbus_probe_node+0x14b/0x160
> [ 9872.148414]  [<ffffffff815685b4>] ? bus_for_each_dev+0xa4/0xb0
> [ 9872.156603]  [<ffffffff81474b2c>] xenbus_dev_changed+0x1bc/0x1c0
> [ 9872.164631]  [<ffffffff810b67f7>] ? lock_release+0x117/0x260
> [ 9872.172551]  [<ffffffff81474f66>] backend_changed+0x16/0x20
> [ 9872.180427]  [<ffffffff81472f5e>] xenwatch_thread+0x4e/0x150
> [ 9872.188238]  [<ffffffff8108abb0>] ? wake_up_bit+0x40/0x40
> [ 9872.196032]  [<ffffffff81472f10>] ? xs_watch+0x60/0x60
> [ 9872.203841]  [<ffffffff8108a546>] kthread+0xd6/0xe0
> [ 9872.211567]  [<ffffffff8108a470>] ? __init_kthread_worker+0x70/0x70
> [ 9872.219075]  [<ffffffff819979bc>] ret_from_fork+0x7c/0xb0
> [ 9872.226329]  [<ffffffff8108a470>] ? __init_kthread_worker+0x70/0x70
> [ 9872.233416] Mem-Info:
> [ 9872.241071] Node 0 DMA per-cpu:
> [ 9872.248137] CPU    0: hi:    0, btch:   1 usd:   0
> [ 9872.255108] CPU    1: hi:    0, btch:   1 usd:   0
> [ 9872.262090] CPU    2: hi:    0, btch:   1 usd:   0
> [ 9872.269069] CPU    3: hi:    0, btch:   1 usd:   0
> [ 9872.275890] CPU    4: hi:    0, btch:   1 usd:   0
> [ 9872.282629] CPU    5: hi:    0, btch:   1 usd:   0
> [ 9872.289393] Node 0 DMA32 per-cpu:
> [ 9872.296163] CPU    0: hi:  186, btch:  31 usd:  53
> [ 9872.302701] CPU    1: hi:  186, btch:  31 usd:  72
> [ 9872.308924] CPU    2: hi:  186, btch:  31 usd:  66
> [ 9872.314937] CPU    3: hi:  186, btch:  31 usd:  30
> [ 9872.320649] CPU    4: hi:  186, btch:  31 usd: 110
> [ 9872.326032] CPU    5: hi:  186, btch:  31 usd: 163
> [ 9872.331185] active_anon:4510 inactive_anon:10674 isolated_anon:0
> [ 9872.331185]  active_file:21063 inactive_file:161965 isolated_file:0
> [ 9872.331185]  unevictable:519 dirty:127 writeback:0 unstable:0
> [ 9872.331185]  free:3448 slab_reclaimable:8061 slab_unreclaimable:10395
> [ 9872.331185]  mapped:3916 shmem:321 pagetables:1249 bounce:0
> [ 9872.331185]  free_cma:0
> [ 9872.358911] Node 0 DMA free:3836kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:4kB active_file:76kB inactive_file:10748kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15992kB managed:15908kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:1004kB slab_unreclaimable:224kB kernel_stack:16kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
> [ 9872.374281] lowmem_reserve[]: 0 884 884 884
> [ 9872.379566] Node 0 DMA32 free:9916kB min:3772kB low:4712kB high:5656kB active_anon:17968kB inactive_anon:42692kB active_file:84192kB inactive_file:637180kB unevictable:2084kB isolated(anon):0kB isolated(file):0kB present:1032192kB managed:905896kB mlocked:2084kB dirty:524kB writeback:0kB mapped:15800kB shmem:1284kB slab_reclaimable:31240kB slab_unreclaimable:41352kB kernel_stack:2160kB pagetables:5016kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
> [ 9872.402980] lowmem_reserve[]: 0 0 0 0
> [ 9872.409005] Node 0 DMA: 5*4kB (M) 13*8kB (M) 104*16kB (M) 4*32kB (MR) 2*64kB (R) 0*128kB 1*256kB (R) 1*512kB (R) 1*1024kB (R) 0*2048kB 0*4096kB = 3836kB
> [ 9872.415521] Node 0 DMA32: 1665*4kB (UEMR) 206*8kB (MR) 10*16kB (UMR) 5*32kB (R) 0*64kB 4*128kB (R) 1*256kB (R) 1*512kB (R) 0*1024kB 0*2048kB 0*4096kB = 9908kB
> [ 9872.428594] 183858 total pagecache pages
> [ 9872.435128] 0 pages in swap cache
> [ 9872.441781] Swap cache stats: add 7, delete 7, find 3/3
> [ 9872.448409] Free swap  = 2097148kB
> [ 9872.455042] Total swap = 2097148kB
> [ 9872.465414] 262143 pages RAM
> [ 9872.471913] 28027 pages reserved
> [ 9872.478443] 295127 pages shared
> [ 9872.484909] 207989 pages non-shared
> [ 9872.491387] vbd vbd-20-768: 12 creating block interface
> [ 9872.499259] vbd vbd-20-768: 12 xenbus_dev_probe on backend/vbd/20/768
> [ 9872.506942] vbd: probe of vbd-20-768 failed with error -12
> 


Thread overview: 15+ messages
2013-04-19 14:44 [GIT PULL] (xen) stable/for-jens-3.10 Konrad Rzeszutek Wilk
2013-04-24 18:16 ` [GIT PULL] (xen) stable/for-jens-3.10 xenwatch: page allocation failure: order:7, mode:0x10c0d0 Sander Eikelenboom
2013-04-25  8:35   ` Roger Pau Monné
2013-04-25  8:40     ` Sander Eikelenboom
2013-04-25  8:43     ` Roger Pau Monné
2013-04-25 12:32       ` Sander Eikelenboom
2013-04-25 12:39         ` Roger Pau Monné [this message]
2013-04-25 15:52           ` Roger Pau Monné
2013-04-25 16:38             ` David Vrabel
2013-04-25 10:04   ` David Vrabel
2013-04-29 15:46   ` [Xen-devel] " Konrad Rzeszutek Wilk
2013-04-29 16:05     ` Sander Eikelenboom
2013-05-04  7:34       ` [Xen-devel] " Sander Eikelenboom
2013-05-06  9:00         ` Roger Pau Monné
2013-04-29 19:14     ` Jens Axboe
