From: James Dingwall <james.dingwall@zynstra.com>
To: Bob Liu <bob.liu@oracle.com>
Cc: xen-devel@lists.xen.org
Subject: Re: Kernel 3.11 / 3.12 OOM killer and Xen ballooning
Date: Fri, 13 Dec 2013 16:59:42 +0000	[thread overview]
Message-ID: <52AB3CFE.9080702@zynstra.com> (raw)
In-Reply-To: <52A90B7C.6010400@oracle.com>

Bob Liu wrote:
> On 12/12/2013 12:30 AM, James Dingwall wrote:
>> Bob Liu wrote:
>>> On 12/10/2013 11:27 PM, Konrad Rzeszutek Wilk wrote:
>>>> On Tue, Dec 10, 2013 at 02:52:40PM +0000, James Dingwall wrote:
>>>>> Konrad Rzeszutek Wilk wrote:
>>>>>> On Mon, Dec 09, 2013 at 05:50:29PM +0000, James Dingwall wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> Since 3.11 I have noticed that the OOM killer quite frequently
>>>>>>> triggers in my Xen guest domains, which use ballooning to
>>>>>>> increase/decrease their memory allocation according to their
>>>>>>> requirements.  One example domain has a maximum memory setting of
>>>>>>> ~1.5Gb but usually idles at ~300Mb; it is also configured with 2Gb
>>>>>>> of swap which is almost 100% free.
>>>>>>>
>>>>>>> # free
>>>>>>>              total       used       free     shared    buffers     cached
>>>>>>> Mem:        272080     248108      23972          0       1448      63064
>>>>>>> -/+ buffers/cache:     183596      88484
>>>>>>> Swap:      2097148          8    2097140
>>>>>>>
>>>>>>> There is plenty of free memory in the hypervisor to balloon up to
>>>>>>> the maximum size:
>>>>>>> # xl info | grep free_mem
>>>>>>> free_memory            : 14923
>>>>>>>
>>>>>>> An example trace (they are always the same) from the oom killer in
>>>>>>> 3.12 is included below.  So far I have not been able to reproduce
>>>>>>> this at will, so it is difficult to start bisecting to see whether a
>>>>>>> particular change introduced it.  However, the behaviour does seem
>>>>>>> wrong because a) ballooning could give the guest more memory, and
>>>>>>> b) there is lots of swap available which could be used as a
>>>>>>> fallback.
>>>> Keep in mind that with tmem, swap is no longer really swap. Heh, that
>>>> sounds odd - but basically pages that are destined for swap end up
>>>> going into the tmem code, which pipes them up to the hypervisor.
>>>>
>>>>>>> If other information could help or there are more tests that I
>>>>>>> could run, please let me know.
>>>>>> I presume you have enabled 'tmem' both in the hypervisor and in
>>>>>> the guest, right?
>>>>> Yes, domU and dom0 both have the tmem module loaded, and "tmem
>>>>> tmem_dedup=on tmem_compress=on" is given on the Xen command line.
>>>> Excellent. The odd thing is that your swap is not used that much, but
>>>> it should be (as that is part of what the self-balloon is supposed to
>>>> do).
>>>>
>>>> Bob, you had a patch for the logic of how self-balloon is supposed
>>>> to account for the slab - would this be relevant to this problem?
>>>>
>>> Perhaps - I have attached the patch.
>>> James, could you please apply it and try your application again? You
>>> have to rebuild the guest kernel.
>>> Oh, and also take a look at whether frontswap is in use; you can check
>>> it by watching "cat /sys/kernel/debug/frontswap/*".
>> I have tested this patch with a workload where I have previously seen
> Thank you so much.
>
>> failures and so far so good.  I'll try to keep a guest with it stressed
>> to see if I do get any problems.  I don't know if it is expected, but I
>> did note that the system running with this patch + selfshrink has a
>> kswapd0 run time of ~30mins.  A guest without it and selfshrink disabled
> Could you run the test again with this patch but selfshrink disabled and
> compare the run time of kswapd0?
Here are the results from two VMs, one with and one without the patch.
They are running on the same dom0, have comparable Xen configs, and were
restarted at the same point.
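
(For reference, the kswapd0 run times below were read straight from ps; a
minimal sketch of snapshotting the accumulated CPU time around a single
build step, where "make -j4" merely stands in for the glibc/kdelibs builds,
would be something like:)

# snap() { ps -o cputime= -C kswapd0; }
# before=$(snap) ; make -j4 ; after=$(snap)
# echo "kswapd0 cpu time: $before -> $after"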

With patch:

# uptime ; ps -ef | grep [k]swapd0
  14:58:55 up  6:32,  1 user,  load average: 0.00, 0.01, 0.05
root       310     2  0 08:26 ?        00:00:01 [kswapd0]
### BUILD GLIBC
# ps -ef | grep [k]swapd0
root       310     2  0 08:26 ?        00:00:16 [kswapd0]
### BUILD KDELIBS
# ps -ef | grep [k]swapd0
root       310     2  1 08:26 ?        00:09:15 [kswapd0]
# for i in /sys/module/tmem/parameters/* ; do echo $i $(< $i) ; done
/sys/module/tmem/parameters/cleancache Y
/sys/module/tmem/parameters/frontswap Y
/sys/module/tmem/parameters/selfballooning Y
/sys/module/tmem/parameters/selfshrinking N


Without patch:

# uptime ; ps -ef | grep [k]swapd0
  14:59:12 up  6:32,  1 user,  load average: 0.00, 0.01, 0.05
root       309     2  0 08:26 ?        00:00:01 [kswapd0]
### BUILD GLIBC
# ps -ef | grep [k]swapd0
root       309     2  0 08:26 ?        00:00:09 [kswapd0]
### BUILD KDELIBS
# ps -ef | grep [k]swapd0
root       309     2  0 08:26 ?        00:01:18 [kswapd0]
# for i in /sys/module/tmem/parameters/* ; do echo $i $(< $i) ; done
/sys/module/tmem/parameters/cleancache Y
/sys/module/tmem/parameters/frontswap Y
/sys/module/tmem/parameters/selfballooning Y
/sys/module/tmem/parameters/selfshrinking N
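
(For completeness, the frontswap counters Bob mentioned can be sampled with
something like the following, assuming debugfs is mounted at
/sys/kernel/debug; the watch variant refreshes the values every 10 seconds
while a build is running:)

# grep -H . /sys/kernel/debug/frontswap/*
# watch -n 10 'grep -H . /sys/kernel/debug/frontswap/*'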

>> having run a similar workload has ~5mins.  With the patch I also noted
>> the following kernel messages, which I haven't seen before:
>>
>> [ 8733.646820] init_memory_mapping: [mem 0x120000000-0x127ffffff]
>> [ 8733.646825]  [mem 0x120000000-0x127ffffff] page 4k
>> [10506.639875] init_memory_mapping: [mem 0x128000000-0x137ffffff]
>> [10506.639881]  [mem 0x128000000-0x137ffffff] page 4k


Thread overview: 41+ messages
2013-12-09 17:50 Kernel 3.11 / 3.12 OOM killer and Xen ballooning James Dingwall
2013-12-09 21:48 ` Konrad Rzeszutek Wilk
2013-12-10 14:52   ` James Dingwall
2013-12-10 15:27     ` Konrad Rzeszutek Wilk
2013-12-11  7:22       ` Bob Liu
2013-12-11  9:25         ` James Dingwall
2013-12-11  9:54           ` Bob Liu
2013-12-11 10:16             ` James Dingwall
2013-12-11 16:30         ` James Dingwall
2013-12-12  1:03           ` Bob Liu
2013-12-13 16:59             ` James Dingwall [this message]
2013-12-17  6:11               ` Bob Liu
2013-12-18 12:04           ` Bob Liu
2013-12-19 19:08             ` James Dingwall
2013-12-20  3:17               ` Bob Liu
2013-12-20 12:22                 ` James Dingwall
2013-12-26  8:42                 ` James Dingwall
2014-01-02  6:25                   ` Bob Liu
2014-01-07  9:21                     ` James Dingwall
2014-01-09 10:48                       ` Bob Liu
2014-01-09 10:54                         ` James Dingwall
2014-01-09 11:04                         ` James Dingwall
2014-01-15  8:49                         ` James Dingwall
2014-01-15 14:41                           ` Bob Liu
2014-01-15 16:35                             ` James Dingwall
2014-01-16  1:22                               ` Bob Liu
2014-01-16 10:52                                 ` James Dingwall
2014-01-28 17:15                                 ` James Dingwall
2014-01-29 14:35                                   ` Bob Liu
2014-01-29 14:45                                     ` James Dingwall
2014-01-31 16:56                                       ` Konrad Rzeszutek Wilk
2014-02-03  9:49                                         ` Daniel Kiper
2014-02-03 10:30                                           ` Konrad Rzeszutek Wilk
2014-02-03 11:20                                           ` James Dingwall
2014-02-03 14:00                                             ` Daniel Kiper
2013-12-10  8:16 ` Jan Beulich
2013-12-10 14:01   ` James Dingwall
2013-12-10 14:25     ` Jan Beulich
2013-12-10 14:52       ` James Dingwall
2013-12-10 14:59         ` Jan Beulich
2013-12-10 15:16           ` James Dingwall
