From: "Pasi Kärkkäinen" <pasik@iki.fi>
To: Fantu <fantonifabio@tiscali.it>
Cc: xen-devel@lists.xensource.com
Subject: Re: Low performance and some problem with kernel 2.6.32.13 and xen 4.0-rc1
Date: Fri, 28 May 2010 12:56:33 +0300
Message-ID: <20100528095633.GG17817@reaktio.net>
In-Reply-To: <28704783.post@talk.nabble.com>
On Fri, May 28, 2010 at 02:45:04AM -0700, Fantu wrote:
>
> The server is a Dell T610 with the latest BIOS (2.0.13).
> Dom0: Lenny 64-bit, kernel 2.6.32.13, Xen 4.0-rc1.
> The kernel is the latest commit of Jeremy's xen/stable-2.6.32.x git branch, with this config:
> .config: http://old.nabble.com/file/p28704783/config-2.6.32.13
> lspci -vvv: http://old.nabble.com/file/p28704783/lspcivvv.log
> dmidecode: http://old.nabble.com/file/p28704783/dmidecode.log
> dmesg: http://old.nabble.com/file/p28704783/dmesg
> lsmod: http://old.nabble.com/file/p28704783/lsmod.log
> xend log: http://old.nabble.com/file/p28704783/xend.log
> menu.lst:
> title Xen 4 / Debian GNU/Linux, kernel 2.6.32.13
> root (hd0,0)
> kernel /boot/xen.gz dom0_mem=2560M
> module /boot/vmlinuz-2.6.32.13 root=LABEL=root-backup2 ro nomodeset
> module /boot/initrd.img-2.6.32.13
>
> The server runs 12 Windows XP domUs with GPLPV 0.11.0.213, and they all work.
> On dom0, blktap does not work (I wrote about that in another topic), so I had
> to change the domU disks from tap:aio to file:. Could that be one of the
> causes of the performance problem?
> Without a fixed dom0 memory setting and with dom0 ballooning enabled, the
> system "froze" with an out-of-memory call trace as the last message, so I set
> dom0 memory in menu.lst and disabled dom0 ballooning. Does dom0 really need
> more than 2.5 GB of RAM?
>
You really don't need 2.5 GB for dom0.
Start with 1024 MB. You might be able to go down to 512 MB or so.
It really depends on what kind of tools you run in dom0.
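
For example (untested, adjust paths and sizes to your setup), a fixed dom0
allocation in menu.lst plus disabling ballooning in /etc/xen/xend-config.sxp
would look something like this:

    # menu.lst: give dom0 a fixed 1024 MB at boot
    kernel /boot/xen.gz dom0_mem=1024M

    # /etc/xen/xend-config.sxp: stop xend from ballooning dom0 down
    (enable-dom0-ballooning no)
    # floor for dom0 memory if ballooning is ever re-enabled
    (dom0-min-mem 1024)
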
Also remember that the file: driver for guest disks uses the dom0 page cache,
so it may look like a lot of dom0 memory is in use when using the file: driver,
since the dom0 page cache will use *all* the free memory; increasing
dom0 memory just means more of it gets used as cache.
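
You can see this from dom0 yourself:

    # the -/+ buffers/cache line shows how much memory is really free
    # once the page cache is discounted
    free -m

    # drop the page cache to confirm that memory is just cache
    # (harmless, but I/O is slower until the cache warms up again)
    echo 3 > /proc/sys/vm/drop_caches
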
You really should get tap:aio: working instead, but I guess you'll open
another thread about that.
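
For reference, the difference is just the prefix on the disk line in the domU
config file (the image path here is made up):

    # file: goes through the dom0 loop driver and thus the dom0 page cache
    disk = [ 'file:/srv/xen/winxp1.img,hda,w' ]

    # tap:aio: uses blktap with direct (O_DIRECT) async I/O,
    # bypassing the dom0 page cache
    disk = [ 'tap:aio:/srv/xen/winxp1.img,hda,w' ]
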
> Are any options in the kernel .config wrong or suboptimal?
> With the C-state option enabled in the BIOS the system is unusable; I have
> disabled it for now, but is this a kernel, Xen, or BIOS problem?
>
> I hope this information helps improve the kernel and Xen; tell me if you
> need any more information.
>
> Thanks for any reply to my questions.
>
-- Pasi
Thread overview: 10+ messages
2010-05-28 9:45 Low performance and some problem with kernel 2.6.32.13 and xen 4.0-rc1 Fantu
2010-05-28 9:56 ` Pasi Kärkkäinen [this message]
2010-05-28 11:27 ` Fantu
2010-05-29 14:04 ` blktap2 problem with pvops " Pasi Kärkkäinen
2010-05-29 14:33 ` Fantu
2010-05-29 14:45 ` Boris Derzhavets
2010-05-29 15:15 ` Pasi Kärkkäinen
2010-05-29 15:30 ` Boris Derzhavets
2010-06-03 9:30 ` Low performance and some problem with " Fantu
2010-06-03 9:48 ` Pasi Kärkkäinen