From: Keir Fraser
Subject: Re: Re: how to avoid lost trace records?
Date: Mon, 22 Nov 2010 13:46:40 +0000
To: George Dunlap, Olaf Hering
Cc: xen-devel@lists.xensource.com
List-Id: xen-devel@lists.xenproject.org

Is there a good reason that T_INFO_PAGES cannot be specified dynamically
by the toolstack, when enabling tracing? It doesn't seem particularly
necessary that this piece of policy is expressed statically within the
hypervisor.

 -- Keir

On 22/11/2010 11:53, "George Dunlap" wrote:

> Olaf,
>
> Dang, 8 megs per cpu -- but I guess that's really not so much overhead
> on a big machine; and it's definitely worth getting around the lost
> records issue. Send the T_INFO_PAGES patch to the list, and see what
> Keir thinks.
>
> There's probably a way to modify xenalyze to start up gzip
> directly; may not be a bad idea.
>
> -George
>
> On Sat, Nov 20, 2010 at 8:21 PM, Olaf Hering wrote:
>> On Fri, Nov 19, Olaf Hering wrote:
>>
>>> Today I inspected the xenalyze and the dump-raw output and noticed a
>>> huge number of lost trace records, even when booted with tbuf_size=200:
>>>
>>> grep -wn 1f001 log.sles11_6.xentrace.txt.dump-raw
>>> 274438:R p 5 o000000000063ffd4    1f001 4 t0000006d215b3c6b [ b6aed 57fff
>>> 9e668fb6 51 ]
>> ...
>>> That means more than 740K lost entries on cpus 5, 3, 2, 1 and 0.
>>> Is this expected?
>>
>> After reading the sources more carefully, it's clear now.
>> There are a few constraints:
>>
>> If booted with tbuf_size=N, tracing starts right away and fills up the
>> buffer until xentrace collects its content. So entries will be lost.
>>
>> Once I just ran xentrace -e all > output, which filled up the whole disk
>> during my testing. So I changed the way I collect the output, writing to
>> a compressed file instead:
>>
>>  # mknod pipe p
>>  # gzip -v9 < pipe > output.gz &
>>  # xentrace -e all pipe &
>>
>> This means xentrace will stall until gzip has made room in the pipe,
>> which also means xentrace can't collect more data from the trace buffer
>> while waiting. So that is the reason for the lost entries.
>>
>> Now I changed T_INFO_PAGES in trace.c from 2 to 16, and reduced the
>> compression level to speed up gzip emptying the pipe:
>>
>>  # mknod pipe p
>>  # nice -n -19 gzip -v1 < pipe > output.gz &
>>  # nice -n -19 xentrace -s 1 -S 2031 -e $(( 0x10f000 )) pipe &
>>
>> This means no more lost entries, even with more than one guest running.
>>
>> Olaf
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xensource.com
>> http://lists.xensource.com/xen-devel
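
[Editor's sketch: the FIFO-plus-gzip capture scheme Olaf describes can be
exercised without a Xen host by substituting a stand-in producer for
xentrace. The temporary directory and the `seq` producer are illustrative
assumptions, not part of the thread; the reader/writer roles and the use of
low compression to keep the reader from stalling the writer follow the
commands quoted above.]

```shell
#!/bin/sh
# Sketch of the capture pipeline from this thread, with `seq` standing
# in for `xentrace -e all pipe` (which needs a running Xen host).
set -e
dir=$(mktemp -d)
mkfifo "$dir/pipe"

# Reader side: gzip drains the FIFO into a compressed log. A low
# compression level (-1) keeps the reader fast, so the writer does
# not block on a full pipe the way it did with -9.
gzip -1 < "$dir/pipe" > "$dir/output.gz" &

# Writer side: stand-in producer; with xentrace this would be
#   xentrace -e all "$dir/pipe"
seq 1 100000 > "$dir/pipe"

wait                                  # gzip exits on FIFO EOF
gunzip -c "$dir/output.gz" | tail -n 1   # last record round-trips intact
```

The same shape applies to George's suggestion of having xenalyze (or
xentrace) start gzip itself: the tool would simply own both ends of this
pipe instead of relying on a shell-created FIFO.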