From: George Dunlap
Subject: Re: how to avoid lost trace records?
Date: Mon, 22 Nov 2010 14:43:45 +0000
To: Keir Fraser
Cc: Olaf Hering, xen-devel@lists.xensource.com
List-Id: xen-devel@lists.xenproject.org

I think the main reason for making T_INFO_PAGES static was to limit
the number of moving-part changes when I went to the non-contiguous
trace buffer allocation.  Looking through the code, I don't see any
reason it couldn't be allocated dynamically as well.

Hmm... it would also have required an API change, which would have
meant a lot of plumbing changes as well.  Making it a Xen command-line
parameter should be really simple, and then we can look at making it
toolstack-configurable.

Actually, since the trace buffers can only be set once anyway, maybe
the best thing to do is calculate the number of t_info pages based on
the requested trace_buf size.  Then the interface doesn't have to
change at all.

I'll take a look at it sometime in the next week or two.

 -George

On Mon, Nov 22, 2010 at 1:46 PM, Keir Fraser wrote:
> Is there a good reason that T_INFO_PAGES cannot be specified dynamically by
> the toolstack, when enabling tracing?  It doesn't seem particularly necessary
> that this piece of policy is expressed statically within the hypervisor.
>
>  -- Keir
>
> On 22/11/2010 11:53, "George Dunlap" wrote:
>
>> Olaf,
>>
>> Dang, 8 megs per cpu -- but I guess that's really not so much overhead
>> on a big machine; and it's definitely worth getting around the lost
>> records issue.  Send the T_INFO_PAGES patch to the list, and see what
>> Keir thinks.
>>
>> There's probably a way to modify xenalyze to start up gzip
>> directly; may not be a bad idea.
>>
>>  -George
>>
>> On Sat, Nov 20, 2010 at 8:21 PM, Olaf Hering wrote:
>>> On Fri, Nov 19, Olaf Hering wrote:
>>>
>>>>
>>>> Today I inspected the xenalyze and the dump-raw output and noticed a
>>>> huge number of lost trace records, even when booted with tbuf_size=200:
>>>>
>>>> grep -wn 1f001 log.sles11_6.xentrace.txt.dump-raw
>>>> 274438:R p 5 o000000000063ffd4    1f001 4 t0000006d215b3c6b [ b6aed 57fff
>>>> 9e668fb6 51 ]
>>> ...
>>>> That means more than 740K lost entries on cpus 5, 3, 2, 1 and 0.
>>>> Is this expected?
>>>
>>> After reading the sources more carefully, it's clear now.
>>> There are a few constraints:
>>>
>>> If booted with tbuf_size=N, tracing starts right away and fills up the
>>> buffer until xentrace collects its content. So entries will be lost.
>>>
>>> Once I just ran xentrace -e all > output, which filled up the whole disk
>>> during my testing. So I changed the way to collect the output to a
>>> compressed file:
>>>
>>>  # mknod pipe p
>>>  # gzip -v9 < pipe > output.gz &
>>>  # xentrace -e all pipe &
>>>
>>> This means xentrace will stall until gzip has made room in the pipe.
>>> Which also means xentrace can't collect more data from the trace buffer
>>> while waiting. So that is the reason for the lost entries.
>>>
>>> Now I changed T_INFO_PAGES in trace.c from 2 to 16, and reduced the
>>> compression rate to speed up gzip emptying the pipe:
>>>
>>>  # mknod pipe p
>>>  # nice -n -19 gzip -v1 < pipe > output.gz &
>>>  # nice -n -19 xentrace -s 1 -S 2031 -e $(( 0x10f000 )) pipe &
>>>
>>> This means no more lost entries, even with more than one guest running.
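
[Editor's note: the 1f001 records Olaf greps for above are the lost-record
events; their per-cpu totals can be summed with a small script like the
following. This is only a sketch, and it assumes the dump-raw line format
shown above -- the field after "p" is the cpu, and the first field inside
the brackets is the number of lost entries in hex.]

```shell
# Sketch: sum lost-record (event id 1f001) counts per cpu from
# xentrace dump-raw output.  Assumes the line format shown above:
# the field after "p" is the cpu, and the first field after "[" is
# the lost-entry count, in hex.
sum_lost() {
  grep -w 1f001 | while read -r line; do
    set -- $line
    cpu= count=
    while [ $# -gt 1 ]; do
      [ "$1" = p ] && cpu=$2
      if [ "$1" = "[" ]; then count=$2; break; fi
      shift
    done
    [ -n "$count" ] && printf 'cpu%s: %d lost\n' "$cpu" "$(( 0x$count ))"
  done
}

# With the sample record from the thread (0xb6aed = 748269 lost entries):
printf '%s\n' '274438:R p 5 o000000000063ffd4 1f001 4 t0000006d215b3c6b [ b6aed 57fff 9e668fb6 51 ]' | sum_lost
# -> cpu5: 748269 lost
```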
>>>
>>> Olaf
>>>
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xensource.com
>>> http://lists.xensource.com/xen-devel
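
[Editor's note: George's suggestion at the top of the thread -- deriving the
number of t_info pages from the requested trace buffer size instead of
hard-coding T_INFO_PAGES -- comes down to arithmetic along these lines. This
is only a back-of-the-envelope sketch: the 4-byte-per-page MFN entry, the
8-byte per-cpu header, and the 8-cpu box are assumptions for illustration,
not a reading of trace.c.]

```shell
# Back-of-the-envelope sizing sketch (assumed layout, not trace.c):
# t_info needs roughly one 4-byte MFN entry per trace-buffer page for
# every cpu, plus a small (assumed 8-byte) per-cpu offset header.
ncpus=8
tbuf_pages=2031       # per-cpu buffer size in pages, as in Olaf's -S 2031
page_size=4096
bytes=$(( ncpus * (tbuf_pages * 4 + 8) ))
t_info_pages=$(( (bytes + page_size - 1) / page_size ))
echo "t_info pages needed: $t_info_pages"
# -> t_info pages needed: 16
```

Under these assumptions the result lands at 16 pages for an 8-cpu box,
which lines up with the value Olaf bumped T_INFO_PAGES to by hand.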