* how to avoid lost trace records?
@ 2010-11-19 15:46 Olaf Hering
2010-11-19 21:30 ` Olaf Hering
2010-11-20 20:21 ` Olaf Hering
0 siblings, 2 replies; 10+ messages in thread
From: Olaf Hering @ 2010-11-19 15:46 UTC (permalink / raw)
To: xen-devel
Today I inspected the xenalyze and dump-raw output and noticed a
huge number of lost trace records, even when booted with tbuf_size=200:
grep -wn 1f001 log.sles11_6.xentrace.txt.dump-raw
274438:R p 5 o000000000063ffd4 1f001 4 t0000006d215b3c6b [ b6aed 57fff 9e668fb6 51 ]
274468:R p 3 o00000000006401e8 1f001 4 t0000006e21a1c004 [ b557d 37fff 87924040 51 ]
274554:R p 2 o0000000000640668 1f001 4 t0000006e62629592 [ c7747 40000 b8f5d7ac 4f ]
274562:R p 0 o00000000006406dc 1f001 4 t0000006e63e3415a [ be911 7fff fb77cb80 4a ]
274567:R p 1 o0000000000640728 1f001 4 t0000006e6e45e5d5 [ e624e 0 199d7db0 4b ]
321539:R p 1 o0000000000726128 1f001 4 t0000006e76da7821 [ 2554 0 756c584b 6e ]
329858:R p 1 o000000000074eba8 1f001 4 t0000006e95de2638 [ 289b7 0 781c474a 6e ]
348457:R p 1 o00000000008042a0 1f001 4 t0000006e9c0cea86 [ 15202 0 96ff6f1e 6e ]
352634:R p 7 o000000000082ca74 1f001 4 t0000006ed46cea24 [ b18f0 40000 9efcab44 51 ]
354038:R p 4 o0000000000837df0 1f001 4 t0000006fe86a765f [ c402e 1 86b51960 51 ]
356635:R p 6 o00000000008474e0 1f001 4 t0000007671b75052 [ b69dd 1 9e4272a2 51 ]
356716:R p 1 o0000000000847b80 1f001 4 t000000769dc250ca [ 21f5f 1 9c526d83 6e ]
That means more than 740K lost entries on CPUs 5, 3, 2, 1 and 0
(0x1f001 is TRC_LOST_RECORDS; the first data word is the lost-record count).
Is this expected?
This is 4.0, with a simple boot into Linux followed by a shutdown.
My command line was at first just '-e all'; now it is the following, and
it does not really reduce the number of lost entries:
xentrace -s 1 -e $(( 0x0010f000 | 0x0001f000 + 1 | 0x0001f000 + 2 | 0x0001f000 + 3 )) $file
Is the tracebuffer handling well debugged?
Olaf
* Re: how to avoid lost trace records?
2010-11-19 15:46 how to avoid lost trace records? Olaf Hering
@ 2010-11-19 21:30 ` Olaf Hering
2010-11-20 20:21 ` Olaf Hering
1 sibling, 0 replies; 10+ messages in thread
From: Olaf Hering @ 2010-11-19 21:30 UTC (permalink / raw)
To: xen-devel
On Fri, Nov 19, Olaf Hering wrote:
> Today I inspected the xenalyze and the dump-raw output and noticed that
> huge number of lost trace records, even when booted with tbuf_size=200:
Adding some silly debug code to insert_lost_records() to record the count
of the individual lost events shows lots of hypercall events and the like.
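The debug code is roughly this (paraphrased from memory; the names such
as lost_tab and account_lost are arbitrary):

#define LOST_SLOTS 32

struct lost_rec {
    uint32_t event;
    uint64_t count;
};
static struct lost_rec lost_tab[NR_CPUS][LOST_SLOTS];

/* Bump the per-cpu counter for this event id, claiming the first free
 * slot on first sight; the table is dumped from insert_lost_records(). */
static void account_lost(unsigned int cpu, uint32_t event)
{
    unsigned int i;

    for ( i = 0; i < LOST_SLOTS; i++ )
    {
        if ( lost_tab[cpu][i].count == 0 )
            lost_tab[cpu][i].event = event;
        if ( lost_tab[cpu][i].event == event )
        {
            lost_tab[cpu][i].count++;
            return;
        }
    }
}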
However, it's odd that 20f101 gets past the 10f000 tb_event_mask in
__trace_var().
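If the mask test is a plain bitwise AND, that would explain it:
0x20f101 & 0x10f000 = 0xf000, which is non-zero. Something like this,
guessing at the exact code:

/* Any event sharing set bits with tb_event_mask passes, so the PV
 * event 0x20f101 slips through the 0x10f000 (memory) mask via the
 * common 0xf000 subclass bits; a class/subclass match would filter
 * more precisely. */
if ( (tb_event_mask & event) == 0 )
    return;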
The fields in the insert_lost_records lines below are: array index, cpu,
event, count.
(XEN) tb_control tb_event_mask 10f000
(XEN) trace.c:256:d0 tb_set_size from 239 to 20 not implemented
(XEN) insert_lost_records #0 c 2 e 20f101 n 7545b
(XEN) insert_lost_records #1 c 2 e 20f104 n 22de8
(XEN) insert_lost_records #2 c 2 e 21211 n 169e
(XEN) insert_lost_records #3 c 2 e 28004 n 16f2
(XEN) insert_lost_records #4 c 2 e 28006 n 183a
(XEN) insert_lost_records #5 c 2 e 2800e n 2fdb
(XEN) insert_lost_records #6 c 2 e 2800f n 2fdb
(XEN) insert_lost_records #7 c 2 e 2800a n 2fdb
(XEN) insert_lost_records #8 c 2 e 21021 n 17f2
(XEN) insert_lost_records #9 c 2 e 21101 n 2fdb
(XEN) insert_lost_records #10 c 2 e 21011 n 17df
(XEN) insert_lost_records #11 c 2 e 20f008 n ce
(XEN) insert_lost_records #12 c 2 e 20f103 n 717
(XEN) insert_lost_records #13 c 2 e 20f10b n 1145
(XEN) insert_lost_records #14 c 2 e 20f106 n 16f
(XEN) insert_lost_records #15 c 2 e 21002 n 97
(XEN) insert_lost_records #16 c 2 e 801001 n 5a
(XEN) insert_lost_records #17 c 2 e 802001 n 21b4
(XEN) insert_lost_records #18 c 2 e 802002 n 21b3
(XEN) insert_lost_records #19 c 2 e 21031 n a
(XEN) insert_lost_records #20 c 2 e 21311 n a
(XEN) insert_lost_records #21 c 2 e 1f004 n 1
(XEN) insert_lost_records #22 c 2 e 10f005 n 800
(XEN) insert_lost_records #23 c 2 e 10f03b n 800
(XEN) cpupool_add_domain(dom=1,pool=0) n_dom 2
(XEN) insert_lost_records #0 c 3 e 20f101 n 7c4df
(XEN) insert_lost_records #1 c 3 e 20f104 n 219b6
(XEN) insert_lost_records #2 c 3 e 20f103 n b28
(XEN) insert_lost_records #3 c 3 e 28006 n 1a31
(XEN) insert_lost_records #4 c 3 e 2800e n 33e2
(XEN) insert_lost_records #5 c 3 e 2800f n 33e2
(XEN) insert_lost_records #6 c 3 e 2800a n 33e2
(XEN) insert_lost_records #7 c 3 e 21021 n 19db
(XEN) insert_lost_records #8 c 3 e 21101 n 33e2
(XEN) insert_lost_records #9 c 3 e 802001 n 29db
(XEN) insert_lost_records #10 c 3 e 802002 n 29da
(XEN) insert_lost_records #11 c 3 e 21011 n 19f9
(XEN) insert_lost_records #12 c 3 e 21211 n 1b99
(XEN) insert_lost_records #13 c 3 e 28004 n 1de4
(XEN) insert_lost_records #14 c 3 e 20f008 n f3
(XEN) insert_lost_records #15 c 3 e 801001 n 53
(XEN) insert_lost_records #16 c 3 e 20f10b n 10d7
(XEN) insert_lost_records #17 c 3 e 21002 n ad
(XEN) insert_lost_records #18 c 3 e 21031 n e
(XEN) insert_lost_records #19 c 3 e 21311 n e
(XEN) insert_lost_records #20 c 3 e 20f106 n 28c
(XEN) insert_lost_records #21 c 3 e 1f004 n 529
(XEN) insert_lost_records #0 c 3 e 10f005 n 1c7a6
(XEN) insert_lost_records #1 c 3 e 10f03b n f1
(XEN) insert_lost_records #0 c 5 e 20f101 n 7134e
(XEN) insert_lost_records #1 c 5 e 20f104 n 23dc3
(XEN) insert_lost_records #2 c 5 e 20f103 n 62b
...
* Re: how to avoid lost trace records?
2010-11-19 15:46 how to avoid lost trace records? Olaf Hering
2010-11-19 21:30 ` Olaf Hering
@ 2010-11-20 20:21 ` Olaf Hering
2010-11-22 11:53 ` George Dunlap
1 sibling, 1 reply; 10+ messages in thread
From: Olaf Hering @ 2010-11-20 20:21 UTC (permalink / raw)
To: xen-devel
On Fri, Nov 19, Olaf Hering wrote:
>
> Today I inspected the xenalyze and dump-raw output and noticed a
> huge number of lost trace records, even when booted with tbuf_size=200:
>
> grep -wn 1f001 log.sles11_6.xentrace.txt.dump-raw
> 274438:R p 5 o000000000063ffd4 1f001 4 t0000006d215b3c6b [ b6aed 57fff 9e668fb6 51 ]
...
> That means more than 740K lost entries on CPUs 5, 3, 2, 1 and 0.
> Is this expected?
After reading the sources more carefully, it's clear now.
There are a few constraints:
If booted with tbuf_size=N, tracing starts right away and fills up the
buffer until xentrace collects its content. So entries will be lost.
At first I just ran 'xentrace -e all > output', which filled up the whole
disk during my testing. So I changed the way I collect the output, writing
to a compressed file instead:
# mknod pipe p
# gzip -v9 < pipe > output.gz &
# xentrace -e all pipe &
This means xentrace will stall until gzip has made room in the pipe,
which also means xentrace can't collect more data from the trace buffer
while waiting. That is the reason for the lost entries.
Now I changed T_INFO_PAGES in trace.c from 2 to 16, and reduced the
compression level to speed up gzip's emptying of the pipe.
# mknod pipe p
# nice -n -19 gzip -v1 < pipe > output.gz &
# nice -n -19 xentrace -s 1 -S 2031 -e $(( 0x10f000 )) pipe &
This means no more lost entries, even with more than one guest running
(-S 2031 gives roughly 8 MB of trace buffer per cpu).
Olaf
* Re: Re: how to avoid lost trace records?
2010-11-20 20:21 ` Olaf Hering
@ 2010-11-22 11:53 ` George Dunlap
2010-11-22 12:40 ` Olaf Hering
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: George Dunlap @ 2010-11-22 11:53 UTC (permalink / raw)
To: Olaf Hering; +Cc: xen-devel
Olaf,
Dang, 8 megs per cpu -- but I guess that's really not so much overhead
on a big machine; and it's definitely worth getting around the lost
records issue. Send the T_INFO_PAGES patch to the list, and see what
Keir thinks.
There's probably a way to modify xenalyze to start up gzip
directly; that may not be a bad idea.
-George
On Sat, Nov 20, 2010 at 8:21 PM, Olaf Hering <olaf@aepfle.de> wrote:
> On Fri, Nov 19, Olaf Hering wrote:
>
>>
>> Today I inspected the xenalyze and dump-raw output and noticed a
>> huge number of lost trace records, even when booted with tbuf_size=200:
>>
>> grep -wn 1f001 log.sles11_6.xentrace.txt.dump-raw
>> 274438:R p 5 o000000000063ffd4 1f001 4 t0000006d215b3c6b [ b6aed 57fff 9e668fb6 51 ]
> ...
>> That means more than 740K lost entries on CPUs 5, 3, 2, 1 and 0.
>> Is this expected?
>
> After reading the sources more carefully, it's clear now.
> There are a few constraints:
>
> If booted with tbuf_size=N, tracing starts right away and fills up the
> buffer until xentrace collects its content. So entries will be lost.
>
> At first I just ran 'xentrace -e all > output', which filled up the whole
> disk during my testing. So I changed the way I collect the output, writing
> to a compressed file instead:
>
> # mknod pipe p
> # gzip -v9 < pipe > output.gz &
> # xentrace -e all pipe &
>
> This means xentrace will stall until gzip has made room in the pipe,
> which also means xentrace can't collect more data from the trace buffer
> while waiting. That is the reason for the lost entries.
>
> Now I changed T_INFO_PAGES in trace.c from 2 to 16, and reduced the
> compression level to speed up gzip's emptying of the pipe.
>
> # mknod pipe p
> # nice -n -19 gzip -v1 < pipe > output.gz &
> # nice -n -19 xentrace -s 1 -S 2031 -e $(( 0x10f000 )) pipe &
>
>
> This means no more lost entries, even with more than one guest running.
>
>
> Olaf
* Re: Re: how to avoid lost trace records?
2010-11-22 11:53 ` George Dunlap
@ 2010-11-22 12:40 ` Olaf Hering
2010-11-22 12:51 ` George Dunlap
2010-11-22 13:46 ` Keir Fraser
2010-11-25 21:04 ` Olaf Hering
2 siblings, 1 reply; 10+ messages in thread
From: Olaf Hering @ 2010-11-22 12:40 UTC (permalink / raw)
To: George Dunlap; +Cc: xen-devel
On Mon, Nov 22, George Dunlap wrote:
> Olaf,
>
> Dang, 8 megs per cpu -- but I guess that's really not so much overhead
> on a big machine; and it's definitely worth getting around the lost
> records issue. Send the T_INFO_PAGES patch to the list, and see what
> Keir thinks.
Yes, will do.
Sometimes there is a burst of records, from set_p2m_entry and the like.
They all happen on the same cpu, so the per-cpu buffer fills up quickly.
That's what I'm debugging right now. Too bad that tracing adds overhead,
so the bug disappears. So it's back to reading the source code and finding
the race that way.
Olaf
* Re: Re: how to avoid lost trace records?
2010-11-22 12:40 ` Olaf Hering
@ 2010-11-22 12:51 ` George Dunlap
0 siblings, 0 replies; 10+ messages in thread
From: George Dunlap @ 2010-11-22 12:51 UTC (permalink / raw)
To: Olaf Hering; +Cc: xen-devel
Hmm, there's a stack of patches for bugs related to the p2m / PoD /
ept code, fixed in the XenServer product, that I neglected to send
upstream when I found them. Unfortunately the p2m code has changed
sufficiently since 3.4 that they don't apply cleanly, and I've had a lot
of trouble actually getting a usable OSS system on which I can test the
p2m code.
If you like, I can post the raw patches to the list, and you can see
if any of your bugs might be fixed among them...
-George
On Mon, Nov 22, 2010 at 12:40 PM, Olaf Hering <olaf@aepfle.de> wrote:
> On Mon, Nov 22, George Dunlap wrote:
>
>> Olaf,
>>
>> Dang, 8 megs per cpu -- but I guess that's really not so much overhead
>> on a big machine; and it's definitely worth getting around the lost
>> records issue. Send the T_INFO_PAGES patch to the list, and see what
>> Keir thinks.
>
> Yes, will do.
>
> Sometimes there is a burst of records, from set_p2m_entry and the like.
> They all happen on the same cpu, so the per-cpu buffer fills up quickly.
> That's what I'm debugging right now. Too bad that tracing adds overhead,
> so the bug disappears. So it's back to reading the source code and finding
> the race that way.
>
> Olaf
>
* Re: Re: how to avoid lost trace records?
2010-11-22 11:53 ` George Dunlap
2010-11-22 12:40 ` Olaf Hering
@ 2010-11-22 13:46 ` Keir Fraser
2010-11-22 14:43 ` George Dunlap
2010-11-25 21:04 ` Olaf Hering
2 siblings, 1 reply; 10+ messages in thread
From: Keir Fraser @ 2010-11-22 13:46 UTC (permalink / raw)
To: George Dunlap, Olaf Hering; +Cc: xen-devel
Is there a good reason that T_INFO_PAGES cannot be specified dynamically by
the toolstack, when enabling tracing? It doesn't seem particularly necessary
that this piece of policy is expressed statically within the hypervisor.
-- Keir
On 22/11/2010 11:53, "George Dunlap" <dunlapg@umich.edu> wrote:
> Olaf,
>
> Dang, 8 megs per cpu -- but I guess that's really not so much overhead
> on a big machine; and it's definitely worth getting around the lost
> records issue. Send the T_INFO_PAGES patch to the list, and see what
> Keir thinks.
>
> There's probably a way to modify xenalyze to start up gzip
> directly; that may not be a bad idea.
>
> -George
>
> On Sat, Nov 20, 2010 at 8:21 PM, Olaf Hering <olaf@aepfle.de> wrote:
>> On Fri, Nov 19, Olaf Hering wrote:
>>
>>>
>>> Today I inspected the xenalyze and dump-raw output and noticed a
>>> huge number of lost trace records, even when booted with tbuf_size=200:
>>>
>>> grep -wn 1f001 log.sles11_6.xentrace.txt.dump-raw
>>> 274438:R p 5 o000000000063ffd4 1f001 4 t0000006d215b3c6b [ b6aed 57fff 9e668fb6 51 ]
>> ...
>>> That means more than 740K lost entries on CPUs 5, 3, 2, 1 and 0.
>>> Is this expected?
>>
>> After reading the sources more carefully, it's clear now.
>> There are a few constraints:
>>
>> If booted with tbuf_size=N, tracing starts right away and fills up the
>> buffer until xentrace collects its content. So entries will be lost.
>>
>> At first I just ran 'xentrace -e all > output', which filled up the whole
>> disk during my testing. So I changed the way I collect the output, writing
>> to a compressed file instead:
>>
>> # mknod pipe p
>> # gzip -v9 < pipe > output.gz &
>> # xentrace -e all pipe &
>>
>> This means xentrace will stall until gzip has made room in the pipe,
>> which also means xentrace can't collect more data from the trace buffer
>> while waiting. That is the reason for the lost entries.
>>
>> Now I changed T_INFO_PAGES in trace.c from 2 to 16, and reduced the
>> compression level to speed up gzip's emptying of the pipe.
>>
>> # mknod pipe p
>> # nice -n -19 gzip -v1 < pipe > output.gz &
>> # nice -n -19 xentrace -s 1 -S 2031 -e $(( 0x10f000 )) pipe &
>>
>>
>> This means no more lost entries, even with more than one guest running.
>>
>>
>> Olaf
* Re: Re: how to avoid lost trace records?
2010-11-22 13:46 ` Keir Fraser
@ 2010-11-22 14:43 ` George Dunlap
0 siblings, 0 replies; 10+ messages in thread
From: George Dunlap @ 2010-11-22 14:43 UTC (permalink / raw)
To: Keir Fraser; +Cc: Olaf Hering, xen-devel
I think the main reason for making T_INFO_PAGES static was to limit
the number of moving-part changes when I went to the non-contiguous
trace buffer allocation. Looking through the code, I don't see any
reason it couldn't be allocated dynamically as well.
Hmm... it would also have required an API change, which would have
meant a lot of plumbing changes as well. Making it a Xen command-line
parameter should be really simple, and then we can look at making it
toolstack-configurable.
Actually, since the trace buffers can only be set once anyway, maybe
the best thing to do is calculate the number of t_info pages based on
the requested trace_buf size. Then the interface doesn't have to
change at all.
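Roughly like this -- just a sketch, since I haven't re-checked the
t_info layout, so T_INFO_HEADER_BYTES and the entry size are guesses:

/* Size t_info from the requested per-cpu buffer size instead of a
 * fixed T_INFO_PAGES: a small header holding the per-cpu offsets,
 * followed by one MFN-list entry per trace page per cpu. */
static unsigned int calc_t_info_pages(unsigned int pages_per_cpu)
{
    unsigned int bytes = T_INFO_HEADER_BYTES +
        num_online_cpus() * pages_per_cpu * sizeof(uint32_t);

    return PFN_UP(bytes);
}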
I'll take a look at it sometime in the next week or two.
-George
On Mon, Nov 22, 2010 at 1:46 PM, Keir Fraser <keir@xen.org> wrote:
> Is there a good reason that T_INFO_PAGES cannot be specified dynamically by
> the toolstack, when enabling tracing? It doesn't seem particularly necessary
> that this piece of policy is expressed statically within the hypervisor.
>
> -- Keir
>
> On 22/11/2010 11:53, "George Dunlap" <dunlapg@umich.edu> wrote:
>
>> Olaf,
>>
>> Dang, 8 megs per cpu -- but I guess that's really not so much overhead
>> on a big machine; and it's definitely worth getting around the lost
>> records issue. Send the T_INFO_PAGES patch to the list, and see what
>> Keir thinks.
>>
>> There's probably a way to modify xenalyze to start up gzip
>> directly; that may not be a bad idea.
>>
>> -George
>>
>> On Sat, Nov 20, 2010 at 8:21 PM, Olaf Hering <olaf@aepfle.de> wrote:
>>> On Fri, Nov 19, Olaf Hering wrote:
>>>
>>>>
>>>> Today I inspected the xenalyze and dump-raw output and noticed a
>>>> huge number of lost trace records, even when booted with tbuf_size=200:
>>>>
>>>> grep -wn 1f001 log.sles11_6.xentrace.txt.dump-raw
>>>> 274438:R p 5 o000000000063ffd4 1f001 4 t0000006d215b3c6b [ b6aed 57fff 9e668fb6 51 ]
>>> ...
>>>> That means more than 740K lost entries on CPUs 5, 3, 2, 1 and 0.
>>>> Is this expected?
>>>
>>> After reading the sources more carefully, it's clear now.
>>> There are a few constraints:
>>>
>>> If booted with tbuf_size=N, tracing starts right away and fills up the
>>> buffer until xentrace collects its content. So entries will be lost.
>>>
>>> At first I just ran 'xentrace -e all > output', which filled up the whole
>>> disk during my testing. So I changed the way I collect the output, writing
>>> to a compressed file instead:
>>>
>>> # mknod pipe p
>>> # gzip -v9 < pipe > output.gz &
>>> # xentrace -e all pipe &
>>>
>>> This means xentrace will stall until gzip has made room in the pipe,
>>> which also means xentrace can't collect more data from the trace buffer
>>> while waiting. That is the reason for the lost entries.
>>>
>>> Now I changed T_INFO_PAGES in trace.c from 2 to 16, and reduced the
>>> compression level to speed up gzip's emptying of the pipe.
>>>
>>> # mknod pipe p
>>> # nice -n -19 gzip -v1 < pipe > output.gz &
>>> # nice -n -19 xentrace -s 1 -S 2031 -e $(( 0x10f000 )) pipe &
>>>
>>>
>>> This means no more lost entries, even with more than one guest running.
>>>
>>>
>>> Olaf
* Re: Re: how to avoid lost trace records?
2010-11-22 11:53 ` George Dunlap
2010-11-22 12:40 ` Olaf Hering
2010-11-22 13:46 ` Keir Fraser
@ 2010-11-25 21:04 ` Olaf Hering
2010-11-25 22:27 ` Keir Fraser
2 siblings, 1 reply; 10+ messages in thread
From: Olaf Hering @ 2010-11-25 21:04 UTC (permalink / raw)
To: George Dunlap; +Cc: xen-devel
[ Sorry, this mutt process was suspended and I didn't notice ... ]
On Mon, Nov 22, George Dunlap wrote:
> Olaf,
>
> Dang, 8 megs per cpu -- but I guess that's really not so much overhead
> on a big machine; and it's definitely worth getting around the lost
> records issue. Send the T_INFO_PAGES patch to the list, and see what
> Keir thinks.
The change is as simple as this, for xen-unstable. 4.0 additionally needs
get_order_from_pages() added to its alloc_xenheap_pages() calls.
Dynamic trace buffer allocation, and also a dynamic event mask as a
command-line option, should be added. I have added this line to trace.c
to allow setting an event mask during boot:
integer_param("tbuf_event_mask", tb_event_mask);
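With that, the mask can be set at boot, e.g. (assuming integer_param()
accepts hex values here):

xen.gz ... tbuf_size=200 tbuf_event_mask=0x10f000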
Olaf
---
xen/common/trace.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- xen-unstable.hg-4.1.22415.orig/xen/common/trace.c
+++ xen-unstable.hg-4.1.22415/xen/common/trace.c
@@ -48,7 +48,8 @@ integer_param("tbuf_size", opt_tbuf_size
/* Pointers to the meta-data objects for all system trace buffers */
static struct t_info *t_info;
-#define T_INFO_PAGES 2 /* Size fixed at 2 pages for now. */
+#define T_INFO_PAGES_ORDER 1 /* Size fixed at 2 pages for now. */
+#define T_INFO_PAGES (1 << T_INFO_PAGES_ORDER)
#define T_INFO_SIZE ((T_INFO_PAGES)*(PAGE_SIZE))
static DEFINE_PER_CPU_READ_MOSTLY(struct t_buf *, t_bufs);
static DEFINE_PER_CPU_READ_MOSTLY(unsigned char *, t_data);
* Re: Re: how to avoid lost trace records?
2010-11-25 21:04 ` Olaf Hering
@ 2010-11-25 22:27 ` Keir Fraser
0 siblings, 0 replies; 10+ messages in thread
From: Keir Fraser @ 2010-11-25 22:27 UTC (permalink / raw)
To: Olaf Hering, George Dunlap; +Cc: xen-devel
On 25/11/2010 21:04, "Olaf Hering" <olaf@aepfle.de> wrote:
> --- xen-unstable.hg-4.1.22415.orig/xen/common/trace.c
> +++ xen-unstable.hg-4.1.22415/xen/common/trace.c
> @@ -48,7 +48,8 @@ integer_param("tbuf_size", opt_tbuf_size
>
> /* Pointers to the meta-data objects for all system trace buffers */
> static struct t_info *t_info;
> -#define T_INFO_PAGES 2 /* Size fixed at 2 pages for now. */
> +#define T_INFO_PAGES_ORDER 1 /* Size fixed at 2 pages for now. */
> +#define T_INFO_PAGES (1 << T_INFO_PAGES_ORDER)
> #define T_INFO_SIZE ((T_INFO_PAGES)*(PAGE_SIZE))
> static DEFINE_PER_CPU_READ_MOSTLY(struct t_buf *, t_bufs);
> static DEFINE_PER_CPU_READ_MOSTLY(unsigned char *, t_data);
This patch looks like it has no effect whatsoever: with T_INFO_PAGES_ORDER
set to 1, T_INFO_PAGES is still 1 << 1 = 2 pages.
-- Keir