Date: Thu, 25 Sep 2008 23:20:45 -0400
From: Mathieu Desnoyers
To: Steven Rostedt
Cc: Masami Hiramatsu, LKML, Ingo Molnar, Thomas Gleixner, Peter Zijlstra, Andrew Morton, prasad@linux.vnet.ibm.com, Linus Torvalds, "Frank Ch. Eigler", David Wilder, hch@lst.de, Martin Bligh, Christoph Hellwig
Subject: Re: [RFC PATCH v4] Unified trace buffer
Message-ID: <20080926032045.GA28771@Krystal>
References: <20080925185154.230259579@goodmis.org> <20080925185236.244343232@goodmis.org> <48DC406D.1050508@redhat.com>
List-ID: linux-kernel@vger.kernel.org

* Steven Rostedt (rostedt@goodmis.org) wrote:
>
> On Thu, 25 Sep 2008, Masami Hiramatsu wrote:
>
> > Hi Steven,
> >
> > Steven Rostedt wrote:
> > > This version has been cleaned up a bit. I've been running it as
> > > a back end to ftrace, and it has been holding up pretty well.
> >
> > Thank you for your great work.
> > It seems good to me (especially, encapsulating events :)).
>
> Thanks!
>
> > I have one request for an enhancement.
> >
> > > +static struct ring_buffer_per_cpu *
> > > +ring_buffer_allocate_cpu_buffer(struct ring_buffer *buffer, int cpu)
> > > +{
> > [...]
> > > +	cpu_buffer->pages = kzalloc_node(ALIGN(sizeof(void *) * pages,
> > > +					       cache_line_size()), GFP_KERNEL,
> > > +					       cpu_to_node(cpu));
> >
> > Here, you are using a slab object for the page-managing array.
> > The largest object size is 128KB (x86-64), so it can contain
> > 16K pages = 64MB.
> >
> > As I had improved relayfs, in some rare cases (on 64-bit arch),
> > we'd like to use a buffer larger than 64MB.
> >
> > http://sourceware.org/ml/systemtap/2008-q2/msg00103.html
> >
> > So, I think a similar hack can be applicable.
> >
> > Would it be acceptable for the next version?
>
> I would like to avoid using vmalloc as much as possible, but I do see the
> limitation here. Here's my compromise.
>
> Instead of using vmalloc if the page array is greater than one page,
> how about using vmalloc if the page array is greater than
> KMALLOC_MAX_SIZE?
>
> This would let us keep the vmap area free unless we have no choice.
>
> -- Steve
>

You could also fall back on a two-level page array when the buffer size
exceeds 64MB. The cost is mainly one supplementary pointer dereference,
but one more level should not make a big difference overall.

Mathieu

--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68